CRM 101: What is CRM?
This is a simple definition of CRM.
Customer relationship management (CRM) is a technology for managing all your company’s relationships and interactions with customers and potential customers. The goal is simple: Improve business relationships to grow your business. A CRM system helps companies stay connected to customers, streamline processes, and improve profitability.
When people talk about CRM, they are usually referring to a CRM system, a tool that helps with contact management, sales management, agent productivity, and more. CRM tools can now be used to manage customer relationships across the entire customer lifecycle, spanning marketing, sales, digital commerce, and customer service interactions.
A CRM solution helps you focus on your organization’s relationships with individual people — including customers, service users, colleagues, or suppliers — throughout your lifecycle with them, including finding new customers, winning their business, and providing support and additional services throughout the relationship.
Who is CRM for?
A CRM system gives everyone — from sales, customer service, business development, recruiting, marketing, or any other line of business — a better way to manage the external interactions and relationships that drive success. A CRM tool lets you store customer and prospect contact information, identify sales opportunities, record service issues, and manage marketing campaigns, all in one central location — and make information about every customer interaction available to anyone at your company who might need it.
With visibility and easy access to data, it's easier to collaborate and increase productivity. Everyone in your company can see how customers have been communicated with, what they’ve bought, when they last purchased, what they paid, and so much more. CRM can help companies of all sizes drive business growth, and it can be especially beneficial to a small business, where teams often need to find ways to do more with less.
Here’s why CRM matters to your business.
CRM is the largest and fastest-growing enterprise application software category, and worldwide spending on CRM is expected to reach US$114.4 billion by 2027. If your business is going to last, you need a strategy for the future that’s centered around your customers and enabled by the right technology. You have targets for sales, business objectives, and profitability. But getting up-to-date, reliable information on your progress can be tricky. How do you translate the many streams of data coming in from sales, customer service, marketing, and social media monitoring into useful business information?
A CRM system can give you a clear overview of your customers. You can see everything in one place — a simple, customizable dashboard that can tell you a customer’s previous history with you, the status of their orders, any outstanding customer service issues, and more. You can even choose to include information from their public social media activity, such as their likes and dislikes and what they are saying and sharing about you or your competitors.
3. Contents
1. The Science of Statistics
2. Types of Statistical Applications in Business
3. Fundamental Elements of Statistics
4. Processes
5. Types of Data
6. Collecting Data
7. The Role of Statistics in Managerial Decision Making
4. Learning Objectives
1. Introduce the field of statistics
2. Demonstrate how statistics applies to business
3. Establish the link between statistics and data
4. Identify the different types of data and data-collection methods
5. Differentiate between population and sample data
6. Differentiate between descriptive and inferential statistics
7. What Is Statistics?
Statistics is the science of data. It involves collecting,
classifying, summarizing, organizing, analyzing, and
interpreting numerical information.
9. Application Areas
• Economics: Forecasting, Demographics
• Sports: Individual & Team Performance
• Engineering: Construction, Materials
• Business: Consumer Preferences, Financial Trends
10. Statistics: Two Processes
Describing sets of data
and
Drawing conclusions (making estimates, decisions, predictions, etc. about sets of data based on sampling)
15. Fundamental Elements
1. Experimental unit
• Object upon which we collect data
2. Population
• All items of interest
3. Variable
• Characteristic of an individual experimental unit
4. Sample
• Subset of the units of a population
• P in Population & Parameter
• S in Sample & Statistic
16. Fundamental Elements
1. Statistical Inference
• Estimate or prediction or generalization about a population based on information contained in a sample
2. Measure of Reliability
• Statement (usually qualified) about the degree of uncertainty associated with a statistical inference
17. Four Elements of Descriptive Statistical Problems
1. The population or sample of interest
2. One or more variables (characteristics of the population or sample units) that are to be investigated
3. Tables, graphs, or numerical summary tools
4. Identification of patterns in the data
18. Five Elements of Inferential Statistical Problems
1. The population of interest
2. One or more variables (characteristics of the population units) that are to be investigated
3. The sample of population units
4. The inference about the population based on information contained in the sample
5. A measure of reliability for the inference
20. Process
A process is a series of actions or operations that
transforms inputs to outputs. A process produces or
generates output over time.
21. Process
A process whose operations or actions are unknown or
unspecified is called a black box.
Any set of output (object or numbers) produced by a
process is called a sample.
23. Types of Data
Quantitative data are measurements that are recorded
on a naturally occurring numerical scale.
Qualitative data are measurements that cannot be
measured on a natural numerical scale; they can only be
classified into one of a group of categories.
25. Quantitative Data
Measured on a numeric scale.
• Number of defective items in a lot.
• Salaries of CEOs of oil companies.
• Ages of employees at a company.
26. Qualitative Data
Classified into categories.
• College major of each student in a class.
• Gender of each employee at a company.
• Method of payment (cash, check, credit card).
28. Obtaining Data
1. Data from a published source
2. Data from a designed experiment
3. Data from a survey
4. Data collected observationally
29. Obtaining Data
Published source:
book, journal, newspaper, Web site
Designed experiment:
researcher exerts strict control over units
Survey:
a group of people are surveyed and their responses are recorded
Observation study:
units are observed in a natural setting and variables of interest are recorded
30. Samples
A representative sample exhibits characteristics typical
of those possessed by the population of interest.
A random sample of n experimental units is a sample
selected from the population in such a way that every
different sample of size n has an equal chance of
selection.
33. Statistical Thinking
Statistical thinking involves applying rational thought
and the science of statistics to critically assess data and
inferences. Fundamental to the thought process is that
variation exists in populations and process data.
34. Nonrandom Sample Errors
Selection bias results when a subset of the experimental units in the population is excluded so that these units have no chance of being selected for the sample.
Nonresponse bias results when the researchers conducting a survey or study are unable to obtain data on all experimental units selected for the sample.
Measurement error refers to inaccuracies in the values of the data recorded. In surveys, the error may be due to ambiguous or leading questions and the interviewer’s effect on the respondent.
37. Key Ideas
Types of Statistical Applications
Descriptive
1. Identify population and sample (collection of experimental units)
2. Identify variable(s)
3. Collect data
4. Describe data
38. Key Ideas
Types of Statistical Applications
Inferential
1. Identify population (collection of all experimental units)
2. Identify variable(s)
3. Collect sample data (subset of population)
4. Inference about population based on sample
5. Measure of reliability for inference
39. Key Ideas
Types of Data
1. Quantitative (numerical in nature)
2. Qualitative (categorical in nature)
42. The mean, median, and mode are measures of central tendency that are used to identify the core position of a data set. They are applied in different situations depending on the type of data and the level of measurement:
• Nominal data: The mode is the only appropriate measure of central tendency to use. The mode is the most frequent value in the data set.
• Ordinal data: The median or mode is usually the best choice. The median is the value in the middle of the data set.
• Interval or ratio data: The mean, median, and mode can all be used. The mean is the average value.
• Skewed distribution: The median is often the best measure of central tendency.
• Symmetrical distribution for continuous data: The mean, median, and mode are all equal.
• Data with extreme scores: The median is preferred because a single outlier can have a big effect on the mean.
• Data with missing or undetermined values: The median is preferred.
The mean is the most commonly used measure of central tendency, but the best measure depends on the type of data.
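These guidelines can be turned into a small decision helper. A minimal Python sketch (the function name, level labels, and example data are illustrative, not from the slides):

```python
from statistics import mean, median, multimode

def pick_center(scores, level, skewed=False):
    """Suggest a measure of central tendency per the guidelines above.

    level: 'nominal', 'ordinal', 'interval', or 'ratio' (illustrative labels).
    skewed: set True when the distribution is clearly skewed or has outliers.
    """
    if level == "nominal":
        return "mode", multimode(scores)   # only the mode is appropriate
    if level == "ordinal" or skewed:
        return "median", median(scores)    # robust to extreme scores
    return "mean", mean(scores)            # interval/ratio data, not skewed

# Example: ratio-scaled scores with one extreme value -> the median is preferred
print(pick_center([55, 60, 62, 65, 99], level="ratio", skewed=True))  # ('median', 62)
```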
44. Measures of Central Tendency
• A measure of central tendency is a descriptive statistic that describes the average, or typical, value of a set of scores
• There are three common measures of central tendency:
• the mode
• the median
• the mean
45. The Mode
• The mode is the score that occurs most frequently in a set of data
[Figure: frequency histogram of scores on Exam 1]
46. Bimodal Distributions
• When a distribution has two “modes,” it is called bimodal
[Figure: bimodal frequency histogram of scores on Exam 1]
47. Multimodal Distributions
• If a distribution has more than 2 “modes,” it is called multimodal
[Figure: multimodal frequency histogram of scores on Exam 1]
48. When To Use the Mode
• The mode is not a very useful measure of central tendency
• It is insensitive to large changes in the data set
• That is, two data sets that are very different from each other can have the same mode
[Figure: two very different frequency distributions that share the same mode]
49. When To Use the Mode
• The mode is primarily used with nominally scaled data
• It is the only measure of central tendency that is appropriate for nominally scaled data
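As an illustration, the mode of numeric or nominal data can be found with Python's standard statistics module; a minimal sketch with made-up data:

```python
from statistics import mode, multimode

exam1 = [75, 80, 85, 85, 85, 90, 95]                            # unimodal: 85 occurs most often
payments = ["cash", "check", "credit", "credit", "cash", "credit"]

print(mode(exam1))                  # 85
print(mode(payments))               # 'credit' (the mode works for nominal data too)
print(multimode([1, 1, 2, 2, 3]))   # [1, 2] -> a bimodal set has two modes
```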
50. The Median
• The median is simply another name for the 50th percentile
• It is the score in the middle; half of the scores are larger than the median and half of the scores are smaller than the median
51. How To Calculate the Median
• Conceptually, it is easy to calculate the median
• There are many minor problems that can occur; it is best to let a computer do it
• Sort the data from highest to lowest
• Find the score in the middle
• middle = (N + 1) / 2
• If N, the number of scores, is even, the median is the average of the middle two scores
52. Median Example
• What is the median of the following scores:
10 8 14 15 7 3 3 8 12 10 9
• Sort the scores:
15 14 12 10 10 9 8 8 7 3 3
• Determine the middle score:
middle = (N + 1) / 2 = (11 + 1) / 2 = 6
• Middle score = median = 9
53. 53
Median Example
• What is the median of the following scores:
24 18 19 42 16 12
• Sort the scores:
42 24 19 18 16 12
• Determine the middle score:
middle = (N + 1) / 2 = (6 + 1) / 2 = 3.5
• Median = average of 3rd
and 4th
scores:
(19 + 18) / 2 = 18.5
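Both examples can be reproduced with a short Python sketch that follows the (N + 1) / 2 position rule above (in practice, statistics.median gives the same results):

```python
def median_by_rule(scores):
    """Median via the (N + 1) / 2 position rule described above."""
    ordered = sorted(scores)             # sorting ascending works just as well as descending
    n = len(ordered)
    middle = (n + 1) / 2                 # 1-based middle position
    if middle.is_integer():
        return ordered[int(middle) - 1]  # odd N: the single middle score
    lo = int(middle)                     # even N: average the two scores around the middle
    return (ordered[lo - 1] + ordered[lo]) / 2

print(median_by_rule([10, 8, 14, 15, 7, 3, 3, 8, 12, 10, 9]))  # 9
print(median_by_rule([24, 18, 19, 42, 16, 12]))                # 18.5
```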
54. Median Example for Discrete Frequency
x: 1 2 3 4 5 6 7 8 9
f: 8 10 11 16 20 25 15 9 6

x   f    CF
1   8    8
2   10   18
3   11   29
4   16   45
5   20   65
6   25   90
7   15   105
8   9    114
9   6    120

N = Σfi = 120
N/2 = 120/2 = 60
The CF just greater than N/2 = 60 is 65, which corresponds to x = 5
Median = 5
Median for continuous frequency distribution
Wages: 2000-3000 3000-4000 4000-5000 5000-6000 6000-7000
No. of workers: 3 5 20 10 5

wages       no. of workers   cf
2000-3000   3                3
3000-4000   5                8
4000-5000   20               28
5000-6000   10               38
6000-7000   5                43

N = Σfi = 43
N/2 = 43/2 = 21.5
The CF just greater than N/2 = 21.5 is 28, so the median class is 4000-5000
Median = L + (h/f)(N/2 - CF)
where L = lower limit of the median class, f = frequency of the median class, h = width (magnitude) of the median class, and CF = cumulative frequency of the class preceding the median class
Median = 4000 + (1000/20)(21.5 - 8) = 4000 + 50 × 13.5 = 4675
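A minimal Python sketch of the same interpolation, Median = L + (h/f)(N/2 - CF), using the wage data above (the helper name is illustrative):

```python
def grouped_median(classes, freqs):
    """Median of a continuous frequency distribution: L + (h / f) * (N/2 - CF)."""
    n = sum(freqs)
    half = n / 2
    cum = 0
    for (low, high), f in zip(classes, freqs):
        if cum + f >= half:            # first class whose cumulative frequency reaches N/2
            cf_before = cum            # CF of the class preceding the median class
            return low + (high - low) / f * (half - cf_before)
        cum += f

wage_classes = [(2000, 3000), (3000, 4000), (4000, 5000), (5000, 6000), (6000, 7000)]
workers = [3, 5, 20, 10, 5]
print(grouped_median(wage_classes, workers))   # 4675.0
```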
55. When To Use the Median
• The median is often used when the distribution of scores is either positively or negatively skewed
• The few really large scores (positively skewed) or really small scores (negatively skewed) will not overly influence the median
56. The Mean
• The mean is:
• the arithmetic average of all the scores: (ΣX) / N
• the number, m, that makes Σ(X - m) equal to 0
• the number, m, that makes Σ(X - m)² a minimum
• The mean of a population is represented by the Greek letter μ; the mean of a sample is represented by X̄
57. Calculating the Mean
• Calculate the mean of the following data:
1 5 4 3 2
• Sum the scores (ΣX):
1 + 5 + 4 + 3 + 2 = 15
• Divide the sum (ΣX = 15) by the number of scores (N = 5):
15 / 5 = 3
• Mean = X̄ = 3
58. Calculating the Mean for Discrete Data
x: 1 2 3 4 5 6 7
fi: 5 9 12 17 14 10 6

xi   fi   fi·xi
1    5    5
2    9    18
3    12   36
4    17   68
5    14   70
6    10   60
7    6    42

Mean = Σxifi / Σfi = 299 / 73 ≈ 4.10
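A quick Python check of the frequency-table mean above, showing why the result rounds to about 4.10:

```python
x = [1, 2, 3, 4, 5, 6, 7]
f = [5, 9, 12, 17, 14, 10, 6]

total_fx = sum(xi * fi for xi, fi in zip(x, f))   # sum of fi * xi = 299
total_f = sum(f)                                  # sum of fi = 73

mean = total_fx / total_f
print(total_fx, total_f, round(mean, 2))          # 299 73 4.1
```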
59. When To Use the Mean
• You should use the mean when
• the data are interval or ratio scaled
• Many people will use the mean with ordinally scaled data too
• and the data are not skewed
• The mean is preferred because it is sensitive to every score
• If you change one score in the data set, the mean will change
60. Relations Between the Measures of Central Tendency
• In symmetrical distributions, the median and mean are equal
• For normal distributions, mean = median = mode
• In positively skewed distributions, the mean is greater than the median
• In negatively skewed distributions, the mean is smaller than the median
62. Definition
• Measures of dispersion are descriptive statistics that describe how similar a set of scores are to each other
• The more similar the scores are to each other, the lower the measure of dispersion will be
• The less similar the scores are to each other, the higher the measure of dispersion will be
• In general, the more spread out a distribution is, the larger the measure of dispersion will be
63. Measures of Dispersion
• Which of the distributions of scores has the larger dispersion?
[Figure: two frequency distributions of scores from 1 to 10]
The upper distribution has more dispersion because the scores are more spread out
That is, they are less similar to each other
64. Measures of Dispersion
• There are three main measures of dispersion:
• The range
• The semi-interquartile range (SIR)
• Variance / standard deviation
65. The Range
• The range is defined as the difference between the largest score in the set of data and the smallest score in the set of data, XL - XS
• What is the range of the following data:
4 8 1 6 6 2 9 3 6 9
• The largest score (XL) is 9; the smallest score (XS) is 1; the range is XL - XS = 9 - 1 = 8
66. When To Use the Range
• The range is used when
• you have ordinal data or
• you are presenting your results to people with little or no knowledge of statistics
• The range is rarely used in scientific work as it is fairly insensitive
• It depends on only two scores in the set of data, XL and XS
• Two very different sets of data can have the same range:
1 1 1 1 9 vs 1 3 5 7 9
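A two-line Python check of the point above: two very different data sets can share the same range (example data taken from the slide):

```python
def value_range(scores):
    """Range = largest score minus smallest score (XL - XS)."""
    return max(scores) - min(scores)

print(value_range([4, 8, 1, 6, 6, 2, 9, 3, 6, 9]))                 # 8
print(value_range([1, 1, 1, 1, 9]), value_range([1, 3, 5, 7, 9]))  # 8 8 -> same range
```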
67. The Semi-Interquartile Range
• The semi-interquartile range (or SIR) is defined as the difference between the third and first quartiles divided by two
• The first quartile is the 25th percentile
• The third quartile is the 75th percentile
• SIR = (Q3 - Q1) / 2
68. SIR Example
• What is the SIR for the following data: 2 4 6 8 10 12 14 20 30 60
• 25% of the scores are below 5
• 5 is the first quartile
• 25% of the scores are above 25
• 25 is the third quartile
• SIR = (Q3 - Q1) / 2 = (25 - 5) / 2 = 10
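The example can be reproduced in Python. Quartile conventions differ between textbooks and libraries, so the sketch below takes Q1 and Q3 as stated on the slide and also shows the slightly different values the standard library computes:

```python
from statistics import quantiles

data = [2, 4, 6, 8, 10, 12, 14, 20, 30, 60]

def sir(q1, q3):
    """Semi-interquartile range: (Q3 - Q1) / 2."""
    return (q3 - q1) / 2

# Using the quartiles stated on the slide (Q1 = 5, Q3 = 25):
print(sir(5, 25))                 # 10.0

# Recomputed quartiles (and hence the SIR) differ a little under the
# standard library's default 'exclusive' quartile convention:
q1, _, q3 = quantiles(data, n=4)
print(q1, q3, sir(q1, q3))        # 5.5 22.5 8.5
```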
69. When To Use the SIR
• The SIR is often used with skewed data as it is insensitive to the extreme scores
71. What Does the Variance Formula Mean?
• First, it says to subtract the mean from each of the scores
• This difference is called a deviate or a deviation score
• The deviate tells us how far a given score is from the typical, or average, score
• Thus, the deviate is a measure of dispersion for a given score
72. What Does the Variance Formula Mean?
• Why can’t we simply take the average of the deviates? That is, why isn’t variance defined as:
σ² = Σ(X - μ) / N
This is not the formula for variance!
73. What Does the Variance Formula Mean?
• One of the definitions of the mean was that it always made the sum of the scores minus the mean equal to 0
• Thus, the average of the deviates must be 0 since the sum of the deviates must equal 0
• To avoid this problem, statisticians square the deviate score prior to averaging them
• Squaring the deviate score makes all the squared scores positive
74. What Does the Variance Formula Mean?
• Variance is the mean of the squared deviation scores
• The larger the variance is, the more the scores deviate, on average, away from the mean
• The smaller the variance is, the less the scores deviate, on average, from the mean
75. Standard Deviation
• When the deviate scores are squared in variance, their unit of measure is squared as well
• E.g. if people’s weights are measured in pounds, then the variance of the weights would be expressed in pounds² (or squared pounds)
• Since squared units of measure are often awkward to deal with, the square root of variance is often used instead
• The standard deviation is the square root of variance
77. Computational Formula
• When calculating variance, it is often easier to use a computational formula which is algebraically equivalent to the definitional formula:
σ² = Σ(X - μ)² / N = [ΣX² - (ΣX)² / N] / N
where σ² is the population variance, X is a score, μ is the population mean, and N is the number of scores
80. Variance of a Sample
• Because the sample mean is not a perfect estimate of the population mean, the formula for the variance of a sample is slightly different from the formula for the variance of a population:
s² = Σ(X - X̄)² / (N - 1)
where s² is the sample variance, X is a score, X̄ is the sample mean, and N is the number of scores
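Both versions are available in Python's statistics module; a short sketch contrasting the population (divide by N) and sample (divide by N - 1) formulas, with illustrative data:

```python
from statistics import pvariance, variance, pstdev, stdev

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # illustrative data, mean = 5

print(pvariance(scores))  # 4        population variance: sum of squared deviates / N
print(variance(scores))   # 4.571…   sample variance: sum of squared deviates / (N - 1)
print(pstdev(scores))     # 2.0      population standard deviation = sqrt(4)
print(stdev(scores))      # 2.138…   sample standard deviation = sqrt(32 / 7)
```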
81. Measure of Skew
• Skew is a measure of symmetry in the distribution of scores
[Figure: positively skewed, negatively skewed, and normal (skew = 0) distributions]
82. Measure of Skew
• The following formula can be used to determine skew:
s³ = [Σ(X - X̄)³ / N] / [Σ(X - X̄)² / N]^(3/2)
83. Measure of Skew
• If s³ < 0, then the distribution has a negative skew
• If s³ > 0, then the distribution has a positive skew
• If s³ = 0, then the distribution is symmetrical
• The more different s³ is from 0, the greater the skew in the distribution
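A direct translation of the moment-based skew statistic above, in Python (this is the population form with N in the denominators; library routines such as scipy.stats.skew follow the same idea but may apply bias corrections):

```python
def skew(scores):
    """Moment-based skew: mean cubed deviate / (mean squared deviate) ** 1.5."""
    n = len(scores)
    m = sum(scores) / n
    m2 = sum((x - m) ** 2 for x in scores) / n   # second central moment (variance with /N)
    m3 = sum((x - m) ** 3 for x in scores) / n   # third central moment
    return m3 / m2 ** 1.5

print(skew([1, 2, 3, 4, 5]))      # 0.0  symmetrical
print(skew([1, 1, 1, 2, 10]))     # > 0  positively skewed (long right tail)
print(skew([1, 9, 10, 10, 10]))   # < 0  negatively skewed (long left tail)
```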
84. Kurtosis (Not Related to Halitosis)
• Kurtosis measures whether the scores are spread out more or less than they would be in a normal (Gaussian) distribution
[Figure: mesokurtic (s⁴ = 3), leptokurtic (s⁴ > 3), and platykurtic (s⁴ < 3) distributions]
85. Kurtosis
• When the distribution is normally distributed, its kurtosis equals 3 and it is said to be mesokurtic
• When the distribution is less spread out than normal, its kurtosis is greater than 3 and it is said to be leptokurtic
• When the distribution is more spread out than normal, its kurtosis is less than 3 and it is said to be platykurtic
86. Measure of Kurtosis
• The measure of kurtosis is given by:
s⁴ = [Σ(X - X̄)⁴ / N] / [Σ(X - X̄)² / N]²
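A matching Python sketch for kurtosis; with this formula a normal distribution gives a value near 3, while many libraries report "excess kurtosis", which subtracts 3:

```python
import random

def kurtosis(scores):
    """Moment-based kurtosis: mean fourth-power deviate / (mean squared deviate) ** 2."""
    n = len(scores)
    m = sum(scores) / n
    m2 = sum((x - m) ** 2 for x in scores) / n
    m4 = sum((x - m) ** 4 for x in scores) / n
    return m4 / m2 ** 2

random.seed(0)
normal_sample = [random.gauss(0, 1) for _ in range(100_000)]
print(round(kurtosis(normal_sample), 2))   # close to 3 -> mesokurtic
print(kurtosis([1, 2, 3, 4, 5]))           # 1.7 -> flatter than normal, platykurtic (< 3)
```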
87. s², s³, & s⁴
• Collectively, the variance (s²), skew (s³), and kurtosis (s⁴) describe the shape of the distribution
88. Karl Pearson’s coefficient of skewness vs. Bowley’s coefficient of skewness

Karl Pearson’s method:
• It is based on mean, mode, and standard deviation.
• It is the usual method of finding the coefficient of skewness.
• Skewness = mean - mode
• Coefficient of skewness (Karl Pearson) = (mean - mode) / standard deviation

Bowley’s method:
• It is based on quartiles.
• It is usually used when the difference between quartiles is given.
• Skewness = Q3 + Q1 - 2·Median
• Coefficient of skewness (Bowley) = (Q3 + Q1 - 2·Median) / (Q3 - Q1)
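Bowley's quartile-based coefficient can be sketched the same way; because quartile conventions vary, the sketch takes Q1, the median, and Q3 as inputs (the values below are illustrative):

```python
def bowley_skewness(q1, median, q3):
    """Bowley's coefficient: (Q3 + Q1 - 2*Median) / (Q3 - Q1)."""
    return (q3 + q1 - 2 * median) / (q3 - q1)

# Median exactly halfway between the quartiles -> coefficient 0;
# median pulled toward Q1 -> positive skew
print(bowley_skewness(q1=10, median=20, q3=30))   # 0.0
print(bowley_skewness(q1=10, median=14, q3=30))   # 0.6
```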
89. Calculate Karl Pearson’s coefficient of skewness for the following data
X: 20 30 40 50 60 70
f: 8 12 20 10 6 4

X    fi   Xifi   X²     X²fi
20   8    160    400    3200
30   12   360    900    10800
40   20   800    1600   32000
50   10   500    2500   25000
60   6    360    3600   21600
70   4    280    4900   19600

Skp = (Mean - Mode) / σ
Mean = ΣXifi / Σfi = 2460 / 60 = 41
Mode = 40
Standard deviation σ ≈ 13.7
Skp = (41 - 40) / 13.7 ≈ 0.07
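The worked example can be verified with a few lines of Python; it reproduces mean 41, mode 40, σ of about 13.7, and a coefficient of about 0.07:

```python
import math

x = [20, 30, 40, 50, 60, 70]
f = [8, 12, 20, 10, 6, 4]

n = sum(f)                                       # 60
mean = sum(xi * fi for xi, fi in zip(x, f)) / n  # 2460 / 60 = 41.0
mode = x[f.index(max(f))]                        # 40 (largest frequency is 20)
var = sum(fi * (xi - mean) ** 2 for xi, fi in zip(x, f)) / n
sd = math.sqrt(var)                              # ≈ 13.75

skp = (mean - mode) / sd
print(mean, mode, round(sd, 1), round(skp, 2))   # 41.0 40 13.7 0.07
```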
Key terms (slide notes):
Data: facts or information that is relevant or appropriate to a decision maker
Population: the totality of objects under consideration
Sample: a portion of the population that is selected for analysis
Parameter: a summary measure (e.g., mean) that is computed to describe a characteristic of the population
Statistic: a summary measure (e.g., mean) that is computed to describe a characteristic of the sample