Statistics can be categorized into descriptive and inferential types. Descriptive statistics summarize sample data using measures like the mean and standard deviation, while inferential statistics use those summaries to draw conclusions about a larger population. There are four levels of measurement scales: nominal for categories without ordering; ordinal for ordered categories; interval for equal intervals with an arbitrary zero; and ratio for scales with an absolute zero. Proper use of statistics and scales allows for accurate data analysis across various fields.
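As a minimal sketch of the descriptive side, the following Python snippet computes a sample mean and standard deviation; the exam scores are hypothetical, invented for illustration:

```python
import statistics

# Hypothetical exam scores from a small sample.
scores = [72, 85, 90, 66, 78, 88, 95, 70]

mean = statistics.mean(scores)    # descriptive: central tendency
stdev = statistics.stdev(scores)  # descriptive: spread (sample standard deviation)

print(f"mean={mean:.2f}, stdev={stdev:.2f}")
```

An inferential step would then use these two numbers to, say, build a confidence interval for the population mean.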
This document provides an introduction and definition of statistics. It discusses statistics in both the plural and singular sense, as numerical data and as a method of study, respectively. It also outlines the basic terminologies in statistics such as data, population, sample, parameters, variables, and scales of measurement. Finally, it discusses the classification and applications of statistics as well as its limitations.
1. Introduction to Statistics in Curriculum and Instruction
1.1 The definition of statistics and other related terms
1.2 Descriptive statistics
1.3 Inferential statistics
1.4 Function and significance of statistics in education
1.5 Types and levels of measurement scales
2. Introduction to SPSS Software
3. Frequency Distribution
4. Normal Curve and Standard Score
5. Confidence Interval for the Mean, Proportions, and Variances
6. Hypothesis Testing with One and Two Samples
7. Two-way Analysis of Variance
8. Correlation and Simple Linear Regression
9. Chi-Square
This document provides an overview of key concepts in biostatistics and how to use SPSS software for data analysis. It discusses learning objectives for understanding biostatistics, different types of data (nominal, ordinal, interval, ratio), and variables (independent, dependent).
Evaluation Unit 4
Statistics in the Viewpoint of Evaluation
Unit 4 Syllabus-
4.2.1- Measuring Scales- Meaning and Statistical Use
4.2.2- Conversion and interpretation of Test Score
4.2.3- Normal Probability Curve
4.2.4- Central Tendency and its importance in Evaluation.
4.2.5- Dimensions of Deviation
Unit 4 is all about statistics.
Statistics is the study of the collection, analysis, interpretation, presentation, and organization of data.
In other words, it is a mathematical discipline used to collect and summarize data.
Also, we can say that statistics is a branch of applied mathematics.
Statistics is simply defined as the study and manipulation of data. As discussed in the introduction, statistics deals with the analysis and computation of numerical data.
Projective methods of Evaluation through Statistics-
“Measurement is a process of assigning numbers to individuals or their characteristics according to specific rules.” (Eble and Frisbie, 1991, p.25).
This is a very common and simple definition of the term ‘measurement’.
You can say that measurement is a quantitative description of one’s performance. Gay (1991) further simplified the term as a process of quantifying the degree to which someone or something possesses a given trait, i.e., a quality, characteristic, or feature.
Measurement assigns a numeral to quantify certain aspects of human and non-human beings.
It is numerical description of objects, traits, attributes, characteristics or behaviours.
Measurement is not an end in itself but definitely a means to evaluate the abilities of a person in education and other fields as well.
Measurement Scale-
Whenever we measure anything, we assign a numerical value. The system by which these values are assigned is known as a scale of measurement: “A scale is a system or scheme for assigning values or scores to the characteristics being measured” (Sattler, 1992). For example, to measure some aspect of a human being we assign a numeral to quantify it; if we have similar measurements for other members of the group, we can also order them; and if the scores fall at equal intervals, we can form groups accordingly.
Psychologist Stanley Stevens developed the four common scales of measurement:
Nominal
Ordinal
Interval &
Ratio
Each scale of measurement has properties that determine how to properly analyze the data.
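This idea can be sketched in code. The mapping below is a hypothetical, non-exhaustive lookup; it illustrates the common rule of thumb that each level of measurement also permits the statistics of the levels beneath it:

```python
# Hypothetical lookup: statistics commonly introduced at each of
# Stevens' four levels of measurement (not an exhaustive list).
SCALE_STATISTICS = {
    "nominal":  ["frequency", "percentage", "mode", "chi-square"],
    "ordinal":  ["median", "percentile", "rank correlation"],
    "interval": ["mean", "standard deviation", "Pearson correlation"],
    "ratio":    ["geometric mean", "coefficient of variation"],
}

def allowed_statistics(scale: str) -> list[str]:
    """Return statistics appropriate for a scale and every level below it,
    since each level inherits the properties of the levels beneath it."""
    order = ["nominal", "ordinal", "interval", "ratio"]
    idx = order.index(scale)
    result = []
    for level in order[: idx + 1]:
        result.extend(SCALE_STATISTICS[level])
    return result

print(allowed_statistics("interval"))
```

So, for instance, interval data supports the mean and standard deviation in addition to everything permitted for nominal and ordinal data.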
Nominal scale-
In a nominal scale, a numeral or label is assigned to characterize an attribute of a person or thing.
It implies no ordering of the attribute, such as high-low, more-less, big-small, or superior-inferior.
In a nominal scale, assigning a numeral is purely an individual matter; it has nothing to do with group scores or group measurement.
Statistics such as frequencies, percentages, mode, and chi-square tests are used in nominal measurement.
Examples include gender (male, female), colors (red, blue, green), or types of fruit (apple, banana, orange).
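For nominal data like the fruit example, frequencies, percentages, and the mode can be computed with Python's standard library; the survey responses below are hypothetical:

```python
from collections import Counter

# Hypothetical nominal data: fruit preferences in a small survey.
fruits = ["apple", "banana", "apple", "orange", "apple", "banana"]

counts = Counter(fruits)                              # frequencies
total = sum(counts.values())
percentages = {k: 100 * v / total for k, v in counts.items()}
mode = counts.most_common(1)[0][0]                    # most frequent category

print(counts, percentages, mode)
```

Note that only counting operations are meaningful here; averaging the categories themselves would make no sense, which is exactly the point of the nominal level.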
Ordinal scale-
Ordinal scale is synonymous with ranking or grading.
- Descriptive statistics describe the properties of sample and population data through metrics like mean, median, mode, variance, and standard deviation. Inferential statistics use those properties to test hypotheses and draw conclusions about large groups.
- Descriptive statistics focus on central tendency, variability, and distribution of data. Inferential statistics allow statisticians to draw conclusions about populations based on samples and determine the reliability of those conclusions.
- Statistics rely on variables, which are characteristics or attributes that can be measured and analyzed. Variables can be qualitative like gender or quantitative like mileage, and quantitative variables can be discrete like test scores or continuous like height.
- The two major areas of statistics are descriptive statistics, which summarizes data, and inferential statistics, which uses descriptive statistics to make generalizations and predictions.
- Mean, median, and mode describe central tendency, with mean being the average, median being the middle number, and mode being the most frequent value.
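These three measures of central tendency can be computed directly with Python's `statistics` module; the data set below is a made-up example:

```python
import statistics

data = [4, 1, 2, 2, 3, 5, 2]    # hypothetical small data set

print(statistics.mean(data))    # average of all values
print(statistics.median(data))  # middle value when sorted
print(statistics.mode(data))    # most frequent value
```

Here the sorted data is [1, 2, 2, 2, 3, 4, 5], so the median and the mode both happen to equal 2, while the mean is slightly higher.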
This document provides an overview of key concepts in data management and statistics. It defines statistics as the study of collecting, organizing, and interpreting data to make inferences about populations. The main branches are descriptive statistics, which summarizes data, and inferential statistics, which generalizes from samples to populations. It also defines key terms like population, sample, parameter, statistic, variable, data, levels of measurement, and measures of central tendency and dispersion. Measures of central tendency like mean, median, and mode are used to describe the center of data, while measures of dispersion like range and standard deviation describe how spread out data are.
Statistics involves the collection, organization, analysis, and interpretation of numerical data to aid decision making. Descriptive statistics summarize and describe data without generalizing, while inferential statistics makes generalizations using the data. Data can be collected directly through interviews or indirectly through questionnaires and observation, and also through experimentation and registration. Data is presented textually, in tables, or graphically. Population is the total set of data, while a sample is a representative portion. Variables measure characteristics that change, and can be quantitative or qualitative, discrete or continuous. Measurement scales include nominal, ordinal, interval, and ratio levels.
This document provides an introduction to statistics, including definitions, types, data measurement, and important terms. It defines statistics as the collection, analysis, interpretation, and presentation of numerical data. Statistics can be descriptive, dealing with conclusions about a particular group, or inferential, using a sample to make inferences about a larger population. There are four levels of data measurement - nominal, ordinal, interval, and ratio. Important statistical terms defined include population, sample, parameter, and statistic.
This presentation on Introduction to Statistics helps engineering students review the fundamental topics of statistics. It follows the syllabus of the Institute of Engineering (IOE) but is similar to that of almost all engineering colleges.
This document provides an overview of key concepts in statistics. It discusses that statistics involves collecting, organizing, analyzing and interpreting data. It also defines important statistical terms like population, sample, parameter, statistic, qualitative and quantitative data, independent and dependent variables, discrete and continuous variables, and different levels of measurement for variables. The different levels of measurement are nominal, ordinal, interval and ratio. Descriptive statistics are used to summarize and describe data, while inferential statistics allow making inferences about populations from samples.
Students will learn about key concepts in statistics including data collection, organization, and analysis. They will become familiar with descriptive and inferential statistics, different types of variables, and measurement scales. The course will cover topics such as probability distributions, sampling, estimation, and hypothesis testing. Statistics is used across many fields to analyze data, identify patterns, and make inferences about populations. While useful for decision making, statistics also has limitations as it deals with aggregates rather than individual data.
Sampling: A compact study of different types of samples, by Asith Paul K.
The document discusses various topics related to data collection in research methodology. It defines data collection and explains that it must be well-planned. It also discusses different types of variables like quantitative, qualitative, dependent, independent etc. and different scales of measurement. Further, it explains different data collection methods like surveys, questionnaires, interviews and focus groups. It also discusses concepts like population, sample, sampling methods and sources of data.
This document provides an introduction to biostatistics and key concepts. It defines biostatistics as the development and application of statistical techniques to scientific research relating to human life and health. Some key terms discussed include:
- Population, which is the totality of individuals of interest
- Sample, which is a subset of a population
- Variables, which can be qualitative (non-numerical) or quantitative (numerical)
- Levels of measurement for variables, including nominal, ordinal, interval, and ratio scales
- Descriptive methods for qualitative data, including frequency distributions
Biostatistics plays an important role in modern medicine, including determining disease burden, finding new drug treatments, planning resource allocation, and measuring health outcomes.
The material is consolidated from different sources on the basic concepts of statistics, which can be used for the visualization and prediction requirements of analytics.
I deeply acknowledge the sources which helped me consolidate the material for my students.
This document provides an outline and definitions for key concepts in statistics. It begins by defining statistics as a branch of applied mathematics dealing with collecting, organizing, analyzing, and interpreting quantitative data. It then distinguishes between descriptive statistics, which summarizes data, and inferential statistics, which makes predictions based on data analysis. It defines variables, scales of measurement, populations and samples, and parameters. The last section discusses common methods for collecting data, including interviews, questionnaires, observation, tests, and mechanical devices.
In research, one of the important aspects is analyzing the data appropriately in order to derive the findings. Statistics is applied mainly in quantitative research to carry out this analysis. Data collection, analysis, interpretation, presentation, and organization are all part of the mathematical field of statistics. Its main goals are making sense of the data, making judgments, and reaching trustworthy findings. Numerous disciplines, including science, economics, business, medicine, and the social sciences, rely heavily on statistics, which offers helpful methods and instruments for interpreting data, reaching trustworthy conclusions, and supporting evidence-based decision-making.
- Biostatistics refers to applying statistical methods to biological and medical problems. It is also called biometrics, which means biological measurement or measurement of life.
- There are two main types of statistics: descriptive statistics which organizes and summarizes data, and inferential statistics which allows conclusions to be made from the sample data.
- Data can be qualitative like gender or eye color, or quantitative which has numerical values like age, height, weight. Quantitative data can further be interval/ratio or discrete/continuous.
- Common measures of central tendency include the mean, median and mode. Measures of variability include range, standard deviation, variance and coefficient of variation.
- Correlation describes the relationship between two variables
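The variability measures and the correlation mentioned above can be sketched with a hypothetical paired data set; the dose/response numbers below are invented for illustration, and the Pearson coefficient is computed directly from its definition:

```python
import math
import statistics

# Hypothetical paired measurements, e.g. drug dose (x) and response (y).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.8]

# Measures of variability for y.
rng = max(y) - min(y)                 # range
var = statistics.variance(y)          # sample variance
sd = statistics.stdev(y)              # sample standard deviation
cv = sd / statistics.mean(y)          # coefficient of variation

# Pearson correlation from its definition: covariance over product of spreads.
mx, my = statistics.mean(x), statistics.mean(y)
num = sum((a - mx) * (b - my) for a, b in zip(x, y))
den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
r = num / den

print(f"range={rng:.1f}, cv={cv:.3f}, r={r:.4f}")
```

Because the hypothetical response rises almost perfectly in step with the dose, r comes out very close to +1, signalling a strong positive linear relationship.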
2. WHAT IS STATISTICS
Statistics is used in business and economics. It
plays an important role in the exploration of new
markets for a product, forecasting of business
trends, control and maintenance of high-quality
products, improvement of employer-employee
relationships, and analysis of data concerning
insurance, investment, sales, employment,
transportation, communications, auditing and
accounting procedures.
3. STATISTICS is the branch of mathematics that deals
with the theory and method of collecting, organizing,
presenting, analyzing and interpreting data.
Two Main Divisions/Phases of Statistics
1. DESCRIPTIVE STATISTICS refers to summary
statistics that quantitatively describe or summarize
features of a collection of data under investigation.
The goal is to describe: numerical measures are used to
tell about features of a set of data.
4. EXAMPLES:
Measures of the center of a data set: the mean, median, mode, or midrange
The spread of a data set, which can be measured with the range or standard deviation
Overall descriptions of data, such as the five-number summary
Measures of shape, such as skewness and kurtosis
The exploration of relationships and correlation between paired data
The presentation of statistical results in graphical form
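As a rough sketch, the measures listed above can be computed with Python's standard `statistics` module (the scores here are hypothetical):

```python
import statistics as st

scores = [70, 75, 75, 80, 85, 90, 95]   # hypothetical test scores

mean = st.mean(scores)                  # center: arithmetic average
median = st.median(scores)              # center: middle value
mode = st.mode(scores)                  # center: most frequent value
data_range = max(scores) - min(scores)  # spread: range
stdev = st.stdev(scores)                # spread: sample standard deviation
q1, _, q3 = st.quantiles(scores, n=4)   # quartiles for the five-number summary
```

The five-number summary is then (minimum, Q1, median, Q3, maximum).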
5. 2. INFERENTIAL STATISTICS – statistical tools
used to examine the relationships between
variables within a sample and then
make generalizations or predictions about how
those variables relate to a larger population.
• Example:
Tests of significance or hypothesis testing where scientists
make a claim about the population by analyzing a statistical
sample. By design, there is some uncertainty in this process.
This can be expressed in terms of a level of significance.
6. Two Branches of Statistics
1. Statistical Theory – concerned with the
formulation of theories, principles, and
formulas that serve as the basis for solving
problems related to Statistics.
2. Statistical Methods – concerned with the
application of those theories, principles, and
formulas to everyday problems.
7. OTHER STATISTICAL TERMS:
• POPULATION – the set of all conceivable
observations of a certain phenomenon. It refers to the totality of
the observations. Population size is denoted by the capital letter N.
• SAMPLE – a finite number of items selected from a population,
possessing the same characteristics as the population from
which it was taken. Sample size is denoted by the small letter n.
• PARAMETERS – characteristics/measures computed from the
population.
• STATISTIC(S) – characteristics/measures computed from the
sample.
8. • VARIABLE – refers to a fundamental quantity that changes in
value from one observation to another within a given domain and
under a given set of conditions. Variables may be represented
by the letters X, Y, etc.
• DISCRETE VARIABLE - is a variable whose value is obtained by
counting.
• CONTINUOUS VARIABLE- is a variable whose value is obtained
by measuring.
• CONSTANT – refers to fundamental quantities that do not
change in value.
9. FOUR LEVELS OF DATA
MEASUREMENT
Nominal – also called the categorical variable scale, is defined as a scale
used for labeling variables into distinct classifications; it involves no
quantitative value or order. This scale is the simplest of the four variable
measurement scales. Arithmetic on these variables is meaningless, as
the categories have no numerical value. (e.g., sex, gender, place of
residence, political affiliation)
Ordinal –a variable measurement scale used to simply depict the order
of variables and not the difference between each of the variables. These
scales are generally used to depict non-mathematical ideas such as
frequency, satisfaction, happiness, a degree of pain, etc.
10. • Ordinal Scale maintains descriptive qualities along with an intrinsic order,
but it lacks an origin of scale, so the distance between variables
can't be calculated. Descriptive qualities indicate tagging properties
similar to the nominal scale; in addition, the ordinal scale also gives
each variable a relative position. Because the scale has no origin,
there is no fixed starting point or "true zero".
Examples:
High school class ranking: 1st, 9th, 87th…
Socioeconomic status: poor, middle class, rich.
The Likert Scale: strongly disagree, disagree, neutral, agree, strongly agree.
Level of Agreement: yes, maybe, no.
Time of Day: dawn, morning, noon, afternoon, evening, night.
Political Orientation: left, center, right.
11. • Interval Scale is defined as a numerical scale where the order of the
variables is known as well as the difference between these variables.
Variables that have familiar, constant, and computable differences are
classified using the Interval scale. It is easy to remember the primary
role of this scale too, ‘Interval’ indicates ‘distance between two
entities’, which is what Interval scale helps in achieving.
• These scales are effective as they open doors for the statistical
analysis of provided data. Mean, median, or mode can be used to
calculate the central tendency in this scale. The only drawback of this
scale is that there is no pre-decided starting point or true zero value.
• Interval scale contains all the properties of the ordinal scale, in
addition to which, it offers a calculation of the difference between
variables. The main characteristic of this scale is the equidistant
difference between objects.
12. Interval Scale Examples
There are situations where attitude scales are considered to be interval scales.
Apart from the temperature scale, time is also a very common example of an
interval scale as the values are already established, constant, and measurable.
Calendar years and time also fall under this category of measurement scales.
Likert scale, Net Promoter Score, Semantic Differential Scale,
Bipolar Matrix Table, etc. are the most-used interval scale
examples.
Celsius Temperature.
Fahrenheit Temperature.
IQ (intelligence scale).
SAT scores.
Time on a clock with hands.
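A one-line check illustrates why ratios are not meaningful on an interval scale such as Celsius temperature: the zero point is arbitrary, so "20°C is twice as hot as 10°C" does not survive conversion to Kelvin, which has a true zero (a small illustrative sketch):

```python
def celsius_to_kelvin(c: float) -> float:
    """Kelvin has a true zero, so it is a ratio scale."""
    return c + 273.15

ratio_celsius = 20 / 10                                       # 2.0, but misleading
ratio_kelvin = celsius_to_kelvin(20) / celsius_to_kelvin(10)  # ~1.035, the meaningful ratio
```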
13. • Ratio Scale: 4th
Level of Measurement
• is defined as a variable measurement scale that not only produces
the order of variables and the difference between
variables, but also carries information about a true zero.
It assumes that the variables have a true zero point, that the
difference between any two adjacent values is the same, and
that there is a specific order among the options.
• With the option of true zero, varied inferential, and descriptive
analysis techniques can be applied to the variables. In addition to
the fact that the ratio scale does everything that a nominal,
ordinal, and interval scale can do, it can also establish the value
of absolute zero. The best examples of ratio scales are weight
and height. In market research, a ratio scale is used to calculate
market share, annual sales, the price of an upcoming product, the
number of consumers, etc.
14. • Examples of Ratio scale
Age
Weight
Height
Sales Figures
Ruler measurements.
Income earned in a week
15. STEPS IN A STATISTICAL INQUIRY OR
INVESTIGATION
start with a problem
1. Collection of data
2. Presentation of data
3. Analysis of data
4. Interpretation of data
16. DATA COLLECTION AND DATA PRESENTATION
What are DATA?
• Data are plain facts, usually raw numbers, words, measurements,
observations or just description of things. Think of a spreadsheet full
of numbers with no meaningful description. In order for these
numbers to become information, they must be interpreted to have
meaning.
TWO TYPES OF DATA
1. QUALITATIVE DATA is descriptive in nature ex., color, shapes
2. QUANTITATIVE is numerical information ex. weight, height
17. DATA COLLECTION
• Data collection is concerned with the accurate
gathering of data; although methods may differ
depending on the field, the emphasis remains on
ensuring accuracy. The primary goal of any data collection
is to capture quality data or evidence that readily
translates into rich data analysis, leading to
credible and conclusive answers to the questions that
have been posed.
18. M E T H O D S O F D A T A C O L L E C T I O N
1. THE INTERVIEW or DIRECT METHOD
The researcher or interviewer gets the needed data
from the respondent or interviewee verbally, through direct
face-to-face contact.
2. THE QUESTIONNAIRE or INDIRECT METHOD
The questionnaire is a tool for data gathering and
research that consists of a set of questions of
various types, used to collect information
from respondents for the purpose of a survey or
statistical analysis study.
19. 3. REGISTRATION METHOD
This method is used by the government such as the records of births at the
Philippine Statistics Authority (PSA), registration record at the COMELEC
4. OBSERVATION
This method is a way of collecting data through observing. The observer gains
firsthand knowledge by being in and around the social setting that is being
investigated.
5. EXPERIMENTATION
An experiment is a procedure carried out to support, refute, or validate a
hypothesis. An experiment is a method that most clearly shows cause-and-effect
because it isolates and manipulates a single variable, in order to clearly show its
effect.
20. DATA PRESENTATION
Once data has been collected, it has to be classified and organized in such a way that it becomes easily
readable and interpretable, that is, converted to information.
TYPES OF DATA PRESENTATION
1. TEXTUAL PRESENTATION
This type of presentation combines text and figures in a statistical report.
Example: news item in the newspaper
2. TABULAR PRESENTATION
This type of presentation uses tables consisting of vertical columns and horizontal rows
with headings describing these rows and columns. The data are presented in a more brief and
orderly manner.
Example: frequency table
3. GRAPHICAL PRESENTATION
This is the most effective means of presenting statistical data, because important relationships
are brought out more clearly in graphs.
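A minimal sketch of a tabular presentation — a frequency table built from hypothetical qualitative data with Python's standard library:

```python
from collections import Counter

# hypothetical survey responses (qualitative data)
responses = ["red", "blue", "red", "green", "blue", "red"]

freq = Counter(responses)
total = len(responses)

# print a simple frequency table: value, count, percent of total
print(f"{'Value':<8}{'Frequency':>10}{'Percent':>10}")
for value, count in freq.most_common():
    print(f"{value:<8}{count:>10}{count / total:>10.1%}")
```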
21. DIFFERENT TYPES OF GRAPHS COMMONLY USED IN DATA
PRESENTATION
1. BAR GRAPH
A bar chart or bar graph is a chart or graph that presents
categorical data with rectangular bars with heights or lengths
proportional to the values that they represent. The bars can
be plotted vertically or horizontally.
22. LINE GRAPH
A line graph is a graphical display of information that changes
continuously over time. A line graph may also be referred to as
a line chart. Within a line graph, there are points connecting
the data to show a continuous change. The lines in a line graph
can descend and ascend based on the data. We can use a line
graph to compare different events, situations, and information.
23. PIE GRAPH
A pie chart is a circular chart divided into wedge-like sectors, illustrating
proportion. Each wedge represents a proportionate part of the whole, and the total
value of the pie is always 100 percent.
Pie charts can make the size of portions easy to understand at a glance.
They're widely used in business presentations and education to show the proportions
among a large variety of categories including expenses, segments of a population, or
answers to a survey.
24. SCATTER DIAGRAM
A scatter diagram, also called a scatterplot, is a type of plot or
mathematical diagram using Cartesian coordinates to display values for typically two
variables for a set of data. If the points are coded (color/shape/size), one additional
variable can be displayed. The data are displayed as a collection of points, each having
the value of one variable determining the position on the horizontal axis and the value
of the other variable determining the position on the vertical axis.
25. 5. PICTOGRAPH/PICTOGRAM
A pictograph is a chart or graph, which uses pictures to represent data. A pictograph
is one of the simplest forms of data visualization.
26. TWO TYPES OF SAMPLING
• Probability sampling
• Simple random
• Systematic
• Stratified
• Cluster
• Non-probability sampling
• Convenience/Accidental
• Judgmental/Purposive
• Quota
• Snowball
27. PROBABILITY VS NON-PROBABILITY SAMPLING
1. Probability or Random Sampling
Provides equal chances to every single element of the population to be
included in the sampling.
2. Non-Probability Sampling
The samples are selected in a process that does not give all the
individuals in the population equal chances of being selected.
Samples are selected on the basis of their accessibility or by the
purposive personal judgment of the researcher.
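A minimal sketch of simple random (probability) sampling, using a hypothetical sampling frame of 1,000 units:

```python
import random

population = list(range(1, 1001))        # hypothetical sampling frame of 1,000 units

random.seed(42)                          # fixed seed only to make the draw reproducible
sample = random.sample(population, 100)  # every unit has an equal chance of selection
```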
29. PROBABILITY-BASED SAMPLING
Systematic Sampling
Step 1. Identify the population (N)
Step 2. Identify the number of sample (n) to be drawn from the population
Step 3. Divide N by n to find the sampling interval
Example
Population is 1,000. Desired sample size is 100. Sampling interval is 10.
Get a random start from 1 to 10 in the list as the first sample, then take every 10th in the list.
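The steps of systematic sampling can be sketched directly (the figures match the example: N = 1,000, n = 100, interval = 10):

```python
import random

N = 1000            # Step 1: population size
n = 100             # Step 2: desired sample size
k = N // n          # Step 3: sampling interval (10)

random.seed(0)
start = random.randint(1, k)           # random start from 1 to k
sample = list(range(start, N + 1, k))  # that unit and every k-th unit after it
```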
30. PROBABILITY-BASED SAMPLING
Stratified Sampling
Used to ensure that different groups in the population are adequately represented in the sample
Step 1. Identify the population and divide the population into different groups or strata according to criteria.
Step 2. Decide on the sampling size or actual percentage of the population to be considered as sample.
Step 3. Get a proportion of sample from each group
Step 4. Select the respondents by random sampling
Example : Population = 2000 Desired Sample Size = 10%
Proportion of sample per stratum = 10%
500 students x .10 = 50
600 businessman x .10 = 60
400 teachers x .10 = 40
500 farmers x .10 = 50
Total sample = 200
Select the 200 by random sampling.
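The stratified example above can be sketched as a proportional allocation (figures taken from the example; the actual respondents within each stratum would then be drawn by random sampling):

```python
strata = {                # population per group, from the example above
    "students": 500,
    "businessmen": 600,
    "teachers": 400,
    "farmers": 500,
}
rate = 0.10               # 10% of each stratum

# proportion of sample from each group
allocation = {group: round(size * rate) for group, size in strata.items()}
total_sample = sum(allocation.values())  # 200
```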
31. PROBABILITY-BASED SAMPLING
Cluster Sampling
Often called geographic sampling
Used in large scale surveys
The population is divided into multiple groups called clusters. The
clusters are selected with a simple random or systematic sampling
technique for data collection and data analysis.
Example: the population includes the elementary schools in a province.
The province is first divided into districts, which are treated as clusters
and are randomly selected. From the districts, schools are picked
out at random, then classes, then students.
32. NON-PROBABILITY SAMPLING
1. Accidental or Convenience Sampling
Researcher selects subjects that are more readily accessible or
available.
2. Purposive Sampling
Subjects are selected based on the needs of the study.
33. NON-PROBABILITY-BASED SAMPLING
3. Quota Sampling
Researcher takes a sample that is in proportion to some characteristic or trait of the
population
The population is divided into groups or strata (the basis may be age, gender, education
level, race, religion, etc.).
Samples are taken from each group to meet a quota.
Care is taken to maintain the correct proportions representative of the population.
Example :
The population consists of 60% female and 40% male.
The desired sample size is 200.
Therefore, the sample should consist of ____ females and ____ males.
34. NON-PROBABILITY-BASED SAMPLING
A study on science teaching is to be conducted in high schools of a region.
There are 4,641 teachers grouped according to area of specialization.
There are 2,243 biology teachers, 1,406 chemistry teachers and 992 physics
teachers.
The desired sample size is 300.
Select the sample according to the Quota Sampling technique.
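One way to work this exercise is to allocate the quota of 300 in proportion to each specialization's share of the 4,641 teachers. This is only a sketch: in general, simple rounding may need adjustment so the quotas sum exactly to n, though here it happens to work out.

```python
population = {"biology": 2243, "chemistry": 1406, "physics": 992}
N = sum(population.values())   # 4,641 teachers in total
n = 300                        # desired sample size

# proportional quota per group (rounded to whole teachers)
quota = {area: round(n * size / N) for area, size in population.items()}
```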
35. NON-PROBABILITY-BASED SAMPLING
4. Snowball Sampling
This type of sampling starts with known sources of information, who or
which will in turn give other sources of information. As this goes on,
data accumulates.
This is used to find socially devalued urban populations such as drug
addicts, alcoholics, child abusers and criminals because they are usually
hidden from outsiders.