1. Basics of statistics
Conducted by Dept. of Biostatistics, NIMHANS
From 28 to 30 Sept, 2015
"It is easy to lie with statistics,
But it is hard to tell the truth without statistics."
–Andrejs Dunkels
2. Topics covered
• Introduction
• Types of statistics
• Definitions
• Variable & types
• Variable scales
• Description of data
• Distribution of sample & population
• Measures of center, dispersion & shape
• Properties of the Normal distribution
• Testing of hypothesis
• Types of error
• Estimation of sample size
• Various tests to be used
• Central limit theorem
• Parametric tests- t-test, ANOVA, Post hoc, Correlation & Regression
• Non-parametric tests
• Tests for categorical data
• Summary of tests to be used
• Qualitative vs Quantitative research
• Qualitative research
• Software packages
3. Statistics
• Consists of a body of methods for collecting and analyzing data.
• It provides methods for-
– Design- planning and carrying out research studies
– Description- summarizing and exploring data
– Inference- making predictions & generalizing about phenomena represented by the data
4. Types of statistics
• 2 major types of statistics
• Descriptive statistics- consists of methods for organizing and summarizing information.
– Includes graphs, charts, tables & calculation of averages, percentiles.
• Inferential statistics- consists of methods for drawing conclusions about a population, and measuring their reliability, based on information obtained from a sample.
– Includes point estimation, interval estimation, hypothesis testing.
• The two are interrelated: the methods of descriptive statistics are needed to organize and summarize the information before the methods of inferential statistics can be used.
5. Population & Sample
• Basic concepts in statistics.
• Population- the collection of all individuals or items under consideration in a statistical study.
• Sample- the part of the population from which information is collected.
• The population is always the target of an investigation; we learn about the population by sampling from it.
6. • Parameters- summarize the features of the population under investigation.
• Statistic- describes a characteristic of the sample, which can then be used to make inferences about unknown parameters.
7. Variable & types
• Variable- a characteristic that varies from one person or thing to another.
• Types- Qualitative/Quantitative, Discrete/Continuous, Dependent/Independent.
• Qualitative data- variables which yield non-numerical data.
– Eg- sex, marital status, eye colour
• Quantitative data- variables which yield numerical data.
– Eg- height, weight, number of siblings
8. • Discrete variable- has only a countable number of distinct possible values.
– Eg- number of car accidents, number of children
• Continuous variable- has divisible units; it can take any value in a range.
– Eg- weight, length, temperature
• Independent variable- does not depend on another variable.
– Eg- age, sex
• Dependent variable- depends on the independent variable.
– Eg- weight of a newborn, stress
9. Variable scales
• Variables can also be described according to the scale on which they are defined.
• Nominal scale- the categories are merely names, with no natural order.
– Eg- male/female, yes/no
• Ordinal scale- the categories can be put in order, but the differences between adjacent categories need not be equal.
– Eg- mild/moderate/severe
10. • Interval scale- differences between values are comparable, but the variable has no absolute zero.
– Eg- temperature, time
• Ratio scale- the variable has an absolute zero, and differences between values are comparable.
– Eg- stress using the PSS, insomnia using the ISI
• Nominal & ordinal scales are used to describe qualitative data.
• Interval & ratio scales are used to describe quantitative data.
11. Describing data
• Qualitative data-
– Frequency- the number of observations falling into a particular class/category of the qualitative variable.
– Frequency distribution- a table listing all classes & their frequencies.
– Graphical representation- pie chart, bar graph.
– Nominal data are best displayed by a pie chart.
– Ordinal data are best displayed by a bar graph.
12. • Quantitative data-
– Can be presented by a frequency distribution.
– If a discrete variable has many different values, or if the data are continuous, the data can be grouped into classes/categories.
– Class interval- covers the range between the maximum & minimum values.
– Class limits- end points of the class interval.
– Class frequency- the number of observations in the data that belong to each class interval.
– Usually presented as a histogram or a bar graph.
13. Population & Sample distribution
• Population distribution- frequency distribution of the population.
• Sample distribution- frequency distribution of the sample.
• The sample distribution is a blurry photo of the population distribution: as the sample size ↑, the sample distribution becomes a closer representation of the population distribution.
• The shape of a sample or population distribution can be summarized from its graph.
• It can be symmetric, or non-symmetric/skewed to the left or right, based on its tail.
14. Properties of numerical data & their measures
• Central tendency- Mean, Median, Mode
• Dispersion- Range, Interquartile range, Standard deviation
• Shape- Skewness, Kurtosis
15. Measures of center
• Central tendency- in any distribution, the majority of the observations pile up, or cluster, in a particular region.
– Includes Mean, Median & Mode.
• Mean- the sum of the observed values divided by the number of observations.
• Median- the observation that divides the ordered data set into halves.
• Mode- the value which occurs with the greatest frequency.
• Mean & median can be applied only to quantitative data.
• Mode can be used for either qualitative or quantitative data.
16. What to choose?
• Qualitative variable- Mode.
• Quantitative with symmetric distribution- Mean.
• Quantitative with skewed distribution- Median.
• Outlier- an observation that falls far from the rest of the data. The mean is highly influenced by outliers.
• We use the sample mean, median & mode to estimate the population mean, median & mode.
17. Measures of dispersion
• Dispersion- the spread/variability of values about the measures of central tendency. These measures quantify the variability of the distribution.
• Measures include-
– Range
– Sample interquartile range
– Standard deviation
• Mostly used for quantitative data.
• Range- the difference between the largest and smallest observed values in the data set.
– A great deal of information is ignored when only the range is considered.
18. • Interquartile range- the difference between the first & third quartiles of the variable.
– Percentile- divides the observed values into hundredths/100 equal parts.
– Deciles- divide the observed values into tenths/10 equal parts.
– Quartiles- divide the observed values into 4 equal parts; Q1 divides the bottom 25% of observed values from the top 75%, and so on.
• Standard deviation- a kind of average of the deviations of observed values from the mean of the variable (the square root of the average squared deviation).
– It is defined using the sample mean, and its value is strongly affected by a few extreme observations.
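All three measures can be computed directly with NumPy; the scores below are made up for illustration. Note that ddof=1 gives the sample (rather than population) standard deviation.

```python
import numpy as np

x = np.array([62, 65, 68, 70, 71, 74, 78, 95])

print(x.max() - x.min())           # range: 33 (uses only 2 observations)
q1, q3 = np.percentile(x, [25, 75])
print(q3 - q1)                     # interquartile range: spread of middle 50%
print(x.std(ddof=1))               # sample standard deviation
```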
19. Shape
• Skewness- lack of symmetry in a distribution. It can be read from the frequency polygon.
• Properties-
– Mean, median & mode fall at different points.
– Quartiles are not equidistant from the median.
– The curve is not symmetrical but stretched more to one side.
• A distribution may be positively or negatively skewed. The limits for the coefficient of skewness are ±3.
• Kurtosis- convexity of the curve.
– Gives an idea of the flatness/peakedness of the curve.
20. Normal distribution
• Bell shaped symmetric distribution.
• Why is it important?
– Many things are normally distributed, or very close to it.
– It is easy to work with mathematically
– Most inferential statistical methods make use of properties of the normal distribution.
• Mean = Median = Mode
• 68.2% of the values lie within 1SD.
• 95.4% of the values lie within 2SD.
• 99.7% of the values lie within 3SD.
21. Tests to check normal distribution
1. Checking measures of Central tendency, Skewness & Kurtosis.
2. Graphical evaluation- normal plot, frequency polygon.
3. Statistical tests-
– Kolmogorov-Smirnov test
– Shapiro-Wilk test
– Lilliefors test
– Pearson’s chi-squared test
• Shapiro-Wilk has the best power for a given significance.
• If not normally distributed?- correct by transformation of the data- log transformation, square root transformation.
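A short sketch of step 3 with SciPy, on simulated right-skewed data (the data are artificial): the Shapiro-Wilk test rejects normality for the raw values, and a log transformation, as suggested above, restores it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0, sigma=0.8, size=100)   # right-skewed (log-normal) data

print(stats.shapiro(x).pvalue)           # small p -> reject normality
print(stats.shapiro(np.log(x)).pvalue)   # large p -> consistent with normality
```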
22. Hypothesis testing
• The aim of a study is to check whether the data agree with certain predictions. These predictions are called hypotheses.
• Hypotheses arise from the theory that drives the research.
• Significance test- a way of statistically testing a hypothesis by comparing it against the data.
– It involves two hypotheses- the Null (H0) & the Alternative (H1).
– The null hypothesis is usually a statement that the parameter has a value corresponding to, in some sense, no effect.
– The alternative hypothesis contradicts the null hypothesis.
– Hypotheses are formulated before collecting the data.
23. • A significance test analyzes the strength of the sample evidence against the null hypothesis.
• The test investigates whether the data contradict the null hypothesis, suggesting that the alternative hypothesis is true.
• Test statistic- a statistic calculated from the sample data to test the null hypothesis.
• p-value- the probability, if H0 were true, that the test statistic would fall in a collection of values at least as extreme as the one observed. The smaller the p-value, the more strongly the data contradict H0.
• When the p-value is ≤ 0.05, the data sufficiently contradict H0.
24. Types of error
• Type I/α error- rejecting a true null hypothesis.
– We may conclude that a difference is significant when in fact there is no real difference (a false positive).
– The maximum probability of this error we tolerate is called the level of significance. Being the more serious error, it is kept low, usually under 5% (p < 0.05).
• Type II/β error- accepting a false null hypothesis.
– We may conclude that a difference is not significant when in fact there is a real difference (a false negative).
– 1 − β is called the Power of the test & indicates its sensitivity.
• It is not possible to reduce both Type I & II errors at once, so the α error is fixed at a tolerable limit & the β error is minimized by ↑ the sample size.
25. Estimation of Sample size
• Too small a sample- fails to detect clinically important effects (lack of power).
• Too large a sample- identifies differences which have no clinical relevance.
• Calculation is based on (formulas not included)-
– Estimation of a mean
– Estimation of proportions
– Comparison of two means
– Comparison of two proportions
• Checklist- level of significance, power, study design, statistical procedure.
• Minimum sample size required for statistical analysis- 50.
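Since the slide omits its formulas, here is one concrete example I am supplying: the standard formula for estimating a population mean to within margin of error E, given a known SD σ, is n = (z·σ/E)². The numbers below are hypothetical.

```python
import math
from scipy import stats

def n_for_mean(sigma, margin, confidence=0.95):
    """Minimum n to estimate a population mean to within +/- margin."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)   # 1.96 for 95% confidence
    return math.ceil((z * sigma / margin) ** 2)

print(n_for_mean(sigma=15, margin=5))   # 35 subjects
```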
26. Basic theorem in statistics
• Central limit theorem-
– States that the distribution of the sum/average of a large number of independent, identically distributed variables will be approximately normal.
• Why is this important?
– It is the basis of many statistical procedures.
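A quick simulation of the theorem (illustrative, not part of the deck): averages of 50 draws from a clearly non-normal exponential population are themselves nearly normally distributed.

```python
import numpy as np

rng = np.random.default_rng(42)
# 10,000 samples of size 50 from an exponential (skewed) population
means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)

print(means.mean())   # ~1.0, the population mean
print(means.std())    # ~1/sqrt(50) ~ 0.14: a tight, near-normal sampling distribution
```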
27. Parametric tests
• Statistical tests that make assumptions about the parameters (defining properties) of the population distribution.
• Assumptions made-
– The data follow a normal distribution.
– The sample size is large enough for the Central limit theorem to lead to normality of averages.
– The data are not normal, but can be transformed.
• Some situations where data do not follow a normal distribution-
– The outcome is an ordinal variable.
– Presence of definite outliers.
– The outcome has clear limits of demarcation.
28. Tests to be used
• Nominal scale- Mode; Chi-square test
• Ordinal scale- Mode/Median
• Interval & Ratio scales- Mean, Standard deviation; t-test, ANOVA, Post hoc, Correlation, Regression
• One-sample t-test- compares the sample mean with the population mean
• Independent t-test- compares the means of two independent samples
• Dependent (paired) t-test- compares the means of paired samples (before-after, pre-post)
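A sketch of the three t-test variants with SciPy; the blood-pressure-style numbers are invented for illustration.

```python
from scipy import stats

pre  = [118, 125, 132, 121, 129, 140, 135, 127]
post = [112, 120, 128, 118, 126, 133, 130, 121]

# One-sample: does the mean of `pre` differ from a population mean of 120?
print(stats.ttest_1samp(pre, popmean=120))

# Independent: treats the two lists as unrelated groups
print(stats.ttest_ind(pre, post))

# Dependent (paired): same subjects measured before and after
print(stats.ttest_rel(pre, post))
```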
29. ANOVA
• t-test- difference between 2 means.
– If there are more than 2 means, doing multiple t-tests inflates the α & β errors, which creates a serious flaw.
• So when there are >2 means to be compared, we use ANOVA.
• Types-
– One-way- studies the effect of one factor.
– Two-way- studies the effects of multiple factors.
• Assumptions of ANOVA- normality, homogeneity of variance.
• ANCOVA- a blend of ANOVA & regression: it compares group means while adjusting for the effect of a continuous covariate.
30. Post hoc
• A Latin phrase meaning "after this" or "after the event".
• Why do post hoc tests?
– ANOVA tells whether there is an overall difference between groups, but not which specific groups differed.
– Post hoc tests tell where the difference between groups occurred.
• Different post hoc tests-
– Bonferroni
– Fisher's least significant difference (LSD)
– Tukey's honestly significant difference (HSD)
– Scheffé post hoc test
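A minimal sketch of the ANOVA-then-post-hoc workflow with SciPy (invented data; stats.tukey_hsd requires SciPy ≥ 1.8): the ANOVA flags an overall difference, and Tukey's HSD shows which pairs produced it.

```python
from scipy import stats

g1 = [23, 25, 21, 22, 24]
g2 = [30, 28, 31, 27, 29]   # this group's mean is clearly higher
g3 = [24, 26, 22, 25, 23]

# One-way ANOVA: is there any overall difference among the group means?
print(stats.f_oneway(g1, g2, g3))

# Tukey's HSD post hoc: which specific pairs of groups differ?
print(stats.tukey_hsd(g1, g2, g3))
```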
31. Correlation & Regression
• Correlation- denotes association between 2 quantitative variables.
– Assumes the association is linear (i.e., one variable ↑/↓ a fixed amount for a unit ↑/↓ in the other).
– The degree of association is measured by the correlation coefficient, r.
– r is measured on a scale from -1 through 0 to +1.
– When both variables ↑ together, r is positive; when one variable ↑ as the other ↓, r is negative.
• Graphically- scatter diagrams; usually the independent variable is plotted on the x-axis & the dependent variable on the y-axis.
• Limitation- correlation says nothing about a cause & effect relationship.
– Beware of spurious/nonsense correlations.
32. • Correlation-
– Strength/degree of association.
• Regression-
– Nature of the association (e.g., if x & y are related, how much y changes on average when x changes by a certain amount).
– Expresses the linear relationship between variables.
– Regression coefficient- β
– Types- Linear, Non-linear, Stepwise
• The regression coefficient gives a better summary of the relationship between the two variables than the correlation coefficient.
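The contrast between the two is easy to see in code (hypothetical height/weight data): pearsonr gives the strength of the association, while linregress gives its nature, the average change in y per unit x.

```python
from scipy import stats

height = [150, 155, 160, 165, 170, 175, 180]
weight = [52, 57, 60, 66, 69, 75, 80]

r, p = stats.pearsonr(height, weight)
print(r, p)                              # strength/degree of linear association

fit = stats.linregress(height, weight)   # nature of the association
print(fit.slope, fit.intercept)          # expected kg gained per extra cm
```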
33. Non-parametric tests
• Also called "distribution-free tests", because they are based on fewer assumptions.
• Advantages-
– When the data do not follow a normal distribution.
– When the average is better represented by the median.
– When the sample size is small.
– In the presence of outliers.
– Relatively simple to conduct.
34. Tests- parametric & non-parametric counterparts
• Testing a mean against a hypothesized value- One-sample t-test / Sign test
• Comparison of means of 2 groups- Independent t-test / Mann-Whitney U test
• Means of related samples- Paired t-test / Wilcoxon signed-rank test
• Comparison of means of >2 groups- ANOVA / Kruskal-Wallis test
• Comparison of means of >2 related groups- Repeated measures ANOVA / Friedman's test
• Relationship between 2 quantitative variables- Pearson's correlation / Spearman's correlation
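The non-parametric column of this mapping corresponds directly to SciPy calls; a compact sketch with made-up data:

```python
from scipy import stats

a = [12, 15, 11, 18, 14, 16]
b = [22, 19, 25, 21, 24, 20]
c = [30, 28, 33, 29, 31, 27]

print(stats.mannwhitneyu(a, b))   # 2 independent groups
print(stats.wilcoxon(a, b))       # 2 related (paired) samples
print(stats.kruskal(a, b, c))     # > 2 independent groups
print(stats.spearmanr(a, b))      # rank correlation of 2 variables
```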
35. Chi-Square test
• Used for the analysis of categorical data.
• Other tests- Fisher's exact probability test, McNemar's test.
• Requirements of the Chi-square test-
– Samples should be independent.
– Sample size should be reasonably large (n > 40).
– Expected cell frequency should not be < 5.
• Yates' correction- applied if an expected cell frequency is < 5.
• Fisher's exact probability test- used when the sample size is small (n < 20).
• McNemar's test- used when there are two related samples or repeated measurements.
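A sketch of the test on a hypothetical 2x2 exposure-by-disease table: correction=True applies Yates' continuity correction, and the expected-frequency matrix lets you verify the "< 5" rule above.

```python
import numpy as np
from scipy import stats

table = np.array([[30, 70],    # exposed:     30 diseased, 70 healthy
                  [10, 90]])   # not exposed: 10 diseased, 90 healthy

chi2, p, dof, expected = stats.chi2_contingency(table, correction=True)
print(chi2, p, dof)
print(expected)   # all expected cell frequencies should be >= 5
```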
36. RR & OR
• Relative Risk (RR)-
– The ratio of the incidence rate among the exposed to the incidence rate among the non-exposed.
– Used in RCTs & cohort studies.
– Values- <1: risk of disease is lower among the exposed; >1: risk is higher among the exposed; =1: equal risk among exposed & non-exposed.
• Odds Ratio (OR)-
– The ratio of the odds of exposure among cases to the odds of exposure among controls. Used for rare diseases/events.
– Used in case-control & retrospective studies (where calculating the risk of getting the disease is not meaningful).
– Values- >1: exposure more common among cases; <1: more common among controls.
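A worked example on a hypothetical 2x2 table (same layout as the chi-square sketch above): RR compares risks between exposure groups, while OR is the cross-product ratio.

```python
# Hypothetical 2x2 table:   diseased  healthy
a, b = 30, 70   # exposed
c, d = 10, 90   # not exposed

rr = (a / (a + b)) / (c / (c + d))   # Relative Risk: 0.30 / 0.10 = 3.0
or_ = (a * d) / (b * c)              # Odds Ratio: 2700 / 700 ~ 3.86
print(rr, or_)                       # both > 1: disease more likely if exposed
```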
37. Qualitative v/s Quantitative
Quantitative research
• Seeks to confirm hypotheses
• Highly structured methods used
• Uses closed-ended, numerical methods of collecting data
• Study design is fixed & subject to statistical assumptions
Qualitative research
• Seeks to explore phenomena
• Semi-structured methods used
• Uses open-ended, textual methods
• Study design is flexible, iterative & subject to textual analysis
38. Qualitative research
• Provides complex descriptions & information about issues such as contradictory behaviour, beliefs, opinions, emotions & relationships.
• Methods used-
– Phenomenology
– Ethnography
– Grounded theory
• Designs used-
– Case studies
– Comparative designs
– Snapshots
– Retrospective & longitudinal studies
39. Statistical software packages
Quantitative research-
• SPSS by IBM
• R by the R Foundation
• GenStat by VSN International
• Mathematica by Wolfram Research
• Minitab, MATLAB, NMath Stats, etc.
Qualitative research-
• ATLAS.ti
• NVivo
• MAXQDA
• NUD*IST
• ANTHROPAC
40. "An approximate answer to the right problem is worth a good deal,
more than an exact answer to an approximate problem." -- John Tukey