This course gives an introduction to probability and statistics for engineers, including: probability, combinatorics, random variables, functions of random variables, moments, inequalities and limit theorems, statistics, regression and estimation theory, autocorrelation and cross-correlation of analogue and discrete data, hypothesis testing, system reliability, and computer usage in solving problems involving probability and statistics.
Intended Learning Outcomes (ILOs):
Introducing the basic concepts of probability theory and its applications.
Recognizing the basic concepts of statistical models and their importance in analyzing data, and recognizing the effect of variability in solving problems.
Demonstrating industry- and management-related problems, including data collection, analysis, and model utilization.
Textbook :
Douglas C. Montgomery and George C. Runger, Applied Statistics and Probability for Engineers, 4th Edition, John Wiley & Sons, 2007.
The first exam will be held on Tuesday, 20 October 2020, at the formal lecture time (11:00-12:00). Please come well prepared with all the material and tools you may need. Good luck!
Statistics is the study of the collection, organization, analysis, interpretation and presentation of data. It deals with all aspects of data, including the planning of data collection in terms of the design of surveys and experiments.
Objectives:
After careful study of this chapter you should be able to do the following:
Identify the role that statistics can play in the engineering problem-solving process
Discuss how variability affects the data collected and used for making engineering decisions
Explain the difference between enumerative and analytical studies
Discuss the different methods that engineers use to collect data
Identify the advantages that designed experiments have in comparison to other methods of collecting engineering data
Explain the differences between mechanistic models and empirical models
Discuss how probability and probability models are used in engineering and science
Probability (or likelihood) is a measure or estimation of how likely it is that something will happen or that a statement is true. Probabilities are given a value between 0 (a 0% chance; the event will not happen) and 1 (a 100% chance; the event will happen). The higher the probability, the more likely the event is to happen or, over a longer series of samples, the greater the number of times the event is expected to occur.
These concepts have been given an axiomatic mathematical derivation in probability theory which is used widely in such areas of study as mathematics, statistics, finance, gambling, science, artificial intelligence/machine learning and philosophy to, for example, draw inferences about the expected frequency of events. Probability theory is also used to describe the underlying mechanics and regularities of complex systems.
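The relative-frequency view of probability described above can be sketched in a few lines of Python. This is a minimal illustration (the die example and trial count are my own, not from the textbook): simulating many rolls of a fair die, the observed fraction of even faces settles near the exact probability 3/6 = 0.5.

```python
import random

# Estimate P(even face) for a fair six-sided die by simulation and
# compare with the exact value 3/6 = 0.5.  The trial count is arbitrary.
random.seed(42)          # fixed seed so the run is reproducible
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randint(1, 6) % 2 == 0)
estimate = hits / trials
print(f"estimated P(even) = {estimate:.3f}, exact = 0.5")
```

The estimate lies between 0 and 1, as every probability must, and approaches 0.5 as the number of trials grows.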
Objectives:
After careful study of this chapter you should be able to do the following:
Understand and describe sample spaces and events for random experiments with graphs, tables, lists, or tree diagrams
Interpret probabilities and use probabilities of outcomes to calculate probabilities of events in discrete sample spaces
Calculate the probabilities of joint events such as unions and intersections from the probabilities of individual events
Interpret and calculate conditional probabilities of events
Determine the independence of events and use independence to calculate probabilities
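The objectives above can be sketched on a concrete discrete sample space. The two-dice example below is illustrative (not from the textbook): it builds the 36 equally likely outcomes, computes a conditional probability from its definition, and tests two events for independence.

```python
from itertools import product
from fractions import Fraction

# Sample space for two fair dice; each of the 36 outcomes is equally likely.
# Event A: the sum is 8.  Event B: the first die shows 5.
space = list(product(range(1, 7), repeat=2))
A = {o for o in space if o[0] + o[1] == 8}
B = {o for o in space if o[0] == 5}

P = lambda E: Fraction(len(E), len(space))    # equally likely outcomes

p_A_given_B = P(A & B) / P(B)                 # P(A|B) = P(A∩B)/P(B)
independent = P(A & B) == P(A) * P(B)         # definition of independence

print(p_A_given_B)                            # 1/6
print(independent)                            # False: P(A) = 5/36 ≠ 1/6
```

Since P(A|B) = 1/6 differs from P(A) = 5/36, knowing B changes the probability of A, so the events are not independent.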
Ninth: The following video gives a basic definition of conditional probability:
Tenth: The following video gives a complete lecture about probability.
Topic Three: Discrete Random Variables and Probability Distributions
Introduction:
In probability and statistics, a random variable or stochastic variable is a variable whose value is subject to variations due to chance (i.e. randomness, in a mathematical sense). As opposed to other mathematical variables, a random variable conceptually does not have a single, fixed value (even if unknown); rather, it can take on a set of possible different values, each with an associated probability.
Many physical systems can be modeled by the same or similar random experiments and random variables. The distribution of the random variables involved in each of these common systems can be analyzed, and the results can be used in different applications and examples. In this chapter, we present the analysis of several random experiments and discrete random variables that frequently arise in applications. We often omit a discussion of the underlying sample space of the random experiment and directly describe the distribution of a particular random variable.
Objectives:
After careful study of this chapter you should be able to do the following:
Determine probabilities from probability mass functions and the reverse.
Determine probabilities from cumulative distribution functions and cumulative distribution functions from probability mass functions, and the reverse.
Calculate means and variances for discrete random variables.
Understand the assumptions for some common discrete probability distributions.
Select an appropriate discrete probability distribution to calculate probabilities in specific applications.
Calculate probabilities, determine means and variances for some common discrete probability distributions
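The calculations listed above can be sketched with the binomial distribution, a common discrete distribution from this chapter; the parameters n = 4, p = 0.5 are illustrative choices of mine.

```python
from math import comb

# Binomial(n=4, p=0.5): pmf from the formula P(X=x) = C(n,x) p^x (1-p)^(n-x).
n, p = 4, 0.5
pmf = {x: comb(n, x) * p**x * (1 - p)**(n - x) for x in range(n + 1)}

# CDF from the pmf, and mean/variance directly from their definitions.
cdf = {x: sum(pmf[k] for k in range(x + 1)) for x in pmf}
mean = sum(x * pmf[x] for x in pmf)                # E[X] = n*p
var = sum((x - mean)**2 * pmf[x] for x in pmf)     # Var[X] = n*p*(1-p)
print(mean, var, cdf[4])                           # 2.0 1.0 1.0
```

The computed mean and variance agree with the closed forms np = 2 and np(1-p) = 1, and the CDF reaches 1 at the largest value, as it must.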
Topic Four: Continuous Random Variables and Probability Distributions
Introduction:
A continuous random variable maps outcomes to values of an uncountable set (e.g., the real numbers). For a continuous random variable, the probability of any specific value is zero, whereas the probability of some infinite set of values (such as an interval of non-zero length) may be positive.
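A quick numerical sketch of this point, using the exponential distribution with rate 1 as an illustrative continuous distribution: the probability of any single value is zero, while the probability of an interval is the area under the density, obtained here from the closed-form CDF.

```python
from math import exp

# Exponential(λ=1) density f(x) = e^(-x), x ≥ 0, with CDF F(x) = 1 - e^(-x).
F = lambda x: 1 - exp(-x)

p_point = F(1) - F(1)        # P(X = 1) = 0: an interval of zero length
p_interval = F(2) - F(1)     # P(1 < X < 2) = e^(-1) - e^(-2)
print(p_point, round(p_interval, 4))   # 0.0 0.2325
```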
Objectives:
After careful study of this chapter you should be able to do the following:
Determine probabilities from probability density functions
Determine probabilities from cumulative distribution functions and cumulative distribution functions from probability density functions, and the reverse
Calculate means and variances for continuous random variables
Understand the assumptions for some common continuous probability distributions
Select an appropriate continuous probability distribution to calculate probabilities in specific applications
Calculate probabilities, determine means and variances for some common continuous probability distributions
Standardize normal random variables
Use the table for the cumulative distribution function of a standard normal distribution to calculate probabilities
Approximate probabilities for some binomial and Poisson distributions
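Standardizing a normal random variable, the last group of objectives above, can be sketched as follows. The numbers (μ = 100, σ = 15) are illustrative; Python's standard-library NormalDist stands in for the printed standard-normal table.

```python
from statistics import NormalDist

# If X ~ N(μ=100, σ=15), then P(X ≤ 130) = Φ(z) with z = (130-100)/15 = 2,
# where Φ is the standard normal CDF (the quantity tabulated in the z table).
z = (130 - 100) / 15
phi = NormalDist().cdf(z)
print(round(z, 2), round(phi, 4))      # 2.0 0.9772
```

The value 0.9772 matches the familiar table entry for z = 2.00.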
In the section on probability distributions, we looked at discrete and continuous distributions, but we focused only on single random variables. Probability distributions can, however, be applied to grouped random variables, which gives rise to joint probability distributions. Here we are going to focus on two-dimensional distributions (i.e. only two random variables), but higher dimensions (more than two variables) are also possible. Since all random variables are divided into discrete and continuous random variables, we end up with both discrete and continuous joint probability distributions.
Objectives:
After careful study of this chapter you should be able to do the following:
Use joint probability mass functions and joint probability density functions to calculate probabilities.
Calculate marginal and conditional probability distributions from joint probability distributions.
Interpret and calculate covariances and correlations between random variables.
Use the multinomial distribution to determine probabilities.
Understand properties of a bivariate normal distribution and be able to draw contour plots for the probability density function.
Calculate means and variances for linear combinations of random variables and calculate probabilities for linear combinations of normally distributed random variables.
Determine the distribution of a general function of a random variable
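The marginal, covariance, and correlation calculations above can be sketched on a small joint pmf. The table of probabilities below is illustrative (not from the textbook); everything else follows directly from the definitions in this chapter.

```python
from math import sqrt

# A small joint pmf for two discrete random variables X and Y,
# given as {(x, y): P(X=x, Y=y)}.  Probabilities are illustrative.
pmf = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.4}

# Marginals: sum the joint pmf over the other variable.
pX = {x: sum(p for (a, _), p in pmf.items() if a == x) for x in (0, 1)}
pY = {y: sum(p for (_, b), p in pmf.items() if b == y) for y in (0, 1)}

EX = sum(x * p for x, p in pX.items())
EY = sum(y * p for y, p in pY.items())
EXY = sum(x * y * p for (x, y), p in pmf.items())

cov = EXY - EX * EY                                  # Cov(X,Y) = E[XY] - E[X]E[Y]
varX = sum((x - EX) ** 2 * p for x, p in pX.items())
varY = sum((y - EY) ** 2 * p for y, p in pY.items())
corr = cov / sqrt(varX * varY)                       # ρ = Cov(X,Y)/(σ_X σ_Y)
print(round(cov, 3), round(corr, 3))                 # 0.05 0.218
```

The positive covariance here means large values of X tend to occur with large values of Y; the correlation rescales that to the interval [-1, 1].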
1. This video describes the discrete joint probability distribution.
2. This video describes joint and marginal distributions.
3. This video concludes the first part of the topic by defining joint, marginal, and conditional distributions (pmfs or pdfs). It motivates these definitions using a simple example of two discrete random variables and their joint distribution. After motivating all three types of distributions, it discusses two aspects of these concepts that are important for econometrics: independence and Bayes' rule. In the discussion of independence, the presenter offers a useful test for detecting non-independence of random variables; in the discussion of Bayes' rule, the presenter derives the formula from the formulas for conditional and marginal probabilities.
Third: Watch the following video, which gives an introduction to the multinomial distribution, a common discrete probability distribution. The video discusses the basics of the multinomial distribution and works through two examples of probability calculations. For comparison purposes, it finishes with a quick example of a multivariate hypergeometric probability calculation.
1. This video gives a brief description of two continuous random variables.
2. This video discusses how to check for independence of two random variables and how to recover marginal density functions from a joint density.
3. This video shows how to obtain the joint cumulative distribution function from the joint probability density function and how to use the joint CDF in simple probability questions.
1. This video discusses both correlation and covariance.
2. This video describes bivariate distributions.
3. This video gives an example of determining the probability of a linear combination of two random variables X and Y.
Topic Six: Descriptive Statistics.
Introduction:
Descriptive statistics is the discipline of quantitatively describing the main features of a collection of information, or the quantitative description itself. Descriptive statistics are distinguished from inferential statistics (or inductive statistics), in that descriptive statistics aim to summarize a sample, rather than use the data to learn about the population that the sample of data is thought to represent.
Objectives:
After careful study of this chapter you should be able to do the following:
Compute and interpret the sample mean, sample variance, sample standard deviation, sample median, and sample range
Explain the concepts of sample mean, sample variance, population mean, and population variance
Construct and interpret visual data displays, including the stem-and-leaf display, the histogram, and the box plot
Explain the concept of random sampling
Construct and interpret normal probability plots
Explain how to use box plots and other data displays to visually compare two or more samples of data
Know how to use simple time series plots to visually display the important features of time oriented data
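The numerical summaries in the first objective can be sketched with Python's standard-library statistics module; the small sample below is hypothetical data of mine, not from the textbook.

```python
import statistics

# Hypothetical sample of n = 8 measurements.
data = [12.6, 12.9, 13.4, 12.3, 13.6, 13.5, 12.6, 13.1]

mean = statistics.mean(data)        # sample mean x̄
var = statistics.variance(data)     # sample variance s² (divisor n - 1)
sd = statistics.stdev(data)         # sample standard deviation s
med = statistics.median(data)       # sample median
rng = max(data) - min(data)         # sample range

print(round(mean, 3), round(var, 3), round(med, 2), round(rng, 2))
# 13.0 0.229 13.0 1.3
```

Note that `statistics.variance` uses the n - 1 divisor appropriate for a sample, matching the sample variance defined in this chapter.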
Topic Seven: Sampling Distributions and Point Estimation of Parameters
Introduction:
Sample data are collected on a population in order to draw conclusions, or make statistical inferences, about that population. Statistical methods are used to make decisions and draw conclusions about populations; this aspect of statistics is generally called statistical inference. These techniques utilize the information in a sample in drawing conclusions. This chapter begins our study of the statistical methods used in decision making.
Objectives:
After careful study of this chapter you should be able to do the following:
Explain the general concepts of estimating the parameters of a population or a probability distribution
Explain the important role of the normal distribution as a sampling distribution
Understand the central limit theorem
Explain important properties of point estimators, including bias, variance, and mean square error
Know how to construct point estimators using the method of moments and the method of maximum likelihood
Know how to compute and explain the precision with which a parameter is estimated
Know how to construct a point estimator using the Bayesian approach
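The central limit theorem in the objectives above can be sketched by simulation. This is an illustrative experiment of mine: averages of n i.i.d. Uniform(0, 1) draws (population mean 1/2, variance 1/12) should have mean 1/2 and variance (1/12)/n, regardless of the uniform shape of the parent distribution.

```python
import random
import statistics

# Sampling distribution of the mean: draw many samples of size n from
# Uniform(0,1) and record each sample mean.
random.seed(1)
n, reps = 30, 5000
means = [statistics.mean(random.random() for _ in range(n))
         for _ in range(reps)]

print(round(statistics.mean(means), 3))      # close to 0.5
print(round(statistics.variance(means), 4))  # close to (1/12)/30 ≈ 0.0028
```

A histogram of `means` would look approximately normal, which is exactly the content of the central limit theorem.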
Second: Watch the following two videos, which answer the question, "What is an interval estimate and why do we care?"
Third: Watch the following video, which discusses point estimation.
Fourth: Watch the following video, which gives an introduction to the central limit theorem.
Topic Eight: Statistical Intervals for a Single Sample
Introduction:
In statistics, interval estimation is the use of sample data to calculate an interval of possible (or probable) values of an unknown population parameter, in contrast to point estimation, which is a single number. Jerzy Neyman (1937) identified interval estimation ("estimation by interval") as distinct from point estimation ("estimation by unique estimate"). In doing so, he recognized that then-recent work quoting results in the form of an estimate plus-or-minus a standard deviation indicated that interval estimation was actually the problem statisticians really had in mind.
The most prevalent forms of interval estimation are:
confidence intervals (a frequentist method); and
credible intervals (a Bayesian method).
Other common approaches to interval estimation, which are encompassed by statistical theory, are:
Tolerance intervals
Prediction intervals - used mainly in Regression Analysis
Likelihood intervals
Objectives:
After careful study of this chapter you should be able to do the following:
Construct confidence intervals on the mean of a normal distribution, using either the normal distribution or the t distribution method
Construct confidence intervals on the variance and standard deviation of a normal distribution
Construct confidence intervals on a population proportion
Use a general method for constructing an approximate confidence interval on a parameter
Construct prediction intervals for a future observation
Construct a tolerance interval for a normal population
Explain the three types of interval estimates: confidence intervals, prediction intervals, and tolerance intervals
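The first objective, a confidence interval on a normal mean with σ unknown, can be sketched as follows. The sample data are hypothetical, and the table value t_{0.025,9} = 2.262 (9 degrees of freedom) is hard-coded, since the standard library has no t-distribution quantile function.

```python
import statistics

# Hypothetical sample of n = 10 measurements from a normal population.
data = [10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 9.7, 10.2, 10.0]
n = len(data)
xbar = statistics.mean(data)
s = statistics.stdev(data)

# 95% CI on μ: x̄ ± t_{α/2, n-1} · s/√n, with t_{0.025,9} = 2.262 from a t table.
t = 2.262
half = t * s / n ** 0.5
print(round(xbar - half, 3), round(xbar + half, 3))   # 9.901 10.219
```

The t distribution is used, rather than the normal, precisely because s replaces the unknown σ in a small sample.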
Fifth: Study the following video, which discusses why we need to make allowances for small samples by using t-scores to compensate for weak estimates of sigma.
Topic Nine: Tests of Hypotheses for a Single Sample
Introduction:
Hypothesis testing refers to the process of choosing between competing hypotheses about a probability distribution, based on observed data from the distribution. It is a core topic in mathematical statistics, and indeed is a fundamental part of the language of statistics.
Objectives:
After careful study of this chapter you should be able to do the following:
Structure engineering decision-making problems as hypothesis tests
Test hypotheses on the mean of a normal distribution using either a Z-test or a t-test procedure
Test hypotheses on the variance or standard deviation of a normal distribution
Test hypotheses on a population proportion
Use the P-value approach for making decisions in hypothesis tests
Compute power and type II error probability, and make sample size selection decisions for tests on means, variances, and proportions
Explain and use the relationship between confidence intervals and hypothesis tests
Use the chi-square goodness of fit test to check distributional assumptions
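The Z-test and P-value objectives above can be sketched in a few lines. The numbers here (H0: μ = 50 against H1: μ ≠ 50, with σ = 2 assumed known, n = 25, and observed x̄ = 51.3) are illustrative choices of mine.

```python
from statistics import NormalDist

# One-sample Z-test with σ known.
mu0, sigma, n, xbar = 50, 2, 25, 51.3

z = (xbar - mu0) / (sigma / n ** 0.5)           # test statistic
p_value = 2 * (1 - NormalDist().cdf(abs(z)))    # two-sided P-value
reject = p_value < 0.05                         # decision at α = 0.05

print(round(z, 2), round(p_value, 4))           # 3.25 0.0012
```

Since the P-value is far below 0.05, H0 is rejected; equivalently, a 95% confidence interval on μ would not contain 50, illustrating the CI/test relationship in the objectives.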