**Please note: registration will close 2 days (48 hours) prior to the date of the seminar.**
Hypothesis testing is the basis of everything we do in statistics, including not only DOE but also statistical process control (SPC) and acceptance sampling. These activities begin with a null hypothesis that is similar to the presumption of innocence in a criminal trial. We assume that there is no difference between the experiment and the control, that the process is in control, or that a production lot is acceptable.
We must prove beyond a quantifiable reasonable doubt (the significance level or Type I risk) that the alternate hypothesis is true instead. The alternate hypothesis is that the experiment is different from the control, the process is out of control, or the production lot must be rejected. There is also a Type II risk of not rejecting the null hypothesis when it should be rejected (similar to acquitting a guilty defendant), and this risk can be reduced by obtaining more data. Selection of an adequate sample size is therefore always important in statistical activities; it is why statisticians always want more data.
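To illustrate how sample size controls the Type II risk, here is a minimal sketch in Python (assuming the statsmodels library is available); the effect size of 0.5 is a hypothetical input chosen for illustration, not a figure from the seminar.

```python
# Solve for the sample size per group that holds the Type II risk to 20%
# (power = 0.80) when testing at a 5% significance level (Type I risk).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # hypothetical standardized difference to detect
    alpha=0.05,       # significance level (Type I risk)
    power=0.80,       # 1 - Type II risk
)
print(f"Required sample size per group: {n_per_group:.1f}")
```

A smaller effect size or a lower acceptable Type II risk drives the required sample size up, which is exactly why statisticians always want more data.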
It is also vital to exclude extraneous sources of variation from the experiment. Techniques include randomization (specimens are selected at random) and blocking, in which several treatments might, if possible, be applied to the same specimen. The latter eliminates the effect of specimen-to-specimen variation.
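As a minimal sketch of randomization (Python with numpy; the treatments and run counts are hypothetical), the run order can be shuffled so that extraneous, time-dependent sources of variation do not line up with any one treatment:

```python
# Randomize the execution order of a hypothetical two-treatment experiment.
import numpy as np

rng = np.random.default_rng(seed=1)    # fixed seed for reproducibility
runs = ["A", "A", "A", "B", "B", "B"]  # three replicates of two treatments
run_order = rng.permutation(runs)      # randomized execution order
print(run_order)
```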
Two populations (such as an experiment and a control) may be compared with the t test or the paired comparison t test (a form of blocking), and also with one-way analysis of variance (ANOVA). One-way ANOVA can also be used to compare several different treatments (such as choice of material or choice of method).
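A minimal sketch of all three comparisons, assuming scipy is available and using made-up illustrative data:

```python
# Compare an experiment against a control three ways.
from scipy import stats

control    = [10.1, 9.8, 10.3, 10.0, 9.9]
experiment = [10.6, 10.4, 10.9, 10.5, 10.7]

t, p = stats.ttest_ind(experiment, control)   # two-sample t test
print(f"t test:        p = {p:.4f}")

t, p = stats.ttest_rel(experiment, control)   # paired t test (blocking)
print(f"paired t test: p = {p:.4f}")

f, p = stats.f_oneway(experiment, control)    # one-way ANOVA (two groups here)
print(f"one-way ANOVA: p = {p:.4f}")
```

A p value below the chosen significance level (e.g., 0.05) rejects the null hypothesis of no difference.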
It is then necessary to validate (technically, "not disprove") the underlying assumption that the residuals from the experiment follow the normal distribution. The residual is the difference between the actual result and the result that is expected from the physical model. If this assumption is not met, then it is necessary to transform the data or else use a nonparametric method that does not rely on the normality assumption. Nonparametric methods are, however, less powerful (they have higher Type II risks) than t tests and ANOVA.
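Here is a minimal sketch (scipy assumed; the data are the same illustrative values as above) of a normality check on the residuals, followed by a nonparametric fallback:

```python
# Check the normality assumption on residuals; fall back to a
# nonparametric test if it fails.
from scipy import stats

control    = [10.1, 9.8, 10.3, 10.0, 9.9]
experiment = [10.6, 10.4, 10.9, 10.5, 10.7]

# Residuals: deviation of each result from its group mean (the expected
# value for that treatment under the model).
residuals = [x - sum(control) / len(control) for x in control] + \
            [x - sum(experiment) / len(experiment) for x in experiment]

stat, p = stats.shapiro(residuals)    # Shapiro-Wilk normality test
print(f"Shapiro-Wilk: p = {p:.4f}")   # a small p would reject normality

# If normality is rejected, the Mann-Whitney U test compares the groups
# without the normality assumption (at the cost of higher Type II risk).
u, p = stats.mannwhitneyu(experiment, control)
print(f"Mann-Whitney: p = {p:.4f}")
```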
Two-way ANOVA can meanwhile assess the effects of two different factors (a factor is something, such as machine, material, or method, that can affect the response variable; note the connection with the cause-and-effect diagram) as well as interactions between their levels or treatment choices. An interaction means there is a synergy or antagonism that makes the whole greater or less than the sum of its parts. One-variable-at-a-time experiments are not able to detect interactions.
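A minimal sketch of a two-way ANOVA with an interaction term, assuming pandas and statsmodels are available; the machine and material factors and the response data are hypothetical:

```python
# Two-way ANOVA: main effects of two factors plus their interaction.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "machine":  ["M1", "M1", "M1", "M1", "M2", "M2", "M2", "M2"],
    "material": ["A",  "A",  "B",  "B",  "A",  "A",  "B",  "B"],
    "response": [10.1, 10.3, 11.2, 11.0, 10.4, 10.2, 12.5, 12.8],
})

# 'C(machine) * C(material)' expands to both main effects plus the
# machine:material interaction term.
model = ols("response ~ C(machine) * C(material)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

A significant interaction row in the ANOVA table would indicate the synergy or antagonism described above.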
Linear regression allows the construction of quantitative models for physical systems. It returns an expression y = f(X), where y is the response variable (e.g., the critical-to-quality characteristic) and X is a vector of predictor variables. As an example, it is possible to define an equation for spin coating thickness (y) on a silicon wafer as a function of the spin speed (e.g., in rpm), spin time, and solution viscosity. Chemical reaction rates can meanwhile be modeled on the basis of temperature and the concentrations of the reactants. Such models have very practical real-world applications.
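A minimal sketch of such a fit (numpy assumed; the coefficients and data are made up for illustration, not a real spin coating model): least squares recovers y = f(X) from observations of thickness against spin speed, spin time, and viscosity.

```python
# Fit a linear model thickness = f(speed, time, viscosity) by least squares.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 30
speed     = rng.uniform(1000, 4000, n)  # rpm (hypothetical range)
time      = rng.uniform(20, 60, n)      # seconds
viscosity = rng.uniform(1, 10, n)       # centipoise

# Hypothetical "true" relationship plus noise, for illustration only.
thickness = 5000 - 0.8 * speed - 10 * time + 120 * viscosity \
            + rng.normal(0, 25, n)

X = np.column_stack([np.ones(n), speed, time, viscosity])  # with intercept
coef, *_ = np.linalg.lstsq(X, thickness, rcond=None)
print("intercept, speed, time, viscosity coefficients:", coef.round(2))
```

The fitted coefficients then let a practitioner predict the response for any new combination of predictor settings.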
A scientifically designed experiment economizes on time and material resources, and it returns actionable results in terms of root cause analysis. That is, DOE can identify the root cause of a problem to support corrective and preventive action (CAPA). It can also play a central role in process improvement by identifying and optimizing the factors that influence the critical-to-quality (CTQ) product characteristic.
DOE also takes the guesswork out of interpreting experimental results. A designed experiment can determine beyond a quantifiable reasonable doubt (the significance level or Type I risk) that a treatment worked, or that there is a statistically significant difference between different treatments. Linear regression is meanwhile a powerful technique for fitting quantifiable physical models to data. This allows practitioners to predict a response (such as the critical-to-quality characteristic) on the basis of independent input variables.
| Group Size | Discount |
|---|---|
| 2 Attendees | 10% off |
| 3 to 6 Attendees | 20% off |
| 7 to 10 Attendees | 25% off |
| 10+ Attendees | 30% off |
To receive the above group discounts, all participants should register together with a single payment.
Call our representative TODAY at 1800 447 9407 to have your seats confirmed!
William A. Levinson, P.E., is the principal of Levinson Productivity Systems, P.C. He is an ASQ Fellow, Certified Quality Engineer, Quality Auditor, Quality Manager, Reliability Engineer, and Six Sigma Black Belt. He is also the author of several books on quality, productivity, and management, of which the most recent is *The Expanded and Annotated My Life and Work: Henry Ford's Universal Code for World-Class Success*.