Quasi-Experimental Research
8.1 One-Group Designs
Learning Objectives
- Explain what quasi-experimental research is and distinguish it clearly from both experimental and correlational research.
- Describe three different types of one-group quasi-experimental designs.
- Identify the threats to internal validity associated with each of these designs.
One-Group Posttest Only Design
In a one-group posttest only design, a treatment is implemented (or an independent variable is manipulated) and then a dependent variable is measured once after the treatment is implemented. Imagine, for example, a researcher who is interested in the effectiveness of an anti-drug education program on elementary school students’ attitudes toward illegal drugs. The researcher could implement the anti-drug program, and then immediately after the program ends, the researcher could measure students’ attitudes toward illegal drugs.
This is the weakest type of quasi-experimental design. A major limitation to this design is the lack of a control or comparison group. There is no way to determine what the attitudes of these students would have been if they hadn’t completed the anti-drug program. Despite this major limitation, results from this design are frequently reported in the media and are often misinterpreted by the general population. For instance, advertisers might claim that 80% of women noticed their skin looked bright after using Brand X cleanser for a month. If there is no comparison group, then this statistic means little to nothing.
One-Group Pretest-Posttest Design
In a one-group pretest-posttest design, the dependent variable is measured once before the treatment is implemented and once after it is implemented. Let’s return to the example of a researcher who is interested in the effectiveness of an anti-drug education program on elementary school students’ attitudes toward illegal drugs. The researcher could measure the attitudes of students at a particular elementary school during one week, implement the anti-drug program during the next week, and finally, measure their attitudes again the following week. The pretest-posttest design is much like a within-subjects experiment in which each participant is tested first under the control condition and then under the treatment condition. It is unlike a within-subjects experiment, however, in that the order of conditions is not counterbalanced because it typically is not possible for a participant to be tested in the treatment condition first and then in an “untreated” control condition.
If the average posttest score is better than the average pretest score (e.g., attitudes toward illegal drugs are more negative after the anti-drug educational program), then it makes sense to conclude that the treatment might be responsible for the improvement. Unfortunately, one often cannot conclude this with a high degree of certainty because there may be other explanations for why the posttest scores may have changed. These alternative explanations pose threats to internal validity.
One alternative explanation goes under the name of history. Other things might have happened between the pretest and the posttest that caused a change from pretest to posttest. Perhaps an anti-drug program aired on television and many of the students watched it, or perhaps a celebrity died of a drug overdose and many of the students heard about it.
Another alternative explanation goes under the name of maturation. Participants might have changed between the pretest and the posttest in ways that they were going to anyway because they are growing and learning. If it were a year-long anti-drug program, participants might become less impulsive or better reasoners, and this might be responsible for the change in their attitudes toward illegal drugs.
Another threat to the internal validity of one-group pretest-posttest designs is testing, which refers to when the act of measuring the dependent variable during the pretest affects participants’ responses at posttest. For instance, completing the measure of attitudes toward illegal drugs may have had an effect on those attitudes. Simply completing this measure may have inspired further thinking and conversations about illegal drugs that then produced a change in posttest scores.
Similarly, instrumentation can be a threat to the internal validity of studies using this design. Instrumentation refers to when the basic characteristics of the measuring instrument change over time. When human observers are used to measure behavior, they may over time gain skill, become fatigued, or change the standards on which observations are based. So participants may have taken the measure of attitudes toward illegal drugs very seriously during the pretest when it was novel but then they may have become bored with the measure at posttest and been less careful in considering their responses.
Another alternative explanation for a change in the dependent variable in a pretest-posttest design is regression to the mean. This refers to the statistical fact that an individual who scores extremely high or extremely low on a variable on one occasion will tend to score less extremely on the next occasion. For example, a bowler with a long-term average of 150 who suddenly bowls a 220 will almost certainly score lower in the next game. Her score will “regress” toward her mean score of 150. Regression to the mean can be a problem when participants are selected for further study because of their extreme scores. Imagine, for example, that only students who scored especially high on the test of attitudes toward illegal drugs (those with extremely favorable attitudes toward drugs) were given the anti-drug program and then were retested. Regression to the mean all but guarantees that their scores will be lower at the posttest even if the training program has no effect.
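The selection-plus-regression scenario above can be simulated. The sketch below uses entirely hypothetical numbers: each score is modeled as a stable "true attitude" plus random measurement noise, and only extreme pretest scorers are retested, with no treatment administered at all.

```python
# Hypothetical simulation of regression to the mean (not data from the text).
import random

random.seed(1)

n = 1000
true_attitude = [random.gauss(50, 10) for _ in range(n)]
pretest = [t + random.gauss(0, 10) for t in true_attitude]
posttest = [t + random.gauss(0, 10) for t in true_attitude]  # no treatment given

# Select only the students with extreme pretest scores, as in the example above.
extreme = [i for i in range(n) if pretest[i] > 70]

pre_mean = sum(pretest[i] for i in extreme) / len(extreme)
post_mean = sum(posttest[i] for i in extreme) / len(extreme)

print(f"Selected group pretest mean:  {pre_mean:.1f}")
print(f"Selected group posttest mean: {post_mean:.1f}")  # lower, despite no treatment
```

Because the selected group's high pretest scores were partly due to noise, their posttest scores fall back toward the population mean even though nothing was done to them.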
A closely related concept—and an extremely important one in psychological research—is spontaneous remission. This is the tendency for many medical and psychological problems to improve over time without any form of treatment. The common cold is a good example. If one were to measure symptom severity in 100 common cold sufferers today, give them a bowl of chicken soup every day, and then measure their symptom severity again in a week, they would probably be much improved. This does not mean that the chicken soup was responsible for the improvement, however, because they would have been much improved without any treatment at all. The same is true of many psychological problems. A group of severely depressed people today is likely to be less depressed on average in 6 months. In reviewing the results of several studies of treatments for depression, researchers Michael Posternak and Ivan Miller found that participants in waitlist control conditions improved an average of 10 to 15% before they received any treatment at all (Posternak & Miller, 2001) [1]. Thus one must generally be very cautious about inferring causality from pretest-posttest designs.
A common approach to ruling out the threats to internal validity described above is to revisit the research design and include a control group, one that does not receive the treatment. A control group would be subject to the same history, maturation, testing, instrumentation, regression to the mean, and spontaneous remission, so any difference between the groups at posttest would allow the researcher to measure the actual effect of the treatment (if any). Of course, including a control group would mean that this is no longer a one-group design.
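The logic of that control-group comparison can be sketched with hypothetical numbers: the treatment effect is estimated as the treatment group's pre-to-post change minus the control group's change, since both groups share the same history, maturation, and related threats.

```python
# A sketch of the control-group logic with hypothetical attitude scores
# (lower scores = less favorable attitudes toward drugs).
treat_pre, treat_post = 60.0, 45.0      # treatment group drops 15 points
control_pre, control_post = 61.0, 53.0  # control group drops 8 points anyway

treatment_change = treat_post - treat_pre    # -15.0
control_change = control_post - control_pre  # -8.0

# The control group's change estimates what would have happened without
# treatment, so subtracting it isolates the treatment's contribution.
effect = treatment_change - control_change   # -7.0
print(f"Estimated treatment effect: {effect:.1f}")
```

Without the control group, all 15 points of change would have been (wrongly) attributed to the program.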
Does Psychotherapy Work?
Early studies on the effectiveness of psychotherapy tended to use pretest-posttest designs. In a classic 1952 article, researcher Hans Eysenck summarized the results of 24 such studies showing that about two thirds of patients improved between the pretest and the posttest (Eysenck, 1952) [2]. But Eysenck also compared these results with archival data from state hospital and insurance company records showing that similar patients recovered at about the same rate without receiving psychotherapy. This parallel suggested to Eysenck that the improvement that patients showed in the pretest-posttest studies might be no more than spontaneous remission. Note that Eysenck did not conclude that psychotherapy was ineffective. He merely concluded that there was no evidence that it was, and he wrote of “the necessity of properly planned and executed experimental studies into this important field” (p. 323). You can read the entire article here:
http://psychclassics.yorku.ca/Eysenck/psychotherapy.htm
Fortunately, many other researchers took up Eysenck’s challenge, and by 1980 hundreds of experiments had been conducted in which participants were randomly assigned to treatment and control conditions, and the results were summarized in a classic book by Mary Lee Smith, Gene Glass, and Thomas Miller (Smith, Glass, & Miller, 1980) [3]. They found that overall psychotherapy was quite effective, with about 80% of treatment participants improving more than the average control participant. Subsequent research has focused more on the conditions under which different types of psychotherapy are more or less effective.
Interrupted Time Series Design
A variant of the pretest-posttest design is the interrupted time-series design. A time series is a set of measurements taken at intervals over a period of time. For example, a manufacturing company might measure its workers’ productivity each week for a year. In an interrupted time-series design, a time series like this one is “interrupted” by a treatment. In one classic example, the treatment was the reduction of the work shifts in a factory from 10 hours to 8 hours (Cook & Campbell, 1979) [4]. Because productivity increased rather quickly after the shortening of the work shifts, and because it remained elevated for many months afterward, the researcher concluded that the shortening of the shifts caused the increase in productivity. Notice that the interrupted time-series design is like a pretest-posttest design in that it includes measurements of the dependent variable both before and after the treatment. It is unlike the pretest-posttest design, however, in that it includes multiple pretest and posttest measurements.
Figure 8.1 shows data from a hypothetical interrupted time-series study. The dependent variable is the number of student absences per week in a research methods course. The treatment is that the instructor begins publicly taking attendance each day so that students know that the instructor is aware of who is present and who is absent. The top panel of Figure 8.1 shows how the data might look if this treatment worked. There is a consistently high number of absences before the treatment, and there is an immediate and sustained drop in absences after the treatment. The bottom panel of Figure 8.1 shows how the data might look if this treatment did not work. On average, the number of absences after the treatment is about the same as the number before. This figure also illustrates an advantage of the interrupted time-series design over a simpler pretest-posttest design. If there had been only one measurement of absences before the treatment at Week 7 and one afterward at Week 8, then it would have looked as though the treatment were responsible for the reduction. The multiple measurements both before and after the treatment suggest that the reduction between Weeks 7 and 8 is nothing more than normal week-to-week variation.
Figure 8.1 A Hypothetical Interrupted Time-Series Design. The top panel shows data that suggest that the treatment caused a reduction in absences. The bottom panel shows data that suggest that it did not.
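The bottom-panel situation can be sketched numerically. The absence counts below are made up to mirror the figure: normal week-to-week variation and no real treatment effect, yet a single week 7 vs. week 8 comparison still looks like an improvement.

```python
# Hypothetical weekly absence counts, mimicking the bottom panel of Figure 8.1.
absences = [6, 8, 5, 7, 4, 6, 8,   # weeks 1-7, before the treatment
            4, 7, 5, 8, 6, 5, 7]   # weeks 8-14, after the treatment

before, after = absences[:7], absences[7:]

# A simple pretest-posttest design would see only weeks 7 and 8 ...
print("Week 7 vs. week 8:", before[-1], "->", after[0])  # looks like a big drop

# ... but the full series shows the averages are about the same.
print("Mean before:", sum(before) / len(before))
print("Mean after: ", sum(after) / len(after))
```

The multiple pre- and post-treatment measurements are what let the researcher distinguish ordinary fluctuation from a genuine, sustained shift.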
Image Descriptions
Figure 8.1 image description: Two line graphs charting the number of absences per week over 14 weeks. The first 7 weeks are without treatment and the last 7 weeks are with treatment. In the first line graph, there are 4 to 8 absences each week. After the treatment, the absences drop to 0 to 3 each week, which suggests the treatment worked. In the second line graph, there is no noticeable change in the number of absences per week after the treatment, which suggests the treatment did not work. [Return to Figure 8.1]
- Posternak, M. A., & Miller, I. (2001). Untreated short-term course of major depression: A meta-analysis of outcomes from studies using wait-list control groups. Journal of Affective Disorders, 66, 139–146.
- Eysenck, H. J. (1952). The effects of psychotherapy: An evaluation. Journal of Consulting Psychology, 16, 319–324.
- Smith, M. L., Glass, G. V., & Miller, T. I. (1980). The benefits of psychotherapy. Baltimore, MD: Johns Hopkins University Press.
- Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design & analysis issues in field settings. Boston, MA: Houghton Mifflin.
Glossary
- One-group posttest only design: A treatment is implemented (or an independent variable is manipulated) and then a dependent variable is measured once after the treatment is implemented.
- One-group pretest-posttest design: A design in which the dependent variable is measured once before the treatment is implemented and once after it is implemented.
- History: Events outside of the pretest-posttest research design that might have influenced many or all of the participants between the pretest and the posttest.
- Maturation: Changes that participants would have undergone between the pretest and the posttest anyway because they are growing and learning.
- Testing: A threat to internal validity that occurs when the measurement of the dependent variable during the pretest affects participants' responses at posttest.
- Instrumentation: A threat to internal validity that occurs when the basic characteristics of the measuring instrument change over the course of the study.
- Regression to the mean: The statistical fact that an individual who scores extremely high or extremely low on a variable on one occasion will tend to score less extremely on the next occasion.
- Spontaneous remission: The tendency for many medical and psychological problems to improve over time without any form of treatment.
- Interrupted time-series design: A design in which a time series (a set of measurements taken at intervals over a period of time) is "interrupted" by a treatment.
Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.
One-Group Pretest-Posttest Design: An Introduction
The one-group pretest-posttest design is a type of quasi-experiment in which the outcome of interest is measured twice: once before and once after exposing a non-random group of participants to a certain intervention or treatment.
The objective is to evaluate the effect of that intervention, which can be:
- A training program
- A policy change
- A medical treatment, etc.
The one-group pretest-posttest design has 3 major characteristics:
- The group of participants who receives the intervention is selected in a non-random way — which makes it a quasi-experimental design.
- There is no control group against which the outcome can be compared.
- The effect of the intervention is measured by comparing the pre- and post-intervention measurements (the null hypothesis is that the intervention has no effect, i.e., the two measurements are equal).
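The comparison in the third point is typically a paired test on the pre/post differences. A minimal sketch with hypothetical scores, computing a paired t statistic by hand in plain Python:

```python
# Hypothetical pre/post scores for 8 participants (not data from the text);
# a paired t-test asks whether the mean difference is credibly non-zero.
import math

pretest  = [48, 52, 45, 50, 47, 44, 53, 49]
posttest = [41, 44, 40, 43, 42, 38, 45, 42]

diffs = [pre - post for pre, post in zip(pretest, posttest)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t_stat = mean_d / (sd_d / math.sqrt(n))

print(f"Mean pre-post difference: {mean_d:.2f}")
print(f"Paired t statistic:       {t_stat:.2f}")
# Under the null hypothesis (no effect), mean_d should be near zero.
```

In practice a library routine such as `scipy.stats.ttest_rel` would be used instead of the hand computation; the point here is only the shape of the comparison.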
Advantages of the one-group pretest-posttest design
1. Feasible when random assignment of participants is considered unethical
Random assignment of participants is considered unethical when the intervention is believed to be harmful (for example, exposing people to smoking or dangerous chemicals) or, on the contrary, when it is believed to be so beneficial that it would be wrong not to offer it to all participants (for example, a groundbreaking treatment or medical operation).
2. Feasible when randomization is impractical
In some cases, where the intervention acts on a group of people at a given location, it becomes difficult to adequately randomize subjects (e.g., an intervention that reduces pollution in a given area).
3. Requires fewer resources than most designs
The one-group pretest-posttest design requires neither a large sample size nor the cost of following up a control group.
4. No temporality issue
Since the outcome is measured after the intervention, we can be certain that any change in the outcome occurred after the intervention, which is important for inferring a causal relationship between the two.
The one-group pretest-posttest design is an improvement over the one-group posttest only design as it adds a pretest measurement against which we can estimate the effect of the intervention. However, it has some major limitations which will be our next topic.
Limitations of the one-group pretest-posttest design
This design uses the outcome of the pretest to judge what might have happened if the intervention had not been implemented. The problem with this approach is that the difference between the outcome of the pretest and the posttest might be due to factors other than the intervention.
Here is a list of factors that can bias a one-group pretest-posttest study:
1. History
History refers to events (other than the intervention) that take place between the pretest and posttest and can affect the outcome of the posttest. The longer the time lapse between the pretest and the posttest, the higher the risk that history will bias the study.
Example: A commercial to help people quit smoking (the intervention) may air at the same time as a new warning appears on cigarette packs (a co-occurring event).
2. Maturation
Maturation refers to things that vary naturally with time such as: seasonality effects, psychological factors that may change with time, worsening or improvement of a disease or condition with time, etc. These can bias the study if they affect the outcome of the posttest.
Example: People may feel overwhelmed after starting a new job, then calm down as time passes. So a one-group pretest-posttest study targeting people on their first week at work may be under the influence of maturation due to the participants’ varying levels of stress.
3. Testing
The testing effect is the influence of the pretest itself on the outcome of the posttest. This happens when simply taking the pretest increases participants’ experience, knowledge, or awareness, which changes their posttest results (this change occurs irrespective of the intervention).
Example: As a person takes more IQ tests, they become trained to think in ways that improve their performance on subsequent IQ tests. So, when studying the effect of an intervention on IQ, a pretest IQ score cannot be directly compared to a posttest IQ score, as the effect of the intervention will be confounded with the effect of testing.
Another example is when asking people about their hygiene in a pretest makes them more attentive about their hygiene and therefore affects posttest results.
4. Instrumentation
Instrumentation effect refers to changes in the measuring instrument that may account for the observed difference between pretest and posttest results. Note that sometimes the measuring instrument is the researchers themselves, who record the outcome.
Example: Fatigue, loss of interest, or conversely an increase in the researcher’s measuring skill between the pretest and the posttest may introduce instrumentation bias.
5. Differential loss to follow-up
Loss to follow-up constitutes a problem if the participants who quit the study (i.e., those who took the pretest but left before they were assessed on the posttest) differ from those who stayed until the study was over, meaning the loss to follow-up is not random.
Example: If some participants who took the pretest were discouraged by its outcome and left the study before reaching the posttest, then the study may be biased toward showing the intervention to be more effective than it actually is.
6. Regression to the mean
Regression to the mean becomes a concern when the study group is selected because of its unusual scores on a pretest (either unusually high or unusually low), since on a subsequent test (i.e. the posttest) we would expect the scores to move naturally back toward the mean.
Example: Imagine asking a group of people “how much money did you spend today on shopping?”, selecting the top 10 who spent the most, and summing up their expenditures. If we asked the same question to those 10 people again after some time, then almost certainly the sum spent on shopping the second time will be lower. This is because unusual behavior/scoring is hard to sustain.
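The shopping example can be simulated. The following Python sketch uses made-up parameters (a hypothetical population whose "typical" daily spending is stable but whose observed daily spending is noisy) to show why the top scorers on one measurement tend to score lower on the next:

```python
import random
import statistics

random.seed(42)

# Hypothetical population: each person has a stable "typical" daily spend,
# but any single day's observed spend fluctuates around it with noise.
N = 200
typical_spend = [random.gauss(100, 15) for _ in range(N)]

def observe(typical):
    # One day's observed spending = typical level + day-to-day noise.
    return [t + random.gauss(0, 30) for t in typical]

day1 = observe(typical_spend)
day2 = observe(typical_spend)

# Select the top 10 spenders based on day 1 alone.
top10 = sorted(range(N), key=lambda i: day1[i], reverse=True)[:10]

mean_day1 = statistics.mean(day1[i] for i in top10)
mean_day2 = statistics.mean(day2[i] for i in top10)

# The same people, re-measured, score closer to the population mean.
print(f"top-10 mean on day 1: {mean_day1:.1f}")
print(f"top-10 mean on day 2: {mean_day2:.1f}")
```

Because selecting the top 10 partly selects for lucky day-1 noise that does not repeat, the day-2 mean of the same people almost certainly falls back toward the population mean, with no intervention at all.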
How to deal with these limitations?
In general, we would be more confident that the observed effect is only due to the intervention if:
- The study conditions were under control.
- Participants were isolated from the outside world.
- The time interval between pretest and posttest was short.
More specifically, in order to reduce the effect of maturation and regression to the mean, we can add another pretest measure.
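The double-pretest idea can be illustrated with a short sketch. All scores below are hypothetical: the change between the two pretests estimates the baseline drift (maturation, regression), which is then compared against the change across the intervention.

```python
import statistics

# Hypothetical anxiety scores for five participants, measured three times:
# pretest 1 and pretest 2 bracket a no-intervention interval; the
# intervention happens between pretest 2 and the posttest.
pre1 = [48, 52, 45, 50, 47]
pre2 = [47, 51, 45, 49, 47]   # little change without any intervention
post = [40, 44, 38, 42, 39]   # larger drop after the intervention

baseline_change = statistics.mean(b - a for a, b in zip(pre1, pre2))
treatment_change = statistics.mean(b - a for a, b in zip(pre2, post))

print(f"change with no intervention: {baseline_change:+.1f}")
print(f"change across intervention:  {treatment_change:+.1f}")
# If the drop across the intervention clearly exceeds the baseline drift,
# maturation alone becomes a less plausible explanation for the effect.
```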
Example of a study that used the one-group pretest-posttest design
Kimport & Hartzell conducted a one-group pretest-posttest quasi-experiment to study the effect of clay work (as an art therapy) on reducing the anxiety of 49 psychiatric inpatient volunteers.
Pretest, posttest, and intervention
In order to measure anxiety, a self-report questionnaire was used as a pretest and posttest. The intervention was the creation of a clay pinch pot.
There was a statistically significant decrease in the anxiety score between the pretest and the posttest from 46.8 to 39.3.
Limitations
The limitations mentioned in the study were co-occurring treatments or alternative explanations that could bias the results (i.e. history effects):
- The intervention might have been effective in reducing anxiety by providing a simple distraction or a new experience for these patients.
- The group-talk between participants may have been a biasing factor.
- The personality of the researcher administering the intervention may have played a role in reducing anxiety.
Further reading
- Experimental vs Quasi-Experimental Design
- Understand Quasi-Experimental Design Through an Example
- One-Group Posttest Only Design
- Posttest-Only Control Group Design
- Static Group Comparison Design
- Separate-Sample Pretest-Posttest Design
- Matched Pairs Design
- Randomized Block Design
Experimental Research Design — 6 mistakes you should never make!
From their school days, students perform scientific experiments whose results illustrate and confirm the laws and theories of science. These experiments rest on a strong foundation of experimental research designs.
An experimental research design helps researchers execute their research objectives with more clarity and transparency.
In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.
What Is Experimental Research Design?
Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. The first set of variables acts as a constant and is used to measure the differences in the second set. Quantitative research is the best-known example of experimental research methods.
Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.
When Can a Researcher Conduct Experimental Research?
A researcher can conduct experimental research in the following situations —
- When time is an important factor in establishing a relationship between the cause and effect.
- When there is an invariable or never-changing behavior between the cause and effect.
- Finally, when the researcher wishes to understand the importance of the cause and effect.
Importance of Experimental Research Design
To publish significant results, choosing a quality research design forms the foundation on which the research study is built. Moreover, an effective research design helps establish quality decision-making procedures, structures the research for easier data analysis, and addresses the main research question. It is therefore essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.
By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.
Types of Experimental Research Designs
Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:
1. Pre-experimental Research Design
Researchers use a pre-experimental research design when one or more groups are observed after the factors of cause and effect have been applied. The pre-experimental design helps researchers understand whether further investigation of the observed groups is warranted.
Pre-experimental research is of three types —
- One-shot Case Study Research Design
- One-group Pretest-posttest Research Design
- Static-group Comparison
2. True Experimental Research Design
A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —
- There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
- A variable that can be manipulated by the researcher
- Random assignment of participants to groups
This type of experimental research is commonly observed in the physical sciences.
3. Quasi-experimental Research Design
The word “quasi” means “resembling.” A quasi-experimental design is similar to a true experimental design; the difference between the two is how participants are assigned to groups. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not feasible.
The classification of the research subjects, conditions, or groups determines the type of research design to be used.
Advantages of Experimental Research
Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:
- Researchers have firm control over variables to obtain results.
- The subject area does not limit the effectiveness of experimental research; it can be implemented in any field of study.
- The results are specific.
- After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
- Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
- Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.
6 Mistakes to Avoid While Designing Your Research
There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.
1. Invalid Theoretical Framework
Researchers often fail to check whether their hypothesis is logically testable. If your research design lacks basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.
2. Inadequate Literature Study
Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.
3. Insufficient or Incorrect Statistical Analysis
Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.
4. Undefined Research Problem
This is one of the most basic aspects of research design. The research problem statement must be clear; to achieve that, you must set the framework for developing research questions that address the core problems.
5. Research Limitations
Every study has limitations. You should anticipate them and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.
6. Ethical Implications
The most important yet less talked about topic is the ethical issue. Your research design must include ways to minimize any risk for your participants and also address the research problem or question at hand. If you cannot manage the ethical norms along with your research study, your research objectives and validity could be questioned.
Experimental Research Design Example
In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.)
By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
Experimental research is often the final form of a study in the research process and is considered to provide conclusive, specific results. But it is not suited to every research question: it demands substantial resources, time, and money, and is not easy to conduct unless a foundation of prior research has been built. Even so, it is widely used in research institutes and commercial industries because of the conclusiveness of its results.
Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!
Frequently Asked Questions
Randomization is important in experimental research because it helps ensure unbiased results. It also allows the cause-effect relationship to be measured in the group of interest.
Experimental research design lays the foundation of a research study and structures the research to establish a quality decision-making process.
There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi experimental research design.
The differences between an experimental and a quasi-experimental design are: 1. In quasi-experimental research, assignment to the control group is non-random, unlike in true experimental design, where it is random. 2. True experimental research always has a control group; in quasi-experimental research, a control group may not always be present.
Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.
Experimental Design – Types, Methods, Guide
Experimental design is a structured approach used to conduct scientific experiments. It enables researchers to explore cause-and-effect relationships by controlling variables and testing hypotheses. This guide explores the types of experimental designs, common methods, and best practices for planning and conducting experiments.
Experimental Design
Experimental design refers to the process of planning a study to test a hypothesis, where variables are manipulated to observe their effects on outcomes. By carefully controlling conditions, researchers can determine whether specific factors cause changes in a dependent variable.
Key Characteristics of Experimental Design :
- Manipulation of Variables : The researcher intentionally changes one or more independent variables.
- Control of Extraneous Factors : Other variables are kept constant to avoid interference.
- Randomization : Subjects are often randomly assigned to groups to reduce bias.
- Replication : Repeating the experiment or having multiple subjects helps verify results.
Purpose of Experimental Design
The primary purpose of experimental design is to establish causal relationships by controlling for extraneous factors and reducing bias. Experimental designs help:
- Test Hypotheses : Determine if there is a significant effect of independent variables on dependent variables.
- Control Confounding Variables : Minimize the impact of variables that could distort results.
- Generate Reproducible Results : Provide a structured approach that allows other researchers to replicate findings.
Types of Experimental Designs
Experimental designs can vary based on the number of variables, the assignment of participants, and the purpose of the experiment. Here are some common types:
1. Pre-Experimental Designs
These designs are exploratory and lack random assignment, often used when strict control is not feasible. They provide initial insights but are less rigorous in establishing causality.
- Example (one-shot case study) : A training program is provided, and participants’ knowledge is tested afterward, without a pretest.
- Example (one-group pretest-posttest) : A group is tested on reading skills, receives instruction, and is tested again to measure improvement.
2. True Experimental Designs
True experiments involve random assignment of participants to control or experimental groups, providing high levels of control over variables.
- Example : A new drug’s efficacy is tested with patients randomly assigned to receive the drug or a placebo.
- Example : Two groups are observed after one group receives a treatment, and the other receives no intervention.
3. Quasi-Experimental Designs
Quasi-experiments lack random assignment but still aim to determine causality by comparing groups or time periods. They are often used when randomization isn’t possible, such as in natural or field experiments.
- Example : Schools receive different curriculums, and students’ test scores are compared before and after implementation.
- Example : Traffic accident rates are recorded for a city before and after a new speed limit is enforced.
4. Factorial Designs
Factorial designs test the effects of multiple independent variables simultaneously. This design is useful for studying the interactions between variables.
- Example : Studying how caffeine (variable 1) and sleep deprivation (variable 2) affect memory performance.
- Example : An experiment studying the impact of age, gender, and education level on technology usage.
5. Repeated Measures Design
In repeated measures designs, the same participants are exposed to different conditions or treatments. This design is valuable for studying changes within subjects over time.
- Example : Measuring reaction time in participants before, during, and after caffeine consumption.
- Example : Testing two medications, with each participant receiving both but in a different sequence.
Methods for Implementing Experimental Designs
1. Randomization
- Purpose : Ensures each participant has an equal chance of being assigned to any group, reducing selection bias.
- Method : Use random number generators or assignment software to allocate participants randomly.
2. Blinding
- Purpose : Prevents participants or researchers from knowing which group (experimental or control) participants belong to, reducing bias.
- Method : Implement single-blind (participants unaware) or double-blind (both participants and researchers unaware) procedures.
3. Control Groups
- Purpose : Provides a baseline for comparison, showing what would happen without the intervention.
- Method : Include a group that does not receive the treatment but otherwise undergoes the same conditions.
4. Counterbalancing
- Purpose : Controls for order effects in repeated measures designs by varying the order of treatments.
- Method : Assign different sequences to participants, ensuring that each condition appears equally across orders.
5. Replication
- Purpose : Ensures reliability by repeating the experiment or including multiple participants within groups.
- Method : Increase sample size or repeat studies with different samples or in different settings.
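As an illustration of the randomization method, here is a minimal Python sketch using the standard library; the participant IDs and seed are arbitrary placeholders:

```python
import random

def randomize(participants, seed=None):
    """Randomly split a participant list into equal-sized treatment and control groups."""
    rng = random.Random(seed)
    shuffled = list(participants)   # copy so the caller's order is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

groups = randomize([f"P{i:02d}" for i in range(1, 21)], seed=7)
print("treatment:", groups["treatment"])
print("control:  ", groups["control"])
```

Seeding is shown only to make the sketch reproducible; in a real study the allocation sequence would be generated once and concealed from the people enrolling participants.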
Steps to Conduct an Experimental Design
1. Formulate the hypothesis : Clearly state what you intend to discover or prove through the experiment. A strong hypothesis guides the experiment’s design and variable selection.
2. Define the variables :
- Independent Variable (IV) : The factor manipulated by the researcher (e.g., amount of sleep).
- Dependent Variable (DV) : The outcome measured (e.g., reaction time).
- Control Variables : Factors kept constant to prevent interference with results (e.g., time of day for testing).
3. Select a design : Choose a design type that aligns with your research question, hypothesis, and available resources. For example, an RCT for a medical study or a factorial design for complex interactions.
4. Assign participants : Randomly assign participants to experimental or control groups. Ensure control groups are similar to experimental groups in all respects except for the treatment received.
5. Minimize bias : Randomize the assignment and, if possible, apply blinding to reduce potential bias.
6. Run the experiment : Follow a consistent procedure for each group, collecting data systematically. Record observations and manage any unexpected events or variables that may arise.
7. Analyze the data : Use appropriate statistical methods to test for significant differences between groups, such as t-tests, ANOVA, or regression analysis.
8. Interpret the results : Determine whether the results support your hypothesis and analyze any trends, patterns, or unexpected findings. Discuss possible limitations and implications of your results.
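For the statistical analysis step, here is a minimal example of a Welch's t statistic (the unequal-variances form of the independent-samples t-test mentioned above), computed with the standard library only. The scores are invented for illustration:

```python
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances assumed)."""
    va, vb = statistics.variance(a), statistics.variance(b)   # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))                 # standard error of the difference
    return (statistics.mean(a) - statistics.mean(b)) / se

treatment = [5, 6, 7, 8, 9]   # hypothetical posttest scores, treatment group
control   = [1, 2, 3, 4, 5]   # hypothetical posttest scores, control group

t = welch_t(treatment, control)
print(f"t = {t:.2f}")   # prints "t = 4.00"; larger |t| -> stronger evidence of a group difference
```

In practice one would also compute the degrees of freedom and a p-value (e.g., with a statistics package) before drawing conclusions; the sketch only shows where the statistic comes from.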
Examples of Experimental Design in Research
- Medicine : Testing a new drug’s effectiveness through a randomized controlled trial, where one group receives the drug and another receives a placebo.
- Psychology : Studying the effect of sleep deprivation on memory using a within-subject design, where participants are tested with different sleep conditions.
- Education : Comparing teaching methods in a quasi-experimental design by measuring students’ performance before and after implementing a new curriculum.
- Marketing : Using a factorial design to examine the effects of advertisement type and frequency on consumer purchase behavior.
- Environmental Science : Testing the impact of a pollution reduction policy through a time series design, recording pollution levels before and after implementation.
Experimental design is fundamental to conducting rigorous and reliable research, offering a systematic approach to exploring causal relationships. With various types of designs and methods, researchers can choose the most appropriate setup to answer their research questions effectively. By applying best practices, controlling variables, and selecting suitable statistical methods, experimental design supports meaningful insights across scientific, medical, and social research fields.
About the author
Muhammad Hassan
Researcher, Academic Writer, Web developer
Experimental Design: Types, Examples & Methods
Saul McLeod, PhD
Editor-in-Chief for Simply Psychology
BSc (Hons) Psychology, MRes, PhD, University of Manchester
Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.
Olivia Guy-Evans, MSc
Associate Editor for Simply Psychology
BSc (Hons) Psychology, MSc Psychology of Education
Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.
Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.
Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.
The researcher must decide how to allocate the sample to the different experimental groups. For example, if there are 10 participants, will all 10 take part in both conditions (e.g., repeated measures), or will they be split in half, each taking part in only one condition?
Three types of experimental designs are commonly used:
1. Independent Measures
Independent measures design, also known as between-groups , is an experimental design where different participants are used in each condition of the independent variable. This means that each condition of the experiment includes a different group of participants.
This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to one group.
Independent measures involve using two separate groups of participants, one in each condition. For example:
- Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
- Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only. If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition or become wise to the requirements of the experiment!
- Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background. These differences are known as participant variables (i.e., a type of extraneous variable ).
- Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables).
2. Repeated Measures Design
Repeated Measures design is an experimental design where the same participants participate in each independent variable condition. This means that each experiment condition includes the same group of participants.
Repeated Measures design is also known as within-groups or within-subjects design .
- Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
- Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior. Performance in the second condition may be better because the participants know what to do (i.e., practice effect). Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
- Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
- Control : To combat order effects, the researcher counter-balances the order of the conditions for the participants. Alternating the order in which participants perform in different conditions of an experiment.
Counterbalancing
Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”
We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.
The sample would be split into two groups of equal size: for example, group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.
Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.
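The counterbalancing scheme just described (group 1 does ‘A’ then ‘B,’ group 2 does ‘B’ then ‘A’) can be sketched in a few lines of Python; the participant labels and seed are hypothetical:

```python
import random

# Conditions from the example above; participant labels are made up.
CONDITIONS = ("loud noise", "no noise")

def counterbalance(participants, seed=None):
    """Give half the sample one condition order and the other half the reverse order."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    order_ab = list(CONDITIONS)             # A then B
    order_ba = list(reversed(CONDITIONS))   # B then A
    schedule = {}
    for p in shuffled[:half]:
        schedule[p] = order_ab
    for p in shuffled[half:]:
        schedule[p] = order_ba
    return schedule

schedule = counterbalance([f"P{i}" for i in range(1, 9)], seed=1)
for person, order in sorted(schedule.items()):
    print(person, "->", " then ".join(order))
```

Because each condition appears first for exactly half the sample, practice and fatigue effects are balanced across the two orders rather than eliminated for any individual participant.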
3. Matched Pairs Design
A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each pair is then placed into the experimental group and the other member into the control group .
One member of each matched pair must be randomly assigned to the experimental group and the other to the control group.
- Con : If one participant drops out, you lose two participants’ data.
- Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
- Con : Very time-consuming trying to find closely matched pairs.
- Pro : It avoids order effects, so counterbalancing is not necessary.
- Con : Impossible to match people exactly unless they are identical twins!
- Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all these problems.
Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:
1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.
2. Repeated measures /within groups : The same participants take part in each condition of the independent variable.
3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.
Learning Check
Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.
1 . To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.
The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.
2 . To assess the difference in reading comprehension between 7 and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.
3 . To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.
At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.
4 . To assess the effect of the organization on recall, a researcher randomly assigned student volunteers to two conditions.
Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.
Experiment Terminology
Ecological validity.
The degree to which an investigation represents real-life experiences.
Experimenter effects
These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.
Demand characteristics
Clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).
Independent variable (IV)
The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.
Dependent variable (DV)
Variable the experimenter measures. This is the outcome (i.e., the result) of a study.
Extraneous variables (EV)
All variables other than the independent variable that could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.
Confounding variables
Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.
Random Allocation
Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.
The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.
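Random allocation can be sketched in a few lines of Python; the participant labels and the helper name below are hypothetical illustrations, not from the text:

```python
import random

def randomly_allocate(participants, n_conditions=2, seed=None):
    """Shuffle participants, then deal them into conditions so every
    participant has an equal chance of ending up in any condition,
    which limits bias from participant variables."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    # Deal round-robin so group sizes differ by at most one.
    return [shuffled[i::n_conditions] for i in range(n_conditions)]

participants = ["P%02d" % i for i in range(1, 21)]
groups = randomly_allocate(participants, seed=42)
```

Fixing the seed makes a particular allocation reproducible for reporting; omitting it gives a fresh random allocation on each run.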
Order effects
Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:
(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;
(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.
- One-Group Pretest–Posttest Design
- By: Gregory A. Cranmer
- In: The SAGE Encyclopedia of Communication Research Methods
- Chapter DOI: https://doi.org/10.4135/9781483381411.n388
- Subject: Communication and Media Studies , Sociology
A one-group pretest–posttest design is a type of research design that is most often utilized by behavioral researchers to determine the effect of a treatment or intervention on a given sample. This research design is characterized by two features. The first feature is the use of a single group of participants (i.e., a one-group design). This feature denotes that all participants are part of a single condition—meaning all participants are given the same treatments and assessments. The second feature is a linear ordering that requires the assessment of a dependent variable before and after a treatment is implemented (i.e., a pretest–posttest design). Within pretest–posttest research designs, the effect of a treatment is determined by calculating the difference between the first assessment of the dependent variable (i.e., the pretest) and the second assessment of the dependent variable (i.e., the posttest). The one-group pretest–posttest research design is illustrated in Figure 1 . This entry discusses the design’s implementation in social sciences, examines threats to internal validity, and explains when and how to use the design.
Implementation in Social Sciences
The one-group pretest–posttest research design is mostly implemented by social scientists to evaluate the effectiveness of educational programs, the restructuring of social groups and organizations, or the implementation of behavioral interventions. A common example is curriculum or instructor assessments, as instructors frequently use the one-group pretest–posttest research design to assess their own effectiveness as instructors or the effectiveness of a given curriculum. To achieve this aim, instructors assess their students’ knowledge of a given topic or skill at performing a particular behavior at the beginning of the course (i.e., a pretest, O 1 ). Then, these instructors devote their efforts over a period of time to teaching their students and assisting them in acquiring knowledge or skills that relate to the topic of the course (i.e., a treatment, X 1 ). Finally, at the conclusion of the course, instructors again assess students’ knowledge or skills via exams, projects, performances, or exit interviews (i.e., a posttest, O 2 ). The difference between students’ knowledge or skills at the beginning of the course compared with the end of the course is often attributed to the education they were provided by the instructor. This scenario is commonly used within STEM disciplines (i.e., science, technology, engineering, and mathematics), but it is also utilized within the discipline of communication studies—especially within public speaking or introductory communication courses. This example will be referenced in the subsequent section to illustrate potential threats to the internal validity of the one-group pretest–posttest research design.
Threats to Internal Validity
The one-group pretest–posttest research design does not account for many confounding variables that may threaten the internal validity of a study. In particular, this research design is susceptible to seven distinct threats to internal validity that may promote inaccurate conclusions regarding the effectiveness of a treatment or intervention.
History Effects
The first type of threat is history effects , which acknowledges that events or experiences outside the scope of a study may influence the changes in a dependent variable from pretest to posttest. The longer a research design takes to execute and the more time participants spend outside the controlled environment of an experiment or study, the greater the chance that the posttest can be influenced by unaccounted for variables or experiences. For instance, in the aforementioned example, students will spend a majority of their time outside of the confines of the classroom. As such, the growth in their knowledge or skills may be explained by experiences besides the few hours of instruction they receive per week (e.g., they could learn in other classes, watch documentaries on their own time, or refine their skills as part of extracurricular experiences).
Maturation Effect
The second threat is a maturation effect , which recognizes that any changes in the dependent variable between the pretest and posttest may be attributed to changes that naturally occur within a sample. For instance, students’ abilities to acquire knowledge over a period of time may be attributed to the development of their brain and cognitive capabilities as they age. Similar to history effects, the longer a study occurs, the more likely a maturation effect is to occur.
Figure 1. A Visual Representation of a One-Group Pretest–Posttest Research Design
Hawthorne Effect
The third type of threat is the Hawthorne effect , which acknowledges the possibility that participants’ awareness of being included in a study may influence their behavior. This effect can be problematic within a one-group pretest–posttest design if participants are not aware of their inclusion in a study until after they complete the pretest. For instance, if students are unaware of their inclusion in a study until after the pretest, they may put forth extra effort during the posttest because they are now cognizant that their performance will be evaluated and will be considered a representation of their instructors’ effectiveness (i.e., information they did not possess during the pretest).
Participant Mortality
The fourth threat is participant mortality , which occurs when a considerable number of participants withdraw from a study before completing the posttest. Throughout most research designs, it is inevitable that some participants will not finish, but when mortality becomes excessive, it can alter the relationship between the pretest and posttest assessments. For instance, if the students with the lowest scores on the pretest withdraw from the course before the posttest (i.e., midsemester), then, assuming those students were less academically inclined, the posttest scores for the course will be artificially inflated. Furthermore, the examination of only the remaining students’ performances from the pretest to posttest will become more susceptible to regression threats (discussed later in this entry) and may lead to the conclusion that the treatment or intervention had a detrimental effect on the dependent variable.
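The inflation described above is easy to demonstrate with hypothetical scores: dropping the weakest pretest scorers raises the observed posttest mean even though no individual score changed. All numbers here are invented for illustration:

```python
import statistics

def posttest_mean_after_dropout(pre, post, drop_k):
    """Posttest mean computed only over participants who remain after
    the drop_k lowest pretest scorers withdraw from the study."""
    # Pair scores, then drop the participants with the lowest pretests.
    remaining = sorted(zip(pre, post))[drop_k:]
    return statistics.mean(p for _, p in remaining)

# Hypothetical pretest/posttest scores for ten participants.
pre = [40, 45, 50, 55, 60, 65, 70, 75, 80, 85]
post = [42, 46, 52, 56, 61, 66, 71, 76, 81, 86]
full_mean = statistics.mean(post)                      # full-sample mean
observed = posttest_mean_after_dropout(pre, post, 3)   # inflated mean
```

The observed mean rises purely because of who is left in the sample, not because of any treatment effect.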
Instrument Reactivity
The fifth threat is instrument reactivity , which occurs when the implementation of the pretest uniquely influences participants’ performances on the posttest. Pretests can prime participants to respond to the posttest in a manner that they otherwise would not have if they did not receive the pretest. For example, the pretest of students’ knowledge at the beginning of the course could raise their awareness to particular topics or skills that they do not yet possess. This awareness would guide how they approach the course (e.g., the information they notice or how they study). Thus, the priming that resulted from the pretest would influence students’ performances on the posttest.
Instrumentation Effect
The sixth threat is an instrumentation effect , which recognizes that changes in how the dependent variable is assessed during the pretest and posttest, rather than the treatment or intervention, may explain observed changes in a dependent variable. A dependent variable is often operationalized with different assessments from the pretest to posttest to avoid instrument reactivity. For instance, when assessing students’ learning, it would not make sense to give the exact same assessment as the pretest and posttest because students would know the answers during the posttest. Thus, a new assessment is needed. The difference between the questions used in the first and second assessments, rather than the treatment, may then account for changes in students’ performance.
Regression to the Mean
The final threat is regression to the mean , which recognizes that participants with extremely high or low scores on the pretest are more likely to record a score that is closer to the study average on their posttest. For instance, if a student gets a 100% on the pretest, it will be difficult for that student to record another 100% on the posttest, as a single error would lower their score. Similarly, if a student performs extremely poorly and records a 10% on the pretest, their score will likely increase on the posttest simply by chance.
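A quick simulation illustrates why a group selected for extreme pretest scores drifts back toward the average on the posttest. The ability-plus-luck model and all parameters below are assumptions for illustration, not taken from the entry:

```python
import random
import statistics

def simulate_regression_to_mean(n=10_000, seed=1):
    """Each score = stable ability + independent luck on that occasion.

    Selecting participants on an extreme pretest score selects for
    good luck as well as high ability; the luck does not carry over,
    so the group's posttest mean falls back toward the overall mean.
    """
    rng = random.Random(seed)
    ability = [rng.gauss(70, 10) for _ in range(n)]
    pretest = [a + rng.gauss(0, 10) for a in ability]
    posttest = [a + rng.gauss(0, 10) for a in ability]

    # Select the top 5% of pretest scorers.
    cutoff = sorted(pretest)[int(0.95 * n)]
    selected = [i for i in range(n) if pretest[i] >= cutoff]

    pre_mean = statistics.mean(pretest[i] for i in selected)
    post_mean = statistics.mean(posttest[i] for i in selected)
    return pre_mean, post_mean

pre_mean, post_mean = simulate_regression_to_mean()
# The selected group's posttest mean sits between its (extreme)
# pretest mean and the population mean of 70.
```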
When and How to Use
Although the one-group pretest–posttest research design is recognized as a weak experimental design, under particular conditions, it can be useful. An advantage of this research design is that it is simple to implement and the results can often be calculated with simple analyses (i.e., most often a dependent t -test). Therefore, this research design is viable for students or early-career social scientists who are still learning research methods and analyses. This design is also beneficial when only one group of participants is available to the researcher or when creating a control group is unethical. In this scenario, a one-group pretest–posttest design is more rigorous than some other one-group designs (e.g., one-group posttest design) because it provides a baseline for participant performance. However, when using this research design, researchers should attempt to avoid lengthy studies and to control for confounding variables given the previously mentioned threats to internal validity.
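The dependent t-test mentioned above can be computed directly from the posttest-minus-pretest difference scores. This sketch uses only the standard library, and the attitude scores are hypothetical:

```python
import math
import statistics

def paired_t(pretest, posttest):
    """Dependent (paired) t statistic for a one-group
    pretest-posttest design: t = mean(d) / (sd(d) / sqrt(n)),
    where d is the list of posttest-minus-pretest differences."""
    diffs = [post - pre for pre, post in zip(pretest, posttest)]
    n = len(diffs)
    t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
    return t, n - 1  # t statistic and degrees of freedom

# Hypothetical attitude scores for eight students.
pre = [12, 14, 11, 15, 13, 10, 14, 12]
post = [15, 16, 14, 18, 15, 13, 17, 14]
t, df = paired_t(pre, post)
```

The resulting t statistic is compared against the t distribution with n - 1 degrees of freedom; in practice a library routine such as `scipy.stats.ttest_rel` would also return the p value directly.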
Gregory A. Cranmer
See also Errors of Measurement ; Errors of Measurement: Regression Towards the Mean ; Internal Validity ; Mortality in Sample ; t -Test
Further Readings
Campbell, D. T. (1957). Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54 , 297–312.
Dimitrov, D. M., & Rumrill, P. D. (2003). Pretest-posttest designs and measurement of change. Work, 20 , 159–165.
Spector, P. E. (1981). Research designs . Beverly Hills, CA: Sage.