True Experimental Design - Types & How to Conduct
True experimental research is often considered the most accurate form of research. The researcher has complete control over the process, which helps reduce error in the results and increases confidence in the research outcome.
In this blog, we will explore in detail what it is, its various types, and how to conduct it in 7 steps.
What is a true experimental design?
True experimental design is a statistical approach to establishing a cause-and-effect relationship between variables. It is one of the most accurate research methods, providing substantial backing to support the existence of such relationships.
There are three elements that a study must include in order to qualify as this type of research:
1. The existence of a control group: The sample of participants is divided into two groups – one that is subjected to the experimental treatment, and so undergoes changes, and one that is not.
2. The presence of an independent variable: There must be an independent variable, influencing the other variables, that the researcher can manipulate and observe.
3. Random assignment: Participants must be randomly distributed within the groups.
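To make the random assignment element concrete, here is a minimal Python sketch; the participant IDs and the `assign_groups` helper are illustrative assumptions, not part of any specific study.

```python
import random

def assign_groups(participants, seed=None):
    """Randomly split participants into a control and an experimental group."""
    rng = random.Random(seed)
    shuffled = list(participants)   # copy so the original list is untouched
    rng.shuffle(shuffled)           # random order removes selection bias
    midpoint = len(shuffled) // 2
    return {"control": shuffled[:midpoint], "experimental": shuffled[midpoint:]}

# Example with made-up participant IDs
groups = assign_groups([f"P{i:03d}" for i in range(1, 21)], seed=42)
print(len(groups["control"]), len(groups["experimental"]))  # 10 10
```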
An example of true experimental design
A study to observe the effects of physical exercise on productivity levels can be conducted using a true experimental design.
Suppose 300 office workers in their 20s volunteer for the study. These 300 participants are randomly distributed into 3 groups.
- 1st Group: A control group that does not participate in exercising and has to carry on with their everyday schedule.
- 2nd Group: Asked to indulge in home workouts for 30-45 minutes every day for one month.
- 3rd Group: Has to work out 2 hours every day for a month. Both exercising groups have to take one rest day per week.
In this research, the level of physical exercise acts as an independent variable while the performance at the workplace is a dependent variable that varies with the change in exercise levels.
Before initiating the true experimental research, each participant’s current performance at the workplace is evaluated and documented. As the study goes on, a progress report is generated for each of the 300 participants to monitor how their physical activity has impacted their workplace functioning.
At the end of two weeks, participants from the 2nd and 3rd groups who are able to endure their current level of workout are asked to increase their daily exercise time by half an hour, while those who are not are advised to either continue with the same duration or reduce it by half an hour.
So, in this true experimental design, a participant who at the end of two weeks cannot keep up with 2 hours of exercise will now work out for 1 hour and 30 minutes for the remaining two weeks, while someone who can endure the 2 hours will push towards 2 hours and 30 minutes.
In this manner, the researcher notes the exercise durations of each member of the two active groups for the first two weeks and for the remaining two weeks after the adjustment, and also monitors their corresponding performance levels at work.
The above example can be categorized as true experimental research since we now have:
- Control group: Group 1 carries on with their schedule without being conditioned to exercise.
- Independent variable: The duration of exercise each day.
- Random assignment: 300 participants are randomly distributed into 3 groups and as such, there are no criteria for the assignment.
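A rough Python sketch of how this example study could be set up follows. The group sizes and the two-week adjustment rule come from the description above, while the group labels, the helper name, and the `can_endure` flag are illustrative assumptions.

```python
import random

rng = random.Random(7)
participants = [f"P{i:03d}" for i in range(1, 301)]   # 300 office-worker volunteers
rng.shuffle(participants)

# Random assignment into the three groups described above
groups = {
    "control":  participants[0:100],    # keeps the everyday schedule, no prescribed exercise
    "moderate": participants[100:200],  # 30-45 minute home workouts daily
    "intense":  participants[200:300],  # 2-hour workouts daily
}

def adjust_minutes(current_minutes, can_endure):
    """Two-week check-in: add 30 minutes if the participant copes with the current level,
    otherwise drop it by 30 minutes (in the article they may also keep the same duration)."""
    return current_minutes + 30 if can_endure else current_minutes - 30

# A participant in the intense group who cannot keep up with 120 minutes
print(adjust_minutes(120, can_endure=False))  # -> 90
```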
What is the purpose of conducting true experimental research?
The primary use and purpose of a true experimental design lie in establishing meaningful relationships based on quantitative observation.
True experiments focus on connecting the dots between two or more variables by showing how a change in one variable brings about a change in another. The relationship can be as small-scale as enough sleep improving retention, or as large-scale as geographical differences affecting consumer behavior.
The main idea is to ensure the presence of different sets of variables to study with some shared commonality.
Beyond this, the design is used whenever the three criteria – random assignment, a control group, and an independent variable that the researcher can manipulate – are met.
What are the advantages of true experimental design?
Let’s take a look at some advantages that make this research design conclusive and accurate.
Concrete method of research:
The statistical nature of this experimental design makes it highly credible and accurate: the data collected from the research are analyzed with statistical tools.
This makes the results objective, actionable, and easy to understand, and makes the design a better alternative to observation-based studies, which are subjective and difficult to draw inferences from.
Easy to understand and replicate:
Since the research provides hard figures and a precise representation of the entire process, the results presented become easily comprehensible for any stakeholder.
Further, it becomes easier for future researchers studying the same subject to grasp prior work and replicate its results to supplement their own research.
Establishes comparison:
The presence of a control group in true experimental research allows researchers to compare and contrast. The control group's outcome serves as a frame of reference against which the effect of the methodology applied to the experimental group can be studied.
Conclusive:
The research combines observational and statistical analysis to generate informed conclusions. This directs the flow of follow-up actions in a definite direction, thus, making the research process fruitful.
What are the disadvantages of true experimental design?
We should also look at the disadvantages this design can pose, to help you determine when and how you should use it.
Costly:
This research design is expensive. Recruiting and managing the large number of participants needed for a representative sample requires substantial investment.
The high resource investment makes it essential for the researcher to plan every aspect of the process down to the smallest detail.
Too idealistic:
The research takes place in a completely controlled environment. Such a scenario is not representative of real-world situations and so the results may not be authentic.
This is one of the main reasons why open-field research is often preferred over lab research, in which the researcher can influence the study.
Time-consuming:
Setting up and conducting a true experiment is highly time-consuming. This is because of the processes like recruiting a large enough sample, gathering respondent data, random distribution into groups, monitoring the process over a span of time, tracking changes, and making adjustments.
These processes, although essential to the model, make this design an infeasible option when results are required in the near future.
Now that we’ve learned about the advantages and disadvantages let’s look at its types.
What are the 3 types of true experimental design?
The research design is categorized into three types based on the way you should conduct the research. Each type has its own procedure and guidelines, which you should be aware of to achieve reliable data.
The three types are:
1) Post-test-only control group design.
2) Pre-test post-test control group design.
3) Solomon four group control design.
Let’s see how these three types differ.
1) Post-test-only control group design:
In this type of true experimental research, the control and experimental groups, formed by random allocation, are not tested before the experimental methodology is applied. This is done to avoid affecting the quality of the study.
Participants are always on the lookout to identify the purpose and criteria of assessment. A pre-test conveys to them the basis on which they are being judged, which can lead them to modify their final responses and compromise the quality of the entire research process.
However, omitting the pre-test can hinder your ability to compare pre-experiment and post-experiment conditions, and thus to gauge the changes that have taken place over the course of the research.
2) Pre-test post-test control group design:
It is a modification of the post-test-only control group design, with an additional test carried out before the implementation of the experimental methodology.
This two-way testing helps identify significant changes in the research groups brought about by the experimental intervention. However, there is no guarantee that the results present the true picture, as post-test responses can be affected by the respondents' exposure to the pre-test.
3) Solomon four group control design:
This type of true experimental design involves the random distribution of sample members into 4 groups. These groups consist of 2 control groups that are not subjected to the experiments and changes and 2 experimental groups that the experimental methodology applies to.
Out of these 4 groups, one control and one experimental group is used for pre-testing while all four groups are subjected to post-tests.
This way, the researcher can establish a pre-test/post-test contrast while another set of respondents, never exposed to the pre-test, provides genuine post-test responses, thus accounting for testing effects.
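To illustrate how the Solomon four-group design accounts for testing effects, here is a hedged sketch using simulated, entirely hypothetical post-test scores. Comparing the treatment effect estimated with and without a pre-test indicates whether the pre-test itself influenced responses.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50  # participants per group (hypothetical)

# Simulated post-test scores for the four Solomon groups
posttest = {
    "pretested_treated":   rng.normal(75, 10, n),  # pre-test + intervention
    "pretested_control":   rng.normal(65, 10, n),  # pre-test, no intervention
    "unpretested_treated": rng.normal(74, 10, n),  # intervention only
    "unpretested_control": rng.normal(64, 10, n),  # neither pre-test nor intervention
}

means = {name: scores.mean() for name, scores in posttest.items()}

# Treatment effect with and without a pre-test; a large gap between the two
# differences would suggest the pre-test itself influenced responses.
effect_with_pretest = means["pretested_treated"] - means["pretested_control"]
effect_without_pretest = means["unpretested_treated"] - means["unpretested_control"]
print(round(effect_with_pretest, 2), round(effect_without_pretest, 2))
```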
What is the difference between pre-experimental & true experimental research design?
Pre-experimental research helps gauge the effect of a researcher's intervention on a group of people. It is a preliminary step in which you work out whether a proper experiment is worth designing to address a research question.
A true experiment is the research itself: it helps establish a cause-and-effect relationship between the variables.
We’ll discuss the differences between the two based on four categories, which are:
- Observatory Vs. Statistical.
- Absence Vs. Presence of control groups.
- Non-randomization Vs. Randomization.
- Feasibility test Vs. Conclusive test.
Let’s find the differences to better understand the two experiments.
Observatory vs Statistical:
Pre-experimental research is an observation-based model i.e. it is highly subjective and qualitative in nature.
The true experimental design offers an accurate analysis of the data collected using statistical data analysis tools.
Absence vs Presence of control groups:
Pre-experimental research designs do not usually employ a control group which makes it difficult to establish contrast.
While all three types of true experiments employ control groups.
Non-randomization vs Randomization:
Pre-experimental research doesn’t use randomization in certain cases whereas
True experimental research always adheres to a randomization approach to group distribution.
Feasibility test vs Conclusive test:
Pre-tests are used as a feasibility mechanism to see if the methodology being applied is actually suitable for the research purpose and whether it will have an impact or not.
While true experiments are conclusive in nature.
7 Steps to conduct true experimental research
It’s important to understand the steps/guidelines of research in order to maintain research integrity and gather valid and reliable data.
We have explained 7 steps to conducting this research in detail. The TL;DR version of it is:
1) Identify the research objective.
2) Identify independent and dependent variables.
3) Define and group the population.
4) Conduct Pre-tests.
5) Conduct the research.
6) Conduct post-tests.
7) Analyse the collected data.
Now let’s explore these seven steps in true experimental design.
1) Identify the research objective:
Identify the variables you need to analyze for a cause-and-effect relationship. Deliberate on which particular relationship is worth studying to help you make effective decisions, and frame the research objective in one of the following ways:
- Determination of the impact of X on Y
- Studying how the usage/application of X causes Y
2) Identify independent and dependent variables:
Establish clarity as to which is your controlled/independent variable and which variable will change and be observed by the researcher. In the objectives above, X is the independent variable and Y is the dependent variable.
3) Define and group the population:
Define the targeted audience for the true experimental design. It is out of this target audience that a sample needs to be selected for accurate research to be carried out. It is imperative that the target population gets defined in as much detail as possible.
To narrow the field of view, a random selection of individuals from the population is carried out. These selected respondents help the researcher answer the research questions. After selection, this sample of individuals is randomly subdivided into control and experimental groups.
4) Conduct Pre-tests:
Before commencing the actual study, pre-tests are carried out wherever necessary. These pre-tests assess the condition of the respondents so that a comparison between the pre- and post-tests reveals the change brought about by the research.
5) Conduct the research:
Implement your experimental procedure with the experimental group created in the previous step in the true experimental design. Provide the necessary instructions and solve any doubts or queries that the participants might have. Monitor their practices and track their progress. Ensure that the intervention is being properly complied with, otherwise, the results can be tainted.
6) Conduct post-tests:
Gauge the impact that the intervention has had on the experimental group and compare it with the pre-tests. This is particularly important since the pre-test serves as a starting point from where all the changes that have been measured in the post-test, are the effect of the experimental intervention.
For example: if the pre-test in the above example shows that a particular customer service employee was able to solve 10 customer problems in two hours, and the post-test conducted after a month of daily 2-hour workouts shows 5 additional problems being solved within those 2 hours, then those extra 5 resolutions are the result of the additional productivity gained from the exercise put in by the employee.
7) Analyse the collected data:
Use appropriate statistical tools to derive inferences from the data observed and collected. Correlational data analysis tools and tests of significance are highly effective for relationship-based studies and so are highly applicable to true experimental research.
This step also includes contrasting the pre- and post-tests to scope in on the impact that the independent variable has had on the dependent variable. A contrast between the control group and the experimental groups sheds light on the change brought about within the span of the experiment and how much of that change was intentional rather than due to chance.
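As a hedged illustration of this analysis step, the sketch below uses simulated pre- and post-test scores (the numbers and the assumed 5-point effect are hypothetical) and applies an independent-samples t-test to the change scores, plus a correlation between exercise minutes and improvement.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100  # hypothetical participants per group

# Simulated pre- and post-test productivity scores
pre_control = rng.normal(50, 8, n)
post_control = pre_control + rng.normal(0, 4, n)             # no systematic change
pre_experimental = rng.normal(50, 8, n)
post_experimental = pre_experimental + rng.normal(5, 4, n)   # assumed +5 point effect

# Contrast the change scores of the two groups (independent-samples t-test)
change_control = post_control - pre_control
change_experimental = post_experimental - pre_experimental
t_stat, p_value = stats.ttest_ind(change_experimental, change_control)

# Correlation between daily exercise minutes and improvement (experimental group only);
# here the minutes are generated independently, so the correlation should be near zero.
minutes = rng.normal(45, 10, n)
r, r_p = stats.pearsonr(minutes, change_experimental)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print(f"r = {r:.2f}, p = {r_p:.4f}")
```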
Wrapping up
This sums up everything about true experimental design. While it is often considered complex and expensive, it is also one of the most accurate forms of research.
A true experiment uses statistical analysis, which ensures that your data is reliable and has a high confidence level. Curious to learn how you can use survey software to conduct your experimental research? Book a meeting with us.
- What is true experimental research design?
True experimental research design helps investigate the cause-and-effect relationships between the variables under study. The research method requires manipulating an independent variable, random assignment of participants to different groups, and measuring the dependent variable.
- How does true experiment research differ from other research designs?
The true experiment uses random selection/assignment of participants in the group to minimize preexisting differences between groups. It allows researchers to make causal inferences about the influence of independent variables. This is the factor that makes it different from other research designs like correlational research.
- What are the key components of true experimental research designs?
The following are the important factors of a true experimental design:
- Manipulation of the independent variable.
- Control groups.
- Experiment groups.
- Dependent variable.
- Random assignment.
- What are some advantages of true experiment design?
It enables you to establish causal relationships between variables and offers control over the confounding variables. Moreover, you can generalize the research findings to the target population.
- What ethical considerations are important in a true experimental research design?
When conducting this research method, you must obtain informed consent from the participants. It’s important to ensure the confidentiality and privacy of the participants to minimize any risk or harm.
True Experimental Design
True experimental design is regarded as the most accurate form of experimental research, in that it tries to prove or disprove a hypothesis mathematically, with statistical analysis.
For some of the physical sciences, such as physics, chemistry and geology, true experiments are standard and commonly used. For social sciences, psychology and biology, they can be a little more difficult to set up.
For an experiment to be classed as a true experimental design, it must fit all of the following criteria.
- The sample groups must be assigned randomly.
- There must be a viable control group.
- Only one variable can be manipulated and tested. It is possible to test more than one, but such experiments and their statistical analysis tend to be cumbersome and difficult.
- The tested subjects must be randomly assigned to either control or experimental groups.
Advantages
The results of a true experimental design can be statistically analyzed, and so there can be little argument about the results.
It is also much easier for other researchers to replicate the experiment and validate the results.
For physical sciences working with mainly numerical data, it is much easier to manipulate one variable, so true experimental design usually gives a yes or no answer.
Disadvantages
Whilst perfect in principle, there are a number of problems with this type of design. Firstly, they can be almost too perfect, with the conditions being under complete control and not being representative of real world conditions.
For psychologists and behavioral biologists, for example, there can never be any guarantee that a human or living organism will exhibit ‘normal’ behavior under experimental conditions.
True experiments can be too accurate and it is very difficult to obtain a complete rejection or acceptance of a hypothesis because the standards of proof required are so difficult to reach.
True experiments are also difficult and expensive to set up. They can also be very impractical.
While for some fields, like physics, there are not as many variables and so the design is easy, for the social and biological sciences, where variations are not so clearly defined, it is much more difficult to exclude other factors that may be affecting the manipulated variable.
True experimental design is an integral part of science, usually acting as a final test of a hypothesis. Whilst they can be cumbersome and expensive to set up, literature reviews, qualitative research and descriptive research can serve as a good precursor to generate a testable hypothesis, saving time and money.
Whilst they can be a little artificial and restrictive, they are the only type of research that is accepted by all disciplines as statistically provable.
Martyn Shuttleworth (Mar 24, 2008). True Experimental Design. Retrieved Nov 24, 2024 from Explorable.com: https://explorable.com/true-experimental-design
Experimental Design – Types, Methods, Guide
Experimental design is a structured approach used to conduct scientific experiments. It enables researchers to explore cause-and-effect relationships by controlling variables and testing hypotheses. This guide explores the types of experimental designs, common methods, and best practices for planning and conducting experiments.
Experimental Design
Experimental design refers to the process of planning a study to test a hypothesis, where variables are manipulated to observe their effects on outcomes. By carefully controlling conditions, researchers can determine whether specific factors cause changes in a dependent variable.
Key Characteristics of Experimental Design :
- Manipulation of Variables : The researcher intentionally changes one or more independent variables.
- Control of Extraneous Factors : Other variables are kept constant to avoid interference.
- Randomization : Subjects are often randomly assigned to groups to reduce bias.
- Replication : Repeating the experiment or having multiple subjects helps verify results.
Purpose of Experimental Design
The primary purpose of experimental design is to establish causal relationships by controlling for extraneous factors and reducing bias. Experimental designs help:
- Test Hypotheses : Determine if there is a significant effect of independent variables on dependent variables.
- Control Confounding Variables : Minimize the impact of variables that could distort results.
- Generate Reproducible Results : Provide a structured approach that allows other researchers to replicate findings.
Types of Experimental Designs
Experimental designs can vary based on the number of variables, the assignment of participants, and the purpose of the experiment. Here are some common types:
1. Pre-Experimental Designs
These designs are exploratory and lack random assignment, often used when strict control is not feasible. They provide initial insights but are less rigorous in establishing causality.
- Example : A training program is provided, and participants’ knowledge is tested afterward, without a pretest.
- Example : A group is tested on reading skills, receives instruction, and is tested again to measure improvement.
2. True Experimental Designs
True experiments involve random assignment of participants to control or experimental groups, providing high levels of control over variables.
- Example : A new drug’s efficacy is tested with patients randomly assigned to receive the drug or a placebo.
- Example : Two groups are observed after one group receives a treatment, and the other receives no intervention.
3. Quasi-Experimental Designs
Quasi-experiments lack random assignment but still aim to determine causality by comparing groups or time periods. They are often used when randomization isn’t possible, such as in natural or field experiments.
- Example : Schools receive different curriculums, and students’ test scores are compared before and after implementation.
- Example : Traffic accident rates are recorded for a city before and after a new speed limit is enforced.
4. Factorial Designs
Factorial designs test the effects of multiple independent variables simultaneously. This design is useful for studying the interactions between variables.
- Example : Studying how caffeine (variable 1) and sleep deprivation (variable 2) affect memory performance.
- Example : An experiment studying the impact of age, gender, and education level on technology usage.
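As an illustration of the caffeine-by-sleep-deprivation example above, the following sketch computes the cell means of a 2x2 factorial design using hypothetical memory scores; a difference in the caffeine effect between sleep conditions would point to an interaction. The column names and values are assumptions for illustration only.

```python
import pandas as pd

# Hypothetical memory scores for a 2x2 factorial design (caffeine x sleep deprivation)
data = pd.DataFrame({
    "caffeine":  ["yes", "yes", "yes", "yes", "no", "no", "no", "no"],
    "sleep_dep": ["yes", "yes", "no", "no", "yes", "yes", "no", "no"],
    "memory":    [62, 58, 80, 84, 50, 46, 78, 82],
})

# Cell means: the pattern across the table reveals main effects and any interaction
cell_means = data.pivot_table(values="memory", index="caffeine", columns="sleep_dep")
print(cell_means)
```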
5. Repeated Measures Design
In repeated measures designs, the same participants are exposed to different conditions or treatments. This design is valuable for studying changes within subjects over time.
- Example : Measuring reaction time in participants before, during, and after caffeine consumption.
- Example : Testing two medications, with each participant receiving both but in a different sequence.
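Here is a minimal sketch of the repeated measures logic, using hypothetical reaction times for the same participants before and after caffeine and a paired t-test; the values are invented for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical reaction times (ms) for the same participants before and after caffeine
before = np.array([310, 295, 342, 301, 288, 330, 315, 299, 320, 305])
after  = np.array([290, 280, 335, 285, 270, 322, 300, 281, 310, 292])

# Paired (repeated measures) t-test: each participant serves as their own control
t_stat, p_value = stats.ttest_rel(before, after)
print(f"mean change = {(after - before).mean():.1f} ms, t = {t_stat:.2f}, p = {p_value:.4f}")
```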
Methods for Implementing Experimental Designs
1. Random Assignment
- Purpose : Ensures each participant has an equal chance of being assigned to any group, reducing selection bias.
- Method : Use random number generators or assignment software to allocate participants randomly.
2. Blinding
- Purpose : Prevents participants or researchers from knowing which group (experimental or control) participants belong to, reducing bias.
- Method : Implement single-blind (participants unaware) or double-blind (both participants and researchers unaware) procedures.
3. Control Groups
- Purpose : Provides a baseline for comparison, showing what would happen without the intervention.
- Method : Include a group that does not receive the treatment but otherwise undergoes the same conditions.
4. Counterbalancing
- Purpose : Controls for order effects in repeated measures designs by varying the order of treatments.
- Method : Assign different sequences to participants, ensuring that each condition appears equally across orders (see the sketch after this list).
5. Replication
- Purpose : Ensures reliability by repeating the experiment or including multiple participants within groups.
- Method : Increase sample size or repeat studies with different samples or in different settings.
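As a small illustration of the counterbalancing method above, the sketch below enumerates every presentation order of three hypothetical conditions and cycles participants through them so that each order is used equally often; the condition labels and participant IDs are illustrative.

```python
from itertools import permutations

conditions = ["A", "B", "C"]             # e.g. three treatments in a repeated measures design

# Full counterbalancing: every possible presentation order is used equally often,
# so order effects are spread evenly across conditions.
orders = list(permutations(conditions))  # 6 orders for 3 conditions

participants = [f"P{i:02d}" for i in range(1, 13)]
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}

for p in participants[:6]:
    print(p, assignment[p])
```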
Steps to Conduct an Experimental Design
- Clearly state what you intend to discover or prove through the experiment. A strong hypothesis guides the experiment’s design and variable selection.
- Independent Variable (IV) : The factor manipulated by the researcher (e.g., amount of sleep).
- Dependent Variable (DV) : The outcome measured (e.g., reaction time).
- Control Variables : Factors kept constant to prevent interference with results (e.g., time of day for testing).
- Choose a design type that aligns with your research question, hypothesis, and available resources. For example, an RCT for a medical study or a factorial design for complex interactions.
- Randomly assign participants to experimental or control groups. Ensure control groups are similar to experimental groups in all respects except for the treatment received.
- Randomize the assignment and, if possible, apply blinding to minimize potential bias.
- Follow a consistent procedure for each group, collecting data systematically. Record observations and manage any unexpected events or variables that may arise.
- Use appropriate statistical methods to test for significant differences between groups, such as t-tests, ANOVA, or regression analysis.
- Determine whether the results support your hypothesis and analyze any trends, patterns, or unexpected findings. Discuss possible limitations and implications of your results.
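Returning to the statistical analysis step above, here is a hedged sketch with simulated group scores showing two of the tests mentioned: an independent-samples t-test for two groups and a one-way ANOVA for three. The group means and sample sizes are hypothetical.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical outcome scores for a control group and two experimental groups
control = rng.normal(50, 10, 40)
treatment_a = rng.normal(55, 10, 40)
treatment_b = rng.normal(60, 10, 40)

# Independent-samples t-test for two groups
t_stat, t_p = stats.ttest_ind(treatment_a, control)

# One-way ANOVA when more than two groups are compared
f_stat, f_p = stats.f_oneway(control, treatment_a, treatment_b)

print(f"t = {t_stat:.2f} (p = {t_p:.4f}), F = {f_stat:.2f} (p = {f_p:.4f})")
```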
Examples of Experimental Design in Research
- Medicine : Testing a new drug’s effectiveness through a randomized controlled trial, where one group receives the drug and another receives a placebo.
- Psychology : Studying the effect of sleep deprivation on memory using a within-subject design, where participants are tested with different sleep conditions.
- Education : Comparing teaching methods in a quasi-experimental design by measuring students’ performance before and after implementing a new curriculum.
- Marketing : Using a factorial design to examine the effects of advertisement type and frequency on consumer purchase behavior.
- Environmental Science : Testing the impact of a pollution reduction policy through a time series design, recording pollution levels before and after implementation.
Experimental design is fundamental to conducting rigorous and reliable research, offering a systematic approach to exploring causal relationships. With various types of designs and methods, researchers can choose the most appropriate setup to answer their research questions effectively. By applying best practices, controlling variables, and selecting suitable statistical methods, experimental design supports meaningful insights across scientific, medical, and social research fields.
- Campbell, D. T., & Stanley, J. C. (1963). Experimental and Quasi-Experimental Designs for Research . Houghton Mifflin Company.
- Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference . Houghton Mifflin.
- Fisher, R. A. (1935). The Design of Experiments . Oliver and Boyd.
- Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics . Sage Publications.
- Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences . Routledge.
Experimental Research Design — 6 mistakes you should never make!
Since their school days, students have performed scientific experiments whose results illustrate and prove the laws and theorems of science. These experiments are built on a strong foundation of experimental research designs.
An experimental research design helps researchers execute their research objectives with more clarity and transparency.
In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.
What Is Experimental Research Design?
Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant, used to measure the differences in the second set. The best example of experimental research methods is quantitative research.
Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.
When Can a Researcher Conduct Experimental Research?
A researcher can conduct experimental research in the following situations —
- When time is an important factor in establishing a relationship between the cause and effect.
- When there is an invariable or never-changing behavior between the cause and effect.
- Finally, when the researcher wishes to understand the importance of the cause and effect.
Importance of Experimental Research Design
To publish significant results, choosing a quality research design forms the foundation to build the research study. Moreover, effective research design helps establish quality decision-making procedures, structures the research to lead to easier data analysis, and addresses the main research question. Therefore, it is essential to cater undivided attention and time to create an experimental research design before beginning the practical experiment.
By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.
Types of Experimental Research Designs
Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:
1. Pre-experimental Research Design
A research study could conduct pre-experimental research design when a group or many groups are under observation after implementing factors of cause and effect of the research. The pre-experimental design will help researchers understand whether further investigation is necessary for the groups under observation.
Pre-experimental research is of three types —
- One-shot Case Study Research Design
- One-group Pretest-posttest Research Design
- Static-group Comparison
2. True Experimental Research Design
A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —
- There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
- A variable that can be manipulated by the researcher
- Random assignment of participants to the groups
This type of experimental research is commonly observed in the physical sciences.
3. Quasi-experimental Research Design
The word “Quasi” means similarity. A quasi-experimental design is similar to a true experimental design. However, the difference between the two is the assignment of the control group. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not required.
The classification of the research subjects, conditions, or groups determines the type of research design to be used.
Advantages of Experimental Research
Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:
- Researchers have firm control over variables to obtain results.
- The subject does not impact the effectiveness of experimental research. Anyone can implement it for research purposes.
- The results are specific.
- Post results analysis, research findings from the same dataset can be repurposed for similar research ideas.
- Researchers can identify the cause and effect of the hypothesis and further analyze this relationship to determine in-depth ideas.
- Experimental research makes an ideal starting point. The collected data could be used as a foundation to build new research ideas for further studies.
6 Mistakes to Avoid While Designing Your Research
There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.
1. Invalid Theoretical Framework
Researchers often fail to check whether their hypothesis is logical and testable. If your research design does not rest on basic assumptions or postulates, then it is fundamentally flawed and you need to rework your research framework.
2. Inadequate Literature Study
Without a comprehensive research literature review , it is difficult to identify and fill the knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the research field, either by adding value to the pertinent literature or challenging previous findings and assumptions.
3. Insufficient or Incorrect Statistical Analysis
Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence. Therefore, incorrect statistical analysis could affect the quality of any quantitative research.
4. Undefined Research Problem
This is one of the most basic aspects of research design. The research problem statement must be clear and to do that, you must set the framework for the development of research questions that address the core problems.
5. Research Limitations
Every study has some limitations. You should anticipate and incorporate those limitations into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing the conclusion.
6. Ethical Implications
The most important yet less talked about topic is the ethical issue. Your research design must include ways to minimize any risk for your participants and also address the research problem or question at hand. If you cannot manage the ethical norms along with your research study, your research objectives and validity could be questioned.
Experimental Research Design Example
In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to be kept in a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).
By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
Experimental research is often the final form of a study in the research process and is considered to provide conclusive and specific results. But it is not suited to every research question: it requires considerable resources, time, and money, and is not easy to conduct unless a foundation of prior research has been built. Yet it is widely used in research institutes and commercial industries because it yields the most conclusive results within the scientific approach.
Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!
Frequently Asked Questions
Randomization is important in experimental research because it ensures unbiased results. It also allows the cause-and-effect relationship to be measured for the particular group of interest.
Experimental research design lays the foundation of a study and structures the research to establish a quality decision-making process.
There are 3 types of experimental research designs. These are pre-experimental research design, true experimental research design, and quasi experimental research design.
The differences between a true experimental and a quasi-experimental design are: 1. The assignment of the control group in quasi-experimental research is non-random, unlike in true experimental design, where it is random. 2. True experimental research always has a control group; on the other hand, one may not always be present in quasi-experimental research.
Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.
8.1 Experimental design: What is it and when should it be used?
Learning objectives.
- Define experiment
- Identify the core features of true experimental designs
- Describe the difference between an experimental group and a control group
- Identify and describe the various types of true experimental designs
Experiments are an excellent data collection strategy for social workers wishing to observe the effects of a clinical intervention or social welfare program. Understanding what experiments are and how they are conducted is useful for all social scientists, whether they actually plan to use this methodology or simply aim to understand findings from experimental studies. An experiment is a method of data collection designed to test hypotheses under controlled conditions. In social scientific research, the term experiment has a precise meaning and should not be used to describe all research methodologies.
Experiments have a long and important history in social science. Behaviorists such as John Watson, B. F. Skinner, Ivan Pavlov, and Albert Bandura used experimental design to demonstrate the various types of conditioning. Using strictly controlled environments, behaviorists were able to isolate a single stimulus as the cause of measurable differences in behavior or physiological responses. The foundations of social learning theory and behavior modification are found in experimental research projects. Moreover, behaviorist experiments brought psychology and social science away from the abstract world of Freudian analysis and towards empirical inquiry, grounded in real-world observations and objectively-defined variables. Experiments are used at all levels of social work inquiry, including agency-based experiments that test therapeutic interventions and policy experiments that test new programs.
Several kinds of experimental designs exist. In general, designs considered to be true experiments contain three basic key features:
- random assignment of participants into experimental and control groups
- a “treatment” (or intervention) provided to the experimental group
- measurement of the effects of the treatment in a post-test administered to both groups
Some true experiments are more complex. Their designs can also include a pre-test and can have more than two groups, but these are the minimum requirements for a design to be a true experiment.
Experimental and control groups
In a true experiment, the effect of an intervention is tested by comparing two groups: one that is exposed to the intervention (the experimental group , also known as the treatment group) and another that does not receive the intervention (the control group ). Importantly, participants in a true experiment need to be randomly assigned to either the control or experimental groups. Random assignment uses a random number generator or some other random process to assign people into experimental and control groups. Random assignment is important in experimental research because it helps to ensure that the experimental group and control group are comparable and that any differences between the experimental and control groups are due to random chance. We will address more of the logic behind random assignment in the next section.
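As a minimal sketch of random assignment with a random number generator (the sample size and baseline measure below are hypothetical), note that with random assignment the two groups' baseline scores should differ only by chance.

```python
import numpy as np

rng = np.random.default_rng(10)

baseline_scores = rng.normal(20, 5, 60)   # hypothetical pre-test measure for 60 recruits

# Random assignment: permute the participant indices, then split them in half
order = rng.permutation(60)
experimental_idx, control_idx = order[:30], order[30:]

# Baseline means in the two groups should be comparable, differing only by chance
print(round(baseline_scores[experimental_idx].mean(), 2),
      round(baseline_scores[control_idx].mean(), 2))
```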
Treatment or intervention
In an experiment, the independent variable is receiving the intervention being tested—for example, a therapeutic technique, prevention program, or access to some service or support. It is less common in social work research, but social science research may also use a stimulus, rather than an intervention, as the independent variable. For example, an electric shock or a reading about death might be used as a stimulus to provoke a response.
In some cases, it may be immoral to withhold treatment completely from a control group within an experiment. If you recruited two groups of people with severe addiction and only provided treatment to one group, the other group would likely suffer. For these cases, researchers use a control group that receives “treatment as usual.” Experimenters must clearly define what treatment as usual means. For example, a standard treatment in substance abuse recovery is attending Alcoholics Anonymous or Narcotics Anonymous meetings. A substance abuse researcher conducting an experiment may use twelve-step programs in their control group and use their experimental intervention in the experimental group. The results would show whether the experimental intervention worked better than normal treatment, which is useful information.
The dependent variable is usually the intended effect the researcher wants the intervention to have. If the researcher is testing a new therapy for individuals with binge eating disorder, their dependent variable may be the number of binge eating episodes a participant reports. The researcher likely expects her intervention to decrease the number of binge eating episodes reported by participants. Thus, she must, at a minimum, measure the number of episodes that occur after the intervention, which is the post-test . In a classic experimental design, participants are also given a pretest to measure the dependent variable before the experimental treatment begins.
Types of experimental design
Let’s put these concepts in chronological order so we can better understand how an experiment runs from start to finish. Once you’ve collected your sample, you’ll need to randomly assign your participants to the experimental group and control group. In a common type of experimental design, you will then give both groups your pretest, which measures your dependent variable, to see what your participants are like before you start your intervention. Next, you will provide your intervention, or independent variable, to your experimental group, but not to your control group. Many interventions last a few weeks or months to complete, particularly therapeutic treatments. Finally, you will administer your post-test to both groups to observe any changes in your dependent variable. What we’ve just described is known as the classical experimental design and is the simplest type of true experimental design. All of the designs we review in this section are variations on this approach. Figure 8.1 visually represents these steps.
An interesting example of experimental research can be found in Shannon K. McCoy and Brenda Major’s (2003) study of people’s perceptions of prejudice. In one portion of this multifaceted study, all participants were given a pretest to assess their levels of depression. No significant differences in depression were found between the experimental and control groups during the pretest. Participants in the experimental group were then asked to read an article suggesting that prejudice against their own racial group is severe and pervasive, while participants in the control group were asked to read an article suggesting that prejudice against a racial group other than their own is severe and pervasive. Clearly, these were not meant to be interventions or treatments to help depression, but were stimuli designed to elicit changes in people’s depression levels. Upon measuring depression scores during the post-test period, the researchers discovered that those who had received the experimental stimulus (the article citing prejudice against their same racial group) reported greater depression than those in the control group. This is just one of many examples of social scientific experimental research.
In addition to classic experimental design, there are two other ways of designing experiments that are considered to fall within the purview of “true” experiments (Babbie, 2010; Campbell & Stanley, 1963). The posttest-only control group design is almost the same as classic experimental design, except it does not use a pretest. Researchers who use posttest-only designs want to eliminate testing effects, in which participants’ scores on a measure change because they have already been exposed to it. If you took multiple SAT or ACT practice exams before you took the real one you sent to colleges, you’ve taken advantage of testing effects to get a better score. Considering the previous example on racism and depression, participants who are given a pretest about depression before being exposed to the stimulus would likely assume that the intervention is designed to address depression. That knowledge could cause them to answer differently on the post-test than they otherwise would. In theory, as long as the control and experimental groups have been determined randomly and are therefore comparable, no pretest is needed. However, most researchers prefer to use pretests in case randomization did not result in equivalent groups and to help assess change over time within both the experimental and control groups.
Researchers wishing to account for testing effects but also gather pretest data can use a Solomon four-group design. In the Solomon four-group design, the researcher uses four groups. Two groups are treated as they would be in a classic experiment—pretest, experimental group intervention, and post-test. The other two groups do not receive the pretest, though one receives the intervention. All groups are given the post-test. Table 8.1 illustrates the features of each of the four groups in the Solomon four-group design. By having one set of experimental and control groups that complete the pretest (Groups 1 and 2) and another set that does not complete the pretest (Groups 3 and 4), researchers using the Solomon four-group design can account for testing effects in their analysis.
Solomon four-group designs are challenging to implement in the real world because they are time- and resource-intensive. Researchers must recruit enough participants to create four groups and implement interventions in two of them.
Overall, true experimental designs are sometimes difficult to implement in a real-world practice environment. It may be impossible to withhold treatment from a control group or randomly assign participants in a study. In these cases, pre-experimental and quasi-experimental designs, which we will discuss in the next section, can be used. However, the differences in rigor from true experimental designs leave their conclusions more open to critique.
Experimental design in macro-level research
You can imagine that social work researchers may be limited in their ability to use random assignment when examining the effects of governmental policy on individuals. For example, it is unlikely that a researcher could randomly assign some states to implement decriminalization of recreational marijuana and some states not to in order to assess the effects of the policy change. There are, however, important examples of policy experiments that use random assignment, including the Oregon Medicaid experiment. In the Oregon Medicaid experiment, the wait list for Oregon’s Medicaid program was so long that state officials conducted a lottery to determine who from the wait list would receive Medicaid (Baicker et al., 2013). Researchers used the lottery as a natural experiment that included random assignment. People selected to receive Medicaid were in the experimental group, and those who remained on the wait list were in the control group. There are some practical complications with macro-level experiments, just as with other experiments. For example, the ethical concern with using people on a wait list as a control group exists in macro-level research just as it does in micro-level research.
Key Takeaways
- True experimental designs require random assignment.
- Control groups do not receive an intervention, and experimental groups receive an intervention.
- The basic components of a true experiment include a pretest, posttest, control group, and experimental group.
- Testing effects may cause researchers to use variations on the classic experimental design.
- Classic experimental design- uses random assignment, an experimental and control group, as well as pre- and posttesting
- Control group- the group in an experiment that does not receive the intervention
- Experiment- a method of data collection designed to test hypotheses under controlled conditions
- Experimental group- the group in an experiment that receives the intervention
- Posttest- a measurement taken after the intervention
- Posttest-only control group design- a type of experimental design that uses random assignment, and an experimental and control group, but does not use a pretest
- Pretest- a measurement taken prior to the intervention
- Random assignment- using a random process to assign people into experimental and control groups
- Solomon four-group design- uses random assignment, two experimental and two control groups, pretests for half of the groups, and posttests for all
- Testing effects- when a participant’s scores on a measure change because they have already been exposed to it
- True experiments- a group of experimental designs that contain independent and dependent variables, pretesting and posttesting, and experimental and control groups
Foundations of Social Work Research Copyright © 2020 by Rebecca L. Mauldin is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.
14.2 True experiments
Learning objectives.
Learners will be able to…
- Describe a true experimental design in social work research
- Understand the different types of true experimental designs
- Determine what kinds of research questions true experimental designs are suited for
- Discuss advantages and disadvantages of true experimental designs
A true experiment, often considered the “gold standard” in research, is one of the most rigorous of all research designs. In this design, one or more independent variables (as treatments) are manipulated by the researcher, subjects are randomly assigned (i.e., random assignment) to different treatment levels, and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its ability to increase internal validity and help establish causality through treatment manipulation, while controlling for the effects of extraneous variables. As such, true experiments are best suited for explanatory research questions.
In true experimental design, research subjects are assigned to either an experimental group, which receives the treatment or intervention being investigated, or a control group, which does not. Control groups may receive no treatment at all, the standard treatment (which is called “treatment as usual” or TAU), or a treatment that entails some type of contact or interaction without the characteristics of the intervention being investigated. For example, the control group may participate in a support group while the experimental group is receiving a new group-based therapeutic intervention consisting of education and cognitive behavioral group therapy.
After determining the nature of the experimental and control groups, the next decision a researcher must make is when they need to collect data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle data collection another way? Below, we’ll discuss three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.
Using a true experiment in social work research is often difficult and can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.
For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The participants in the experimental group will receive CBT, while the participants in the control group will receive a series of videos about social anxiety.
Classical experiments (pretest posttest control group design)
The elements of a classical experiment are (1) random assignment of participants into an experimental and control group, (2) a pretest to assess the outcome(s) of interest for each group, (3) delivery of an intervention/treatment to the experimental group, and (4) a posttest to both groups to assess potential change in the outcome(s).
When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the components of the experiment. Table 14.2 starts us off by laying out what the abbreviations mean.
Figure 14.1 depicts a classical experiment using our example of assessing the intervention of CBT for social anxiety. In the figure, RA denotes random assignment to the experimental group A and RB is random assignment to the control group B. O1 (observation 1) denotes the pretest, Xe denotes the experimental intervention, and O2 (observation 2) denotes the posttest.
The more general, or universal, notation for classical experimental design is shown in Figure 14.2.
In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way (Figure 14.3), with Xi denoting treatment as usual:
Hopefully, these diagrams provide you with a visualization of how this type of experiment establishes temporality, a key component of a causal relationship. By administering the pretest, researchers can assess whether the change in the outcome occurred after the intervention. Assuming there is a change in the scores between the pretest and posttest, we would be able to say that yes, the change did occur after the intervention.
Posttest only control group design
Posttest only control group design involves only giving participants a posttest, just like it sounds. But why would you use this design instead of using a pretest posttest design? One reason could be to avoid potential testing effects that can happen when research participants take a pretest.
In research, the testing effect threatens internal validity when the pretest changes the way the participants respond on the posttest or subsequent assessments (Flannelly, Flannelly, & Jankowski, 2018). [1] A common example occurs when testing interventions for cognitive impairment in older adults. By taking a cognitive assessment during the pretest, participants get exposed to the items on the assessment and get to “practice” taking it (see, for example, Cooley et al., 2015). [2] They may perform better the second time they take it because they have learned how to take the test, not because there have been changes in cognition. This specific type of testing effect is called the practice effect. [3]
The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome. Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the posttest, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is. To mitigate the influence of testing effects, posttest only control group designs do not administer a pretest to participants. Figure 14.4 depicts this.
A drawback to the posttest only control group design is that without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. Because there is no pretest, researchers cannot check whether the groups were actually equivalent before the intervention; the design relies entirely on random assignment to create groups that are equivalent at baseline. Researchers must balance this consideration with the benefits of this type of design.
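A rough sketch of the analysis, in Python with invented numbers, shows why random assignment carries so much weight in this design: with no pretest, the only comparison available is between the two groups' posttest scores.

```python
import random
import statistics

random.seed(1)

# Hypothetical posttest-only data: 50 people per group, scores on an
# invented social anxiety scale (higher = more anxiety). No pretest exists.
experimental_post = [random.gauss(18, 5) for _ in range(50)]  # received CBT
control_post = [random.gauss(24, 5) for _ in range(50)]       # watched videos

# Because groups were formed by random assignment, we treat them as
# equivalent at baseline and compare posttest means directly.
difference = statistics.mean(experimental_post) - statistics.mean(control_post)
print(f"Posttest mean difference (experimental - control): {difference:.1f}")
```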
Solomon four group design
One way we can possibly measure how much the testing effect threatens internal validity is with the Solomon four group design. Basically, as part of this experiment, there are two experimental groups and two control groups. The first pair of experimental/control groups receives both a pretest and a posttest. The other pair receives only a posttest (Figure 14.5). In addition to addressing testing effects, this design also addresses the problems of establishing time order and equivalent groups in posttest only control group designs.
For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our posttest measures, and groups C and D would take only our posttest measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
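As a sketch of how that comparison might work, the Python snippet below uses invented posttest means for the four groups. It assumes, as in the standard Solomon design, that one of the two unpretested groups (labeled Group C here) receives the intervention; subtracting the two effect estimates gives a rough sense of how much the pretest itself moved the scores.

```python
# Invented posttest means on a hypothetical anxiety scale (lower = better).
posttest_means = {
    "A: pretest + CBT": 17.0,
    "B: pretest, no CBT": 24.0,
    "C: no pretest + CBT": 19.5,
    "D: no pretest, no CBT": 24.5,
}

# Intervention effect estimated within the pretested pair and within
# the unpretested pair.
effect_pretested = posttest_means["A: pretest + CBT"] - posttest_means["B: pretest, no CBT"]
effect_unpretested = posttest_means["C: no pretest + CBT"] - posttest_means["D: no pretest, no CBT"]

# If the two estimates diverge, the pretest itself is likely influencing scores.
print(f"Effect among pretested groups:   {effect_pretested:.1f}")
print(f"Effect among unpretested groups: {effect_unpretested:.1f}")
print(f"Rough testing-effect estimate:   {effect_pretested - effect_unpretested:.1f}")
```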
Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.
Key Takeaways
- True experimental design is best suited for explanatory research questions.
- True experiments require random assignment of participants to control and experimental groups.
- Pretest posttest research design involves two points of measurement—one pre-intervention and one post-intervention.
- Posttest only research design involves only one point of measurement—after the intervention or treatment. It is a useful design to minimize the effect of testing effects on our results.
- Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a posttest, while the other receives only a posttest. This can help uncover the influence of testing effects.
TRACK 1 (IF YOU ARE CREATING A RESEARCH PROPOSAL FOR THIS CLASS):
- Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
- What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a researcher?
- What hypothesis(es) would you test using this true experiment?
TRACK 2 (IF YOU AREN’T CREATING A RESEARCH PROPOSAL FOR THIS CLASS):
Imagine you are interested in studying child welfare practice. You are interested in learning more about community-based programs aimed to prevent child maltreatment and to prevent out-of-home placement for children.
- Think about a true experiment you might conduct for this research project. Which design would be best for this research, and why?
- What challenges or limitations might make it unrealistic (or at least very complicated) for you to carry out your true experimental design in the real world as a researcher?
- Flannelly, K. J., Flannelly, L. T., & Jankowski, K. R. B. (2018). Threats to the internal validity of experimental and quasi-experimental research in healthcare. Journal of Health Care Chaplaincy, 24(3), 107-130. https://doi.org/10.1080/08854726.2017.1421019 ↵
- Cooley, S. A., Heaps, J. M., Bolzenius, J. D., Salminen, L. E., Baker, L. M., Scott, S. E., & Paul, R. H. (2015). Longitudinal change in performance on the Montreal Cognitive Assessment in older adults. The Clinical Neuropsychologist, 29(6), 824-835. https://doi.org/10.1080/13854046.2015.1087596 ↵
- Duff, K., Beglinger, L. J., Schultz, S. K., Moser, D. J., McCaffrey, R. J., Haase, R. F., Westervelt, H. J., Langbehn, D. R., Paulsen, J. S., & Huntington's Study Group (2007). Practice effects in the prediction of long-term cognitive outcome in three patient samples: A novel prognostic index. Archives of Clinical Neuropsychology, 22(1), 15–24. https://doi.org/10.1016/j.acn.2006.08.013 ↵
An experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed
Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.
the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief
A demonstration that a change occurred after an intervention. An important criterion for establishing causality.
an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment
The measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself
improvements in cognitive assessments due to exposure to the instrument
Doctoral Research Methods in Social Work Copyright © by Mavs Open Press. All Rights Reserved.
13. Experimental design
Chapter outline.
- What is an experiment and when should you use one? (8 minute read)
- True experimental designs (7 minute read)
- Quasi-experimental designs (8 minute read)
- Non-experimental designs (5 minute read)
- Critical, ethical, and cultural considerations (5 minute read)
Content warning : examples in this chapter contain references to non-consensual research in Western history, including experiments conducted during the Holocaust and on African Americans (section 13.6).
13.1 What is an experiment and when should you use one?
Learning objectives.
Learners will be able to…
- Identify the characteristics of a basic experiment
- Describe causality in experimental design
- Discuss the relationship between dependent and independent variables in experiments
- Explain the links between experiments and generalizability of results
- Describe advantages and disadvantages of experimental designs
The basics of experiments
The first experiment I can remember using was for my fourth grade science fair. I wondered if latex- or oil-based paint would hold up to sunlight better. So, I went to the hardware store and got a few small cans of paint and two sets of wooden paint sticks. I painted one with oil-based paint and the other with latex-based paint of different colors and put them in a sunny spot in the back yard. My hypothesis was that the oil-based paint would fade the most and that more fading would happen the longer I left the paint sticks out. (I know, it’s obvious, but I was only 10.)
I checked in on the paint sticks every few days for a month and wrote down my observations. The first part of my hypothesis ended up being wrong—it was actually the latex-based paint that faded the most. But the second part was right, and the paint faded more and more over time. This is a simple example, of course—experiments get a heck of a lot more complex than this when we’re talking about real research.
Merriam-Webster defines an experiment as “an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.” Each of these three components of the definition will come in handy as we go through the different types of experimental design in this chapter. Most of us probably think of the physical sciences when we think of experiments, and for good reason—these experiments can be pretty flashy! But social science and psychological research follow the same scientific methods, as we’ve discussed in this book.
Experiments can be used in the social sciences just as they can in the physical sciences. It makes sense to use an experiment when you want to determine the cause of a phenomenon with as much accuracy as possible. Some types of experimental designs do this more precisely than others, as we’ll see throughout the chapter. If you’ll remember back to Chapter 11 and the discussion of validity, experiments are the best way to ensure internal validity, or the extent to which a change in your independent variable causes a change in your dependent variable.
Experimental designs for research projects are most appropriate when trying to uncover or test a hypothesis about the cause of a phenomenon, so they are best for explanatory research questions. As we’ll learn throughout this chapter, different circumstances are appropriate for different types of experimental designs. Each type of experimental design has advantages and disadvantages, and some are better at controlling the effect of extraneous variables—those variables and characteristics that have an effect on your dependent variable, but aren’t the primary variable whose influence you’re interested in testing. For example, in a study that tries to determine whether aspirin lowers a person’s risk of a fatal heart attack, a person’s race would likely be an extraneous variable because you primarily want to know the effect of aspirin.
In practice, many types of experimental designs can be logistically challenging and resource-intensive. As practitioners, the likelihood that we will be involved in some of the types of experimental designs discussed in this chapter is fairly low. However, it’s important to learn about these methods, even if we might not ever use them, so that we can be thoughtful consumers of research that uses experimental designs.
While we might not use all of these types of experimental designs, many of us will engage in evidence-based practice during our time as social workers. A lot of research developing evidence-based practice, which has a strong emphasis on generalizability, will use experimental designs. You’ve undoubtedly seen one or two in your literature search so far.
The logic of experimental design
How do we know that one phenomenon causes another? The complexity of the social world in which we practice and conduct research means that causes of social problems are rarely cut and dry. Uncovering explanations for social problems is key to helping clients address them, and experimental research designs are one road to finding answers.
As you read about in Chapter 8 (and as we’ll discuss again in Chapter 15 ), just because two phenomena are related in some way doesn’t mean that one causes the other. Ice cream sales increase in the summer, and so does the rate of violent crime; does that mean that eating ice cream is going to make me murder someone? Obviously not, because ice cream is great. The reality of that relationship is far more complex—it could be that hot weather makes people more irritable and, at times, violent, while also making people want ice cream. More likely, though, there are other social factors not accounted for in the way we just described this relationship.
Experimental designs can help clear up at least some of this fog by allowing researchers to isolate the effect of interventions on dependent variables by controlling extraneous variables. In true experimental design (discussed in the next section) and some quasi-experimental designs, researchers accomplish this with the control group and the experimental group. (The experimental group is sometimes called the “treatment group,” but we will call it the experimental group in this chapter.) The control group does not receive the intervention you are testing (they may receive no intervention or what is known as “treatment as usual”), while the experimental group does. (You will hopefully remember our earlier discussion of control variables in Chapter 8—conceptually, the use of the word “control” here is the same.)
In a well-designed experiment, your control group should look almost identical to your experimental group in terms of demographics and other relevant factors. What if we want to know the effect of CBT on social anxiety, but we have learned in prior research that men tend to have a more difficult time overcoming social anxiety? We would want our control and experimental groups to have a similar gender mix because it would limit the effect of gender on our results, since ostensibly, both groups’ results would be affected by gender in the same way. If your control group has 5 women, 6 men, and 4 non-binary people, then your experimental group should be made up of roughly the same gender balance to help control for the influence of gender on the outcome of your intervention. (In reality, the groups should be similar along other dimensions, as well, and your group will likely be much larger.) The researcher will use the same outcome measures for both groups and compare them, and assuming the experiment was designed correctly, get a pretty good answer about whether the intervention had an effect on social anxiety.
You will also hear people talk about comparison groups, which are similar to control groups. The primary difference between the two is that a control group is populated using random assignment, but a comparison group is not. Random assignment entails using a random process to decide which participants are put into the control or experimental group (which participants receive an intervention and which do not). By randomly assigning participants to a group, you can reduce the effect of extraneous variables on your research because there won’t be a systematic difference between the groups.
Do not confuse random assignment with random sampling. Random sampling is a method for selecting a sample from a population, and is rarely used in psychological research. Random assignment is a method for assigning participants in a sample to the different conditions, and it is an important element of all experimental research in psychology and other related fields. Random sampling also helps a great deal with generalizability, whereas random assignment increases internal validity.
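The distinction is easy to blur, so here is a minimal Python sketch separating the two steps. The population size and sample size are arbitrary placeholders.

```python
import random

random.seed(7)

# Hypothetical sampling frame of 10,000 people, identified by number.
population = list(range(10_000))

# Random SAMPLING: decides who is in the study at all
# (supports generalizability to the population).
sample = random.sample(population, 200)

# Random ASSIGNMENT: decides which sampled participants receive the
# intervention (supports internal validity).
random.shuffle(sample)
experimental_group = sample[:100]
control_group = sample[100:]

print(len(experimental_group), len(control_group))  # 100 100
```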
We have already learned about internal validity in Chapter 11. The use of an experimental design will bolster internal validity since it works to isolate causal relationships. As we will see in the coming sections, some types of experimental design do this more effectively than others. It’s also worth considering that true experiments, which most effectively show causality, are often difficult and expensive to implement. Although other experimental designs aren’t perfect, they still produce useful, valid evidence and may be more feasible to carry out.
Key Takeaways
- Experimental designs are useful for establishing causality, but some types of experimental design do this better than others.
- Experiments help researchers isolate the effect of the independent variable on the dependent variable by controlling for the effect of extraneous variables.
- Experiments use a control/comparison group and an experimental group to test the effects of interventions. These groups should be as similar to each other as possible in terms of demographics and other relevant factors.
- True experiments have control groups with randomly assigned participants, while other types of experiments have comparison groups to which participants are not randomly assigned.
- Think about the research project you’ve been designing so far. How might you use a basic experiment to answer your question? If your question isn’t explanatory, try to formulate a new explanatory question and consider the usefulness of an experiment.
- Why is establishing a simple relationship between two variables not indicative of one causing the other?
13.2 True experimental design
- Describe a true experimental design in social work research
- Understand the different types of true experimental designs
- Determine what kinds of research questions true experimental designs are suited for
- Discuss advantages and disadvantages of true experimental designs
True experimental design, often considered the “gold standard” in research, is thought of as one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity and its ability to establish causality through treatment manipulation, while controlling for the effects of extraneous variables. Sometimes the treatment level is no treatment, while other times it is simply a different treatment than the one we are trying to evaluate. For example, we might have a control group that is made up of people who will not receive any treatment for a particular condition. Or, a control group could consist of people who consent to treatment with DBT when we are testing the effectiveness of CBT.
As we discussed in the previous section, a true experiment has a control group with participants randomly assigned, and an experimental group. This is the most basic element of a true experiment. The next decision a researcher must make is when they need to gather data during their experiment. Do they take a baseline measurement and then a measurement after treatment, or just a measurement after treatment, or do they handle measurement another way? Below, we’ll discuss the three main types of true experimental designs. There are sub-types of each of these designs, but here, we just want to get you started with some of the basics.
Using a true experiment in social work research is often pretty difficult since, as I mentioned earlier, true experiments can be quite resource intensive. True experiments work best with relatively large sample sizes, and random assignment, a key criterion for a true experimental design, is hard (and unethical) to execute in practice when you have people in dire need of an intervention. Nonetheless, some of the strongest evidence bases are built on true experiments.
For the purposes of this section, let’s bring back the example of CBT for the treatment of social anxiety. We have a group of 500 individuals who have agreed to participate in our study, and we have randomly assigned them to the control and experimental groups. The folks in the experimental group will receive CBT, while the folks in the control group will receive more unstructured, basic talk therapy. These designs, as we talked about above, are best suited for explanatory research questions.
Before we get started, take a look at the table below. When explaining experimental research designs, we often use diagrams with abbreviations to visually represent the experiment. Table 13.1 starts us off by laying out what each of the abbreviations mean.
Pretest and post-test control group design
In pretest and post-test control group design, participants are given a pretest of some kind to measure their baseline state before their participation in an intervention. In our social anxiety experiment, we would have participants in both the experimental and control groups complete some measure of social anxiety—most likely an established scale and/or a structured interview—before they start their treatment. As part of the experiment, we would have a defined time period during which the treatment would take place (let’s say 12 weeks, just for illustration). At the end of 12 weeks, we would give both groups the same measure as a post-test.
In the diagram, RA (random assignment group A) is the experimental group and RB is the control group. O1 denotes the pre-test, Xe denotes the experimental intervention, and O2 denotes the post-test. Let’s look at this diagram another way, using the example of CBT for social anxiety that we’ve been talking about.
In a situation where the control group received treatment as usual instead of no intervention, the diagram would look this way, with Xi denoting treatment as usual (Figure 13.3).
Hopefully, these diagrams provide you with a visualization of how this type of experiment establishes time order, a key component of a causal relationship. Did the change occur after the intervention? Assuming there is a change in the scores between the pretest and post-test, we would be able to say that yes, the change did occur after the intervention. Causality can’t exist if the change happened before the intervention—this would mean that something else led to the change, not our intervention.
Post-test only control group design
Post-test only control group design involves only giving participants a post-test, just like it sounds (Figure 13.4).
But why would you use this design instead of using a pretest/post-test design? One reason could be the testing effect that can happen when research participants take a pretest. In research, the testing effect refers to “measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself” (Engel & Schutt, 2017, p. 444). [1] (When we say “measurement error,” all we mean is the accuracy of the way we measure the dependent variable.) Figure 13.4 is a visualization of this type of experiment. The testing effect isn’t always bad in practice—our initial assessments might help clients identify or put into words feelings or experiences they are having when they haven’t been able to do that before. In research, however, we might want to control its effects to isolate a cleaner causal relationship between intervention and outcome.
Going back to our CBT for social anxiety example, we might be concerned that participants would learn about social anxiety symptoms by virtue of taking a pretest. They might then identify that they have those symptoms on the post-test, even though they are not new symptoms for them. That could make our intervention look less effective than it actually is.
However, without a baseline measurement, establishing causality can be more difficult. If we don’t know someone’s state of mind before our intervention, how do we know our intervention did anything at all? Establishing time order is thus a little more difficult. You must balance this consideration with the benefits of this type of design.
Solomon four group design
One way we can possibly measure how much the testing effect might change the results of the experiment is with the Solomon four group design. Basically, as part of this experiment, you have two control groups and two experimental groups. The first pair of groups receives both a pretest and a post-test. The other pair of groups receives only a post-test (Figure 13.5). This design helps address the problem of establishing time order in post-test only control group designs.
For our CBT project, we would randomly assign people to four different groups instead of just two. Groups A and B would take our pretest measures and our post-test measures, and groups C and D would take only our post-test measures. We could then compare the results among these groups and see if they’re significantly different between the folks in A and B, and C and D. If they are, we may have identified some kind of testing effect, which enables us to put our results into full context. We don’t want to draw a strong causal conclusion about our intervention when we have major concerns about testing effects without trying to determine the extent of those effects.
Solomon four group designs are less common in social work research, primarily because of the logistics and resource needs involved. Nonetheless, this is an important experimental design to consider when we want to address major concerns about testing effects.
- True experimental design is best suited for explanatory research questions.
- True experiments require random assignment of participants to control and experimental groups.
- Pretest/post-test research design involves two points of measurement—one pre-intervention and one post-intervention.
- Post-test only research design involves only one point of measurement—post-intervention. It is a useful design to minimize the effect of testing effects on our results.
- Solomon four group research design involves both of the above types of designs, using 2 pairs of control and experimental groups. One group receives both a pretest and a post-test, while the other receives only a post-test. This can help uncover the influence of testing effects.
- Think about a true experiment you might conduct for your research project. Which design would be best for your research, and why?
- What challenges or limitations might make it unrealistic (or at least very complicated!) for you to carry out your true experimental design in the real world as a student researcher?
- What hypothesis(es) would you test using this true experiment?
13.4 Quasi-experimental designs
- Describe a quasi-experimental design in social work research
- Understand the different types of quasi-experimental designs
- Determine what kinds of research questions quasi-experimental designs are suited for
- Discuss advantages and disadvantages of quasi-experimental designs
Quasi-experimental designs are a lot more common in social work research than true experimental designs. Although quasi-experiments don’t do as good a job of giving us robust proof of causality, they still allow us to establish time order, which is a key element of causality. The prefix quasi means “resembling,” so quasi-experimental research is research that resembles experimental research, but is not true experimental research. Nonetheless, given proper research design, quasi-experiments can still provide extremely rigorous and useful results.
There are a few key differences between true experimental and quasi-experimental research. The primary difference between quasi-experimental research and true experimental research is that quasi-experimental research does not involve random assignment to control and experimental groups. Instead, we talk about comparison groups in quasi-experimental research. As a result, these types of experiments don’t control the effect of extraneous variables as well as a true experiment.
Quasi-experiments are most likely to be conducted in field settings in which random assignment is difficult or impossible. They are often conducted to evaluate the effectiveness of a treatment—perhaps a type of psychotherapy or an educational intervention. We’re able to eliminate some threats to internal validity, but we can’t do this as effectively as we can with a true experiment. Realistically, our CBT-social anxiety project is likely to be a quasi-experiment, based on the resources and participant pool we’re likely to have available.
It’s important to note that not all quasi-experimental designs have a comparison group. There are many different kinds of quasi-experiments, but we will discuss the three main types below: nonequivalent comparison group designs, time series designs, and ex post facto comparison group designs.
Nonequivalent comparison group design
You will notice that this type of design looks extremely similar to the pretest/post-test design that we discussed in section 13.3. But instead of random assignment to control and experimental groups, researchers use other methods to construct their comparison and experimental groups. A diagram of this design will also look very similar to pretest/post-test design, but you’ll notice we’ve removed the “R” from our groups, since they are not randomly assigned (Figure 13.6).
Researchers using this design select a comparison group that’s as close as possible based on relevant factors to their experimental group. Engel and Schutt (2017) [2] identify two different selection methods:
- Individual matching: Researchers take the time to match individual cases in the experimental group to similar cases in the comparison group. It can be difficult, however, to match participants on all the variables you want to control for.
- Aggregate matching: Instead of trying to match individual participants to each other, researchers try to match the population profile of the comparison and experimental groups. For example, researchers would try to match the groups on average age, gender balance, or median income. This is a less resource-intensive matching method, but researchers have to ensure that participants aren’t choosing which group (comparison or experimental) they are a part of.
As we’ve already talked about, this kind of design provides weaker evidence that the intervention itself leads to a change in outcome. Nonetheless, we are still able to establish time order using this method, and can thereby show an association between the intervention and the outcome. Like true experimental designs, this type of quasi-experimental design is useful for explanatory research questions.
What might this look like in a practice setting? Let’s say you’re working at an agency that provides CBT and other types of interventions, and you have identified a group of clients who are seeking help for social anxiety, as in our earlier example. Once you’ve obtained consent from your clients, you can create a comparison group using one of the matching methods we just discussed. If the group is small, you might match using individual matching, but if it’s larger, you’ll probably sort people by demographics to try to get similar population profiles. (You can do aggregate matching more easily when your agency has some kind of electronic records or database, but it’s still possible to do manually.)
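A small Python sketch of an aggregate matching check might look like the following. The client records are fabricated; in an agency setting they would come from case records or an electronic database, and you would compare whichever characteristics matter for your outcome.

```python
import statistics

# Fabricated client records for illustration only.
experimental = [
    {"age": 34, "gender": "woman"}, {"age": 29, "gender": "man"},
    {"age": 41, "gender": "non-binary"}, {"age": 37, "gender": "woman"},
]
comparison = [
    {"age": 33, "gender": "man"}, {"age": 40, "gender": "woman"},
    {"age": 31, "gender": "woman"}, {"age": 36, "gender": "non-binary"},
]

def aggregate_profile(group):
    """Return the group's mean age and a count of each gender."""
    mean_age = statistics.mean(person["age"] for person in group)
    gender_counts = {}
    for person in group:
        gender_counts[person["gender"]] = gender_counts.get(person["gender"], 0) + 1
    return mean_age, gender_counts

for label, group in (("Experimental", experimental), ("Comparison", comparison)):
    mean_age, genders = aggregate_profile(group)
    print(f"{label}: mean age {mean_age:.1f}, gender mix {genders}")
```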
Time series design
Another type of quasi-experimental design is a time series design. Unlike other types of experimental design, time series designs do not have a comparison group. A time series is a set of measurements taken at intervals over a period of time (Figure 13.7). Proper time series design should include at least three pre- and post-intervention measurement points. While there are a few types of time series designs, we’re going to focus on the most common: interrupted time series design.
But why use this method? Here’s an example. Let’s think about elementary student behavior throughout the school year. As anyone with children or who is a teacher knows, kids get very excited and animated around holidays, days off, or even just on a Friday afternoon. This fact might mean that around those times of year, there are more reports of disruptive behavior in classrooms. What if we took our one and only measurement in mid-December? It’s possible we’d see a higher-than-average rate of disruptive behavior reports, which could bias our results if our next measurement is around a time of year students are in a different, less excitable frame of mind. When we take multiple measurements throughout the first half of the school year, we can establish a more accurate baseline for the rate of these reports by looking at the trend over time.
We may want to test the effect of extended recess times in elementary school on reports of disruptive behavior in classrooms. When students come back after the winter break, the school extends recess by 10 minutes each day (the intervention), and the researchers start tracking the monthly reports of disruptive behavior again. These reports could be subject to the same fluctuations as the pre-intervention reports, and so we once again take multiple measurements over time to try to control for those fluctuations.
This method improves the extent to which we can establish causality because we are accounting for a major extraneous variable in the equation—the passage of time. On its own, it does not allow us to account for other extraneous variables, but it does establish time order and association between the intervention and the trend in reports of disruptive behavior. Finding a stable condition before the treatment that changes after the treatment is evidence for causality between treatment and outcome.
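The monthly counts below are invented, but they sketch the basic interrupted time series logic in Python: establish a baseline from several pre-intervention measurements, then compare it with the post-intervention measurements. (A full analysis would also model trend and seasonality rather than just comparing means.)

```python
import statistics

# Invented monthly counts of disruptive-behavior reports.
pre_intervention = [42, 38, 45, 51, 40, 47]    # six months before longer recess
post_intervention = [31, 28, 34, 30, 27, 29]   # six months after

baseline_mean = statistics.mean(pre_intervention)
followup_mean = statistics.mean(post_intervention)

print(f"Mean monthly reports before the change: {baseline_mean:.1f}")
print(f"Mean monthly reports after the change:  {followup_mean:.1f}")
print(f"Change in the monthly average:          {followup_mean - baseline_mean:+.1f}")
```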
Ex post facto comparison group design
Ex post facto (Latin for “after the fact”) designs are extremely similar to nonequivalent comparison group designs. There are still comparison and experimental groups, pretest and post-test measurements, and an intervention. But in ex post facto designs, participants are assigned to the comparison and experimental groups once the intervention has already happened. This type of design often occurs when interventions are already up and running at an agency and the agency wants to assess effectiveness based on people who have already completed treatment.
In most clinical agency environments, social workers conduct both initial and exit assessments, so there are usually some kind of pretest and post-test measures available. We also typically collect demographic information about our clients, which could allow us to try to use some kind of matching to construct comparison and experimental groups.
In terms of internal validity and establishing causality, ex post facto designs are a bit of a mixed bag. The ability to establish causality depends partially on the ability to construct comparison and experimental groups that are demographically similar so we can control for these extraneous variables.
Quasi-experimental designs are common in social work intervention research because, when designed correctly, they balance the intense resource needs of true experiments with the realities of research in practice. They still offer researchers tools to gather robust evidence about whether interventions are having positive effects for clients.
- Quasi-experimental designs are similar to true experiments, but do not require random assignment to experimental and control groups.
- In quasi-experimental projects, the group not receiving the treatment is called the comparison group, not the control group.
- Nonequivalent comparison group design is nearly identical to pretest/post-test experimental design, but participants are not randomly assigned to the experimental and control groups. As a result, this design provides slightly less robust evidence for causality.
- Nonequivalent groups can be constructed by individual matching or aggregate matching .
- Time series design does not have a control or experimental group, and instead compares the condition of participants before and after the intervention by measuring relevant factors at multiple points in time. This allows researchers to mitigate the error introduced by the passage of time.
- Ex post facto comparison group designs are also similar to true experiments, but experimental and comparison groups are constructed after the intervention is over. This makes it more difficult to control for the effect of extraneous variables, but still provides useful evidence for causality because it maintains the time order of the experiment.
- Think back to the experiment you considered for your research project in Section 13.3. Now that you know more about quasi-experimental designs, do you still think it's a true experiment? Why or why not?
- What should you consider when deciding whether an experimental or quasi-experimental design would be more feasible or fit your research question better?
13.5 Non-experimental designs
Learners will be able to...
- Describe non-experimental designs in social work research
- Discuss how non-experimental research differs from true and quasi-experimental research
- Demonstrate an understanding of the different types of non-experimental designs
- Determine what kinds of research questions non-experimental designs are suited for
- Discuss advantages and disadvantages of non-experimental designs
The previous sections have laid out the basics of some rigorous approaches to establish that an intervention is responsible for changes we observe in research participants. This type of evidence is extremely important to build an evidence base for social work interventions, but it's not the only type of evidence to consider. We will discuss qualitative methods, which provide us with rich, contextual information, in Part 4 of this text. The designs we'll talk about in this section are sometimes used in qualitative research, but in keeping with our discussion of experimental design so far, we're going to stay in the quantitative research realm for now. Non-experimental research is also often a stepping stone for more rigorous experimental design in the future, as it can help test the feasibility of your research.
In general, non-experimental designs do not strongly support causality and don't address threats to internal validity. However, that's not really what they're intended for. Non-experimental designs are useful for a few different types of research, including explanatory questions in program evaluation. Certain types of non-experimental design are also helpful for researchers when they are trying to develop a new assessment or scale. Other times, researchers or agency staff did not get a chance to gather any assessment information before an intervention began, so a pretest/post-test design is not possible.
A significant benefit of these types of designs is that they're pretty easy to execute in a practice or agency setting. They don't require a comparison or control group, and as Engel and Schutt (2017) [3] point out, they "flow from a typical practice model of assessment, intervention, and evaluating the impact of the intervention" (p. 177). Thus, these designs are fairly intuitive for social workers, even when they aren't expert researchers. Below, we will go into some detail about the different types of non-experimental design.
One group pretest/post-test design
Also known as a before-after one-group design, this type of research design does not have a comparison group and everyone who participates in the research receives the intervention (Figure 13.8). This is a common type of design in program evaluation in the practice world. Controlling for extraneous variables is difficult or impossible in this design, but given that it is still possible to establish some measure of time order, it does provide weak support for causality.
Imagine, for example, a researcher who is interested in the effectiveness of an anti-drug education program on elementary school students' attitudes toward illegal drugs. The researcher could assess students' attitudes about illegal drugs (O1), implement the anti-drug program (X), and then immediately after the program ends, the researcher could once again measure students' attitudes toward illegal drugs (O2). You can see how this would be relatively simple to do in practice, and have probably been involved in this type of research design yourself, even if informally. But hopefully, you can also see that this design would not provide us with much evidence for causality because we have no way of controlling for the effect of extraneous variables. A lot of things could have affected any change in students' attitudes—maybe girls already had different attitudes about illegal drugs than children of other genders, and when we look at the class's results as a whole, we couldn't account for that influence using this design.
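A sketch of that before-after comparison in Python, with made-up attitude scores, is below. Each position in the two lists represents the same student, so the analysis is simply the average within-student change.

```python
import statistics

# Made-up attitude scores (0-10, higher = stronger anti-drug attitude)
# for the same ten students before (O1) and after (O2) the program.
before = [4, 5, 3, 6, 5, 4, 7, 5, 6, 4]
after = [6, 6, 4, 7, 5, 5, 8, 6, 7, 5]

changes = [post - pre for pre, post in zip(before, after)]
print(f"Average change per student: {statistics.mean(changes):+.1f}")

# Without a comparison group, we cannot rule out other explanations for
# this change (maturation, outside events, testing effects).
```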
All of that doesn't mean these results aren't useful, however. If we find that children's attitudes didn't change at all after the drug education program, then we need to think seriously about how to make it more effective or whether we should be using it at all. (This immediate, practical application of our results highlights a key difference between program evaluation and research, which we will discuss in Chapter 23 .)
After-only design
As the name suggests, this type of non-experimental design involves measurement only after an intervention. There is no comparison or control group, and everyone receives the intervention. I have seen this design repeatedly in my time as a program evaluation consultant for nonprofit organizations, because often these organizations realize too late that they would like to or need to have some sort of measure of what effect their programs are having.
Because there is no pretest and no comparison group, this design is not useful for supporting causality since we can't establish the time order and we can't control for extraneous variables. However, that doesn't mean it's not useful at all! Sometimes, agencies need to gather information about how their programs are functioning. A classic example of this design is satisfaction surveys—realistically, these can only be administered after a program or intervention. Questions regarding satisfaction, ease of use or engagement, or other questions that don't involve comparisons are best suited for this type of design.
Static-group design
A final type of non-experimental research is the static-group design. In this type of research, there are both comparison and experimental groups, which are not randomly assigned. There is no pretest, only a post-test, and the comparison group has to be constructed by the researcher. Sometimes, researchers will use matching techniques to construct the groups, but often, the groups are constructed by convenience of who is being served at the agency.
Non-experimental research designs are easy to execute in practice, but we must be cautious about drawing causal conclusions from the results. A positive result may still suggest that we should continue using a particular intervention (and no result or a negative result should make us reconsider whether we should use that intervention at all). You have likely seen non-experimental research in your daily life or at your agency, and knowing the basics of how to structure such a project will help you ensure you are providing clients with the best care possible.
- Non-experimental designs are useful for describing phenomena, but cannot demonstrate causality.
- After-only designs are often used in agency and practice settings because practitioners are often not able to set up pre-test/post-test designs.
- Non-experimental designs are useful for explanatory questions in program evaluation and are helpful for researchers when they are trying to develop a new assessment or scale.
- Non-experimental designs are well-suited to qualitative methods.
- If you were to use a non-experimental design for your research project, which would you choose? Why?
- Have you conducted non-experimental research in your practice or professional life? Which type of non-experimental design was it?
13.6 Critical, ethical, and cultural considerations
- Describe critiques of experimental design
- Identify ethical issues in the design and execution of experiments
- Identify cultural considerations in experimental design
As I said at the outset, experiments, and especially true experiments, have long been seen as the gold standard to gather scientific evidence. When it comes to research in the biomedical field and other physical sciences, true experiments are subject to far less nuance than experiments in the social world. This doesn't mean they are easier—just subject to different forces. However, as a society, we have placed the most value on quantitative evidence obtained through empirical observation and especially experimentation.
Major critiques of experimental designs tend to focus on true experiments, especially randomized controlled trials (RCTs), but many of these critiques can be applied to quasi-experimental designs, too. Some researchers, even in the biomedical sciences, question the view that RCTs are inherently superior to other types of quantitative research designs. RCTs are far less flexible and have much more stringent requirements than other types of research. One seemingly small issue, like incorrect information about a research participant, can derail an entire RCT. RCTs also cost a great deal of money to implement and don't reflect “real world” conditions. The cost of true experimental research or RCTs also means that some communities are unlikely to ever have access to these research methods. It is then easy for people to dismiss their research findings because their methods are seen as "not rigorous."
Obviously, controlling outside influences is important for researchers to draw strong conclusions, but what if those outside influences are actually important for how an intervention works? Are we missing really important information by focusing solely on control in our research? Is a treatment going to work the same for white women as it does for Indigenous women? Given the myriad effects of our societal structures, you should be very careful about ever assuming this will be the case. This doesn't mean that cultural differences will negate the effect of an intervention; instead, it means that you should remember to practice cultural humility when implementing any intervention, even when we "know" it works.
How we build evidence through experimental research reveals a lot about our values and biases, and historically, much experimental research has been conducted on white people, and especially white men. [4] This makes sense when we consider the extent to which the sciences and academia have historically been dominated by white patriarchy. This is especially important for marginalized groups that have long been ignored in research literature, meaning they have also been ignored in the development of interventions and treatments that are accepted as "effective." There are examples of marginalized groups being experimented on without their consent, like the Tuskegee Experiment or Nazi experiments on Jewish people during World War II. We cannot ignore the collective consciousness situations like this can create about experimental research for marginalized groups.
None of this is to say that experimental research is inherently bad or that you shouldn't use it. Quite the opposite: use it when you can, because there are a lot of benefits, as we learned throughout this chapter. As a social work researcher, you are uniquely positioned to conduct experimental research while applying social work values and ethics to the process, and to be a leader for others conducting research within the same framework. If we do not engage in experimental research with our eyes wide open, we risk conflicts with our professional ethics, especially respect for persons and beneficence. We also have the benefit of a great deal of practice knowledge that researchers in other fields have not had the opportunity to gain. As with all your research, always be sure you are fully exploring the limitations of the research.
- While true experimental research gathers strong evidence, it can also be inflexible, expensive, and overly simplistic in terms of important social forces that affect the results.
- Marginalized communities' past experiences with experimental research can affect how they respond to research participation.
- Social work researchers should use both their values and ethics, and their practice experiences, to inform research and push other researchers to do the same.
- Think back to the true experiment you sketched out in the exercises for Section 13.3. Are there cultural or historical considerations you hadn't thought of with your participant group? What are they? Does this change the type of experiment you would want to do?
- How can you as a social work researcher encourage researchers in other fields to consider social work ethics and values in their experimental research?
- Engel, R., & Schutt, R. (2016). The practice of research in social work. Thousand Oaks, CA: SAGE Publications, Inc.
- Sullivan, G. M. (2011). Getting off the "gold standard": Randomized controlled trials and education research. Journal of Graduate Medical Education, 3(3), 285-289.
an operation or procedure carried out under controlled conditions in order to discover an unknown effect or law, to test or establish a hypothesis, or to illustrate a known law.
explains why particular phenomena work in the way that they do; answers “why” questions
variables and characteristics that have an effect on your outcome, but aren't the primary variable whose influence you're interested in testing.
the group of participants in our study who do not receive the intervention we are researching in experiments with random assignment
in experimental design, the group of participants in our study who do receive the intervention we are researching
the group of participants in our study who do not receive the intervention we are researching in experiments without random assignment
using a random process to decide which participants are tested in which conditions
The ability to apply research findings beyond the study sample to some broader population.
Ability to say that one variable "causes" something to happen to another variable. Very important to assess when thinking about studies that examine causation such as experimental or quasi-experimental designs.
the idea that one event, behavior, or belief will result in the occurrence of another, subsequent event, behavior, or belief
An experimental design in which one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed
a type of experimental design in which participants are randomly assigned to control and experimental groups, one group receives an intervention, and both groups receive pre- and post-test assessments
A measure of a participant's condition before they receive an intervention or treatment.
A measure of a participant's condition after an intervention or, if they are part of the control/comparison group, at the end of an experiment.
A demonstration that a change occurred after an intervention. An important criterion for establishing causality.
an experimental design in which participants are randomly assigned to control and treatment groups, one group receives an intervention, and both groups receive only a post-test assessment
The measurement error related to how a test is given; the conditions of the testing, including environmental conditions; and acclimation to the test itself
a subtype of experimental design that is similar to a true experiment, but does not have randomly assigned control and treatment groups
In nonequivalent comparison group designs, the process by which researchers match individual cases in the experimental group to similar cases in the comparison group.
In nonequivalent comparison group designs, the process in which researchers match the population profile of the comparison and experimental groups.
a set of measurements taken at intervals over a period of time
Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.