19+ Experimental Design Examples (Methods + Types)


Ever wondered how scientists discover new medicines, psychologists learn about behavior, or even how marketers figure out what kind of ads you like? Well, they all have something in common: they use a special plan or recipe called an "experimental design."

Imagine you're baking cookies. You can't just throw random amounts of flour, sugar, and chocolate chips into a bowl and hope for the best. You follow a recipe, right? Scientists and researchers do something similar. They follow a "recipe" called an experimental design to make sure their experiments are set up in a way that the answers they find are meaningful and reliable.

Experimental design is the roadmap researchers use to answer questions. It's a set of rules and steps that researchers follow to collect information, or "data," in a way that is fair, accurate, and makes sense.


Long ago, people didn't have detailed game plans for experiments. They often just tried things out and saw what happened. But over time, people got smarter about this. They started creating structured plans—what we now call experimental designs—to get clearer, more trustworthy answers to their questions.

In this article, we'll take you on a journey through the world of experimental designs. We'll talk about the different types, or "flavors," of experimental designs, where they're used, and even give you a peek into how they came to be.

What Is Experimental Design?

Alright, before we dive into the different types of experimental designs, let's get crystal clear on what experimental design actually is.

Imagine you're a detective trying to solve a mystery. You need clues, right? Well, in the world of research, experimental design is like the roadmap that helps you find those clues. It's like the game plan in sports or the blueprint when you're building a house. Just like you wouldn't start building without a good blueprint, researchers won't start their studies without a strong experimental design.

So, why do we need experimental design? Think about baking a cake. If you toss ingredients into a bowl without measuring, you'll end up with a mess instead of a tasty dessert.

Similarly, in research, if you don't have a solid plan, you might get confusing or incorrect results. A good experimental design helps you ask the right questions (think critically), decide what to measure (come up with an idea), and figure out how to measure it (test it). It also helps you consider things that might mess up your results, like outside influences you hadn't thought of.

For example, let's say you want to find out if listening to music helps people focus better. Your experimental design would help you decide things like: Who are you going to test? What kind of music will you use? How will you measure focus? And, importantly, how will you make sure that it's really the music affecting focus and not something else, like the time of day or whether someone had a good breakfast?

In short, experimental design is the master plan that guides researchers through the process of collecting data, so they can answer questions in the most reliable way possible. It's like the GPS for the journey of discovery!

History of Experimental Design

Around 350 BCE, people like Aristotle were trying to figure out how the world works, but they mostly just thought really hard about things. They didn't test their ideas much. So while they were super smart, their methods weren't always the best for finding out the truth.

Fast forward to the Renaissance (14th to 17th centuries), a time of big changes and lots of curiosity. People like Galileo started to experiment by actually doing tests, like rolling balls down inclined planes to study motion. Galileo's work was cool because he combined thinking with doing. He'd have an idea, test it, look at the results, and then think some more. This approach was a lot more reliable than just sitting around and thinking.

Now, let's zoom ahead to the 19th century. This is when people like Francis Galton, an English polymath, started to get really systematic about experimentation. Galton was obsessed with measuring things. Seriously, he even tried to measure how good-looking people were! His work helped lay the foundations for a more organized approach to experiments.

Next stop: the early 20th century. Enter Ronald A. Fisher, a brilliant British statistician. Fisher was a game-changer. He came up with ideas that are like the bread and butter of modern experimental design.

Fisher championed the use of the "control group"—that's a group of people or things that don't get the treatment you're testing, so you can compare them to those who do. Even more importantly, he pioneered "randomization," which means assigning people or things to different groups by chance, like drawing names out of a hat. This helps make sure the experiment is fair and the results are trustworthy.

Around the same time, American psychologists like John B. Watson and B.F. Skinner were developing "behaviorism." They focused on studying things that they could directly observe and measure, like actions and reactions.

Skinner even built boxes—called Skinner Boxes—to test how animals like pigeons and rats learn. Their work helped shape how psychologists design experiments today. Watson performed a very controversial experiment called the Little Albert experiment, which helped describe behavior through conditioning—in other words, how people learn to behave the way they do.

In the later part of the 20th century and into our time, computers have totally shaken things up. Researchers now use super powerful software to help design their experiments and crunch the numbers.

With computers, they can simulate complex experiments before they even start, which helps them predict what might happen. This is especially helpful in fields like medicine, where getting things right can be a matter of life and death.

Also, did you know that experimental designs aren't just for scientists in labs? They're used by people in all sorts of jobs, like marketing, education, and even video game design! Yes, someone probably ran an experiment to figure out what makes a game super fun to play.

So there you have it—a quick tour through the history of experimental design, from Aristotle's deep thoughts to Fisher's groundbreaking ideas, and all the way to today's computer-powered research. These designs are the recipes that help people from all walks of life find answers to their big questions.

Key Terms in Experimental Design

Before we dig into the different types of experimental designs, let's get comfy with some key terms. Understanding these terms will make it easier for us to explore the various types of experimental designs that researchers use to answer their big questions.

Independent Variable: This is what you change or control in your experiment to see what effect it has. Think of it as the "cause" in a cause-and-effect relationship. For example, if you're studying whether different types of music help people focus, the kind of music is the independent variable.

Dependent Variable: This is what you're measuring to see the effect of your independent variable. In our music and focus experiment, how well people focus is the dependent variable—it's what "depends" on the kind of music played.

Control Group: This is a group of people who don't get the special treatment or change you're testing. They help you see what happens when the independent variable is not applied. If you're testing whether a new medicine works, the control group would take a fake pill, called a placebo, instead of the real medicine.

Experimental Group: This is the group that gets the special treatment or change you're interested in. Going back to our medicine example, this group would get the actual medicine to see if it has any effect.

Randomization: This is like shaking things up in a fair way. You randomly put people into the control or experimental group so that each group is a good mix of different kinds of people. This helps make the results more reliable.

Sample: This is the group of people you're studying. They're a "sample" of a larger group that you're interested in. For instance, if you want to know how teenagers feel about a new video game, you might study a sample of 100 teenagers.

Bias: This is anything that might tilt your experiment one way or another without you realizing it. Like if you're testing a new kind of dog food and you only test it on poodles, that could create a bias because maybe poodles just really like that food and other breeds don't.

Data: This is the information you collect during the experiment. It's like the treasure you find on your journey of discovery!

Replication: This means doing the experiment more than once to make sure your findings hold up. It's like double-checking your answers on a test.

Hypothesis: This is your educated guess about what will happen in the experiment. It's like predicting the end of a movie based on the first half.

Steps of Experimental Design

Alright, let's say you're all fired up and ready to run your own experiment. Cool! But where do you start? Well, designing an experiment is a bit like planning a road trip. There are some key steps you've got to take to make sure you reach your destination. Let's break it down:

  • Ask a Question : Before you hit the road, you've got to know where you're going. Same with experiments. You start with a question you want to answer, like "Does eating breakfast really make you do better in school?"
  • Do Some Homework : Before you pack your bags, you look up the best places to visit, right? In science, this means reading up on what other people have already discovered about your topic.
  • Form a Hypothesis : This is your educated guess about what you think will happen. It's like saying, "I bet this route will get us there faster."
  • Plan the Details : Now you decide what kind of car you're driving (your experimental design), who's coming with you (your sample), and what snacks to bring (your variables).
  • Randomization : Remember, this is like shuffling a deck of cards. You want to mix up who goes into your control and experimental groups to make sure it's a fair test.
  • Run the Experiment : Finally, the rubber hits the road! You carry out your plan, making sure to collect your data carefully.
  • Analyze the Data : Once the trip's over, you look at your photos and decide which ones are keepers. In science, this means looking at your data to see what it tells you.
  • Draw Conclusions : Based on your data, did you find an answer to your question? This is like saying, "Yep, that route was faster," or "Nope, we hit a ton of traffic."
  • Share Your Findings : After a great trip, you want to tell everyone about it, right? Scientists do the same by publishing their results so others can learn from them.
  • Do It Again? : Sometimes one road trip just isn't enough. In the same way, scientists often repeat their experiments to make sure their findings are solid.

So there you have it! Those are the basic steps you need to follow when you're designing an experiment. Each step helps make sure that you're setting up a fair and reliable way to find answers to your big questions.

Let's get into examples of experimental designs.

1) True Experimental Design


In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.

Researchers carefully pick an independent variable to manipulate (remember, that's the thing they're changing on purpose) and measure the dependent variable (the effect they're studying). Then comes the magic trick—randomization. By randomly putting participants into either the control or experimental group, scientists make sure their experiment is as fair as possible.

No sneaky biases here!
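To make that randomization step concrete, here's a tiny Python sketch of random assignment. The names and seed are made up purely for illustration:

```python
import random

def randomize(participants, seed=None):
    """Randomly split participants into a control and an experimental group."""
    rng = random.Random(seed)      # seeded so the split can be reproduced
    shuffled = participants[:]     # copy so the original list is untouched
    rng.shuffle(shuffled)          # the "drawing names out of a hat" step
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]  # (control, experimental)

people = ["Ana", "Ben", "Cleo", "Dev", "Eve", "Finn"]
control, experimental = randomize(people, seed=42)
print("Control:", control)
print("Experimental:", experimental)
```

Because chance decides who lands in which group, any pre-existing differences between people get spread roughly evenly across both groups.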

True Experimental Design Pros

The pros of True Experimental Design are like the perks of a VIP ticket at a concert: you get the best and most trustworthy results. Because everything is controlled and randomized, you can feel pretty confident that the results aren't just a fluke.

True Experimental Design Cons

However, there's a catch. Sometimes, it's really tough to set up these experiments in a real-world situation. Imagine trying to control every single detail of your day, from the food you eat to the air you breathe. Not so easy, right?

True Experimental Design Uses

The fields that get the most out of True Experimental Designs are those that need super reliable results, like medical research.

When scientists were developing COVID-19 vaccines, they used this design to run clinical trials. They had control groups that received a placebo (a harmless substance with no effect) and experimental groups that got the actual vaccine. Then they measured how many people in each group got sick. By comparing the two, they could say, "Yep, this vaccine works!"

So next time you read about a groundbreaking discovery in medicine or technology, chances are a True Experimental Design was the VIP behind the scenes, making sure everything was on point. It's been the go-to for rigorous scientific inquiry for nearly a century, and it's not stepping off the stage anytime soon.

2) Quasi-Experimental Design

So, let's talk about the Quasi-Experimental Design. Think of this one as the cool cousin of True Experimental Design. It wants to be just like its famous relative, but it's a bit more laid-back and flexible. You'll find quasi-experimental designs when it's tricky to set up a full-blown True Experimental Design with all the bells and whistles.

Quasi-experiments still play with an independent variable, just like their stricter cousins. The big difference? They don't use randomization. It's like wanting to divide a bag of jelly beans equally between your friends, but you can't quite do it perfectly.

In real life, it's often not possible or ethical to randomly assign people to different groups, especially when dealing with sensitive topics like education or social issues. And that's where quasi-experiments come in.

Quasi-Experimental Design Pros

Even though they lack full randomization, quasi-experimental designs are like the Swiss Army knives of research: versatile and practical. They're also easier to set up and often cheaper than true experiments, which makes them especially popular in fields like education, sociology, and public policy.

For instance, when researchers wanted to figure out if the Head Start program, aimed at giving young kids a "head start" in school, was effective, they used a quasi-experimental design. They couldn't randomly assign kids to go or not go to preschool, but they could compare kids who did with kids who didn't.

Quasi-Experimental Design Cons

The flip side of all that flexibility is that quasi-experiments aren't as rock-solid in their conclusions. Because the groups aren't randomly assigned, there's always that little voice saying, "Hey, are we missing something here?"

Quasi-Experimental Design Uses

Quasi-Experimental Design gained traction in the mid-20th century. Researchers were grappling with real-world problems that didn't fit neatly into a laboratory setting. Plus, as society became more aware of ethical considerations, the need for flexible designs increased. So, the quasi-experimental approach was like a breath of fresh air for scientists wanting to study complex issues without a laundry list of restrictions.

In short, if True Experimental Design is the superstar quarterback, Quasi-Experimental Design is the versatile player who can adapt and still make significant contributions to the game.

3) Pre-Experimental Design

Now, let's talk about the Pre-Experimental Design. Imagine it as the beginner's skateboard you get before you try out for all the cool tricks. It has wheels, it rolls, but it's not built for the professional skatepark.

Similarly, pre-experimental designs give researchers a starting point. They let you dip your toes in the water of scientific research without diving in head-first.

So, what's the deal with pre-experimental designs?

Pre-Experimental Designs are the basic, no-frills versions of experiments. Researchers still mess around with an independent variable and measure a dependent variable, but they skip over the whole randomization thing and often don't even have a control group.

It's like baking a cake but forgetting the frosting and sprinkles; you'll get some results, but they might not be as complete or reliable as you'd like.

Pre-Experimental Design Pros

Why use such a simple setup? Because sometimes, you just need to get the ball rolling. Pre-experimental designs are great for quick-and-dirty research when you're short on time or resources. They give you a rough idea of what's happening, which you can use to plan more detailed studies later.

A good example of this is early studies on the effects of screen time on kids. Researchers couldn't control every aspect of a child's life, but they could easily ask parents to track how much time their kids spent in front of screens and then look for trends in behavior or school performance.

Pre-Experimental Design Cons

But here's the catch: a pre-experimental design is like the first draft of an essay. It helps you get your ideas down, but you wouldn't want to turn it in for a grade. Because these designs lack the rigorous structure of true or quasi-experimental setups, they can't give you rock-solid conclusions. They're more like clues or signposts pointing you in a certain direction.

Pre-Experimental Design Uses

This type of design became popular in the early stages of various scientific fields. Researchers used them to scratch the surface of a topic, generate some initial data, and then decide whether it was worth exploring further. In other words, pre-experimental designs were the stepping stones that led to more complex, thorough investigations.

So, while Pre-Experimental Design may not be the star player on the team, it's like the practice squad that helps everyone get better. It's the starting point that can lead to bigger and better things.

4) Factorial Design

Now, buckle up, because we're moving into the world of Factorial Design, the multi-tasker of the experimental universe.

Imagine juggling not just one, but multiple balls in the air—that's what researchers do in a factorial design.

In Factorial Design, researchers are not satisfied with just studying one independent variable. Nope, they want to study two or more at the same time to see how they interact.

It's like cooking with several spices to see how they blend together to create unique flavors.

Factorial Design became the talk of the town with the rise of computers. Why? Because this design produces a lot of data, and computers are the number crunchers that help make sense of it all. So, thanks to our silicon friends, researchers can study complicated questions like, "How do diet AND exercise together affect weight loss?" instead of looking at just one of those factors.

Factorial Design Pros

This design's main selling point is its ability to explore interactions between variables. For instance, maybe a new study drug works really well for young people but not so great for older adults. A factorial design could reveal that age is a crucial factor, something you might miss if you only studied the drug's effectiveness in general. It's like being a detective who looks for clues not just in one room but throughout the entire house.
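Here's a toy Python sketch of that drug-and-age idea, with completely made-up numbers, showing the key factorial move: compare the drug's effect separately in each age group and see whether those effects differ.

```python
# Hypothetical mean improvement scores from a 2x2 factorial study:
# factor A = treatment (placebo vs. drug), factor B = age group.
means = {
    ("placebo", "young"): 2.0, ("drug", "young"): 8.0,
    ("placebo", "older"): 2.0, ("drug", "older"): 3.0,
}

# The drug's effect within each age group:
drug_effect_young = means[("drug", "young")] - means[("placebo", "young")]  # 6.0
drug_effect_older = means[("drug", "older")] - means[("placebo", "older")]  # 1.0

# If the drug's effect differs across age groups, the two factors interact.
interaction = drug_effect_young - drug_effect_older  # 5.0 -> strong interaction
print(drug_effect_young, drug_effect_older, interaction)
```

A study that only measured the drug's overall effect would average those two groups together and miss the fact that age matters so much.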

Factorial Design Cons

However, factorial designs have their own bag of challenges. First off, they can be pretty complicated to set up and run. Imagine coordinating a four-way intersection with lots of cars coming from all directions—you've got to make sure everything runs smoothly, or you'll end up with a traffic jam. Similarly, researchers need to carefully plan how they'll measure and analyze all the different variables.

Factorial Design Uses

Factorial designs are widely used in psychology to untangle the web of factors that influence human behavior. They're also popular in fields like marketing, where companies want to understand how different aspects like price, packaging, and advertising influence a product's success.

And speaking of success, the factorial design has been a hit since statisticians like Ronald A. Fisher (yep, him again!) expanded on it in the early-to-mid 20th century. It offered a more nuanced way of understanding the world, proving that sometimes, to get the full picture, you've got to juggle more than one ball at a time.

So, if True Experimental Design is the quarterback and Quasi-Experimental Design is the versatile player, Factorial Design is the strategist who sees the entire game board and makes moves accordingly.

5) Longitudinal Design


Alright, let's take a step into the world of Longitudinal Design. Picture it as the grand storyteller, the kind who doesn't just tell you about a single event but spins an epic tale that stretches over years or even decades. This design isn't about quick snapshots; it's about capturing the whole movie of someone's life or a long-running process.

You know how you might take a photo every year on your birthday to see how you've changed? Longitudinal Design is kind of like that, but for scientific research.

With Longitudinal Design, instead of measuring something just once, researchers come back again and again, sometimes over many years, to see how things are going. This helps them understand not just what's happening, but why it's happening and how it changes over time.

This design really started to shine in the latter half of the 20th century, when researchers began to realize that some questions can't be answered in a hurry. Think about studies that look at how kids grow up, or research on how a certain medicine affects you over a long period. These aren't things you can rush.

The famous Framingham Heart Study, started in 1948, is a prime example. It's been studying heart health in a small town in Massachusetts for decades, and the findings have shaped what we know about heart disease.

Longitudinal Design Pros

So, what's to love about Longitudinal Design? First off, it's the go-to for studying change over time, whether that's how people age or how a forest recovers from a fire.

Longitudinal Design Cons

But it's not all sunshine and rainbows. Longitudinal studies take a lot of patience and resources. Plus, keeping track of participants over many years can be like herding cats—difficult and full of surprises.

Longitudinal Design Uses

Despite these challenges, longitudinal studies have been key in fields like psychology, sociology, and medicine. They provide the kind of deep, long-term insights that other designs just can't match.

So, if the True Experimental Design is the superstar quarterback, and the Quasi-Experimental Design is the flexible athlete, then the Factorial Design is the strategist, and the Longitudinal Design is the wise elder who has seen it all and has stories to tell.

6) Cross-Sectional Design

Now, let's flip the script and talk about Cross-Sectional Design, the polar opposite of the Longitudinal Design. If Longitudinal is the grand storyteller, think of Cross-Sectional as the snapshot photographer. It captures a single moment in time, like a selfie that you take to remember a fun day. Researchers using this design collect all their data at one point, providing a kind of "snapshot" of whatever they're studying.

In a Cross-Sectional Design, researchers look at multiple groups all at the same time to see how they're different or similar.

This design rose to popularity in the mid-20th century, mainly because it's so quick and efficient. Imagine wanting to know how people of different ages feel about a new video game. Instead of waiting for years to see how opinions change, you could just ask people of all ages what they think right now. That's Cross-Sectional Design for you—fast and straightforward.

You'll find this type of research everywhere from marketing studies to healthcare. For instance, you might have heard about surveys asking people what they think about a new product or political issue. Those are usually cross-sectional studies, aimed at getting a quick read on public opinion.

Cross-Sectional Design Pros

So, what's the big deal with Cross-Sectional Design? Well, it's the go-to when you need answers fast and don't have the time or resources for a more complicated setup.

Cross-Sectional Design Cons

Remember, speed comes with trade-offs. While you get your results quickly, those results are stuck in time. They can't tell you how things change or why they're changing, just what's happening right now.

Cross-Sectional Design Uses

Because they're so quick and simple, cross-sectional studies often serve as the first step in research. They give scientists an idea of what's going on so they can decide if it's worth digging deeper. In that way, they're a bit like a movie trailer, giving you a taste of the action to see if you're interested in seeing the whole film.

So, in our lineup of experimental designs, if True Experimental Design is the superstar quarterback and Longitudinal Design is the wise elder, then Cross-Sectional Design is like the speedy running back—fast, agile, but not designed for long, drawn-out plays.

7) Correlational Design

Next on our roster is the Correlational Design, the keen observer of the experimental world. Imagine this design as the person at a party who loves people-watching. They don't interfere or get involved; they just observe and take mental notes about what's going on.

In a correlational study, researchers don't change or control anything; they simply observe and measure how two variables relate to each other.

The correlational design has roots in the early days of psychology and sociology. Pioneers like Sir Francis Galton used it to study how qualities like intelligence or height could be related within families.

This design is all about asking, "Hey, when this thing happens, does that other thing usually happen too?" For example, researchers might study whether students who have more study time get better grades or whether people who exercise more have lower stress levels.
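If you're curious what "measuring how two variables relate" looks like in practice, here's a small Python sketch that computes the classic Pearson correlation coefficient on made-up study-time data. Values near +1 mean the two things tend to rise together; values near -1 mean one rises as the other falls; values near 0 mean no linear relationship.

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up data: weekly study hours vs. exam grade for six students.
hours  = [1, 2, 3, 4, 5, 6]
grades = [55, 60, 62, 70, 75, 80]
print(round(pearson_r(hours, grades), 2))  # close to +1: strongly related
```

Even a correlation this strong wouldn't prove that study time causes better grades—which is exactly the limitation discussed below.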

One of the most famous correlational studies you might have heard of is the link between smoking and lung cancer. Back in the mid-20th century, researchers started noticing that people who smoked a lot also seemed to get lung cancer more often. They couldn't say smoking caused cancer—that would require a true experiment—but the strong correlation was a red flag that led to more research and eventually, health warnings.

Correlational Design Pros

This design is great at showing that two (or more) things are related. Correlational designs can't prove cause and effect, but they can signal that more detailed research is needed on a topic. They help us see patterns or possible causes for things that we otherwise might not have noticed.

Correlational Design Cons

But here's where you need to be careful: correlational designs can be tricky. Just because two things are related doesn't mean one causes the other. That's like saying, "Every time I wear my lucky socks, my team wins." Well, it's a fun thought, but those socks aren't really controlling the game.

Correlational Design Uses

Despite this limitation, correlational designs are popular in psychology, economics, and epidemiology, to name a few fields. They're often the first step in exploring a possible relationship between variables. Once a strong correlation is found, researchers may decide to conduct more rigorous experimental studies to examine cause and effect.

So, if the True Experimental Design is the superstar quarterback and the Longitudinal Design is the wise elder, the Factorial Design is the strategist, and the Cross-Sectional Design is the speedster, then the Correlational Design is the clever scout, identifying interesting patterns but leaving the heavy lifting of proving cause and effect to the other types of designs.

8) Meta-Analysis

Last but not least, let's talk about Meta-Analysis, the librarian of experimental designs.

If other designs are all about creating new research, Meta-Analysis is about gathering up everyone else's research, sorting it, and figuring out what it all means when you put it together.

Imagine a jigsaw puzzle where each piece is a different study. Meta-Analysis is the process of fitting all those pieces together to see the big picture.

The concept of Meta-Analysis started to take shape in the late 20th century, when computers became powerful enough to handle massive amounts of data. It was like someone handed researchers a super-powered magnifying glass, letting them examine multiple studies at the same time to find common trends or results.

You might have heard of the Cochrane Reviews in healthcare. These are big collections of meta-analyses that help doctors and policymakers figure out what treatments work best based on all the research that's been done.

For example, if ten different studies show that a certain medicine helps lower blood pressure, a meta-analysis would pull all that information together to give a more accurate answer.
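Here's a bare-bones Python sketch of that pooling idea, using made-up numbers and the simple fixed-effect (inverse-variance) approach, where more precise studies count for more. Real meta-analyses involve much more (random-effects models, bias checks), but this shows the core arithmetic:

```python
# Made-up (effect, standard error) pairs from four hypothetical studies,
# each measuring the same blood-pressure drop in mmHg.
studies = [(-5.0, 1.5), (-4.2, 2.0), (-6.1, 1.0), (-3.8, 2.5)]

# Fixed-effect (inverse-variance) pooling: weight = 1 / SE^2,
# so precise studies (small SE) pull the pooled estimate harder.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(round(pooled, 2), round(pooled_se, 2))
```

Notice that the pooled standard error is smaller than any single study's—combining studies gives a sharper answer than any one of them alone.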

Meta-Analysis Pros

The beauty of Meta-Analysis is that it can provide really strong evidence. Instead of relying on one study, you're looking at the whole landscape of research on a topic.

Meta-Analysis Cons

However, it does have some downsides. For one, Meta-Analysis is only as good as the studies it includes. If those studies are flawed, the meta-analysis will be too. It's like baking a cake: if you use bad ingredients, it doesn't matter how good your recipe is—the cake won't turn out well.

Meta-Analysis Uses

Despite these challenges, meta-analyses are highly respected and widely used in many fields like medicine, psychology, and education. They help us make sense of a world that's bursting with information by showing us the big picture drawn from many smaller snapshots.

So, in our all-star lineup, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, the Factorial Design is the strategist, the Cross-Sectional Design is the speedster, and the Correlational Design is the scout, then the Meta-Analysis is like the coach, using insights from everyone else's plays to come up with the best game plan.

9) Non-Experimental Design

Now, let's talk about a player who's a bit of an outsider on this team of experimental designs—the Non-Experimental Design. Think of this design as the commentator or the journalist who covers the game but doesn't actually play.

In a Non-Experimental Design, researchers are like reporters gathering facts, but they don't interfere or change anything. They're simply there to describe and analyze.

Non-Experimental Design Pros

So, what's the deal with Non-Experimental Design? Its strength is in description and exploration. It's really good for studying things as they are in the real world, without changing any conditions.

Non-Experimental Design Cons

Because a non-experimental design doesn't manipulate variables, it can't prove cause and effect. It's like a weather reporter: they can tell you it's raining, but they can't tell you why it's raining.

Another downside? Since researchers aren't controlling variables, it's hard to rule out other explanations for what they observe. It's like hearing one side of a story—you get an idea of what happened, but it might not be the complete picture.

Non-Experimental Design Uses

Non-Experimental Design has always been a part of research, especially in fields like anthropology, sociology, and some areas of psychology.

For instance, if you've ever heard of studies that describe how people behave in different cultures or what teens like to do in their free time, that's often Non-Experimental Design at work. These studies aim to capture the essence of a situation, like painting a portrait instead of taking a snapshot.

One well-known example you might have heard about is the Kinsey Reports from the 1940s and 1950s, which described sexual behavior in men and women. Researchers interviewed thousands of people but didn't manipulate any variables like you would in a true experiment. They simply collected data to create a comprehensive picture of the subject matter.

So, in our metaphorical team of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, and Meta-Analysis is the coach, then Non-Experimental Design is the sports journalist—always present, capturing the game, but not part of the action itself.

10) Repeated Measures Design

white rat

Time to meet the Repeated Measures Design, the time traveler of our research team. If this design were a player in a sports game, it would be the one who keeps revisiting past plays to figure out how to improve the next one.

Repeated Measures Design is all about studying the same people or subjects multiple times to see how they change or react under different conditions.

The idea behind Repeated Measures Design isn't new; it's been around since the early days of psychology and medicine. You could say it's a cousin to the Longitudinal Design, but instead of looking at how things naturally change over time, it focuses on how the same group reacts to different things.

Imagine a study looking at how a new energy drink affects people's running speed. Instead of comparing one group that drank the energy drink to another group that didn't, a Repeated Measures Design would have the same group of people run multiple times—once with the energy drink, and once without. This way, you're really zeroing in on the effect of that energy drink, making the results more reliable.

Repeated Measures Design Pros

The strong point of Repeated Measures Design is that it's super focused. Because it uses the same subjects, you don't have to worry about differences between groups messing up your results.

Repeated Measures Design Cons

But the downside? Well, people can get tired or bored if they're tested too many times, which might affect how they respond.

Repeated Measures Design Uses

A famous example of this design is the "Little Albert" experiment, conducted by John B. Watson and Rosalie Rayner in 1920. In this study, a young boy was exposed to a white rat and other stimuli several times to see how his emotional responses changed. Though the experiment is widely criticized today on ethical grounds, it was groundbreaking in understanding conditioned emotional responses.

In our metaphorical lineup of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, and Non-Experimental Design is the journalist, then Repeated Measures Design is the time traveler—always looping back to fine-tune the game plan.

11) Crossover Design

Next up is Crossover Design, the switch-hitter of the research world. If you're familiar with baseball, you'll know a switch-hitter is someone who can bat both right-handed and left-handed.

In a similar way, Crossover Design allows subjects to experience multiple conditions, flipping them around so that everyone gets a turn in each role.

This design is like the utility player on our team—versatile, flexible, and really good at adapting.

The Crossover Design has its roots in medical research and has been popular since the mid-20th century. It's often used in clinical trials to test the effectiveness of different treatments.

Crossover Design Pros

The neat thing about this design is that it allows each participant to serve as their own control. Imagine you're testing two new kinds of headache medicine: instead of giving one type to one group and another type to a different group, you'd give both kinds to the same people, but at different times. Because each person experiences all the conditions, individual differences create less "noise," and real effects are easier to spot.

Crossover Design Cons

There's a catch, though. This design assumes there's no lasting effect from the first condition when you switch to the second one. That might not always be true: if the first treatment has a long-lasting carryover effect, it can muddy the results of the second. Researchers often schedule a "washout" period between conditions to reduce this risk.

Crossover Design Uses

A well-known example of Crossover Design is in studies that look at the effects of different types of diets—like low-carb vs. low-fat diets. Researchers might have participants follow a low-carb diet for a few weeks, then switch them to a low-fat diet. By doing this, they can more accurately measure how each diet affects the same group of people.
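One practical detail: researchers usually counterbalance the order of conditions, so half the group gets diet A first and half gets diet B first, spreading any order effect evenly across both diets. A minimal sketch with hypothetical participants:

```python
import random

random.seed(1)
participants = ["P1", "P2", "P3", "P4", "P5", "P6"]

# Counterbalance: half follow low-carb then low-fat, half the reverse
random.shuffle(participants)
half = len(participants) // 2
schedule = {p: ("low-carb", "low-fat") for p in participants[:half]}
schedule.update({p: ("low-fat", "low-carb") for p in participants[half:]})
for person, order in sorted(schedule.items()):
    print(person, "->", " then ".join(order))
```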

In our team of experimental designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, and Repeated Measures Design is the time traveler, then Crossover Design is the versatile utility player—always ready to adapt and play multiple roles to get the most accurate results.

12) Cluster Randomized Design

Meet the Cluster Randomized Design, the team captain of group-focused research. In our imaginary lineup of experimental designs, if other designs focus on individual players, then Cluster Randomized Design is looking at how the entire team functions.

This approach is especially common in educational and community-based research, and it's been gaining traction since the late 20th century.

Here's how Cluster Randomized Design works: instead of assigning individual people to different conditions, researchers assign entire groups, or "clusters." These could be schools, neighborhoods, or even entire towns. This lets you see how an intervention works in a real-world setting.

Imagine you want to see if a new anti-bullying program really works. Instead of selecting individual students, you'd introduce the program to a whole school or maybe even several schools, and then compare the results to schools without the program.
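The randomization step itself is simple; what matters is that whole schools, not individual students, are the units being assigned. A quick sketch with hypothetical schools:

```python
import random

random.seed(42)
schools = ["School A", "School B", "School C", "School D", "School E", "School F"]

# Randomize whole clusters (schools), not individual students
shuffled = schools[:]
random.shuffle(shuffled)
program_schools = set(shuffled[:3])   # these schools run the anti-bullying program
control_schools = set(shuffled[3:])   # these continue as usual
print("program:", sorted(program_schools))
print("control:", sorted(control_schools))
```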

Cluster Randomized Design Pros

Why use Cluster Randomized Design? Well, sometimes it's just not practical to assign conditions at the individual level. For example, you can't really have half a school following a new reading program while the other half sticks with the old one; that would be way too confusing! Cluster Randomization helps get around this problem by treating each "cluster" as its own mini-experiment.

Cluster Randomized Design Cons

There's a downside, too. Because entire groups are assigned to each condition, there's a risk that the groups might be different in some important way that the researchers didn't account for. That's like having one sports team that's full of veterans playing against a team of rookies; the match wouldn't be fair.

Cluster Randomized Design Uses

A famous example is the research conducted to test the effectiveness of different public health interventions, like vaccination programs. Researchers might roll out a vaccination program in one community but not in another, then compare the rates of disease in both.

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, and Crossover Design is the utility player, then Cluster Randomized Design is the team captain—always looking out for the group as a whole.

13) Mixed-Methods Design

Say hello to Mixed-Methods Design, the all-rounder or the "Renaissance player" of our research team.

Mixed-Methods Design uses a blend of both qualitative and quantitative methods to get a more complete picture, just like a Renaissance person who's good at lots of different things. It's like being good at both offense and defense in a sport; you've got all your bases covered!

Mixed-Methods Design is a fairly new kid on the block, becoming more popular in the late 20th and early 21st centuries as researchers began to see the value in using multiple approaches to tackle complex questions. It's the Swiss Army knife in our research toolkit, combining the best parts of other designs to be more versatile.

Here's how it could work: Imagine you're studying the effects of a new educational app on students' math skills. You might use quantitative methods like tests and grades to measure how much the students improve—that's the 'numbers part.'

But you also want to know how the students feel about math now, or why they think they got better or worse. For that, you could conduct interviews or have students fill out journals—that's the 'story part.'

Mixed-Methods Design Pros

So, what's the scoop on Mixed-Methods Design? The strength is its versatility and depth; you're not just getting numbers or stories, you're getting both, which gives a fuller picture.

Mixed-Methods Design Cons

But, it's also more challenging. Imagine trying to play two sports at the same time! You have to be skilled in different research methods and know how to combine them effectively.

Mixed-Methods Design Uses

A high-profile example of Mixed-Methods Design is research on climate change. Scientists use numbers and data to show temperature changes (quantitative), but they also interview people to understand how these changes are affecting communities (qualitative).

In our team of experimental designs, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, and Cluster Randomized Design is the team captain, then Mixed-Methods Design is the Renaissance player—skilled in multiple areas and able to bring them all together for a winning strategy.

14) Multivariate Design

Now, let's turn our attention to Multivariate Design, the multitasker of the research world.

If our lineup of research designs were like players on a basketball court, Multivariate Design would be the player dribbling, passing, and shooting all at once. This design doesn't just look at one or two things; it looks at several variables simultaneously to see how they interact and affect each other.

Multivariate Design is like baking a cake with many ingredients. Instead of just looking at how flour affects the cake, you also consider sugar, eggs, and milk all at once. This way, you understand how everything works together to make the cake taste good or bad.

Multivariate Design has been a go-to method in psychology, economics, and social sciences since the latter half of the 20th century. With the advent of computers and advanced statistical software, analyzing multiple variables at once became a lot easier, and Multivariate Design soared in popularity.

Multivariate Design Pros

So, what's the benefit of using Multivariate Design? Its power lies in its complexity. By studying multiple variables at the same time, you can get a really rich, detailed understanding of what's going on.

Multivariate Design Cons

But that complexity can also be a drawback. With so many variables, it can be tough to tell which ones are really making a difference and which ones are just along for the ride.

Multivariate Design Uses

Imagine you're a coach trying to figure out the best strategy to win games. You wouldn't just look at how many points your star player scores; you'd also consider assists, rebounds, turnovers, and maybe even how loud the crowd is. A Multivariate Design would help you understand how all these factors work together to determine whether you win or lose.

A well-known example of Multivariate Design is in market research. Companies often use this approach to figure out how different factors—like price, packaging, and advertising—affect sales. By studying multiple variables at once, they can find the best combination to boost profits.
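Under the hood, multivariate analyses often boil down to fitting a model with several predictors at once. Here's a toy sketch (all sales figures invented) using ordinary least squares with two predictors, price and ad spend:

```python
def ols(X, y):
    """Least squares via the normal equations (X^T X) b = X^T y,
    solved by Gaussian elimination. Tiny-data sketch only."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * k
    for i in reversed(range(k)):               # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, k))) / A[i][i]
    return coef

# Invented data: columns are intercept, price, ad spend
X = [[1, 10, 2], [1, 12, 3], [1, 8, 1], [1, 9, 4], [1, 11, 2]]
sales = [58, 56, 60, 69, 55]
intercept, price_effect, ad_effect = ols(X, sales)
print(f"sales ~ {intercept:.1f} {price_effect:+.1f}*price {ad_effect:+.1f}*ads")
```

The point of the multivariate view is right there in the output: price and advertising each get their own estimated effect while holding the other constant.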

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, Cluster Randomized Design is the team captain, and Mixed-Methods Design is the Renaissance player, then Multivariate Design is the multitasker—juggling many variables at once to get a fuller picture of what's happening.

15) Pretest-Posttest Design

Let's introduce Pretest-Posttest Design, the "Before and After" superstar of our research team. You've probably seen those before-and-after pictures in ads for weight loss programs or home renovations, right?

Well, this design is like that, but for science! Pretest-Posttest Design checks out what things are like before the experiment starts and then compares that to what things are like after the experiment ends.

This design is one of the classics, a staple in research for decades across various fields like psychology, education, and healthcare. It's so simple and straightforward that it has stayed popular for a long time.

In Pretest-Posttest Design, you measure your subject's behavior or condition before you introduce any changes—that's your "before" or "pretest." Then you do your experiment, and after it's done, you measure the same thing again—that's your "after" or "posttest."

Pretest-Posttest Design Pros

What makes Pretest-Posttest Design special? It's pretty easy to understand and doesn't require fancy statistics.

Pretest-Posttest Design Cons

But there are some pitfalls. What if participants improve on the posttest simply because they've gotten older, or because they've already seen the test once? Without a comparison group, it's hard to tell whether the change came from the experiment itself or from these other factors.

Pretest-Posttest Design Uses

Let's say you're a teacher and you want to know if a new math program helps kids get better at multiplication. First, you'd give all the kids a multiplication test—that's your pretest. Then you'd teach them using the new math program. At the end, you'd give them the same test again—that's your posttest. If the kids do better on the second test, you might conclude that the program works.
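The arithmetic here is about as simple as research designs get: compare each student's posttest score with their own pretest score. A sketch with made-up quiz scores:

```python
from statistics import mean

# Hypothetical multiplication quiz scores (out of 20), same five students
pretest  = [11, 14, 9, 16, 12]
posttest = [15, 16, 12, 18, 14]

# Each student is compared with their own earlier score
gains = [post - pre for pre, post in zip(pretest, posttest)]
print("average gain:", mean(gains))
```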

One famous use of Pretest-Posttest Design is in evaluating the effectiveness of driver's education courses. Researchers will measure people's driving skills before and after the course to see if they've improved.

16) Solomon Four-Group Design

Next up is the Solomon Four-Group Design, the "chess master" of our research team. This design is all about strategy and careful planning. Named after Richard L. Solomon who introduced it in the 1940s, this method tries to correct some of the weaknesses in simpler designs, like the Pretest-Posttest Design.

Here's how it rolls: The Solomon Four-Group Design uses four different groups to test a hypothesis. Two groups get a pretest, then one of them receives the treatment or intervention, and both get a posttest. The other two groups skip the pretest, and only one of them receives the treatment before they both get a posttest.

Sound complicated? It's like playing 4D chess; you're thinking several moves ahead!

Solomon Four-Group Design Pros

What's the advantage of the Solomon Four-Group Design? It provides really robust results because it can separate the effect of the treatment from the effect of taking the pretest itself.

Solomon Four-Group Design Cons

The downside? It's a lot of work and requires a lot of participants, making it more time-consuming and costly.

Solomon Four-Group Design Uses

Let's say you want to figure out if a new way of teaching history helps students remember facts better. Two classes take a history quiz (pretest), then one class uses the new teaching method while the other sticks with the old way. Both classes take another quiz afterward (posttest).

Meanwhile, two more classes skip the initial quiz, and then one uses the new method before both take the final quiz. Comparing all four groups will give you a much clearer picture of whether the new teaching method works and whether the pretest itself affects the outcome.
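The key comparison can be sketched with a few hypothetical numbers: compute the treatment effect separately for the pretested and non-pretested groups, then check whether taking the pretest itself changed the outcome:

```python
# Hypothetical average posttest quiz scores (out of 20) for the four groups
posttest_means = {
    ("pretested", "new method"): 15.8,
    ("pretested", "old method"): 12.1,
    ("not pretested", "new method"): 15.2,
    ("not pretested", "old method"): 11.9,
}

effect_with_pretest = (posttest_means[("pretested", "new method")]
                       - posttest_means[("pretested", "old method")])
effect_without_pretest = (posttest_means[("not pretested", "new method")]
                          - posttest_means[("not pretested", "old method")])

# If these two effects differ a lot, taking the pretest itself is
# changing how students respond to the teaching method
print(round(effect_with_pretest, 1), round(effect_without_pretest, 1))
```

Here the two effects are similar, which (in this invented example) would suggest the pretest isn't distorting the results.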

The Solomon Four-Group Design is less commonly used than simpler designs but is highly respected for its ability to control for more variables. It's a favorite in educational and psychological research where you really want to dig deep and figure out what's actually causing changes.

17) Adaptive Designs

Now, let's talk about Adaptive Designs, the chameleons of the experimental world.

Imagine you're a detective, and halfway through solving a case, you find a clue that changes everything. You wouldn't just stick to your old plan; you'd adapt and change your approach, right? That's exactly what Adaptive Designs allow researchers to do.

In an Adaptive Design, researchers can make changes to the study as it's happening, based on early results. In a traditional study, once you set your plan, you stick to it from start to finish.

Adaptive Design Pros

This method is particularly useful in fast-paced or high-stakes situations, like developing a new vaccine in the middle of a pandemic. The ability to adapt can save both time and resources, and more importantly, it can save lives by getting effective treatments out faster.

Adaptive Design Cons

But Adaptive Designs aren't without their drawbacks. They can be very complex to plan and carry out, and there's always a risk that the changes made during the study could introduce bias or errors.

Adaptive Design Uses

Adaptive Designs are most often seen in clinical trials, particularly in the medical and pharmaceutical fields.

For instance, if a new drug is showing really promising results, the study might be adjusted to give more participants the new treatment instead of a placebo. Or if one dose level is showing bad side effects, it might be dropped from the study.
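One common adaptive rule is response-adaptive randomization: the better an arm has performed so far, the more likely new participants are to be assigned to it. A rough sketch (the rule and all numbers are illustrative, not a real trial protocol):

```python
import random

random.seed(7)

# Observed results so far (hypothetical): the drug arm is doing better
stats = {"drug": {"success": 6, "total": 10},
         "placebo": {"success": 3, "total": 10}}

def next_assignment():
    """Pick an arm with probability proportional to its smoothed success rate."""
    rates = {arm: (s["success"] + 1) / (s["total"] + 2) for arm, s in stats.items()}
    r = random.random() * sum(rates.values())
    for arm, rate in rates.items():
        r -= rate
        if r <= 0:
            return arm
    return arm  # guard against float rounding

picks = [next_assignment() for _ in range(1000)]
print("share assigned to drug:", picks.count("drug") / 1000)
```

With these made-up counts, roughly two-thirds of new participants would be steered toward the better-performing drug arm.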

The best part is, these changes are pre-planned. Researchers lay out in advance what changes might be made and under what conditions, which helps keep everything scientific and above board.

In terms of applications, besides their heavy usage in medical and pharmaceutical research, Adaptive Designs are also becoming increasingly popular in software testing and market research. In these fields, being able to quickly adjust to early results can give companies a significant advantage.

Adaptive Designs are like the agile startups of the research world—quick to pivot, keen to learn from ongoing results, and focused on rapid, efficient progress. However, they require a great deal of expertise and careful planning to ensure that the adaptability doesn't compromise the integrity of the research.

18) Bayesian Designs

Next, let's dive into Bayesian Designs, the data detectives of the research universe. Named after Thomas Bayes, an 18th-century statistician and minister, this design doesn't just look at what's happening now; it also takes into account what's happened before.

Imagine if you were a detective who not only looked at the evidence in front of you but also used your past cases to make better guesses about your current one. That's the essence of Bayesian Designs.

Bayesian Designs are like detective work in science. As you gather more clues (or data), you update your best guess on what's really happening. This way, your experiment gets smarter as it goes along.

In the world of research, Bayesian Designs are most notably used in areas where you have some prior knowledge that can inform your current study. For example, if earlier research shows that a certain type of medicine usually works well for a specific illness, a Bayesian Design would include that information when studying a new group of patients with the same illness.
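The updating step can be sketched with the classic Beta-Binomial example. Suppose (hypothetically) earlier studies suggest the medicine helps about 70% of patients; we encode that as a prior and update it with new trial data:

```python
# Prior: hypothetical earlier studies suggest ~70% of patients respond,
# encoded as a Beta(7, 3) prior (7 "prior successes", 3 "prior failures")
prior_success, prior_failure = 7, 3

# New trial data: 18 of 25 patients respond
new_success, new_failure = 18, 7

# Bayesian update for a Beta-Binomial model: just add the counts
post_success = prior_success + new_success
post_failure = prior_failure + new_failure
posterior_mean = post_success / (post_success + post_failure)
print(f"updated estimate of response rate: {posterior_mean:.3f}")
```

The prior acts like a head start of evidence: the new data pulls the estimate toward what was actually observed, weighted by how much data there is on each side.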

Bayesian Design Pros

One of the major advantages of Bayesian Designs is their efficiency. Because they use existing data to inform the current experiment, often fewer resources are needed to reach a reliable conclusion.

Bayesian Design Cons

However, they can be quite complicated to set up and require a deep understanding of both statistics and the subject matter at hand.

Bayesian Design Uses

Bayesian Designs are highly valued in medical research, finance, environmental science, and even in Internet search algorithms. Their ability to continually update and refine hypotheses based on new evidence makes them particularly useful in fields where data is constantly evolving and where quick, informed decisions are crucial.

Here's a real-world example: In the development of personalized medicine, where treatments are tailored to individual patients, Bayesian Designs are invaluable. If a treatment has been effective for patients with similar genetics or symptoms in the past, a Bayesian approach can use that data to predict how well it might work for a new patient.

This type of design is also increasingly popular in machine learning and artificial intelligence. In these fields, Bayesian Designs help algorithms "learn" from past data to make better predictions or decisions in new situations. It's like teaching a computer to be a detective that gets better and better at solving puzzles the more puzzles it sees.

19) Covariate Adaptive Randomization

old person and young person

Now let's turn our attention to Covariate Adaptive Randomization, which you can think of as the "matchmaker" of experimental designs.

Picture a soccer coach trying to create the most balanced teams for a friendly match. They wouldn't just randomly assign players; they'd take into account each player's skills, experience, and other traits.

Covariate Adaptive Randomization is all about creating the most evenly matched groups possible for an experiment.

In traditional randomization, participants are allocated to different groups purely by chance. This is a pretty fair way to do things, but it can sometimes lead to unbalanced groups.

Imagine if all the professional-level players ended up on one soccer team and all the beginners on another; that wouldn't be a very informative match! Covariate Adaptive Randomization fixes this by using important traits or characteristics (called "covariates") to guide the randomization process.

Covariate Adaptive Randomization Pros

The benefits of this design are pretty clear: it aims for balance and fairness, making the final results more trustworthy.

Covariate Adaptive Randomization Cons

But it's not perfect. It can be complex to implement and requires a deep understanding of which characteristics are most important to balance.

Covariate Adaptive Randomization Uses

This design is particularly useful in medical trials. Let's say researchers are testing a new medication for high blood pressure. Participants might have different ages, weights, or pre-existing conditions that could affect the results.

Covariate Adaptive Randomization would make sure that each treatment group has a similar mix of these characteristics, making the results more reliable and easier to interpret.
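One popular covariate-adaptive method is called minimization: each new participant goes to whichever group keeps the covariate counts most balanced. A simplified sketch with hypothetical participants and covariates:

```python
import random

random.seed(3)
counts = {"treatment": {}, "control": {}}
assignments = {}

def assign(name, covariates):
    """Send the participant to whichever group keeps covariates balanced."""
    def imbalance(group):
        # How lopsided would the covariate counts be if they joined `group`?
        other = "control" if group == "treatment" else "treatment"
        return sum((counts[group].get(c, 0) + 1) - counts[other].get(c, 0)
                   for c in covariates)

    scores = {g: imbalance(g) for g in counts}
    if scores["treatment"] == scores["control"]:
        group = random.choice(["treatment", "control"])   # tie: coin flip
    else:
        group = min(scores, key=scores.get)
    for c in covariates:
        counts[group][c] = counts[group].get(c, 0) + 1
    assignments[name] = group

assign("Ana",  ["older", "high BP"])
assign("Ben",  ["older", "normal BP"])
assign("Cara", ["younger", "high BP"])
assign("Dev",  ["younger", "normal BP"])
print(assignments)
```

Notice how after the first (random) assignment, each later participant is steered toward whichever group lacks their age and blood-pressure profile.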

In practical terms, this design is often seen in clinical trials for new drugs or therapies, but its principles are also applicable in fields like psychology, education, and social sciences.

For instance, in educational research, it might be used to ensure that classrooms being compared have similar distributions of students in terms of academic ability, socioeconomic status, and other factors.

Covariate Adaptive Randomization is like the wise elder of the group, ensuring that everyone has an equal opportunity to show their true capabilities, thereby making the collective results as reliable as possible.

20) Stepped Wedge Design

Let's now focus on the Stepped Wedge Design, a thoughtful and cautious member of the experimental design family.

Imagine you're trying out a new gardening technique, but you're not sure how well it will work. You decide to apply it to one section of your garden first, watch how it performs, and then gradually extend the technique to other sections. This way, you get to see its effects over time and across different conditions. That's basically how Stepped Wedge Design works.

In a Stepped Wedge Design, all participants or clusters start off in the control group, and then, at different times, they 'step' over to the intervention or treatment group. This creates a wedge-like pattern over time where more and more participants receive the treatment as the study progresses. It's like rolling out a new policy in phases, monitoring its impact at each stage before extending it to more people.
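The "wedge" itself is just a schedule. Here's a sketch that builds one for four hypothetical hospital wards, where "C" means control and "I" means intervention:

```python
# Every cluster starts in control ("C") and crosses over to the
# intervention ("I") at its own step, forming the wedge pattern
clusters = ["Ward 1", "Ward 2", "Ward 3", "Ward 4"]
n_periods = 5   # period 0 is an all-control baseline

schedule = {}
for i, cluster in enumerate(clusters):
    switch_at = i + 1   # cluster i crosses over in period i + 1
    schedule[cluster] = ["I" if t >= switch_at else "C" for t in range(n_periods)]

for cluster, row in schedule.items():
    print(cluster, " ".join(row))
```

Printed out, the "I" entries form a staircase: every ward starts in control and every ward has received the intervention by the final period.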

Stepped Wedge Design Pros

The Stepped Wedge Design offers several advantages. Firstly, it allows for the study of interventions that are expected to do more good than harm, which makes it ethically appealing.

Secondly, it's useful when resources are limited and it's not feasible to roll out a new treatment to everyone at once. Lastly, because everyone eventually receives the treatment, it can be easier to get buy-in from participants or organizations involved in the study.

Stepped Wedge Design Cons

However, this design can be complex to analyze because it has to account for both the time factor and the changing conditions in each 'step' of the wedge. And like any study where participants know they're receiving an intervention, there's the potential for the results to be influenced by the placebo effect or other biases.

Stepped Wedge Design Uses

This design is particularly useful in health and social care research. For instance, if a hospital wants to implement a new hygiene protocol, it might start in one department, assess its impact, and then roll it out to other departments over time. This allows the hospital to adjust and refine the new protocol based on real-world data before it's fully implemented.

In terms of applications, Stepped Wedge Designs are commonly used in public health initiatives, organizational changes in healthcare settings, and social policy trials. They are particularly useful in situations where an intervention is being rolled out gradually and it's important to understand its impacts at each stage.

21) Sequential Design

Next up is Sequential Design, the dynamic and flexible member of our experimental design family.

Imagine you're playing a video game where you can choose different paths. If you take one path and find a treasure chest, you might decide to continue in that direction. If you hit a dead end, you might backtrack and try a different route. Sequential Design operates in a similar fashion, allowing researchers to make decisions at different stages based on what they've learned so far.

In a Sequential Design, the experiment is broken down into smaller parts, or "sequences." After each sequence, researchers pause to look at the data they've collected. Based on those findings, they then decide whether to stop the experiment because they've got enough information, or to continue and perhaps even modify the next sequence.
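A toy version of that "stop or go" logic might look like this (the stopping thresholds here are invented; real trials derive them statistically in advance):

```python
def run_sequential(batch_results, stop_low=0.30, stop_high=0.70):
    """After each batch, stop early if the cumulative success rate
    crosses a pre-planned boundary (thresholds here are illustrative)."""
    successes = trials = 0
    for batch in batch_results:
        successes += sum(batch)
        trials += len(batch)
        rate = successes / trials
        if rate <= stop_low:
            return "stop early: futility", trials
        if rate >= stop_high:
            return "stop early: efficacy", trials
    return "continue to full analysis", trials

# Hypothetical batches of patient outcomes (1 = responded to treatment)
print(run_sequential([[1, 1, 0, 1, 1], [1, 0, 1, 1, 1]]))
```

In this invented run, the first batch already looks strong enough to cross the upper boundary, so the trial stops early after only five patients.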

Sequential Design Pros

One of the great things about Sequential Design is its efficiency. Because you're making data-driven decisions along the way, you only continue the experiment if the data suggests it's worth doing, so you can often reach conclusions more quickly and with fewer resources.

Sequential Design Cons

However, it requires careful planning and expertise to ensure that these "stop or go" decisions are made correctly and without bias.

Sequential Design Uses

This design is often used in clinical trials involving new medications or treatments. For example, if early results show that a new drug has significant side effects, the trial can be stopped before more people are exposed to it. On the flip side, if the drug is showing promising results, the trial might be expanded to include more participants or to extend the testing period.

Beyond healthcare and medicine, Sequential Design is also popular in quality control in manufacturing, environmental monitoring, and financial modeling. In these areas, being able to make quick decisions based on incoming data can be a big advantage.

Think of Sequential Design as the nimble athlete of experimental designs, capable of quick pivots and adjustments to reach the finish line in the most effective way possible. But just like an athlete needs a good coach, this design requires expert oversight to make sure it stays on the right track.

22) Field Experiments

Last but certainly not least, let's explore Field Experiments—the adventurers of the experimental design world.

Picture a scientist leaving the controlled environment of a lab to test a theory in the real world, like a biologist studying animals in their natural habitat or a social scientist observing people in a real community. These are Field Experiments, and they're all about getting out there and gathering data in real-world settings.

Field Experiments embrace the messiness of the real world, unlike laboratory experiments, where everything is controlled down to the smallest detail. This makes them both exciting and challenging.

Field Experiment Pros

On one hand, the results often give us a better understanding of how things work outside the lab, which makes Field Experiments far more true-to-life than tightly controlled laboratory studies.

Field Experiment Cons

On the other hand, the lack of control makes it harder to tell exactly what's causing what, and intervening in people's lives without their knowledge raises ethical questions. Yet despite these challenges, Field Experiments remain a valuable tool for researchers who want to understand how theories play out in the real world.

Field Experiment Uses

Let's say a school wants to improve student performance. In a Field Experiment, they might change the school's daily schedule for one semester and keep track of how students perform compared to another school where the schedule remained the same.

Because the study is happening in a real school with real students, the results could be very useful for understanding how the change might work in other schools. But since it's the real world, lots of other factors—like changes in teachers or even the weather—could affect the results.

Field Experiments are widely used in economics, psychology, education, and public policy. For example, you might have heard of the famous "Broken Windows" theory from the 1980s, which drew on real-world observations of how small signs of disorder, like broken windows or graffiti, could encourage more serious crime in neighborhoods. That idea had a big impact on how cities think about crime prevention.

From the foundational concepts of control groups and independent variables to the sophisticated layouts like Covariate Adaptive Randomization and Sequential Design, it's clear that the realm of experimental design is as varied as it is fascinating.

We've seen that each design has its own special talents, ideal for specific situations. Some designs, like the Classic Controlled Experiment, are like reliable old friends you can always count on.

Others, like Sequential Design, are flexible and adaptable, making quick changes based on what they learn. And let's not forget the adventurous Field Experiments, which take us out of the lab and into the real world to discover things we might not see otherwise.

Choosing the right experimental design is like picking the right tool for the job. The method you choose can make a big difference in how reliable your results are and how much people will trust what you've discovered. And as we've learned, there's a design to suit just about every question, every problem, and every curiosity.

So the next time you read about a new discovery in medicine, psychology, or any other field, you'll have a better understanding of the thought and planning that went into figuring things out. Experimental design is more than just a set of rules; it's a structured way to explore the unknown and answer questions that can change the world.


Enago Academy

Experimental Research Design — 6 mistakes you should never make!


From their school days, students perform scientific experiments whose results illustrate and verify the laws and theories of science. These experiments rest on a strong foundation of experimental research design.

An experimental research design helps researchers execute their research objectives with more clarity and transparency.

In this article, we will not only discuss the key aspects of experimental research designs but also the issues to avoid and problems to resolve while designing your research study.


What Is Experimental Research Design?

Experimental research design is a framework of protocols and procedures created to conduct experimental research with a scientific approach using two sets of variables. Herein, the first set of variables acts as a constant, which you use to measure the differences in the second set. Quantitative research is the best-known example of the experimental research method.

Experimental research helps a researcher gather the necessary data for making better research decisions and determining the facts of a research study.

When Can a Researcher Conduct Experimental Research?

A researcher can conduct experimental research in the following situations —

  • When time is an important factor in establishing a relationship between the cause and effect.
  • When there is an invariable or never-changing behavior between the cause and effect.
  • Finally, when the researcher wishes to understand the importance of the cause and effect.

Importance of Experimental Research Design

To publish significant results, choosing a quality research design forms the foundation on which to build the research study. Moreover, an effective research design helps establish quality decision-making procedures, structures the research to allow easier data analysis, and addresses the main research question. Therefore, it is essential to devote undivided attention and time to creating an experimental research design before beginning the practical experiment.

By creating a research design, a researcher is also giving oneself time to organize the research, set up relevant boundaries for the study, and increase the reliability of the results. Through all these efforts, one could also avoid inconclusive results. If any part of the research design is flawed, it will reflect on the quality of the results derived.

Types of Experimental Research Designs

Based on the methods used to collect data in experimental studies, the experimental research designs are of three primary types:

1. Pre-experimental Research Design

Researchers use a pre-experimental design when one group, or several groups, are observed after factors presumed to cause change have been applied. The pre-experimental design helps researchers understand whether further investigation of the groups under observation is necessary.

Pre-experimental research is of three types —

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Research Design

A true experimental research design relies on statistical analysis to prove or disprove a researcher’s hypothesis. It is one of the most accurate forms of research because it provides specific scientific evidence. Furthermore, out of all the types of experimental designs, only a true experimental design can establish a cause-effect relationship within a group. However, in a true experiment, a researcher must satisfy these three factors —

  • There is a control group that is not subjected to changes and an experimental group that will experience the changed variables
  • A variable that can be manipulated by the researcher
  • Random assignment of subjects to the groups

This type of experimental research is commonly observed in the physical sciences.
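The third requirement, random assignment, is easy to sketch in code. The following minimal Python example (the function name and subject IDs are invented for illustration) shuffles the subject pool and splits it evenly into a control and an experimental group:

```python
import random

def randomly_assign(subjects, seed=None):
    """Shuffle the subjects, then split them evenly into two groups."""
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"control": shuffled[:half], "experimental": shuffled[half:]}

# Assign 20 subjects (identified here simply by number) to the two groups.
groups = randomly_assign(range(20), seed=42)
print(len(groups["control"]), len(groups["experimental"]))  # 10 10
```

Because the assignment is random, any pre-existing differences between subjects are spread across both groups rather than concentrated in one.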

3. Quasi-experimental Research Design

The word “quasi” means “resembling.” A quasi-experimental design is similar to a true experimental design; the difference between the two is how the control group is assigned. In this research design, an independent variable is manipulated, but the participants of a group are not randomly assigned. This type of research design is used in field settings where random assignment is either irrelevant or not feasible.

The classification of the research subjects, conditions, or groups determines the type of research design to be used.

experimental research design

Advantages of Experimental Research

Experimental research allows you to test your idea in a controlled environment before taking the research to clinical trials. Moreover, it provides the best method to test your theory because of the following advantages:

  • Researchers have firm control over variables and can obtain precise results.
  • The method is not tied to any particular subject; researchers in almost any field can implement it.
  • The results are specific.
  • After the results are analyzed, findings from the same dataset can be repurposed for similar research ideas.
  • Researchers can identify the cause and effect of a hypothesis and analyze this relationship further to develop in-depth ideas.
  • Experimental research makes an ideal starting point: the collected data can serve as a foundation for new research ideas and further studies.

6 Mistakes to Avoid While Designing Your Research

There is no order to this list, and any one of these issues can seriously compromise the quality of your research. You could refer to the list as a checklist of what to avoid while designing your research.

1. Invalid Theoretical Framework

Usually, researchers forget to check whether their hypothesis can logically be tested. If your research design lacks basic assumptions or postulates, it is fundamentally flawed and you need to rework your research framework.

2. Inadequate Literature Study

Without a comprehensive literature review, it is difficult to identify and fill knowledge and information gaps. Furthermore, you need to clearly state how your research will contribute to the field, either by adding value to the pertinent literature or by challenging previous findings and assumptions.

3. Insufficient or Incorrect Statistical Analysis

Statistical results are among the most trusted forms of scientific evidence. The ultimate goal of a research experiment is to gain valid and sustainable evidence; therefore, incorrect statistical analysis undermines the quality of any quantitative research.

4. Undefined Research Problem

This is one of the most basic aspects of research design. The research problem statement must be clear, and to achieve that you must set a framework for developing research questions that address the core problems.

5. Research Limitations

Every study has limitations. You should anticipate them and incorporate them into your conclusion, as well as into the basic research design. Include a statement in your manuscript about any perceived limitations and how you accounted for them while designing your experiment and drawing your conclusions.

6. Ethical Implications

The most important yet least discussed topic is ethics. Your research design must include ways to minimize risk to your participants while still addressing the research problem or question at hand. If you cannot uphold ethical norms alongside your research study, your research objectives and validity could be questioned.

Experimental Research Design Example

In an experimental design, a researcher gathers plant samples and then randomly assigns half the samples to photosynthesize in sunlight and the other half to a dark box without sunlight, while controlling all the other variables (nutrients, water, soil, etc.).

By comparing their outcomes in biochemical tests, the researcher can confirm that the changes in the plants were due to the sunlight and not the other variables.
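This comparison can be illustrated with simulated data. In the sketch below, the growth readings are invented numbers (drawn from two distributions) standing in for the biochemical measurements; the point is that with all other variables held constant, the difference in group means is attributed to the single manipulated variable.

```python
import random
import statistics

random.seed(1)

# Simulated growth measurements (hypothetical units): the numbers below are
# invented for illustration; sunlight samples are drawn around a higher mean.
sunlight = [random.gauss(8.0, 1.0) for _ in range(15)]
dark = [random.gauss(5.0, 1.0) for _ in range(15)]

# With nutrients, water, and soil held constant, the difference in group
# means is attributed to the one manipulated variable: sunlight.
difference = statistics.mean(sunlight) - statistics.mean(dark)
print(f"mean difference: {difference:.2f}")
```

In a real study, the researcher would follow this comparison with a significance test rather than relying on the raw difference alone.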

Experimental research is often the final form of a study in the research process and is considered to provide conclusive and specific results. But it is not suitable for every research question: it requires considerable resources, time, and money, and is not easy to conduct unless a foundation of prior research has been built. Yet it is widely used in research institutes and commercial industries because, within the scientific approach, it yields the most conclusive results.

Have you worked on research designs? How was your experience creating an experimental design? What difficulties did you face? Do write to us or comment below and share your insights on experimental research designs!

Frequently Asked Questions

Randomization is important in experimental research because it helps ensure unbiased results. It also supports measuring the cause-effect relationship in a particular group of interest.

Experimental research design lays the foundation of a study and structures the research to support a quality decision-making process.

There are three types of experimental research designs: pre-experimental, true experimental, and quasi-experimental.

The differences between an experimental and a quasi-experimental design are: 1. In quasi-experimental research, the control group is assigned non-randomly, unlike in true experimental design, where assignment is random. 2. An experimental research group always has a control group; in quasi-experimental research, a control group may not always be present.

Experimental research establishes a cause-effect relationship by testing a theory or hypothesis using experimental groups or control variables. In contrast, descriptive research describes a study or a topic by defining the variables under it and answering the questions related to the same.



Experimental Research Designs: Types, Examples & Methods

By busayo.longe

Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If, after the duration of the research, we find that sample A grows while sample B dies, even though both are regularly watered and given the same treatment, we can conclude that sunlight aids the growth of similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly associated with laboratory test procedures, experimental research designs involve collecting quantitative data and performing statistical analysis on it during the research, making this an example of the quantitative research method.

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. They are of three types: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In a pre-experimental research design, either one group or various dependent groups are observed for the effect of applying an independent variable that is presumed to cause change. It is the simplest form of experimental research design and includes no control group.

Although very practical, pre-experimental research falls short of several of the true-experimental criteria. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment that is presumed to cause change, making it a posttest-only study.

  • One-group Pretest-posttest Research Design: 

This research design combines posttest and pretest studies by testing a single group both before and after the treatment is administered: the pretest is given at the beginning of the treatment and the posttest at the end.
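As a concrete sketch of the one-group pretest-posttest design (the scores below are invented for illustration), the analysis boils down to measuring the within-group change:

```python
import statistics

# Hypothetical pretest and posttest scores for a single group of 8 students.
pretest = [55, 60, 48, 70, 65, 52, 58, 63]
posttest = [61, 66, 55, 74, 70, 60, 63, 69]

# The design's evidence is the change from before to after the treatment.
changes = [post - pre for pre, post in zip(pretest, posttest)]
print("mean improvement:", statistics.mean(changes))  # mean improvement: 5.875
```

Note that without a control group, this improvement cannot be attributed to the treatment alone; practice effects or outside events could explain it too, which is why this remains a pre-experimental design.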

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore resembles true experimental research but is not the same. In quasi-experiments, the participants are not randomly assigned, so these designs are used in settings where randomization is difficult or impossible.

This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research designs include the time series, the nonequivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to support or disprove a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least two randomly assigned groups of subjects.

The true experimental research design must contain a control group, a variable that can be manipulated by the researcher, and the distribution must be random. The classification of true experimental design include:

  • The Posttest-only Control Group Design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The Pretest-posttest Control Group Design: For this control group design, subjects are randomly assigned to the 2 groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon Four-group Design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
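The group assignment for a Solomon four-group study can be sketched as follows. This is a minimal illustration (the function name and group labels are invented); it follows the description above, with two posttest-only groups and two pretest-posttest groups, each pair split into treatment and control:

```python
import random

def solomon_four_groups(subjects, seed=0):
    """Randomly place subjects into the four Solomon groups.

    Two groups are posttested only; the other two receive both a pretest
    and a posttest. Within each pair, one group gets the treatment and
    one serves as a control.
    """
    rng = random.Random(seed)
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    q = len(shuffled) // 4
    return {
        "treatment, posttest-only": shuffled[:q],
        "control, posttest-only": shuffled[q:2 * q],
        "treatment, pretest-posttest": shuffled[2 * q:3 * q],
        "control, pretest-posttest": shuffled[3 * q:4 * q],
    }

groups = solomon_four_groups(range(40))
print({name: len(members) for name, members in groups.items()})
```

Comparing the pretested and non-pretested pairs lets the researcher check whether the pretest itself influenced the results.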

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students' exam performance is the dependent variable, while the lectures are the independent variable applied to the subjects.

Only one group of carefully selected subjects is considered in this research, making it a pre-experimental research design example. Notice also that the test is carried out only at the end of the semester, not at the beginning, which makes it easy to conclude that this is a one-shot case study.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. This is a pretest-posttest control group experimental research example.

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching methods of 2 teachers to determine which is better. Imagine a case in which the students assigned to each teacher are carefully selected, perhaps because of personal requests by parents or based on behavior and ability.

This is a nonequivalent group design example because the samples are not equivalent. By evaluating the effectiveness of each teacher's method this way, we may draw a conclusion after a post-test has been carried out.

However, the result may be influenced by factors like a student's natural aptitude: a very bright student will grasp the material more easily than his or her peers, irrespective of the teaching method.

What are the Characteristics of Experimental Research?  

Experimental research involves dependent, independent, and extraneous variables. The dependent variables are the outcomes being measured in the subjects of the research.

The independent variables are the experimental treatment being exerted on the dependent variables. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them.

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable

Experimental research may include multiple independent variables, e.g. time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design can be majorly used in physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions on a subject matter. 

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to develop proper treatments for diseases. In most cases, rather than directly using patients as the research subjects, researchers take a sample of bacteria from a patient's body and treat it with a newly developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine the treatment's effectiveness. This process can be carried out using different experimental research methods.

  • Education: Asides from science subjects like Chemistry and Physics which involves teaching students how to perform experimental research, it can also be used in improving the standard of an academic institution. This includes testing students’ knowledge on different topics, coming up with better teaching methods, and the implementation of other programs that will aid student learning.
  • Human Behavior: Social scientists are the ones who mostly use experimental research to test human behaviour. For example, consider 2 people randomly chosen to be the subject of the social interaction research where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when finding it difficult to choose how to position a button or feature on the app interface, a random sample of product testers are allowed to test the 2 samples and how the button positioning influences the user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error because it depends on variable control, which may not be properly implemented. Such errors can invalidate the experiment and the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations: eliminating real-life variables can lead to inaccurate conclusions, and researchers may even control variables to suit their personal preferences.
  • It is a time-consuming process. Much time is spent manipulating independent variables and waiting for their effects on dependent variables to manifest.
  • It is expensive.
  • It can be risky and may raise ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient's death or deteriorating health.
  • Experimental research results are not descriptive.
  • Subjects may also introduce response bias.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects placed in 2 different environments are observed throughout the research. No matter what absurd behavior a subject exhibits during this period, their conditions will not be changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.

Not all kinds of experimental research can be carried out using simulation as a data collection tool. It is impractical for much laboratory-based research that involves chemical processes.

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population, and it is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subjects.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow a greater range of study fields.

2. The relationship between cause and effect cannot be established in non-experimental research, while it can be established in experimental research. This is because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Experimental Research vs. Alternatives and When to Use Them

1. Experimental Research vs Causal-Comparative Research

Experimental research enables you to control variables and identify how the independent variable affects the dependent variable. Causal-comparative research finds the cause-and-effect relationship between variables by comparing already existing groups that are affected differently by the independent variable.

For example, consider a study of how K-12 education affects the development of children and teenagers. An experimental study would split the children into groups, some getting formal K-12 education while others do not. This is not ethically acceptable because every child has the right to education. So, what we do instead is compare already existing groups of children who are getting a formal education with those who, due to some circumstance, cannot.

Pros and Cons of Experimental vs Causal-Comparative Research

  • Causal-Comparative:   Strengths:  More realistic than experiments, can be conducted in real-world settings.  Weaknesses:  Establishing causality can be weaker due to the lack of manipulation.

2. Experimental Research vs Correlational Research

When experimenting, you are trying to establish a cause-and-effect relationship between different variables. For example, you are trying to establish the effect of heat on water, the temperature keeps changing (independent variable) and you see how it affects the water (dependent variable).

For correlational research, you are not necessarily interested in the why or the cause-and-effect relationship between the variables, you are focusing on the relationship. Using the same water and temperature example, you are only interested in the fact that they change, you are not investigating which of the variables or other variables causes them to change.

Pros and Cons of Experimental vs Correlational Research

3. Experimental Research vs Descriptive Research

With experimental research, you alter the independent variable to see how it affects the dependent variable; with descriptive research, you simply study the characteristics of the variable under study.

So, in a study of how blown glass reacts to temperature, experimental research would alter the temperature across varying high and low levels to see how it affects the dependent variable (the glass), whereas descriptive research would simply document the properties of the glass.

Pros and Cons of Experimental vs Descriptive Research

4. Experimental Research vs Action Research

Experimental research tests for causal relationships by focusing on one independent variable vs the dependent variable and keeps other variables constant. So, you are testing hypotheses and using the information from the research to contribute to knowledge.

However, with action research, you are using a real-world setting which means you are not controlling variables. You are also performing the research to solve actual problems and improve already established practices.

For example, suppose you are testing how long commutes affect workers’ productivity. With experimental research, you would vary the length of the commute to see how travel time affects work. With action research, you would also account for other factors such as weather, commute route, nutrition, etc. Experimental research helps you establish the relationship between commute time and productivity, while action research helps you look for ways to improve productivity.

Pros and Cons of Experimental vs Action Research

Conclusion

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, subjects are randomly assigned to different treatments (i.e., levels of an independent variable manipulated by the researcher), and the effects on the dependent variables are observed in order to draw conclusions. One unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 

busayo.longe

Formplus


Experimental Method In Psychology

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD., is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


The experimental method involves the manipulation of variables to establish cause-and-effect relationships. The key features are controlled methods and the random allocation of participants into control and experimental groups.

What is an Experiment?

An experiment is an investigation in which a hypothesis is scientifically tested. An independent variable (the cause) is manipulated in an experiment, and the dependent variable (the effect) is measured; any extraneous variables are controlled.

An advantage is that experiments should be objective. The researcher’s views and opinions should not affect a study’s results. This makes the data more valid and less biased.

There are three types of experiments you need to know:

1. Lab Experiment

A laboratory experiment in psychology is a research method in which the experimenter manipulates one or more independent variables and measures the effects on the dependent variable under controlled conditions.

A laboratory experiment is conducted under highly controlled conditions (not necessarily a laboratory) where accurate measurements are possible.

The researcher uses a standardized procedure to determine where the experiment will take place, at what time, with which participants, and in what circumstances.

Participants are randomly allocated to each independent variable group.

Examples are Milgram’s experiment on obedience and Loftus and Palmer’s car crash study.

  • Strength : It is easier to replicate (i.e., copy) a laboratory experiment. This is because a standardized procedure is used.
  • Strength : They allow for precise control of extraneous and independent variables. This allows a cause-and-effect relationship to be established.
  • Limitation : The artificiality of the setting may produce unnatural behavior that does not reflect real life, i.e., low ecological validity. This means it would not be possible to generalize the findings to a real-life setting.
  • Limitation : Demand characteristics or experimenter effects may bias the results and become confounding variables .

2. Field Experiment

A field experiment is a research method in psychology that takes place in a natural, real-world setting. It is similar to a laboratory experiment in that the experimenter manipulates one or more independent variables and measures the effects on the dependent variable.

However, in a field experiment, the participants are unaware they are being studied, and the experimenter has less control over the extraneous variables .

Field experiments are often used to study social phenomena, such as altruism, obedience, and persuasion. They are also used to test the effectiveness of interventions in real-world settings, such as educational programs and public health campaigns.

An example is Hofling’s hospital study on obedience.

  • Strength : Behavior in a field experiment is more likely to reflect real life because of its natural setting, i.e., higher ecological validity than a lab experiment.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied. This occurs when the study is covert.
  • Limitation : There is less control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

3. Natural Experiment

A natural experiment in psychology is a research method in which the experimenter observes the effects of a naturally occurring event or situation on the dependent variable without manipulating any variables.

Natural experiments are conducted in the everyday (i.e., real-life) environment of the participants, but here the experimenter has no control over the independent variable as it occurs naturally in real life.

Natural experiments are often used to study psychological phenomena that would be difficult or unethical to study in a laboratory setting, such as the effects of natural disasters, policy changes, or social movements.

For example, Hodges and Tizard’s attachment research (1989) compared the long-term development of children who have been adopted, fostered, or returned to their mothers with a control group of children who had spent all their lives in their biological families.

Here is a fictional example of a natural experiment in psychology:

Researchers might compare academic achievement rates among students born before and after a major policy change that increased funding for education.

In this case, the independent variable is the timing of the policy change, and the dependent variable is academic achievement. The researchers would not be able to manipulate the independent variable, but they could observe its effects on the dependent variable.
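As a sketch of how such a cohort comparison might be analysed, the two naturally occurring groups can be compared with an independent-samples t-test. The scores and group sizes below are purely illustrative:

```python
from scipy import stats

# Hypothetical achievement scores for two naturally occurring cohorts:
# students schooled before vs. after the funding increase (invented numbers).
before_policy = [61, 58, 64, 59, 62, 57, 60, 63]
after_policy = [66, 69, 64, 71, 68, 65, 70, 67]

# The researcher cannot assign students to cohorts; the "treatment"
# (policy timing) occurred naturally. We can still compare group means.
t_stat, p_value = stats.ttest_ind(before_policy, after_policy)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A significant difference here would be consistent with a policy effect, though without random assignment other explanations cannot be ruled out.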

  • Strength : Behavior in a natural experiment is more likely to reflect real life because of its natural setting, i.e., very high ecological validity.
  • Strength : Demand characteristics are less likely to affect the results, as participants may not know they are being studied.
  • Strength : It can be used in situations in which it would be ethically unacceptable to manipulate the independent variable, e.g., researching stress .
  • Limitation : They may be more expensive and time-consuming than lab experiments.
  • Limitation : There is no control over extraneous variables that might bias the results. This makes it difficult for another researcher to replicate the study in exactly the same way.

Key Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. EVs should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of participating in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.
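A common way to limit order effects is counterbalancing: half the participants complete the conditions in one order and half in the reverse order, so practice and fatigue effects are spread evenly across conditions. A minimal sketch (the participant IDs are hypothetical):

```python
import random

def counterbalance(participants):
    """Assign alternating AB / BA condition orders so that practice and
    fatigue effects are distributed evenly across both conditions."""
    random.shuffle(participants)  # randomise who receives which order
    return {
        person: ["A", "B"] if i % 2 == 0 else ["B", "A"]
        for i, person in enumerate(participants)
    }

# Half the sample runs condition A first, half runs B first.
orders = counterbalance(["p1", "p2", "p3", "p4"])
```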


Experimental Research: What it is + Types of designs


Any research conducted under scientifically acceptable conditions uses experimental methods. The success of experimental studies hinges on researchers confirming that the change in a variable is based solely on the manipulation of the independent variable. The research should establish a notable cause-and-effect relationship.

What is Experimental Research?

Experimental research is a study conducted with a scientific approach using two sets of variables. The first set acts as a constant, which you use to measure the differences of the second set. Quantitative research methods, for example, are experimental.

If you don’t have enough data to support your decisions, you must first determine the facts. This research gathers the data necessary to help you make better decisions.

You can conduct experimental research in the following situations:

  • Time is a vital factor in establishing a relationship between cause and effect.
  • The behavior between cause and effect is invariable (i.e., consistent and predictable).
  • You wish to understand the importance of cause and effect.

Experimental Research Design Types

The classic experimental design definition is: “The methods used to collect data in experimental studies.”

There are three primary types of experimental design:

  • Pre-experimental research design
  • True experimental research design
  • Quasi-experimental research design

The way you classify research subjects based on conditions or groups determines the type of research design  you should use.

1. Pre-Experimental Design

A group, or various groups, are kept under observation after implementing cause and effect factors. You’ll conduct this research to understand whether further investigation is necessary for these particular groups.

You can break down pre-experimental research further into three types:

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Design

It relies on statistical analysis to prove or disprove a hypothesis, making it the most accurate form of research. Of the types of experimental design, only true design can establish a cause-effect relationship within a group. In a true experiment, three factors need to be satisfied:

  • There is a Control Group, which won’t be subject to changes, and an Experimental Group, which will experience the changed variables.
  • A variable that can be manipulated by the researcher
  • Random distribution

This experimental research method commonly occurs in the physical sciences.

3. Quasi-Experimental Design

The word “Quasi” indicates similarity. A quasi-experimental design is similar to an experimental one, but it is not the same. The difference between the two is the assignment of a control group. In this research, an independent variable is manipulated, but the participants of a group are not randomly assigned. Quasi-research is used in field settings where random assignment is either irrelevant or not required.

Importance of Experimental Design

Experimental research is a powerful tool for understanding cause-and-effect relationships. It allows us to manipulate variables and observe the effects, which is crucial for understanding how different factors influence the outcome of a study.

But the importance of experimental research goes beyond that. It’s a critical method for many scientific and academic studies. It allows us to test theories, develop new products, and make groundbreaking discoveries.

For example, this research is essential for developing new drugs and medical treatments. Researchers can understand how a new drug works by manipulating dosage and administration variables and identifying potential side effects.

Similarly, experimental research is used in the field of psychology to test theories and understand human behavior. By manipulating variables such as stimuli, researchers can gain insights into how the brain works and identify new treatment options for mental health disorders.

It is also widely used in the field of education. It allows educators to test new teaching methods and identify what works best. By manipulating variables such as class size, teaching style, and curriculum, researchers can understand how students learn and identify new ways to improve educational outcomes.

In addition, experimental research is a powerful tool for businesses and organizations. By manipulating variables such as marketing strategies, product design, and customer service, companies can understand what works best and identify new opportunities for growth.

Advantages of Experimental Research

When talking about this research, we can think of human life. Babies do their own rudimentary experiments (such as putting objects in their mouths) to learn about the world around them, while older children and teens do experiments at school to learn more about science.

Ancient scientists used this research to prove that their hypotheses were correct. For example, Galileo Galilei and Antoine Lavoisier conducted various experiments to discover key concepts in physics and chemistry. The same is true of modern experts, who use this scientific method to see if new drugs are effective, discover treatments for diseases, and create new electronic devices (among others).

It’s vital to test new ideas or theories. Why put time, effort, and funding into something that may not work?

This research allows you to test your idea in a controlled environment before marketing. It also provides the best method to test your theory thanks to the following advantages:


  • Researchers have a stronger hold over variables to obtain desired results.
  • The subject or industry does not impact the effectiveness of experimental research. Any industry can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, you can apply your findings to similar ideas or situations.
  • You can identify the cause and effect of a hypothesis. Researchers can further analyze this relationship to determine more in-depth ideas.
  • Experimental research makes an ideal starting point. The data you collect is a foundation for building more ideas and conducting more action research .

Whether you want to know how the public will react to a new product or if a certain food increases the chance of disease, experimental research is the best place to start. Begin your research by finding subjects using  QuestionPro Audience  and other tools today.


Logo for University of Southern Queensland


10 Experimental research

Experimental research—often considered to be the ‘gold standard’ in research designs—is one of the most rigorous of all research designs. In this design, one or more independent variables are manipulated by the researcher (as treatments), subjects are randomly assigned to different treatment levels (random assignment), and the results of the treatments on outcomes (dependent variables) are observed. The unique strength of experimental research is its internal validity (causality) due to its ability to link cause and effect through treatment manipulation, while controlling for the spurious effect of extraneous variables.

Experimental research is best suited for explanatory research—rather than for descriptive or exploratory research—where the goal of the study is to examine cause-effect relationships. It also works well for research that involves a relatively limited and well-defined set of independent variables that can either be manipulated or controlled. Experimental research can be conducted in laboratory or field settings. Laboratory experiments , conducted in laboratory (artificial) settings, tend to be high in internal validity, but this comes at the cost of low external validity (generalisability), because the artificial (laboratory) setting in which the study is conducted may not reflect the real world. Field experiments are conducted in field settings such as in a real organisation, and are high in both internal and external validity. But such experiments are relatively rare, because of the difficulties associated with manipulating treatments and controlling for extraneous effects in a field setting.

Experimental research can be grouped into two broad categories: true experimental designs and quasi-experimental designs. Both designs require treatment manipulation, but while true experiments also require random assignment, quasi-experiments do not. Sometimes, we also refer to non-experimental research, which is not really a research design, but an all-inclusive term that includes all types of research that do not employ treatment manipulation or random assignment, such as survey research, observational research, and correlational studies.

Basic concepts

Treatment and control groups. In experimental research, some subjects are administered one or more experimental stimulus called a treatment (the treatment group ) while other subjects are not given such a stimulus (the control group ). The treatment may be considered successful if subjects in the treatment group rate more favourably on outcome variables than control group subjects. Multiple levels of experimental stimulus may be administered, in which case, there may be more than one treatment group. For example, in order to test the effects of a new drug intended to treat a certain medical condition like dementia, if a sample of dementia patients is randomly divided into three groups, with the first group receiving a high dosage of the drug, the second group receiving a low dosage, and the third group receiving a placebo such as a sugar pill (control group), then the first two groups are experimental groups and the third group is a control group. After administering the drug for a period of time, if the condition of the experimental group subjects improved significantly more than the control group subjects, we can say that the drug is effective. We can also compare the conditions of the high and low dosage experimental groups to determine if the high dose is more effective than the low dose.

Treatment manipulation. Treatments are the unique feature of experimental research that sets this design apart from all other research methods. Treatment manipulation helps control for the ‘cause’ in cause-effect relationships. Naturally, the validity of experimental research depends on how well the treatment was manipulated. Treatment manipulation must be checked using pretests and pilot tests prior to the experimental study. Any measurements conducted before the treatment is administered are called pretest measures , while those conducted after the treatment are posttest measures .

Random selection and assignment. Random selection is the process of randomly drawing a sample from a population or a sampling frame. This approach is typically employed in survey research, and ensures that each unit in the population has a positive chance of being selected into the sample. Random assignment, however, is a process of randomly assigning subjects to experimental or control groups. This is a standard practice in true experimental research to ensure that treatment groups are similar (equivalent) to each other and to the control group prior to treatment administration. Random selection is related to sampling, and is therefore more closely related to the external validity (generalisability) of findings. However, random assignment is related to design, and is therefore most related to internal validity. It is possible to have both random selection and random assignment in well-designed experimental research, but quasi-experimental research involves neither random selection nor random assignment.
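The distinction between random selection and random assignment can be sketched in a few lines of code; the population size and group sizes below are arbitrary:

```python
import random

# Hypothetical sampling frame of 1,000 units.
population = [f"unit_{i}" for i in range(1000)]

# Random selection: draw the study sample from the population
# (relates to external validity / generalisability).
sample = random.sample(population, 40)

# Random assignment: split the selected subjects into treatment and
# control groups (relates to internal validity).
shuffled = random.sample(sample, len(sample))
treatment_group, control_group = shuffled[:20], shuffled[20:]
```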

Threats to internal validity. Although experimental designs are considered more rigorous than other research methods in terms of the internal validity of their inferences (by virtue of their ability to control causes through treatment manipulation), they are not immune to internal validity threats. Some of these threats to internal validity are described below, within the context of a study of the impact of a special remedial math tutoring program for improving the math abilities of high school students.

History threat is the possibility that the observed effects (dependent variables) are caused by extraneous or historical events rather than by the experimental treatment. For instance, students’ post-remedial math score improvement may have been caused by their preparation for a math exam at their school, rather than the remedial math program.

Maturation threat refers to the possibility that observed effects are caused by natural maturation of subjects (e.g., a general improvement in their intellectual ability to understand complex concepts) rather than the experimental treatment.

Testing threat is a threat in pre-post designs where subjects’ posttest responses are conditioned by their pretest responses. For instance, if students remember their answers from the pretest evaluation, they may tend to repeat them in the posttest exam. Not conducting a pretest can help avoid this threat.

Instrumentation threat , which also occurs in pre-post designs, refers to the possibility that the difference between pretest and posttest scores is not due to the remedial math program, but due to changes in the administered test, such as the posttest having a higher or lower degree of difficulty than the pretest.

Mortality threat refers to the possibility that subjects may be dropping out of the study at differential rates between the treatment and control groups due to a systematic reason, such that the dropouts were mostly students who scored low on the pretest. If the low-performing students drop out, the results of the posttest will be artificially inflated by the preponderance of high-performing students.

Regression threat —also called a regression to the mean—refers to the statistical tendency of a group’s overall performance to regress toward the mean during a posttest rather than in the anticipated direction. For instance, if subjects scored high on a pretest, they will have a tendency to score lower on the posttest (closer to the mean) because their high scores (away from the mean) during the pretest were possibly a statistical aberration. This problem tends to be more prevalent in non-random samples and when the two measures are imperfectly correlated.
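Regression to the mean can be demonstrated with a small simulation (all distribution parameters below are illustrative): subjects selected for extreme pretest scores tend to score closer to the population mean on the posttest, even with no treatment at all.

```python
import random

random.seed(1)  # reproducible illustration

# Each subject has a stable "true ability"; pretest and posttest each add
# independent measurement noise, so the two scores correlate imperfectly.
ability = [random.gauss(50, 10) for _ in range(5000)]
pretest = [a + random.gauss(0, 10) for a in ability]
posttest = [a + random.gauss(0, 10) for a in ability]

# Select the subjects with the highest pretest scores...
top = sorted(range(5000), key=lambda i: pretest[i], reverse=True)[:500]
mean_pre = sum(pretest[i] for i in top) / len(top)
mean_post = sum(posttest[i] for i in top) / len(top)

# ...their posttest mean falls back toward the population mean of ~50,
# even though no treatment was applied.
```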

Two-group experimental designs


Pretest-posttest control group design . In this design, subjects are randomly assigned to treatment and control groups, subjected to an initial (pretest) measurement of the dependent variables of interest, the treatment group is administered a treatment (representing the independent variable of interest), and the dependent variables measured again (posttest). The notation of this design is shown in Figure 10.1.

Pretest-posttest control group design

Statistical analysis of this design involves a simple analysis of variance (ANOVA) between the treatment and control groups. The pretest-posttest design handles several threats to internal validity, such as maturation, testing, and regression, since these threats can be expected to influence both treatment and control groups in a similar (random) manner. The selection threat is controlled via random assignment. However, additional threats to internal validity may exist. For instance, mortality can be a problem if there are differential dropout rates between the two groups, and the pretest measurement may bias the posttest measurement—especially if the pretest introduces unusual topics or content.

Posttest-only control group design . This design is a simpler version of the pretest-posttest design where pretest measurements are omitted. The design notation is shown in Figure 10.2.

Posttest-only control group design

The treatment effect is measured simply as the difference in the posttest scores between the two groups:

\[E = (O_{1} - O_{2})\,.\]

The appropriate statistical analysis of this design is also a two-group analysis of variance (ANOVA). The simplicity of this design makes it more attractive than the pretest-posttest design in terms of internal validity. This design controls for maturation, testing, regression, selection, and pretest-posttest interaction, though the mortality threat may continue to exist.
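As a sketch of this analysis (the posttest scores are hypothetical), the treatment effect and the two-group comparison might be computed as:

```python
from scipy import stats

# Hypothetical posttest scores for a posttest-only control group design.
treatment_posttest = [78, 82, 75, 80, 84, 79, 77, 81]  # O1
control_posttest = [70, 73, 68, 72, 74, 69, 71, 75]    # O2

# Treatment effect: E = mean(O1) - mean(O2)
E = (sum(treatment_posttest) / len(treatment_posttest)
     - sum(control_posttest) / len(control_posttest))

# Two-group ANOVA on the posttest scores (with exactly two groups this is
# equivalent to an independent-samples t-test).
f_stat, p_value = stats.f_oneway(treatment_posttest, control_posttest)
```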

Covariance design.

Because the pretest measure is not a measurement of the dependent variable, but rather a covariate, the treatment effect is measured as the difference in the posttest scores between the treatment and control groups as:

\[E = (O_{1} - O_{2})\,.\]

Due to the presence of covariates, the right statistical analysis of this design is a two-group analysis of covariance (ANCOVA). This design has all the advantages of the posttest-only design, but with improved internal validity due to the controlling of covariates. Covariance designs can also be extended to pretest-posttest control group designs.
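A minimal sketch of the adjustment ANCOVA performs is to regress the posttest on a treatment indicator plus the pretest covariate via ordinary least squares; the data below are hypothetical:

```python
import numpy as np

# Hypothetical covariance-design data: 'pretest' is a covariate,
# not a measurement of the dependent variable.
treatment = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=float)
pretest = np.array([52, 60, 55, 58, 63, 54, 59, 57, 61, 56], dtype=float)
posttest = np.array([70, 78, 73, 75, 80, 62, 68, 66, 70, 64], dtype=float)

# ANCOVA as a linear model: posttest ~ intercept + treatment + pretest.
X = np.column_stack([np.ones_like(treatment), treatment, pretest])
coeffs, *_ = np.linalg.lstsq(X, posttest, rcond=None)
intercept, adjusted_effect, pretest_slope = coeffs
# adjusted_effect estimates the treatment effect after controlling
# for the pretest covariate.
```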

Factorial designs

Two-group designs are inadequate if your research requires manipulation of two or more independent variables (treatments). In such cases, you would need four or higher-group designs. Such designs, quite popular in experimental research, are commonly called factorial designs. Each independent variable in this design is called a factor , and each subdivision of a factor is called a level . Factorial designs enable the researcher to examine not only the individual effect of each treatment on the dependent variables (called main effects), but also their joint effect (called interaction effects).

The simplest factorial design is a 2 × 2 factorial design, with two factors, each having two levels. For instance, a study of learning outcomes might cross instructional type (e.g., online versus classroom instruction) with instructional time (e.g., one and a half versus three hours per week), yielding four treatment groups.

In a factorial design, a main effect is said to exist if the dependent variable shows a significant difference between multiple levels of one factor, at all levels of other factors. No change in the dependent variable across factor levels is the null case (baseline), from which main effects are evaluated. In the above example, you may see a main effect of instructional type, instructional time, or both on learning outcomes. An interaction effect exists when the effect of differences in one factor depends upon the level of a second factor. In our example, if the effect of instructional type on learning outcomes is greater for three hours/week of instructional time than for one and a half hours/week, then we can say that there is an interaction effect between instructional type and instructional time on learning outcomes. Note that when interaction effects are present, they dominate and render main effects irrelevant; it is not meaningful to interpret main effects if interaction effects are significant.
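The distinction between main effects and interaction effects can be illustrated with hypothetical cell means for an instructional-type by instructional-time design (all numbers are invented):

```python
# Hypothetical mean learning outcomes in a factorial design crossing
# instructional type with instructional time.
cell_means = {
    ("online",    "1.5h/week"): 60,
    ("online",    "3h/week"):   62,
    ("classroom", "1.5h/week"): 64,
    ("classroom", "3h/week"):   78,
}

# Simple effect of instructional type at each level of instructional time.
effect_at_low_time = (cell_means[("classroom", "1.5h/week")]
                      - cell_means[("online", "1.5h/week")])
effect_at_high_time = (cell_means[("classroom", "3h/week")]
                       - cell_means[("online", "3h/week")])

# If the effect of one factor differs across levels of the other factor,
# an interaction effect is present (here: 16 points vs. 4 points).
interaction_present = effect_at_high_time != effect_at_low_time
```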

Hybrid experimental designs

Hybrid designs are those that are formed by combining features of more established designs. Three such hybrid designs are the randomised blocks design, the Solomon four-group design, and the switched replications design.

Randomised block design. This is a variation of the posttest-only or pretest-posttest control group design where the subject population can be grouped into relatively homogeneous subgroups (called blocks) within which the experiment is replicated. For instance, if you want to replicate the same posttest-only design among university students and full-time working professionals (two homogeneous blocks), subjects in both blocks are randomly split between the treatment group (receiving the same treatment) and the control group (see Figure 10.5). The purpose of this design is to reduce the ‘noise’ or variance in data that may be attributable to differences between the blocks so that the actual effect of interest can be detected more accurately.

Randomised blocks design
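The block-then-randomise procedure can be sketched in a few lines of Python (hypothetical rosters; the key point is that randomisation happens within each block, not across the pooled sample):

```python
import random

random.seed(42)

# Two relatively homogeneous blocks (hypothetical rosters)
blocks = {
    "students":      [f"S{i}" for i in range(8)],
    "professionals": [f"P{i}" for i in range(8)],
}

assignment = {}
for block, subjects in blocks.items():
    shuffled = subjects[:]
    random.shuffle(shuffled)          # randomise within the block only
    half = len(shuffled) // 2
    assignment[block] = {"treatment": shuffled[:half],
                         "control":   shuffled[half:]}

# Each block contributes equally to both conditions, so block-level
# differences cannot masquerade as a treatment effect.
for block, groups in assignment.items():
    print(block, {name: len(members) for name, members in groups.items()})
```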

Solomon four-group design. In this design, the sample is divided into two treatment groups and two control groups. One treatment group and one control group receive the pretest, and the other two groups do not. This design represents a combination of posttest-only and pretest-posttest control group design, and is intended to test for the potential biasing effect of pretest measurement on posttest measures that tends to occur in pretest-posttest designs, but not in posttest-only designs. The design notation is shown in Figure 10.6.

Solomon four-group design

Switched replication design. This is a two-group design implemented in two phases with three waves of measurement. The treatment group in the first phase serves as the control group in the second phase, and the control group in the first phase becomes the treatment group in the second phase, as illustrated in Figure 10.7. In other words, the original design is repeated or replicated temporally with treatment/control roles switched between the two groups. By the end of the study, all participants will have received the treatment either during the first or the second phase. This design is most feasible in organisational contexts where organisational programs (e.g., employee training) are implemented in a phased manner or are repeated at regular intervals.

Switched replication design
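The phase schedule can be written down directly; a small sketch (hypothetical group labels) makes the role switch explicit:

```python
# Two phases with roles switched; three measurement waves (O1, O2, O3)
# are taken before phase 1, between the phases, and after phase 2.
phases = [
    {"group A": "treatment", "group B": "control"},    # phase 1: O1 -> O2
    {"group A": "control",   "group B": "treatment"},  # phase 2: O2 -> O3
]

# By the end of the study every group has been treated exactly once
for group in ("group A", "group B"):
    times_treated = sum(phase[group] == "treatment" for phase in phases)
    print(f"{group}: treated in {times_treated} phase")
```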

Quasi-experimental designs

Quasi-experimental designs are almost identical to true experimental designs, but lack one key ingredient: random assignment. For instance, one entire class section or one organisation is used as the treatment group, while another section of the same class or a different organisation in the same industry is used as the control group. This lack of random assignment potentially results in groups that are non-equivalent, such as one group possessing greater mastery of certain content than the other group, say by virtue of having a better teacher in a previous semester, which introduces the possibility of selection bias. Quasi-experimental designs are therefore inferior to true experimental designs in internal validity due to the presence of a variety of selection-related threats such as selection-maturation threat (the treatment and control groups maturing at different rates), selection-history threat (the treatment and control groups being differentially impacted by extraneous or historical events), selection-regression threat (the treatment and control groups regressing toward the mean between pretest and posttest at different rates), selection-instrumentation threat (the treatment and control groups responding differently to the measurement), selection-testing threat (the treatment and control groups responding differently to the pretest), and selection-mortality threat (the treatment and control groups demonstrating differential dropout rates). Given these selection threats, it is generally preferable to avoid quasi-experimental designs to the greatest extent possible.


In addition, there are quite a few unique non-equivalent designs without corresponding true experimental design cousins. Some of the more useful of these designs are discussed next.

Regression discontinuity (RD) design. This is a non-equivalent pretest-posttest design where subjects are assigned to the treatment or control group based on a cut-off score on a preprogram measure. For instance, patients who are severely ill may be assigned to a treatment group to test the efficacy of a new drug or treatment protocol and those who are mildly ill are assigned to the control group. In another example, students who are lagging behind on standardised test scores may be selected for a remedial curriculum program intended to improve their performance, while those who score high on such tests are not selected for the remedial program.

RD design

Because of the use of a cut-off score, it is possible that the observed results may be a function of the cut-off score rather than the treatment, which introduces a new threat to internal validity. However, using the cut-off score also ensures that limited or costly resources are distributed to people who need them the most, rather than randomly across a population, while simultaneously allowing a quasi-experimental treatment. The control group scores in the RD design do not serve as a benchmark for comparing treatment group scores, given the systematic non-equivalence between the two groups. Rather, if there is no discontinuity between pretest and posttest scores in the control group, but such a discontinuity persists in the treatment group, then this discontinuity is viewed as evidence of the treatment effect.
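A deterministic sketch (hypothetical scores, no noise) shows both the cut-off assignment and why a naive comparison of window means near the cut-off understates the jump:

```python
# Hypothetical pre-program scores 0..99; a cut-off of 50 assigns the
# lower-scoring subjects to treatment (no noise, for clarity).
CUTOFF = 50
subjects = [{"pre": float(i)} for i in range(100)]

for s in subjects:
    s["group"] = "treatment" if s["pre"] < CUTOFF else "control"
    # Posttest trends with the pretest, plus a +10 jump for treatment
    s["post"] = 0.9 * s["pre"] + (10 if s["group"] == "treatment" else 0)

# Crude discontinuity check: compare posttest means in narrow windows on
# either side of the cut-off. Because the outcome also trends with the
# pretest, this window average understates the true +10 jump
# (10 - 0.9 * 5 = 5.5 here); fitting a regression line on each side of
# the cut-off, as RD analysis does, would recover the full jump.
near_below = [s["post"] for s in subjects if CUTOFF - 5 <= s["pre"] < CUTOFF]
near_above = [s["post"] for s in subjects if CUTOFF <= s["pre"] < CUTOFF + 5]
jump = sum(near_below) / len(near_below) - sum(near_above) / len(near_above)
print(f"estimated discontinuity = {jump:.1f}")   # 5.5
```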

Proxy pretest design. This design, shown in Figure 10.11, looks very similar to the standard NEGD (pretest-posttest) design, with one critical difference: the pretest score is collected after the treatment is administered. A typical application of this design is when a researcher is brought in to test the efficacy of a program (e.g., an educational program) after the program has already started and pretest data is not available. Under such circumstances, the best option for the researcher is often to use a different prerecorded measure, such as students’ grade point average before the start of the program, as a proxy for pretest data. A variation of the proxy pretest design is to use subjects’ posttest recollection of pretest data, which may be subject to recall bias, but nevertheless may provide a measure of perceived gain or change in the dependent variable.

Proxy pretest design

Separate pretest-posttest samples design. This design is useful if it is not possible to collect pretest and posttest data from the same subjects for some reason. As shown in Figure 10.12, there are four groups in this design, but two groups come from a single non-equivalent group, while the other two groups come from a different non-equivalent group. For instance, say you want to test customer satisfaction with a new online service that is implemented in one city but not in another. In this case, customers in the first city serve as the treatment group and those in the second city constitute the control group. If it is not possible to obtain pretest and posttest measures from the same customers, you can measure customer satisfaction at one point in time, implement the new service program, and measure customer satisfaction (with a different set of customers) after the program is implemented. Customer satisfaction is also measured in the control group at the same times as in the treatment group, but without the new program implementation. The design is not particularly strong, because you cannot examine changes in any specific customer’s satisfaction score before and after the implementation; you can only examine average customer satisfaction scores. Despite the lower internal validity, this design may still be a useful way of collecting quasi-experimental data when pretest and posttest data is not available from the same subjects.

Separate pretest-posttest samples design

An interesting variation of the NEDV (non-equivalent dependent variable) design is the pattern-matching NEDV design, which employs multiple outcome variables and a theory that explains how much each variable will be affected by the treatment. The researcher can then examine if the theoretical prediction is matched in actual observations. This pattern-matching technique—based on the degree of correspondence between theoretical and observed patterns—is a powerful way of alleviating internal validity concerns in the original NEDV design.

NEDV design

Perils of experimental research

Experimental research is one of the most difficult of research designs, and should not be taken lightly. This type of research is often beset by a multitude of methodological problems. First, though experimental research requires theories for framing hypotheses for testing, much of current experimental research is atheoretical. Without theories, the hypotheses being tested tend to be ad hoc, possibly illogical, and meaningless. Second, many of the measurement instruments used in experimental research are not tested for reliability and validity, and are incomparable across studies. Consequently, results generated using such instruments are also incomparable. Third, experimental research often uses inappropriate research designs, such as irrelevant dependent variables, no interaction effects, no experimental controls, and non-equivalent stimuli across treatment groups. Findings from such studies tend to lack internal validity and are highly suspect. Fourth, the treatments (tasks) used in experimental research may be diverse, incomparable, and inconsistent across studies, and sometimes inappropriate for the subject population. For instance, undergraduate student subjects are often asked to pretend that they are marketing managers and to perform a complex budget allocation task in which they have no experience or expertise. The use of such inappropriate tasks introduces new threats to internal validity (i.e., subjects’ performance may be an artefact of the content or difficulty of the task setting), generates findings that are non-interpretable and meaningless, and makes integration of findings across studies impossible.

The design of proper experimental treatments is a very important task in experimental design, because the treatment is the raison d’etre of the experimental method, and must never be rushed or neglected. To design an adequate and appropriate task, researchers should use prevalidated tasks if available, conduct treatment manipulation checks to check for the adequacy of such tasks (by debriefing subjects after performing the assigned task), conduct pilot tests (repeatedly, if necessary), and if in doubt, use tasks that are simple and familiar for the respondent sample rather than tasks that are complex or unfamiliar.

In summary, this chapter introduced key concepts in the experimental design research method and introduced a variety of true experimental and quasi-experimental designs. Although these designs vary widely in internal validity, designs with less internal validity should not be overlooked and may sometimes be useful under specific circumstances and empirical contingencies.

Social Science Research: Principles, Methods and Practices (Revised edition) Copyright © 2019 by Anol Bhattacherjee is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book


A Quick Guide to Experimental Design | 5 Steps & Examples

Published on 11 April 2022 by Rebecca Bevans . Revised on 5 December 2022.

Experiments are used to study causal relationships . You manipulate one or more independent variables and measure their effect on one or more dependent variables.

Experimental design means creating a set of procedures to systematically test a hypothesis . A good experimental design requires a strong understanding of the system you are studying. 

There are five key steps in designing an experiment:

  • Consider your variables and how they are related
  • Write a specific, testable hypothesis
  • Design experimental treatments to manipulate your independent variable
  • Assign subjects to groups, either between-subjects or within-subjects
  • Plan how you will measure your dependent variable

For valid conclusions, you also need to select a representative sample and control any extraneous variables that might influence your results. If random assignment of participants to control and treatment groups is impossible, unethical, or highly difficult, consider an observational study instead.

Table of contents

  • Step 1: Define your variables
  • Step 2: Write your hypothesis
  • Step 3: Design your experimental treatments
  • Step 4: Assign your subjects to treatment groups
  • Step 5: Measure your dependent variable
  • Frequently asked questions about experimental design

You should begin with a specific research question . We will work with two research question examples, one from health sciences and one from ecology:

To translate your research question into an experimental hypothesis, you need to define the main variables and make predictions about how they are related.

Start by simply listing the independent and dependent variables .

Research question | Independent variable | Dependent variable
Phone use and sleep | Minutes of phone use before sleep | Hours of sleep per night
Temperature and soil respiration | Air temperature just above the soil surface | CO2 respired from soil

Then you need to think about possible extraneous and confounding variables and consider how you might control  them in your experiment.

Research question | Extraneous variable | How to control
Phone use and sleep | Natural variation in sleep patterns among individuals | Measure the average difference between sleep with phone use and sleep without phone use, rather than the average amount of sleep per treatment group
Temperature and soil respiration | Soil moisture also affects respiration, and moisture can decrease with increasing temperature | Monitor soil moisture and add water to make sure that soil moisture is consistent across all treatment plots

Finally, you can put these variables together into a diagram. Use arrows to show the possible relationships between variables and include signs to show the expected direction of the relationships.

Diagram of the relationship between variables in a sleep experiment

Here we predict that increasing temperature will increase soil respiration and decrease soil moisture, while decreasing soil moisture will lead to decreased soil respiration.


Now that you have a strong conceptual understanding of the system you are studying, you should be able to write a specific, testable hypothesis that addresses your research question.

Research question | Null hypothesis (H0) | Alternate hypothesis (Ha)
Phone use and sleep | Phone use before sleep does not correlate with the amount of sleep a person gets. | Increasing phone use before sleep leads to a decrease in sleep.
Temperature and soil respiration | Air temperature does not correlate with soil respiration. | Increased air temperature leads to increased soil respiration.

The next steps will describe how to design a controlled experiment . In a controlled experiment, you must be able to:

  • Systematically and precisely manipulate the independent variable(s).
  • Precisely measure the dependent variable(s).
  • Control any potential confounding variables.

If your study system doesn’t match these criteria, there are other types of research you can use to answer your research question.

How you manipulate the independent variable can affect the experiment’s external validity – that is, the extent to which the results can be generalised and applied to the broader world.

First, you may need to decide how widely to vary your independent variable. For example, in the soil respiration experiment you could increase air temperature:

  • just slightly above the natural range for your study region.
  • over a wider range of temperatures to mimic future warming.
  • over an extreme range that is beyond any possible natural variation.

Second, you may need to choose how finely to vary your independent variable. Sometimes this choice is made for you by your experimental system, but often you will need to decide, and this will affect how much you can infer from your results. For example, in the phone use experiment you could treat phone use as:

  • a categorical variable: either as binary (yes/no) or as levels of a factor (no phone use, low phone use, high phone use).
  • a continuous variable (minutes of phone use measured every night).

How you apply your experimental treatments to your test subjects is crucial for obtaining valid and reliable results.

First, you need to consider the study size : how many individuals will be included in the experiment? In general, the more subjects you include, the greater your experiment’s statistical power , which determines how much confidence you can have in your results.
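The link between study size and statistical power can be illustrated by simulation. In this sketch (effect size, alpha, and sample sizes are arbitrary choices for illustration), power is estimated as the fraction of simulated two-group studies in which a simple z-test detects a real difference:

```python
import random
import statistics

random.seed(0)

def detected(n, effect=0.5, z_crit=1.96):
    """Simulate one two-group study; return True if a z-test flags
    the (real) group difference at the 5% level."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(effect, 1) for _ in range(n)]
    se = (statistics.pvariance(a) / n + statistics.pvariance(b) / n) ** 0.5
    return abs(statistics.mean(b) - statistics.mean(a)) / se > z_crit

# Estimated power = fraction of simulated studies that detect the effect;
# it rises steeply as the per-group sample size grows.
for n in (10, 40, 160):
    power = sum(detected(n) for _ in range(500)) / 500
    print(f"n = {n:3d} per group -> estimated power {power:.2f}")
```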

Then you need to randomly assign your subjects to treatment groups . Each group receives a different level of the treatment (e.g. no phone use, low phone use, high phone use).

You should also include a control group , which receives no treatment. The control group tells us what would have happened to your test subjects without any experimental intervention.

When assigning your subjects to groups, there are two main choices you need to make:

  • A completely randomised design vs a randomised block design .
  • A between-subjects design vs a within-subjects design .

Randomisation

An experiment can be completely randomised or randomised within blocks (aka strata):

  • In a completely randomised design , every subject is assigned to a treatment group at random.
  • In a randomised block design (aka stratified random design), subjects are first grouped according to a characteristic they share, and then randomly assigned to treatments within those groups.

Research question | Completely randomised design | Randomised block design
Phone use and sleep | Subjects are all randomly assigned a level of phone use using a random number generator. | Subjects are first grouped by age, and then phone use treatments are randomly assigned within these groups.
Temperature and soil respiration | Warming treatments are assigned to soil plots at random by using a number generator to generate map coordinates within the study area. | Soils are first grouped by average rainfall, and then treatment plots are randomly assigned within these groups.

Sometimes randomisation isn’t practical or ethical , so researchers create partially-random or even non-random designs. An experimental design where treatments aren’t randomly assigned is called a quasi-experimental design .

Between-subjects vs within-subjects

In a between-subjects design (also known as an independent measures design or classic ANOVA design), individuals receive only one of the possible levels of an experimental treatment.

In medical or social research, you might also use matched pairs within your between-subjects design to make sure that each treatment group contains the same variety of test subjects in the same proportions.

In a within-subjects design (also known as a repeated measures design), every individual receives each of the experimental treatments consecutively, and their responses to each treatment are measured.

Within-subjects or repeated measures can also refer to an experimental design where an effect emerges over time, and individual responses are measured over time in order to measure this effect as it emerges.

Counterbalancing (randomising or reversing the order of treatments among subjects) is often used in within-subjects designs to ensure that the order of treatment application doesn’t influence the results of the experiment.
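A minimal counterbalancing sketch (hypothetical subject and treatment labels): each subject receives all treatments, and the six possible orders of three treatments are spread across subjects so no single order dominates.

```python
import itertools
import random

random.seed(7)

treatments = ["none", "low", "high"]     # phone-use levels (hypothetical)
subjects = [f"subject_{i}" for i in range(6)]

# Full counterbalancing: use every possible ordering of the treatments
orders = list(itertools.permutations(treatments))   # 6 possible orders
random.shuffle(orders)

# Cycle through the orders so they are spread evenly across subjects
schedule = {s: orders[i % len(orders)] for i, s in enumerate(subjects)}
for s, order in schedule.items():
    print(s, "->", " then ".join(order))
```

With six subjects and six orders, each order is used exactly once; with more subjects, the orders simply repeat in rotation.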

Research question | Between-subjects (independent measures) design | Within-subjects (repeated measures) design
Phone use and sleep | Subjects are randomly assigned a level of phone use (none, low, or high) and follow that level of phone use throughout the experiment. | Subjects are assigned consecutively to zero, low, and high levels of phone use throughout the experiment, and the order in which they follow these treatments is randomised.
Temperature and soil respiration | Warming treatments are assigned to soil plots at random and the soils are kept at this temperature throughout the experiment. | Every plot receives each warming treatment (1, 3, 5, 8, and 10°C above ambient temperatures) consecutively over the course of the experiment, and the order in which they receive these treatments is randomised.

Finally, you need to decide how you’ll collect data on your dependent variable outcomes. You should aim for reliable and valid measurements that minimise bias or error.

Some variables, like temperature, can be objectively measured with scientific instruments. Others may need to be operationalised to turn them into measurable observations.

For example, to measure hours of sleep per night, you could:

  • Ask participants to record what time they go to sleep and get up each day.
  • Ask participants to wear a sleep tracker.

How precisely you measure your dependent variable also affects the kinds of statistical analysis you can use on your data.

Experiments are always context-dependent, and a good experimental design will take into account all of the unique considerations of your study system to produce information that is both valid and relevant to your research question.

Experimental designs are a set of procedures that you plan in order to examine the relationship between variables that interest you.

To design a successful experiment, first identify:

  • A testable hypothesis
  • One or more independent variables that you will manipulate
  • One or more dependent variables that you will measure

When designing the experiment, first decide:

  • How your variable(s) will be manipulated
  • How you will control for any potential confounding or lurking variables
  • How many subjects you will include
  • How you will assign treatments to your subjects

The key difference between observational studies and experiments is that, done correctly, an observational study will never influence the responses or behaviours of participants. Experimental designs will have a treatment condition applied to at least a portion of participants.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word ‘between’ means that you’re comparing different conditions between groups, while the word ‘within’ means you’re comparing different conditions within the same group.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the ‘Cite this Scribbr article’ button to automatically add the citation to our free Reference Generator.

Bevans, R. (2022, December 05). A Quick Guide to Experimental Design | 5 Steps & Examples. Scribbr. Retrieved 29 August 2024, from https://www.scribbr.co.uk/research-methods/guide-to-experimental-design/


Experimental design: Guide, steps, examples

Last updated

27 April 2023

Reviewed by

Miroslav Damyanov


Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. 

When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations. 

This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design. 


  • What is experimental research design?

You can determine the relationship between each of the variables by: 

Manipulating one or more independent variables (i.e., stimuli or treatments)

Measuring the resulting changes in one or more dependent variables (i.e., outcomes)

By analyzing the relationship between variables with measurable data, you can increase the accuracy of the results. 

What is a good experimental design?

A good experimental design requires: 

Significant planning to ensure control over the testing environment

Sound experimental treatments

Properly assigning subjects to treatment groups

Without proper planning, unexpected external variables can alter an experiment's outcome. 

To meet your research goals, your experimental design should include these characteristics:

Provide unbiased estimates of inputs and associated uncertainties

Enable the researcher to detect differences caused by independent variables

Include a plan for analysis and reporting of the results

Provide easily interpretable results with specific conclusions

What's the difference between experimental and quasi-experimental design?

The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups. 

A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups. 

However, these conditions are unethical or impossible to achieve in some situations.

When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in. 

This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria. 

Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.

When can a researcher conduct experimental research?

Various settings and professions can use experimental research to gather information and observe behavior in controlled settings. 

Basically, a researcher can conduct experimental research any time they want to test a theory under controlled conditions with independent and dependent variables. 

Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect. 

  • The importance of experimental research design

Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses. 

Researchers can test independent variables in controlled settings to:

Test the effectiveness of a new medication

Design better products for consumers

Answer questions about human health and behavior

Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. As a result, definitive conclusions can influence the future of the independent variable. 

Types of experimental research designs

There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations. 

Pre-experimental research design

A pre-experimental research study is a basic observational study that monitors independent variables’ effects. 

During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. 

The three subtypes of pre-experimental research design are:

One-shot case study research design

This research method introduces a single test group to a single stimulus to study the results at the end of the application. 

After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects. 

One-group pretest-posttest design

This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus. 

Static group comparison design

This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static. 

A posttest study compares the results among groups. 

True experimental research design

A true experiment is the most common research method. It involves statistical analysis to prove or disprove a specific hypothesis . 

Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli. 

Random assignment reduces the potential for bias, providing more reliable results. 

These are the three main sub-groups of true experimental research design:

Posttest-only control group design

This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.

Researchers perform a test at the end of the experiment to observe the stimuli exposure results.

Pretest-posttest control group design

This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus. 

The pretest introduces multiple ways to test subjects. For instance, if the control group also experiences a change, it reveals that taking the test twice changes the results.

Solomon four-group design

This structure divides subjects into four groups: two treatment groups and two control groups. Researchers assign the first control group a posttest only and the second control group a pretest and a posttest. 

The two treatment groups mirror the control groups, but researchers expose them to stimuli. The ability to differentiate between groups in multiple ways provides researchers with more testing approaches for data-based conclusions. 

Quasi-experimental research design

Although closely related to a true experiment, quasi-experimental research design differs in approach and scope. 

Quasi-experimental research design doesn’t have randomly selected participants. Researchers typically divide the groups in this research by pre-existing differences. 

Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.

  • 5 steps for designing an experiment

Experimental research requires a clearly defined plan to outline the research parameters and expected goals. 

Here are five key steps in designing a successful experiment:

Step 1: Define variables and their relationship

Your experiment should begin with a question: What are you hoping to learn through your experiment? 

The relationship between variables in your study will determine your answer.

Define the independent variable (the intended stimuli) and the dependent variable (the expected effect of the stimuli). After identifying these groups, consider how you might control them in your experiment. 

Could natural variations affect your research? If so, your experiment should include a pretest and posttest. 

Step 2: Develop a specific, testable hypothesis

With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis. 

What is the expected outcome of your study? 

Develop a prediction about how the independent variable will affect the dependent variable. 

How will the stimuli in your experiment affect your test subjects? 

Your hypothesis should predict the answer to your research question.

Step 3: Design experimental treatments to manipulate your independent variable

Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs). 

Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli. 

Step 4: Assign subjects to groups

When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study. 

When choosing your study groups, consider: 

  • The size of your experiment
  • Whether you can select groups randomly
  • Your target audience for the outcome of the study

You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables. 
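This step can be implemented with simple randomization into equally sized groups. Here is a sketch using Python's standard library; `assign_groups` is a hypothetical helper, and the participant IDs are placeholders.

```python
import random

# Shuffle the recruited sample, then deal subjects round-robin
# into equally sized groups (one control plus one or more treatment groups).
def assign_groups(subjects, group_names, seed=None):
    pool = list(subjects)
    random.Random(seed).shuffle(pool)
    groups = {name: [] for name in group_names}
    for i, subject in enumerate(pool):
        groups[group_names[i % len(group_names)]].append(subject)
    return groups

participants = [f"P{n:02d}" for n in range(1, 13)]  # 12 invented participant IDs
groups = assign_groups(participants, ["control", "treatment"], seed=42)
# Each group receives 6 participants; which 6 depends on the shuffle.
```

Round-robin dealing after a shuffle guarantees the groups stay equal in size even when the sample doesn't divide evenly across many groups.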

Step 5: Plan how to measure your dependent variable

This step determines how you'll collect data to determine the study's outcome. You should seek reliable and valid measurements that minimize research bias or error. 

You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.
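As a toy illustration of operationalizing a variable, the sketch below (an invented example, not from any particular study) turns the abstract construct "memory performance" into a measurable observation: the number of studied words a participant correctly recalls.

```python
# Operational definition: memory performance = count of studied words
# the participant correctly recalls.
def recall_score(studied_words, recalled_words):
    # Intrusions (words that were never studied) do not count.
    return len(set(studied_words) & set(recalled_words))

studied = ["apple", "river", "candle", "spoon", "tiger"]
recalled = ["river", "tiger", "apple", "cloud"]  # "cloud" was never studied
score = recall_score(studied, recalled)
```

The participant recalled three studied words, so the operationalized score is 3; "cloud" is ignored because the definition only counts correct recalls.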

Advantages of experimental research

Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions. 

While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:

  • Researchers can determine cause and effect by manipulating variables.
  • It gives researchers a high level of control.
  • Researchers can test multiple variables within a single experiment.
  • All industries and fields of knowledge can use it.
  • Researchers can duplicate results to promote the validity of the study.
  • Researchers can replicate natural settings rapidly, enabling timely research.
  • Researchers can combine it with other research methods.
  • It provides specific conclusions about the validity of a product, theory, or idea.

Disadvantages (or limitations) of experimental research

Unfortunately, no research type yields ideal conditions or perfect results. 

While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous. 

Before conducting experimental research, consider these disadvantages and limitations:

Required professional qualification

Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This ensures results are unbiased and valid. 

Limited scope

Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.

Resource-intensive

Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.

Limited generalizability

The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.

Practical or ethical concerns

Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines. 

Researchers must ensure their experiments do not cause harm or discomfort to participants. 

Sometimes, recruiting a sample of people to randomly assign may be difficult. 

Experimental research design example

Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, create inventions, and heal illnesses. 

Product design testing is an excellent example of experimental research. 

A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype. 

When groups experience different product designs, the company can assess which option most appeals to potential customers. 
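The randomized prototype test described above can be sketched as follows. The prototypes, assignment counts, and rating distributions are all invented for illustration.

```python
import random

rng = random.Random(7)
prototypes = ["A", "B", "C"]

# Balanced random assignment: 30 participants per prototype, shown in random order.
assignments = prototypes * 30
rng.shuffle(assignments)

# Hypothetical appeal of each prototype, as 1-5 ratings a participant might give.
possible_ratings = {"A": [3, 4], "B": [4, 5], "C": [2, 3]}

ratings = {p: [] for p in prototypes}
for shown in assignments:
    ratings[shown].append(rng.choice(possible_ratings[shown]))

avg = {p: sum(r) / len(r) for p, r in ratings.items()}
best = max(avg, key=avg.get)  # the design that most appeals on average
```

Because assignment is randomized and balanced, differences in average rating can be attributed to the designs themselves rather than to who happened to see which prototype.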

Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect. 

Using the five steps to develop a research plan ensures you anticipate and eliminate external variables while answering life’s crucial questions.




How the Experimental Method Works in Psychology



The experimental method is a type of research procedure that involves manipulating variables to determine if there is a cause-and-effect relationship. The results obtained through the experimental method are useful but do not prove with 100% certainty that a singular cause always creates a specific effect. Instead, they show the probability that a cause will or will not lead to a particular effect.

At a Glance

While there are many different research techniques available, the experimental method allows researchers to look at cause-and-effect relationships. Using the experimental method, researchers randomly assign participants to a control or experimental group and manipulate levels of an independent variable. If changes in the independent variable lead to changes in the dependent variable, it indicates there is likely a causal relationship between them.

What Is the Experimental Method in Psychology?

The experimental method involves manipulating one variable to determine if this causes changes in another variable. This method relies on controlled research methods and random assignment of study subjects to test a hypothesis.

For example, researchers may want to learn how different visual patterns impact our perception, or whether certain actions can improve memory. Experiments are conducted on a wide range of behavioral topics.

The scientific method forms the basis of the experimental method. This is a process used to determine the relationship between two variables—in this case, to explain human behavior .

Positivism is also important in the experimental method. It refers to factual knowledge that is obtained through observation, which is considered to be trustworthy.

When using the experimental method, researchers first identify and define key variables. Then they formulate a hypothesis, manipulate the variables, and collect data on the results. Unrelated or irrelevant variables are carefully controlled to minimize the potential impact on the experiment outcome.

History of the Experimental Method

The idea of using experiments to better understand human psychology began toward the end of the nineteenth century. Wilhelm Wundt established the first formal psychology laboratory in 1879.

Wundt is often called the father of experimental psychology. He believed that experiments could help explain how psychology works, and used this approach to study consciousness .

Wundt coined the term "physiological psychology." This is a hybrid of physiology and psychology, or how the body affects the brain.

Other early contributors to the development and evolution of experimental psychology as we know it today include:

  • Gustav Fechner (1801-1887), who helped develop procedures for measuring sensations according to the size of the stimulus
  • Hermann von Helmholtz (1821-1894), who analyzed philosophical assumptions through research in an attempt to arrive at scientific conclusions
  • Franz Brentano (1838-1917), who called for a combination of first-person and third-person research methods when studying psychology
  • Georg Elias Müller (1850-1934), who performed an early experiment on attitude which involved the sensory discrimination of weights and revealed how anticipation can affect this discrimination

Key Terms to Know

To understand how the experimental method works, it is important to know some key terms.

Dependent Variable

The dependent variable is the effect that the experimenter is measuring. If a researcher was investigating how sleep influences test scores, for example, the test scores would be the dependent variable.

Independent Variable

The independent variable is the variable that the experimenter manipulates. In the previous example, the amount of sleep an individual gets would be the independent variable.

Hypothesis

A hypothesis is a tentative statement or a guess about the possible relationship between two or more variables. In looking at how sleep influences test scores, the researcher might hypothesize that people who get more sleep will perform better on a math test the following day. The purpose of the experiment, then, is to either support or reject this hypothesis.
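The sleep-and-test-scores example can be sketched with invented data. The comparison below uses Welch's t statistic; a real study would also compute degrees of freedom and a p-value before supporting or rejecting the hypothesis.

```python
from statistics import mean, stdev

# Invented math scores: one group slept a full night before the test,
# the other was sleep-deprived.
well_rested = [88, 92, 85, 90, 87]
sleep_deprived = [78, 74, 80, 76, 77]

diff = mean(well_rested) - mean(sleep_deprived)  # observed difference

# Welch's t statistic: the difference scaled by within-group variability.
def welch_t(a, b):
    se = (stdev(a) ** 2 / len(a) + stdev(b) ** 2 / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

t = welch_t(well_rested, sleep_deprived)
```

A large t value means the 11.4-point difference between group means is big relative to the noise within each group, which would support the hypothesis that more sleep improves scores.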

Operational Definitions

Operational definitions are necessary when performing an experiment. When we say that something is an independent or dependent variable, we must have a very clear and specific definition of the meaning and scope of that variable.

Extraneous Variables

Extraneous variables are other variables that may also affect the outcome of an experiment. Types of extraneous variables include participant variables, situational variables, demand characteristics, and experimenter effects. In some cases, researchers can take steps to control for extraneous variables.

Demand Characteristics

Demand characteristics are subtle hints that indicate what an experimenter is hoping to find in a psychology experiment. This can sometimes cause participants to alter their behavior, which can affect the results of the experiment.

Intervening Variables

Intervening variables are factors that can affect the relationship between two other variables. 

Confounding Variables

Confounding variables are variables that can affect the dependent variable, but that experimenters cannot control for. Confounding variables can make it difficult to determine if the effect was due to changes in the independent variable or if the confounding variable may have played a role.

Psychologists, like other scientists, use the scientific method when conducting an experiment. The scientific method is a set of procedures and principles that guide how scientists develop research questions, collect data, and come to conclusions.

The five basic steps of the experimental process are:

  • Identifying a problem to study
  • Devising the research protocol
  • Conducting the experiment
  • Analyzing the data collected
  • Sharing the findings (usually in writing or via presentation)

Most psychology students are expected to use the experimental method at some point in their academic careers. Learning how to conduct an experiment is important to understanding how psychologists prove and disprove theories in this field.

There are a few different types of experiments that researchers might use when studying psychology. Each has pros and cons depending on the participants being studied, the hypothesis, and the resources available to conduct the research.

Lab Experiments

Lab experiments are common in psychology because they allow experimenters more control over the variables. These experiments can also be easier for other researchers to replicate. The drawback of this research type is that what takes place in a lab is not always what takes place in the real world.

Field Experiments

Sometimes researchers opt to conduct their experiments in the field. For example, a social psychologist interested in researching prosocial behavior might have a person pretend to faint and observe how long it takes onlookers to respond.

This type of experiment can be a great way to see behavioral responses in realistic settings. But it is more difficult for researchers to control the many variables existing in these settings that could potentially influence the experiment's results.

Quasi-Experiments

While lab experiments are known as true experiments, researchers can also utilize a quasi-experiment. Quasi-experiments are often referred to as natural experiments because the researchers do not have true control over the independent variable.

A researcher looking at personality differences and birth order, for example, is not able to manipulate the independent variable in the situation (personality traits). Participants also cannot be randomly assigned because they naturally fall into pre-existing groups based on their birth order.

So why would a researcher use a quasi-experiment? This is a good choice in situations where scientists are interested in studying phenomena in natural, real-world settings. It's also beneficial if there are limits on research funds or time.

Field experiments can be either quasi-experiments or true experiments.

Examples of the Experimental Method in Use

The experimental method can provide insight into human thoughts and behaviors. Researchers use experiments to study many aspects of psychology.

A 2019 study investigated whether splitting attention between electronic devices and classroom lectures had an effect on college students' learning abilities. It found that dividing attention between these two mediums did not affect lecture comprehension. However, it did impact long-term retention of the lecture information, which affected students' exam performance.

An experiment used participants' eye movements and electroencephalogram (EEG) data to better understand cognitive processing differences between experts and novices. It found that experts had higher power in their theta brain waves than novices, suggesting that they also had a higher cognitive load.

A study looked at whether chatting online with a computer via a chatbot changed the positive effects of emotional disclosure often received when talking with an actual human. It found that the effects were the same in both cases.

One experimental study evaluated whether exercise timing impacts information recall. It found that engaging in exercise prior to performing a memory task helped improve participants' short-term memory abilities.

Sometimes researchers use the experimental method to get a bigger-picture view of psychological behaviors and impacts. For example, one 2018 study examined several lab experiments to learn more about the impact of various environmental factors on building occupant perceptions.

A 2020 study set out to determine the role that sensation-seeking plays in political violence. This research found that sensation-seeking individuals have a higher propensity for engaging in political violence. It also found that providing access to a more peaceful, yet still exciting political group helps reduce this effect.

While the experimental method can be a valuable tool for learning more about psychology and its impacts, it also comes with a few pitfalls.

Experiments may produce artificial results, which are difficult to apply to real-world situations. Similarly, researcher bias can impact the data collected. Results may not be reproducible, meaning the study has low reliability.

Since humans are unpredictable and their behavior can be subjective, it can be hard to measure responses in an experiment. In addition, political pressure may alter the results. The subjects may not be a good representation of the population, or groups used may not be comparable.

And finally, since researchers are human too, results may be degraded due to human error.

What This Means For You

Every psychological research method has its pros and cons. The experimental method can help establish cause and effect, and it's also beneficial when research funds are limited or time is of the essence.

At the same time, it's essential to be aware of this method's pitfalls, such as how biases can affect the results or the potential for low reliability. Keeping these in mind can help you review and assess research studies more accurately, giving you a better idea of whether the results can be trusted or have limitations.

Colorado State University. Experimental and quasi-experimental research.

American Psychological Association. Experimental psychology studies humans and animals.

Mayrhofer R, Kuhbandner C, Lindner C. The practice of experimental psychology: An inevitably postmodern endeavor. Front Psychol. 2021;11:612805. doi:10.3389/fpsyg.2020.612805

Mandler G. A History of Modern Experimental Psychology.

Stanford University. Wilhelm Maximilian Wundt. Stanford Encyclopedia of Philosophy.

Britannica. Gustav Fechner.

Britannica. Hermann von Helmholtz.

Meyer A, Hackert B, Weger U. Franz Brentano and the beginning of experimental psychology: implications for the study of psychological phenomena today. Psychol Res. 2018;82:245-254. doi:10.1007/s00426-016-0825-7

Britannica. Georg Elias Müller.

McCambridge J, de Bruin M, Witton J. The effects of demand characteristics on research participant behaviours in non-laboratory settings: A systematic review. PLoS ONE. 2012;7(6):e39116. doi:10.1371/journal.pone.0039116

Laboratory experiments. In: Allen M, ed. The Sage Encyclopedia of Communication Research Methods. SAGE Publications, Inc. doi:10.4135/9781483381411.n287

Schweizer M, Braun B, Milstone A. Research methods in healthcare epidemiology and antimicrobial stewardship — quasi-experimental designs. Infect Control Hosp Epidemiol. 2016;37(10):1135-1140. doi:10.1017/ice.2016.117

Glass A, Kang M. Dividing attention in the classroom reduces exam performance. Educ Psychol. 2019;39(3):395-408. doi:10.1080/01443410.2018.1489046

Keskin M, Ooms K, Dogru AO, De Maeyer P. Exploring the cognitive load of expert and novice map users using EEG and eye tracking. ISPRS Int J Geo-Inf. 2020;9(7):429. doi:10.3390/ijgi9070429

Ho A, Hancock J, Miner A. Psychological, relational, and emotional effects of self-disclosure after conversations with a chatbot. J Commun. 2018;68(4):712-733. doi:10.1093/joc/jqy026

Haynes IV J, Frith E, Sng E, Loprinzi P. Experimental effects of acute exercise on episodic memory function: Considerations for the timing of exercise. Psychol Rep. 2018;122(5):1744-1754. doi:10.1177/0033294118786688

Torresin S, Pernigotto G, Cappelletti F, Gasparella A. Combined effects of environmental factors on human perception and objective performance: A review of experimental laboratory works. Indoor Air. 2018;28(4):525-538. doi:10.1111/ina.12457

Schumpe BM, Belanger JJ, Moyano M, Nisa CF. The role of sensation seeking in political violence: An extension of the significance quest theory. J Personal Social Psychol. 2020;118(4):743-761. doi:10.1037/pspp0000223

By Kendra Cherry, MSEd Kendra Cherry, MS, is a psychosocial rehabilitation specialist, psychology educator, and author of the "Everything Psychology Book."


Research Methodology – Types, Examples and Writing Guide

Definition:

Research Methodology refers to the systematic and scientific approach used to conduct research, investigate problems, and gather data and information for a specific purpose. It involves the techniques and procedures used to identify, collect, analyze, and interpret data to answer research questions or solve research problems. It also encompasses the philosophical and theoretical frameworks that guide the research process.

Structure of Research Methodology

Research methodology formats can vary depending on the specific requirements of the research project, but the following is a basic example of a structure for a research methodology section:

I. Introduction

  • Provide an overview of the research problem and the need for a research methodology section
  • Outline the main research questions and objectives

II. Research Design

  • Explain the research design chosen and why it is appropriate for the research question(s) and objectives
  • Discuss any alternative research designs considered and why they were not chosen
  • Describe the research setting and participants (if applicable)

III. Data Collection Methods

  • Describe the methods used to collect data (e.g., surveys, interviews, observations)
  • Explain how the data collection methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or instruments used for data collection

IV. Data Analysis Methods

  • Describe the methods used to analyze the data (e.g., statistical analysis, content analysis)
  • Explain how the data analysis methods were chosen and why they are appropriate for the research question(s) and objectives
  • Detail any procedures or software used for data analysis

V. Ethical Considerations

  • Discuss any ethical issues that may arise from the research and how they were addressed
  • Explain how informed consent was obtained (if applicable)
  • Detail any measures taken to ensure confidentiality and anonymity

VI. Limitations

  • Identify any potential limitations of the research methodology and how they may impact the results and conclusions

VII. Conclusion

  • Summarize the key aspects of the research methodology section
  • Explain how the research methodology addresses the research question(s) and objectives

Research Methodology Types

Types of Research Methodology are as follows:

Quantitative Research Methodology

This is a research methodology that involves the collection and analysis of numerical data using statistical methods. This type of research is often used to study cause-and-effect relationships and to make predictions.

Qualitative Research Methodology

This is a research methodology that involves the collection and analysis of non-numerical data such as words, images, and observations. This type of research is often used to explore complex phenomena, to gain an in-depth understanding of a particular topic, and to generate hypotheses.

Mixed-Methods Research Methodology

This is a research methodology that combines elements of both quantitative and qualitative research. This approach can be particularly useful for studies that aim to explore complex phenomena and to provide a more comprehensive understanding of a particular topic.

Case Study Research Methodology

This is a research methodology that involves in-depth examination of a single case or a small number of cases. Case studies are often used in psychology, sociology, and anthropology to gain a detailed understanding of a particular individual or group.

Action Research Methodology

This is a research methodology that involves a collaborative process between researchers and practitioners to identify and solve real-world problems. Action research is often used in education, healthcare, and social work.

Experimental Research Methodology

This is a research methodology that involves the manipulation of one or more independent variables to observe their effects on a dependent variable. Experimental research is often used to study cause-and-effect relationships and to make predictions.

Survey Research Methodology

This is a research methodology that involves the collection of data from a sample of individuals using questionnaires or interviews. Survey research is often used to study attitudes, opinions, and behaviors.

Grounded Theory Research Methodology

This is a research methodology that involves the development of theories based on the data collected during the research process. Grounded theory is often used in sociology and anthropology to generate theories about social phenomena.

Research Methodology Example

An Example of Research Methodology could be the following:

Research Methodology for Investigating the Effectiveness of Cognitive Behavioral Therapy in Reducing Symptoms of Depression in Adults

Introduction:

The aim of this research is to investigate the effectiveness of cognitive-behavioral therapy (CBT) in reducing symptoms of depression in adults. To achieve this objective, a randomized controlled trial (RCT) will be conducted using a mixed-methods approach.

Research Design:

The study will follow a pre-test and post-test design with two groups: an experimental group receiving CBT and a control group receiving no intervention. The study will also include a qualitative component, in which semi-structured interviews will be conducted with a subset of participants to explore their experiences of receiving CBT.

Participants:

Participants will be recruited from community mental health clinics in the local area. The sample will consist of 100 adults aged 18-65 years old who meet the diagnostic criteria for major depressive disorder. Participants will be randomly assigned to either the experimental group or the control group.

Intervention:

The experimental group will receive 12 weekly sessions of CBT, each lasting 60 minutes. The intervention will be delivered by licensed mental health professionals who have been trained in CBT. The control group will receive no intervention during the study period.

Data Collection:

Quantitative data will be collected through the use of standardized measures such as the Beck Depression Inventory-II (BDI-II) and the Generalized Anxiety Disorder-7 (GAD-7). Data will be collected at baseline, immediately after the intervention, and at a 3-month follow-up. Qualitative data will be collected through semi-structured interviews with a subset of participants from the experimental group. The interviews will be conducted at the end of the intervention period, and will explore participants’ experiences of receiving CBT.

Data Analysis:

Quantitative data will be analyzed using descriptive statistics, t-tests, and mixed-model analyses of variance (ANOVA) to assess the effectiveness of the intervention. Qualitative data will be analyzed using thematic analysis to identify common themes and patterns in participants’ experiences of receiving CBT.
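A minimal sketch of the change-score logic behind this analysis plan, using invented BDI-II numbers (the actual study would use statistical software and the mixed-model ANOVA described above):

```python
from statistics import mean

# Invented BDI-II scores (higher = more depressive symptoms).
cbt_baseline = [28, 31, 25, 30, 27]
cbt_post = [16, 20, 14, 18, 15]
ctrl_baseline = [29, 27, 31, 28, 30]
ctrl_post = [27, 26, 30, 27, 29]

# Per-participant change scores (post minus baseline; negative = improvement).
cbt_change = [post - pre for pre, post in zip(cbt_baseline, cbt_post)]
ctrl_change = [post - pre for pre, post in zip(ctrl_baseline, ctrl_post)]

# Average improvement attributable to CBT beyond the control condition.
extra_improvement = mean(ctrl_change) - mean(cbt_change)
```

In this toy data the CBT group improves by 11.6 points on average versus 1.2 in the control group, so about 10.4 points of symptom reduction would be attributed to the intervention.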

Ethical Considerations:

This study will comply with ethical guidelines for research involving human subjects. Participants will provide informed consent before participating in the study, and their privacy and confidentiality will be protected throughout the study. Any adverse events or reactions will be reported and managed appropriately.

Data Management:

All data collected will be kept confidential and stored securely using password-protected databases. Identifying information will be removed from qualitative data transcripts to ensure participants’ anonymity.

Limitations:

One potential limitation of this study is that it only focuses on one type of psychotherapy, CBT, and may not generalize to other types of therapy or interventions. Another limitation is that the study will only include participants from community mental health clinics, which may not be representative of the general population.

Conclusion:

This research aims to investigate the effectiveness of CBT in reducing symptoms of depression in adults. By using a randomized controlled trial and a mixed-methods approach, the study will provide valuable insights into the mechanisms underlying the relationship between CBT and depression. The results of this study will have important implications for the development of effective treatments for depression in clinical settings.

How to Write Research Methodology

Writing a research methodology involves explaining the methods and techniques you used to conduct research, collect data, and analyze results. It’s an essential section of any research paper or thesis, as it helps readers understand the validity and reliability of your findings. Here are the steps to write a research methodology:

  • Start by explaining your research question: Begin the methodology section by restating your research question and explaining why it’s important. This helps readers understand the purpose of your research and the rationale behind your methods.
  • Describe your research design: Explain the overall approach you used to conduct research. This could be a qualitative or quantitative research design, experimental or non-experimental, case study or survey, etc. Discuss the advantages and limitations of the chosen design.
  • Discuss your sample: Describe the participants or subjects you included in your study. Include details such as their demographics, sampling method, sample size, and any exclusion criteria used.
  • Describe your data collection methods: Explain how you collected data from your participants. This could include surveys, interviews, observations, questionnaires, or experiments. Include details on how you obtained informed consent, how you administered the tools, and how you minimized the risk of bias.
  • Explain your data analysis techniques: Describe the methods you used to analyze the data you collected. This could include statistical analysis, content analysis, thematic analysis, or discourse analysis. Explain how you dealt with missing data, outliers, and any other issues that arose during the analysis.
  • Discuss the validity and reliability of your research: Explain how you ensured the validity and reliability of your study. This could include measures such as triangulation, member checking, peer review, or inter-coder reliability.
  • Acknowledge any limitations of your research: Discuss any limitations of your study, including any potential threats to validity or generalizability. This helps readers understand the scope of your findings and how they might apply to other contexts.
  • Provide a summary: End the methodology section by summarizing the methods and techniques you used to conduct your research. This provides a clear overview of your research methodology and helps readers understand the process you followed to arrive at your findings.
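As a rough illustration, the steps above can be sketched as a reusable checklist. The section names and prompts here are common conventions, not a required standard:

```python
# A minimal sketch of the methodology outline described above.
# Section names and prompts are illustrative, not a fixed standard.
METHODOLOGY_OUTLINE = [
    ("Research question", "Restate the question and why it matters."),
    ("Research design", "Qualitative/quantitative, experimental or not; note limitations."),
    ("Sample", "Participants, demographics, sampling method, size, exclusions."),
    ("Data collection", "Surveys, interviews, observations; consent and bias control."),
    ("Data analysis", "Statistical or thematic techniques; missing data, outliers."),
    ("Validity and reliability", "Triangulation, member checking, inter-coder checks."),
    ("Limitations", "Threats to validity and generalizability."),
    ("Summary", "Brief recap of the methods used."),
]

def print_outline(outline):
    """Print a numbered checklist of methodology sections."""
    for i, (section, prompt) in enumerate(outline, start=1):
        print(f"{i}. {section}: {prompt}")

print_outline(METHODOLOGY_OUTLINE)
```

Walking through such a checklist while drafting is one simple way to make sure no section is forgotten.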

When to Write Research Methodology

Research methodology is typically written after the research proposal has been approved and before the actual research is conducted. It should be written prior to data collection and analysis, as it provides a clear roadmap for the research project.

The research methodology is an important section of any research paper or thesis, as it describes the methods and procedures that will be used to conduct the research. It should include details about the research design, data collection methods, data analysis techniques, and any ethical considerations.

The methodology should be written in a clear and concise manner, and it should be based on established research practices and standards. It is important to provide enough detail so that the reader can understand how the research was conducted and evaluate the validity of the results.

Applications of Research Methodology

Here are some of the applications of research methodology:

  • To identify the research problem: Research methodology is used to identify the research problem, which is the first step in conducting any research.
  • To design the research: Research methodology helps in designing the research by selecting the appropriate research method, research design, and sampling technique.
  • To collect data: Research methodology provides a systematic approach to collect data from primary and secondary sources.
  • To analyze data: Research methodology helps in analyzing the collected data using various statistical and non-statistical techniques.
  • To test hypotheses: Research methodology provides a framework for testing hypotheses and drawing conclusions based on the analysis of data.
  • To generalize findings: Research methodology helps in generalizing the findings of the research to the target population.
  • To develop theories: Research methodology is used to develop new theories and modify existing theories based on the findings of the research.
  • To evaluate programs and policies: Research methodology is used to evaluate the effectiveness of programs and policies by collecting data and analyzing it.
  • To improve decision-making: Research methodology helps in making informed decisions by providing reliable and valid data.

Purpose of Research Methodology

Research methodology serves several important purposes, including:

  • To guide the research process: Research methodology provides a systematic framework for conducting research. It helps researchers to plan their research, define their research questions, and select appropriate methods and techniques for collecting and analyzing data.
  • To ensure research quality: Research methodology helps researchers to ensure that their research is rigorous, reliable, and valid. It provides guidelines for minimizing bias and error in data collection and analysis, and for ensuring that research findings are accurate and trustworthy.
  • To replicate research: Research methodology provides a clear and detailed account of the research process, making it possible for other researchers to replicate the study and verify its findings.
  • To advance knowledge: Research methodology enables researchers to generate new knowledge and to contribute to the body of knowledge in their field. It provides a means for testing hypotheses, exploring new ideas, and discovering new insights.
  • To inform decision-making: Research methodology provides evidence-based information that can inform policy and decision-making in a variety of fields, including medicine, public health, education, and business.

Advantages of Research Methodology

Research methodology has several advantages that make it a valuable tool for conducting research in various fields. Here are some of the key advantages of research methodology:

  • Systematic and structured approach: Research methodology provides a systematic and structured approach to conducting research, which ensures that the research is conducted in a rigorous and comprehensive manner.
  • Objectivity: Research methodology aims to ensure objectivity in the research process, which means that the research findings are based on evidence and not influenced by personal bias or subjective opinions.
  • Replicability: Research methodology ensures that research can be replicated by other researchers, which is essential for validating research findings and ensuring their accuracy.
  • Reliability: Research methodology aims to ensure that the research findings are reliable, which means that they are consistent and can be depended upon.
  • Validity: Research methodology ensures that the research findings are valid, which means that they accurately reflect the research question or hypothesis being tested.
  • Efficiency: Research methodology provides a structured and efficient way of conducting research, which helps to save time and resources.
  • Flexibility: Research methodology allows researchers to choose the most appropriate research methods and techniques based on the research question, data availability, and other relevant factors.
  • Scope for innovation: Research methodology provides scope for innovation and creativity in designing research studies and developing new research techniques.

Research Methodology vs. Research Methods

  • Research methodology refers to the philosophical and theoretical frameworks that guide the research process; research methods are the techniques and procedures used to collect and analyze data.
  • Methodology is concerned with the underlying principles and assumptions of research; methods are concerned with the practical aspects of research.
  • Methodology provides a rationale for why certain research methods are used; methods determine the specific steps that will be taken to conduct research.
  • Methodology is broader in scope and involves understanding the overall approach to research; methods are narrower in scope and focus on the specific techniques and tools used in research.
  • Methodology is concerned with identifying research questions, defining the research problem, and formulating hypotheses; methods are concerned with collecting data, analyzing data, and interpreting results.
  • Methodology is concerned with the validity and reliability of research; methods are concerned with the accuracy and precision of data.
  • Methodology is concerned with the ethical considerations of research; methods are concerned with the practical considerations of research.

About the author

Muhammad Hassan

Researcher, Academic Writer, Web developer


10 Real-Life Experimental Research Examples


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.


Experimental research uses a scientific approach to manipulate one or more variables and measure their effects on others.

Below are some famous experimental research examples. Some of these studies were conducted quite a long time ago. Some were so controversial that they would never be attempted today. And some were so unethical that they would never be permitted again.

A few of these studies have also had very practical implications for modern society involving criminal investigations, the impact of television and the media, and the power of authority figures.

Examples of Experimental Research

1. Pavlov’s Dog: Classical Conditioning

Dr. Ivan Pavlov was a physiologist studying animal digestive systems in the 1890s. In one study, he presented food to a dog and then collected its salivatory juices via a tube attached to the inside of the animal’s mouth.

As he was conducting his experiments, an annoying thing kept happening; every time his assistant would enter the lab with a bowl of food for the experiment, the dog would start to salivate at the sound of the assistant’s footsteps.

Although this disrupted his experimental procedures, eventually, it dawned on Pavlov that something else was to be learned from this problem.

Pavlov learned that animals could be conditioned into responding on a physiological level to various stimuli, such as food, or even the sound of the assistant bringing the food down the hall.

Hence the theory of classical conditioning was born. It remains one of the most influential theories in psychology to this day.

2. Bobo Doll Experiment: Observational Learning

Dr. Albert Bandura conducted one of the most influential studies in psychology in the 1960s at Stanford University.

His intention was to demonstrate that cognitive processes play a fundamental role in learning. At the time, Behaviorism was the predominant theoretical perspective, which completely rejected all inferences to constructs not directly observable.

So, Bandura made two versions of a video. In version #1, an adult behaved aggressively with a Bobo doll by throwing it around the room and striking it with a wooden mallet. In version #2, the adult played gently with the doll by carrying it around to different parts of the room and pushing it gently.

After watching one of the two versions, each child was taken individually to a room that had a Bobo doll. Their behavior was observed, and the results indicated that children who watched version #1 of the video were far more aggressive than those who watched version #2.

Not only did Bandura’s Bobo doll study form the basis of his social learning theory, it also helped start the long-lasting debate about the harmful effects of television on children.

Worth Checking Out: What’s the Difference between Experimental and Observational Studies?

3. The Asch Study: Conformity  

Dr. Solomon Asch was interested in conformity and the power of group pressure. His study was quite simple. Different groups of students were shown lines of varying lengths and asked, “Which line is longest?”

However, out of each group, only one was an actual participant. All of the others in the group were working with Asch and instructed to say that one of the shorter lines was actually the longest.

In a striking number of trials, the real participant gave an answer that was clearly wrong but matched the rest of the group.

The study is one of the most famous in psychology because it demonstrated the power of social pressure so clearly.  

4. Car Crash Experiment: Leading Questions

In 1974, Dr. Elizabeth Loftus and her undergraduate student John Palmer designed a study to examine how fallible human judgement is under certain conditions.

They showed groups of research participants videos that depicted accidents between two cars. Later, the participants were asked to estimate the rate of speed of the cars.

Here’s the interesting part. All participants were asked the same question with the exception of a single word: “How fast were the two cars going when they ______ into each other?” The word in the blank varied in its implied severity.

Participants’ estimates were completely affected by the word in the blank. When the word “smashed” was used, participants estimated the cars were going much faster than when the word “contacted” was used. 

This line of research has had a huge impact on law enforcement interrogation practices, line-up procedures, and the credibility of eyewitness testimony.

5. The 6 Universal Emotions

The research by Dr. Paul Ekman has been influential in the study of emotions. His early research revealed that all human beings, regardless of culture, experience the same 6 basic emotions: happiness, sadness, disgust, fear, surprise, and anger.

In the late 1960s, Ekman traveled to Papua New Guinea. He approached a tribe of people that were extremely isolated from modern culture. With the help of a guide, he would describe different situations to individual members and take a photo of their facial expressions.

The situations included: if a good friend had come; their child had just died; they were about to get into a fight; or had just stepped on a dead pig.

The facial expressions of this highly isolated tribe were nearly identical to those displayed by people in his studies in California.

6. The Little Albert Study: Development of Phobias  

Dr. John Watson and Dr. Rosalie Rayner sought to demonstrate how irrational fears were developed.

Their study involved showing a white rat to an infant. Initially, the child had no fear of the rat. However, the researchers then began to create a loud noise each time they showed the child the rat by striking a steel bar with a hammer.

Eventually, the child started to cry and feared the white rat. The child also developed a fear of other white, furry objects such as white rabbits and a Santa’s beard.

This study is famous because it demonstrated one way in which phobias develop in humans, and also because it is now considered highly unethical for its mistreatment of the child, lack of study debriefing, and intent to instill fear.

7. A Class Divided: Discrimination

Perhaps one of the most famous psychological experiments of all time was not conducted by a psychologist. In 1968, shortly after the assassination of Dr. Martin Luther King, Jr., third-grade teacher Jane Elliott conducted a landmark classroom study of discrimination.

She divided her class into two groups: brown-eyed and blue-eyed students. On the first day of the experiment, she announced the blue-eyed group as superior. They received extra privileges and were told not to intermingle with the brown-eyed students.

They instantly became happier, more self-confident, and started performing better academically.

The next day, the roles were reversed. The brown-eyed students were announced as superior and given extra privileges. Their behavior changed almost immediately and exhibited the same patterns as the other group had the day before.

This study was a remarkable demonstration of the harmful effects of discrimination.

8. The Milgram Study: Obedience to Authority

Dr. Stanley Milgram conducted one of the most influential experiments on authority and obedience in 1961 at Yale University.

Participants were told they were helping study the effects of punishment on learning. Their job was to administer an electric shock to another participant each time they made an error on a test. The other participant was actually an actor in another room that only pretended to be shocked.

However, each time a mistake was made, the level of shock was supposed to increase, eventually reaching quite high voltage levels. When the real participants expressed reluctance to administer the next level of shock, the experimenter, who served as the authority figure in the room, pressured the participant to deliver the next level of shock.

The results of this study were truly astounding. A surprisingly high percentage of participants continued to deliver the shocks to the highest level possible despite the very strong objections by the “other participant.”

This study demonstrated the power of authority figures.

9. The Marshmallow Test: Delay of Gratification

The Marshmallow Test was designed by Dr. Walter Mischel to examine the role of delay of gratification in later academic success.

Children ages 4-6 years old were seated at a table with one marshmallow placed in front of them. The experimenter explained that if they did not eat the marshmallow, they would receive a second one. They could then eat both.

The children that were able to delay gratification the longest were rated as significantly more competent later in life and earned higher SAT scores than children that could not withstand the temptation.  

The study has since been conceptually replicated by other researchers that have revealed additional factors involved in delay of gratification and academic achievement.

10. Stanford Prison Study: Deindividuation

Dr. Philip Zimbardo conducted one of the most famous psychological studies of all time in 1971. The purpose of the study was to investigate how the power structure in some situations can lead people to behave in ways highly uncharacteristic of their usual behavior.

College students were recruited to participate in the study. Some were randomly assigned to play the role of prison guard. The others were actually “arrested” by real police officers. They were blindfolded and taken to the basement of the university’s psychology building which had been converted to look like a prison.

Although the study was supposed to last 2 weeks, it had to be halted due to the abusive actions of the guards.

The study demonstrated that people will behave in ways they never thought possible when placed in certain roles and power structures. Although the Stanford Prison Study is so well-known for what it revealed about human nature, it is also famous because of the numerous violations of ethical principles.

The studies above are varied and focused on many different aspects of human behavior . However, each example of experimental research listed above has had a lasting impact on society. Some have had tremendous sway in how very practical matters are conducted, such as criminal investigations and legal proceedings.

Psychology is a field of study that is often not fully understood by the general public. When most people hear the term “psychology,” they think of a therapist that listens carefully to the revealing statements of a patient. The therapist then tries to help their patient learn to cope with many of life’s challenges. Nothing wrong with that.

In reality however, most psychologists are researchers. They spend most of their time designing and conducting experiments to enhance our understanding of the human condition.

Asch, S. E. (1956). Studies of independence and conformity: I. A minority of one against a unanimous majority. Psychological Monographs: General and Applied, 70(9), 1–70. https://doi.org/10.1037/h0093718

Bandura, A. (1965). Influence of models’ reinforcement contingencies on the acquisition of imitative responses. Journal of Personality and Social Psychology, 1(6), 589–595. https://doi.org/10.1037/h0022070

Beck, H. P., Levinson, S., & Irons, G. (2009). Finding little Albert: A journey to John B. Watson’s infant laboratory. American Psychologist, 64(7), 605–614.

Ekman, P., & Friesen, W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124–129.

Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13(5), 585–589.

Milgram, S. (1965). Some conditions of obedience and disobedience to authority. Human Relations, 18(1), 57–76.

Mischel, W., & Ebbesen, E. B. (1970). Attention in delay of gratification. Journal of Personality and Social Psychology, 16(2), 329–337.

Pavlov, I. P. (1927). Conditioned reflexes. London: Oxford University Press.

Watson, J. B., & Rayner, R. (1920). Conditioned emotional reactions. Journal of Experimental Psychology, 3, 1–14.

Zimbardo, P., Haney, C., Banks, W. C., & Jaffe, D. (1971). The Stanford Prison Experiment: A simulation study of the psychology of imprisonment. Stanford University, Stanford Digital Repository.



Experimental Research


Experimental research is commonly used in the sciences, including sociology, psychology, physics, chemistry, biology, and medicine.


It is a collection of research designs which use manipulation and controlled testing to understand causal processes. Generally, one or more variables are manipulated to determine their effect on a dependent variable.

The experimental method is a systematic and scientific approach to research in which the researcher manipulates one or more variables, and controls and measures any change in other variables.

Experimental Research is often used where:

  • There is time priority in a causal relationship ( cause precedes effect )
  • There is consistency in a causal relationship (a cause will always lead to the same effect)
  • The magnitude of the correlation is great.

(Reference: en.wikipedia.org)

The term ‘experimental research’ has a range of definitions. In the strict sense, experimental research is what we call a true experiment.

This is an experiment where the researcher manipulates one variable, and controls or randomizes the rest of the variables. It has a control group, the subjects have been randomly assigned between the groups, and the researcher only tests one effect at a time. It is also important to know what variable(s) you want to test and measure.

A very wide definition of experimental research, or a quasi experiment , is research where the scientist actively influences something to observe the consequences. Most experiments tend to fall in between the strict and the wide definition.

A rule of thumb is that physical sciences, such as physics, chemistry and geology tend to define experiments more narrowly than social sciences, such as sociology and psychology, which conduct experiments closer to the wider definition.

experimental methodology in research example

Aims of Experimental Research

Experiments are conducted to be able to predict phenomena. Typically, an experiment is constructed to be able to explain some kind of causation. Experimental research is important to society: it helps us improve our everyday lives.

experimental methodology in research example

Identifying the Research Problem

After deciding the topic of interest, the researcher tries to define the research problem . This helps the researcher to focus on a more narrow research area to be able to study it appropriately.  Defining the research problem helps you to formulate a  research hypothesis , which is tested against the  null hypothesis .

The research problem is often operationalized, to define how to measure it. The results will depend on the exact measurements the researcher chooses, and the problem may be operationalized differently in another study to test the main conclusions of the original study.

An ad hoc analysis is a hypothesis invented after testing is done, to try to explain contrary evidence. A poor ad hoc analysis may be seen as the researcher's inability to accept that his/her hypothesis is wrong, while a great ad hoc analysis may lead to more testing and possibly a significant discovery.

Constructing the Experiment

There are various aspects to remember when constructing an experiment. Planning ahead ensures that the experiment is carried out properly and that the results reflect the real world, in the best possible way.

Sampling Groups to Study

Sampling groups correctly is especially important when we have more than one condition in the experiment. One sample group often serves as a control group , whilst others are tested under the experimental conditions.

Sample groups can be selected using many different sampling techniques. Population samples may be chosen by a number of methods, such as randomization, "quasi-randomization", and pairing.

Reducing sampling errors is vital for getting valid results from experiments. Researchers often adjust the sample size to minimize the chance of random errors.
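For instance, one common textbook formula for the sample size needed to estimate a population mean within a margin of error E, given a standard deviation σ and a confidence-level z value, is n = (z·σ/E)². A minimal sketch, with purely hypothetical numbers:

```python
import math

def sample_size_for_mean(sigma, margin, z=1.96):
    """Sample size needed to estimate a population mean within
    +/- margin, assuming a known standard deviation sigma and a
    z critical value (1.96 corresponds to ~95% confidence)."""
    return math.ceil((z * sigma / margin) ** 2)

# Hypothetical values: sigma = 15, desired margin of error = 3
n = sample_size_for_mean(sigma=15, margin=3)
print(n)  # 97
```

Tightening the margin or raising the confidence level both increase the required sample size, which is why pilot estimates of σ matter when planning an experiment.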

Here are some common sampling techniques:

  • probability sampling
  • non-probability sampling
  • simple random sampling
  • convenience sampling
  • stratified sampling
  • systematic sampling
  • cluster sampling
  • sequential sampling
  • disproportional sampling
  • judgmental sampling
  • snowball sampling
  • quota sampling
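Two of the techniques above, simple random sampling and stratified sampling, can be sketched with the standard library. The population here is invented purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical population: 100 people tagged with a group label
population = [{"id": i, "group": "A" if i < 60 else "B"} for i in range(100)]

# Simple random sampling: every member has an equal chance of selection
simple_sample = random.sample(population, k=10)

def stratified_sample(pop, key, k):
    """Sample each stratum in proportion to its share of the population."""
    strata = {}
    for person in pop:
        strata.setdefault(person[key], []).append(person)
    sample = []
    for members in strata.values():
        share = round(k * len(members) / len(pop))
        sample.extend(random.sample(members, share))
    return sample

# Stratified sampling: guarantees 6 from group A and 4 from group B
strat_sample = stratified_sample(population, key="group", k=10)
print(len(simple_sample), len(strat_sample))
```

Simple random sampling can by chance over-represent one group in a small sample; stratification fixes each group's share by construction, which is why it is preferred when the strata are known to matter.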

Creating the Design

The research design is chosen based on a range of factors. Important factors when choosing the design are feasibility, time, cost, ethics, measurement problems and what you would like to test. The design of the experiment is critical for the validity of the results.

Typical Designs and Features in Experimental Design

  • Pretest-Posttest Design Checks whether the groups differ before the manipulation starts, as well as the effect of the manipulation. Pretests can sometimes influence the effect.
  • Control Group Control groups are designed to measure research bias and measurement effects, such as the Hawthorne Effect or the Placebo Effect. A control group is a group that does not receive the same manipulation as the experimental group. Experiments frequently have 2 conditions, but rarely more than 3 conditions at the same time.
  • Randomized Controlled Trials Randomized sampling, comparison between an experimental group and a control group, and strict control/randomization of all other variables.
  • Solomon Four-Group Design Uses two control groups and two experimental groups; half the groups have a pretest and half do not. This tests both the effect itself and the effect of the pretest.
  • Between Subjects Design Assigns different participants to different conditions.
  • Within Subject Design Each participant takes part in all of the conditions. See also: Repeated Measures Design.
  • Counterbalanced Measures Design Tests the effect of the order of treatments when no control group is available or ethical.
  • Matched Subjects Design Matches participants to create similar experimental and control groups.
  • Double-Blind Experiment Neither the researcher nor the participants know which is the control group. The results can be affected if either knows.
  • Bayesian Probability Using Bayesian probability to "interact" with participants is a more advanced experimental design. It can be used in settings where there are many variables that are hard to isolate. The researcher starts with a set of initial beliefs and adjusts them according to how participants respond.
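Random assignment, the mechanism behind several of these designs (notably randomized controlled trials and between-subjects designs), can be sketched in a few lines. The participant labels are made up for illustration:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants

def randomly_assign(people, conditions):
    """Shuffle participants, then deal them round-robin into conditions,
    yielding groups of (nearly) equal size."""
    shuffled = people[:]
    random.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

groups = randomly_assign(participants, ["control", "treatment"])
print({c: len(g) for c, g in groups.items()})  # {'control': 10, 'treatment': 10}
```

Because assignment is random, pre-existing differences between participants are spread across conditions on average, which is what licenses a causal reading of any group difference.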

Pilot Study

It may be wise to first conduct a pilot-study or two before you do the real experiment. This ensures that the experiment measures what it should, and that everything is set up right.

Minor errors, which could potentially destroy the experiment, are often found during this process. With a pilot study, you can get information about errors and problems, and improve the design, before putting a lot of effort into the real experiment.

If the experiment involves humans, a common strategy is to first run a pilot with someone involved in the research, but not too closely, and then run a second pilot with a person who resembles the actual subjects. These two pilots are likely to give the researcher good information about any problems in the experiment.

Conducting the Experiment

An experiment is typically carried out by manipulating a variable, called the independent variable , affecting the experimental group. The effect that the researcher is interested in, the dependent variable(s) , is measured.

Identifying and controlling non-experimental factors that the researcher does not want to influence the effects is crucial to drawing a valid conclusion. This is often done by controlling variables, if possible, or randomizing variables to minimize effects that can be traced back to third variables. Researchers only want to measure the effect of the independent variable(s) when conducting an experiment, allowing them to conclude that this was the reason for the effect.

Analysis and Conclusions

In quantitative research , the amount of data measured can be enormous. Data not prepared to be analyzed is called "raw data". The raw data is often summarized as something called "output data", which typically consists of one line per subject (or item). A cell of the output data is, for example, an average of an effect in many trials for a subject. The output data is used for statistical analysis, e.g. significance tests, to see if there really is an effect.
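As an illustration of that pipeline, here is a minimal sketch that collapses hypothetical raw trial data into one summary row per subject ("output data") and then computes a Welch two-sample t statistic by hand. A real analysis would use a statistics package and report a p-value; the numbers below are invented:

```python
from statistics import mean, stdev

# Hypothetical raw data: several trials (e.g. reaction times) per subject
raw = {
    "s1": [310, 295, 305], "s2": [320, 330, 315],   # control group
    "s3": [280, 270, 275], "s4": [265, 260, 272],   # treatment group
}
control_ids, treatment_ids = ["s1", "s2"], ["s3", "s4"]

# "Output data": one summary value (the mean across trials) per subject
output = {sid: mean(trials) for sid, trials in raw.items()}

def welch_t(a, b):
    """Welch's two-sample t statistic (allows unequal variances)."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / (va + vb) ** 0.5

t = welch_t([output[s] for s in control_ids],
            [output[s] for s in treatment_ids])
print(round(t, 2))  # 4.1
```

The key step is the reduction from many raw trials to one line per subject before the significance test; testing on raw trials directly would treat correlated observations as independent and overstate the evidence.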

The aim of an analysis is to draw a conclusion, together with other observations. The researcher might generalize the results to a wider phenomenon if there is no indication of confounding variables "polluting" the results.

If the researcher suspects that the effect stems from a different variable than the independent variable, further investigation is needed to gauge the validity of the results. An experiment is often conducted because the scientist wants to know whether the independent variable has any effect upon the dependent variable. Correlation between variables is not proof of causation.

Experiments are more often quantitative than qualitative in nature, although qualitative experiments do exist.

Examples of Experiments

This website contains many examples of experiments. Some are not true experiments, but involve some kind of manipulation to investigate a phenomenon. Others fulfill most or all criteria of true experiments.

Here are some examples of scientific experiments:

Social Psychology

  • Stanley Milgram Experiment - Will people obey orders, even if clearly dangerous?
  • Asch Experiment - Will people conform to group behavior?
  • Stanford Prison Experiment - How do people react to roles? Will you behave differently?
  • Good Samaritan Experiment - Would You Help a Stranger? - Explaining Helping Behavior
  • Law Of Segregation - The Mendel Pea Plant Experiment
  • Transforming Principle - Griffith's Experiment about Genetics
  • Ben Franklin Kite Experiment - Struck by Lightning
  • J J Thomson Cathode Ray Experiment

Oskar Blakstad (Jul 10, 2008). Experimental Research. Retrieved Aug 30, 2024 from Explorable.com: https://explorable.com/experimental-research

You Are Allowed To Copy The Text

The text in this article is licensed under the Creative Commons-License Attribution 4.0 International (CC BY 4.0) .

This means you're free to copy, share and adapt any parts (or all) of the text in the article, as long as you give appropriate credit and provide a link/reference to this page.

That is it. You don't need our permission to copy the article; just include a link/reference back to this page. You can use it freely (with some kind of link), and we're also okay with people reprinting it in publications such as books, blogs, newsletters, course material, papers, Wikipedia, and presentations (with clear attribution).


Experimental Research

  • First Online: 25 February 2021

C. George Thomas

Experiments are part of the scientific method that help to decide the fate of two or more competing hypotheses or explanations of a phenomenon. The term 'experiment' derives from the Latin experiri, meaning 'to try'. The knowledge that accrues from experiments differs from other types of knowledge in that it is always shaped by observation or experience; in other words, experiments generate empirical knowledge. In fact, the emphasis on experimentation in the sixteenth and seventeenth centuries for establishing causal relationships for various phenomena in nature heralded the resurgence of modern science from its roots in ancient philosophy, spearheaded by great Greek philosophers such as Aristotle.

The strongest arguments prove nothing so long as the conclusions are not verified by experience. Experimental science is the queen of sciences and the goal of all speculation. (Roger Bacon, 1214–1294)




About this chapter

Thomas, C.G. (2021). Experimental Research. In: Research Methodology and Scientific Writing. Springer, Cham. https://doi.org/10.1007/978-3-030-64865-7_5



Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs. quantitative : Will your data take the form of words or numbers?
  • Primary vs. secondary : Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Methods for collecting data

Data is the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .


You can also take a mixed methods approach , where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data . But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.


Descriptive vs. experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.
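As a toy illustration of these requirements (random assignment, a varied independent variable, a measured dependent variable), here is a simulated two-group experiment; every number in it is invented:

```python
# Simulated two-group experiment; all numbers are invented for illustration.
import random

random.seed(42)  # fixed seed so the toy run is reproducible

participants = list(range(20))
random.shuffle(participants)                  # random assignment
treatment, control = participants[:10], participants[10:]

def measure_outcome(treated: bool) -> float:
    """Simulated dependent variable: baseline plus noise, plus an
    effect of 8 points when the independent variable is applied."""
    return 50 + random.gauss(0, 5) + (8 if treated else 0)

treatment_scores = [measure_outcome(True) for _ in treatment]
control_scores = [measure_outcome(False) for _ in control]

observed_effect = (sum(treatment_scores) / len(treatment_scores)
                   - sum(control_scores) / len(control_scores))
print(f"estimated effect: {observed_effect:.1f} points (simulated true effect: 8)")
```

Because the outcome includes random noise, the estimated effect differs from the simulated true effect of 8; a real analysis would also test whether the difference is statistically significant.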



Research methods for collecting data

Research method | Primary or secondary? | Qualitative or quantitative? | When to use
Experiment | Primary | Quantitative | To test cause-and-effect relationships.
Survey | Primary | Quantitative | To understand general characteristics of a population.
Interview/focus group | Primary | Qualitative | To gain more in-depth understanding of a topic.
Observation | Primary | Either | To understand how something occurs in its natural setting.
Literature review | Secondary | Either | To situate your research in an existing body of work, or to evaluate trends within a research topic.
Case study | Either | Either | To gain an in-depth understanding of a specific group or context, or when you don't have the resources for a large study.

Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews , literature reviews , case studies , ethnographies , and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias .

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment .
  • Using probability sampling methods .

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.

Research methods for analyzing data

Research method | Qualitative or quantitative? | When to use
Statistical analysis | Quantitative | To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis | Quantitative | To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner.
Thematic analysis | Qualitative | To analyze data collected from interviews, focus groups, or textual sources. To understand general themes in the data and how they are communicated.
Content analysis | Either | To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources. Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).


If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.
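The student-survey example above amounts to simple random sampling, which is one line in Python; the roster size here is a made-up placeholder:

```python
# Simple random sampling sketch; the roster is hypothetical.
import random

random.seed(7)
population = [f"student_{i}" for i in range(5000)]  # hypothetical roster
sample = random.sample(population, k=100)           # sampling without replacement

print(len(sample))                      # 100
print(len(set(sample)) == len(sample))  # True: no student is drawn twice
```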

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


"I thought AI Proofreading was useless but.."

I've been using Scribbr for years now and I know it's a service that won't disappoint. It does a good job spotting mistakes”

  • Open access
  • Published: 30 August 2024

Integrating causal pathway diagrams into practice facilitation to address colorectal cancer screening disparities in primary care

  • Brooke Ike,
  • Ashley Johnson,
  • Rosemary Meza &
  • Allison Cole

BMC Health Services Research volume 24, Article number: 1007 (2024)


Colorectal cancer (CRC) is the second leading cause of cancer death and the second most common cancer diagnosis among the Hispanic population in the United States. However, CRC screening prevalence remains lower among Hispanic adults than among non-Hispanic white adults. To reduce CRC screening disparities, efforts to implement CRC screening evidence-based interventions in primary care organizations (PCOs) must consider their potential effect on existing screening disparities. More research is needed to understand how to leverage existing implementation science methodologies to improve health disparities. The Coaching to Improve Colorectal Cancer Screening Equity (CoachIQ) pilot study explores whether integrating two implementation science tools, Causal Pathway Diagrams and practice facilitation, is a feasible and effective way to address CRC screening disparities among Hispanic patients.

Methods

We used a quasi-experimental, mixed methods design to evaluate feasibility and assess initial signals of effectiveness of the CoachIQ approach. Three PCOs received coaching from CoachIQ practice facilitators over a 12-month period. Three non-equivalent comparison group PCOs received coaching during the same period as participants in a state quality improvement program. We conducted descriptive analyses of screening rates and coaching activities.

Results

The CoachIQ practice facilitators discussed equity, facilitated prioritization of QI activities, and reviewed CRC screening disparities during a higher proportion of coaching encounters than the comparison group practice facilitator. While the mean overall CRC screening rate in the comparison PCOs increased from 34 to 41%, the mean CRC screening rate for Hispanic patients remained at 30%. In contrast, the mean overall CRC screening rate at the CoachIQ PCOs increased from 41 to 44%, and the mean CRC screening rate for Hispanic patients increased from 35 to 39%.

Conclusions

The CoachIQ program merges two implementation science methodologies, practice facilitation and causal pathway diagrams, to help PCOs focus quality improvement efforts on improving CRC screening while also reducing screening disparities. Results from this pilot study demonstrate key differences between CoachIQ facilitation and standard facilitation, and point to the potential of the CoachIQ approach to decrease disparities in CRC screening.


Colorectal cancer (CRC) is the second leading cause of cancer death and the second most common cancer diagnosis among the Hispanic population in the United States (US) [ 1 ]. The US Preventive Services Task Force recommends that adults age 45–75 screen for CRC as screening reduces CRC incidence and mortality [ 2 , 3 , 4 ]. However, CRC screening prevalence remains lower among Hispanic adults 45 years of age and older than among non-Hispanic white adults (64% vs. 74% in 2020) [ 5 ]. Primary care organizations (PCOs) have a range of evidence-based interventions (EBIs) to utilize for increasing CRC screening, including small media, clinician assessment and feedback, and patient reminders [ 6 ]. To reduce CRC screening disparities, it is imperative that efforts to implement CRC screening EBIs also consider their potential effect on existing screening disparities. Yet, there is no established approach for ensuring equity is integrated into implementation efforts in PCOs.

There have been recent calls to bring more of an equity focus to implementation science [7, 8, 9]. Brownson et al. suggest further examination of how to leverage existing implementation science methodologies to address equity determinants and improve health disparities [7]. Practice facilitation (PF) is an established implementation method for guiding PCOs in implementing EBIs [10, 11, 12, 13]. PF draws on the Model for Improvement [14], which guides practice facilitators to ask three key questions: (1) What are we trying to accomplish? (2) How will we know that a change is an improvement? (3) What change can we make that will result in improvement? In order to select what changes to make, PF involves assessing existing systems, barriers to improvement, and potential interventions for improvement [14]. PF may be an approach to improving health equity; however, minimal research has been done on how, or the degree to which, PF may decrease health disparities [15].

A complementary implementation science visualization tool, the causal pathway diagram (CPD), provides a structure for implementers to be explicit about the outcomes they are trying to influence, the barriers that are inhibiting those outcomes, and the change strategies that may be poised to bring about improved outcomes [16]. By carefully articulating how strategies work, CPDs aim to improve their effectiveness [17, 18]. CPDs help implementers to consider whether a strategy will work under local conditions by considering what is necessary for the strategy to work (i.e., preconditions) and what might enhance or diminish the effectiveness of the strategy (i.e., moderators). Within the context of PF, there could be potential in applying CPDs as a means to help facilitators ensure that the EBIs PCOs choose and the quality improvement (QI) strategies PCOs apply have genuine potential to address important local barriers to decreasing CRC screening disparities.
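One way to picture what a CPD makes explicit is as a small data structure linking a strategy to the barrier it targets, its mechanism, its preconditions, and its moderators. The sketch below uses invented example content, not the study's actual diagram (Python 3.9+ for the built-in generic annotations):

```python
# Hypothetical CPD entry; all field contents are invented examples.
from dataclasses import dataclass, field

@dataclass
class CausalPathway:
    strategy: str                 # the change strategy being vetted
    barrier: str                  # the barrier it is matched to
    mechanism: str                # how the strategy addresses the barrier
    outcome: str                  # the outcome it should influence
    preconditions: list[str] = field(default_factory=list)  # must hold for it to work
    moderators: list[str] = field(default_factory=list)     # enhance or diminish effect

cpd = CausalPathway(
    strategy="Mail FIT kits with Spanish-language instructions",
    barrier="Limited clinic access among Hispanic patients",
    mechanism="Screening can be completed at home",
    outcome="Higher CRC screening rate among Hispanic patients",
    preconditions=["Accurate mailing addresses", "Lab capacity to process kits"],
    moderators=["Patient trust in the clinic"],
)
print(cpd.strategy, "->", cpd.outcome)
```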

The Coaching to Improve Colorectal Cancer Screening Equity (CoachIQ) pilot study explores whether integrating CPDs into PF is a feasible and effective way to address disparities in CRC screening among Hispanic patients. The goal of this paper is to describe the CoachIQ practice facilitation approach and report changes in overall CRC screening rates and changes in CRC screening disparities before and after CoachIQ PF.

Study design

For our pilot study, we used a quasi-experimental, mixed methods design to evaluate feasibility and assess initial signals of effectiveness of the CoachIQ approach. Study procedures were reviewed by the University of Washington Human Subjects Division (STUDY00016086) and deemed to be human subjects research that qualifies for exempt status. Participants provided informed consent prior to participation.

Study setting and recruitment

We partnered with the Washington, Wyoming, Alaska, Montana, and Idaho (WWAMI) region Practice and Research Network and the Washington Association for Community Health to recruit PCOs with Hispanic patient populations and CRC screening disparities. Of the eight PCOs approached directly about participating in CoachIQ, four had the capacity and interest to participate, and the study team selected the three PCOs with the largest Hispanic patient populations for inclusion. Coaching was provided at the organization level at three PCOs located in Wyoming (n = 1), Washington (n = 1), and Idaho (n = 1), and involved four practices. Two of the PCOs were federally qualified health centers and one was a hospital-affiliated health center. We provided $1500 to each PCO to compensate them for time spent on research activities.

We worked with an organization that provides PF support to Washington state PCOs to improve CRC screening to identify and engage PCOs for the non-equivalent comparison group. Of the five PCOs approached to participate, three had the interest and capacity to share CRC screening data for the study. The three PCOs received coaching support during the same time period as the CoachIQ intervention. The three non-equivalent comparison group PCOs were federally qualified health centers and included 24 practices across three organizations providing care in Washington. Coaching support was provided at the organization level. We provided $500 to each comparison group PCO to compensate them for time spent on study-specific evaluation activities.

Data collection

Throughout the study period (January 2023 – December 2023), the two CoachIQ practice facilitators kept field notes on their work with practices. At the end of each month, the two CoachIQ practice facilitators and the one comparison group practice facilitator completed a survey about coaching and QI activities. The survey was developed collaboratively with all three practice facilitators to include standard practice facilitator activities along with coaching elements related to CPD, such as whether the facilitator worked with the PCOs to understand how QI activities were expected to affect a prioritized barrier. The three practice facilitators received standardized instructions on how to interpret and respond to the survey questions. A copy of the survey is available in supplementary materials.

For each study site, we requested data on CRC screening rates from the electronic health records at two time points, the beginning and ending of coaching as was feasible for the participating practices. Data included CRC screening rates overall, among Hispanic patients, and among non-Hispanic patients. Additionally, we collected descriptive data about the participating PCOs and demographics information about their patient populations. Patient demographics data came from electronic health records data prior to the start of coaching (January 2024 for intervention practices, 2019 for comparison group practices).

Data analysis

For our qualitative analysis, we used a basic qualitative descriptive approach [ 19 ] as the aim of the qualitative work was to identify and illustrate case examples of the CoachIQ approach as experienced by participating PCOs. A trained and experienced qualitative analyst (BI) independently reviewed and hand-coded practice facilitator field notes for examples of the facilitator and PCO applying CPDs to the QI work. The qualitative analyst created data displays of poignant examples of the application of the CoachIQ approach and reviewed these displays with the larger study team (AC, AJ, and RM) to reflect on their accurate representation of the data and experiences of practices.

For our quantitative analysis, we conducted descriptive analyses. We determined the baseline CRC screening disparity by calculating the difference between the non-Hispanic CRC screening rate and the Hispanic CRC screening rate. We then compared baseline data with post-coaching data for the CoachIQ and comparison organizations. We also conducted descriptive analyses of PCO characteristics and of the practice facilitators' monthly coaching activity data.
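
The disparity calculation described above can be sketched in a few lines. This is an illustrative example with made-up rates, not the study's actual code or data:

```python
# Hypothetical sketch: CRC screening disparity and its pre/post change
# for a single organization. Rates are invented for illustration.

def disparity(non_hispanic_rate: float, hispanic_rate: float) -> float:
    """Disparity = non-Hispanic screening rate minus Hispanic screening rate."""
    return non_hispanic_rate - hispanic_rate

# Made-up rates (percent) for one PCO, before and after coaching.
baseline = disparity(non_hispanic_rate=42.0, hispanic_rate=35.0)  # 7.0
post = disparity(non_hispanic_rate=45.0, hispanic_rate=44.0)      # 1.0

# A negative change means the disparity narrowed after coaching.
change = post - baseline
print(f"baseline disparity: {baseline:.1f} pts, change: {change:+.1f} pts")
```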

CoachIQ program

Each CoachIQ organization received approximately 12 months of QI support from two practice facilitators (one lead and one support) and a clinical advisor, all of whom were members of the study team. The CoachIQ organizations had received no prior coaching on improving CRC screening. The CoachIQ practice facilitators had 8 years (BI) and 3 years (AJ) of prior coaching experience, and the clinical advisor was a family medicine physician (AC). The CoachIQ study team also collaborated with an implementation scientist with expertise in CPDs (RM), who trained the CoachIQ practice facilitators and clinical advisor in the CPD methodology and contributed to the development of the CoachIQ program.

The CoachIQ program design was derived by creating a CPD model specific to decreasing CRC screening disparities in primary care (Fig. 1) and blending that model into standard PF approaches. The structure of the CoachIQ program adapted key elements of study team members' (BI and AC) prior QI work supporting PCOs in implementing system-based changes to improve opioid prescribing (The Six Building Blocks), particularly its use of three facilitation stages: prepare, implement, and sustain [20, 21, 22]. The CoachIQ program incorporated an equity focus and used CPDs to inform the strategies used in the three practice facilitation stages, as outlined in Table 1 and detailed below.

Figure 1. CoachIQ Causal Pathway Diagram

Stage 1: prepare

The first stage, Prepare, occurred during the first four months of the intervention and involved building the QI team, assessing the baseline, and prioritizing the work. When building their QI teams, PCOs were encouraged to include members who represented the targeted underserved demographic (Hispanic patients) as well as representatives with knowledge of and experience in CRC screening. To assess the baseline, the QI team completed a survey, and individual members participated in interviews with the practice facilitator covering current CRC screening practices, past improvement activities, existing barriers to screening, potential strategies to overcome those barriers, and factors that might support or impede the success of those strategies. To prioritize the work, the QI team participated in a coaching meeting to discuss the results of the baseline assessment, identify and prioritize barriers faced by their Hispanic patient populations, and select QI strategies that could potentially overcome those prioritized barriers. The practice facilitator used CPDs to lead the team in vetting the effectiveness of alternative strategies by assessing (1) whether strategies were clearly matched with the barriers, by facilitating discussions on how the strategy would address the barrier (i.e., the mechanism); (2) whether strategies were feasible, by considering the preconditions for a strategy to work and the factors that could moderate how well it works; and (3) what early indicators would signal whether the strategy was reducing the prioritized barrier. The final product of the meeting was a CPD Action Plan outlining steps to achieve "SMART" (specific, measurable, actionable, realistic, and time-bound) goals and the relationships in the related CPD figures guiding the work.
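
The elements a CPD links together during this vetting process can be represented as a simple data structure. This is a hypothetical sketch, not the authors' software; the study used CPDs as facilitation worksheets, and all field names here are illustrative:

```python
# Hypothetical sketch of the elements a CPD Action Plan connects,
# following the three vetting steps described above.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CpdEntry:
    barrier: str        # prioritized barrier to equitable CRC screening
    strategy: str       # QI strategy chosen to address it
    mechanism: str      # how the strategy is expected to affect the barrier
    preconditions: List[str] = field(default_factory=list)    # must hold for it to work
    moderators: List[str] = field(default_factory=list)       # factors changing its strength
    early_indicators: List[str] = field(default_factory=list) # signals it is working

# Illustrative entry loosely based on the brochure example discussed later.
entry = CpdEntry(
    barrier="Hispanic patients unaware of the need for CRC screening",
    strategy="Educational brochures",
    mechanism="Brochures inform patients, prompting them to ask about screening",
    preconditions=["Patients actually receive the brochure before their visit"],
    moderators=["Brochure language and reading level"],
    early_indicators=["More patient-initiated screening questions at visits"],
)
print(entry.strategy, "->", entry.barrier)
```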

Stage 2: implement

During the seven months of the second stage, Implement, the QI team carried out the work prioritized during Stage 1 using CPD Action Plans developed at each monthly QI meeting with the practice facilitator. During monthly meetings, the facilitator used the CPD to guide the PCO in assessing its progress, reviewing early outcomes and equitable screening data, and adjusting implementation plans as needed. A key aspect of the CoachIQ practice facilitator's role was to interrogate whether QI team implementation activities were targeting the prioritized barriers identified during the CPD assessment work in Stage 1. CoachIQ practice facilitators also helped the QI team investigate why implementation of a QI strategy was struggling by checking on the necessary strategy preconditions (e.g., clinicians available to attend the health equity training) or moderators (e.g., training materials relevant to clinician CRC screening work) that might be reducing the strategy's effectiveness. Finally, CoachIQ practice facilitators worked with the QI team to review the early outcome measures that were expected as precursors to an eventual decrease in CRC screening disparities.

Stage 3: sustain

During the last month of the program, the practice facilitator worked with the QI team to assess progress and identify what had facilitated or hindered the work. The practice facilitator met with the team to discuss the work left to accomplish and helped the PCO develop a sustainability plan to continue it.

Comparison group

Throughout the study period, each PCO in the comparison group received approximately 12 months of QI support from one practice facilitator. The comparison group practice facilitator had 7 years of prior coaching experience and was not affiliated with the study. The comparison group practices had been receiving coaching on improving CRC screening for several years prior to the study start and were focused on reestablishing effective CRC screening practices and sustaining those still in effect. The QI strategies for the study period were chosen by PCOs from a list of EBIs provided by the practice facilitator. PF support involved quarterly meetings where the practice facilitator checked in on QI activities, worked through challenges, and connected the QI team to resources. There was also financial support available to these practices for staffing, patient navigation, population tracking, and patient colonoscopies.

Results

Characteristics

The characteristics of the CoachIQ and comparison group PF organizations are shown in Table  2 . CoachIQ PCOs ranged in size from 11 to 36 primary care clinicians. The comparison group PCOs ranged in size from 19 to 74 primary care clinicians. All CoachIQ PCOs and comparison group PCOs reported using clinician reminders as an evidence-based CRC screening intervention at the start of the study. Two organizations in the comparison group and one organization in CoachIQ reported efforts to reduce structural barriers to CRC screening.

Each participating PCO reported patient characteristics for the population of patients eligible for CRC screening. In the CoachIQ PCOs, the proportion of patients identified as Hispanic ranged from 8 to 13%, and in the comparison group organizations, the range was 4–6% Hispanic. At one CoachIQ PCO, the proportion of patients without health insurance was 20%. At the remaining CoachIQ and comparison group organizations, the proportions of patients without health insurance ranged from 1 to 9%.

The two CoachIQ practice facilitators and the single comparison group practice facilitator entered data each month to record their coaching activities. For each activity, we calculated the proportion of months during the 12-month coaching cycle in which the practice facilitator reported doing that activity. The CoachIQ practice facilitators reported discussing equity in the majority (75%) of monthly coaching encounters, compared to the comparison group practice facilitator, who reported discussing equity in only 25% of coaching encounters (Table 3). The CoachIQ practice facilitators also reported facilitating prioritization of QI activities, facilitating development of an action plan, reviewing process steps and measures, and reviewing CRC screening disparities during a higher proportion of coaching encounters than the comparison group practice facilitator. The comparison group practice facilitator reported providing technical support or education, connecting practices to others doing similar work, and sharing relevant resources in a higher proportion of coaching encounters than the CoachIQ practice facilitators.
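
The proportion-of-months measure is a simple tabulation. The following sketch uses hypothetical survey data and an illustrative representation (boolean flags per month); it is not the study's instrument or code:

```python
# Illustrative sketch: for each coaching activity, compute the share of
# surveyed months in which a facilitator reported doing it.
from typing import Dict, List

def activity_proportions(monthly_reports: List[Dict[str, bool]]) -> Dict[str, float]:
    """For each activity, the proportion of months in which it was reported."""
    activities = {a for report in monthly_reports for a in report}
    n = len(monthly_reports)
    return {a: sum(r.get(a, False) for r in monthly_reports) / n for a in activities}

# Hypothetical 12-month cycle: equity discussed in 9 of 12 months.
reports = [{"discussed_equity": True}] * 9 + [{"discussed_equity": False}] * 3
print(activity_proportions(reports))  # {'discussed_equity': 0.75}
```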

Examples of CPD application in CoachIQ

In addition to examining the differences reported by practice facilitators in monthly surveys about their coaching activities, we developed two case examples of how CoachIQ practice facilitators used CPDs to guide the selection and implementation of QI activities with an equity focus (see the two example CPDs in supplementary materials).

The first example involved a PCO QI team that needed to adjust their implementation approach to meet a precondition for the strategy to be effective. This PCO planned to use educational brochures to address the barriers of (1) Hispanic patients’ limited knowledge of the need for CRC screening and (2) clinicians forgetting to recommend screening. The PCO QI team theorized that the brochures would help Hispanic patients learn about the importance of CRC screening and motivate them to ask about screening during busy appointments with their primary care clinician. During a CoachIQ meeting, the QI team reported that they received the brochures and placed them in their waiting rooms. The CoachIQ practice facilitator used the CPD to prompt the team to think through whether this deployment of educational brochures would be effective. It emerged that an important precondition for the strategy to be effective might not be met. If the brochures were only in the waiting rooms, it was unclear whether the patients would notice and access them prior to their appointments. Therefore, the team adjusted their implementation approach to instead incorporate giving the brochures to patients during rooming, which would make it much more likely that the brochures would address their intended barriers and outcomes.

The CoachIQ practice facilitator also helped the QI teams use early outcome measures to confirm the strength of the strategy-barrier match. The second CPD example involved a practice using targeted patient reminders to address two prioritized barriers: (1) patients not knowing or forgetting that they were due for screening, and (2) clinicians forgetting to recommend screening. During each CoachIQ meeting with the practice facilitator, the team monitored an early outcome measure: the number of Hispanic patients due for screening without a referral in the chart. This PCO made targeted outreach calls to Hispanic patients who were due for screening to encourage them to schedule an appointment and entered information in their charts highlighting their CRC screening gap. After implementing targeted patient reminders, the number of Hispanic patients due for screening without referrals fell from 188 in May 2023 to 16 in October 2023, a strong initial indicator that the strategy was working as planned.

Figure 2 summarizes the pre/post change in the primary outcomes for the CoachIQ PCOs and the comparison group PCOs: mean overall CRC screening rate, Hispanic CRC screening rate, and non-Hispanic CRC screening rate. While the mean overall CRC screening rate in the comparison PCOs increased from 34 to 41%, the mean CRC screening rate for Hispanic patients remained unchanged at 30% after the period of coaching. In contrast, the mean overall CRC screening rate in the CoachIQ PCOs increased from 41 to 44%, and the mean CRC screening rate for Hispanic patients increased from 35 to 39%.

Figure 2. Pre-Post Colorectal Cancer Screening Rates

In Table  4 , we report the baseline CRC screening rate, change in overall CRC screening rate, baseline CRC screening disparity, and change in CRC screening disparity for each of the CoachIQ and comparison group PCOs. The change in disparity for the CoachIQ PCOs ranged from growing by 1% in PCO 3 to reducing by 6% in PCO 1. In the comparison group, the change in disparity ranged from growing by 2% in PCO 1 to growing by 22% in PCO 3.

Discussion

This study designed and piloted the CoachIQ program, which applied CPDs in a novel way within a PF model for decreasing CRC screening disparities. CoachIQ practice facilitators worked with PCO QI teams to prioritize barriers to equitable CRC screening and to design and implement QI strategies to overcome those barriers. We demonstrate that CPDs can be used to guide practice facilitators and PCOs in their efforts to decrease disparities in CRC screening. CPD provides an operational approach to the principles for equitable QI outlined by Gallifant et al. [23], including using tools for health disparity tracking and understanding contextual differences when planning implementation. The practice facilitators used CPDs to help QI teams select QI activities (i.e., strategies) that would be feasible within their context given existing circumstances (i.e., preconditions and moderators) and that had a clear relationship to prioritized local barriers to equitable CRC screening (i.e., outcomes). The practice facilitators also took time to explore how the QI teams anticipated the activities would affect the barriers (i.e., mechanisms) and how to measure what the QI teams expected to see as an early result of implementation (i.e., early outcomes). Through the CoachIQ approach, the practice facilitators tracked the details of the CPDs for the QI teams, prompting them with targeted questions during meetings to fine-tune their QI implementation. One potential strength of the CoachIQ model is that the integration of CPD methods was accomplished through the practice facilitators, rather than by training PCO QI teams directly in the method. PCO QI teams may not have sufficient time or expertise to translate implementation science methods into actionable QI activities [24], and the CoachIQ model provides a means of bringing implementation science to PCOs without the burden of having to identify and learn these methods themselves.

In this study, CoachIQ practice facilitators recorded completing several activities in a greater proportion of coaching encounters than the comparison group practice facilitator: (1) incorporating identified barriers and prioritized activities into Action Plans for the PCOs, (2) keeping equity at the forefront of coaching, and (3) consistently assessing progress to check that QI activities were having the intended effect. These activities align with the core components of CPD, suggesting the practice facilitators maintained fidelity to the CoachIQ approach. Few published studies describing PF programs provide detailed data about the activities performed by practice facilitators [25]. Our approach for collecting these data was assessed as feasible by the practice facilitators and may contribute to future efforts to better characterize and compare PF approaches. Demonstrating feasible measurement and documentation of implementation strategies is a critical need in the field of implementation science [26].

QI efforts have historically failed to address, or have even exacerbated, health disparities [27, 28, 29]. In our study, all three practices receiving support through CoachIQ increased their overall CRC screening rates, and two successfully reduced CRC screening disparities for Hispanic patients. In the comparison practices, the Hispanic/non-Hispanic CRC screening disparity increased in all three practices despite improved overall CRC screening rates. Although we are uncertain why all comparison group practices saw increased CRC screening disparities, including a substantial increase in comparison group PCO 3, we hypothesize that without an intentional focus on equity throughout the QI process, there is a risk of further exacerbating disparities [30]. A strength of the CoachIQ program is its use of both the dynamic role of the practice facilitator and a systematic approach (CPD) to help the PCO engage with equity as an ongoing practice rather than as a QI project finished after one cycle [23]. Improving health equity requires a systematic approach, which aligns well with the CPD methodology [31].

Though we compared CRC screening outcomes for organizations receiving support from CoachIQ practice facilitators to those for organizations receiving standard coaching through an ongoing practice facilitation program, these two groups were non-equivalent. Despite this lack of equivalence, the detailed description of the CoachIQ program and its incorporation of CPD into practice facilitation, together with the demonstration that practices receiving CoachIQ support made progress in improving equitable CRC screening, contribute important data on a promising implementation science approach to decreasing CRC screening disparities. The organizations in the two arms received different financial incentives, which is a potential confounder of the observed effect. For the pilot study, CoachIQ teams were encouraged to include Hispanic patients. Future versions of the program could go further and include patients more intentionally as part of the baseline assessment process and throughout implementation.

Conclusions

The CoachIQ program merges two implementation science methodologies, PF and CPD, to help PCOs focus QI efforts on improving CRC screening while also reducing screening disparities. Results from this pilot study demonstrate key differences between CoachIQ facilitation and standard facilitation and point to the potential of the CoachIQ approach to decrease disparities in CRC screening.

Data availability

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CoachIQ: Coaching to Improve Colorectal Cancer Screening Equity

CPD: Causal Pathway Diagram

CRC: Colorectal Cancer

EBI: Evidence-Based Intervention

PCO: Primary Care Organization

PF: Practice Facilitation

QI: Quality Improvement

US: United States

Cancer Facts & Figures for Hispanic and Latino People. Accessed April 30, 2024. https://www.cancer.org/research/cancer-facts-statistics/hispanics-latinos-facts-figures.html

Wolf AMD, Fontham ETH, Church TR, et al. Colorectal cancer screening for average-risk adults: 2018 guideline update from the American Cancer Society. Cancer J Clin. 2018;68(4):250–81. https://doi.org/10.3322/caac.21457 .

Siegel RL, Miller KD, Wagle NS, Jemal A. Cancer statistics, 2023. Cancer J Clin. 2023;73(1):17–48. https://doi.org/10.3322/caac.21763 .

US Preventive Services Task Force. Screening for Colorectal Cancer: US Preventive Services Task Force Recommendation Statement. JAMA. 2021;325(19):1965–77. https://doi.org/10.1001/jama.2021.6238 .

CDC. Use of Colorectal Cancer Screening Tests. Colorectal Cancer. February 28, 2024. Accessed May 23, 2024. https://www.cdc.gov/colorectal-cancer/use-screening-tests/index.html

Brouwers MC, De Vito C, Bahirathan L, et al. What implementation interventions increase cancer screening rates? A systematic review. Implement Sci. 2011;6:111. https://doi.org/10.1186/1748-5908-6-111 .

Brownson RC, Kumanyika SK, Kreuter MW, Haire-Joshu D. Implementation science should give higher priority to health equity. Implement Sci. 2021;16(1):28. https://doi.org/10.1186/s13012-021-01097-0 .

Kerkhoff AD, Farrand E, Marquez C, Cattamanchi A, Handley MA. Addressing health disparities through implementation science-a need to integrate an equity lens from the outset. Implement Sci. 2022;17(1):13. https://doi.org/10.1186/s13012-022-01189-5 .

Adsul P, Chambers D, Brandt HM, et al. Grounding implementation science in health equity for cancer prevention and control. Implement Sci Commun. 2022;3(1):56. https://doi.org/10.1186/s43058-022-00311-4 .

Baskerville NB, Liddy C, Hogg W. Systematic review and Meta-analysis of Practice Facilitation within Primary Care settings. Ann Fam Med. 2012;10(1):63–74. https://doi.org/10.1370/afm.1312 .

Dogherty EJ, Harrison MB, Graham ID. Facilitation as a role and process in achieving evidence-based practice in nursing: a focused review of Concept and meaning. Worldviews Evidence-Based Nurs. 2010;7(2):76–89. https://doi.org/10.1111/j.1741-6787.2010.00186.x .

Weiner BJ, Rohweder CL, Scott JE, et al. Using practice facilitation to increase Rates of Colorectal Cancer Screening in Community Health Centers, North Carolina, 2012–2013: feasibility, facilitators, and barriers. Prev Chronic Dis. 2017;14:E66. https://doi.org/10.5888/pcd14.160454 .

Kilbourne AM, Geng E, Eshun-Wilson I, et al. How does facilitation in healthcare work? Using mechanism mapping to illuminate the black box of a meta-implementation strategy. Implement Sci Commun. 2023;4(1):53. https://doi.org/10.1186/s43058-023-00435-1 .

How to Improve: Model for Improvement | Institute for Healthcare Improvement. Accessed May 23, 2024. https://www.ihi.org/resources/how-to-improve

Glaser KM, Crabtree-Ide CR, McNulty AD, et al. Improving Guideline-recommended Colorectal Cancer Screening in a federally qualified Health Center (FQHC): implementing a patient Navigation and Practice Facilitation Intervention to Promote Health Equity. Int J Environ Res Public Health. 2024;21(2):126. https://doi.org/10.3390/ijerph21020126 .

Klasnja P, Meza RD, Pullmann MD, et al. Getting cozy with causality: advances to the causal pathway diagramming method to enhance implementation precision. Implement Res Pract. 2024;5:26334895241248851. https://doi.org/10.1177/26334895241248851 .

Lewis CC, Klasnja P, Tuzzio L, Jones S, Walsh-Bailey C, Weiner B. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:336504. https://doi.org/10.3389/fpubh.2018.00136 .

Lewis CC, Klasnja P, Lyon AR, et al. The mechanics of implementation strategies and measures: advancing the study of implementation mechanisms. Implement Sci Commun. 2022;3(1):114. https://doi.org/10.1186/s43058-022-00358-3 .

Sandelowski M. Whatever happened to qualitative description? Res Nurs Health. 2000;23(4):334–40. https://doi.org/10.1002/1098-240X(200008)23:4%3C334::AID-NUR9%3E3.0.CO;2-G .

Six Building Blocks: A Team-Based Approach to Improving Opioid Management in Primary Care Self-Service How-To Guide. Accessed May 23, 2024. https://www.ahrq.gov/patient-safety/settings/ambulatory/improve/six-building-blocks-guide.html

Shoemaker-Hunt SJ, Evans L, Swan H, et al. Study protocol for evaluating six building blocks for opioid management implementation in primary care practices. Implement Sci Commun. 2020;1:16. https://doi.org/10.1186/s43058-020-00008-6 .

Parchman ML, Korff MV, Baldwin LM, et al. Primary care clinic re-design for prescription opioid management. J Am Board Fam Med. 2017;30(1):44–51. https://doi.org/10.3122/jabfm.2017.01.160183 .

Gallifant J, Griffin M, Pierce RL, Celi LA. From quality improvement to equality improvement projects: a scoping review and framework. iScience. 2023;26(10):107924. https://doi.org/10.1016/j.isci.2023.107924 .

King O, West E, Alston L, et al. Models and approaches for building knowledge translation capacity and capability in health services: a scoping review. Implement Sci. 2024;19:7. https://doi.org/10.1186/s13012-024-01336-0 .

Cole AM, Keppel GA, Baldwin LM, Holden E, Parchman M. Implementation Strategies Used by Facilitators to Improve Control of Cardiovascular Risk Factors in Primary Care. J Am Board Fam Med . Published online June 28, 2024:jabfm.2023.230312R1. https://doi.org/10.3122/jabfm.2023.230312R1

Kim B, Cruden G, Crable EL, Quanbeck A, Mittman BS, Wagner AD. A structured approach to applying systems analysis methods for examining implementation mechanisms. Implement Sci Commun. 2023;4(1):127. https://doi.org/10.1186/s43058-023-00504-5 .

Lion KC, Raphael JL. Partnering Health disparities Research with Quality Improvement Science in Pediatrics. Pediatrics. 2015;135(2):354–61. https://doi.org/10.1542/peds.2014-2982 .

Burnett-Hartman AN, Mehta SJ, Zheng Y, et al. Racial/Ethnic disparities in Colorectal Cancer Screening Across Healthcare systems. Am J Prev Med. 2016;51(4):e107–15. https://doi.org/10.1016/j.amepre.2016.02.025 .

Guillaume E, Dejardin O, Bouvier V, et al. Patient navigation to reduce social inequalities in colorectal cancer screening participation: a cluster randomized controlled trial. Prev Med. 2017;103:76–83. https://doi.org/10.1016/j.ypmed.2017.08.012 .

Barrell AM, Johnson L, Dehn Lunn A, Ford JA. Do primary care quality improvement frameworks consider equity? BMJ Open Qual. 2024;13(3):e002839. https://doi.org/10.1136/bmjoq-2024-002839 .

Mutha S, Marks A, Bau I, Regenstein M. Bringing Equity into Quality Improvement: An Overview and Opportunities Ahead; 2012. Retrieved from https://hsrc.himmelfarb.gwu.edu/sphhs_policy_facpubs/761

Acknowledgements

The authors would like to sincerely thank our community partners for their enthusiastic participation. The project is also supported by the National Center for Advancing Translational Sciences of the National Institutes of Health under Award Number UL1 TR002319. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Funding for this study comes from the National Institutes of Health National Cancer Institute (5P50CA244432-03).

Author information

Authors and affiliations.

Department of Family Medicine, University of Washington, Box 354982, Seattle, WA, 98195-4982, USA

Brooke Ike, Ashley Johnson & Allison Cole

Kaiser Permanente Washington, Health Research Institute, 1730 Minor Ave, Suite 1360, Seattle, WA, 98101-1466, USA

Rosemary Meza

Contributions

AC conceived the original research idea. BI, AJ, RM, and AC contributed to the design and development of the study. BI, AJ, and AC took field notes. BI and AC conducted the analyses with additional input from AJ and RM. BI, AJ, and AC drafted the manuscript. All authors read, edited, and approved the final manuscript.

Corresponding author

Correspondence to Brooke Ike .

Ethics declarations

Ethics approval and consent to participate.

Study procedures were reviewed by the University of Washington Human Subjects Division (STUDY00016086) and deemed to be human subjects research that qualifies for exempt status. Participants provided informed consent prior to participation.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

12913_2024_11471_MOESM1_ESM.docx

Supplementary Material 1: Additional File 1: CoachIQ Causal Pathway Diagram Example 1. Description of data: A diagram outlining the first case study example of applying the Causal Pathway Diagram in the CoachIQ program.

12913_2024_11471_MOESM2_ESM.docx

Supplementary Material 2: Additional File 2: CoachIQ Causal Pathway Diagram Example 2. Description of data: A diagram outlining the second case study example of applying the Causal Pathway Diagram in the CoachIQ program.

12913_2024_11471_MOESM3_ESM.docx

Supplementary Material 3: Additional File 3: CoachIQ Practice Facilitator Monthly Survey. Description of data: A monthly electronic survey practice facilitators completed about coaching activities conducted with each primary care organization during the prior month.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Ike, B., Johnson, A., Meza, R. et al. Integrating causal pathway diagrams into practice facilitation to address colorectal cancer screening disparities in primary care. BMC Health Serv Res 24, 1007 (2024). https://doi.org/10.1186/s12913-024-11471-5

Received: 10 July 2024

Accepted: 20 August 2024

Published: 30 August 2024

DOI: https://doi.org/10.1186/s12913-024-11471-5

Keywords

  • Colorectal cancer screening
  • Practice facilitation
  • Causal pathway diagram
  • Implementation
  • Screening disparities
  • Primary care
  • Quality improvement

BMC Health Services Research

ISSN: 1472-6963

  • Open access
  • Published: 30 August 2024

Theoretical and experimental investigation of the impact of oil functional groups on the performance of smart water in clay-rich sandstones

  • Alireza Kazemi,
  • Saeed Khezerloo-ye Aghdam &
  • Mohammad Ahmadi

Scientific Reports volume 14, Article number: 20172 (2024)

  • Energy science and technology
  • Engineering

This research investigated the effect of ion concentration on the performance of low salinity water under different conditions. First, the effect of injection water composition on interparticle forces in quartz-kaolinite, kaolinite-kaolinite, and quartz-oil complexes was tested and modeled. The study used two oil samples, one with a high total acid number (TAN) and the other with a low TAN. The results illustrated that reducing the concentration of divalent ions to 10 mM caused the electric double layer (EDL) around the clay and quartz particles and the high-TAN oil droplets to expand, intensifying the repulsive forces. Next, the study investigated the effect of injection water composition and formation oil type on wettability and oil/water interfacial tension (IFT). The results were consistent with the modeling of interparticle forces. Reducing the divalent cation concentration to 10 mM led to IFT reduction and wettability alteration with the high-TAN oil, but the low-TAN oil reacted less to this change, with the contact angle and IFT remaining almost constant. Sandpack flooding experiments demonstrated that reducing the concentration of divalent cations increased the recovery factor (RF) in the presence of high-TAN oil. However, the RF increase was minimal for the low-TAN oil sample. Finally, different low salinity water scenarios were injected into sandpacks containing migrating fines. By comparing the results for the high-TAN and low-TAN oil samples, the study observed that fine migration was more effective than the wettability alteration and IFT reduction mechanisms for increasing the RF of sandstone reservoirs.

Introduction

Low salinity water injection is a promising but challenging method for enhanced oil recovery 1, 2, 3. No mechanism that comprehensively explains the function of this approach has yet been proposed 4, 5. Low salinity water flooding alters the rock porous media substantially 6, 7, 8, which can result from different interactions such as rock/fluid, fluid/fluid, and rock/rock interplay 9. Therefore, investigating and identifying the strength of this approach under a wide range of conditions is essential. Identifying the pivotal mechanism by which this approach acts on the rock's inner structure remains challenging 10, 11. These mechanisms can either enhance or worsen the EOR process 12.

Recent papers have identified several impactful mechanisms, such as alteration of rock wettability, migration of fine particles, reduction of oil/water IFT, pH changes, and salt-in effects 6, 13, 14. Wettability alteration and fine particle migration have a greater effect than the other mechanisms 11. However, numerous studies have demonstrated that rock wettability alteration does not always occur 15; this mechanism depends on the reservoir rock and fluid characteristics 15. In some cases, implementing low salinity water can even worsen the wettability condition, particularly in reservoirs with carbonate lithology. Additionally, the fine migration mechanism is controversial, as researchers do not fully agree on its effectiveness 13.

In petroleum engineering, the migration of fine particles has traditionally been regarded as a deteriorating and potentially harmful phenomenon in production and injection wells, making it an essential topic for researchers 16, 17, 18, 19. Various approaches to deal with this problem have been suggested, including adjusting the injection rate in injection wells and the production rate in production wells 20, and using clay inhibitors 21 and surfactants 22, 23 to prevent the phenomenon. Fine particles are stable in the porous media under normal conditions, where the production rate is low and stable 24. However, applying IOR and EOR approaches may impact the reservoir adversely 25. During EOR implementation, with changes in injection or production conditions as well as in the reservoir fluid composition, fines can become suspended and detach from the pore surfaces 26, 27. Drag forces from the fluid flow then carry the detached fine particles through the porous media until they become trapped behind narrower pore throats. The connectivity between pores is thereby reduced, which lowers the formation permeability 28, 29, 30.

Contrary to the findings of many papers, some researchers maintain that fine migration is an important beneficial mechanism 25, 26. Tor Austad states that clay particles are mandatory for the low salinity water approach to work 26. The migration of fine particles may raise the recovery factor through two mechanisms. The first is related to the sweeping of pore bodies: when clay particles move forward through the porous media, oil is expelled from the pore bodies 31, 32. The second mechanism is related to conformance control: the pore throats affected by the injected low salinity water become plugged, so water can enter areas that have not yet been swept 33, 34, 35.

Another important mechanism is IFT reduction, which has also been identified in some research as a mechanism for improving RF 13 , 36 , 37 , 38 . However, it is essential to note that this mechanism does not occur in all cases, and sometimes, the IFT between oil and water is entirely indifferent to water salinity 38 , 39 , 40 . In other cases, it is classified as having lower importance than wettability alteration and fine migration 41 . Indeed, it is crucial to recognize that each of these mechanisms occurs under specific conditions. Thus, some low salinity water projects have been unsuccessful due to a lack of attention to the specific conditions required to successfully implement these mechanisms 6 , 42 . Exploring the science behind phenomena is crucial to recognizing and controlling the impactful factors. This understanding can help researchers design low salinity water formulations tailored to each reservoir's specific characteristics, leading to a more effective and successful EOR approach.

So far, papers have studied fine migration qualitatively, including the effect of interparticle forces. This study, however, for the first time uses these forces quantitatively to design the composition of injected fluids, varying the concentration and types of ions to investigate their effect on the possibility of fine migration, both theoretically and experimentally. It also examines the effect of oil functional groups on the recovery enhancement mechanisms from the viewpoint of interparticle forces, both in the laboratory and in theory. The study used the sphere-plate model, assuming that quartz particles are plates, clay particles and oil droplets are spheres, and water is the surrounding fluid. IFT and wettability measurements were then performed to identify the impact of interparticle forces on these phenomena. Additionally, several sandpacks with and without clay were prepared to observe the effect of interparticle forces on the migration of fine particles. Various scenarios were injected to investigate the effect of interparticle forces on fine migration and the impact of fine migration on the oil recovery factor. Ultimately, the study aimed to understand the factors that activate and control mechanisms at the particle scale and to investigate their contribution to improving RF.

Methodology

Theoretical aspect.

This section examines the impact of the surrounding water composition on interparticle forces. The DLVO theory was first introduced for stability calculations of colloidal fluids, and it can be used to identify the type of dominant force between particles 43. The total force calculated by the DLVO theory determines the attraction or repulsion between two particles. If the total force is attractive, the particles tend to aggregate and form larger clusters, whereas if the total force is repulsive, the particles tend to disperse and remain separated 44.

Researchers have shown that an attractive force between the rock porous media and oil particles can prevent the injected water from releasing oil from the pores. As a result of this attractive force, most of the oil remains adsorbed on the rock surface. In contrast, when water is flooded, oil experiencing repulsive forces at the matrix surface is peeled off the rock. The first case is likely to occur under reservoir conditions. Thus, an effective increase in the recovery factor may be achieved by altering the injected water composition to turn this force into repulsion. It is also evident that migrating fine particles exist in most sandstones, and changing the magnitude and direction of the interparticle force affects the movement of these particles. Fine migration may damage the rock media and lower its permeability. On the other hand, migrating fines may also push oil droplets forward through the porous media, enhancing oil recovery. Thus, investigating interparticle forces in various complexes is necessary.

Generally, the DLVO theory models the interaction energy between particles as the sum of the Lifshitz-van der Waals (\( U_{LV} \)) and electrostatic (\( U_{EL} \)) energies 45.
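The total interaction energy is the sum of these two contributions; in standard DLVO notation (a reconstruction consistent with the terms defined below):

\( U_{total}(h) = U_{LV}(h) + U_{EL}(h) \)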

The van der Waals interaction is a microscopic force that attracts neighboring particles to one another. The sphere-plate model was used to calculate this force, and the van der Waals energy at short separations was computed using the Hamaker approximation. The Hamaker constant is needed to compute the \( U_{LV} \) interaction strength of two facing particles and depends on their dielectric properties 45.
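At short separations, the sphere-plate Hamaker approximation takes the standard form (a reconstruction consistent with the symbols defined below):

\( U_{LV} = -\dfrac{A_{132}\, r}{6H} \)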

where A 132 is the Hamaker constant, r denotes the particle radius, and H is the distance between the sphere and plane.

When two particles with EDL come closer, they impose electrostatic forces on each other, which depend on various parameters. The electrostatic energy of particles depends on the distance between them. The below equation calculates the electrostatic interaction energy 45 .
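A commonly used sphere-plate expression of this energy is the constant-potential (Hogg-Healy-Fuerstenau-type) form; the exact variant used by the authors may differ, but this one is consistent with the symbols defined below:

\( U_{EL} = \pi \varepsilon \varepsilon_0 r \left[ 2\zeta_1 \zeta_2 \ln\!\left( \dfrac{1 + e^{-\kappa h}}{1 - e^{-\kappa h}} \right) + \left( \zeta_1^2 + \zeta_2^2 \right) \ln\!\left( 1 - e^{-2\kappa h} \right) \right] \)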

where \( \varepsilon \) denotes the electrolyte's dielectric constant (F/m), \( {\varepsilon_0} \) the vacuum permittivity (F/m), r the particle radius (m), \( \zeta \) the zeta potential (V), and h the particle separation (m). \( k_B \) is Boltzmann's constant (J/K), and T is the absolute temperature in Kelvin.

In the above equation, the parameter \( 1/\kappa \) is the Debye length in meters, representing the thickness of the EDL around the particle. This parameter is calculated through the following equation.
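In its standard form (a reconstruction consistent with the symbols defined below; the factor 1000 converts mol/L to mol/m³):

\( \dfrac{1}{\kappa} = \sqrt{ \dfrac{\varepsilon \varepsilon_0 k_B T}{2000\, N_A e^2 I} } \)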

Here, N A is Avogadro's number, e denotes the electron's electric charge, and I refers to the electrolyte's ionic strength (mole/Liter), calculated through the equation below.
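The ionic strength takes its standard form:

\( I = \dfrac{1}{2} \sum_i C_i Z_i^2 \)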

C represents the target ion concentration in moles per liter (M), while Z denotes the electric charge of the ion. Using the equations above, the interaction energy between various particles can be computed as a function of their separation. These energy values are then plotted against particle separation for various complexes, such as kaolinite-kaolinite or kaolinite-quartz. From the resulting curves, the net interaction force between different particles can be predicted.

Thus, the force direction can be identified from the slope of the energy curve. The presence of a significant energy barrier (hump) in the curve indicates that the colloid is stable and the particles do not agglomerate. In contrast, a monotonic, barrier-free curve indicates particle agglomeration.
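The DLVO workflow above (van der Waals plus electrostatic energy versus separation, with the Debye length computed from the ionic strength) can be sketched in Python. All numerical inputs here (Hamaker constant, zeta potentials, particle radius, brine composition) are illustrative assumptions, not the paper's measured values, and the HHF-type EDL expression is one common variant:

```python
import math

# Physical constants (SI units)
KB = 1.380649e-23          # Boltzmann constant, J/K
E = 1.602176634e-19        # elementary charge, C
NA = 6.02214076e23         # Avogadro's number, 1/mol
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m

def ionic_strength(ions):
    """I = 0.5 * sum(C_i * Z_i^2); ions is a list of (conc in mol/L, charge)."""
    return 0.5 * sum(c * z ** 2 for c, z in ions)

def kappa(ionic_str, eps_r=78.5, temp=333.15):
    """Inverse Debye length (1/m); the factor 1000 converts mol/L to mol/m^3."""
    return math.sqrt(2 * NA * E ** 2 * ionic_str * 1000
                     / (eps_r * EPS0 * KB * temp))

def u_vdw(a132, r, h):
    """Sphere-plate Hamaker (short-range) van der Waals energy, J (attractive)."""
    return -a132 * r / (6 * h)

def u_edl(r, zeta1, zeta2, kap, h, eps_r=78.5):
    """Sphere-plate constant-potential (HHF-type) EDL energy, J."""
    x = math.exp(-kap * h)
    return (math.pi * eps_r * EPS0 * r
            * (2 * zeta1 * zeta2 * math.log((1 + x) / (1 - x))
               + (zeta1 ** 2 + zeta2 ** 2) * math.log(1 - x ** 2)))

def u_total(a132, r, zeta1, zeta2, kap, h, eps_r=78.5):
    """Total DLVO energy; a positive barrier in the profile indicates stability."""
    return u_vdw(a132, r, h) + u_edl(r, zeta1, zeta2, kap, h, eps_r)

# Illustrative kaolinite-quartz case in a 10 mM NaCl brine (assumed values)
i_lsw = ionic_strength([(0.010, 1), (0.010, -1)])   # I = 0.01 mol/L
k_lsw = kappa(i_lsw)                                 # EDL a few nm thick
# Scan separations (1-49 nm) and look for a repulsive energy barrier
profile = [u_total(1e-20, 0.5e-6, -0.04, -0.03, k_lsw, h * 1e-9)
           for h in range(1, 50)]
has_barrier = max(profile) > 0
```

Plotting `profile` against separation reproduces the kind of curve discussed above: a hump signals a stable, dispersed system, while a curve that stays negative signals agglomeration.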

Experimental section

Water samples.

Various salts such as sodium chloride (NaCl), calcium chloride (CaCl 2 ), magnesium chloride (MgCl 2 ), and sodium sulfate (Na 2 SO 4 ) were used to make formation water (FW) and low salinity water samples. Table 1 illustrates the composition of different waters and their characteristics.

All brines were made in the laboratory. For this purpose, the required amount of salt was slowly added to the distilled water and mixed with a magnetic stirrer for 24 h to prevent the formation of sediments. Finally, the brine fluid was filtered by filter paper.

Oil samples

Here, two samples of crude oil gathered from the southwestern oil fields of Iran were tested. The characteristics of tested samples are presented in Table 2 .

Based on the table above, oil(A) is richer in aromatic and asphaltic molecules than oil(B). Timothy et al. showed that the concentration of aromatic and asphaltic functional groups depends strongly on the oxygenated and hydrogenated groups in the oil composition. Hence, oil(A) is expected to contain more acidic functional groups than oil(B). A total acid number (TAN) experiment was conducted to provide concrete evidence, and it confirmed this expectation.

Zeta potential measurement

This work utilized a Zetasizer Nano ZS (Malvern Instruments, UK) to identify the potential in the zeta layer of various particles in different brine samples. This setup utilizes electrophoretic light scattering to identify the value.

This test measures the electric potential on the surface of suspended particles, so the particles must be well suspended in the aqueous solution. For this reason, 6000 rpm rod mixers were used for 20 min to prepare the measured samples. After mixing the clay, sand, and oil samples, the fluid was visibly uniform, and this fluid was used to measure the zeta potential.

Interfacial tension (IFT) measurement

This work utilized the pendant drop method to determine the interfacial tension (IFT) between fluids. The setup consists of several devices, including an online image-capturing camera, a glass cell for the lighter liquid, and a narrow syringe pump. The IFT between oil droplets and different brine samples was determined in this study. In each experiment, an oil droplet was released into the brine, the bulk fluid. The injection was performed smoothly and slowly. After reaching equilibrium, an image of the injected droplet was captured and analyzed. In this study, IFT experiments were conducted at 60 °C.

Wettability measurement

The impact of different brines on the wettability of rock slices was determined through a sessile drop experiment. Rock slices were prepared with a Hitti DD130 rock cutter, and their surface roughness was smoothed with ultra-fine sandpaper. Afterwards, the smoothed slices were washed in a Soxhlet extractor with methanol to eliminate salts and then dried at 100 °C. The smooth, polished slices were immersed in crude oil for two weeks at 60 °C to reach an oil-wet condition. The aged flat pellets were then immersed in various brines to investigate their effect on wettability alteration.

Then, the slices were placed on top of a chamber filled with brine. An oil droplet was placed on the surface of each slice, and a photograph was taken after the droplet stabilized on the slice. The wettability state of the slices was determined based on the droplet's shape. It should be noted that the contact angle measured through this experiment is the apparent contact angle, and to minimize the effect of hysteresis, the experiment should be conducted very slowly 46.

Sandpack flooding experiment

The Pars Ore company provided quartz grains and kaolinite powder. The quartz grains were ground in a disk mill. The compositions of the quartz and kaolinite were determined through an X-ray diffraction test after removing impurities, as illustrated in Table 3.

XRD analysis was also conducted to characterize the sand particles used in this study. Figure 1 shows the XRD results obtained from the sand particle analysis, indicating that 97% of the rock sample is quartz and 1.5% is clay minerals.

Figure 1. XRD analysis graph of sand particles.

The impurities were removed from the rock samples in a two-step procedure. First, the rock powder was washed with toluene in a Soxhlet extractor for 4 h to eliminate all organic impurities. Acetone was then used to wash out the toluene and soluble organic materials. Deionized water was then applied to remove the acetone and salts. At the end of this step, the washed powders were dried at 80 °C for one week.

The next step, conducted only on the quartz sample, removed inorganic materials with 15% HCl acid. Afterward, the quartz grains were washed with distilled water to eliminate the acid, until the outlet water reached the pH of the inlet water.

Sandpack preparation procedure

Sandpacks were used instead of plugs taken from shaly sand cores. To do this, kaolinite and quartz grains were sieved to obtain the desired sizes. The mean size of the kaolinite grains was 3 µm, so the grains were sieved through a 400 mesh to ensure they were not agglomerated. Quartz grains were used in various size ranges: 177 to 250 µm, 125 to 177 µm, and 74 to 125 µm.

The sieved grains were packed in a rubber sleeve with a diameter of 1 cm. A No. 400 mesh screen was glued to the sleeve outlet to prevent sandpack disintegration. The sieved grains were packed in three steps: the largest grains were poured into the sleeve first, and the smallest grains were packed last.

The noteworthy point is that the inner diameter of the sandpack was 1 inch, which is considered in all calculations. However, the outer diameter is 1.5 inches, which fits in a regular core plug holder.

Sandpack flooding setup

A flooding test was conducted to assess the efficiency of low salinity water samples in raising the recovery factor in shaly sandstones. The apparatus used for this step is illustrated in Fig.  2 . Various devices were used to assemble the apparatus, including a syringe pump, three cylinders for various fluids, a core plug holder, and two pressure indicator transmitters to measure the differential pressure (dP).

Figure 2. Schematic of the core flooding apparatus 47.

During the test, the low salinity water samples were flooded into the shaly sandpacks, and the dP across the sandpacks was recorded. The syringe pump controlled the injection rate, and the pressure transmitters measured the pressure drop across the sandpacks as the low salinity water samples flowed through them. A vacuum pump was used to maintain a constant, zero outlet pressure. The experiment was performed for various low salinity water samples, and the results were analyzed to evaluate their ability to improve RF in shaly sandpacks. In this study, all flooding experiments were conducted at 60 °C.

Firstly, the dried and cleaned sandpacks were placed into the core holder. In the next step, they were saturated with FW. The porosity of the sandpacks can be determined by the obtained data (see Eq.  7 ).
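A form of Eq. (7) consistent with the symbols described below is (a reconstruction):

\( \phi = \dfrac{V_i - V_o}{V_b} \)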

V refers to volume, where the subscripts i and o pertain to the inlet and outlet fluid volumes, and the subscript b denotes the bulk volume.

To measure the absolute permeability of sandpacks, they were saturated and then flooded by formation water. The magnitude of this parameter was obtained from Eq. ( 8 ) (Darcy equation).
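Rearranged for permeability, the Darcy equation reads (symbols as defined below):

\( k = \dfrac{q\, \mu\, l}{A\, \Delta P} \)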

In Eq. (8), q is the injection flow rate, µ the viscosity of the flooded brine, l and A the length and cross-sectional area of the core plug, and ∆P the pressure difference between the inlet and outlet of the plug.
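The porosity and permeability calculations can be sketched as follows; the sandpack dimensions, flow rate, and pressure drop in the usage example are illustrative assumptions, not the paper's data:

```python
import math

MD_PER_M2 = 1.0 / 9.869233e-16  # 1 m^2 expressed in millidarcy

def porosity(v_injected, v_produced, v_bulk):
    """Eq. (7) as reconstructed: the pore volume is the injected brine
    volume minus the volume collected at the outlet, over bulk volume."""
    return (v_injected - v_produced) / v_bulk

def darcy_permeability(q, mu, length, area, dp):
    """Darcy's law (Eq. 8): k = q*mu*l / (A*dP). SI units in, k in m^2."""
    return q * mu * length / (area * dp)

# Illustrative sandpack: 1 cm diameter, 10 cm long, 0.1 ml/min brine,
# 2 bar pressure drop, 1 cP brine viscosity (all assumed values)
area = math.pi * 0.005 ** 2            # m^2
q = 0.1e-6 / 60                        # 0.1 ml/min in m^3/s
k = darcy_permeability(q, 1.0e-3, 0.10, area, 2.0e5)
k_md = k * MD_PER_M2                   # convert to millidarcy
```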

After the permeability measurement, the brine-saturated sandpacks were flooded with the crude oil samples until they reached irreducible water saturation (S wir ).

The produced oil volume was plotted versus the time to investigate the impact of different parameters on crude oil recovery. The RF for different scenarios was determined through Eq. ( 9 ).
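Eq. (9) is presumably the standard recovery-factor definition:

\( RF = \dfrac{V_{oil,\ produced}}{V_{oil,\ initial}} \times 100\% \)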

Fine migration causes permeability reduction in the porous medium, and as a result, the injection pressure increases while fluctuating. Therefore, a statistical study is necessary to compare the intensity of fine migration in different sandpacks. This work calculates the moving average of the pressure data over time. The deviation of each data point from this moving average is then calculated, and the result is the mean absolute deviation (MAD), which can be used to compare the intensity of fine migration.
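From the description above, Eq. (10) takes the form (a reconstruction):

\( MAD = \dfrac{1}{n} \sum_{t=1}^{n} \left| P_t - MAP_t \right| \)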

In Eq. ( 10 ), P t is the magnitude of pressure at a specific time, and MAP t denotes the magnitude of calculated moving average pressure.
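The moving-average deviation statistic can be sketched as below; the window length is an assumption, since the paper does not state one:

```python
def moving_average(xs, window):
    """Trailing moving average; early points use the shorter available window."""
    out = []
    for i in range(len(xs)):
        lo = max(0, i - window + 1)
        out.append(sum(xs[lo:i + 1]) / (i + 1 - lo))
    return out

def mad(pressures, window=5):
    """Eq. (10) as reconstructed: mean absolute deviation of the pressure
    from its moving average; larger values mean stronger fluctuation."""
    map_t = moving_average(pressures, window)
    return sum(abs(p - m) for p, m in zip(pressures, map_t)) / len(pressures)
```

A smooth pressure trace yields a MAD near zero, while a fluctuating trace (the fine-migration signature) yields a larger value, matching how the statistic is used to compare sandpacks below.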

Results and discussion

Here, the zeta potential measurement results and particle size determination tests are discussed. The force between particles for various conditions is modeled according to the DLVO theory. The impact of different parameters, including interparticle force, ion type, and concentration on the oil/brine IFT and wettability alteration, is identified.

In the end, fluids are synthesized according to the results of the batch experiments to maximize the oil recovery factor. This process is surveyed, and the oil recovery is recorded. The obtained data is investigated to determine the effect of the injected fluids in oil recovery augmentation.

Interparticle interaction

Following the procedures above, the measured zeta potentials and particle sizes were used to calculate the interaction energy between particles in various systems. Figure 3 plots the magnitude of this energy versus interparticle distance.

Figure 3. Interparticle interaction plots for the complexes (A) kaolinite-kaolinite, (B) kaolinite-quartz, (C) Oil(A)-quartz, (D) Oil(B)-quartz.

Based on Fig.  3 A, kaolinite particles in FW attract each other, which leads to sedimentation. However, using low salinity water stimulates repulsion force between particles. This figure also illustrates that the repulsive force between kaolinite particles becomes dominant with a further reduction of water ionic strength by eliminating divalent cations. Thus, kaolinite particles in Na50LSW, Na10LSW, and Ca10LSW water samples are expected to repel each other, leading to a homogenous dispersion.

Figure  3 B illustrates that the presence of divalent cations leads to a dominant attraction force between quartz and kaolinite particles. Therefore, quartz grains attract kaolinite particles in the FW, Ca50LSW, and Ca10LSW bulk phases. However, the interparticle forces of this complex in Na10LSW and Na50LSW are repulsive.

Figure  3 C illustrates that Oil(A), which contains large amounts of acidic functional groups, behaves like kaolinite during dispersion in water. The oil droplets only attract quartz grains in the presence of formation water or water samples with divalent cations higher than 50 mM. Thus, when the concentration of these cations is low, the interparticle force in the oil-quartz complex is repulsive.

In contrast, Fig.  3 D illustrates that the interparticle force between oil and quartz highly depends on the oil composition and its TAN. Unlike Oil(A), Oil(B) tends to be adsorbed on the facing of quartz particles under any conditions.

In general, this figure shows that using 10 mM of monovalent ions has no substantial effect on the interparticle force diagrams, and the energy barrier (hump) remains apparent under all conditions. However, increasing the ionic strength of the fluid to 50 mM of divalent cations eliminates the hump in all cases and makes the curves monotonic. Therefore, investigating this concentration range is critical, and researchers must give it due attention.

The results of the IFT measurement

This research measured the effect of injection water composition on the IFT of Oil(A) and Oil(B) samples. The results are presented in Table 4 .

The table illustrates that injecting Ca10LSW and Na10LSW significantly reduces the IFT of the oil samples compared to FW. The IFT reduction is more significant for Oil(A) than Oil(B). Injecting Ca50LSW and Na50LSW also reduces the IFT, but the reduction is less significant than with Ca10LSW and Na10LSW.

The results suggest that low salinity water injection can effectively increment RF by reducing the IFT between oil and brine. The effectiveness of this mechanism may depend on the oil composition and its TAN. Oil samples with a higher TAN, such as Oil(A), may experience a more significant reduction in IFT than oil samples with a lower TAN, such as Oil(B).

The coefficient of variation (CV) was calculated for both the oil(A) and oil(B) IFT data. The CV for the oil(A) data was 5.36, whereas for oil(B) it was 0.92. The large difference in this parameter shows that the IFT of the first oil sample varies with water composition, while the IFT of the second sample is not strongly sensitive to it. The data in Table 4 illustrate that reducing the water salinity, and with it the concentration of divalent cations, produces a decline in the IFT value. This reduction is more noticeable for oil(A): using Na10LSW reduces the IFT from 41.4 to 36.3 mN/m. The mechanism behind this IFT reduction is EDL expansion. The total acid number (TAN) of the Oil(A) sample is high due to its acidic functional groups, so the EDL of the oil droplets expands as the bulk water salinity decreases. EDL expansion results in a repulsive force between oil droplets, which makes them smaller and thus lowers the oil-water IFT 48, 49. Also, the acidic functional groups of oil are slightly soluble in water 50; decreasing the salinity of the water and of the components dissolved in it increases their solubility, further reducing the IFT. However, the IFT reduction for oil(B) is meager. This is due to the absence of acidic functional groups in this oil sample, so the brine/oil IFT in this complex does not depend on water salinity. It should be noted that IFT reduction is not a primary mechanism in low salinity water flooding; wettability alteration and fine migration are much more effective.
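The coefficient of variation used to compare the two IFT series can be computed as below; whether the reported values are ratios or percentages is not stated, so this sketch returns the plain ratio (an assumption):

```python
import statistics

def coefficient_of_variation(values):
    """CV = sample standard deviation / mean (multiply by 100 for percent)."""
    return statistics.stdev(values) / statistics.fmean(values)
```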

Wettability alteration results

The contact angle test was performed to evaluate the effect of different brines on the wettability of reservoir rock. The figures obtained for different oil samples (oil(A) and oil(B)) are illustrated in Figs.  4 , 5 , respectively.

Figure 4. Contact angle images of an oil drop on a quartz slice in different conditions.

Figure 5. Contact angle images for the brine/oil complex (B) in different bulk fluids.

Figure 4 illustrates that the quartz surface is initially water-wet, with an apparent oil contact angle 46 of 45.1°. However, after aging the quartz slices for 14 days, the surface becomes oil-wet, with an apparent contact angle of 129°. This wettability alteration is attributed to acidic functional groups in the oil composition, which adsorb onto the silicate surfaces and change their wettability to oil-wet.

When the aged rock is exposed to low salinity water, wettability alteration can occur for two reasons. First, EDL expansion around the quartz slices and oil droplets can cause repulsive forces to prevail in the quartz-oil(A) complex, separating oil droplets from the rock surface. This repulsive force results from the reduced water salinity, which significantly reduces the concentration of divalent cations in the water. Second, the solubility of the functional groups in water increases as the water salinity decreases. This increased solubility causes the functional groups to dissolve in the water, separating oil droplets from the rock surface.

Figure 5 illustrates that the wettability alteration of the complex containing Oil(B) is minimally affected by the reduction of water salinity. This may be due to differences in the composition of Oil(B) compared to Oil(A), leading to different adsorption and interaction behavior with the rock surface.

Figure 6. Results of (A) pressure, (B) permeability, and (C) recovery volume obtained from flooding of sandpack No. 1 by FW, Na50LSW, and Na10LSW.

Overall, wettability alteration is essential for improving oil recovery in low salinity water injection. The mechanism is complex and can be affected by various factors, including the oil composition, rock properties, and injection water composition. Thus, careful evaluation and optimization are necessary to maximize the potential benefits of this technique.

Figure  5 displays contact angle images for the brine/oil complex (B) in different bulk fluids. Similar to the previous figure, aging the rock with Oil(B) increases the oil wetness of the rock face, but the contact angle increment is less significant compared to Oil(A). This observation can be attributed to the lower concentration of acidic functional groups in Oil(B) compared to Oil(A).

Moreover, reducing water salinity and removing divalent cations had a limited effect on the wettability alteration of the complex containing Oil(B). This could be due to the absence of repulsive forces between the oil and the rock facing resulting from EDL expansion or the low concentration of functional groups in Oil(B), which cannot alter the wettability by dissolving in water.

In summary, the wettability alteration mechanism in low salinity water injection can be influenced by various factors, such as the oil composition, rock properties, and injection water composition. Thus, it is crucial to thoroughly evaluate and optimize these factors to achieve optimal results with low salinity water injection in improving oil recovery.

Sandpack flooding results

Eight quartz sandpacks containing zero and 5 wt% kaolinite particles were prepared. The sandpacks' permeability and porosity were measured by injecting formation water (FW) at a rate of 0.1 ml/min (equivalent to 3 ft/day). Following the established procedure, the sandpacks were brought to irreducible water saturation (S wir ) and prepared for flooding, as outlined in Table 5.

The CV parameter for the clay-free samples is 0.47%, and 1.9% for the clay-rich sandpacks, which shows that the preparation process is repeatable, with no significant porosity variation among identical samples. In the next step, to check the effect of different factors on the recovery factor and the fine migration phenomenon, each of these sandpacks was flooded under a different scenario.

Test No. 1: Sandpack No. 1 was successively flooded by FW, Na50LSW, and Na10LSW samples. The results of this test are illustrated in Fig. 6.

During the initial stages of sandpack flooding, the injection pressure increases until it reaches a peak, after which it declines. This peak results from the water phase being trapped behind the oil bank, and the pressure starts to drop when the water breaks through. The pressure stabilizes after injecting about 1.6 PV, and no significant changes occur in the sandpack's parameters. Before this region, however, the pressure drops and the effective permeability increases as oil exits from the sandpack outlet, raising the water saturation in the porous media, as illustrated in Fig. 6C.

Figure 7. Results of (A) pressure, (B) permeability, and (C) recovery volume obtained from flooding of sandpack No. 2 by FW, Na50LSW, and Na10LSW.

When Na50LSW is injected into the sandpack, the injection pressure starts to reduce, and the permeability increments after about 0.5 PV. This is likely due to the separation of some oil from the rock due to Na50LSW entering the sandpack, increasing water saturation and effective permeability of the water phase. The positive effect of Na50LSW on oil recovery is observed in IFT and wettability alteration tests, and the recovery factor increments by 6% at the end of the injection.

Injecting Na10LSW leads to a remarkable increase in the recovery factor and permeability. This is because Na10LSW entering the porous media charges the surfaces of the oil droplets and rock, so that a repulsive force prevails. The IFT and wettability data also support this finding.

In summary, the injection of Na50LSW and Na10LSW can positively impact oil recovery through their effects on water saturation, effective permeability, and interfacial tension. However, the optimal injection strategy may depend on various factors, such as reservoir characteristics, oil properties, and injection water composition, and careful evaluation and optimization are necessary to achieve the best results.

According to Fig.  6 , pressure decreases smoothly over time, which shows that no fine migration occurs in this porous media. To make a better index for comparison with other experiments, the parameter MAD for pressure data obtained from this experiment was calculated, equaling 0.0127.

Test No. 2: Sandpack No. 2 was displaced by Oil(B) to S wir . Then it was flooded by FW, Na50LSW, and Na10LSW, successively. The results of this test are illustrated in Fig.  7 .

Contrary to the previous experiment, the recovery factor did not increase when these low salinity water samples were injected. The wettability alteration and IFT tests also showed that these water samples are unsuitable for improving this oil's recovery factor. The MAD parameter for the pressure data was 0.0119, which is very low and indicates negligible fluctuation in the data, showing that no fine migration occurred.

Test No. 3: In this test, the third sandpack was flooded by FW, Ca50LSW, and Ca10LSW. Figure  8 illustrates the results of this test.

Figure 8. Results of (A) pressure, (B) permeability, and (C) recovery volume obtained from flooding of sandpack No. 3 by FW, Ca50LSW, and Ca10LSW.

Figure 8 clearly illustrates that the injection of Ca50LSW does not improve conditions. As discussed for the previous experiments, this concentration of divalent cations results in a severe contraction of the EDL around matrix particles and oil droplets, so wettability and IFT do not improve. However, by reducing the cation concentration to 10 mM, EDL expansion occurs, and RF increases as repulsive forces dominate. Here, the permeability increased from 36 to 45 md, which can be attributed to factors such as wettability alteration, IFT reduction, and the increase in S w . The MAD parameter for the pressure data from this experiment was 0.0108, which shows a smooth pressure change over time.

Test No. 4: Sandpack No. 4 was flooded by FW, Ca50LSW, and Ca10LSW successively. The data obtained from this experiment is illustrated in Fig.  9 .

figure 9

The results of ( A ) pressure, ( B ) permeability, and ( C ) recovery volume obtained from flooding of sandpack No. 4 by FW, Ca50LSW, and Ca10LSW.

The data in this figure illustrate that reducing water salinity is ineffective in raising the RF of Oil(B). The wettability and IFT data likewise showed that the expansion of the EDL does not increase the repulsive forces when the salinity of the injected water is reduced. The MAD parameter is 0.0111, showing a smooth pressure change; this is convincing evidence that fine migration did not occur.

Test No. 5: In this test, sandpack No. 5, which contains 5 wt% kaolinite, was flooded by FW, Na50LSW, and Na10LSW. The results of this test are illustrated in Fig. 10.

figure 10

The results of ( A ) pressure, ( B ) permeability, and ( C ) recovery volume obtained from flooding of sandpack No. 5 by FW, Na50LSW, and Na10LSW.

Based on the data in this figure, it is clear that RF increases with the injection of Na50LSW. As explained earlier in this paper, Na50LSW can alter the matrix wettability toward water-wet, so the positive effect of this brine on RF is expected. However, there is a fundamental difference here: as RF increases, Sw in the sandpack increases while Keff for the water phase decreases. Moreover, the increase in RF in this experiment (from 57 to 80%) is much higher than in experiment No. 1 (from 62 to 75%), where kaolinite particles were absent from the sandpack. Thus, injecting Na50LSW and Na10LSW into this sandpack activates a mechanism more effective than wettability alteration. Based on the analysis of interparticle forces in the kaolinite-quartz complex, these brines cause repulsive forces to prevail. This mechanism is fine migration, which increases the recovery factor.

Statistical analysis of the pressure data in this experiment shows that the moving average of pressure increases over time. Furthermore, the MAD for the pressure data was 0.0636, much higher than in the previous experiments, indicating fluctuation in the data. As previously discussed, a pressure increase accompanied by fluctuation is a clue to the occurrence of fine migration.
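
The moving-average trend check described here can be sketched as follows; the window size is an arbitrary illustrative choice, not a value reported in the paper.

```python
def moving_average(xs, window):
    """Trailing moving average over a fixed window."""
    return [sum(xs[i:i + window]) / window
            for i in range(len(xs) - window + 1)]

def pressure_trend_rising(pressures, window):
    """Fine-migration clue used in the study: the smoothed pressure
    rises over time. Sketched here as the last moving-average point
    sitting above the first."""
    ma = moving_average(pressures, window)
    return ma[-1] > ma[0]
```

Combined with the MAD fluctuation index, this gives the two signatures the authors associate with fine migration: a rising smoothed pressure and a high MAD.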

Test No. 6: Here, sandpack No. 6, containing 5 wt% kaolinite particles and Oil(B), was flooded by FW, Na50LSW, and Na10LSW.

Figure 11 illustrates that injection of Na50LSW and Na10LSW increases RF. In this process, contrary to experiments 1–4, the increase in RF is accompanied by a reduction in permeability. It was also found earlier in this article that these brines cannot alter the wettability of the Oil(B)-quartz system, and IFT changes can be ignored for this oil sample. Thus, it is clear that fine migration causes this oil to be swept from the pores. Figure 11 also shows that this phenomenon increases RF by 14%. Since fine migration is the only possible mechanism in this test, it has shown acceptable performance and has improved RF even more than wettability alteration and IFT reduction. The parameter MAD was 0.0692, which shows fluctuation in the pressure data.

figure 11

The results of ( A ) pressure, ( B ) permeability, and ( C ) recovery volume obtained from flooding of sandpack No. 6 by FW, Na50LSW, and Na10LSW.

figure 12

The results of ( A ) pressure, ( B ) permeability, and ( C ) recovery volume obtained from flooding of sandpack No. 7 by FW, Ca50LSW, and Ca10LSW.

Test No. 7: In this test, sandpack No. 7, which contains 5 wt% kaolinite and Oil(A), was flooded by FW, Ca50LSW, and Ca10LSW. The results are illustrated in Fig. 12.

It can be seen here that when the concentration of divalent cations in the injection water is high, the benefits of low salinity water injection do not occur: neither fine migration nor wettability alteration nor IFT reduction takes place. For this reason, RF did not improve much with the injection of Ca50LSW. However, with the reduction in the concentration of divalent cations, RF increases. Considering that the rise in oil production from the sandpack coincides with the reduction in permeability, it is certain that fine migration is also at work. The parameter MAD for the pressure data was 0.0302 in this experiment, which shows that a lower degree of fine migration occurred.

Test No. 8: Sandpack No. 8 was flooded by FW, Ca50LSW, and Ca10LSW; the results are illustrated in Fig. 13.

figure 13

The results of ( A ) pressure, ( B ) permeability, and ( C ) recovery volume obtained from flooding of sandpack No. 8 by FW, Ca50LSW, and Ca10LSW.

Figure 13 illustrates that RF starts to increase when the concentration of divalent cations is reduced to 10 mM. Meanwhile, neither wettability alteration nor IFT reduction is expected under this condition. Because the increase in RF occurs after the reduction in permeability, it is evident that fine migration is the effective mechanism. The parameter MAD for the pressure data was 0.0412, which indicates partial fine migration.

This research first studied the impact of various parameters, such as salinity, divalent cation concentration, and oil type, on interparticle forces in a clay-rich porous medium. The following results were obtained from this step:

Lowering salinity stimulates repulsive forces between the various particles. This occurs when TDS is less than 3000 ppm and the divalent cation concentration is less than 10 mM.

The interparticle force in the oil-quartz system is mainly related to the chemistry of the crude oil. Acidic functional groups present in the oil make it sensitive to water salinity. Thus, reducing salinity, particularly the divalent cation concentration, makes the oil droplets smaller.

Interparticle repulsive forces between oil droplets lead to IFT reduction. Thus, water salinity can affect the IFT value of oil with a high total acid number (TAN), but not for other oils. It should be noted that IFT reduction is less effective than wettability alteration and fine migration for oil recovery.
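
The EDL expansion invoked in these conclusions can be quantified by the Debye screening length, which grows as ionic strength falls. A standard colloid-science approximation for water at 25 °C (not a formula taken from the paper):

```python
import math

def debye_length_nm(ionic_strength_mol_per_L):
    """Debye screening length in water at 25 C:
    kappa^-1 ~ 0.304 / sqrt(I) nanometres, with I the ionic
    strength in mol/L. Lower salinity -> larger Debye length ->
    thicker EDL and stronger double-layer repulsion."""
    return 0.304 / math.sqrt(ionic_strength_mol_per_L)
```

Because ionic strength weights each ion by the square of its charge, divalent cations compress the EDL far more per mole than monovalent ones, which is consistent with the contrast between the Na50LSW and Ca50LSW results reported above.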

Core flooding experiments were conducted in the next step, which showed that:

Reducing the TDS of the injected brine to less than 3000 ppm and the concentration of divalent cations to 10 mM leads to fine migration in the porous medium, which positively impacts the recovery factor (RF) for both types of oil.

Fine migration increases the RF of any oil, whereas wettability alteration and IFT reduction depend on the oil type in the porous medium and are less effective than fine migration.
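
The two threshold findings above can be folded into a single screening check. This is a deliberate simplification of the stated conclusions, not a correlation fitted by the authors:

```python
def fine_migration_expected(tds_ppm, divalent_mM):
    """Screening sketch of the study's conclusion: repulsive
    interparticle forces dominate, and fine migration (with its
    RF benefit) is expected, when TDS < 3000 ppm and the divalent
    cation concentration is at or below ~10 mM (Ca10LSW, at
    10 mM, produced fine migration in Tests 7 and 8)."""
    return tds_ppm < 3000 and divalent_mM <= 10
```

Any real injection-water design would of course also weigh the reservoir characteristics and oil properties the paper flags as controlling factors.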

In conclusion, the study provides valuable insights into the interparticle forces and their effects on low salinity water performance, highlighting the importance of careful evaluation and optimization of injection water composition to achieve optimal results in oil recovery.

Data availability

The authors declare that the data supporting this study are available within the paper and its Supplementary Information files.

Abbreviations

BN: Base number

DLVO: Derjaguin, Landau, Verwey, and Overbeek

EDL: Electric double layer

EOR: Enhanced oil recovery

FW: Formation water

IFT: Interfacial tension

LSW: Low salinity water

MAD: Mean absolute deviation

RF: Recovery factor

TAN: Total acid number

XRD: X-ray diffraction


Author information

Authors and affiliations

Department of Petroleum and Chemical Engineering, College of Engineering, Sultan Qaboos University, P. O. Box 123, Muscat, Oman

Alireza Kazemi & Saeed Khezerloo-ye Aghdam

Department of Petroleum Engineering, Amirkabir University of Technology (AUT), Tehran, Iran

Mohammad Ahmadi


Contributions

A.K.: Supervision, conceptualization, methodology, formal analysis, validation, writing-reviewing. S.K.A.: Conceptualization, data curation, methodology, investigation, validation, writing—original draft. M.A.: Formal analysis, writing-reviewing.

Corresponding author

Correspondence to Alireza Kazemi .

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article

Kazemi, A., Khezerloo-ye Aghdam, S. & Ahmadi, M. Theoretical and experimental investigation of the impact of oil functional groups on the performance of smart water in clay-rich sandstones. Sci Rep 14 , 20172 (2024). https://doi.org/10.1038/s41598-024-71237-1

Download citation

Received : 12 May 2024

Accepted : 26 August 2024

Published : 30 August 2024

DOI : https://doi.org/10.1038/s41598-024-71237-1


Keywords

  • Fine migration
  • Interparticle forces
  • Wettability
  • Acid number
