
What Are Open-Ended, Close-Ended Questions? Definition, Examples


Open-ended question definition: Open-ended questions are questions that have unlimited response options.

Close-ended question definition: Close-ended questions are questions that have limited response options.

What is an Open-ended Question?

Open-ended questions are questions that allow for various response options. Open-ended questions do not expect a particular answer; rather, they allow the person responding to answer however they choose.

Examples of Open-ended Questions


  • What was your childhood like?
  • How did you decide to enter this profession?
  • When would you like to visit the museum?

Open-ended questions are common in job interviews.

What is a Close-ended Question?


Close-ended questions limit the range of possible answers and call for specific responses.

Typically, close-ended questions lend themselves to “yes” or “no” responses. Furthermore, close-ended questions are usually specific in nature.

Examples of Close-ended Questions

  • Did you attend the conference?
  • Will you eat dinner with us?
  • Do you like vanilla ice cream?
  • When were you born?

As you can see, the answers to these questions will be much less involved than the answers to open-ended questions.

Open-Ended Questions vs. Close-Ended Questions

Open-ended questions and close-ended questions differ in the kinds of responses they elicit.

The following questions illustrate close- and open-ended questions side-by-side. The questions are similar in subject matter, but the responses will vary depending on the question style.

Open-ended vs. Close-ended Questions:

  • What is your favorite ice cream flavor? / Do you like chocolate ice cream?
  • How are you feeling? / Are you feeling well?
  • What are your plans this evening? / Do you have dinner plans?
  • What homework do you have to complete? / Do you have math homework?
  • Where is your shirt? / Is your shirt in the closet?
  • Where should I buy a new blouse? / Should I buy a blouse at the mall?
  • When is your birthday? / Is your birthday in May?
  • What books did you read this summer? / Did you read a book from the suggested list?
  • Where is your next vacation? / Do you think you will go to Europe soon?
  • How did you meet your husband? / Are you married?

As you can see from these examples, each question type brings out a different kind of response. Close-ended questions are more specific, while open-ended ones are much more “open.”

How Is Each Question Used?


Close-ended questions are best used when you want a short, direct answer to a very specific question. They are less personal in nature and work best when the asker wants a quick answer. Open-ended questions, by contrast, suit situations where you want a detailed, personal response and have time to listen.

Are the following questions open- or close-ended questions?

  • Will you attend the dance tonight?
  • How will you evade the storm?
  • Did you bring the camera?
  • Why can’t I join you?
  • Would you like a new dress?

(Answers: close-ended, open-ended, close-ended, open-ended, close-ended.)

Summary: What Are Open-Ended, Close-Ended Questions?

Define open-ended question: an open-ended question is a question that does not expect a specific, narrow answer.

Define close-ended question: a close-ended question is a question that expects a specific answer and does not give leeway outside of that answer.

In summary,

  • Open-ended questions are broad and do not expect a specific answer.
  • Close-ended questions are limiting and expect a specific answer.


75 Open-Ended Questions Examples


Chris Drew (PhD)

Dr. Chris Drew is the founder of the Helpful Professor. He holds a PhD in education and has published over 20 articles in scholarly journals. He is the former editor of the Journal of Learning Development in Higher Education.



Open-ended questions are inquiries that cannot be answered with a simple “yes” or “no” and require elaboration.

These questions encourage respondents to provide more detailed answers, express opinions, and share experiences.

They can be useful in multiple contexts:

  • In conversation, they elicit more information about someone and can help break the ice or deepen your relationship with them.
  • In education, open-ended questions are used as prompts to encourage people to express themselves, demonstrate their knowledge, or think more deeply about other people.
  • In research, they are used to gather detailed responses from research participants who may not otherwise give detailed or in-depth responses.

An example of an open-ended question is:

“What did you enjoy most about your recent vacation?”

Open-Ended Questions Examples

Examples of Open-Ended Questions for Students

  • What did you find most interesting or surprising about today’s lesson?
  • How would you explain this concept to someone who has never encountered it before?
  • Can you think of a real-life example of what we are talking about today?
  • When doing the task, what did you find most challenging and why?
  • How does this topic connect to the topic we were discussing in last week’s lesson?
  • When you walk out of this lesson today, what is the most important insight you’ll take with you?
  • When you were solving this problem, what strategies did you draw upon? Can you show them to me?
  • If you could change one thing about how you did today’s task, what would it be and why?
  • How do you feel about the progress you have made in the unit so far, and what areas do you think you need to work on?
  • What questions do you still have about this topic that we can address in our next lesson?
  • How do you think this subject will be relevant to your life outside of the classroom, such as on the weekends or even in the workplace once you leave school?
  • We tried just one way to solve this problem. Can you think of any alternative approaches we could have taken to reach the same results?
  • What resources or strategies do you think were most useful when solving this problem?
  • What were the challenges you faced when completing this group work task and how would you work to resolve them next time?
  • What are some of the possible weaknesses of the theory we’ve been exploring today?
  • How has your understanding of this topic evolved throughout the course of this unit?
  • What are some real-world applications of what we’ve learned today?
  • If you were to design an experiment to test this hypothesis, what would be your approach?
  • Can you think of any counterarguments or alternative perspectives on this issue?
  • How would you rate your level of engagement with this topic, and what factors have influenced your level of interest?

Examples of Open-Ended Questions for Getting to Know People

  • So, can you tell me about the first time you met our mutual friend who introduced us?
  • How did you get interested in your favorite hobby?
  • How have your tastes in music changed over time?
  • Can you describe a memorable experience from your childhood?
  • Are there any books, movies, or TV shows that you’ve enjoyed recently that you could recommend? Why would you recommend them to me?
  • How do you usually spend your weekends or leisure time?
  • Can you tell me about a restaurant experience you had that you really enjoyed and why it was so memorable?
  • What’s your fondest memory of your childhood pet?
  • What first got you interested in your chosen career?
  • If you could learn a new skill or take up a new hobby, what would it be and why?
  • What’s the best piece of advice you’ve ever received from a parent or mentor?
  • If you were to pass on one piece of advice to your younger self, what would it be?
  • Can you tell me about something fun you did in the area recently that I could do this weekend on a budget of $100?
  • If you take a moment to think, could you tell me your short-term, medium-term, and long-term personal goals?
  • If you could travel anywhere in the world, where would you go and why?

Examples of Open-Ended Questions for Interviews

  • Can you tell me about yourself and your background, and how you came to be in your current position/field?
  • How do you approach problem-solving, and what methods have you found to be most effective?
  • Can you describe a particularly challenging situation you faced, and how you were able to navigate it?
  • What do you consider to be your greatest strengths, and how have these played a role in your career or personal life?
  • Can you describe a moment of personal growth or transformation, and what led to this change?
  • What are some of your passions and interests outside of work, and how do these inform or influence your professional life?
  • Can you tell me about a time when you faced criticism or negative feedback, and how you were able to respond to it?
  • What do you think are some of the most important qualities for success in your field, and how have you worked to develop these qualities in yourself?
  • Can you describe a moment of failure or setback, and what you learned from this experience?
  • Looking to the future, what are some of your goals or aspirations, and how do you plan to work towards achieving them?

Examples of Open-Ended Questions for Customer Research

  • What factors influenced your decision to purchase this product or service?
  • How would you describe your overall experience with our customer support team?
  • What improvements or changes would you suggest to enhance the user experience of our website or app?
  • Can you provide an example of a time when our product or service exceeded your expectations?
  • What challenges or obstacles did you encounter while using our product or service, and how did you overcome them?
  • How has using our product or service impacted your daily life or work?
  • What features do you find most valuable in our product or service, and why?
  • Can you describe your decision-making process when choosing between competing products or services in the market?
  • What additional products or services would you be interested in seeing from our company?
  • How do you perceive our brand in comparison to our competitors, and what factors contribute to this perception?
  • What sources of information or communication channels did you rely on when researching our product or service?
  • How likely are you to recommend our product or service to others, and why?
  • Can you describe any barriers or concerns that might prevent potential customers from using our product or service?
  • What aspects of our marketing or advertising caught your attention or influenced your decision to engage with our company?
  • How do you envision our product or service evolving or expanding in the future to better meet your needs?

Examples of Open-Ended Questions for Preschoolers

  • Can you tell me about the picture you drew today?
  • What is your favorite thing to do at school, and why do you like it?
  • How do you feel when you play with your friends at school?
  • What do you think would happen if animals could talk like people?
  • Can you describe the story we read today? What was your favorite part?
  • If you could be any animal, which one would you choose to be and why?
  • What would you like to learn more about, and why does it interest you?
  • How do you help your friends when they’re feeling sad or upset?
  • Can you tell me about a time when you solved a problem all by yourself?
  • What is your favorite game to play, and how do you play it?
  • If you could create your own superhero, what powers would they have and why?
  • Can you describe a time when you were really brave? What happened?
  • What do you think it would be like to live on another planet?
  • If you could invent a new toy, what would it look like and what would it do?
  • Can you tell me about a dream you had recently? What happened in the dream?

Open-Ended vs Closed-Ended Questions

  • Definition: Open-ended questions require elaboration and full-sentence responses; they cannot be answered with “yes” or “no.” Closed-ended questions can be answered with “yes,” “no,” or a very brief response, without elaboration.
  • Purpose: Open-ended questions encourage deeper explanations, expression, and analysis from the respondent. Closed-ended questions gather specific information, get an explicit response, or confirm details.
  • Example: Open-ended: “Can you explain what happened to you when you went on vacation?” Closed-ended: “Did you enjoy your vacation?”
  • Benefit: Open-ended questions promote deep thinking because, in asking for a detailed response, students have to process and formulate complete thoughts. Closed-ended questions are great for gathering fast input, for example on Likert scales during research or, during teacher-centered instruction, to quickly check that students are following you.
  • Limitations: Open-ended questions often require one-to-one discussion, so they are impractical in large groups, and they require a skilled conversationalist who can think up questions that will elicit detailed responses. Closed-ended questions tend not to elicit detailed insights, so they cannot gather the full picture or a nuanced understanding of people’s thoughts and opinions.
  • Ideal use: Open-ended questions work best in education, to get people thinking deeply about a topic; in conversation, to get people to share more about themselves and start an interesting conversation; and in research, to gather in-depth data from interviews and qualitative studies that can lead to rich insights. Closed-ended questions work best in education, to gather formative feedback during teacher-centered instruction; in conversation, to get the clarifying information you need quickly; and in research, to conduct large-scale surveys, polls, and quantitative studies that can generate population-level insights.

Benefits of Open-Ended Questions

Above all, open-ended questions require people to actively think. This engages them in higher-order thinking skills (rather than simply providing restricted answers) and forces them to expound on their thoughts.

The best thing about these questions is that they benefit both the questioner and the answerer:

  • Questioner: For the person asking the question, they benefit from hearing a full insight that can deepen their knowledge about their interlocutor.
  • Answerer: For the person answering the question, they benefit because the very process of answering the question helps them to sort their thoughts and clarify their insights.

To expound, below are three of the top benefits.

1. Encouraging critical thinking

When we have to give full answers, our minds have to analyze, evaluate, and synthesize information. We can’t get away with a simple yes or no.

This is why educators embrace open-ended questioning, and preferably questions that promote higher-order thinking.

Expounding on our thoughts enables us to do things like:

  • Thinking more deeply about a subject
  • Considering different perspectives
  • Identifying logical fallacies in our own conceptions
  • Developing coherent and reasoned responses
  • Reflecting on our previous actions
  • Clarifying our thoughts.

2. Facilitating self-expression

Open-ended questions allow us to express ourselves. Imagine going through life only able to say “yes” or “no” to questions. We’d struggle to get our personalities across!

Only with fully expressed sentences and monologues can we share our full thoughts, feelings, and experiences. They allow us to elaborate on nuances, express our hesitations, and explain caveats.

After explaining our thoughts, we often feel more heard, and we have had the chance to express our full, authentic thoughts.

3. Building stronger relationships

Open-ended questioning creates good relationships. You need to ask open-ended questions if you want to have good conversations, get to know someone, and make friends.

These sorts of questions promote open communication, speed up the getting-to-know-you phase, and allow people to share more about themselves with each other.

This will make you more comfortable with each other and give the person you’re trying to get to know a sense that you’re interested in them and actively listening to what they have to say. When people feel heard and understood, they are more likely to trust and connect with others.

Tip: Avoid Loaded Questions

One mistake people make during unstructured and semi-structured interviews is to ask open-ended questions that have bias embedded in them.

For an example of a loaded question, imagine you asked: “Why did the shoplifter claim he didn’t take the television without paying?”

Here, you’ve embedded a premise that you’re asking the person to accept (that the man was a shoplifter).

A more neutral wording might be: “Why did the man claim he didn’t take the television without paying?”

The second question doesn’t require the person to accept the notion that the man actually did the shoplifting.

This might be very important, for example, when questioning witnesses at a police station!

When asking questions, use questions that encourage people to provide full-sentence responses, at a minimum. Use questions like “how” and “why” rather than questions that can be answered with a brief point. This will allow people the opportunity to provide more detailed responses that give them a chance to demonstrate their full understanding and nuanced thoughts about the topic. This helps students think more deeply and people in everyday conversation to feel like you’re actually interested in what they have to say.





What Is An Open Ended Question? Answering It Through Essay



What is an Open-Ended Question?

Open-ended questions are those that do not define the scope you should take (i.e., how many and what kinds of experiences to discuss). Like personal statements for other types of applications, open-ended essays have more room for creativity, as you must make the decision on issues such as how expansive or narrow your topic should be. For business schools, the most common question of this type asks about your personal background, but many questions that look straightforward are actually relatively open-ended.

For example, a question that asks you to describe your leadership style is more open than a question that asks you to describe a single leadership experience. This question defines the kind of experience you should discuss, but not the number. Therefore you still face decisions on how many examples to use and how to integrate them. On the other hand, a question that asks you to discuss your most important activity limits you to one example, but leaves open from which realm you will choose that example. Therefore you still face decisions on what theme you will use to drive your discussion. In both cases, you should use the guidelines discussed in this lesson to structure your essay.

The key aims of this lesson are the same as for the previous one: you will learn how to identify and develop an overarching theme and to organize your content in the most effective structure. Thus, you will learn how to answer open-ended questions to write a perfect grad school essay. There will also be some overlap in subsections to provide a step-by-step guideline.

As we explained in the last lesson, the overarching theme you decide on will inform the manner in which you organize the rest of your content. But in contrast to the type of essay discussed in the previous lesson, you don’t have a series of questions to guide your thought process for these open-ended types. Instead, you must analyze your main ideas and examples and identify the underlying theme that ties them together.

There are two extremes that you should avoid, as demonstrated by the following examples:

TOO BROAD: “A variety of experiences have shaped me into the person I am today.”

TOO NARROW: “My character is defined by hard work.”

It is better to err on the side of specificity, but to avoid the problem of sounding too narrow and over-simplistic, you should add layers to create a more sophisticated theme. For example: “While perseverance helped me to survive academically during my first years in the U.S., I discovered a more profound love of learning when I chose my major in college.”

The same two methods of articulating your theme apply here as they did to the complex essays. We will go through them again with different examples.


The Upfront Approach

The idea here is to articulate your theme in the introduction, suggesting the focus of your argument as you would in a thesis statement. This applicant faces one of the most typical open-ended questions, “What matters most to you and why?” Many people will choose a concrete topic, such as family or religion. In those cases, it’s still essential to have a theme in addition to the topic, so the essay doesn’t amount to a disordered listing of facts. The approach that this applicant uses is unique in that the topic is itself a theme: “a lifelong pursuit to improve myself as a human being.” To add further depth to this theme, he explains how he will approach the topic from three angles: professional, spiritual, and personal.

Not all essays need to be as clearly outlined as this one is. Nevertheless, this essay demonstrates the effectiveness of asserting a clear theme that offers direction for the rest of the discussion.

The Gradual Approach

Because you are writing personal essays, you might prefer to allow the argument to unfold more naturally as a story. Each paragraph will build upon previous points as an underlying theme gradually emerges. The conclusion then ties these individual themes together and includes some kind of encapsulation of the material that preceded it. This applicant writes a summary of his personal and family background. He begins by making each point on its own terms, without trying to force an all-encompassing interpretation on his life.

Gradually, however, ideas begin to recur about obstacles, sacrifice, and the united resolve that his family showed. He puts these pieces together in the final paragraph: “My family created a loving home in which I was able to develop the self-confidence that I need in order to overcome many of the challenges that I face in my career. In addition, growing up in a family of very modest means, and being conscious of my parents’ sacrifices, has given me a powerful sense of drive.”

Organization

Answering open-ended questions will naturally give you more freedom in adopting an arrangement for your ideas. While one strategy comes from the previous lesson, the other two are new.

Hierarchy of Evidence

This approach will be less common for open-ended questions because the majority of them ask about personal background, and in those cases you’re not looking to emphasize accomplishments by bringing them to the forefront. Nevertheless, if there’s something in your personal background that would make you stand out, you should not hesitate to open with that rather than stick to more conventional orderings.

Showing Progress

We do not have a section advising chronological order, because despite its convenience, you should not choose such an approach for its own sake. A chronological essay often reads like a dull list, undiscriminating in its details. On the other hand, the Showing Progress approach often results in a chronological order for independent reasons.

The guiding principle here is to structure your evidence in a way that demonstrates your growth, from a general initial curiosity to a current definite passion, or from an early aptitude to a refined set of skills. It differs from the Hierarchy of Evidence approach because your strongest point might come at the end, but its strength lies precisely in the sense of culmination that it creates.

This applicant faces a variation of the failure question. Instead of being asked to discuss one failure, he has to reflect on the quotation, “Mistakes are the portals of discovery.” (Note: here the theme is given to you, but the scope is not defined. Therefore the example is still useful, as the writer has to choose how to organize his evidence.) After discussing his initial mistake, he describes subsequent actions with clear comparisons to the original experience that demonstrate the progress he has made. Moreover, his choice to discuss two separate mistakes creates a second level of progress, as the lessons he learns after the second mistake are clearly more advanced and mature.

Juxtaposing Themes

If two experiences are closely related but occurred years apart, it makes more sense to develop them as one set of ideas than to interrupt them with unrelated points. This essay, quoted above under the Gradual Approach subsection, moves through the applicant’s personal background point by point, instead of attempting to tell a chronological story. He devotes separate paragraphs to different family members and discusses his experience with the religious conflicts in Ireland in its own segment. Thus each idea is developed in full without being interrupted by points that would fit in only because of chronology.

Your decision between these latter two approaches comes down to the nature of your content—most importantly, the number of ideas you’re juggling. Moreover, showing progress is more significant in an essay about self-development than one about more external factors. Finally, note that you can combine the two approaches by showing progress within self-contained thematic units.



Open-Ended Questions: 28 Examples of How to Ask Properly

Roland Vojkovský

The power of open-ended questions lies in the insights they unlock.

Mastering open-ended questions is key, as they unlock more than just brief replies. They invite deeper thoughts, opening doors to honest conversations. Openness and support are crucial skills for team leaders who want to cultivate that same culture among their employees and customers. Unlike yes-or-no questions, open-ended ones pave the way for people to express themselves fully.

They are not just about getting answers, but about understanding perspectives, making them a valuable tool in the workplace, schools, and beyond. Through these questions, we dig deeper, encouraging a culture where thoughts are shared openly and ideas flourish.

What is an open-ended question?

Open-ended questions kick off with words like “Why?”, “How?”, and “What?”. Unlike the yes-or-no kind, they invite a fuller response. It’s not about getting quick answers, but about making the respondent think more deeply about their answers.

These questions ask people to pause, reflect, and delve into their thoughts before responding. It’s more than just getting an answer—it’s about understanding deeper feelings or ideas. In a way, open-ended questions are bridges to meaningful conversations, leading to a richer exchange of ideas and insights.

Comparison: Open-ended vs closed-ended questions

Open-ended and closed-ended questions serve as the two sides of the inquiry coin, each with its unique advantages.

Open-ended questions:

  • Kickstart with “How”, “Why”, and “What”
  • No set answers, sparking more thought
  • Encourage detailed responses, explaining the ‘why’ or ‘how’

Closed-ended questions:

  • Often have a “Yes” or “No” response
  • Feature predetermined answers (e.g., Options A, B, C)
  • Aim for specific, clear-cut responses, making them quick to answer

Together, they balance a conversation. Open-ended questions open up discussions, while closed-ended questions keep them on track.

Benefits of asking open-ended questions

  • Deeper understanding: They dig deeper, unveiling more than just surface-level information.
  • Enhanced communication: Open-ended questions foster a two-way dialogue, making conversations more engaging.
  • Building trust: When people feel heard, it builds trust and a strong rapport.
  • Encourages critical thinking: These questions nudge towards reflection, enhancing critical thinking skills.
  • Uncovering insights: They can bring out hidden insights that might stay buried otherwise.
  • Problem-solving: By identifying core issues, they pave the way for effective problem-solving.
  • Personal growth: Promoting self-reflection, open-ended questions contribute to personal growth and awareness.

As you can see, open-ended questions pave the way for in-depth responses. Unlike a simple ‘yes’ or ‘no’, they encourage individuals to share more. This leads to richer engagements, giving a peek into others’ perspectives. It’s more than just collecting data; it’s about understanding the context behind it. Through open-ended questions, discussions become more engaging and informative. It’s a step towards fostering a culture of open communication and meaningful interactions.

28 examples of open-ended questions

Questions for team meetings:

  • What steps could enhance our meeting’s effectiveness?
  • How does our meeting structure support or hinder our goals?
  • What topics should be prioritized in our next meeting?
  • How can we make our meetings more engaging and productive?
  • What was the most impactful part of today’s meeting?
  • If you could change one thing about our meetings, what would it be?
  • How do our meetings compare to those in other departments?

For company surveys:

  • What aspects of our culture contribute to your job satisfaction?
  • How could we modify our workspace to boost productivity?
  • What are your thoughts on our current communication channels?
  • How would a flexible work schedule impact your work-life balance?
  • What training or resources would further your career development here?
  • How do our company values align with your personal values?
  • What suggestions do you have for improving team collaboration?

Ideas for brainstorming sessions:

  • What alternative solutions could address this challenge?
  • How might we streamline our brainstorming process?
  • What barriers are hindering creative thinking in our sessions?
  • How do you feel about the diversity of ideas presented?
  • What methods could we employ to encourage more innovative thinking?
  • How can we better document and follow up on ideas generated?
  • What factors should be considered when evaluating potential solutions?

For classroom discussions:

  • What teaching methods engage you the most?
  • If you could redesign our classroom, what changes would you make?
  • How does peer interaction enhance your learning experience?
  • What topics or subjects would you like to explore in more depth?
  • How could technology be integrated to enhance learning?
  • What challenges do you face in achieving your academic goals?
  • How could the school support you better in overcoming academic hurdles?

How to craft effective open-ended questions

Crafting effective open-ended questions is an art. It begins with choosing the right starters like “How”, “What”, and “Why”.

  • Example: How did you come up with this idea?
  • Example: What were the main challenges faced?
  • Example: Why do you think this approach works best?

Using these starters makes it easier to receive thoughtful answers that lead to deeper thinking and understanding.

Beyond starters, here are more tips:

  • Be clear: Ensure clarity to avoid confusion.
  • Avoid leading: Don’t direct towards a specific answer.
  • Keep it simple: Steer clear of complex language.
  • Encourage thought: Frame questions to prompt reflection.
  • Be open: Prepare for unexpected answers.
  • Practice active listening: Show genuine interest.
  • Follow-Up: Delve deeper with additional questions.

Characteristics of good open-ended questions:

  • Interest: Be genuinely interested in the responses.
  • Clarity: Keep your question clear and straightforward.
  • Neutral tone: Avoid leading or biased words.
  • Emotive verbs: Use verbs that evoke thoughts or emotions, like ‘think’, ‘feel’, or ‘believe’.
  • Non-accusatory: Frame your question to avoid sounding accusatory, which can hinder honest responses.

For instance, instead of asking “Why did you choose this method?”, try “What led you to choose this method?”. It feels less accusatory and more open to insightful responses.

When to Use Open-Ended Questions

Open-ended questions are invaluable tools for diving into meaningful conversations, whether in live discussions or self-paced surveys. Acting like keys, they unlock the reasoning behind people’s thoughts and feelings. For example, incorporating open-ended questions into your Net Promoter Score (NPS) surveys can offer insights into why customers assigned a specific score.

These questions are particularly effective for sparking deeper thinking and discussions. Imagine you’re in a team meeting and you ask, “What can we do to better deliver our projects?” The room is likely to fill with useful suggestions. Similarly, in customer service emails, posing a question like “How can we improve your experience?” can provide insights that go beyond the scope of pre-crafted templates.
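To make this concrete, here is a minimal sketch of pairing a closed NPS rating with an open-ended “why” follow-up in a survey definition. This is illustrative Python only; the dictionary structure, field names, and `summarize` helper are assumptions for the sketch, not the API of any particular survey tool.

```python
# Minimal sketch (assumed structure): a closed NPS item followed by an
# open-ended "why" question. Field names are illustrative, not a real API.
survey = [
    {
        "id": "nps_score",
        "type": "closed",
        "prompt": "How likely are you to recommend us to a friend or colleague?",
        "options": list(range(0, 11)),  # 0-10 rating scale
    },
    {
        "id": "nps_reason",
        "type": "open",
        "prompt": "What is the main reason for the score you gave?",
        "follows": "nps_score",  # asked immediately after the closed item
    },
]

def summarize(responses):
    """Average the closed-ended scores and collect the open-ended reasons."""
    scores = [r["nps_score"] for r in responses if r.get("nps_score") is not None]
    reasons = [r["nps_reason"] for r in responses if r.get("nps_reason")]
    average = sum(scores) / len(scores) if scores else None
    return average, reasons

# Example with two hypothetical responses:
average, reasons = summarize([
    {"nps_score": 9, "nps_reason": "Support replied within minutes."},
    {"nps_score": 6, "nps_reason": "The checkout flow was confusing."},
])
print(average, reasons)
```

The point of the sketch is simply that the open-ended text rides alongside the numeric score, so each rating arrives with its own explanation.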


In educational settings, questions like “How can we make learning this easier for you?” can encourage thoughtful answers. This not only enhances the learning environment but also fosters a culture of open communication. By asking such questions, you’re doing more than just seeking answers; you’re inviting deeper thought and engagement.

The real magic of open-ended questions lies in their ability to transform basic interactions into opportunities for greater understanding and learning. Whether you’re conducting a survey, such as an Employee Net Promoter Score, or simply having a team discussion, these questions add context and depth. They turn simple exchanges into meaningful conversations, helping you reach the ultimate goal—whether you’re talking to team members or customers.

Bonus: 8 of our favorite open-ended questions for customer feedback

Embarking on the open-ended questions journey? While Nicereply specializes in collecting easy-to-digest feedback through stars, smiley faces, or thumbs up/down, we see the value in the detailed insights open-ended questions can provide. Here’s a list of our favorite open-ended questions to enhance your customer satisfaction insights:

  • How could we improve your experience with our customer service?
  • What did you appreciate most about your interaction with our team?
  • Were there any aspects of our service that fell short of your expectations?
  • What additional services or features would you like us to offer?
  • How would you describe your overall satisfaction with our service?
  • What suggestions do you have for our support team to serve you better?
  • What were the key factors that influenced your satisfaction with our service?
  • How does our customer service compare to others you have experienced?

Though Nicereply’s focus is on clear-cut feedback, engaging with open-ended questions on a separate note can offer a richer understanding of your customer’s experience.

1: How could we improve your experience with our customer service?

Asking for feedback shows you’re keen on making your service better. It helps you understand what customers think, find out what’s missing, and aim for the best. This question really shows that a company cares about improving.

2: What did you appreciate most about your interaction with our team?

Finding out what customers like helps grow those good parts. It’s a way to cheer on what’s going well and make sure these good habits keep going strong.

3: Were there any aspects of our service that fell short of your expectations?

Knowing what let customers down is the first step to fixing it. This question can bring out hidden issues, making it easier to sort them out. It also shows customers that their happiness is important and their worries are heard, which can really boost the bond between the customer and the company, a crucial factor in building customer loyalty.

4: What additional services or features would you like us to offer?

Uncovering customer desires helps in tailoring services to meet their needs. It’s a proactive step toward innovation based on customer-driven insights.

5: How would you describe your overall satisfaction with our service?

This question opens up a space for many different reactions and stories. It captures a general feeling that can be explored more for deeper understanding.

6: What suggestions do you have for our support team to serve you better?

This question invites customers to share ideas on improving our service. It’s a positive way to get useful feedback. It also shows a commitment to getting better and valuing what customers have to say, which can build trust and good relations.

7: What were the key factors that influenced your satisfaction with our service?

Looking into the details of satisfaction helps to understand what makes good service for customers. It’s a logical way to break down customer satisfaction.

8: How does our customer service compare to others you have experienced?

A comparative question provides a reality check and a broader industry perspective. It’s a way to understand your competitive standing from a customer-centric viewpoint.

It also may provide insights into areas where competitors excel, offering a benchmark for improvement, or areas where your service shines, which can be leveraged in marketing and brand positioning.

Conclusion: Open-ended questions in a nutshell

Open-ended questions are conversation starters, allowing for a richer exchange of ideas. They help individuals express themselves more fully, paving the way for a deeper understanding.

In business, particularly in customer support, these questions are crucial. They help unearth the customer’s perspective, providing key insights for improving service. For support professionals, every open-ended question is an opportunity to better understand customer needs and enhance the dialogue. Through these questions, a culture of open communication and continuous learning is fostered, which is essential for delivering exceptional customer service.


Roland is the go-to guy for content marketing at Nicereply. With over a decade of experience in the field, he took the reins of the SEO department in April 2023. His mission? To spread the word about customer experience far and wide. Outside of the digital world, Roland enjoys quality time with his wife and two daughters. And if he's in the mood, you might catch him lifting weights at the gym—but don't hold your breath!

How to Write Open‐Ended Questions

Last Updated: March 9, 2023

This article was reviewed by Annaliese Dunne. Annaliese Dunne is a Middle School English Teacher. With over 10 years of teaching experience, her areas of expertise include writing and grammar instruction, as well as teaching reading comprehension. She is also an experienced freelance writer. She received her Bachelor's degree in English.

Open-ended questions cannot be answered with a simple “yes” or “no.” Instead, they have multiple potential right answers, and require thought, reflection, and explanation from the person responding. [1] That being said, open-ended questions require as much effort to write as they do to answer. Whether you’re getting ready for an academic discussion, preparing to interview someone, or developing a survey for sales or market research, keep in mind that your questions should ideally spark reflection, discussion, and new ideas from your respondents.

Determining a Specific Purpose

Step 1: Prepare open-ended questions based on reading for class discussions.

  • Take notes on potential questions as you read. While you read the source material for your class discussion, write down broad, big-picture questions about what you’re reading. If you have identified or been given a purpose for reading, use it to guide the questions that you might ask. Later, you can use these notes to help write more polished, final open-ended questions.
  • If you have trouble coming up with specific questions while reading, underline or circle portions of the text that seem important, confusing, or connected to your purpose for reading. You can return to these later as starting points for your written open-ended questions.

Step 2: Add open-ended questions to market research surveys to gain new insights.

  • For example, instead of asking “Were you satisfied with your experience?” you could try something like: “What about your experience did you find most satisfying, and what about it did you find frustrating or difficult?” Instead of simply giving a “yes” or “no” answer, your respondents will give you specific information, and possibly new ideas for improving your product or service. [3]
  • However, if you’re looking for simpler, more quantitative data, it might be easier to rely on multiple-choice, yes-no, or true-false questions, all of which are closed-ended. For example, if you’re trying to find out which gelato flavor was the most popular at your shop this month, it would be easier to ask a closed-ended question about which the respondent purchased most frequently, and then list all available flavors as potential answers.

Step 3: Use open-ended interview questions to thoroughly screen a potential job candidate.

  • Examples of effective open-ended questions to ask in an employment interview include: “In a previous job, have you ever made a mistake that you had to discuss with your employer? How did you handle the situation?” or “When you’re very busy, how do you deal with stress?”

Step 4: Prepare open-ended questions for journalistic interviews to ensure thorough responses.

  • This strategy can be especially useful when interviewing candidates for public office, who are often more concerned with pushing their own platform than with giving thorough, honest answers. Closed-ended questions allow interviewees like these to halt the conversation with a “Yes, but…” or “No, but…” response, and then redirect it towards their own agenda.

Structuring Effective Questions

Step 1: Begin your question with “how,” “why,” or “what.”

  • This isn’t a hard-and-fast rule – you can write a closed-ended question with any leading word. For example, “What color shirt was she wearing?” is decidedly a closed-ended question.

Step 2: Create questions that analyze, compare, clarify, or explore cause and effect.

  • Analytical or meaning-driven questions might ask why a character in a literary text is behaving a certain way, what the importance of a particular concept is, or what the meaning of a scene or image might be. In a class discussion about a novel, you might ask: “What is the significance of the fact that Mary held back tears as she finished her donut towards the end of Chapter 2?”
  • Comparison questions might ask about similarities or differences between character perspectives, or ask the respondent to compare and contrast two different methods or ideas. For example, in a marketing survey, you could ask, “Which model of can opener – the Ergo-Twist or the Ergo-Twist II – was easier to use, and why?"
  • Clarifying questions might ask what the meaning of a complicated idea or an unclear term might be. For instance, if you’re interviewing someone who keeps bringing up “the war on Christmas,” you might ask them, “What exactly do you mean by that statement? Who is attacking Christmas, and how?”
  • Cause-and-effect questions might ask why a character is displaying an emotion in a particular situation, or what connections might exist between two different ideas. An example of a cause-and-effect question that you might ask in an interview could be: “What aspects of your experience in college sports might influence your approach to this job?”

Step 3: Avoid questions that are vague, leading, or answerable in one word.

  • An example of an excessively vague question might be “What about Jeff’s strange behavior?” (Well, what about it?)
  • A leading question hints at the expected answer, thus making it difficult for students who have different ideas to speak up. An example might be: “Why is the ocean a symbol of human insignificance and existential despair?”
  • An example of a yes or no question would be: “Does the grandfather disapprove of his granddaughter’s desire to become a cowgirl?”

Step 4: Avoid questions with limited possible answers.

  • This could mean offering survey respondents a text box to type or write their answers in, rather than bubbles to fill in.
  • In a conversational setting, like a journalistic interview, this means avoiding giving your subject potential answers when you pose the question. For example, instead of asking, “Would you prioritize an aggressive overhaul of public transportation or the increased use of alternative fuels?” ask a question like: “What strategies would you prioritize to make our city more energy-efficient?”

Step 5: Follow up closed-ended questions with open-ended questions.

  • For example, if you ask a multiple-choice question like “How often do you visit your local public library? A) Often, B) Sometimes, or C) Never,” you could follow it up with questions like: “If you chose A, what aspects of our library keep you coming back?” or “If you chose C, what prevents or dissuades you from visiting the library?”
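As a small illustration of that pattern, here is a hedged Python sketch that routes the library example’s open-ended follow-ups based on the multiple-choice answer. The mapping and helper function are hypothetical, written only to show the branching idea rather than any survey tool’s actual behavior.

```python
from typing import Optional

# Hypothetical sketch: choose an open-ended follow-up based on the
# closed-ended answer from the library example above.
FOLLOW_UPS = {
    "A": "What aspects of our library keep you coming back?",            # chose "Often"
    "C": "What prevents or dissuades you from visiting the library?",    # chose "Never"
}

def follow_up_for(choice: str) -> Optional[str]:
    """Return the open-ended follow-up for a given choice, or None if there isn't one."""
    return FOLLOW_UPS.get(choice)

print(follow_up_for("A"))  # open-ended probe for frequent visitors
print(follow_up_for("B"))  # None; "Sometimes" has no follow-up in this sketch
```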

Step 6: Check over your questions to make sure that they’re open-ended.

References

  • [1] https://examples.yourdictionary.com/examples-of-open-ended-and-closed-ended-questions.html
  • [2] https://www.nngroup.com/articles/open-ended-questions/
  • [3] https://www.indeed.com/career-advice/career-development/open-ended-questions-examples
  • [4] https://www.indeed.com/career-advice/interviewing/tough-open-ended-questions
  • [5] https://www.poynter.org/reporting-editing/2004/the-way-we-ask/
  • [6] https://hbr.org/2018/05/the-surprising-power-of-questions
  • [7] https://www.artofmanliness.com/people/social-skills/social-briefing-8-better-conversations-asking-open-ended-questions/

Some Thoughts on Open-Ended Writing

Open-ended writing connects ideas and identifies new questions. It encourages conversation instead of presenting a polished argument.

Writing online, especially for a blog, can feel difficult. Especially for a (recovering) social media and content marketer like myself. Yes, it’s easy to make a blog. It seems like there are constantly new tools popping up promising to help you blog, build a newsletter, or post on social media.

But writing is a different matter. Today it seems like blogging is focused on producing capital “C” Content for consumption. Everything becomes an “ultimate guide to X,” even if you try to use slightly more tasteful titles. With the arrival of ChatGPT and other AI tools, the world is faced with even more bland and repetitive capital “C” Content as the “dark forest” of the web expands, to borrow a term from Yancey Strickler.

This isn’t inspiring. And it’s rarely enjoyable to write these types of articles.

Digital Gardening and Small b Blogging

When I started writing on this website (again), I wanted to treat it more like a digital garden. I’d like to expand my thoughts on digital gardening sometime, but I think Maggie Appleton has the best working definition for now:

A garden is a collection of evolving ideas that aren’t strictly organised by their publication date. They’re inherently exploratory – notes are linked through contextual associations. They aren’t refined or complete - notes are published as half-finished thoughts that will grow and evolve over time. They’re less rigid, less performative, and less perfect than the personal websites we’re used to seeing.

This approach resonates with me. Mainly because producing exhaustive articles is, well, exhausting. This definition of the digital garden is similar to the concept of “small b blogging” introduced by Tom Critchlow:

Small b blogging is learning to write and think with the network. Small b blogging is writing content designed for small deliberate audiences and showing it to them. Small b blogging is deliberately chasing interesting ideas over pageviews and scale. An attempt at genuine connection vs the gloss and polish and mass market of most “content marketing”.

I recently read another note by Tom Critchlow that was the direct inspiration for this line of thinking, entitled “Writing, Riffs, and Relationships.” There are lots of good concepts in the article, but the idea of making writing small stood out to me. In particular:

People’s first instinct with content is to try and make it polished and closed. To be useful by solving something or creating the ultimate guide to something. Those pieces of content can be good - but they’re very hard to write, and even harder to write well! Instead I prefer to take a more inquisitive and open-ended approach….
Closed writing is boring writing. If you’ve fully explored and put to bed the topic you’re writing about then there’s very little left for someone to react to. “Nice post” someone might say. But if you deliberately leave some rough edges, some threads that the reader can pull on, then you’re inviting the reader into the conversation. You’re saying (possibly explicitly!) - “Hey, what are your thoughts on this topic? How do you think about it?”

Critchlow goes on to recommend ending a blog post or “riff” with more questions that encourage you (and your audience) to explore other topics. Altogether, this made me think about the difference between open-ended and closed writing and how they are different modes of writing altogether.

A Brief Definition of Open-Ended Writing

Open-ended writing seeks to connect ideas and identify new pathways of inquiry. It explores a topic by connecting sources and identifying tensions, conflicts, or missing information. Open-ended writing invites conversation and debate.

In practice, open-ended writing is defined by three basic activities: free writing, summarizing, and questioning:

  • Free writing introduces sources, ideas, and questions and starts to connect them together.
  • Summarizing organizes the information into a statement of what is known so far and what has been covered.
  • Questioning identifies tensions and gaps that could be explored further in the future or invites areas of conversation.

Some Spring Cleaning

I started this blog to write small, informal blog posts. So far I’ve fallen back into the trap of writing “articles” that don’t really encourage conversation or provide new threads to pull on. While it’s not critical for doing the actual writing, I decided to re-classify my digital garden to encourage myself to focus more on riffs and notes.

Previously, I had two categories: Notes and Essays. Notes were supposed to be objective and focused on single topics; essays would pull together multiple notes to present a specific point of view. Obviously, neither category encouraged exploratory writing. I generally wrote less and focused on polishing my notes, which were supposed to be the rough form of writing. I seldom got around to writing essays.

I now have my digital garden organized into three categories:

  • Notes – Open-ended, exploratory riffs that connect 2-3 ideas or sources together. These are the heart of the digital garden.
  • Articles – Closed writing that thoroughly explores a single topic from a mostly objective point of view.
  • Essays – Use either open-ended or closed writing to explore a single subject from a personal point of view.

How do you practice open-ended writing?

This particular post might be my first true note. It’s not meant to be comprehensive and it’s mostly my attempt to work out (in public) an approach to open-ended writing.

  • What do you find hard or challenging about writing?
  • Does changing the purpose to open-ended writing encourage you to write more? Or does it scare you?
  • Do you use a public platform or a private one? Is one better than the other, or are they just different?


Open-Ended Questions: Examples & Advantages


When designing surveys, we often need to decide whether to use open-ended or closed-ended questions to get specific information. We also need to be aware that open-ended and closed-ended questions each have strengths and weaknesses and perform in different ways.

Open-ended questions are questions that a sender asks in order to encourage one or several receivers to provide information in response.

Open-Ended Questions: Definition

Open-ended questions are free-form survey questions that allow and encourage respondents to answer in an open-text format based on their complete knowledge, feelings, and understanding. The detailed response to this type of question is not limited to a set of options.

Unlike a closed-ended question that leaves survey responses limited and narrow to the given options, an open-ended question allows you to probe deep into the respondent’s detailed answers, gaining valuable information about the subject or project. The responses to these qualitative research questions can be used to attain detailed and descriptive information on a subject.

They are an integral part of qualitative market research. This research process depends heavily on open, subjective questions and answers on a given topic of discussion or conversation, with room for further probing by the researcher based on the answer given by the respondent. In a typical scenario, open-ended questions are used to gather qualitative data from respondents.


Examples of Open-Ended Questions

Respondents like open-ended questions because they get full control over what they want to say and don't feel restricted by a limited number of options. The beauty of the format is that answers are rarely a single word; they usually come as lists, sentences, or something longer like a paragraph.

So, to understand this more, here are some open-ended question examples:


  • Interview method: How do you plan to use your existing skills to improve organizational growth if hired by the company?
  • Customer-facing: Please describe a scenario in which our online marketplace helps a person make day-to-day purchases.
  • Technical: Can you please explain the back-end JavaScript code template used for this webpage or blog post?
  • Demographic: What is your age? (asked without predefined answer options)
  • Personal / Psychographic: How do you typically deal with stress and anxiety in your life?

In a study conducted by Pew Research, respondents were asked, “What mattered most to you while deciding how you voted for president?” One group was asked this question in a close-ended question format, while the other was asked in an open-ended one. The results are displayed below:


In the close-ended format, 58% of respondents chose "The economy." In the open-ended format, only 35% wrote an answer that indicated "The economy." Note that only 8% of respondents selected "Other" in the close-ended format, whereas in the open-ended format, 43% of respondents wrote in a response that would have been categorized as "Other."

Open-Ended Questions vs. Close-Ended Questions

Open-ended questions motivate respondents to put their feedback into words without restricting their thoughts. They aren't as rigid or constraining as close-ended questions.

Close-ended question / Open-ended question:

  • Do you like working with us? / Tell us about your experience with our organization so far.
  • Have you been stressed lately? / Share with us what has been troubling you.
  • How satisfied are you with your current job role? / What do you expect from this appraisal?

By asking these open-ended questions, the researcher can understand the respondents' true feelings. They yield information about different thought processes across your clientele, troubleshooting suggestions, and even a peek into respondents' inhibitions.

  • The open-ended and closed-ended questions are different tasks for respondents. In the open-ended task, respondents write down what is readily available in their minds. In the close-ended question task, respondents focus their “attention on specific responses chosen by the investigator” (Converse and Presser, 1986).
  • Asking the same question in these two different formats will almost always produce different results. Many investigators have demonstrated this over several decades.
  • Few respondents are going to select the “Other” category and enter responses that are different from the answer choices that are listed.

So what does this mean for us? If you can, do qualitative research first and ensure your close-ended questions represent the items in people’s heads. We need the list of items to be complete since few respondents will select the “Other” category. It may also be necessary to list items not readily available to respondents if they are important to you.


When presenting results, I have found it helpful to explain the fundamental differences between open-ended and closed-ended questions in a sentence or two. It helps stakeholders understand that these are not necessarily precise measurements but measurements that require some interpretation relative to other questions in the survey and to additional information from the qualitative research steps. That is why they need an analyst like you or me!

Why Use Open-Ended Questions?

Unrestricted Opinions:

Customers need a platform to voice their opinions, happy or unhappy, without limits on their answers. Because answer options aren't provided, respondents have the liberty to include details about experiences, feelings, attitudes, and views that they usually wouldn't get to share in single-word answers.

Creative Expression:

These questions show more appreciation for respondents than close-ended questions, since users aren't expected to just "fill them out" for the sake of it.

Spellbinding Vision and Creativity:

Respondents may stun you with the vision and creativity they show with their more detailed answers. Links to their blogs or a verse or two of their poetry will leave you spellbound.

Embracing Freedom of Response:

If a microsurvey contains only close-ended questions, users often disengage and fill it out without giving it much thought. With the kind of freedom that open-ended questions offer, users can respond the way they'd like to, whether in the number of words, the level of detail, or the tone of the message.


Driving Marketing and Innovation:

These responses may contain marketing tips for improving the organization's branding or creative ideas that can lead to monetary gains in the future.

Tackling Complexity:

Knotty situations need more than mere yes/no feedback. Single-select or multiple-choice questions cannot do justice to the detail or scrutiny that some critical and complex situations require.

Exploring Feedback and Troubles:

These questions work best in situations where the respondents are expected to explain their feedback or describe the troubles they’re facing with the products.

Unveiling Customer Insights:

You can learn from your respondents. Open-ended questions give them the freedom to voice opinions that can be insightful for a company.

Revealing Thought Patterns:

These questions can reveal respondents' logic, thoughts, language, and frames of reference, which says a lot about how they think.

Always think about your objective before designing a survey. Scrutinize the purpose and weigh the positives and negatives of using open or closed answers for your research study. Then try it: send the survey to a selected database, analyze the results, and plan improvements for the next round of surveys.


How to Ask an Open-Ended Question?

Asking the right question is a skill in its own right. It requires the ability to understand and segment the target audience, determine the kind of questions that will work well with that audience, and evaluate how effective those questions are.

Here are four ways to create effective open-ended questions:

Understand the difference between open-ended questions and closed-ended questions:

Before you start putting questions to paper, you need absolute clarity on open-ended vs. closed-ended questions. Your objective in sending out an online survey should be clear, and based on that, you can evaluate the kind of questions you want to use. Open-ended questions are usually used where the feelings and feedback of the customer are highly valued. To receive fully transparent feedback, make sure you don't lead respondents with your questions, and give them complete liberty to fill in whatever they want.

Create a list of open-ended questions before curating the survey:

Once you have clarity on what open-ended questions are and how to implement them, draw up a list of survey questions you'd want to use. You can include a fair share of open-ended questions in your survey and adjust that share depending on the responses you receive.


Open-ended questions like the following are extremely popular and give you more value-added insights:

  • Why do you think competitive market research is important before launching a new business?
  • How do you think you’ll overcome these obstacles in our project?
  • Tell us about your experience with our onboarding process.
  • What are your professional priorities at the moment?
  • What domain of work motivates you?

You can make a list of similar questions before you start executing the survey.

Reconstruct any question into an open-ended question:

Observation is the key here. Observe what kind of questions you usually ask your customers, prospects, and every other person you come across. Analyze whether your questions are closed-ended or open-ended. Try to convert closed questions into open-ended ones wherever you think the latter would fetch you better results and more valuable insights.

Follow up a closed-ended question with an open-ended question:

This trick works wonders. It's not always possible to convert a closed question into an open one, but you can follow a closed question up with an open-ended one.

For example, if you have a closed question like – “Do you think the product was efficient?” with the options “Yes” and “No”, you can follow it up with an open question like “How do you think we can make the product better in future?”
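As a rough sketch (the helper names and wording are illustrative, not any particular survey tool's API), this is what that closed-then-open pairing looks like in a tiny command-line script:

```python
# A minimal sketch of pairing a closed-ended question with an open-ended
# follow-up. Question wording and function names are illustrative only.

def ask_closed(prompt: str) -> bool:
    """Ask a yes/no (closed-ended) question and return True for 'yes'."""
    answer = input(f"{prompt} (yes/no): ").strip().lower()
    return answer.startswith("y")

def ask_open(prompt: str) -> str:
    """Ask an open-ended question and return the free-text response."""
    return input(f"{prompt}\n> ").strip()

if __name__ == "__main__":
    efficient = ask_closed("Do you think the product was efficient?")
    # The open-ended follow-up probes the reasoning behind the closed answer.
    follow_up = ("What did you find most useful about the product?"
                 if efficient
                 else "How do you think we can make the product better in future?")
    details = ask_open(follow_up)
    print("Thanks! Recorded:", {"efficient": efficient, "details": details})
```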

Regarding surveys, the advantages of open questions often surpass those of closed ones.

How to Add Open-Ended Questions?

1. Go to: Login » Surveys » Edit » Workspace

2. Click on the Add Question button to add a question.

3. Select Basic, then go to the Text section and select Comment Box.

4. Enter the question text.


5. Select the data type: Single Row Text, Multiple Rows Text, Email address, or Numeric Data.


6. Select the Text Box Location (below or next to question text). Enabling “next to question text” will put the text box to the right of the question.
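For orientation, here is a purely hypothetical sketch (plain Python data, not QuestionPro's actual configuration format or API) of the choices made in steps 2-6, plus the optional settings discussed below:

```python
# Hypothetical sketch only: this is NOT an export of QuestionPro's real format.
# It simply restates the choices from steps 2-6 as a plain data structure.
open_ended_question = {
    "type": "comment_box",                         # Basic » Text » Comment Box (step 3)
    "text": "How can we improve your onboarding experience?",  # question text (step 4)
    "data_type": "Multiple Rows Text",             # step 5: Single Row, Multiple Rows, Email, or Numeric
    "text_box_location": "below question text",    # step 6: below or next to the question text
    "force_response": False,                       # mandatory setting (see 'Force Response' below)
    "max_characters": 500,                         # optional character limit (see below)
}
print(open_ended_question)
```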

How to view the data collected by an open-ended question?

1. Click on Login » Surveys » Analytics » Text Analytics » Text Report


Please note that analysis for open-ended text questions is not included in the Real-Time Summary or Analysis Report. To view the analysis of open-ended questions, you can see the Word Cloud report.


Can You Limit The Number of Characters in a Text Question?

You can set a limit on the number of characters that respondents can enter in the text box.

How to Mark The Question as Mandatory?

To make the question mandatory, toggle validation on and select 'Force Response'; it is off by default. When 'Force Response' is not enabled, respondents can continue with the survey without answering, and a response that skips questions on every page of the online questionnaire is still considered complete. Enabling 'Force Response' means respondents can only continue with the survey after answering the question.


Closed-ended questions, like open questions, are used in both spoken and written language and in formal and informal situations. It is common to find questions of this type in school or academic evaluations, interrogations, and job interviews, among many other contexts.


It is essential to note that crafting effective open-ended questions requires skill and careful consideration. Questions should be clear, concise, and relevant to the topic. They should avoid leading or biased language, allowing individuals to express their views without undue influence.

Overall, open-ended questions are a powerful way to gather information, foster communication, and gain deeper insights. Whether used in research, professional settings, or personal conversations, they enable individuals to explore ideas, share perspectives, exercise critical thinking, and engage in meaningful discussions. By embracing the openness and curiosity of open-ended questions, we can uncover new knowledge, challenge assumptions, and broaden our understanding of the world.


Frequently asked questions

What’s the difference between closed-ended and open-ended questions?

Closed-ended, or restricted-choice, questions offer respondents a fixed set of choices to select from. These questions are easier to answer quickly.

Open-ended or long-form questions allow respondents to answer in their own words. Because there are no restrictions on their choices, respondents can answer in ways that researchers may not have otherwise considered.

Frequently asked questions: Methodology

Attrition refers to participants leaving a study. It always happens to some extent—for example, in randomized controlled trials for medical research.

Differential attrition occurs when attrition or dropout rates differ systematically between the intervention and the control group . As a result, the characteristics of the participants who drop out differ from the characteristics of those who stay in the study. Because of this, study results may be biased .

Action research is conducted in order to solve a particular issue immediately, while case studies are often conducted over a longer period of time and focus more on observing and analyzing a particular ongoing phenomenon.

Action research is focused on solving a problem or informing individual and community-based knowledge in a way that impacts teaching, learning, and other related processes. It is less focused on contributing theoretical input, instead producing actionable input.

Action research is particularly popular with educators as a form of systematic inquiry because it prioritizes reflection and bridges the gap between theory and practice. Educators are able to simultaneously investigate an issue as they solve it, and the method is very iterative and flexible.

A cycle of inquiry is another name for action research . It is usually visualized in a spiral shape following a series of steps, such as “planning → acting → observing → reflecting.”

To make quantitative observations , you need to use instruments that are capable of measuring the quantity you want to observe. For example, you might use a ruler to measure the length of an object or a thermometer to measure its temperature.

Criterion validity and construct validity are both types of measurement validity . In other words, they both show you how accurately a method measures something.

While construct validity is the degree to which a test or other measurement method measures what it claims to measure, criterion validity is the degree to which a test can predictively (in the future) or concurrently (in the present) measure something.

Construct validity is often considered the overarching type of measurement validity . You need to have face validity , content validity , and criterion validity in order to achieve construct validity.

Convergent validity and discriminant validity are both subtypes of construct validity . Together, they help you evaluate whether a test measures the concept it was designed to measure.

  • Convergent validity indicates whether a test that is designed to measure a particular construct correlates with other tests that assess the same or similar construct.
  • Discriminant validity indicates whether two tests that should not be highly related to each other are indeed not related. This type of validity is also called divergent validity .

You need to assess both in order to demonstrate construct validity. Neither one alone is sufficient for establishing construct validity.


Content validity shows you how accurately a test or other measurement method taps  into the various aspects of the specific construct you are researching.

In other words, it helps you answer the question: “does the test measure all aspects of the construct I want to measure?” If it does, then the test has high content validity.

The higher the content validity, the more accurate the measurement of the construct.

If the test fails to include parts of the construct, or irrelevant parts are included, the validity of the instrument is threatened, which brings your results into question.

Face validity and content validity are similar in that they both evaluate how suitable the content of a test is. The difference is that face validity is subjective, and assesses content at surface level.

When a test has strong face validity, anyone would agree that the test’s questions appear to measure what they are intended to measure.

For example, looking at a 4th grade math test consisting of problems in which students have to add and multiply, most people would agree that it has strong face validity (i.e., it looks like a math test).

On the other hand, content validity evaluates how well a test represents all the aspects of a topic. Assessing content validity is more systematic and relies on expert evaluation of each question, analyzing whether each one covers the aspects that the test was designed to cover.

A 4th grade math test would have high content validity if it covered all the skills taught in that grade. Experts (in this case, math teachers) would have to evaluate the content validity by comparing the test to the learning objectives.

Snowball sampling is a non-probability sampling method . Unlike probability sampling (which involves some form of random selection ), the initial individuals selected to be studied are the ones who recruit new participants.

Because not every member of the target population has an equal chance of being recruited into the sample, selection in snowball sampling is non-random.

Snowball sampling is a non-probability sampling method , where there is not an equal chance for every member of the population to be included in the sample .

This means that you cannot use inferential statistics and make generalizations —often the goal of quantitative research . As such, a snowball sample is not representative of the target population and is usually a better fit for qualitative research .

Snowball sampling relies on the use of referrals. Here, the researcher recruits one or more initial participants, who then recruit the next ones.

Participants share similar characteristics and/or know each other. Because of this, not every member of the population has an equal chance of being included in the sample, giving rise to sampling bias .

Snowball sampling is best used in the following cases:

  • If there is no sampling frame available (e.g., people with a rare disease)
  • If the population of interest is hard to access or locate (e.g., people experiencing homelessness)
  • If the research focuses on a sensitive topic (e.g., extramarital affairs)

The reproducibility and replicability of a study can be ensured by writing a transparent, detailed method section and using clear, unambiguous language.

Reproducibility and replicability are related terms.

  • Reproducing research entails reanalyzing the existing data in the same manner.
  • Replicating (or repeating ) the research entails reconducting the entire analysis, including the collection of new data . 
  • A successful reproduction shows that the data analyses were conducted in a fair and honest manner.
  • A successful replication shows that the reliability of the results is high.

Stratified sampling and quota sampling both involve dividing the population into subgroups and selecting units from each subgroup. The purpose in both cases is to select a representative sample and/or to allow comparisons between subgroups.

The main difference is that in stratified sampling, you draw a random sample from each subgroup ( probability sampling ). In quota sampling you select a predetermined number or proportion of units, in a non-random manner ( non-probability sampling ).
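As a quick illustration of that difference, here is a minimal Python sketch (the population, strata, and quotas are made up) that draws a stratified random sample and a non-random quota sample from the same list of units:

```python
# A minimal sketch contrasting stratified sampling (random selection within
# each subgroup) with quota sampling (non-random, fixed counts per subgroup).
# The population and the 10%-per-stratum target are purely illustrative.
import random

population = [{"id": i, "stratum": "urban" if i % 3 else "rural"} for i in range(300)]

def stratified_sample(units, key, fraction):
    """Probability sampling: draw a random fraction from every stratum."""
    strata = {}
    for u in units:
        strata.setdefault(u[key], []).append(u)
    sample = []
    for members in strata.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(random.sample(members, k))  # random selection within the stratum
    return sample

def quota_sample(units, key, quotas):
    """Non-probability sampling: take the first units encountered until each quota fills."""
    counts = {k: 0 for k in quotas}
    sample = []
    for u in units:  # e.g., whoever happens to be conveniently available first
        g = u[key]
        if counts.get(g, 0) < quotas.get(g, 0):
            sample.append(u)
            counts[g] += 1
    return sample

print(len(stratified_sample(population, "stratum", 0.10)))
print(len(quota_sample(population, "stratum", {"urban": 20, "rural": 10})))
```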

Purposive and convenience sampling are both sampling methods that are typically used in qualitative data collection.

A convenience sample is drawn from a source that is conveniently accessible to the researcher. Convenience sampling does not distinguish characteristics among the participants. On the other hand, purposive sampling focuses on selecting participants possessing characteristics associated with the research study.

The findings of studies based on either convenience or purposive sampling can only be generalized to the (sub)population from which the sample is drawn, and not to the entire population.

Random sampling or probability sampling is based on random selection. This means that each unit has an equal chance (i.e., equal probability) of being included in the sample.

On the other hand, convenience sampling involves stopping people at random, which means that not everyone has an equal chance of being selected depending on the place, time, or day you are collecting your data.

Convenience sampling and quota sampling are both non-probability sampling methods. They both use non-random criteria like availability, geographical proximity, or expert knowledge to recruit study participants.

However, in convenience sampling, you continue to sample units or cases until you reach the required sample size.

In quota sampling, you first need to divide your population of interest into subgroups (strata) and estimate their proportions (quota) in the population. Then you can start your data collection, using convenience sampling to recruit participants, until the proportions in each subgroup coincide with the estimated proportions in the population.

A sampling frame is a list of every member in the entire population . It is important that the sampling frame is as complete as possible, so that your sample accurately reflects your population.

Stratified and cluster sampling may look similar, but bear in mind that groups created in cluster sampling are heterogeneous , so the individual characteristics in the cluster vary. In contrast, groups created in stratified sampling are homogeneous , as units share characteristics.

Relatedly, in cluster sampling you randomly select entire groups and include all units of each group in your sample. However, in stratified sampling, you select some units of all groups and include them in your sample. In this way, both methods can ensure that your sample is representative of the target population .

A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.

The key difference between observational studies and experimental designs is that a well-done observational study does not influence the responses of participants, while experiments do have some sort of treatment condition applied to at least some participants by random assignment .

An observational study is a great choice for you if your research question is based purely on observations. If there are ethical, logistical, or practical concerns that prevent you from conducting a traditional experiment , an observational study may be a good choice. In an observational study, there is no interference or manipulation of the research subjects, as well as no control or treatment groups .

It’s often best to ask a variety of people to review your measurements. You can ask experts, such as other researchers, or laypeople, such as potential participants, to judge the face validity of tests.

While experts have a deep understanding of research methods , the people you’re studying can provide you with valuable insights you may have missed otherwise.

Face validity is important because it’s a simple first step to measuring the overall validity of a test or technique. It’s a relatively intuitive, quick, and easy way to start checking whether a new measure seems useful at first glance.

Good face validity means that anyone who reviews your measure says that it seems to be measuring what it’s supposed to. With poor face validity, someone reviewing your measure may be left confused about what you’re measuring and why you’re using this method.

Face validity is about whether a test appears to measure what it’s supposed to measure. This type of validity is concerned with whether a measure seems relevant and appropriate for what it’s assessing only on the surface.

Statistical analyses are often applied to test validity with data from your measures. You test convergent validity and discriminant validity with correlations to see if results from your test are positively or negatively related to those of other established tests.

You can also use regression analyses to assess whether your measure is actually predictive of outcomes that you expect it to predict theoretically. A regression analysis that supports your expectations strengthens your claim of construct validity .
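For example, a minimal sketch with made-up scores might check convergent and discriminant validity by correlating a new measure with an established test of the same construct and with a test of an unrelated construct:

```python
# An illustrative sketch (invented scores) of checking convergent and
# discriminant validity with correlations, as described above.
import numpy as np

new_measure      = np.array([12, 15, 14, 18, 20, 22, 21, 25])
established_test = np.array([30, 34, 33, 40, 44, 47, 45, 52])   # same construct
unrelated_test   = np.array([ 7,  3,  9,  2,  8,  5,  6,  4])   # distinct construct

convergent_r   = np.corrcoef(new_measure, established_test)[0, 1]
discriminant_r = np.corrcoef(new_measure, unrelated_test)[0, 1]

# A high correlation with the related test and a low correlation with the
# unrelated test together support construct validity.
print(f"Convergent validity check:   r = {convergent_r:.2f}")
print(f"Discriminant validity check: r = {discriminant_r:.2f}")
```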

When designing or evaluating a measure, construct validity helps you ensure you’re actually measuring the construct you’re interested in. If you don’t have construct validity, you may inadvertently measure unrelated or distinct constructs and lose precision in your research.

Construct validity is often considered the overarching type of measurement validity ,  because it covers all of the other types. You need to have face validity , content validity , and criterion validity to achieve construct validity.

Construct validity is about how well a test measures the concept it was designed to evaluate. It’s one of four types of measurement validity , which includes construct validity, face validity , and criterion validity.

There are two subtypes of construct validity.

  • Convergent validity : The extent to which your measure corresponds to measures of related constructs
  • Discriminant validity : The extent to which your measure is unrelated or negatively related to measures of distinct constructs

Naturalistic observation is a valuable tool because of its flexibility, external validity , and suitability for topics that can’t be studied in a lab setting.

The downsides of naturalistic observation include its lack of scientific control , ethical considerations , and potential for bias from observers and subjects.

Naturalistic observation is a qualitative research method where you record the behaviors of your research subjects in real world settings. You avoid interfering or influencing anything in a naturalistic observation.

You can think of naturalistic observation as “people watching” with a purpose.

A dependent variable is what changes as a result of the independent variable manipulation in experiments . It’s what you’re interested in measuring, and it “depends” on your independent variable.

In statistics, dependent variables are also called:

  • Response variables (they respond to a change in another variable)
  • Outcome variables (they represent the outcome you want to measure)
  • Left-hand-side variables (they appear on the left-hand side of a regression equation)

An independent variable is the variable you manipulate, control, or vary in an experimental study to explore its effects. It’s called “independent” because it’s not influenced by any other variables in the study.

Independent variables are also called:

  • Explanatory variables (they explain an event or outcome)
  • Predictor variables (they can be used to predict the value of a dependent variable)
  • Right-hand-side variables (they appear on the right-hand side of a regression equation).

As a rule of thumb, questions related to thoughts, beliefs, and feelings work well in focus groups. Take your time formulating strong questions, paying special attention to phrasing. Be careful to avoid leading questions , which can bias your responses.

Overall, your focus group questions should be:

  • Open-ended and flexible
  • Impossible to answer with “yes” or “no” (questions that start with “why” or “how” are often best)
  • Unambiguous, getting straight to the point while still stimulating discussion
  • Unbiased and neutral

A structured interview is a data collection method that relies on asking questions in a set order to collect data on a topic. They are often quantitative in nature. Structured interviews are best used when: 

  • You already have a very clear understanding of your topic. Perhaps significant research has already been conducted, or you have done some prior research yourself, so you already possess a baseline for designing strong structured questions.
  • You are constrained in terms of time or resources and need to analyze your data quickly and efficiently.
  • Your research question depends on strong parity between participants, with environmental conditions held constant.

More flexible interview options include semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias is the tendency for interview participants to give responses that will be viewed favorably by the interviewer or other participants. It occurs in all types of interviews and surveys , but is most common in semi-structured interviews , unstructured interviews , and focus groups .

Social desirability bias can be mitigated by ensuring participants feel at ease and comfortable sharing their views. Make sure to pay attention to your own body language and any physical or verbal cues, such as nodding or widening your eyes.

This type of bias can also occur in observations if the participants know they’re being observed. They might alter their behavior accordingly.

The interviewer effect is a type of bias that emerges when a characteristic of an interviewer (race, age, gender identity, etc.) influences the responses given by the interviewee.

There is a risk of an interviewer effect in all types of interviews , but it can be mitigated by writing really high-quality interview questions.

A semi-structured interview is a blend of structured and unstructured types of interviews. Semi-structured interviews are best used when:

  • You have prior interview experience. Spontaneous questions are deceptively challenging, and it’s easy to accidentally ask a leading question or make a participant uncomfortable.
  • Your research question is exploratory in nature. Participant answers can guide future research questions and help you develop a more robust knowledge base for future research.

An unstructured interview is the most flexible type of interview, but it is not always the best fit for your research topic.

Unstructured interviews are best used when:

  • You are an experienced interviewer and have a very strong background in your research topic, since it is challenging to ask spontaneous, colloquial questions.
  • Your research question is exploratory in nature. While you may have developed hypotheses, you are open to discovering new or shifting viewpoints through the interview process.
  • You are seeking descriptive data, and are ready to ask questions that will deepen and contextualize your initial thoughts and hypotheses.
  • Your research depends on forming connections with your participants and making them feel comfortable revealing deeper emotions, lived experiences, or thoughts.

The four most common types of interviews are:

  • Structured interviews : The questions are predetermined in both topic and order. 
  • Semi-structured interviews : A few questions are predetermined, but other questions aren’t planned.
  • Unstructured interviews : None of the questions are predetermined.
  • Focus group interviews : The questions are presented to a group instead of one individual.

Deductive reasoning is commonly used in scientific research, and it’s especially associated with quantitative research .

In research, you might have come across something called the hypothetico-deductive method . It’s the scientific method of testing hypotheses to check whether your predictions are substantiated by real-world data.

Deductive reasoning is a logical approach where you progress from general ideas to specific conclusions. It’s often contrasted with inductive reasoning , where you start with specific observations and form general conclusions.

Deductive reasoning is also called deductive logic.

There are many different types of inductive reasoning that people use formally or informally.

Here are a few common types:

  • Inductive generalization : You use observations about a sample to come to a conclusion about the population it came from.
  • Statistical generalization: You use specific numbers about samples to make statements about populations.
  • Causal reasoning: You make cause-and-effect links between different things.
  • Sign reasoning: You make a conclusion about a correlational relationship between different things.
  • Analogical reasoning: You make a conclusion about something based on its similarities to something else.

Inductive reasoning is a bottom-up approach, while deductive reasoning is top-down.

Inductive reasoning takes you from the specific to the general, while in deductive reasoning, you make inferences by going from general premises to specific conclusions.

In inductive research , you start by making observations or gathering data. Then, you take a broad scan of your data and search for patterns. Finally, you make general conclusions that you might incorporate into theories.

Inductive reasoning is a method of drawing conclusions by going from the specific to the general. It’s usually contrasted with deductive reasoning, where you proceed from general information to specific conclusions.

Inductive reasoning is also called inductive logic or bottom-up reasoning.

A hypothesis states your predictions about what your research will find. It is a tentative answer to your research question that has not yet been tested. For some research projects, you might have to write several hypotheses that address different aspects of your research question.

A hypothesis is not just a guess — it should be based on existing theories and knowledge. It also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

Triangulation can help:

  • Reduce research bias that comes from using a single method, theory, or investigator
  • Enhance validity by approaching the same topic with different tools
  • Establish credibility by giving you a complete picture of the research problem

But triangulation can also pose problems:

  • It’s time-consuming and labor-intensive, often involving an interdisciplinary team.
  • Your results may be inconsistent or even contradictory.

There are four main types of triangulation :

  • Data triangulation : Using data from different times, spaces, and people
  • Investigator triangulation : Involving multiple researchers in collecting or analyzing data
  • Theory triangulation : Using varying theoretical perspectives in your research
  • Methodological triangulation : Using different methodologies to approach the same topic

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to this stringent process they go through before publication.

In general, the peer review process follows the following steps: 

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Exploratory research is often used when the issue you’re studying is new or when the data collection process is challenging for some reason.

You can use exploratory research if you have a general idea or a specific question that you want to study but there is no preexisting knowledge or paradigm with which to study it.

Exploratory research is a methodology approach that explores research questions that have not previously been studied in depth. It is often used when the issue you’re studying is new, or the data collection process is challenging in some way.

Explanatory research is used to investigate how or why a phenomenon occurs. Therefore, this type of research is often one of the first stages in the research process , serving as a jumping-off point for future research.

Exploratory research aims to explore the main aspects of an under-researched problem, while explanatory research aims to explain the causes and consequences of a well-defined problem.

Explanatory research is a research method used to investigate how or why something occurs when only a small amount of information is available pertaining to that topic. It can help you increase your understanding of a given topic.

Clean data are valid, accurate, complete, consistent, unique, and uniform. Dirty data include inconsistencies and errors.

Dirty data can come from any part of the research process, including poor research design , inappropriate measurement materials, or flawed data entry.

Data cleaning takes place between data collection and data analyses. But you can use some methods even before collecting data.

For clean data, you should start by designing measures that collect valid data. Data validation at the time of data entry or collection helps you minimize the amount of data cleaning you’ll need to do.

After data collection, you can use data standardization and data transformation to clean your data. You’ll also deal with any missing values, outliers, and duplicate values.

Every dataset requires different techniques to clean dirty data , but you need to address these issues in a systematic way. You focus on finding and resolving data points that don’t agree or fit with the rest of your dataset.

These data might be missing values, outliers, duplicate values, incorrectly formatted, or irrelevant. You’ll start with screening and diagnosing your data. Then, you’ll often standardize and accept or remove data to make your dataset consistent and valid.

Data cleaning is necessary for valid and appropriate analyses. Dirty data contain inconsistencies or errors , but cleaning your data helps you minimize or resolve these.

Without data cleaning, you could end up with a Type I or II error in your conclusion. These types of erroneous conclusions can be practically significant with important consequences, because they lead to misplaced investments or missed opportunities.

Data cleaning involves spotting and resolving potential data inconsistencies or errors to improve your data quality. An error is any value (e.g., recorded weight) that doesn’t reflect the true value (e.g., actual weight) of something that’s being measured.

In this process, you review, analyze, detect, modify, or remove “dirty” data to make your dataset “clean.” Data cleaning is also called data cleansing or data scrubbing.
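As a small illustration, here is a minimal pandas sketch (column names and values invented) that screens a toy dataset for duplicates, inconsistent formatting, implausible outliers, and missing values:

```python
# A minimal data-cleaning sketch using pandas with an invented toy dataset.
import pandas as pd

df = pd.DataFrame({
    "respondent_id": [1, 2, 2, 3, 4],
    "age": [34, 29, 29, 430, None],          # 430 is an implausible outlier
    "favorite_flavor": [" Vanilla", "chocolate", "chocolate", "VANILLA ", None],
})

# Remove exact duplicate rows (e.g., a double form submission).
df = df.drop_duplicates()

# Standardize text formatting so " Vanilla" and "VANILLA " agree.
df["favorite_flavor"] = df["favorite_flavor"].str.strip().str.lower()

# Flag values outside a plausible range instead of silently keeping them.
df.loc[~df["age"].between(0, 120), "age"] = float("nan")

# Decide how to handle missing values; here we simply report them.
print(df)
print("Missing values per column:\n", df.isna().sum())
```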

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations .

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others .

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

In multistage sampling , you can use probability or non-probability sampling methods .

For a probability sample, you have to conduct probability sampling at every stage.

You can mix it up by using simple random sampling , systematic sampling , or stratified sampling to select units at different stages, depending on what is applicable and relevant to your study.

Multistage sampling can simplify data collection when you have large, geographically spread samples, and you can obtain a probability sample without a complete sampling frame.

But multistage sampling may not lead to a representative sample, and larger samples are needed for multistage samples to achieve the statistical properties of simple random samples .

These are four of the most common mixed methods designs :

  • Convergent parallel: Quantitative and qualitative data are collected at the same time and analyzed separately. After both analyses are complete, compare your results to draw overall conclusions. 
  • Embedded: Quantitative and qualitative data are collected at the same time, but within a larger quantitative or qualitative design. One type of data is secondary to the other.
  • Explanatory sequential: Quantitative data is collected and analyzed first, followed by qualitative data. You can use this design if you think your qualitative data will explain and contextualize your quantitative findings.
  • Exploratory sequential: Qualitative data is collected and analyzed first, followed by quantitative data. You can use this design if you think the quantitative data will confirm or validate your qualitative findings.

Triangulation in research means using multiple datasets, methods, theories and/or investigators to address a research question. It’s a research strategy that can help you enhance the validity and credibility of your findings.

Triangulation is mainly used in qualitative research , but it’s also commonly applied in quantitative research . Mixed methods research always uses triangulation.

In multistage sampling , or multistage cluster sampling, you draw a sample from a population using smaller and smaller groups at each stage.

This method is often used to collect data from a large, geographically spread group of people in national surveys, for example. You take advantage of hierarchical groupings (e.g., from state to city to neighborhood) to create a sample that’s less expensive and time-consuming to collect data from.

No, the steepness or slope of the line isn’t related to the correlation coefficient value. The correlation coefficient only tells you how closely your data fit on a line, so two datasets with the same correlation coefficient can have very different slopes.

To find the slope of the line, you’ll need to perform a regression analysis .
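A quick numerical illustration (with made-up values) shows two datasets that are both perfectly correlated with x (r = 1.0) yet have very different regression slopes:

```python
# Two datasets with the same correlation coefficient but different slopes.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y_steep = 10.0 * x        # slope 10
y_shallow = 0.5 * x       # slope 0.5

for name, y in [("steep", y_steep), ("shallow", y_shallow)]:
    r = np.corrcoef(x, y)[0, 1]              # correlation coefficient
    slope, intercept = np.polyfit(x, y, 1)   # simple linear regression
    print(f"{name}: r = {r:.2f}, slope = {slope:.2f}")

# Both lines print r = 1.00, but the slopes differ (10.00 vs 0.50), showing
# that correlation strength and slope are separate quantities.
```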

Correlation coefficients always range between -1 and 1.

The sign of the coefficient tells you the direction of the relationship: a positive value means the variables change together in the same direction, while a negative value means they change together in opposite directions.

The absolute value of a number is equal to the number without its sign. The absolute value of a correlation coefficient tells you the magnitude of the correlation: the greater the absolute value, the stronger the correlation.

These are the assumptions your data must meet if you want to use Pearson’s r :

  • Both variables are on an interval or ratio level of measurement
  • Data from both variables follow normal distributions
  • Your data have no outliers
  • Your data is from a random or representative sample
  • You expect a linear relationship between the two variables

Quantitative research designs can be divided into two main categories:

  • Correlational and descriptive designs are used to investigate characteristics, averages, trends, and associations between variables.
  • Experimental and quasi-experimental designs are used to test causal relationships .

Qualitative research designs tend to be more flexible. Common types of qualitative design include case study , ethnography , and grounded theory designs.

A well-planned research design helps ensure that your methods match your research aims, that you collect high-quality data, and that you use the right kind of analysis to answer your questions, utilizing credible sources . This allows you to draw valid , trustworthy conclusions.

The priorities of a research design can vary depending on the field, but you usually have to specify:

  • Your research questions and/or hypotheses
  • Your overall approach (e.g., qualitative or quantitative )
  • The type of design you’re using (e.g., a survey , experiment , or case study )
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods (e.g., questionnaires , observations)
  • Your data collection procedures (e.g., operationalization , timing and data management)
  • Your data analysis methods (e.g., statistical tests  or thematic analysis )

A research design is a strategy for answering your   research question . It defines your overall approach and determines how you will collect and analyze data.

Questionnaires can be self-administered or researcher-administered.

Self-administered questionnaires can be delivered online or in paper-and-pen formats, in person or through mail. All questions are standardized so that all respondents receive the same questions with identical wording.

Researcher-administered questionnaires are interviews that take place by phone, in-person, or online between researchers and respondents. You can gain deeper insights by clarifying questions for respondents or asking follow-up questions.

You can organize the questions logically, with a clear progression from simple to complex, or randomize the order between respondents. A logical flow helps respondents process the questionnaire more easily and quickly, but it may lead to bias. Randomization can minimize bias from order effects.

A questionnaire is a data collection tool or instrument, while a survey is an overarching research method that involves collecting and analyzing data from people using questionnaires.

The third variable and directionality problems are two main reasons why correlation isn’t causation .

The third variable problem means that a confounding variable affects both variables to make them seem causally related when they are not.

The directionality problem is when two variables correlate and might actually have a causal relationship, but it’s impossible to conclude which variable causes changes in the other.

Correlation describes an association between variables : when one variable changes, so does the other. A correlation is a statistical indicator of the relationship between variables.

Causation means that changes in one variable brings about changes in the other (i.e., there is a cause-and-effect relationship between variables). The two variables are correlated with each other, and there’s also a causal link between them.

While causation and correlation can exist simultaneously, correlation does not imply causation. In other words, correlation is simply a relationship where A relates to B—but A doesn’t necessarily cause B to happen (or vice versa). Mistaking correlation for causation is a common error and can lead to false cause fallacy .

Controlled experiments establish causality, whereas correlational studies only show associations between variables.

  • In an experimental design , you manipulate an independent variable and measure its effect on a dependent variable. Other variables are controlled so they can’t impact the results.
  • In a correlational design , you measure variables without manipulating any of them. You can test whether your variables change together, but you can’t be sure that one variable caused a change in another.

In general, correlational research is high in external validity while experimental research is high in internal validity .

A correlation is usually tested for two variables at a time, but you can test correlations between three or more variables.

A correlation coefficient is a single number that describes the strength and direction of the relationship between your variables.

Different types of correlation coefficients might be appropriate for your data based on their levels of measurement and distributions . The Pearson product-moment correlation coefficient (Pearson’s r ) is commonly used to assess a linear relationship between two quantitative variables.

A correlational research design investigates relationships between two variables (or more) without the researcher controlling or manipulating any of them. It’s a non-experimental type of quantitative research .

A correlation reflects the strength and/or direction of the association between two or more variables.

  • A positive correlation means that both variables change in the same direction.
  • A negative correlation means that the variables change in opposite directions.
  • A zero correlation means there’s no relationship between the variables.

Random error  is almost always present in scientific studies, even in highly controlled settings. While you can’t eradicate it completely, you can reduce random error by taking repeated measurements, using a large sample, and controlling extraneous variables .

You can avoid systematic error through careful design of your sampling , data collection , and analysis procedures. For example, use triangulation to measure your variables using multiple methods; regularly calibrate instruments or procedures; use random sampling and random assignment ; and apply masking (blinding) where possible.

Systematic error is generally a bigger problem in research.

With random error, multiple measurements will tend to cluster around the true value. When you’re collecting data from a large sample , the errors in different directions will cancel each other out.

Systematic errors are much more problematic because they can skew your data away from the true value. This can lead you to false conclusions ( Type I and II errors ) about the relationship between the variables you’re studying.

Random and systematic error are two types of measurement error.

Random error is a chance difference between the observed and true values of something (e.g., a researcher misreading a weighing scale records an incorrect measurement).

Systematic error is a consistent or proportional difference between the observed and true values of something (e.g., a miscalibrated scale consistently records weights as higher than they actually are).
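A short simulation (with invented numbers) makes the contrast concrete: random noise averages out across many measurements, while a constant calibration bias does not:

```python
# Simulated measurements of a "true" 70 kg weight: random error only vs.
# random error plus a constant +2 kg miscalibration. Numbers are invented.
import random

random.seed(42)
TRUE_WEIGHT = 70.0
N = 10_000

random_only = [TRUE_WEIGHT + random.gauss(0, 0.5) for _ in range(N)]
with_bias   = [TRUE_WEIGHT + 2.0 + random.gauss(0, 0.5) for _ in range(N)]

print("Mean with random error only:    ", round(sum(random_only) / N, 2))  # close to 70.0
print("Mean with systematic error too: ", round(sum(with_bias) / N, 2))    # close to 72.0
```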

On graphs, the explanatory variable is conventionally placed on the x-axis, while the response variable is placed on the y-axis.

  • If you have quantitative variables , use a scatterplot or a line graph.
  • If your response variable is categorical, use a scatterplot or a line graph.
  • If your explanatory variable is categorical, use a bar graph.
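As a minimal plotting sketch (invented data) following this convention, with a quantitative explanatory variable on the x-axis and a quantitative response variable on the y-axis:

```python
# Explanatory variable on the x-axis, response variable on the y-axis.
import matplotlib.pyplot as plt

hours_studied = [1, 2, 3, 4, 5, 6]          # explanatory (quantitative)
exam_score = [55, 61, 64, 70, 74, 80]       # response (quantitative)

plt.scatter(hours_studied, exam_score)      # scatterplot for two quantitative variables
plt.xlabel("Hours studied (explanatory variable)")
plt.ylabel("Exam score (response variable)")
plt.title("Explanatory variable on x-axis, response variable on y-axis")
plt.show()
```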

The term “ explanatory variable ” is sometimes preferred over “ independent variable ” because, in real world contexts, independent variables are often influenced by other variables. This means they aren’t totally independent.

Multiple independent variables may also be correlated with each other, so “explanatory variables” is a more appropriate term.

The difference between explanatory and response variables is simple:

  • An explanatory variable is the expected cause, and it explains the results.
  • A response variable is the expected effect, and it responds to other variables.

In a controlled experiment , all extraneous variables are held constant so that they can’t influence the results. Controlled experiments require:

  • A control group that receives a standard treatment, a fake treatment, or no treatment.
  • Random assignment of participants to ensure the groups are equivalent.

Depending on your study topic, there are various other methods of controlling variables .

There are 4 main types of extraneous variables :

  • Demand characteristics : environmental cues that encourage participants to conform to researchers’ expectations.
  • Experimenter effects : unintentional actions by researchers that influence study outcomes.
  • Situational variables : environmental variables that alter participants’ behaviors.
  • Participant variables : any characteristic or aspect of a participant’s background that could affect study results.

An extraneous variable is any variable that you’re not investigating that can potentially affect the dependent variable of your research study.

A confounding variable is a type of extraneous variable that not only affects the dependent variable, but is also related to the independent variable.

In a factorial design, multiple independent variables are tested.

If you test two variables, each level of one independent variable is combined with each level of the other independent variable to create different conditions.

Within-subjects designs have many potential threats to internal validity , but they are also very statistically powerful .

Advantages:

  • Only requires small samples
  • Statistically powerful
  • Removes the effects of individual differences on the outcomes

Disadvantages:

  • Internal validity threats reduce the likelihood of establishing a direct relationship between variables
  • Time-related effects, such as growth, can influence the outcomes
  • Carryover effects mean that the specific order of different treatments affect the outcomes

While a between-subjects design has fewer threats to internal validity , it also requires more participants for high statistical power than a within-subjects design .

Advantages:

  • Prevents carryover effects of learning and fatigue.
  • Shorter study duration.

Disadvantages:

  • Needs larger samples for high power.
  • Uses more resources to recruit participants, administer sessions, cover costs, etc.
  • Individual differences may be an alternative explanation for results.

Yes. Between-subjects and within-subjects designs can be combined in a single study when you have two or more independent variables (a factorial design). In a mixed factorial design, one variable is altered between subjects and another is altered within subjects.

In a between-subjects design , every participant experiences only one condition, and researchers assess group differences between participants in various conditions.

In a within-subjects design , each participant experiences all conditions, and researchers test the same participants repeatedly for differences between conditions.

The word “between” means that you’re comparing different conditions between groups, while the word “within” means you’re comparing different conditions within the same group.

Random assignment is used in experiments with a between-groups or independent measures design. In this research design, there’s usually a control group and one or more experimental groups. Random assignment helps ensure that the groups are comparable.

In general, you should always use random assignment in this type of experimental design when it is ethically possible and makes sense for your study topic.

To implement random assignment , assign a unique number to every member of your study’s sample .

Then, you can use a random number generator or a lottery method to randomly assign each number to a control or experimental group. You can also do so manually, by flipping a coin or rolling a die to randomly assign participants to groups.
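
One way to implement this procedure is sketched below in Python; the number of participants and the 50/50 split are arbitrary choices for the example. The numbered participants are shuffled and then split into two groups, so each person has an equal chance of landing in either group.

```python
# Minimal sketch of random assignment: each numbered participant gets an
# equal chance of being placed in the control or experimental group.
import random

random.seed(123)  # seeded only so the illustration is reproducible
participant_ids = list(range(1, 21))  # 20 participants, numbered 1-20

random.shuffle(participant_ids)
half = len(participant_ids) // 2
control_group = sorted(participant_ids[:half])
experimental_group = sorted(participant_ids[half:])

print("Control:     ", control_group)
print("Experimental:", experimental_group)
```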

Random selection, or random sampling , is a way of selecting members of a population for your study’s sample.

In contrast, random assignment is a way of sorting the sample into control and experimental groups.

Random sampling enhances the external validity or generalizability of your results, while random assignment improves the internal validity of your study.

In experimental research, random assignment is a way of placing participants from your sample into different groups using randomization. With this method, every member of the sample has a known or equal chance of being placed in a control group or an experimental group.

“Controlling for a variable” means measuring extraneous variables and accounting for them statistically to remove their effects on other variables.

Researchers often model control variable data along with independent and dependent variable data in regression analyses and ANCOVAs . That way, you can isolate the control variable’s effects from the relationship between the variables of interest.
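
One common way to do this, sketched here with invented data and the statsmodels library (an assumption on my part; the source names no particular software), is to add the control variable as an extra predictor in a regression so its effect is separated from the relationship of interest.

```python
# Sketch of statistically controlling for a variable: "age" is included
# alongside the independent variable so its effect is accounted for.
# All values are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "exercise_hours": [1, 2, 3, 4, 5, 6, 7, 8],              # independent variable
    "age":            [25, 40, 31, 52, 28, 45, 33, 60],      # control variable
    "blood_pressure": [130, 135, 126, 140, 122, 133, 124, 138],  # dependent variable
})

model = smf.ols("blood_pressure ~ exercise_hours + age", data=df).fit()
print(model.params)  # coefficient for exercise_hours, adjusted for age
```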

Control variables help you establish a correlational or causal relationship between variables by enhancing internal validity .

If you don’t control relevant extraneous variables , they may influence the outcomes of your study, and you may not be able to demonstrate that your results are really an effect of your independent variable .

A control variable is any variable that’s held constant in a research study. It’s not a variable of interest in the study, but it’s controlled because it could influence the outcomes.

Including mediators and moderators in your research helps you go beyond studying a simple relationship between two variables for a fuller picture of the real world. They are important to consider when studying complex correlational or causal relationships.

Mediators are part of the causal pathway of an effect, and they tell you how or why an effect takes place. Moderators usually help you judge the external validity of your study by identifying the limitations of when the relationship between variables holds.

If something is a mediating variable :

  • It’s caused by the independent variable .
  • It influences the dependent variable
  • When it’s taken into account, the statistical correlation between the independent and dependent variables becomes weaker than when it isn’t considered.

A confounder is a third variable that affects variables of interest and makes them seem related when they are not. In contrast, a mediator is the mechanism of a relationship between two variables: it explains the process by which they are related.

A mediator variable explains the process through which two variables are related, while a moderator variable affects the strength and direction of that relationship.

There are three key steps in systematic sampling :

  • Define and list your population , ensuring that it is not ordered in a cyclical or periodic order.
  • Decide on your sample size and calculate your interval, k , by dividing the population size by your target sample size.
  • Choose every k th member of the population as your sample.

Systematic sampling is a probability sampling method where researchers select members of the population at a regular interval – for example, by selecting every 15th person on a list of the population. If the population is in a random order, this can imitate the benefits of simple random sampling .
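
Sketched in Python with a hypothetical population list (the names, population size, and sample size are placeholders), the three steps might look like this:

```python
# Sketch of systematic sampling: pick every k-th member after a random start,
# where k = population size / target sample size.
import random

population = [f"person_{i}" for i in range(1, 101)]  # listed population, N = 100
sample_size = 20
k = len(population) // sample_size                   # interval k = 5

random.seed(7)
start = random.randrange(k)                          # random starting point in the first interval
sample = population[start::k]                        # every k-th member from the start
print(len(sample), sample[:5])
```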

Yes, you can create a stratified sample using multiple characteristics, but you must ensure that every participant in your study belongs to one and only one subgroup. In this case, you multiply the numbers of subgroups for each characteristic to get the total number of groups.

For example, if you were stratifying by location with three subgroups (urban, rural, or suburban) and marital status with five subgroups (single, divorced, widowed, married, or partnered), you would have 3 x 5 = 15 subgroups.

You should use stratified sampling when your sample can be divided into mutually exclusive and exhaustive subgroups that you believe will take on different mean values for the variable that you’re studying.

Using stratified sampling will allow you to obtain more precise (with lower variance ) statistical estimates of whatever you are trying to measure.

For example, say you want to investigate how income differs based on educational attainment, but you know that this relationship can vary based on race. Using stratified sampling, you can ensure you obtain a large enough sample from each racial group, allowing you to draw more precise conclusions.

In stratified sampling , researchers divide subjects into subgroups called strata based on characteristics that they share (e.g., race, gender, educational attainment).

Once divided, each subgroup is randomly sampled using another probability sampling method.
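
A small pandas sketch of this idea, using an invented population stratified by location (the column names, proportions, and 10% sampling fraction are illustrative assumptions only): the sample is drawn separately within each stratum, so every subgroup is represented.

```python
# Sketch of proportionate stratified sampling: sample within each stratum.
import pandas as pd

df = pd.DataFrame({
    "id": range(1, 1001),
    "location": (["urban"] * 500) + (["suburban"] * 300) + (["rural"] * 200),
})

# Draw a 10% random sample within each stratum
stratified_sample = (
    df.groupby("location", group_keys=False)
      .apply(lambda stratum: stratum.sample(frac=0.10, random_state=42))
)
print(stratified_sample["location"].value_counts())
```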

Cluster sampling is more time- and cost-efficient than other probability sampling methods , particularly when it comes to large samples spread across a wide geographical area.

However, it provides less statistical certainty than other methods, such as simple random sampling , because it is difficult to ensure that your clusters properly represent the population as a whole.

There are three types of cluster sampling : single-stage, double-stage and multi-stage clustering. In all three types, you first divide the population into clusters, then randomly select clusters for use in your sample.

  • In single-stage sampling , you collect data from every unit within the selected clusters.
  • In double-stage sampling , you select a random sample of units from within the clusters.
  • In multi-stage sampling , you repeat the procedure of randomly sampling elements from within the clusters until you have reached a manageable sample.

Cluster sampling is a probability sampling method in which you divide a population into clusters, such as districts or schools, and then randomly select some of these clusters as your sample.

The clusters should ideally each be mini-representations of the population as a whole.
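
For instance, single-stage cluster sampling could be sketched in Python as below; the schools and units are invented placeholders, and the choice to select two clusters is arbitrary.

```python
# Sketch of single-stage cluster sampling: randomly pick whole clusters
# (e.g., schools), then include every unit in the chosen clusters.
import random

clusters = {
    "school_A": ["a1", "a2", "a3"],
    "school_B": ["b1", "b2"],
    "school_C": ["c1", "c2", "c3", "c4"],
    "school_D": ["d1", "d2", "d3"],
}

random.seed(1)
chosen = random.sample(list(clusters), k=2)            # randomly select 2 clusters
sample = [unit for name in chosen for unit in clusters[name]]  # take every unit in them
print(chosen, sample)
```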

If properly implemented, simple random sampling is usually the best sampling method for ensuring both internal and external validity. However, it can sometimes be impractical and expensive to implement, depending on the size of the population to be studied.

If you have a list of every member of the population and the ability to reach whichever members are selected, you can use simple random sampling.

The American Community Survey is an example of simple random sampling. In order to collect detailed data on the population of the US, Census Bureau officials randomly select 3.5 million households per year and use a variety of methods to convince them to fill out the survey.

Simple random sampling is a type of probability sampling in which the researcher randomly selects a subset of participants from a population . Each member of the population has an equal chance of being selected. Data is then collected from as large a percentage as possible of this random subset.
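
A minimal Python sketch of this, assuming you already have a complete list of the population (the names and sizes are placeholders): every member has an equal chance of being selected.

```python
# Sketch of simple random sampling from a complete population list.
import random

population = [f"student_{i}" for i in range(1, 5001)]  # hypothetical sampling frame

random.seed(99)
sample = random.sample(population, k=100)  # select 100 members without replacement
print(sample[:5])
```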

Quasi-experimental design is most useful in situations where it would be unethical or impractical to run a true experiment .

Quasi-experiments have lower internal validity than true experiments, but they often have higher external validity  as they can use real-world interventions instead of artificial laboratory settings.

A quasi-experiment is a type of research design that attempts to establish a cause-and-effect relationship. The main difference with a true experiment is that the groups are not randomly assigned.

Blinding is important to reduce research bias (e.g., observer bias , demand characteristics ) and ensure a study’s internal validity .

If participants know whether they are in a control or treatment group , they may adjust their behavior in ways that affect the outcome that researchers are trying to measure. If the people administering the treatment are aware of group assignment, they may treat participants differently and thus directly or indirectly influence the final results.

  • In a single-blind study , only the participants are blinded.
  • In a double-blind study , both participants and experimenters are blinded.
  • In a triple-blind study , the assignment is hidden not only from participants and experimenters, but also from the researchers analyzing the data.

Blinding means hiding who is assigned to the treatment group and who is assigned to the control group in an experiment .

A true experiment (a.k.a. a controlled experiment) always includes at least one control group that doesn’t receive the experimental treatment.

However, some experiments use a within-subjects design to test treatments without a control group. In these designs, you usually compare one group’s outcomes before and after a treatment (instead of comparing outcomes between different groups).

For strong internal validity , it’s usually best to include a control group if possible. Without a control group, it’s harder to be certain that the outcome was caused by the experimental treatment and not by other variables.

An experimental group, also known as a treatment group, receives the treatment whose effect researchers wish to study, whereas a control group does not. They should be identical in all other ways.

Individual Likert-type questions are generally considered ordinal data , because the items have clear rank order, but don’t have an even distribution.

Overall Likert scale scores are sometimes treated as interval data. These scores are considered to have directionality and even spacing between them.

The type of data determines what statistical tests you should use to analyze your data.

A Likert scale is a rating scale that quantitatively assesses opinions, attitudes, or behaviors. It is made up of 4 or more questions that measure a single attitude or trait when response scores are combined.

To use a Likert scale in a survey , you present participants with Likert-type questions or statements and a continuum of response options, usually 5 or 7, to capture their degree of agreement.

In scientific research, concepts are the abstract ideas or phenomena that are being studied (e.g., educational achievement). Variables are properties or characteristics of the concept (e.g., performance at school), while indicators are ways of measuring or quantifying variables (e.g., yearly grade reports).

The process of turning abstract concepts into measurable variables and indicators is called operationalization .

There are various approaches to qualitative data analysis , but they all share five steps in common:

  • Prepare and organize your data.
  • Review and explore your data.
  • Develop a data coding system.
  • Assign codes to the data.
  • Identify recurring themes.

The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .

There are five common approaches to qualitative research :

  • Grounded theory involves collecting data in order to develop new theories.
  • Ethnography involves immersing yourself in a group or organization to understand its culture.
  • Narrative research involves interpreting stories to understand how people make sense of their experiences and perceptions.
  • Phenomenological research involves investigating phenomena through people’s lived experiences.
  • Action research links theory and practice in several cycles to drive innovative changes.

Hypothesis testing is a formal procedure for investigating our ideas about the world using statistics. It is used by scientists to test specific predictions, called hypotheses , by calculating how likely it is that a pattern or relationship between variables could have arisen by chance.
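
As a concrete illustration (not from the source), the sketch below runs an independent-samples t-test in Python on two invented groups of scores; the p-value estimates how likely a difference at least this large would be if chance alone were at work.

```python
# Illustrative hypothesis test: independent-samples t-test on invented data.
from scipy import stats

control = [72, 75, 68, 71, 74, 69, 73, 70]
treatment = [78, 81, 76, 79, 83, 77, 80, 75]

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # a small p-value suggests the difference is unlikely under chance
```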

Operationalization means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioral avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalize the variables that you want to measure.

When conducting research, collecting original data has significant advantages:

  • You can tailor data collection to your specific research aims (e.g. understanding the needs of your consumers or user testing your website)
  • You can control and standardize the process for high reliability and validity (e.g. choosing appropriate measurements and sampling methods )

However, there are also some drawbacks: data collection can be time-consuming, labor-intensive and expensive. In some cases, it’s more efficient to use secondary data that has already been collected by someone else, but the data might be less reliable.

Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.

There are several methods you can use to decrease the impact of confounding variables on your research: restriction, matching, statistical control and randomization.

In restriction , you restrict your sample by only including certain subjects that have the same values of potential confounding variables.

In matching , you match each of the subjects in your treatment group with a counterpart in the comparison group. The matched subjects have the same values on any potential confounding variables, and only differ in the independent variable .

In statistical control , you include potential confounders as variables in your regression .

In randomization , you randomly assign the treatment (or independent variable) in your study to a sufficiently large number of subjects, which allows you to control for all potential confounding variables.

A confounding variable is closely related to both the independent and dependent variables in a study. An independent variable represents the supposed cause , while the dependent variable is the supposed effect . A confounding variable is a third variable that influences both the independent and dependent variables.

Failing to account for confounding variables can cause you to wrongly estimate the relationship between your independent and dependent variables.

To ensure the internal validity of your research, you must consider the impact of confounding variables. If you fail to account for them, you might over- or underestimate the causal relationship between your independent and dependent variables , or even find a causal relationship where none exists.

Yes, but including more than one of either type requires multiple research questions .

For example, if you are interested in the effect of a diet on health, you can use multiple measures of health: blood sugar, blood pressure, weight, pulse, and many more. Each of these is its own dependent variable with its own research question.

You could also choose to look at the effect of exercise levels as well as diet, or even the additional effect of the two combined. Each of these is a separate independent variable .

To ensure the internal validity of an experiment , you should only change one independent variable at a time.

No. The value of a dependent variable depends on an independent variable, so a variable cannot be both independent and dependent at the same time. It must be either the cause or the effect, not both!

You want to find out how blood sugar levels are affected by drinking diet soda and regular soda, so you conduct an experiment .

  • The type of soda – diet or regular – is the independent variable .
  • The level of blood sugar that you measure is the dependent variable – it changes depending on the type of soda.

Determining cause and effect is one of the most important parts of scientific research. It’s essential to know which is the cause – the independent variable – and which is the effect – the dependent variable.

In non-probability sampling , the sample is selected based on non-random criteria, and not every member of the population has a chance of being included.

Common non-probability sampling methods include convenience sampling , voluntary response sampling, purposive sampling , snowball sampling, and quota sampling .

Probability sampling means that every member of the target population has a known chance of being included in the sample.

Probability sampling methods include simple random sampling , systematic sampling , stratified sampling , and cluster sampling .

Using careful research design and sampling procedures can help you avoid sampling bias . Oversampling can be used to correct undercoverage bias .

Some common types of sampling bias include self-selection bias , nonresponse bias , undercoverage bias , survivorship bias , pre-screening or advertising bias, and healthy user bias.

Sampling bias is a threat to external validity – it limits the generalizability of your findings to a broader group of people.

A sampling error is the difference between a population parameter and a sample statistic .

A statistic refers to measures about the sample , while a parameter refers to measures about the population .

Populations are used when a research question requires data from every member of the population. This is usually only feasible when the population is small and easily accessible.

Samples are used to make inferences about populations . Samples are easier to collect data from because they are practical, cost-effective, convenient, and manageable.

There are seven threats to external validity : selection bias , history, experimenter effect, Hawthorne effect , testing effect, aptitude-treatment interaction, and situation effect.

The two types of external validity are population validity (whether you can generalize to other groups of people) and ecological validity (whether you can generalize to other situations and settings).

The external validity of a study is the extent to which you can generalize your findings to different groups of people, situations, and measures.

Cross-sectional studies cannot establish a cause-and-effect relationship or analyze behavior over a period of time. To investigate cause and effect, you need to do a longitudinal study or an experimental study .

Cross-sectional studies are less expensive and time-consuming than many other types of study. They can provide useful insights into a population’s characteristics and identify correlations for further research.

Sometimes only cross-sectional data is available for analysis; other times your research question may only require a cross-sectional study to answer it.

Longitudinal studies can last anywhere from weeks to decades, although they tend to be at least a year long.

The 1970 British Cohort Study , which has collected data on the lives of 17,000 Brits since their births in 1970, is one well-known example of a longitudinal study .

Longitudinal studies are better to establish the correct sequence of events, identify changes over time, and provide insight into cause-and-effect relationships, but they also tend to be more expensive and time-consuming than other types of studies.

Longitudinal studies and cross-sectional studies are two different types of research design . In a cross-sectional study you collect data from a population at a specific point in time; in a longitudinal study you repeatedly collect data from the same sample over an extended period of time.

Longitudinal study:

  • Repeated observations
  • Observes the same group multiple times
  • Follows changes in participants over time

Cross-sectional study:

  • Observations at a single point in time
  • Observes different groups (a “cross-section”) in the population
  • Provides a snapshot of society at a given point in time

There are eight threats to internal validity : history, maturation, instrumentation, testing, selection bias , regression to the mean, social interaction and attrition .

Internal validity is the extent to which you can be confident that a cause-and-effect relationship established in a study cannot be explained by other factors.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

A confounding variable , also called a confounder or confounding factor, is a third variable in a study examining a potential cause-and-effect relationship.

A confounding variable is related to both the supposed cause and the supposed effect of the study. It can be difficult to separate the true effect of the independent variable from the effect of the confounding variable.

In your research design , it’s important to identify potential confounding variables and plan how you will reduce their impact.

Discrete and continuous variables are two types of quantitative variables :

  • Discrete variables represent counts (e.g. the number of objects in a collection).
  • Continuous variables represent measurable amounts (e.g. water volume or weight).

Quantitative variables are any variables where the data represent amounts (e.g. height, weight, or age).

Categorical variables are any variables where the data represent groups. This includes rankings (e.g. finishing places in a race), classifications (e.g. brands of cereal), and binary outcomes (e.g. coin flips).

You need to know what type of variables you are working with to choose the right statistical test for your data and interpret your results .

You can think of independent and dependent variables in terms of cause and effect: an independent variable is the variable you think is the cause , while a dependent variable is the effect .

In an experiment, you manipulate the independent variable and measure the outcome in the dependent variable. For example, in an experiment about the effect of nutrients on crop growth:

  • The  independent variable  is the amount of nutrients added to the crop field.
  • The  dependent variable is the biomass of the crops at harvest time.

Defining your variables, and deciding how you will manipulate and measure them, is an important part of experimental design .

Experimental design means planning a set of procedures to investigate a relationship between variables . To design a controlled experiment, you need:

  • A testable hypothesis
  • At least one independent variable that can be precisely manipulated
  • At least one dependent variable that can be precisely measured

When designing the experiment, you decide:

  • How you will manipulate the variable(s)
  • How you will control for any potential confounding variables
  • How many subjects or samples will be included in the study
  • How subjects will be assigned to treatment levels

Experimental design is essential to the internal and external validity of your experiment.

Internal validity is the degree of confidence that the causal relationship you are testing is not influenced by other factors or variables .

External validity is the extent to which your results can be generalized to other contexts.

The validity of your experiment depends on your experimental design .

Reliability and validity are both about how well a method measures something:

  • Reliability refers to the  consistency of a measure (whether the results can be reproduced under the same conditions).
  • Validity   refers to the  accuracy of a measure (whether the results really do represent what they are supposed to measure).

If you are doing experimental research, you also have to consider the internal and external validity of your experiment.

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.


Optional and Open-Ended Essay Questions: What’s the Best Strategy?

accepted.com

Many business schools ask open-ended questions, many of which are variations of, “I wish the admissions committee had asked me…” or, “What is a question that you wish we had asked?” The most common among these open-ended questions is the optional essay, where you really have free rein to discuss anything you feel is important and that you have not had an opportunity to address anywhere else in your application. 

Clients are often uncertain about how best to use these spaces, which are excellent opportunities to round out your profile. In this post we will offer advice on dealing with different types of open-ended questions.

How many open-ended essay questions do you have? 

To figure out how to maximize these spaces, see whether the application offers both an optional essay space and another open-ended question, or only one.

Let’s say you only have the optional essay to write about a topic of your choice, and that the school has not asked you to explain any weakness or inconsistency in any other area of the application. If you do have a noticeable weakness, such as grades that plummeted during your sophomore year in college, a GMAT score below the school’s average for accepted applicants, or an employment gap of six months or longer, there are ways to deal with it effectively using this space. Explain the circumstances simply and directly: perhaps you underwent surgery, had a death in the family, or had taken on too many extracurricular activities or work responsibilities.

You don’t want to leave the admissions committee guessing or assuming the worst, but you also need to keep perspective. There is no need to explain away a single C+ grade from your freshman year. You don’t have to justify the fact that you hadn’t started your own nonprofit organization by the age of 19. You don’t have to apologize for the fact that you didn’t spend your undergrad years at an Ivy League school.

When using this essay to  address a weakness , keep it short. Present the relevant background surrounding the facts, and what you learned or did subsequently to improve or change the outcome to the extent possible. Whether there were circumstances beyond your control, or you hadn’t used the best judgment, your goal is to provide context for events that may not reflect well on you. Do not make excuses. Show that the circumstances that had impeded your performance no longer exist, or that you have learned how to handle those circumstances should a similar situation arise in the future. Remember, this question almost always comes right at the end of the essay portion of the application — it will probably be the last thing the adcom reads, so if at all possible, use it to give them something positive to remember you by.

How the optional essay helps you highlight your well-rounded personality

Since MBA application essays often focus exclusively on work examples, your career goals, and/or why you are interested in attending a particular school, the optional is a great place to add dimensionality and present yourself as a whole, well-rounded individual. You may drift out of the adcom’s minds pretty quickly if you simply come across as “the project manager with the 740 GMAT,” and since the adcom has already gained a sense of your leadership chops, don’t use the optional to write about a secondary leadership role–even if you’re very proud of it. Instead, use the optional essay to  really stand out  as “the project manager who used skydiving as a team-building exercise,” “the investment banker who teaches salsa dancing to senior citizens,” or “the marketing manager who taught herself five languages in her spare time.” Now that is an application to remember! 

What about the question, “I wish the admissions committee had asked me…”

Let’s start with how  not  to answer this question. Do not use it to discuss bland “catch-all” topics such as, “I wish the admissions committee had asked me how I achieve excellence in everything I do.” Believe it or not, some applicants will want to use this approach, but it’s a big mistake. Not only will those types of answers usually end up being way too generic, but  they will make you sound self-absorbed  and even arrogant. 

Assuming that you’ve presented your professional/leadership experiences compellingly in your other essays, you can definitely take a more lighthearted (though not frivolous) approach here. Choose to write about an aspect of yourself that is uniquely, distinctively, memorably, YOU. This picture will round out the adcom’s perception of you. It will promote your “human interest” factor, your potential to contribute something to the incoming class and the teams you will work with, beyond your work experience and academic abilities.

Harvard Business School has a single, entirely open-ended question. They ask, “As we review your application, what more would you like us to know as we consider your candidacy for the Harvard Business School MBA program?” We work with clients all the time who are very intimidated by this question, but by the end of the process we have convinced them that this question is really a gift, though it is one you must use wisely. By providing 900 words (as of the 2022-23 application cycle) to answer this question, Harvard is hoping to see a few things from its successful admits:

  • A complete picture of the applicant through a carefully planned use of space.
  • An essay that does NOT present what the candidate  thinks  the committee wants to hear (whatever you think it is, it’s not).
  • An applicant’s ability to deal with ambiguity and show clarity of thought and succinctness.

Some other schools may also word this question in an open-ended way as Harvard does, but if the question pointedly asks you what you wish they had asked, you need to answer honestly: What  do  you wish they had asked you?

Keeping in mind your goal to add a fuller, more personal dimension to you as an applicant, it could be any of the following: “What do I do for fun?” “How did I come to develop the goals that I have?” “What have I learned about life from playing the flute?” “Why was Cheryl Strayed’s memoir, Wild, life-changing for me?” The open-ended questions in your MBA application are wonderfully valuable opportunities for you to round out your application. You can directly explain any deficiencies in your candidacy while showing how you have addressed them, as well as introduce the adcom to a more personally interesting aspect of yourself that will make you stand out for all the right reasons!

Do you need help answering these questions or any other MBA application questions? Check out Accepted’s  MBA Admissions Consulting & Editing Services  and work one-on-one with an admissions pro who will answer your questions and help you get ACCEPTED.

By Judy Gruen, former Accepted admissions consultant. Judy holds a Master’s in Journalism from Northwestern University. She is the co-author of Accepted’s first full-length book, MBA Admission for Smarties: The No-Nonsense Guide to Acceptance at Top Business Schools. Want an admissions expert to help you get accepted? Click here to get in touch!

Top MBA Essay Questions: How to Answer them Right!

Related Resources:

  • Sample Essays from Admitted HBS Students
  • How Should I Choose Which Essay Questions to Answer When I Have Choices?
  • How Much Overlap Can There Be Between My Resume and My Essays?

This article originally appeared on blog.accepted.com


Open-Ended vs. Closed Questions in User Research


January 26, 2024


When conducting user research, asking questions helps you uncover insights. However, how you ask questions impacts what and how much you can discover .

In This Article:

  • Open-ended vs. closed questions
  • Why asking open-ended questions is important
  • How to ask open-ended questions

There are two types of questions we can use in research studies: open-ended and closed.

  Open-ended questions allow participants to give a free-form text answer. Closed questions (or closed-ended questions) restrict participants to one of a limited set of possible answers.

Open-ended questions encourage exploration of a topic; a participant can choose what to share and in how much detail. Participants are encouraged to give a reasoned response rather than a one-word answer or a short phrase.

Examples of open-ended questions include:

  • Walk me through a typical day.
  • Tell me about the last time you used the website.
  • What are you thinking?
  • How did you feel about using the website to do this task?

Note that the first two open-ended questions are commands but act as questions. These are common questions asked in user interviews to get participants to share stories. Questions 3 and 4 are common questions that a usability-test facilitator may ask during and after a user attempts a task, respectively.

Closed questions have a short and limited response. Examples of closed questions include:

  • What’s your job title?
  • Have you used the website before?
  • Approximately, how many times have you used the website?
  • When was the last time you used the website?

Strictly speaking, questions 3 and 4 would only be considered “closed” if they were accompanied by answer options, such as (a) never, (b) once, (c) two times or more. This is because the number of times and days could be infinite. That being said, in UX, we treat questions like these as closed questions.

In a dialog between a facilitator and a user, closed questions provide short, clarifying responses, while open-ended questions result in the user describing an experience.


Using Closed Questions in Surveys

Closed questions are heavily utilized in surveys because the responses can be analyzed statistically (and surveys are usually a quantitative exercise). When used in surveys, they often take the form of multiple-choice questions or rating-scale items , rather than open-text questions. This way, the respondent has the answer options provided, and researchers can easily quantify how popular certain responses are. That being said, some closed questions could be answered through an open-text field to provide a better experience for the respondent. Consider the following closed questions:

  • In which industry do you work?
  • What is your gender?

Both questions could be presented as multiple-choice questions in a survey. However, the respondent might find it more comfortable to share their industry and gender in a free-text field if they feel the survey does not provide an option that directly aligns with their situation or if there are too many options to review.

Another reason closed questions are used in surveys is that they are much easier to answer than open-ended ones. A survey with many open-ended questions will usually have a lower completion rate than one with more closed questions.

Using Closed Questions in Interviews and Usability Tests

Closed questions are used occasionally in interviews and usability tests to get clarification and extra details. They are often used when asking followup questions. For example, a facilitator might ask:

  • Has this happened to you before?
  • When was the last time this happened?
  • Was this a different time than the time you mentioned previously?

Closed questions help facilitators gather important details. However, they should be used sparingly in qualitative research as they can limit what you can learn.


The greatest benefit of open-ended questions is that they allow you to find more than you anticipate. You don’t know what you don’t know.   People may share motivations you didn’t expect and mention behaviors and concerns you knew nothing about. When you ask people to explain things, they often reveal surprising mental models , problem-solving strategies, hopes, and fears.

On the other hand, closed questions stop the conversation. If an interviewer or usability-test facilitator were to ask only closed questions, the conversation would be stilted and surface-level. The facilitator might not learn important things they didn’t think to ask because closed questions eliminate surprises: what you expect is what you get.


Closed Questions Can Sometimes Be Leading

When you ask closed questions, you may accidentally reveal what you’re interested in and prime participants to volunteer only specific information. This is why researchers use the funnel technique , where the session or followup questions begin with broad, open-ended questions before introducing specific, closed questions.

Not all closed questions are leading. That being said, it’s easy for a closed question to become leading if it suggests an answer.

The table below shows examples of leading closed questions . Reworking a question so it’s not leading often involves making it open-ended, as shown in column 2 of the table below.

One way to spot a leading, closed question is to look at how the question begins. Leading closed questions often start with the words “did,” “was,” or “is.” Open-ended questions often begin with “how” or “what.”

New interviewers and usability-test facilitators often struggle to ask enough open-ended questions. A new interviewer might be tempted to ask many factual, closed questions in quick succession, such as the following:

  • Do you have children?
  • Do you work?
  • How old are you?
  • Do you ever [insert behavior]?

However, these questions could be answered in response to a broad, open-ended question like Tell me a bit about yourself .

When constructing an interview guide for a user interview, try to think of a broad, open-ended version of a closed question that might get the participant talking about the question you want answered, like in the example above.

When asking questions in a usability test, try to favor questions that begin with “how” or “what” over “do” or “did,” as in the table below.

Another tip to help you ask open-ended questions is to use one of the following question stems :

  • Walk me through [how/what]...
  • Tell me a bit about…
  • Tell me about a time where…

Finally, you can ask open-ended questions when probing. Probing questions are open-ended and are used in response to what a participant shares. They are designed to solicit more information. You can use the following probing questions in interviews and usability tests.

  • Tell me more about that.
  • What do you mean by that?
  • Can you expand on that?
  • What do you think about that?
  • Why do you think that?

Ask open-ended questions in conversations with users to discover unanticipated answers and important insights. Use closed questions to gather additional small details, gain clarification, or when you want to analyze responses quantitatively.

Related Topics

  • Research Methods

Learn More:

Video: Open vs. Closed Questions in User Research (https://www.youtube.com/watch?v=LpV3tMy_WZ0)


Competitive Reviews vs. Competitive Research

Therese Fessenden · 4 min


15 User Research Methods to Know Beyond Usability Testing

Samhita Tankala · 3 min


Always Pilot Test User Research Studies

Kim Flaherty · 3 min

Related Articles:

Field Studies Done Right: Fast and Observational

Jakob Nielsen · 3 min

Should You Run a Survey?

Maddie Brown · 6 min

The Funnel Technique in Qualitative User Research

Maria Rosala and Kate Moran · 7 min

Card Sorting: Pushing Users Beyond Terminology Matches

Samhita Tankala and Jakob Nielsen · 5 min

Card Sorting: Uncover Users' Mental Models for Better Information Architecture

Samhita Tankala and Katie Sherwin · 11 min

The Diverge-and-Converge Technique for UX Workshops

Therese Fessenden · 6 min


Preparing Students in Writing Responses to Open-Ended Questions


The new 2015–2016 assessments written by the Smarter Balanced Assessment Consortium and the Partnership for Assessment of Readiness for College and Careers both heavily feature questions that require students to provide evidence for their reply. This is a dramatic departure from simple multiple-choice questions, where students can guess the best response if they are unsure of the answer. What can teachers do to prepare students for this more rigorous form of testing? How can teachers help students pinpoint the heart of open-ended questions to give the best response?

Suppose you are a student taking one of the new assessments that have been developed to measure attainment of the Common Core State Standards for the English Language Arts (CCSS/ELA). After reading a text about a baseball-loving girl and her grandmother, you look at the questions you are to answer. Here is what you see:

What does Naomi learn about Grandma Ruth? Use details from the text to support your answer. ( Grandma Ruth, Smarter Balanced test sample)

This task is an example of the Smarter Balanced Assessment Consortium’s (SBAC) tasks for grades 3–5. It is illustrative of a task format thousands of students will encounter when they take that assessment in the fall of 2014. It is also similar to a format found on the end-of-year assessment tasks used by the Partnership for Assessment of Readiness for College and Careers (PARCC).

These tasks, open-ended questions as well as research simulations (often described as performance assessments), require students to construct their own responses rather than select them from a set of given possibilities. And, if you are a typical student, this assessment may be the first time that you have been required to respond to a task by doing more than filling in a bubble. Needless to say, if you are a typical student, responding successfully to such a task might prove daunting.

A great deal has been written—and continues to be written almost daily—about implementing classroom instruction that promotes the skills and knowledge called for in the CCSS/ELA. Less focus has been given, however, to addressing the additional skills and different kinds of knowledge that are called for to complete some of the tasks found on the new CCSS/ELA-related assessments.

The skills and knowledge that underlie understanding the expectations of and writing responses to higher-level questions are not simply test-taking abilities. Rather they are skills and dispositions that apply to both demonstrating achievement on the assessments and, more importantly, to effective information processing in the 21st century.

Focusing on open-ended tasks (future issues of Text Matters will address other types of tasks, such as research simulations), this issue of Text Matters identifies the skills and knowledge that students will need if they are to achieve success on the new CCSS/ELA-related assessments and offers ideas for ways that teachers can develop these skills and understandings. The three main goals of this article are:

  • to describe how students need to approach the close reading of the questions, or tasks on the assessments;
  • to identify the kinds of skills and knowledge students need in writing clear, comprehensible responses; and
  • to examine issues related to fluency in writing and stamina that arise as students work with extended texts.

Applying Close Reading to Open-Ended Assessment Tasks

As of early 2014, most American students are not accustomed to writing extended responses for assessment questions. Rarely do state assessments (and even more rarely commercial publishers’ norm-referenced tests) require students to write even a phrase or a sentence or two in response to questions, much less an entire paragraph. They might write answers to some questions in core reading materials. However, these answers are seldom extended and the questions do not yet reflect the “close reading” intent of the CCSS/ELA, which involves inspecting a text closely for evidence that supports responses. In many classrooms, students do little writing in response to their reading. Most often, they construct responses, usually orally, for an immediate audience (e.g., a small group or the entire class in a classroom setting). If a response misses the point of a text, the student gets immediate clarification and correction from the teacher or a peer.

On the CCSS/ELA-related assessment tasks, however, students will be reading texts on their own and writing responses for an audience that is remote. And, whereas students in a classroom setting get a second chance for a correct response as the teacher repeats a question in a class discussion or asks for greater clarification on a written report or essay, there is no such fallback for students who miss the intent of an open-ended test task. The lack of immediate feedback and guidance creates a major impediment to the ability of students to write responses that demonstrate what they comprehend from the text and provide support from the text for these responses.

In addition, when one examines student responses to open-ended tasks, it becomes apparent that many students also do not read the questions carefully, and their responses are off target or not sufficient.

Consider this example of a constructed-response question and what specifically it requires students to do:

What could you conclude about the author’s bias? Provide two pieces of evidence from the text that support your conclusion.

Mistakes students are likely to make in answering this question have nothing to do with their comprehension of the stimulus text. Often these mistakes reflect lack of attention to the specifics of the task and lack of completeness in responding. Common mistakes that students make in their responses include the following:

  • They provide only one piece of evidence from the text.
  • They provide their own ideas, but no evidence from the text.
  • They provide adequate evidence but no clearly stated conclusion.
  • They fail to pay attention to the verbs in the questions.
  • They do not make a clear connection between their conclusion and the evidence.
  • They respond in an incomplete manner that is often difficult to understand.

The first three problems indicate that close reading is a skill applicable not only to how students must read the stimulus text, but also to how they must read a question and think about what it requires them to do.

Another mistake students often make as they read assessment tasks is the failure to pay attention to the verbs in the questions. For these open-ended tasks, the scoring guides are closely aligned with the verbs, and teachers must make sure students understand that there are differences among explain , describe , list , summarize , and identify . For example, a student response that describes a situation will not receive a full score if the assessment task asks the student to explain it. Some lessons on these verb differences and on how to respond to questions that contain each can help students in their careful reading of tasks and successful construction of responses. Students also need practice responding to questions with different verbs and discussing how their responses reflect the verbs’ intent. This attention to understanding the verbs of questions is useful for almost all students, but it is critical for English language learners. The final two mistakes made by students in their responses, as noted above, are largely conceptual shortcomings that will be discussed in the following section.

One important note: It is crucial that teachers directly teach close reading of tasks. Students might be able to perform an assessment task but fail to demonstrate their ability because they misread the task. Moreover, attention to the specific requirements of tasks is not only a skill but a critical disposition for success at school, at work, and even in personal pursuits such as sports and hobbies. Helping students recognize the importance of attention to task requirements in all aspects of their lives promotes the development of this disposition. Teachers can use games such as Simon Says to develop this ability with very young students. Keeping classroom discussions on topic or work groups on task can promote this disposition as students move across grades.

The new CCSS/ELA-related assessments contain a variety of open-ended tasks, in addition to the ones that have already been described. Table 1 lists ways in which students may fall short in their responses to particular kinds of tasks.

Table 1
Examples of Open-Ended Tasks and Mistakes Students Make with Them
Example of Task | Examples of Frequent Mistakes Made by Students
Give three reasons, based on details in the text, that Wolfgang thought he was doing the right thing.
What is the main point the author is making in this article? Provide three details that make that point.
Tell which character you believe was the bravest and give evidence from the story that shows that the character was brave.

Students’ faulty responses reflect a lack of experience with the types of tasks—tasks that require students to read closely and attend to the evidence in the text. Becoming competent at these tasks requires experience with them, along with deliberate instruction in strategies and in the close reading of task directions. Three actions on the part of teachers will support students in developing the competence that will keep them on the road to college and career readiness:

  • Provide students with opportunities to respond to open-ended questions with connected discourse.
  • Read responses to open-ended questions as a class and discuss whether the responses actually describe, explain, support, etc. or are off task. This demonstrates the importance of close reading of questions and lays the foundation for students’ self-checking their own responses.
  • Help students to develop the habit of checking answers, similar to checking an answer in math. Is this the type of answer the question requires? Does it make sense? Are all the required pieces here?

Writing Complete, Comprehensible Responses

Look again at the typical mistakes students make, such as those listed in Table 1. Many of these mistakes reflect two major problems:

  • Students write in an incomplete, difficult-to-understand manner, as if they were speaking to someone familiar rather than writing for a stranger or remote reader.
  • Students do not make clear connections between their conclusions and the text evidence.

Teachers can help students avoid these problems by clarifying who their readers will be and by demonstrating how to frame responses in ways that make explicit connections between their ideas and information from the text.

Writing for Remote Readers

Writing for “remote readers” is a new experience for young students who are accustomed to sharing their writing with teachers and peers who can give feedback about clarity on the spot. Teachers need to help their students understand that as they write responses on a large-scale assessment, they are writing for readers who are unfamiliar with them personally and who will not be available to ask for clarifications or to point out shortcomings of their writing. Indeed, students need to know that their responses might even be “read” and scored by a computer.

In addition, students, especially younger students, are not aware of the importance of providing clear indications of their thinking in their writing. During class discussions of text-related questions, students can ask for clarifications and have incomplete or vague responses corrected. When writing answers for a stranger to read, clarity is essential. Showing students some unclear responses to questions and discussing how to fix them is one step in developing both their awareness of the need for clarity and their skill in providing it. Having them work in groups to improve the clarity of their own responses and those of peers is another approach that can help focus student attention on how to apply this skill.

Finally, prompting students to self-monitor by asking questions is an especially effective way to help them keep in mind the need for clarity as they write. A guiding checklist can provide them with hints such as the following:

  • Can someone who is not sitting next to me understand my response without asking for clarification?
  • Would the evidence from the text that I’ve chosen to support my response convince me?
  • Is my response thorough and complete? Can I add details from the text to make it stronger?
  • Does my response answer the question?

Making Explicit Connections

Making connections between ideas in writing is a key aspect of clarity. As in the examples above, most CCSS/ELA-related assessment tasks ask students to give evidence from the text to support their responses. Examination of student work shows that those who are unfamiliar with this kind of test question commonly provide just a conclusion and list two details from the text. They seldom offer any information as to how these details support their conclusion.

Direct instruction and practice with both written and oral responses can develop students’ skill in making connections explicit. The following are some practices and activities that teachers can use both to help students develop a model for thorough, complete answers and to learn about the aspects of their writing that trigger confusion in readers:

  • Provide opportunities for students to share feedback with each other on the quality of their responses. (This is a handy habit to develop for both college and career readiness.)
  • Encourage students to use applications such as Box or Dropbox set up for the classroom to give feedback on each other’s written responses, compositions, and thoughts about class work. Students should reflect on the clarity of their own writing as well as provide peers with feedback.
  • Constantly provide opportunities for students to self-monitor their oral and written responses.
  • Conduct a bull’s eye activity to guide student discussions about the quality of sample responses to questions (Kapinus, 2002). Using a target chart such as the one in Figure 1, teachers can explain that just as the target has different rings of difficulty, responses have different levels of completeness.

Figure 1. Bull’s Eye Chart


August 1, 2024

Optional and Open-Ended Essay Questions: What’s the Best Strategy?


Many business schools use open-ended essay prompts, which are usually variations of “I wish the admissions committee had asked me…” or “What is a question that you wish we had asked?” The most common among these open-ended questions is the optional essay, where you have free rein to discuss anything you feel is important, something you do not have the opportunity to address anywhere else in your application. 

What’s the best use of this wide-open space? Our clients are often uncertain, and nervous about making a mistake in what they choose to write about. In this post, we will guide you in making the optional essay work for you by using it to round out your profile. It’s an excellent opportunity, once you know how to optimize it.

Before you start thinking about a topic to write about, consider how many other “open spaces” you already have in the application. You might have both an optional essay and another open-ended question, or you might have only one.

Optional Essay Strategy 1: Putting profile weaknesses into context

If the optional essay is the only place where you can write about a topic of your choice, you could use it to explain any weakness or inconsistency in your profile, particularly one that would be noticeable to the adcom. If the school hasn’t asked about weaknesses anywhere else in the application, this would be a wise use of the space. Examples of noticeable weaknesses would be if your grades plummeted at some point when you were in college, if your GMAT score is lower than the school’s average, or if you had an employment gap of six months or longer. You could have legitimate reasons for any of these issues. For example, perhaps you had to undergo surgery or were dealing with another serious health issue, you had a death in the family, or you took on too many extracurricular activities or work responsibilities. All of these situations would naturally have depressed your performance.

Aim to write a short, clear, direct response. Don’t be vague about the circumstances you’re describing. If you are unclear or ambiguous in your explanation, the adcom might start guessing at scenarios and reasons for the poor showing, possibly assuming something much worse than the reality. 

Still, keep perspective about what might reflect poorly on your profile as a whole. There is no need to explain a single C+ grade from your freshman year. You’re not behind in the achievement sweepstakes because you hadn’t launched your own nonprofit or business by the age of 21. You don’t have to apologize for not spending your undergrad years at an Ivy League school. 

When using the optional essay to address a weakness, it’s not enough to present the relevant context surrounding the situation. You also need to explain what you learned from it or did afterward to improve or change the outcome to the extent possible. Were there circumstances beyond your control? Did you fail to use your best judgment (common and understandable in very young adults)? Either way, don’t make excuses, but be sure to demonstrate that the circumstances that led to your lower performance are a thing of the past. Provide evidence that you have learned how to handle such situations should a similar one arise in the future.

The optional essay almost always comes right at the end of the essay portion of the application and will therefore probably be the last thing the adcom reads. Do your best to give them something positive to remember you by.

Optional Essay Strategy 2: Highlighting your well-rounded personality 

On a brighter note, since MBA application essays often focus heavily on work examples, career goals, and/or why you are interested in attending a particular school, you can use the optional essay to reveal a more personal side of yourself to the adcom. This can establish that you are a well-rounded and interesting individual who would be a welcome addition to the class. 

If you simply come across as “the project manager with the 740 GMAT,” you risk blending in with too many of your competitors, losing a potentially critical advantage. Similarly, you almost certainly will have already written about leadership, so don’t be repetitive by using the optional essay to discuss a secondary leadership role, even if it was incredible and you’re very proud of it. Instead, really stand out as “the project manager who used skydiving as a team-building exercise,” “the investment banker who teaches salsa dancing to senior citizens,” or “the marketing manager who taught herself two additional languages in her spare time.” Now that is an application to remember!

Open-Ended Essay Strategy 

So, how should you approach an open-ended prompt like “I wish the admissions committee had asked me…”? Let’s start with how not to respond to it. Believe it or not, some applicants will waste this opportunity by discussing bland “catchall” topics – for example, “I wish the admissions committee had asked me how I achieve excellence in everything I do.” Please don’t do this. This type of answer usually ends up being way too generic, even arrogant.

You can definitely take a more lighthearted approach here and write about an aspect of yourself that is uniquely, distinctively, memorably you. Lighthearted does not mean frivolous. For example, you might say that you wish the committee had asked you about the book that changed your life, your favorite musical instrument, the historical personality who fascinates you the most, the moment you realized you weren’t a kid anymore, or what you admire most about a friend, relative, or mentor. Pick a subject for which your enthusiasm is genuine, because it will provide insight into you on a more personal level. It is almost sure to make your writing livelier, more interesting, and memorable. And here’s another bonus: beyond the work experience and academic abilities you have already written about, these answers can convey your potential value as a member of the incoming class and the teams you will work with.

You could also consider responding to the prompt with any of the following: “What do I do for fun?” “How did my grandmother’s immigrant journey from Korea influence my values?” “What have I learned about the creative process from learning to build websites?” “Why was getting fired from my first job one of the best things that ever happened to me?” The sky’s the limit!

Open-ended essay questions in your MBA application present fantastic opportunities for you to round out your candidacy for the adcom. At a minimum, you can bolster your profile by explaining any deficiencies in it and proving that you have addressed them. Moreover, you can surprise the adcom by unveiling a more personal, memorable aspect of yourself that will make you stand out for all the right reasons!

Do you need help answering these questions or any other MBA application questions? Schedule a free consultation with an Accepted admissions pro who will answer your questions and help you get accepted.

Judy Gruen

By Judy Gruen, a former Accepted admissions consultant. Judy holds a master’s in journalism from Northwestern University and is the co-author of Accepted’s first full-length book, MBA Admission for Smarties: The No-Nonsense Guide to Acceptance at Top Business Schools. Want an admissions expert to help you get accepted? Click here to get in touch!

Related Resources:

  • Four Tips for Displaying Teamwork in Your Application Essays
  • Highlighting Your Leadership Experience in Your Application
  • How Personal Is Too Personal In Your Application Essays?


  • Open access
  • Published: 28 November 2014

Should essays and other “open-ended”-type questions retain a place in written summative assessment in clinical medicine?

Richard J Hift

BMC Medical Education, volume 14, Article number: 249 (2014)


Written assessments fall into two classes: constructed-response or open-ended questions, such as the essay and a number of variants of the short-answer question, and selected-response or closed-ended questions, typically in the form of multiple-choice items. It is widely believed that constructed-response written questions test higher-order cognitive processes in a manner that multiple-choice questions cannot, and consequently have higher validity.

An extensive review of the literature suggests that in summative assessment neither premise is evidence-based. Well-structured open-ended and multiple-choice questions appear equivalent in their ability to assess higher cognitive functions, and performance in multiple-choice assessments may correlate more highly than the open-ended format with competence demonstrated in clinical practice following graduation. Studies of construct validity suggest that both formats measure essentially the same dimension, at least in mathematics, the physical sciences, biology and medicine. The persistence of the open-ended format in summative assessment may be due to the intuitive appeal of the belief that synthesising an answer to an open-ended question must be both more cognitively taxing and closer to actual experience than selecting a correct response. I suggest that cognitive-constructivist learning theory would predict that a well-constructed context-rich multiple-choice item represents a complex problem-solving exercise which activates a sequence of cognitive processes which closely parallel those required in clinical practice, hence explaining the high validity of the multiple-choice format.

The evidence does not support the proposition that the open-ended assessment format is superior to the multiple-choice format, at least in exit-level summative assessment, in terms of either its ability to test higher-order cognitive functioning or its validity. This is explicable using a theory of mental models, which might predict that the multiple-choice format will have higher validity, a statement for which some empiric support exists. Given the superior reliability and cost-effectiveness of the multiple-choice format, consideration should be given to phasing out open-ended format questions in summative assessment. Whether the same applies to non-exit-level assessment and formative assessment is a question which remains to be answered, particularly in terms of the educational effect of testing, an area which deserves intensive study.


Learning and the stimulation of learning by assessment

Modern definitions of learning, such as that attributed to Siemens: “Learning is a continual process in which knowledge is transformed into something of meaning through connections between sources of information and the formation of useful patterns, which generally results in something that can be acted upon appropriately, in a contextually aware manner” [ 1 ],[ 2 ] essentially stress two points: firstly, that learning requires a much deeper, effortful and purposeful engagement with the material to be learned than the acquisition of factual knowledge alone; secondly, that learned knowledge does not exist in a vacuum; its existence is inferred from a change in the learner’s behaviour. This has led transfer theorists to postulate that knowledge transfer is the basis of all learning, since learning can only be recognised by observing the learner's ability to display that learning later [ 3 ],[ 4 ].

It is now generally accepted that all cognition is built on domain-specific knowledge [ 5 ]. Content-light learning does not support the ability to transfer knowledge to new situations and a comprehensive store of declarative or factual knowledge appears essential for transfer [ 4 ]. Furthermore, a high order of understanding and contextualization must accompany the declarative knowledge if it is to be successfully applied later. Where transfer – in other words, the successful application of knowledge to new situations – has been shown, the common factor appears to be deep learning, and the abstraction of general principles [ 6 ]-[ 8 ].

Indeed, knowledge may be acquired and held at varying depths. Aspects of this are reflected in the cognitive levels of learning constituting Bloom's taxonomy of learning [ 9 ]-[ 14 ] (Figure  1 ); the varying levels of clinical competence and performance described in Miller’s pyramid [ 15 ] (Figure  2 ) and the stages of proficiency postulated by Dreyfus and Dreyfus [ 16 ]. The extent to which different assessment formats measure proficiency over the entire range of complexity of understanding and performance is one of the central issues in assessment.

Figure 1. Modified Bloom’s taxonomy [ 11 ].

Figure 2. Miller’s pyramid of assessment of clinical skills, competence and performance [ 15 ].

Assessment is central to the educational process, and has benefits beyond that of measuring knowledge and competence alone; principally in directing and stimulating learning, and in providing feedback to teachers and learners [ 17 ]. Recent research supports a critical role for assessment in consolidating learning, and strengthening and facilitating memorisation and recall. There is accumulating evidence that the process of stimulating recall through testing enhances learning and retention of learned material. This has been termed the testing effect, and several hypotheses have been put forward to explain it, including increased cognitive effort, conceptual and semantic processing, and increased attention to the properties distinguishing the learnt item from similar items, which strengthens the relationship between the cue which triggers the memory and the memory item itself [ 18 ],[ 19 ]. It appears to be principally the act of retrieving information from memory which strengthens knowledge and knowledge retention [ 20 ],[ 21 ], irrespective of whether retrieval is covert or overt [ 22 ]. Importantly, high-level questions appear to stimulate deeper conceptual learning and better learning retention than those pitched at a lower level [ 23 ]. A number of strategies have been proposed to exploit this in educational practice, including those recently summarised for use in medical education [ 24 ]. This is in a sense related to the “generation effect”, where it has been shown that spontaneously generating information as opposed to learning it passively improves subsequent recall [ 18 ],[ 19 ].

Assessment in educational practice

It is accepted that standards of assessment are inherently variable. There is therefore an obligation, in summative assessment, to ensure that assessment meets certain minimum criteria [ 25 ]. Achieving this in the individual instance is challenging, given the wide range of skills and knowledge to be assessed, marked variation in the assessment expertise of those who must assess, and the highly variable environments in which the assessment takes place. There is now an extensive literature on assessment, in terms of research, guidelines and recommendations [ 26 ],[ 27 ]. Importantly, modern approaches recognise that no single form of assessment is suitable for every purpose, and stress the need for programmatic assessment, which explicitly recognises that assessment is best served by a careful combination of a range of instruments matched to a particular purpose at each stage of the learning cycle, such as for formative, diagnostic or summative purposes [ 25 ],[ 26 ],[ 28 ].

Written assessment

Despite the proliferation of assessment methodologies which attempt to test the competence of medical students directly, such as OSCE, OSPE, case-based assessment, mini-CEX and workplace-based assessment, written assessments remain in widespread use. Much of the knowledge base required by the clinician is not necessarily testable in the performance format. Additionally, in comparison with most practical assessment formats, written tests are easier to organize and deliver, requiring little more than pen and paper or a computer, a venue, question setters and markers who need not be physically present.

In general, all forms of written assessment may be placed into one of two categories. Constructed response or open-ended questions include a variety of written formats in which the student is required to generate an answer spontaneously in response to a question. The prototypical example is the essay. There are many variants including short answer questions (SAQ), mini-essay questions, single-word and single-sentence questions and the modified essay question (MEQ). The selected-response or closed-ended format is typified by the multiple-choice question (MCQ) assessment, where candidates select the most appropriate answer from a list of options rather than generating an answer spontaneously. Many variants of the multiple-choice format have been used: current best practice recommends the use of one-best-answer (of three, four or five possible answers), and extended matching item (EMI) formats [ 29 ]. In this debate I shall use the term open-ended when referring to the constructed-response format, and multiple-choice as a synonym for the selected-response format.

All high-stakes assessments should meet an adequate standard in terms of quality and fairness, as measured by a number of parameters, summarised recently in a consensus statement [ 30 ]. Principal among these are the classic psychometric parameters of reproducibility (reliability or consistency; that a result would not essentially change with retesting under similar conditions), and validity or coherence, which I describe in detail below. Other important measures by which assessments should be judged are equivalence (assessments administered at different institutions or during different testing cycles produce comparable outcomes), feasibility (particularly in terms of efficiency and cost effectiveness), educational effect (the student who takes the assessment is thereby motivated to undertake appropriate learning), catalytic effect (the assessment provides outcomes that, when fed back into the educational programme, result in better teaching and learning) and acceptability to both teachers and learners.

It is generally accepted that the multiple-choice format, in contrast to the open-ended format, has high reliability and is efficient, a consequence primarily of wide sampling, and to a lesser extent, of its objectivity. In support of the open-ended format, it has been widely held that this format is superior at testing higher cognitive levels of knowledge and has greater validity. This belief is intuitively appealing and appears to represent the viewpoint of many of those involved in medical assessment, including those with extensive knowledge and experience in medical education. In an attempt to gain the best of both formats, there has been a shift from the prototypical essay towards newer formats comprising a larger number of short, structured questions, a development intended to retain the perceived benefit of the open-ended question with the superior reliability of the MCQ.

Thus the two formats are generally seen to be in tension, MCQ being significantly more reliable, the open-ended format having greater validity. In this debate I will compare the performance of the open-ended format with MCQ in summative assessment, particularly in final exit examinations. I draw attention to the large body of evidence which supports the view that, in summative assessment, the multiple-choice format is intrinsically able to provide all the value of the open-ended format and does so more reliably and cost effectively, thus throwing into question the justification for the inclusion of the open-ended format in summative assessment. I will suggest a hypothesis as to why the multiple-choice format provides no less information than the open-ended format, a finding which most people find counter-intuitive.

A critical concept is that assessment is not only of learning, but also for learning [ 27 ],[ 31 ]. In the first case, the purpose of assessment is to determine whether that which is required to be learnt has in fact been learnt. In the second case, it is acknowledged that assessment may in itself be a powerful driver for learning at the cognitive level. This is supported by a body of evidence indicating the powerful effect of assessment on strengthening memorisation and recall [ 20 ],[ 22 ],[ 23 ]. In this debate I concentrate primarily on summative assessment in its role as assessment of learning ; one must however remain aware that those methods of assessment best suited to such summative assessment may not be identical to those best suited to assessment for learning ; indeed, it would be surprising if they were.

For the first part of the 20th century, written assessment in medicine consisted largely of essay-writing [ 30 ]. Multiple-choice assessment was developed for psychological testing by Robert Yerkes immediately before the First World War and then rapidly expanded for the testing of army recruits. Yerkes was interested in assessing learning capacity—not necessarily human—and applied it to crows [ 32 ] and pigs [ 33 ] as well as psychiatric patients and mentally challenged subjects, a group among whom it was widely used for a number of years thereafter [ 34 ],[ 35 ]. Application to educational assessment has been credited to Frederick J. Kelly in 1914, who was drawn to it by its efficiency and objectivity [ 36 ].

Throughout its history, the multiple-choice format has had many detractors. Their principal arguments are that closed-ended questions do not stimulate or test complex constructive cognitive processes, and that if the ability to construct rather than choose a correct answer is not actively assessed, there is a potential that it will be neither taught nor learnt [ 37 ]-[ 41 ].

As Rotfield has stated: "Students proudly show off their high grades, from multiple-choice exams, as if their future careers will depend on knowing which choice to make instead of discerning which choices exist" [ 42 ]. Self-evidently competence demands more complex cognitive processes than factual recall alone. The ability to invoke these higher levels of cognition is clearly a skill which should be explicitly assessed. Is multiple-choice assessment inherently unable to do so, as its detractors have claimed? The belief that open-ended questions test higher-order cognitive skills whereas multiple-choice questions do not, and that open-ended questions therefore evoke and test a reasoning process more representative of real-life problem-solving than multiple-choice does, is a serious concern which I address in this review. We begin, however, with a comparison of the two formats in terms of reproducibility and feasibility.

Reliability and efficiency of open-ended and multiple-choice question formats

Wider sampling greatly increases reproducibility, compensating as it does for unevenness in a candidate’s knowledge, varying quality of questions and even the personality of examiners [ 43 ],[ 44 ]. That the reproducibility of the multiple-choice format is much higher than that of the open-ended format is borne out in numerous studies comparing the two formats [ 45 ]-[ 47 ]. Recognition of these shortcomings has led to the design of open-ended formats specifically intended to increase reproducibility and objectivity, while maintaining the supposed advantages of this format in terms of validity. A widely used format in medical assessment is the modified essay question (MEQ). The format consists of a clinical scenario followed by a series of sequential questions requiring short answers. This was expressly designed to bridge a perceived gap between multiple-choice and SAQ, as it was believed that it would prove better at testing high-order cognitive skills than multiple-choice while allowing for more standardised marking than the standard open-ended question [ 45 ].

Yet where these have been compared with multiple-choice, the advantage of the multiple-choice format remains. A large number of questions and multiple markers are required in order to provide acceptable reliability for MEQs and essay questions [ 45 ]. Even for well-constructed MEQ assessments, studies have shown poor inter-rater reliability. Thus in an MEQ paper in a final undergraduate medical exit examination marked in parallel by several assessors, statistically significant differences between the scores of the different examiners were shown in 50% of the questions, as well as significant differences in the median scores for the examination as a whole [ 47 ]. Nor were these differences trivial; a substantial difference in outcome, in terms of likelihood of failure, was shown. This is cause for concern. Schuwirth et al. have stressed the necessity for interpreting reliability in terms of outcome, particularly in terms of pass/fail misclassification, and not merely in terms of numeric scores such as Cronbach’s alpha [ 27 ]. In this and other such studies the open-ended questions were of the highest possible quality practically achievable, typically MEQs carefully prepared by skilled question writers working in teams, reviewed for appropriateness and scored using an analytic scoring scheme designed to minimise inter-rater variability. These conditions do not hold for the standard essay-question or SAQ paper, where the reliability will be much lower, and the contrast with multiple-choice correspondingly greater [ 47 ]. Open-ended items scored on a continuum, such as 0-100%, have much lower inter-rater reliability than those scored against a rigid marking schedule. Therefore the discrepancy in reliability for the “graded essay” marked on a continuum versus multiple-choice is much larger than it is for more objectively scored open-ended formats.
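
For readers unfamiliar with the statistic mentioned above, the following minimal sketch (Python; not part of the cited studies, with marks invented purely for illustration) shows how Cronbach's alpha is computed from a candidates-by-items matrix of marks. Higher values indicate that the items rank candidates consistently.

    import numpy as np

    def cronbach_alpha(scores):
        # scores: candidates x items matrix of marks
        # alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_variances = scores.var(axis=0, ddof=1).sum()
        total_variance = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    # Invented marks for 5 candidates on 4 open-ended items (0-10 scale)
    marks = [[6, 7, 5, 6],
             [9, 8, 9, 9],
             [4, 5, 3, 4],
             [7, 6, 7, 8],
             [5, 5, 6, 5]]
    print(round(cronbach_alpha(marks), 2))  # about 0.95 for these highly consistent made-up marks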

In contrast to the open-ended question format, the multiple-choice is objective and allows multiple sampling of a subject. The result is high reproducibility. Furthermore it substantially reduces the potential for a perception of examiner bias, and thus the opportunity for legal challenge by the unsuccessful candidate [ 48 ]. The multiple-choice format is efficient. Lukhele et al. studied a number of national university-entrance examinations which included both multiple-choice items and essay questions [ 49 ]. They found that 4-8 multiple-choice items provided the same amount of information as a single essay, and that the essay’s efficiency in providing information about the candidate’s ability per minute of testing was less than 10% of that of an average multiple-choice item. For a middle-level examinee, approximately 20 times more examination time was required for an essay to obtain the same information as could be obtained from a multiple-choice assessment. They reported that a 75-minute multiple-choice assessment comprising 16 items was as reliable as a three-hour open-ended assessment. Though the relative gain in efficiency using multiple-choice in preference to essay questions varies according to subject, it is an invariable finding [ 49 ].
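
The dependence of reliability on the number of items sampled, which underlies these efficiency comparisons, can be illustrated with the standard Spearman-Brown prophecy formula. The sketch below uses invented reliability values, not figures drawn from Lukhele et al.

    def spearman_brown(r, k):
        # Predicted reliability of a test lengthened k-fold, given the
        # reliability r of the original test (Spearman-Brown prophecy formula)
        return k * r / (1 + (k - 1) * r)

    # Illustrative only: if a block of items equivalent in testing time to one
    # essay has reliability 0.35, adding comparable items gives:
    print(round(spearman_brown(0.35, 4), 2))  # 0.68 with four times as many items
    print(round(spearman_brown(0.35, 8), 2))  # 0.81 with eight times as many items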

Though the initial development of a multiple-choice assessment is labour-intensive, this decreases with increasing experience on the part of item-writers, and decreases further once a question bank has been developed from which questions can be drawn for re-use. The lower efficiency of the open-ended question is not restricted to examination time but extends to the requirement for grading by examiners. Typically an open-ended test requires from 4 to 40 times as long to administer as a multiple-choice test of equivalent reliability [ 50 ]. In one study, the cost of marking the open-ended items was 300 times that of the multiple-choice items [ 49 ]; the relative cost of scoring the papers may exceed a factor of 1000 for a large examination [ 50 ].

The multiple-choice format thus has a clear advantage over open-ended formats in terms of reproducibility, efficiency and cost-effectiveness. Why then are open-ended questions still widely used? Principally this is because of a belief that essay-type questions, SAQ and their variants test higher-order cognitive thinking in a manner that MCQ cannot, and consequently have higher validity. It has been repeatedly stated that the MCQ format is limited in its ability to test deep learning, and is suitable for assessing facts only, whereas open-ended questions assess dynamic cognitive processes such as the strength of interconnected rules, the use of mental models, and the mental representations which follow [ 37 ]-[ 39 ]; in short, that open-ended questions permit the assessment of logical and reasoning skills in a manner that multiple-choice does not [ 40 ],[ 41 ]. Is there evidence to support these assertions?

The ability to test higher-order cognitive skills

The revised Bloom's taxonomy of learning [ 9 ]-[ 12 ] is helpful in evaluating the level of cognition drawn upon by an assessment (Figure 1). By convention, assessment questions targeting the first two levels are regarded as low-level questions, the third level as intermediate, and the fourth to sixth levels as high-level.

Those who understand the principles underlying the setting of high-quality multiple-choice items have no difficulty in accepting that multiple-choice is capable of assessing high-order cognition [ 10 ],[ 13 ],[ 14 ]. The shift from true-false questions (which, in order to avoid ambiguity, frequently test factual information only) to the one-best-answer and EMI formats has facilitated this [ 29 ]. Indeed, there exist well-validated instruments specifically designed to assess critical thinking skills and to measure their development with progress through college-level educational programs, which are entirely multiple-choice based, such as the California Critical Thinking Skills Test [ 51 ],[ 52 ]. Schuwirth and Van der Vleuten [ 48 ] make a distinction between context-rich and context-free questions. In clinical assessment, a context-rich question is typically presented as a case vignette. Information within the vignette is presented to candidates in its original raw format, and they must then analyse, interpret and evaluate this information in order to provide the answer. The stimulus reflects the question which the candidate must answer and is therefore relevant to the content of the question. An example of a final-year question in Internal Medicine is shown below. Such a question requires analysis (What is the underlying problem?), application (How do I apply what I know to the treatment of this patient?) and evaluation (Which of several possible treatments is the most appropriate?), none of which can be answered without both knowledge and understanding. Thus 5 of Bloom’s 6 levels have been tested.

Example of a context-rich multiple-choice item in internal medicine

A 24-year-old woman is admitted to a local hospital with a short history of epistaxis. On examination she is found to have a temperature of 36.9°C. She is wasted, has significant generalised lymphadenopathy and mild oral candidiasis but no dysphagia. A diffuse skin rash is noticed, characterised by numerous small purple punctate lesions. A full blood count shows a haemoglobin value of 110 g/L, a white cell count of 3.8×10⁹ per litre and platelet count of 8.3×10⁹ per litre. Which therapeutic intervention is most urgently indicated in this patient?

  • Antiretroviral therapy
  • Fluconazole
  • Platelet concentrate infusion

None of the options offered are obviously unreasonable or easily excluded by the candidate who attempts to shortcut the cognitive processes required in answering it by searching for clues in the options themselves. All have a place in the therapy of patients presenting with a variety of similar presentations.

Answering this item requires:

Analysis. In order to answer this item successfully, the candidate will have to recognise (1) that this patient is highly likely to be HIV-positive (given the lymphadenopathy, evidence of oral candidiasis and the high local prevalence of HIV), (2) that the presentation is suggestive of immune thrombocytopenic purpura (given the epistaxis, skin manifestations and very low platelet count), (3) that other commonly-seen concomitant features such as severe bacterial infection and extensive esophageal candidiasis are excluded by a number of negative findings.

Evaluation. Further, in order to answer this item successfully, the candidate will have to (1) consider the differential diagnosis for the principal components of the clinical vignette and, by process of evaluation, decide which are the most likely; (2) decide which of the diagnoses require treatment most urgently; and (3) decide which form of therapy will be most appropriate for this.

Knowledge, understanding and application. It is utterly impossible to “recognise” the correct answer to this item without having worked through this process of analysis and evaluation, and the knowledge required to answer it must clearly be informed by deep learning, understanding and application. Hence five of the six levels of Bloom’s taxonomy have been tested. Furthermore it would appear an eminently reasonable proposition that the candidate who correctly answers this question will indeed be able to manage such a patient in practice, hence implying structural validity.

Though guessing has a 20% chance of providing the correct answer, this will be eliminated as a factor by assessing performance across multiple such items and applying negative marking to incorrect answers.
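
To see why guessing washes out over a full paper, consider the rough sketch below. The five-option item, 100-item paper, 50% pass mark and a penalty of 1/(k-1) marks per wrong answer are assumed values chosen for illustration; none of these figures comes from the text above.

    from math import comb

    def expected_mark_per_item(k_options=5):
        # Expected marks per item under pure guessing, with the usual
        # correction-for-guessing penalty of 1/(k-1) per wrong answer
        p = 1 / k_options
        penalty = 1 / (k_options - 1)
        return p * 1 - (1 - p) * penalty

    def prob_pass_by_guessing(n_items=100, k_options=5, pass_fraction=0.5):
        # Binomial tail: chance of reaching the pass mark by guessing alone
        # (ignoring negative marking, which only makes this smaller)
        p = 1 / k_options
        need = int(n_items * pass_fraction)
        return sum(comb(n_items, r) * p**r * (1 - p)**(n_items - r)
                   for r in range(need, n_items + 1))

    print(expected_mark_per_item())   # 0.0: guessing gains nothing on average
    print(prob_pass_by_guessing())    # effectively zero for these values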

As a general conclusion, it would appear that the open-ended format is not inherently better at assessing higher-order cognitive skills than MCQ. The fundamental determinant is the way in which the question is phrased in order to stimulate higher-order thinking; if phrased inappropriately, the open-ended format will not perform any better than MCQ. A crucial corollary is that in comparing formats, it is essential to ensure that MCQ questions crafted to elicit high-order thinking (particularly those which are context-rich) are compared with open-ended questions crafted to the same level; it is inappropriate to compare high-order items in one format with low-order items in the other. Several studies have investigated the effect of the stimulus on thought processes in open-ended questions and have shown that the stimulus format is more important than the response format. Scores on questions in open-ended format and multiple-choice format correlate highly (approaching 100%) for context-rich questions testing the same material. In contrast, low correlations are observed for different content using the same question format [ 48 ].

In response to the low objectivity and reliability of the classic essay-type questions, modified open-ended formats have evolved which typically combine short answers, carefully crafted questions and rigid marking templates. Yet this increase in reliability appears to come at a significant cost to the presumed advantage of the open-ended format over the multiple-choice format in testing higher orders of cognition. Feletti and Smith have shown that as the number of items in the open-ended examination increases, questions probing high-order cognitive skills tend to be replaced by questions requiring factual recall alone [ 46 ]. Hence as accuracy and reliability increase, any difference between such an assessment and a multiple-choice assessment in terms of other indicators tends to disappear; ultimately they converge on an essentially identical assessment [ 47 ],[ 49 ].

Palmer and Devitt [ 45 ] analysed a large number of multiple-choice and MEQ questions used for summative assessment in a clinical undergraduate exam. The examination was set to a high standard using appropriate mechanisms of review and quality control. Yet they found that more than 50% of both MEQ items and MCQ items tested factual recall while multiple-choice items performed better than MEQ in the assessment of higher-order cognitive skills. They reported that "the modified essay question failed in its role of consistently assessing higher cognitive skills whereas the multiple-choice frequently tested more than mere recall of knowledge”.

In a subsequent study of a rigorously prepared and controlled set of exit examinations, they reported that the proportion of questions testing higher-level cognitive skills was lower in the MEQ paper than in the MCQ paper. More than 50% of the multiple-choice items assessed higher-level cognition, as opposed to just 25% of the MEQ items. The problem was compounded by a higher frequency of item-writing flaws in the MEQ paper, and flaws were found in the marking scheme in 60% of the MEQs. The authors conclude that “The MEQ paper failed to achieve its primary purpose of assessing higher cognitive skills” [ 47 ].

We therefore appear to be dealing with a general rule: the more highly open-ended questions are structured with the intention of increasing reliability, the more closely they converge on an equivalent multiple-choice question in terms of performance, thus negating any potential advantage of the open-ended format over the closed-ended [ 53 ]; indeed they appear frequently to underperform MCQ items in the very area in which they are believed to hold the advantage. Thus the shift to these newer forms of assessment may actually have had a perverse effect in diminishing the potential for the open-ended assessment to evaluate complex cognitive processes. This does not imply that open-ended items such as SAQ, MEQ and key-feature assessments, particularly those designed to assess clinical reasoning, are inherently inferior to MCQ; rather it is a warning that there is a very real risk in practice of “dumbing-down” such questions in an attempt to improve reliability, and empiric observations suggest that this is indeed a consequence frequently encountered even in carefully crafted assessments.

Combining multiple-choice and open-ended tests in the same assessment, in the belief that one is improving the strength of the assessment, leads to an overall less reliable assessment than is constituted by the multiple-choice section on its own [ 49 ], thus causing harm rather than adding benefit [ 50 ].

The second argument, frequently advanced in support of the open-ended format, is that it has greater validity; that spontaneously recalling and reproducing knowledge is a better predictor of the student’s eventual ability to handle complex problems in real life than is the ability to select an answer from a list [ 54 ]. Indeed, this argument is intuitively highly appealing. The case for the retention of open-ended questions in medical undergraduate and postgraduate assessment largely rests on validity, with the assumption that asking the candidate to describe how they would diagnose, investigate and treat a patient predicts future clinical competence more accurately than does the ability to select the right response from a number of options [ 55 ],[ 56 ]. The question of validity is central. If the open-ended format is genuinely of higher validity than the multiple-choice format, then there is a strong case for retaining essay-type questions, SAQ and MEQ in the assessment protocol. If this contention cannot be supported, then the justification for retaining open-ended items in summative assessment may be questioned.

Is the contention true? Essentially, this may be explored at two levels. The first is to correlate outcomes between the two formats. The second is to perform appropriate statistical analysis to determine whether these formats are indeed testing different dimensions or “factors”.

Validity is an indicator of how closely the assessment actually measures the quality it purportedly sets out to test. It is self-evident that proficiency in many domains, including clinical practice, requires not only the ability to recall factual knowledge, but also the ability to generate and test hypotheses, integrate knowledge and apply it appropriately as required.

Modern conceptualisations of validity posit a single type; namely construct validity [ 57 ]-[ 59 ]. This is based on the premise that ultimately all validity rests on the fidelity with which a particular assessment reflects the underlying construct, “intangible collections of abstract concepts and principles which are inferred from behaviour and explained by educational or psychological theory” [ 60 ]. Construct validity is then defined as a process of investigation in which the constructs are carefully delineated, and evidence at multiple levels is sought which supports a valid association between scores on that assessment and the candidate's proficiency in terms of that construct. For example, five types of evidence have been proposed which may provide support for such an association [ 60 ],[ 61 ], namely content, the response process, internal structure, relationship to other variables and consequences. In this discussion we highlight the last two: convergent correlations between the two forms of assessment, and the impact of test scores on later performance, particularly performance requiring problem-solving under conditions encountered in the work situation. This “is particularly important to those employers more interested in hiring competent workers than good test takers” [ 62 ].

Direct comparisons of the open-ended and multiple-choice formats

Correlation.

Numerous studies have assessed the correlation of scores between the two formats. If scores are highly correlated, the two formats are essentially measuring the same thing in which case, in terms of validity, there is no advantage of one over the other. With few exceptions, studies indicate that scores on the two forms of assessment are highly correlated. Norman et al. compared the two formats prospectively and showed a strong correlation between the two sets of scores [ 63 ]. A similar result was found by Palmer et al. who suggested that the two types of examination were essentially testing similar characteristics [ 47 ]. Similarly Norcini et al. found that written patient management problems and multiple choice items appeared to be measuring essentially the same aspects of clinical competence, though the multiple-choice items did so more efficiently and with greater reliability [ 17 ]. Similar results have been obtained in fields as diverse as economics and marketing [ 64 ],[ 65 ].

In general correlations between the two formats are higher when the questions in each format are specifically designed to be similar (stem-equivalent), and lower where the items in the two formats differ. However, the difference is not great: in a meta-analysis, Rodriguez found a correlation across 21 studies of 0.92 for stem-equivalent items and 0.85 across 35 studies for non-stem-equivalent items. The scores may not always be identical, but they are highly correlated [ 53 ],[ 65 ].

Factor analysis: do the formats measure more than one construct?

Identification of the actual constructs measured in an assessment has proved challenging, given the lack of congruence between the simple cognitive assumptions on which testing is often based and the very complex cognitive nature of the constructs underlying understanding [ 66 ]. A number of studies have used confirmatory factor analysis and principal component analysis to determine whether the constructs tested by the two formats lie along a single dimension or along two or more divergent dimensions. Bennett et al. compared a one-factor model with a two-factor model to examine the relationship of the open-ended and closed-ended formats and found that in general the single factor provided a better fit. This suggests that essentially the two formats are testing the same thing [ 67 ]. Similarly Bridgeman and Rock found, using a principal components model, that both formats appeared to load on the same factor, implying that the open-ended format was not providing information on a different dimension [ 68 ]. Thissen and Wainer found that both formats could largely be ascribed to a single shared factor but did find some specific open-ended factors to which only the open-ended items contributed [ 69 ]. Though Lissitz et al. [ 70 ] quote a study by JJ Manhart, which found a two-factor model generally more appropriate than a one-factor model, this study has not been published and the significance of the divergence cannot be assessed.

In a study of high school assessments using confirmatory factor analysis, Lissitz et al. showed a correlation of 0.94 between the two formats in the domains of algebra and biology; a two-factor model provided a very slight increment over a one-factor model in terms of fit. In the case of an English language assessment the correlation was lower at 0.74, and a two-factor model provided a better fit. In a test of US government, intermediate results were found, with a correlation of 0.83 and a slight superiority of a two-factor model. This suggests that the addition of open-ended items in biology and algebra provided little further information beyond the multiple-choice items, whereas in other domains—English and government—the two formats are to some degree measuring different constructs [ 70 ]. Indeed, the literature in general suggests that differences in format are of little significance in the precise sciences such as biology and mathematics, but may have some relevance in fields such as history and languages, as suggested by Traub and Fisher [ 71 ]. In summary, there is little evidence to support the belief that the open-ended format is testing dimensions which the multiple-choice format cannot [ 53 ],[ 70 ],[ 72 ].
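
For readers unfamiliar with the method, the following toy sketch (synthetic data only; not a re-analysis of any study cited here) shows how a one-factor and a two-factor model can be compared for a combined set of MCQ and open-ended sub-scores, using scikit-learn's FactorAnalysis.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(0)
    # Simulate 500 candidates whose MCQ and open-ended sub-scores are all driven
    # by a single latent ability (the one-construct hypothesis), with the
    # open-ended scores given more measurement noise.
    ability = rng.normal(size=(500, 1))
    mcq_scores = ability + 0.3 * rng.normal(size=(500, 4))   # 4 MCQ sub-scores
    open_scores = ability + 0.6 * rng.normal(size=(500, 3))  # 3 open-ended sub-scores
    data = np.hstack([mcq_scores, open_scores])

    for k in (1, 2):
        model = FactorAnalysis(n_components=k).fit(data)
        # Average log-likelihood of the data under each model; little improvement
        # from k=1 to k=2 suggests the second factor adds essentially no information.
        print(k, round(model.score(data), 3))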

Construct validity was specifically assessed by Hee-Sun et al. [ 73 ], who attempted to measure the depth of understanding among school-level science students revealed by multiple-choice and short written explanatory answers respectively. They reported that students who showed higher degrees of knowledge integration were more likely to score highly on multiple-choice, though the reverse did not hold true. They suggested that the multiple-choice items were less effective in distinguishing adjacent grades of understanding as opposed to distinguishing high performance from low performance, a finding similar to that of Wilson and Wang [ 74 ] and Ercikan et al. [ 75 ]. Unfortunately the generalisability of these results is limited since the multiple-choice items were poorly standardised, both in format and in difficulty, and the circumstances under which the testing was conducted were essentially uncontrolled.

Lukhele et al. performed a rigorous analysis of high-quality university placement exams taken by thousands of candidates [ 49 ]. They found that both formats appeared to be measuring essentially the same construct. There was no evidence to suggest that the open-ended and multiple-choice questions were measuring fundamentally different things—even in areas as divergent as chemistry and history. Factorial analysis suggested that there were two variant dimensions reflected in the scores of the multiple-choice and open-ended sections, one slightly more related to multiple-choice and the other to the open-ended format. However, these were highly correlated; whatever factor is specifically measured by the open-ended format, multiple-choice would measure it almost as well. Thus for all practical purposes, in such summative assessments, multiple-choice assessments can satisfactorily replace open-ended assessments.

An important principle is that the variance introduced by measuring “the wrong thing” in the multiple-choice is small in comparison with the error variance associated with the open-ended format given its low reliability. This effectively cancels out any slight advantage in validity [ 49 ] (Figure  3 ). Indeed, Wainer and Thissen state that “measuring something that is not quite right accurately may yield far better measurement than measuring the right thing poorly” [ 50 ].

Figure 3. Stylized depiction of the presumed contrasting ability of the open-ended and multiple-choice formats to assess recognition and recall as opposed to higher forms of cognitive learning. Ideally, multiple-choice and open-ended questions would measure two different abilities (such as recall/recognition versus reasoning/application) – this may be shown as two divergent axes (shown on left). The error variance associated with each type of question is indicated by the shaded blocks, and is much greater for the open-ended question, given its inherent lower reliability. In practice, it appears that the two axes are closely aligned, implying that the two types of questions are measuring essentially the same thing (shown on right). What little additional information the open-ended question might be giving (as shown by a slight divergence in axis) is offset by its wide error variance, which in effect overlaps the information given by the multiple-choice question, thus significantly reducing the value of any additional information it provides.

In summary, where studies have suggested that the open-ended format is measuring something that multiple-choice does not (particularly in older studies), the effect has tended to be minimal, or possibly explicable on methodological grounds, or indefinable in terms of what is actually being measured. In contrast, methodologically sound studies converge on the conclusion that the difference in validity between the two formats is trivial. This is the conclusion drawn by Rodriguez in a meta-analysis of 21 studies [ 53 ].

Demonstrating an essential similarity for the two formats under the conditions of summative assessment does not necessarily mean that they provide identical information. It is possible and indeed likely that open-ended questions may make intermediate steps in thinking and understanding visible, thus serving a useful role in diagnostic as opposed to summative assessment [ 73 ],[ 75 ],[ 76 ]. Such considerations are particularly useful in using assessment to guide learning rather than merely as a judgment of competence [ 77 ]. In summative assessment at a stage prior to final exit from a programme, and particularly in formative assessment, the notion of assessment for learning becomes important; and considerations such as the generation effect and the potentiation of memory recall by testing cannot be ignored. Interestingly, a recent publication suggests that multiple-choice format testing is as effective as SAQ-format testing in potentiating memorisation and recall [ 23 ], thus supporting the contention that well-crafted MCQ and open-ended questions are essentially stimulating the same cognitive processes in the learner.

Some authors have raised the concern that students may constitutionally perform differentially on the two forms of assessment, and might be disadvantaged by a multiple-choice assessment should their strengths lie in the open-ended format. Studies in this area have been reassuring. Bridgeman and Morgan found that discrepant results were not predictive of poor academic performance as assessed by other parameters [ 78 ]. Ercikan et al . reported that discrepancies in the outcome between open-ended and multiple-choice tests were largely due to the low reliability of the open-ended component and inappropriate testing strategies [ 75 ]. A study which correlated the two formats with each other and with other measures of student aptitude showed a high degree of correlation and was unable to identify students who clearly had a propensity to perform consistently better on one format than the other [ 79 ]. Thus the belief that some students are constitutionally more suited to open-ended questions than to multiple-choice would appear to be unfounded.

An important question is whether the format of assessment affects the type of learning students use in preparation for it. As early as 1971, Hakstian suggested that anticipation of a specific form of examination did not result in any change in the amount or type of preparation, or any difference in performance in subsequent testing [ 80 ]. He concluded as follows: “The use of various types of tests to foster various kinds of study and learning, although widely advocated, would seem to be a practice based on intuitive appeal, but not convincingly supported by empirical research. In particular, the contention that the superiority of the essay examination is its ability to promote more desirable study methods and higher performance on tasks requiring organisation, and deeper comprehension analysis of information should be re-evaluated in light of the evidence in the present study of no differences between groups in terms of study methods, the essay examination, or items from the higher levels of the cognitive domain”. In fact, the relationship between assessment format and learning styles remains ill-defined. Though some studies have suggested that students tended to make more use of surface learning strategies in preparation for MCQs and deeper learning strategies in preparation for open-ended questions [ 81 ],[ 82 ], other studies have failed to show such an association [ 80 ],[ 83 ]. Some studies have even failed to show that deep learning approaches correlated with better performance in applied MCQs and a written course project, both of which required high-level cognitive performance [ 84 ],[ 85 ], though a significant finding was that a surface learning strategy appeared deleterious for both factual and applied MCQ scores [ 85 ].

Indeed, a review of the literature on learning strategies suggests that the notion that one or other assessment format consistently calls forth a particular learning strategy is simplistic, and much of the evidence for this may have been misinterpreted [ 86 ]. The student’s choice of learning style appears to depend on multiple interacting and, to some extent, confounding factors, most importantly the student’s innate learning motivation and preferred learning strategy. This is, however, subject to modification by other factors, particularly the student’s own perception of whether the assessment is directed at factual knowledge or at understanding, a perception which may frequently not coincide with the intentions of the examiner [ 87 ]. Individual differences in learning strategy probably outweigh any other consideration, including the assessment format, though this is not constant, and students will adapt their preferred learning strategy according to their perception of the requirements of a particular assessment [ 88 ]. A further study has suggested that the approach to learning the student brings into the course is the strongest predictor of the learning style they will employ subsequently and that, irrespective of the instructor’s best efforts, the only factor significantly correlated with a change in learning style is a change in the student’s perception of the cognitive demands of the assessment. Thus students are frequently strategic in their choice of learning strategy, but the strategies may be misplaced [ 87 ]. The student’s academic ability may also be relevant; one study has shown that more academically able science students correctly identified the MCQ as requiring deep knowledge and adopted an appropriate learning strategy, whereas less able students viewed the assessment as principally a test of recall and used a counter-productive surface-learning strategy.

Hadwin et al . have stressed the major influence of context on choice of learning strategy [ 88 ]. There is, for example, evidence that students will modify their strategy according to whether the assessment is perceived as a final examination or as an interim assessment, irrespective of format [ 81 ]. So-called construct-irrelevant factors such as female gender and increasing maturity tend to correlate with selection of a deep learning strategy [ 85 ] independent of assessment format, while the association of anxiety and other emotional factors with a particular assessment will impair performance and thus operate as a confounding factor [ 89 ],[ 90 ]. In discussing their results, Smith and Miller stated that “Neither the hypothesis that multiple-choice examination will promote student use of surface strategy nor the hypothesis that essay examination will promote student use of deep strategy were supported” [ 91 ]. As a general conclusion, it would appear valid to say that current evidence is insufficient to suggest that the open-ended format should be preferred over MCQ, or vice versa, on the grounds that it promotes more effective learning strategies.

It is also important to be aware that open-ended assessments may bring confounding factors into play, for example testing language mastery or skills rather than the intended knowledge domain itself [ 70 ], and hand-written answers also penalise students with poor writing skills, low writing speeds and poor handwriting [ 65 ].

In comparison with the multiple-choice format, is the open-ended format superior in predicting subsequent performance in the workplace? This has been assessed, and the answer, surprisingly, is that it may be less predictive. Rabinowitz and Hojat [ 92 ] correlated a single MEQ assessment and five multiple-choice assessments, written at the conclusion of a series of six clerkships, with performance after graduation. Results in the multiple-choice assessments consistently demonstrated the highest correlations with subsequent national examination scores and with objective assessments of performance in the workplace; the MEQ showed the lowest correlation. Wilkinson and Frampton directly compared an assessment based on long and short essay-type questions with a subsequent assessment protocol containing short essay questions and two multiple-choice papers [ 56 ], correlating these with performance in the subsequent internship year using robust rating methodologies. They found no significant correlation between the scores of the open-ended question protocol and assessments of performance in the workplace after graduation. In contrast, they found that the combination of the SAQ paper and two multiple-choice papers showed a highly significant correlation with subsequent performance. This study showed that the predominant use of multiple-choice in the assessment resulted in a significant improvement in the structural validity of the assessment in comparison with essay-type questions alone. It was unable to answer the question of whether the open-ended questions are necessary at all, since the multiple-choice component was not compared with the performance rating independently of the essay questions. The authors conclude that the change from the open-ended format to the multiple-choice format increased both validity and reliability.

Recommendations from the literature

Wainer and Thissen stated that: “We have found no evidence of any comparison of the efficacy of the two formats (when a particular trait was specified and skilled item writers then constructed items to measure it) in which the multiple-choice item format was not superior” [ 50 ]. Lukhele et al . concluded: “Thus, while we are sympathetic to… the arguments… regarding the advantages of open-ended format, we have yet to see convincing psychometric evidence supporting them. We are awash in evidence of their drawbacks”, and further, “… We are forced to conclude that open-ended items provide this information in more time at greater cost than the multiple-choice items. This conclusion is surely discouraging to those who feel that open-ended items are more authentic and, hence, in some sense, more useful than multiple-choice items. It should be” [ 49 ].

Palmer et al . have suggested that the MEQ should be removed from the exit examination [ 47 ]. Given that MEQs are difficult to write to a high standard and in such a way that they test high-order cognitive skills, and given the time required for marking and its subjectivity, their use does not represent an efficient use of resources. Indeed, they state “… MEQ's often do little more than test the candidate's ability to recall a list of facts and frustrate the examiner with a large pile of papers to be hand-marked”. They conclude there is no good measurement reason for including open-ended items in the high-stakes assessment, given that the MEQ performed poorly in terms of testing high-order thinking in comparison with the multiple-choice format, despite considerable effort to produce quality questions.

Schuwirth and Van der Vleuten have likewise suggested that there is no justification for the use of SAQs in assessment, since the stimulus of most SAQs can equally well be presented in multiple-choice format. They recommend that SAQs should not be used except where spontaneous generation of the answer is absolutely essential. Furthermore, they believe that there is little place for context-free questions in medical assessment, as a context-rich stimulus approximates clinical practice more closely [ 48 ].

Why does the open-ended format persist in medical assessment?

The evidence thus suggests that in written summative assessment the multiple-choice format is no less able to test high-order thinking than open-ended questions, may have higher validity, and is superior in reliability and cost-effectiveness. Remarkably, this evidence extends as far back as 1926 [ 53 ],[ 93 ], and the reasons underlying the persistence of the open-ended format in assessment are of some interest. I suggest a number of factors. First, studies bear out the common-sense expectation that questions designed to test factual knowledge only—irrespective of whether these are presented as open-ended or in multiple-choice format—do not test the same level of reasoning as more complex questions [ 94 ]. Indeed, a recurring finding in the literature is that the so-called deficiencies of the multiple-choice format lie more with the quality of the individual question item (and by inference, with the question-setter) than with the format per se . This leads to a self-fulfilling prophecy: examiners who do not appreciate the versatility of the multiple-choice format set questions which only test low-order thinking and, not surprisingly, achieve results which confirm their bias. Palmer et al. state that criticism of multiple-choice as being incapable of testing high-order thinking is in fact criticism of poorly written questions, and that the same criticism can be directed at open-ended assessments [ 45 ]. There is indeed evidence that stem-equivalent items tend to behave similarly, irrespective of whether the item is phrased as an open-ended question or in MCQ format. It is therefore essential that, in making comparisons, the items compared are specifically crafted to assess the same order of cognition. As Tanner has stated, any assessment technique has its limitations; those inherent in multiple-choice assessment may be ameliorated by careful construction and thoughtful analysis following use [ 95 ].

Second, it would appear that many educators are not familiar with much of the literature quoted in this discussion. The most persuasive material is found in the broader educational literature, and though there are brief references in the medical education literature to some of the studies to which I have referred [ 47 ],[ 48 ], as well as a few original studies performed in the medical assessment context [ 17 ],[ 45 ],[ 47 ],[ 63 ], the issue does not appear to have enjoyed prominence in debate and has had limited impact on actual assessment practice. In their consensus statement and recommendations on research and assessment, Schuwirth et al. stress the need for reference beyond the existing medical education literature to relevant scientific disciplines, including cognitive psychology [ 27 ]. In the teaching context, it is remarkable how the proposition that the open-ended format is more appropriate in testing the knowledge and skills ultimately required for the workplace has been repeatedly and uncritically restated in the literature in the absence of compelling evidence to support it.

Third is the counter-intuitiveness of this finding. The proposition that the open-ended format is more challenging than MCQ is intuitively appealing. Furthermore, there is the “generation effect”: experimental work has shown that spontaneous generation of information, as opposed to reading, enhances recall [ 18 ],[ 19 ]. Although this applies to learning rather than to assessment, many teachers implicitly attribute a similar but reversed process to the act of recall, believing that spontaneous recall is more valid than cued recall. However, face validity is an unreliable proxy for true validity, and the outcome in practice may contradict what seems intuitively correct [ 48 ]. As the literature on learning grows, it has become apparent that evidence-based practice frequently fails to coincide with the intuitive appeal of a particular learning methodology. Examples include the observations that interleaved practice is more effective than blocked practice, and distributed practice more effective than massed practice, in promoting acquisition of skills and knowledge [ 21 ]. There is a need for assessment to be evidence-based; to an extent, assessment would appear to lag behind learning and teaching methodology in this respect. Rohrer and Pashler have suggested that learning strategies shown to be more effective than their traditional counterparts, such as learning through testing, distributed practice and interleaved practice, remain underutilised because of “the widespread (but erroneous) feeling that these strategies are less effective than their alternatives” [ 21 ].

Fourth, and perhaps most defensible, is the concern that much as yet remains unknown about the nature of assessment, particularly when viewed from the perspective of assessment for learning, and in the light of very interesting new insights into the cognitive basis of memorisation, recall and reasoning, a field which is as yet largely unexplored and which may be expected to have a significant impact on the choice of assessment format. For diagnostic purposes, the open-ended format may hold value, since it is better able to expose the student’s intermediate thinking processes and therefore allow precise identification of learning difficulties [ 72 ]. Newer observations such as the generation effect [ 18 ],[ 19 ], the testing effect [ 20 ],[ 23 ], the pre-assessment effect, whereby the act of preparation for an assessment is itself a powerful driver of learning [ 96 ], and the post-assessment effect, such as the effect of feedback [ 96 ], are clearly important; were it to be shown that a particular format of assessment, such as the open-ended question, was superior in driving learning, this would be important information which might well determine the choice of assessment. At this point, however, no such reliable information exists. Preliminary work suggests that MCQ items are as effective as open-ended items in promoting the testing effect [ 23 ]. None of these considerations is as yet sufficiently well supported by experimental evidence to argue definitively for the inclusion of open-ended questions on the basis of their effect on learning, though the possibility clearly remains. Furthermore, this debate has concentrated on high-stakes, summative exit assessments, where the learning effects of assessment are presumably less important than they are at other stages of learning. Certainly, open-ended assessment remains appropriate for those domains not well suited to multiple-choice assessment, such as data gathering, clinical judgement and professional attitudes [ 92 ], and may have value for a particular question which cannot be presented in any other format [ 48 ]. Though the evidence is less compelling, open-ended items may also be superior in distinguishing between the performances of candidates at the two extremes of performance [ 75 ].

Cognitive basis for the observation

The need for assessment research to move beyond empiric observations to studies based on a sound theoretical framework has recently been stressed [ 27 ],[ 96 ]. There is as yet little written on the reasons for the counter-intuitive finding that MCQ is as valid as open-ended assessment in predicting clinical performance. I suggest that the observation is highly compatible with cognitive-constructivist and situated learning theory, and in particular the theory of conceptual change [ 97 ]. Fundamental to this theory is the concept of mental models. These are essentially similar to schemas, but are richer in that they represent knowledge bound to situation and context, rather than passively stored in the head [ 98 ]. Mental models may therefore be thought of as cognitive artifacts constructed by an individual based on his or her preconceptions, cognitive skills, linguistic comprehension and perception of the problem, which evolve as they are modified through experience and instruction [ 99 ]. Conceptual change is postulated to represent the mechanism underlying meaningful learning, and is a process of progressively constructing and organising a learner’s personal mental models [ 100 ],[ 101 ]. It is suggested that an effective mental model will integrate six different aspects: knowledge appropriately structured for a particular domain (structural knowledge), pathways for solving problems related to the domain (procedural knowledge), mental images of the system, associations (metaphors), the ability to know when to activate mental models (executive knowledge), and assumptions about the problem (beliefs) [ 102 ]. Increasing proficiency in any domain is therefore associated not just with an enlarging store of knowledge and experience, but also with increasing complexity in the way knowledge is organised, stored and accessed [ 103 ], particularly as complex mental models which may be applied to problem-solving [ 104 ]. A counterpart in the domain of medical expertise is the hierarchy of constructs proposed by Schmidt et al .: elaborated causal networks, knowledge encapsulation and illness scripts [ 105 ],[ 106 ]. Conceptual change theory has a clear relationship to our current understanding of expertise, which is postulated to emerge where knowledge and concepts are linked as mental representations into propositional networks which allow rapid processing of information and the omission of intermediate steps in reasoning [ 107 ],[ 108 ]; typically the expert’s knowledge is grouped into discrete packets or chunks, and manipulation of these equates to the manipulation of a large amount of information simultaneously, without conscious attention to any individual component [ 104 ]. In comparison with non-experts, the representations of experts are richer, more organised and more abstract, and are based on deep knowledge; experts also recognise the conditions under which the use of particular knowledge is appropriate [ 109 ]. As Norman has stated, “expert problem-solving in medicine is dependent on (1) prior experiences which can be used in routine solution of problems by pattern recognition processes and (2) elaborated conceptual knowledge applicable to the occasional problematic situation” [ 110 ]. The processes of building expertise and of constructing mental models are essentially parallel [ 99 ].

Therefore any form of assessment intended to measure proficiency must successfully sample the candidate’s organisation of and access to knowledge, and not just content knowledge alone [ 99 ],[ 111 ]. I have reviewed the empirical evidence which suggests that the multiple-choice format is indeed predictive of proficiency, which provides important evidence that it is valid. This is explicable in terms of mental models. An alternative view of a mental model is as an internal representation of a system that the learner brings to bear in a problem-solving situation [ 103 ],[ 104 ],[ 112 ]. The context-rich written assessment [ 48 ] is essentially an exercise in complex problem-solving, and fits the definition of problem-solving as “cognitive processing aimed at accomplishing certain goals when the solution is unknown” [ 103 ],[ 113 ].

Zhang has introduced the concept of a “distributed cognitive task”: a task requiring that information distributed across both the internal mind and the external environment is processed [ 114 ]. If we extend Zhang’s concept of external representation to include a hypothetical patient, the subject of the clinical vignette, who represents the class of all such patients, then answering the context-rich multiple-choice item may be seen as a distributed cognitive task. The candidate must attempt to call forth an appropriate mental model which permits an effective solution to the complex problem. In a sequence of events which parallels that described by Zhang, the candidate must internalise the information provided in the vignette, form an accurate internal representation (an equivalent concept is that of the problem space, a mental representation of the problem requiring solution [ 115 ]); this in turn activates and interacts with the relevant mental models and is followed by externalization: the return of the product of the interaction of internal representation and mental model to the external environment, and the selection of a solution. In effect a relationship has been defined between environmental information, activation of higher level cognition and externalisation of internal representations [ 114 ].

Assessment items which require complex problem-solving call on mental models appropriate to that particular context, and the item can only be answered confidently and correctly if the mental model is present at the level of proficiency. There is therefore no such thing as the student with generic expertise “in answering multiple-choice questions”, which explains the findings of Hakstian [ 80 ], Bridgeman and Morgan [ 78 ], Ercikan et al. [ 75 ] and Bleske-Rechek et al . [ 79 ], none of whom found convincing evidence for the existence of a class of student with a particular skill in answering multiple-choice questions.

Recent observations that retrieval of knowledge improves retention and may be enhanced in the learning process by frequent testing [ 20 ],[ 21 ], and in particular a recent publication summarising four studies performed in an authentic learning environment which demonstrates that testing using the MCQ format is as effective as SAQ testing [ 23 ], support the hypothesis that the MCQ format engages high-order cognitive processes in both learning and retrieval of memory. This is further supported by the finding that high-level test questions stimulate deeper conceptual learning and better retention than do low-level test questions [ 23 ].

In summary, the multiple-choice item tests the integrity and appropriateness of the candidate’s mental models, and in doing so is in fact assessing proficiency. If the item is designed to test factual recall only, then it will fail for this purpose, since it is the solution of a complex problem which tests the strength of the mental model and the cognitive processes which interact with it. Yet even a low-quality assessment based on factual recollection will correlate significantly with proficiency. Firstly, all mental models are built on a foundation of structural knowledge; the subject with sound mental models must therefore possess a good knowledge base. Secondly, possessing effective and appropriate mental models facilitates the retention and recall of knowledge [ 103 ]. Not surprisingly, therefore, even on a fact-based assessment, good students will correctly recall the information and excel, while students with deficient mental models are less likely to be able to recall the information when needed. This is supported by the work of Jensen et al . [ 116 ], who found that high-order questions stimulated deep conceptual understanding and retention, and correlated with higher performance on both subsequent high-order and low-order assessment items. Indeed, recognition and recall are highly correlated [ 50 ]. There is evidence that the cognitive processes evoked by the multiple-choice format are not influenced by cueing [ 117 ], though the frequent observation that MCQ scores are higher than those for equivalent open-ended assessments raises concern that cueing may yet have a role [ 118 ]. However, where the stem and options have been well designed, particularly such that the distractors all appear attractive to the candidate who lacks the requisite knowledge, cueing should not be an issue [ 29 ],[ 48 ], and the common argument that it is easier to recognise an answer than it is to generate it spontaneously would appear not to hold true.

Problem-solving skills are poorly generalizable [ 41 ]. This is explicable in that mental models are essentially domain-specific, representing a particular set of knowledge and circumstances, but the actual process of developing them is highly dependent on domain-general processes including metacognition, self-regulation and cognitive flexibility [ 99 ].

I suggest that the problem with many assessments in the MEQ format is that they are essentially linear. By requiring the candidate to think one step at a time, the assessment effectively misses the crux of the problem-solving process, which is to look at and respond to a complex problem in its entirety, and not stepwise. The context-rich, vignette-based multiple-choice item by contrast presents a complex problem which must be assessed holistically; it thus requires a form of cognitive processing which mirrors that associated with actual proficiency. Hybrid formats such as key-feature assessments in effect also break the clinical reasoning process down into a series of discrete steps; whether this is regarded as a drawback will depend on the relative importance ascribed to decision-making at critical points in the decision tree as against global assessment of a problem viewed holistically. This is a critical area for future research in clinical reasoning.

Educators who mistrust the multiple-choice format have tended to concentrate on the final, and cognitively the least important, step in this whole process: the selection of a particular option as the answer, while ignoring the complex cognitive processes which precede the selection. Indeed, in a good assessment, the candidate is not “selecting” an answer at all. They recognise the external representation of a problem, subject the internalised representation to high level cognitive processing, and then externalise the product as a solution [ 119 ], which (almost as if coincidentally) should coincide with one of the options given.

The multiple-choice format is by no means unlimited in its capacity to test higher-order thinking. The literature on problem-solving stresses the importance of ill-structured complex problems, characterised by unknown elements, no clear path to a solution, and indeed the potential for there to be many solutions or no solution at all [ 99 ]. The standard multiple-choice item by definition can have only one solution. Thus, though it may be context-rich, it is limited in its complexity. It is difficult, however, to imagine how a practically achievable open-ended written assessment might perform better. In order to accommodate complexity, the question would essentially have to be unstructured, thereby eliminating all the structured short-answer progeny of the essay format, such as the MEQ. In order to permit candidates to demonstrate freely the application of all their mental resources to a problem more complex than that permitted by a multiple-choice vignette, one would in all probability have to afford them the opportunity to develop an extensive, unstructured and essentially free-ranging, essay-length response; marking would be inherently subjective, and we would again be faced with the problems of narrow sampling, subjectivity and low reliability.

In effect the choice would then lie between an assessment comprising one or two unstructured essay length answers with low objectivity and reliability, and a large number of highly reliable multiple choice items which will effectively test high-order problem-solving, but will stop short of a fully complex situation. Perhaps this is a restatement of the assertion that “measuring something that is not quite right accurately may yield far better measurement than measuring the right thing poorly” [ 50 ], the situation depicted in Figure  3 .
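
The reliability side of this trade-off follows directly from test length. A minimal sketch using the standard Spearman–Brown prophecy formula (with unit reliabilities that are assumed purely for illustration) shows why a paper of many short multiple-choice items can reach high reliability while an assessment built from one or two essay-length responses cannot:

```python
# Spearman-Brown prophecy formula: reliability of a test lengthened by a
# factor k, given the reliability r1 of a single unit (item or question).
# The unit reliabilities below are assumed purely for illustration.
def spearman_brown(r1: float, k: float) -> float:
    return k * r1 / (1 + (k - 1) * r1)

# One or two essay-length questions, even if each is individually quite
# informative (assumed r1 = 0.30), make a short and therefore unreliable test:
print(spearman_brown(0.30, 2))    # two essays:  ~0.46
# Many weaker but independent MCQ items (assumed r1 = 0.05 each) aggregate
# into a highly reliable paper:
print(spearman_brown(0.05, 120))  # 120 items:   ~0.86
```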

Another way of understanding the validity of the multiple-choice format is to compare the responses of candidates at different phases of the learning process with the stages of increasing proficiency posited by Dreyfus et al . [ 16 ] (Table  1 ). Here the first column comprises the stages of learning; in this context, we shall regard stage of learning as synonymous with level of proficiency or expertise, which is a measure of the effectiveness of problem-solving skill. The second column contains descriptors for each stage, chosen for their relevance to the complex problem-solving posed by a well-constructed, context-rich multiple-choice item. The third column contains a description of the likely performance on that item of a candidate at that stage of proficiency. The relationship between proficiency and performance on a complex multiple-choice item is in fact remarkably direct. The candidate who has reached the stage of proficiency or expertise will be more likely to select the correct response than candidates at a lower level, and the more widely such proficiency is spread across the domain, the higher the aggregate score in the assessment. Though the score for a standard multiple-choice item is binary (all or nothing), the assessment as a whole is not. Whereas candidates in the top categories are likely to arrive at a correct solution most of the time, and students in the lowest category hardly ever, the middle-order candidates with less secure mental models will answer with less confidence but will, in a number of items proportional to their proficiency, arrive at the correct solution, their mental models proving sufficiently adequate for the purpose. Over a large number of items, such a multiple-choice assessment will therefore provide a highly accurate indication of the level of proficiency of the candidate. To avoid confounding, however, it is essential that the options are set such that cueing is eliminated.
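
A simple way to see how all-or-nothing items aggregate into a graded measure of proficiency is to simulate responses under a basic Rasch-type model. The proficiency values, stage labels and item difficulties below are assumptions for illustration only, not an analysis of Table 1; the point is simply that expected and simulated test scores rise steadily with the candidate’s stage of proficiency.

```python
# Illustrative sketch with assumed parameters: binary items, but a graded
# aggregate score, in the spirit of a simple Rasch (one-parameter logistic) model.
import numpy as np

rng = np.random.default_rng(1)
n_items = 150
difficulties = rng.normal(0.0, 1.0, n_items)  # assumed spread of item difficulty

def p_correct(theta: float) -> np.ndarray:
    """Probability of answering each item correctly at proficiency theta."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulties)))

# Hypothetical candidates at increasing stages of proficiency, on an arbitrary
# logit scale (the labels are loosely Dreyfus-like, not Table 1 itself).
for label, theta in [("novice", -2.0), ("competent", 0.0),
                     ("proficient", 1.0), ("expert", 2.0)]:
    expected = p_correct(theta).mean()
    simulated = rng.binomial(1, p_correct(theta)).mean()
    print(f"{label:>10}: expected score {expected:.2f}, simulated score {simulated:.2f}")
```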

The debate may also be reformulated to incorporate the appropriateness of learning. Deep learning is characterised by an understanding of the meaning underlying knowledge, reflection on the interrelationships of items of information, understanding of the application of knowledge to everyday experience, integration of information with prior learning, the ability to differentiate between principle and example and the organisation of knowledge into a coherent, synthetic structure [ 99 ],[ 100 ]—essentially an alternative formulation of the mental model. One can thus argue that the candidate who possesses deep knowledge has, by the very fact of that possession, demonstrated that they have the sort of comprehensive and intuitive understanding of the subject—in short, the appropriate mental models as described by Jonassen and Strobel [ 97 ],[ 101 ]—to allow the information to be used for problem-solving. Correspondingly, the weak student lacks deep knowledge, and this will be exposed by a well-constructed multiple-choice assessment, provided that the items are written in a manner which explores the higher cognitive levels of learning.

Therefore, if candidates demonstrate evidence of extensive, deeply-learned knowledge, and the ability to solve complex problems, be it through the medium of multiple-choice assessment or any other form of assessment, then it is safe to assume that they will be able to apply this knowledge in practice. This accounts for the extensive correlation noted between multiple-choice performance, performance in open-ended assessments, and tests of subsequent performance in an authentic environment.

The argument that multiple-choice questions do not test higher-order cognitive skills, and consequently lack validity, is not supported by the evidence. Some studies may have been confounded by the unfair comparison of high-order items in one format with low-order items in another; this cannot be discounted as partly responsible for the discrepancies noted in some of the work I have referenced, such as that of Hee-Sun et al . [ 73 ]. Yet where the cognitive order of the items has been carefully matched, a number of careful studies suggest that, particularly in science and medicine, the two modalities assess constructs which, though probably not identical, overlap to the extent that using both forms of assessment is redundant. Given the advantage of the multiple-choice format in reliability, efficiency and cost-effectiveness, the suggestion that open-ended items may be replaced entirely with multiple-choice items in summative assessment is one which deserves careful consideration. This counter-intuitive finding highlights our lack of understanding of the cognitive processes underlying both clinical competence and its assessment, and suggests that much further work remains to be done. Despite the MCQ format’s long pedigree, it is clear that we understand little about the cognitive architecture invoked by this form of assessment. The need for a greater role for theoretical models in assessment research has been stressed [ 27 ],[ 96 ]. As illustrated in this debate, medical teaching and assessment must be based on a solid theoretical framework, underpinned by reliable evidence. Hard evidence, combined with a plausible theoretical model which attempts to explain the observations on the basis of cognition, will provide the strongest basis for the identification of effective learning and assessment methodologies.

That the multiple-choice format demonstrates high validity is due in part to the observation that well-constructed, context-rich multiple-choice questions are fully capable of assessing higher orders of cognition, and that they call forth cognitive problem-solving processes which mirror those required in practice. On a theoretical basis it is even conceivable that the multiple-choice format will show superior performance in assessing proficiency in comparison with some versions of the open-ended format; there is indeed empirical evidence to support this in practice [ 56 ],[ 92 ]. Paradoxically, the open-ended format may demonstrate lower validity than well-written multiple-choice items, since attempts to improve reliability and reduce subjectivity by writing highly focused questions marked against standardised, prescriptive marking templates frequently “trivialize” the question, resulting in some increase in reproducibility at the expense of a significant loss of validity [ 120 ]. Indeed, I have argued that, based on an understanding of human cognition and problem-solving proficiency, context-rich multiple-choice assessments may be superior in assessing the very characteristics which the proponents of the open-ended format claim as a strength of that format.

Though current evidence supports the notion that open-ended items may well be redundant in summative assessment, this conclusion should not be uncritically extrapolated to situations where assessment for learning is important, such as formative assessment and summative assessment at early and intermediate stages of the medical programme, since conclusive evidence on the learning effects of the two formats is still awaited.

Author’s contribution

The author was solely responsible for reviewing the literature and for writing the article.

Author’s information

RJH is currently Dean and Head of the School of Clinical Medicine at the University of KwaZulu-Natal, Durban, South Africa. He studied at the University of Cape Town, specialising in Internal Medicine and subsequently hepatology, before moving to Durban as Professor of Medicine. He has a longstanding interest in medical education, and specifically in the cognitive aspects of clinical reasoning, an area in which he is currently supervising a number of research initiatives.

Abbreviations

MEQ: Modified essay question

MCQ: Multiple-choice question

SAQ: Short answer question

OSCE: Objective structured clinical examination

References

Siemens G: Connectivism: Learning as Network-Creation. [ http://www.elearnspace.org/Articles/networks.htm ]

Siemens G: Connectivism: A learning theory for the digital age. Int J Instr Technol Distance Learn. 2005, 2: 3-10.


Perkins DN, Salomon G: Learning transfer. International Encyclopaedia of adult education and training. Edited by: Tuijnman AC. 1996, Pergamon Press, Tarrytown, NY, 422-427. 2

Haskell EH: Transfer of learning: Cognition, Instruction, and Reasoning. 2001, Academic Press, New York

Spelke E: Initial Knowledge: Six Suggestions. Cognition on cognition. Edited by: Mehler J, Franck S. 1995, The MIT Press, Cambridge, MA US, 433-447.

Barnett SM, Ceci SJ: When and where do we apply what we learn? A taxonomy for far transfer. Psychol Bull. 2002, 128: 612-637.

Brown AL: Analogical Learning and Transfer: What Develops?. Similarity and Analogical Reasoning. Edited by: Vosniadou S, Ortony A. 1989, Cambridge University Press, New York, 369-412.

Gick ML, Holyoak KJ: Schema Induction and Analogical Transfer. 2004, Psychology Press, New York, NY US

Bloom BS: The Cognitive Domain. Taxonomy of Educational Objectives, Handbook I. 1956, David McKay Co Inc, New York

Anderson LW, Krathwohl DR, Airasian PW, Cruikshank KA, Mayer RE, Pintrich PR, Raths J, Wittrock MC: A Taxonomy for Learning, Teaching, and Assessing: a revision of Bloom's Taxonomy of Educational Objectives. 2001, Longman, New York

Anderson LW, Sosniak LA: Bloom's Taxonomy: A Forty-year Retrospective. Ninety-third yearbook of the National Society for the Study of Education: Part II. Edited by: Anderson LW, Sosniak LA. 1994, University of Chicago Press, Chicago IL

Conklin J: A taxonomy for learning, teaching, and assessing: a revision of Bloom's taxonomy of educational objectives. Educ Horiz. 2005, 83: 154-159.

Haladyna TM, Downing SM: A taxonomy of multiple-choice item-writing rules. Appl Meas Educ. 1989, 2: 37-51.

Haladyna TM: Developing and Validating Multiple-choice Test Items. 1999, L. Erlbaum Associates, Mahwah, NJ

Miller GE: The assessment of clinical skills/competence/performance. Acad Med. 1990, 65: S63-S67.

Dreyfus HL, Dreyfus SE, Athanasiou T: Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. 1986, Free Press, New York

Norcini JJ, Swanson DB, Grosso LJ, Webster GD: Reliability, validity and efficiency of multiple choice question and patient management problem item formats in assessment of clinical competence. Med Educ. 1985, 19: 238-247.

Taconnat L, Froger C, Sacher M, Isingrini M: Generation and associative encoding in young and old adults: The effect of the strength of association between cues and targets on a cued recall task. Exp Psychol. 2008, 55: 23-30.

Baddeley AD, Eysenck MW, Anderson M: Memory. 2010, Psychology Press, New York

Karpicke J, Grimaldi P: Retrieval-based learning: a perspective for enhancing meaningful learning. Educ Psychol Rev. 2012, 24: 401-418.

Rohrer D, Pashler H: Recent research on human learning challenges conventional instructional strategies. Educ Res. 2010, 39: 406-412.

Smith MA, Roediger HL, Karpicke JD: Covert retrieval practice benefits retention as much as overt retrieval practice. J Exp Psychol Learn Mem Cogn. 2013, 39: 1712-1725.

McDermott KB, Agarwal PK, D’Antonio L, Roediger HL, McDaniel MA: Both multiple-choice and short-answer quizzes enhance later exam performance in middle and high school classes. J Exp Psychol Appl. 2014, 20: 3-21.

Cutting MF, Saks NS: Twelve tips for utilizing principles of learning to support medical education. Med Teach. 2012, 34: 20-24.

Schuwirth LWT, Van der Vleuten CPM: General overview of the theories used in assessment: AMEE Guide No. 57. Med Teach. 2011, 33: 783-797.

Van der Vleuten CP, Schuwirth LW: Assessing professional competence: from methods to programmes. Med Educ. 2005, 39: 309-317.

Schuwirth L, Colliver J, Gruppen L, Kreiter C, Mennin S, Onishi H, Pangaro L, Ringsted C, Swanson D, Van der Vleuten C, Wagner-Menghin M: Research in assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011, 33: 224-233.

Schuwirth LWT, Van der Vleuten CPM: Programmatic assessment and Kane's validity perspective. Med Educ. 2012, 46: 38-48.

Case SM, Swanson DB: Constructing Written Test Questions for the Basic and Clinical Sciences. 2002, National Board of Medical Examiners, Philadelphia, 3

Norcini J, Anderson B, Bollela V, Burch V, Costa MJ, Duvivier R, Galbraith R, Hays R, Kent A, Perrott V, Roberts T: Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011, 33: 206-214.

Shepard LA: The role of assessment in a learning culture. Educ Res. 2000, 29: 4-14.

Coburn CA, Yerkes RM: A study of the behavior of the crow corvus americanus Aud. By the multiple choice method. J Anim Behav. 1915, 5: 75-114.

Yerkes RM, Coburn CA: A study of the behavior of the pig Sus Scrofa by the multiple choice method. J Anim Behav. 1915, 5: 185-225.

Brown W, Whittell F: Yerkes' multiple choice method with human adults. J Comp Psychol. 1923, 3: 305-318.

Yerkes RM: A New method of studying the ideational behavior of mentally defective and deranged as compared with normal individuals. J Comp Psychol. 1921, 1: 369-394.

Davidson CN: Now You See It: How the Brain Science of Attention Will Transform the Way We Live, Work, and Learn. 2011, Viking Press, New York

Frederiksen JR, Collins A: A Systems Approach to Educational Testing. Technical Report No. 2. 1990, Center for Technology in Education, New York

Guthrie JT: Testing higher level skills. J Read. 1984, 28: 188-190.

Nickerson RS: New directions in educational assessment. Educ Res. 1989, 18: 3-7.

Stratford P, Pierce-Fenn H: Modified essay question. Phys Ther. 1985, 65: 1075-1079.

Wass V, Van der Vleuten C, Shatzer J, Jones R: Assessment of clinical competence. Lancet. 2001, 357: 945.

Rotfield H: Are we teachers or job trainers?. Acad Mark Sci Q. 1998, 2: 2.

Crocker L, Algina J: Introduction to Classical & Modern Test Theory. 1986, Holt, Rinehart and Winston, Inc., Fort Worth, TX

Angoff W: Test reliability and effective test length. Psychometrika. 1953, 18: 1-14.

Palmer EJ, Devitt PG: Assessment of higher order cognitive skills in undergraduate education: modified essay or multiple choice questions? Research paper. BMC Med Educ. 2007, 7: 49-49.

Feletti GI, Smith EK: Modified essay questions: Are they worth the effort?. Med Educ. 1986, 20: 126-132.

Palmer EJ, Duggan P, Devitt PG, Russell R: The modified essay question: its exit from the exit examination?. Med Teach. 2010, 32: e300-e307.

Schuwirth LW, Van der Vleuten CPM: Different written assessment methods: what can be said about their strengths and weaknesses?. Med Educ. 2004, 38: 974-979.

Lukhele R, Thissen D, Wainer H: On the relative value of multiple-choice, constructed response, and examinee-selected items on two achievement tests. J Educ Meas. 1994, 31: 234-250.

Wainer H, Thissen D: Combining multiple-choice and constructed-response test scores: toward a Marxist theory of test construction. Appl Meas Educ. 1993, 6: 103-118.

Facione PA: The California Critical Thinking Skills Test--College Level. Technical Report #1. Experimental Validation and Content Validity. 1990, California Academic Press, Millbrae CA

Facione PA, Facione NC, Blohm SW, Giancarlo CAF: The California Critical Thinking Skills Test [Revised]. 2007, California Academic Press, Millbrae, CA

Rodriguez MC: Construct equivalence of multiple-choice and constructed-response items: A random effects synthesis of correlations. J Educ Meas. 2003, 40: 163-184.

Falk B, Ancess J, Darling-Hammond L: Authentic Assessment in Action: Studies of Schools and Students at Work. 1995, Teachers College Press, United States of America

Rethans JJ, Norcini JJ, Baron-Maldonado M, Blackmore D, Jolly BC, LaDuca T, Lew S, Page GG, Southgate LH: The relationship between competence and performance: implications for assessing practice performance. Med Educ. 2002, 36: 901-909.

Wilkinson TJ, Frampton CM: Comprehensive undergraduate medical assessments improve prediction of clinical performance. Med Educ. 2004, 38: 1111-1116.

Baker EL: Standards for Educational and Psychological Testing. 2012, Sage Publications, Inc.

Eignor DR: The Standards for Educational and Psychological Testing. APA Handbook of Testing and Assessment in Psychology, Vol 1: Test Theory and Testing and Assessment in Industrial and Organizational Psychology. Edited by: Geisinger KF, Bracken BA, Carlson JF, Hansen J-IC, Kuncel NR, Reise SP, Rodriguez MC. 2013, American Psychological Association, Washington, DC, US, 245-250.

Eignor DR: Standards for the development and use of tests: The Standards for Educational and Psychological Testing. Eur J Psychol Assess. 2001, 17: 157-163.

Downing SM: Validity: on the meaningful interpretation of assessment data. Med Educ. 2003, 37: 830.

Messick S: The interplay of evidence and consequences in the validation of performance assessments. Educ Res. 1994, 23: 13-23.

Kuechler WL, Simkin MG: Why is performance on multiple-choice tests and constructed-response tests Not more closely related? theory and an empirical test. Decis Sci J Innov Educ. 2010, 8: 55-73.

Norman GR, Smith EK, Powles AC, Rooney PJ: Factors underlying performance on written tests of knowledge. Med Educ. 1987, 21: 297-304.

Bacon DR: Assessing learning outcomes: a comparison of multiple-choice and short-answer questions in a marketing context. J Mark Educ. 2003, 25: 31-36.

Kastner M, Stangla B: Multiple choice and constructed response tests: Do test format and scoring matter?. Procedia - Social and Behav Sci. 2011, 12: 263-273.

Nichols P, Sugrue B: The lack of fidelity between cognitively complex constructs and conventional test development practice. Educ Measurement: Issues Pract. 1999, 18: 18-29.

Bennett RE, Rock DA, Wang M: Equivalence of free-response and multiple-choice items. J Educ Meas. 1991, 28: 77-92.

Bridgeman B, Rock DA: Relationships among multiple-choice and open-ended analytical questions. J Educ Meas. 1993, 30: 313-329.

Thissen D, Wainer H: Are tests comprising both multiple-choice and free-response items necessarily less unidimensional?. J Educ Meas. 1994, 31: 113.

Lissitz RW, Xiaodong H, Slater SC: The contribution of constructed response items to large scale assessment: measuring and understanding their impact. J Appl Testing Technol. 2012, 13: 1-52.

Traub RE, Fisher CW: On the equivalence of constructed- response and multiple-choice tests. Appl Psychol Meas. 1977, 1: 355-369.

Martinez ME: Cognition and the question of test item format. Educ Psychol. 1999, 34: 207-218.

Hee-Sun L, Liu OL, Linn MC: Validating measurement of knowledge integration in science using multiple-choice and explanation items. Appl Meas Educ. 2011, 24: 115-136.

Wilson M, Wang W-C: Complex composites: Issues that arise in combining different modes of assessment. Appl Psychol Meas. 1995, 19: 51-71.

Ercikan K, Schwartz RD, Julian MW, Burket GR, Weber MM, Link V: Calibration and scoring of tests with multiple-choice and constructed-response item types. J Educ Meas. 1998, 35: 137-154.

Epstein ML, Lazarus AD, Calvano TB, Matthews KA, Hendel RA, Epstein BB, Brosvic GM: Immediate feedback assessment technique promotes learning and corrects inaccurate first responses. Psychological Record. 2002, 52: 187-201.

Schuwirth LWT, Van der Vleuten CPM: Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011, 33: 478-485.

Bridgeman B, Morgan R: Success in college for students with discrepancies between performance on multiple-choice and essay tests. J Educ Psychol. 1996, 88: 333-340.

Bleske-Rechek A, Zeug N, Webb RM: Discrepant performance on multiple-choice and short answer assessments and the relation of performance to general scholastic aptitude. Assessment Eval Higher Educ. 2007, 32: 89-105.

Hakstian AR: The Effects of Type of Examination Anticipated on Test Preparation and Performance. J Educ Res. 1971, 64: 319.

Scouller K: The influence of assessment method on Students' learning approaches: multiple choice question examination versus assignment essay. High Educ. 1998, 35: 453-472.

Thomas PR, Bain JD: Contextual dependence of learning approaches: The effects of assessments. Human Learning: J Pract Res Appl. 1984, 3: 227-240.

Watkins D: Factors influencing the study methods of Australian tertiary students. High Educ. 1982, 11: 369-380.

Minbashian A, Huon GF, Bird KD: Approaches to studying and academic performance in short-essay exams. High Educ. 2004, 47: 161-176.

Yonker JE: The relationship of deep and surface study approaches on factual and applied test-bank multiple-choice question performance. Assess Eval Higher Educ. 2011, 36: 673-686.

Joughin G: The hidden curriculum revisited: a critical review of research into the influence of summative assessment on learning. Assess Eval Higher Educ. 2010, 35: 335-345.

Scouller KM, Prosser M: Students' experiences in studying for multiple choice question examinations. Stud High Educ. 1994, 19: 267.

Hadwin AF, Winne PH, Stockley DB, Nesbit JC, Woszczyna C: Context moderates students' self-reports about how they study. J Educ Psychol. 2001, 93: 477-487.

Birenbaum M: Assessment and instruction preferences and their relationship with test anxiety and learning strategies. High Educ. 2007, 53: 749-768.

Birenbaum M: Assessment preferences and their relationship to learning strategies and orientations. High Educ. 1997, 33: 71-84.

Smith SN, Miller RJ: Learning approaches: examination type, discipline of study, and gender. Educ Psychol. 2005, 25: 43-53.

Rabinowitz HK, Hojat M: A comparison of the modified essay question and multiple choice question formats: their relationship to clinical performance. Fam Med. 1989, 21: 364-367.

Paterson DG: Do new and old type examinations measure different mental functions?. School Soc. 1926, 24: 246-248.

Schuwirth LW, Verheggen MM, Van der Vleuten CPM, Boshuizen HP, Dinant GJ: Do short cases elicit different thinking processes than factual knowledge questions do?. Med Educ. 2001, 35: 348-356.

Tanner DE: Multiple-choice items: Pariah, panacea or neither of the above?. Am Second Educ. 2003, 31: 27.

Cilliers FJ, Schuwirth LW, van der Vleuten CP: Modelling the pre-assessment learning effects of assessment: evidence in the validity chain. Med Educ. 2012, 46: 1087-1098.

Jonassen DH, Strobel J: Modeling for Meaningful Learning. Engaged Learning with Emerging Technologies. Edited by: Hung D. 2006, Springer, Amsterdam, 1-27.

Derry SJ: Cognitive schema theory in the constructivist debate. Educ Psychol. 1996, 31: 163-174.

Kim MK: Theoretically grounded guidelines for assessing learning progress: cognitive changes in Ill-structured complex problem-solving contexts. Educ Technol Res Dev. 2012, 60: 601-622.

Mayer RE: Models for Understanding. Rev Educ Res. 1989, 59: 43-64.

Jonassen D, Strobel J, Gottdenker J: Model building for conceptual change. Interact Learn Environ. 2005, 13: 15-37.

Jonassen DH: Tools for representing problems and the knowledge required to solve them. Edited by: Tergan S-O, Keller T. 2005, Springer, Berlin, Heidelberg, 82-94.

Bogard T, Liu M, Chiang Y-H: Thresholds of knowledge development in complex problem solving: a multiple-case study of advanced Learners' cognitive processes. Educ Technol Res Dev. 2013, 61: 465-503.

Van Gog T, Ericsson KA, Rikers RMJP: Instructional design for advanced learners: establishing connections between the theoretical frameworks of cognitive load and deliberate practice. Educ Technol Res Dev. 2005, 53: 73-81.

Schmidt HG, Norman GR, Boshuizen HP: A cognitive perspective on medical expertise: theory and implication. Acad Med. 1990, 65: 611-621.

Schmidt HG, Rikers RMJP: How expertise develops in medicine: knowledge encapsulation and illness script formation. Med Educ. 2007, 41: 1133-1139.

Norman G, Young M, Brooks L: Non-analytical models of clinical reasoning: the role of experience. Med Educ. 2007, 41: 1140-1145.

Ericsson KA, Prietula MJ, Cokely ET: The Making of an Expert. Harv Bus Rev. 2007, 85: 114-121.

Hoffman RR: How Can Expertise be Defined? Implications of Research From Cognitive Psychology. Exploring Expertise. Edited by: Williams R, Faulkner W, Fleck J. 1996, University of Edinburgh Press, Edinburgh, 81-100.

Norman GR: Problem-solving skills, solving problems and problem-based learning. Med Educ. 1988, 22: 279-286.

Ifenthaler D, Seel NM: Model-based reasoning. Comput Educ. 2013, 64: 131-142.

Jonassen D: Using cognitive tools to represent problems. J Res Technol Educ. 2003, 35: 362-381.

Mayer RE, Wittrock MC: Problem-Solving Transfer. Handbook of Educational Psychology. Edited by: Berliner DC, Calfee RC. 1996, Macmillan Library Reference USA, New York, NY, 47-62.

Zhang J, Norman DA: Representations in distributed cognitive tasks. Cogn Sci. 1994, 18: 87-122.

Simon HA: Information-Processing Theory of Human Problem Solving. Handbook of Learning & Cognitive Processes: V Human Information. Edited by: Estes WK. 1978, Lawrence Erlbaum, Oxford England, 271-295.

Jensen JL, Woodard SM, Kummer TA, McDaniel MA: Teaching to the test…or testing to teach: exams requiring higher order thinking skills encourage greater conceptual understanding. Educ Psychol Rev. 2014, 26: 307-329.

Cohen-Schotanus J, Van der Vleuten CPM: A standard setting method with the best performing students as point of reference: practical and affordable. Med Teach. 2010, 32: 154-160.

Desjardins I, Touchie C, Pugh D, Wood TJ, Humphrey-Murto S: The impact of cueing on written examinations of clinical decision making: a case study. Med Educ. 2014, 48: 255-261.

Pretz JE, Naples AJ, Sternberg RJ: Recognizing, Defining, and Representing Problems. The Psychology of Problem Solving. Edited by: Davidson JE, Sternberg RJ. 2003, Cambridge University Press, New York, NY US, 3-30.

Schuwirth LWT, Schuwirth LWT, Van der Vleuten CPM: ABC of learning and teaching in medicine: written assessment. BMJ: British Med J (International Edition). 2003, 326: 643-645.

Download references

Acknowledgements

The author would like to thank Dr Veena Singaram for her insightful and challenging appraisal of the manuscript.

Author information

Authors and affiliations

Clinical and Professional Practice Research Group, School of Clinical Medicine, University of KwaZulu-Natal, Durban, 4013, South Africa

Richard J Hift


Corresponding author

Correspondence to Richard J Hift.

Additional information

Competing interests

The author declares that he has no competing interests.

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Hift, R.J. Should essays and other “open-ended”-type questions retain a place in written summative assessment in clinical medicine? BMC Med Educ 14, 249 (2014). https://doi.org/10.1186/s12909-014-0249-2


Received: 08 May 2014

Accepted: 07 November 2014

Published: 28 November 2014

DOI: https://doi.org/10.1186/s12909-014-0249-2


Keywords

  • Conceptual change
  • Mental models
  • Multiple choice



Open-ended questions vs. close-ended questions: examples and how to survey users

Unless you’re a mind reader, the only way to find out what your users are thinking is to ask them. That's what surveys are for. 

But the way you ask a question often determines the kind of answer you get—and one of the first decisions you have to make is: are you going to ask an open-ended or a closed-ended question?


Understanding the difference between open-ended and close-ended questions helps you ask better, more targeted questions, so you can get actionable answers. The question examples we cover in this article look at open- and closed-ended questions in the context of a website survey, but the principle applies across any type of survey you may want to run. 

Start from the top or skip ahead to 

What’s the difference between open-ended and closed-ended questions?

4 tips on how to craft your survey questions for a maximum response rate

5 critical open-ended questions to ask customers

When to ask open-ended questions vs. closed-ended questions

Open-ended vs. close-ended questions: what’s the difference?

Open-ended questions are questions that cannot be answered with a simple ‘yes’ or ‘no’, and instead require the respondent to elaborate on their points.

Open-ended questions help you see things from a customer’s perspective as you get feedback in their own words instead of stock answers. You can analyze open-ended questions using spreadsheets, view qualitative research and data analysis trends, and even spot elements that stand out with word cloud visualizations.
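As a rough illustration of that kind of analysis, the Python sketch below tallies the most frequent words across a handful of open-ended answers. The sample responses and the tiny stop-word list are invented for the example; a real project would more likely lean on a spreadsheet, a word-cloud tool, or a fuller text-analysis library.

```python
from collections import Counter
import re

# Hypothetical open-ended survey answers (invented for illustration)
responses = [
    "The checkout page was confusing and slow",
    "Slow loading times on mobile",
    "Loved the product range, but shipping was slow",
]

# A minimal stop-word list so filler words don't dominate the counts
stop_words = {"the", "was", "and", "on", "but", "a"}

words = []
for answer in responses:
    tokens = re.findall(r"[a-z]+", answer.lower())  # lowercase, letters only
    words.extend(t for t in tokens if t not in stop_words)

# The most frequent remaining words hint at recurring themes ("slow" stands out here)
for word, count in Counter(words).most_common(5):
    print(f"{word}: {count}")
```

Even a count this crude surfaces the theme a word cloud would show: respondents keep mentioning speed.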

Closed-ended questions are questions that can only be answered by selecting from a limited number of options, usually multiple-choice questions with a single-word answer (‘yes’ or ‘no’) or a rating scale (e.g. from strongly agree to strongly disagree).

Closed-ended questions give limited insight, but can easily be analyzed for quantitative data. For example, one of the most popular closed questions in market research is the Net Promoter Score® (NPS) survey, which asks people “How likely are you to recommend this product/service on a scale from 0 to 10?” and uses numerical answers to calculate overall score trends. Check out our NPS survey template to see this closed-ended question in action.
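To make the arithmetic behind that score concrete, here is a minimal sketch of the standard NPS calculation: respondents answering 9–10 count as promoters, 0–6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. The ratings below are invented sample data.

```python
def net_promoter_score(ratings):
    """NPS from a list of 0-10 answers: % promoters minus % detractors."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Hypothetical answers to "How likely are you to recommend us?"
ratings = [10, 9, 8, 7, 6, 10, 3, 9, 8, 10]
print(net_promoter_score(ratings))  # 30.0 -> 5 promoters, 2 detractors, 3 passives
```

Tracking that single number over time is exactly the kind of trend analysis that closed-ended answers make easy.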

[Image: example closed-ended questions (left column) shown side by side with open-ended versions of the same questions (right column)]

Let’s take a look at the examples of open-ended questions vs. closed-ended questions above.

All the closed questions in the left column can be responded to with a one-word answer that gives you the general sentiment of each user and a few useful data points about their satisfaction, which help you look at trends and percentages. For example, did the proportion of people who declared themselves happy with your website change in the last three, six, or 12 months?
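For instance, if each answer to a closed question such as “Are you happy with this website?” is stored with its date, a few lines of Python are enough to compare the share of “yes” answers over the last three, six, and twelve months. The responses and dates below are invented sample data.

```python
from datetime import date, timedelta

# Hypothetical (answer, date) pairs for "Are you happy with this website?"
answers = [
    ("yes", date(2024, 1, 15)), ("no", date(2024, 2, 3)),
    ("yes", date(2024, 5, 20)), ("yes", date(2024, 8, 1)),
    ("no", date(2024, 8, 30)), ("yes", date(2024, 9, 10)),
]

today = date(2024, 9, 15)

def share_of_yes(days):
    """Proportion of 'yes' answers received in the last `days` days."""
    recent = [a for a, d in answers if today - d <= timedelta(days=days)]
    return sum(1 for a in recent if a == "yes") / len(recent)

for label, days in [("3 months", 90), ("6 months", 180), ("12 months", 365)]:
    print(f"last {label}: {share_of_yes(days):.0%}")
```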

The open-ended questions in the right column let customers provide detailed responses with additional information so you understand the context behind a problem or learn more about your unique selling points . If you’re after qualitative data like this, the easy way to convert closed-ended into open-ended questions is to consider the range of possible responses and re-word your questions to allow for a free-form answer.

💡 Pro tip: when surveying people on your website with Hotjar Surveys, our Survey Logic feature lets you ask follow-up questions that help you find out the what and the why behind your users’ actions.

For more inspiration, here are 20+ real examples of open- and closed-ended questions you can ask on your website, along with a bunch of free pre-built survey templates and 50+ more survey questions to help you craft a better questionnaire for your users. 

Or, take advantage of Hotjar’s AI for Surveys , which generates insightful survey questions based on your research goal in seconds and prepares an automated summary report with key takeaways and suggested next steps once results are in.

Use Hotjar to build your survey and get the customer insights you need to grow your business.

How to ask survey questions for maximum responses

It’s often easy to lead your customers to the answer you want, so make sure you’re following these guidelines:

1. Embrace negative feedback

Some customers may find it hard to leave negative feedback if your questions are worded poorly.

For example, “We hope there wasn’t anything bad about your experience with us, but if so, please let us know” is better phrased neutrally as “Let us know if there was anything you’d like us to do differently.” It might sting a little to hear negative comments, but it’s your biggest opportunity to really empathize with customers and fuel your UX improvements moving forward.

2. Don’t lead your customers

“You bought 300 apples over the past year. What's your favorite fruit?” is an example of a leading question . You just planted the idea of an apple in your customers' mind. Valuable survey questions are open and objective—let people answer them in their own words, from their own perspective, and you’ll get more meaningful answers.

3. Avoid asking ‘and why?’

Tacking “and why?” on at the end of a question will only give you simple answers. And, no, adding “and why?” will not turn closed-ended questions into open-ended ones!

Asking “What did you purchase today, and why?” will give you an answer like “3 pairs of socks for a gift” (and that’s if you’re lucky), whereas wording the question as “Why did you choose to make a purchase today?” allows for an open answer like, “I saw your special offer and bought socks for my niece.”

4. Keep your survey simple

Not many folks love filling in a survey that’s 50 questions long and takes an hour to complete. For the most effective data collection (and decent response rates), you need to keep the respondents’ attention span in mind. Here’s how:

Keep question length short: good questions are one sentence long and worded as concisely as possible

Limit the number of questions: take your list of planned questions and be ruthless when narrowing them down. Keep the questions you know will lead to direct insight and ditch the rest.

Show survey progress: a simple progress bar, or an indication of how many questions are left, motivates users to finish your survey

5 of our favorite open-ended questions to ask customers

Now that you know how to ask good open-ended questions , it’s time to start putting the knowledge into practice.

To survey your website users, use Hotjar's feedback tools to run on-page surveys, collect answers, and visualize results. You can create surveys that run on your entire site, or choose to display them on specific pages (URLs).

Different types of Hotjar surveys

As for what to ask—if you're just getting started, the five open-ended questions below are ideal for any website, whether ecommerce or software-as-a-service:

1. How can we make this page better?

If you missed the expectations set by a customer, you may have over-promised or under-delivered. Ask users where you missed the mark today, and you’ll know how to properly set, and meet, expectations in the future. An open platform for your customers to tell you their pain points is far more valuable for increasing customer satisfaction than guessing what improvements you should make. Issues could range from technical bugs to lack of product range.

2. Where exactly did you first hear about us?

An open “How did you find out about us?” question leaves users to answer freely, without leading them to a stock response, and gives you valuable information that might be harder to track with traditional analytics tools.

We have a traffic attribution survey template ready and waiting for you to get started.

3. What is stopping you from [action] today?

A “What is stopping you?” question can be shown on exit pages ; the open-form answers will help you identify the barriers to conversion that stop people from taking action.

Questions like this can also be triggered in a post-purchase survey on a thank you or order confirmation page. This type of survey only focuses on confirmed customers: after asking what almost stopped them, you can address any potential obstacles they highlight and fix them for the rest of your site visitors.

4. What are your main concerns or questions about [product/service]?

Finding out the concerns and objections of potential customers on your website helps you address them in future versions of the page they’re on and the products they’ll use. It sounds simple, but you’ll be surprised by how candid and helpful your users will be when answering this one.

Do you want to gather feedback on your product specifically? Learn what to improve and understand what users really think with our product feedback survey template and this expert advice on which product questions to ask when your product isn't selling.

5. What persuaded you to [take action] today?

Learning what made a customer click ‘buy now’ or ‘sign up’ helps you identify your levers. Maybe it’s low prices, fast shipping, or excellent customer service—whatever the reason, finding out what draws customers in and convinces them to stay helps you emphasize these benefits to other users and, ultimately, increase conversions.

Ask the right questions at the right time to get the insights you need

Whether you’re part of a marketing, product, sales, or user research team, asking the right questions through customer interviews or on-site surveys helps you collect feedback to create better user experiences and increase conversions and sales.

The type of question you choose depends on what you’re trying to achieve:

Ask a closed-ended question when you want answers that can be plotted on a graph and used to show trends and percentages. For example, answers to the closed-ended question “Do you trust the information on [website]?” help you understand the proportion of people who find your website trustworthy versus those who do not.

Ask an open-ended question when you want in-depth answers to better understand your customers and their needs , get more context behind their actions, and investigate the reasons behind their satisfaction or dissatisfaction with your product. For example, the open-ended question “If you could change anything on this page, what would it be?” allows your customers to express, in their own words, what they think you should be working on next.

Not only is the kind of question you ask important—but the moment you ask it is equally relevant. Hotjar Surveys , our online survey tool , has a user-friendly survey builder that lets you effortlessly craft a survey and embed it anywhere on your web page to ask the right questions at the right time and place.



Open-Ended Questions Essay Examples

  • To find inspiration for your paper and overcome writer’s block
  • As a source of information (ensure proper referencing)
  • As a template for your assignment

Converting Closed-Ended Questions to Open-Ended Questions

Asking open-ended questions is a critical assessment skill and a crucial part of motivational interviewing. Open-ended questions encourage the service receiver to elaborate on the problem instead of providing simple answers. Such questions help both the client and the service provider understand the key aspects of the problem in depth (DiClemente et al., 2017). In the situation with Mrs. Lopez, I would ask three open-ended questions to understand her condition.

First, I would say, “Your daughter says that your interest in recreational activities has decreased lately. Please help me understand the reason for that.” In asking this question, I would demonstrate that both her daughter and I are worried about her current condition. Additionally, the question is designed to help Mrs. Lopez search for a reason for her change in mood, and it would help to rule out some causes of her behavior, such as abusive behavior by staff or health conditions like chronic pain.

Second, I would ask Mrs. Lopez, “How would you describe your relationship with your daughter?” This question is designed to help the service receiver reflect on that relationship. By asking it, I plan to learn whether the relationship with her daughter serves as a coping resource or as a source of negative feelings that may contribute to depressive symptoms. Finally, I would ask, “What are your thoughts about suicide?” This question is designed to help the service provider understand her attitude toward suicide and would help to assess the likelihood of a suicide attempt.

Asking open-ended questions is crucial during the assessment of clients. Practitioners are sometimes inclined to use closed-ended questions to save time. However, such an approach often impairs the assessment process. Thus, it is crucial for advanced human service practitioners to be able to convert closed-ended questions into open-ended ones. The exercise below provides several examples of how common closed-ended questions can be changed to support the spirit of motivational interviewing.

  1. “How are you feeling today?” → “What has been bothering you today?”
  2. “How is your personal life?” → “I was wondering if you could tell me about your significant other.”
  3. “What is your typical drinking pattern?” → “Why do you think your drinking pattern may have caused discomfort to your relatives?”
  4. “How was your school day?” → “What was the most interesting thing you learned at school today?”

Sometimes, questions that seem open-ended receive closed-ended responses. For instance, “How are you feeling today?” is a common question practitioners ask service receivers. This question often receives a short close-ended answer like “Fine, thanks!” Thus, sometimes the questions need to be rephrased to be more specific. For instance, a practitioner may ask, “What has been bothering you today?” which will encourage the client to reflect on the events of the day. Additionally, the question can be transformed into a request, such as “I was wondering if you could tell me what positive moments you experienced today.” Such a request demonstrates that the practitioner is actually interested in the answer and helps avoid cliché questions, which can receive closed-ended answers.

Another example of rephrasing a seemingly open-ended question is Question 4 above. Children often come back with “Fine!” after being asked how school was today. The problem is that the question has turned into a cliché, and children do not feel that their parents are actually interested in their school life. By asking, “What was the most interesting thing you learned at school today?” a parent or a service provider can encourage the child to reflect on the events of the day and see the positive side of going to school.

Asking open-ended questions is a crucial skill that advanced human service practitioners should master. Open-ended questions invite collaboration with the service provider and deeper reflection on the issue of interest (DiClemente et al., 2017). Even though open-ended questions may be difficult to quantify for formal assessments, they help the client understand the underlying causes of events (DiClemente et al., 2017). Moreover, such questions help both the service provider and the service receiver identify possibilities for change (DiClemente et al., 2017). Open-ended questions also help to reveal “silent” issues, the ones that the practitioner would never have thought to ask about specifically (Geer, 1991). Thus, advanced human service providers should be able to use open-ended questions appropriately.

DiClemente, C. C., Corno, C. M., Graydon, M. M., Wiprovnick, A. E., & Knoblach, D. J. (2017). Motivational interviewing, enhancement, and brief interventions over the last decade: A review of reviews of efficacy and effectiveness. Psychology of Addictive Behaviors, 31(8), 862.

Geer, J. G. (1991). Do open-ended questions measure “salient” issues? Public Opinion Quarterly, 55(3), 360-370.




The Morning

The U.S. Open Concludes

Let us consider the grief of the lapsed sports fan.

An illustration of people watching a tennis match.

By Melissa Kirsch

“It’s like Black Friday at Walmart,” a tennis fan told The Times of the record-breaking attendance at this year’s U.S. Open. This sort of review might make a normal person glad they’d opted out of attending the tournament. But the more I heard of the colossal throngs, the endless lines attendees were enduring to procure a souvenir hat or a Honey Deuce or just to get inside the stadium complex, the more I wished I were there.

I have, for most of my teen and adult life, defined myself as a tennis fan. It’s been a sort of badge of honor: I may not remember the rules of football from one Super Bowl to the next, but I can recall in bright detail the intricacies of the John McEnroe-Jimmy Connors rivalry of the 1980s. Being into tennis has given me a connection to the larger fraternity of sports fans, the parking lot tailgaters and March Madness bracketeers and the people who get up at 4 a.m. to watch World Cup matches.

John Jeremiah Sullivan wrote that tennis is “as close as we come to physical chess, or a kind of chess in which the mind and body are at one in attacking essentially mathematical problems. So, a good game not just for writers but for philosophers, too.” For this mostly indoor cat who’s more at home discussing literature than LeBron, tennis has provided a passage from the cerebral to the physical, a means of getting out of my head.

In May, in a cafe in Dublin, I struck up a conversation with a woman at a neighboring table. She’d just finalized her plans to attend Wimbledon and was abuzz with anticipation for the players she hoped to see. We chatted about Coco Gauff and Carlos Alcaraz, Novak Djokovic and Frances Tiafoe, top seeds with good chances of going far. Sensing she’d found a confederate, she moved on to the Italian Open, which was going on as we spoke. As she reeled off the stats of players I’d never heard of, I felt my tennis bona fides slipping. I tried to keep up — it felt good to be connecting with a stranger in a foreign country through the lingua franca of tennis — but I was lost. I could still deconstruct every stroke in Stan Wawrinka’s electric 2015 victory over Djokovic in the French Open final, but, for no good reason, I hadn’t really been engaged with the game since Roger Federer and Serena Williams retired in 2022. I, who used to mark tournament dates in my calendar as soon as they were announced, had essentially retired from tennis myself.

The grief of the lapsed fan is hardly a serious matter. With a little light internet research, one can get back into any sport — one could even accomplish this in the few remaining hours before the U.S. Open finals begin. My friend Justin, who texts me “!!!!” whenever something notable happens in a Grand Slam match on the (in recent years incorrect) assumption that I’m watching too, has probably not even noticed that I haven’t been responding all year.

I felt a little silly for even describing my U.S. Open FOMO as grief when I chatted about it this week with my colleague Sam Sifton. But he pointed out that he felt it, too, felt the poignancy of not attending the tournament, not taking part in the ritual of walking the boardwalk at Flushing Meadows from the train to the tennis center and back. I loved that part of going to the Open too, the magic-hour light radiant on the faces of fellow fans en route to a night match.


