Top 5 Rules for Great Multiple Choice Questions

For many of us who design e-learning, the multiple choice question is the default question format we add at the end of a chapter or module. But where did it come from? Is there any proof that multiple choice is effective? Are there alternatives that we should be using to test our learners? In this article we’ll dig a little deeper…

The multiple choice question was designed by Edward Thorndike, an American psychologist and expert on comparative psychology and the learning process. Thorndike put his testing expertise to work for the United States Army, where in 1914 (during World War I) he created the multiple choice test that is still in use by the United States military today.

Why Do We Use Multiple Choice Assessments?

Well, there are three main reasons why learning professionals default to the multiple choice question when designing training, often without considering the alternatives:

1) When used correctly, it works! Multiple choice questions provide a reliable way of delivering a knowledge check based on the information taught within the module. It is often difficult for a learner to cheat the system by guessing the answers to a well-written multiple choice question.

2) It is extremely quick and easy to write a multiple choice question: the author can simply review the critical information within the content and create questions from it.

3) When it comes to marking, whether automated or manual, multiple-choice questions are easy to analyse. This is especially true in today’s world of managing data via an LMS – an algorithm cannot evaluate an essay-style response to a question, but it can quickly assess hundreds of multiple-choice questions to provide an accurate report on performance.

 

What Are The Alternatives?

We know that multiple choice is the most common type of question used in e-learning – but what other options do we have? Well, we can divide question types into three categories: graded questions, survey questions and essay questions. (You can download a handy PDF guide displaying all of the question types available in Articulate Storyline by clicking here.)

1) Graded questions

A graded question is one that has either a correct/incorrect answer or a graded answer – an example of the latter could be ‘2 points for the correct answer, 1 point for the second-best option and 0 points for an incorrect answer’.

So a typical multiple-choice question will have one correct and three incorrect answers. (Quick tip – you should always ensure your multiple choice questions have the same total number of answers to give the learner continuity!)
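To make the grading idea concrete, here is a minimal sketch of how a graded answer using the ‘2 points / 1 point / 0 points’ scheme described above could be represented and marked. The data structure and names below are invented purely for illustration; real authoring tools and LMSs store questions in their own formats.

    # Invented sketch of a graded question with partial credit (2/1/0 scoring).
    question = {
        "stem": "Example question stem",
        "options": {
            "A": {"text": "Correct answer", "points": 2},
            "B": {"text": "Second-best option", "points": 1},
            "C": {"text": "Incorrect answer", "points": 0},
            "D": {"text": "Incorrect answer", "points": 0},
        },
    }

    def score(question: dict, chosen: str) -> int:
        """Award the points attached to the learner's chosen option (0 if unrecognised)."""
        option = question["options"].get(chosen)
        return option["points"] if option else 0

    print(score(question, "A"))  # -> 2 (full marks)
    print(score(question, "B"))  # -> 1 (partial credit for the second-best option)
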

Some of the more common examples of graded questions are:

  • Multiple-choice
  • True or false
  • Multiple response
  • Fill in the blank
  • Choose the sequence

 

2) Survey questions

The most common type of survey question is the Likert scale, and this type of question bridges the gap between graded questions and essay questions. It allows the learner to give their opinion on a scale of 1 to 10, excellent to poor, or agree to disagree (and any other variations you can think of!).

We have all seen these types of surveys, often sent to us in customer service questionnaires that are quickly deleted from our bulging inboxes!

3) Essay questions

An essay question is simply a question that requires the learner to type out an answer. This requires a lot more work from the learner, and consequently a lot more work for the person reviewing the answer.

In an LMS environment, this question type is less efficient because the LMS can’t recognise whether the answer is correct or not. However, it should be noted that this question type can be extremely effective if you are really trying to gauge the learner’s understanding of a specific topic. Often, a yes/no answer choice is not enough and the learner will want to expand upon what is being asked of them.

How These Question Formats Work in an LMS

So now that we understand the different types of questions we can use, what’s next? Well, in this article we are going to set aside survey and essay questions. I am assuming that most people are managing their courses via an LMS, and within this environment essay questions are more difficult to use.

The reason for this is that if we have 1,000 learners taking a multiple-choice question, we can quickly analyse the percentage of people who answered the question correctly, e.g. ‘73% of learners answered question 3 correctly’ – easy.

However, if 1,000 people have responded to an essay question, each of those responses must be analysed individually by a human being and then marked correct/incorrect, given a percentage score, or given a mark out of 10.
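To illustrate just how mechanical that multiple-choice analysis is, here is a minimal sketch of the kind of per-question report an LMS automates. The response records and field names below are invented for illustration; a real LMS would read this data from its own reporting database.

    from collections import defaultdict

    # Invented example data: (learner_id, question_number, answered_correctly).
    # Imagine 1,000 of these records per question in a real course.
    responses = [
        ("learner_001", 3, True),
        ("learner_002", 3, False),
        ("learner_003", 3, True),
    ]

    def percent_correct(responses):
        """Return {question_number: percentage of learners who answered correctly}."""
        totals = defaultdict(int)
        correct = defaultdict(int)
        for _, question, is_correct in responses:
            totals[question] += 1
            correct[question] += int(is_correct)
        return {q: round(100 * correct[q] / totals[q], 1) for q in totals}

    print(percent_correct(responses))  # -> {3: 66.7}
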

Whilst survey questions can be measured via an LMS, they are more complex than graded questions and are therefore used less frequently. They are also more suited to gauging opinion than to checking for understanding.

Graded Questions – Ideal for eLearning

So that leaves us with graded questions. These question types are an excellent method of collecting quick data from our courses.

Using a variety of these question types is a good idea if the software or platform delivering the course allows it. (This recommendation is based purely on giving the learner some variety.)

But let’s focus on multiple choice and how we can ensure our questions are effective. It’s safe to assume that all rapid-authoring tools and modern LMSs will provide this type of question, and many of the following tips are applicable to the other types of graded questions too…

How to Write a Great Multiple Choice Question

There are some really simple rules that you can follow in order to ensure your multiple-choice questions are effective.

In this article I am going to outline the 5 most important and easy-to-remember rules you should bear in mind when writing your multiple choice questions. I have chosen to leave out some of the more obvious ones, such as ensuring you randomise the order of the answers (i.e. make sure the correct answer is not ‘Answer 3’ on every question!)

One thing that is often missing from these types of lists is examples, so I have provided an example for each rule:

Rule 1: Test understanding, not memory

This sounds very obvious, but you would be surprised how often authors will scan the course looking for detail and create questions based on factual information, rather than testing the learner’s understanding of a topic. This often happens if the course author is not the subject matter expert (something to be aware of if you are outsourcing the development of your assessment).

Let’s take an example. Here are a couple of multiple choice questions testing you on the information delivered so far in this blog post (let’s see if you were really concentrating!)

In which year did Edward Thorndike start using multiple choice questions within the United States Army?

  1. 1912
  2. 1914
  3. 1915

 

That is a bad example of a question because it is only testing your memory – it is irrelevant whether you remember the exact year that the multiple choice question was first introduced!

Let’s take another example:

Why is a multiple choice question better than an essay question when delivered via an LMS?

  1. The answers to multiple choice questions are the only type of answers an LMS can read.
  2. An LMS cannot read and analyse answers from essay questions.
  3. Answers provided by multiple choice questions show a better representation of understanding than an essay question.

 

Can you see the difference? The first question is testing you on your memory but the second question is testing you on your understanding of the content discussed within the article.

Rule 2: Use believable distractors

A ‘distractor’ is an incorrect answer provided within a multiple choice question to make the learner think more carefully about the correct answer to the question.

One of the most annoying things for a learner is being forced to answer a series of questions where the answers are obvious. The key to writing a good distractor is that you should put more time into thinking of an incorrect answer than the correct answer. Often, the author will simply mark the correct answer and the distractors are an afterthought.

Let’s look at another example:

Which of the following is an example of a graded question?

  1. Connect the dots
  2. Fill in the missing word
  3. Multiple choice
  4. Snakes and ladders

 

In this example, all three of the distractors are obviously incorrect – obvious because none of the other answers has been discussed anywhere in this article – which means the learner simply has to rule out the three incorrect answers in order to find the correct one, i.e. a straightforward process of elimination.

Let’s look at another example:

Which of the following is an example of a graded question?

  1. Likert scale
  2. Text entry
  3. Multiple choice
  4. Essay question

 

In this example, each of the three incorrect answers (distractors) could plausibly be the correct answer if you hadn’t understood the preceding content. These distractors make the learner think carefully about the options, rather than using a process of elimination to identify the correct answer. They may also have the added benefit of encouraging the learner to review the learning material to ensure they fully understand what has been taught.

Rule 3: Make the most of the question’s stem

Using the correct stem for a question is one of the most fundamental rules you should be following when writing your multiple-choice questions.

Let’s take a look at the following example:

The responses from large volumes of graded questions:

  1. can be read through and marked individually by the SME 
  2. can be analysed and reported on by our LMS
  3. can be emailed directly to the elearning administrator

 

As you can probably tell from the example, the words ‘can be’ are repeated in all three of the answers. This is unnecessary repetition that creates a distraction for the learner, and it is very easy to fix.

The same question written correctly would look like this:

The responses from large volumes of graded questions can be:

  1. read through and marked individually by the SME 
  2. analysed and reported on by our LMS
  3. emailed directly to the elearning administrator

 

Always avoid repeating the same text in the answers when it can be moved into the question stem.

Rule 4: Avoid negatives

I have seen this happen several times in various online assessments – using negative statements within a question can be confusing to a learner and is bad practice.

Let’s look at a couple of examples: first, a question with the negative included in the stem, and second, the same question written correctly:

Which of the following question types is NOT classed as a graded question?

  1. multiple choice
  2. true/false
  3. fill in the blank
  4. Likert scale

 

Although the question and answers make complete sense, if the learner did not read the question thoroughly it would be easy to answer incorrectly. Remember – we are not trying to catch the learner out; we are simply trying to ensure that they understand what they have learned.

So the same question would be much more effective written in the following format:

Which of the following question types is classed as a survey question?

  1. multiple choice
  2. true/false
  3. fill in the blank
  4. Likert scale

 

In this example, the answers are exactly the same, but we have amended the question in order to remove the negative.

Rule 5: Be careful when using ‘all of the above’ or ‘none of the above’

Now this is one rule that is a little less black-and-white than the previous rules. That is because it is actually acceptable to use ‘all of the above’ or ‘none of the above’ as an answer.

However, this comes with a couple of disclaimers. First of all, if you use the word ‘above’ but your rapid-authoring software or LMS is randomising the answers, you may find that your question looks like this, regardless of how you entered it into the system:

Which of the following question types is classed as a graded question?

  1. multiple choice
  2. all of the above
  3. fill in the blank
  4. sequence

 

Obviously the problem here is that your system has randomised the answers and the word ‘above’ makes no sense because some of the answers are below! Common sense, but something to bear in mind.
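One workaround is to keep ‘all of the above’ or ‘none of the above’ pinned to the last position while the other answers shuffle. Here is a minimal sketch of that idea; the function below is invented for illustration, and some authoring tools offer an equivalent per-answer ‘do not shuffle’ setting instead.

    import random

    # Shuffle the answer options but keep "all/none of the above" pinned last,
    # so the wording still makes sense after randomisation. Invented sketch.
    PINNED = {"all of the above", "none of the above"}

    def shuffle_options(options):
        movable = [o for o in options if o.lower() not in PINNED]
        pinned = [o for o in options if o.lower() in PINNED]
        random.shuffle(movable)
        return movable + pinned

    options = ["multiple choice", "fill in the blank", "sequence", "all of the above"]
    print(shuffle_options(options))  # "all of the above" is always listed last
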

The second disclaimer for using ‘all of the above’ or ‘none of the above’ as a multiple-choice response is that it can often guide the learner to the correct answer. We will often see ‘all of the above’ used as the correct answer, and then never see it offered as an option in any of the other questions. So it is important to think carefully about the other distractors if you use this technique.

Conclusion

It’s safe to say that, given how simple it is to create the questions and report on the answers, multiple choice questions are here to stay. However, improving how we use these types of questions is essential if we want to deliver credible, effective e-learning.

Do you have any examples of multiple-choice questions used incorrectly that you can share? What type of questions do you prefer to use in your courses?

Useful Multiple Choice Assessment Resources

Another great article for learning about different rules can be seen here – 10 rules for writing multiple choice questions. There is also a useful resource with examples on the University of Texas website. And if you really want to get into the nitty-gritty behind multiple choice, there are several books published on the subject – this one being one of the most popular.