Beyond Pass/Fail: Using Assessment Results to Improve Learning

Most of the tests we give in the life sciences are mastery exams. We set a passing score and treat the learners’ outcomes as either pass or fail. If they pass, we say they have mastered the material. If they fail, we (typically) give them several more tries to pass. Ultimately our test results are binary: mastery or non-mastery. This is important, of course, but not terribly informative.

But there is another reason to test, one I call “Active Assessment.” Active Assessment treats the assessment itself as part of the learning process – and we have decades of research data to back up this approach. Active assessments include diagnostic exams, pretesting, cumulative testing, “priming” exams, and review and reinforcement exams.

But what about mastery exams? We have to give them; they are an absolute requirement in our compliance-driven business. It turns out, though, that even mastery exams can provide a lot of useful learning information if we spend a little time digging into the data behind “pass/fail.”

Here are some examples of useful exam results that you can mine, even from mastery exams:

How does an individual learner compare against his or her peers?

A. Compare individual learners in a sales district. While for each individual you need to know whether he or she has passed the exam (a binary result), it is still useful to know how that individual did compared to his or her peers. For example, a person with a score well below mastery needs to be remediated differently from a person with a score just below mastery. (A sketch covering both of these comparisons appears after item B, below.)


B. Compare individual learners against averages. Similarly, how does the individual learner compare to the district, the region, and the entire sales force?
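
If your LMS or assessment platform can export raw scores, both of these comparisons take only a few lines of analysis. Here is a minimal sketch in Python (pandas), assuming a hypothetical export file exam_results.csv with columns learner_id, district, region, and score (percent correct), and an assumed cut score of 80:

    import pandas as pd

    results = pd.read_csv("exam_results.csv")  # hypothetical score export

    # Benchmark each learner against his or her peers at every level.
    results["district_avg"] = results.groupby("district")["score"].transform("mean")
    results["region_avg"] = results.groupby("region")["score"].transform("mean")
    results["overall_avg"] = results["score"].mean()

    # How far above or below mastery did each learner land? A learner just
    # below the cut score needs different remediation than one far below it.
    PASSING_SCORE = 80  # assumed cut score
    results["gap_to_mastery"] = results["score"] - PASSING_SCORE

    print(results[["learner_id", "score", "district_avg",
                   "region_avg", "overall_avg", "gap_to_mastery"]])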


How do sales district scores compare?

A. Compare high, low and average district scores against one another. Do some sales districts routinely outperform others? If so, that’s an important piece of information for sales training.
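
A quick roll-up against the same hypothetical exam_results.csv export makes the high- and low-performing districts obvious:

    import pandas as pd

    results = pd.read_csv("exam_results.csv")  # hypothetical score export

    # High, low, and average score for each district, best-performing first.
    district_summary = (results.groupby("district")["score"]
                        .agg(["max", "min", "mean", "count"])
                        .sort_values("mean", ascending=False))
    print(district_summary)

Districts that consistently land at the top or bottom of this table are worth a closer look.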


Which learning objectives need to be reinforced?

A. What are the average scores by assessment learning objective for the entire sales force? Not all learning objectives are learned equally well. Which learning objectives turn out to be more difficult? Is it a problem with the training materials, the instructor (if there is one), or is this simply a more difficult topic?
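
Answering this requires item-level data rather than just total scores. Here is a sketch, assuming a hypothetical export item_results.csv with one row per learner per question and columns learner_id, district, question_id, learning_objective, and correct (1 or 0):

    import pandas as pd

    items = pd.read_csv("item_results.csv")  # hypothetical item-level export

    # Percent correct for each learning objective, weakest objectives first.
    objective_scores = (items.groupby("learning_objective")["correct"]
                        .mean()
                        .sort_values())
    print(objective_scores)

The objectives at the top of this list are the first candidates for reinforcement.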


Where do we need to direct our training/coaching resources?

A. Generate “heat maps” showing results by geographic area (district or region). You can dig a little deeper by combining the two previous views (district comparisons and learning objective scores) into a heat map, which shows how each learning objective performed in each geographic area.
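
Building the heat map from the same hypothetical item_results.csv is a pivot table plus a plotting call. This sketch uses seaborn; the column names are the same assumptions as above:

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    items = pd.read_csv("item_results.csv")  # hypothetical item-level export

    # Percent correct for each learning objective within each district.
    heat = items.pivot_table(index="learning_objective", columns="district",
                             values="correct", aggfunc="mean")

    sns.heatmap(heat, annot=True, fmt=".0%", cmap="RdYlGn", vmin=0, vmax=1)
    plt.title("Percent correct by learning objective and district")
    plt.tight_layout()
    plt.show()

Red cells flag where a particular district is struggling with a particular learning objective, which is exactly where coaching time should go.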


What are learners’ misconceptions?

A. Rank order question results by difficulty. Some questions are more difficult than others. Where are the learners having trouble, so that you can target remediation? (The sketch after item B includes this ranking.)


B. Show choice distributions for each question. For each question you need to know the following (the sketch after this list shows how to pull all three):

• Difficulty level. What percent answered it correctly (the same information as the difficulty ranking above)?
• For those who answered incorrectly, what did they think was the correct answer? (It is important to figure out why.)
• What is the point-biserial correlation, a measure of question quality that correlates performance on an individual question with performance on the exam as a whole? Even the best exams may have poorly written questions.
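
Here is a sketch of this item analysis against the same hypothetical item_results.csv, now also assuming it records which answer choice each learner selected (a choice column). The point-biserial calculation uses scipy:

    import pandas as pd
    from scipy.stats import pointbiserialr

    items = pd.read_csv("item_results.csv")  # hypothetical item-level export

    # Rank all questions by difficulty (percent correct, hardest first).
    difficulty = items.groupby("question_id")["correct"].mean().sort_values()
    print(difficulty)

    # Each learner's total score, needed for the point-biserial correlation.
    items["total_score"] = items.groupby("learner_id")["correct"].transform("sum")

    for question_id, q in items.groupby("question_id"):
        pct_correct = q["correct"].mean()
        # Which wrong answers did the incorrect responders choose?
        distractors = q.loc[q["correct"] == 0, "choice"].value_counts()
        # Point-biserial: do learners who get this question right also do well overall?
        r_pb, _ = pointbiserialr(q["correct"], q["total_score"])
        print(f"{question_id}: {pct_correct:.0%} correct, point-biserial {r_pb:.2f}")
        print(distractors)

A question with a low or negative point-biserial is one where strong learners do no better than weak learners; that usually points to a poorly written question rather than a learning problem.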


We all know we now live in a world of “big data.” We tend to think big data is solely the domain of data scientists and statisticians. But you have your own “big data” – the data behind your assessment results. Take the time to analyze it, and it will become a critical part of your training strategy.

Steven Just, Ed.D. is CEO of Princeton Metrics. Steven can be reached at sjust@princetonmetrics.com. Check out his blog at www.princetonmetrics.wordpress.com.
