Using Data to Make eLearning Effective


If you’ve ever developed an elearning course, you’ve most likely used an off-the-shelf authoring tool such as Articulate Storyline or Adobe Captivate. You also probably set the reporting options to the final quiz score or “slides viewed” and called it a day. In other words, the scoring and reporting you used did not necessarily make the elearning effective; they simply ensured there would be a record of the training. In an age when seemingly everyone is focused on being compliant rather than on taking the time to produce effective learning interventions, final quiz scores and percentages of slides viewed have somehow come to mean that the training is (or isn’t) effective. However, the aggregate quiz score and course progress reflect only course completion, not necessarily the effectiveness of the training. After all, we know that content consumption does not equal skills acquisition.
So how can a company tell whether an elearning course is effective? The best option, of course, is to observe the learners’ behavior after they complete the training and note any positive changes. While this is the ideal approach, in many cases it isn’t feasible, so we settle for the closest proxy: the learners’ behavior inside the course. In fact, collecting and analyzing learning data is becoming a major elearning trend. In this article, we will show how to use the Data Cloud widget for Articulate Storyline to gather learning data that can help us understand the effectiveness of the training and come up with strategies for making elearning effective. Note that while we are using Storyline here, the widget is also available for Adobe Captivate.

Example 1: Pay attention to individual knowledge check answers.

To make these examples as robust as possible, we will assume that you are already familiar with the Data Cloud widget. Don’t worry if you haven’t used this widget before; it’s easy to get started with, and you can be up and running in minutes. For extra help, feel free to read our article about saving learning data outside the LMS.
Let’s start assessing the training effectiveness by looking at the individual questions in the knowledge check. To do this, we set up a table with five fields, one for each question, showing TRUE if the learner answered the question correctly and FALSE otherwise. We named these fields “q1 correct,” “q2 correct,” “q3 correct,” “q4 correct,” and “q5 correct.”

After several learners have taken this learning module, we check the data to see if there’s a pattern. Below, you can see a screenshot of the report. Looking at the data, we can clearly see that most learners had issues with question #4. This is a good place to start evaluating the training content. Did we explain the content properly? Was the learner able to link the training content to the question content? Are more explanations needed on this topic? What else can we do to make this elearning course more effective?

Assess the training effectiveness by looking at individual questions in the knowledge check
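As a rough illustration of this kind of check, here is a minimal Python sketch. It assumes the widget’s report has been exported to a CSV file (the file name knowledge_check_report.csv is hypothetical); the “q1 correct” through “q5 correct” column names match the fields described above.

```python
import pandas as pd

# Hypothetical export of the Data Cloud widget report: one row per learner,
# with TRUE/FALSE strings in the "q1 correct" ... "q5 correct" columns.
df = pd.read_csv("knowledge_check_report.csv")

question_cols = [f"q{i} correct" for i in range(1, 6)]

# Convert the TRUE/FALSE text to booleans and compute the share of learners
# who answered each question correctly, lowest first.
correct_rates = (
    df[question_cols]
    .apply(lambda col: col.astype(str).str.upper().eq("TRUE"))
    .mean()
    .sort_values()
)
print(correct_rates)  # the questions at the top are the likely problem areas
```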

Example 2: Look for unexpected behavior.

After studying the data in step one, we open the course to reevaluate the content tested in question 4. One of the slides has a “Show Examples” button that brings up additional explanations and examples when clicked. We decide to track whether learners click that button and, if they do, how it affects the learning outcomes. To do this, we add another field to the data table and call it “button clicked.” We set this field as numeric just to make it stand out from the rest of the data table: zero means the learner has not clicked the button, and one indicates a click.
We let a few learners take the course. Next, we check the report to see if there is a correlation; in particular, we check how clicks on the “Show Examples” button correlate with the question 4 outcome. The data does show a strong relationship: the table below clearly demonstrates that almost all of the learners who didn’t click the button answered question 4 incorrectly.

How behavior affects elearning effectiveness
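A simple way to quantify this relationship is to cross-tabulate the two fields. The sketch below reuses the same hypothetical CSV export, now including the numeric “button clicked” column (0 = not clicked, 1 = clicked).

```python
import pandas as pd

df = pd.read_csv("knowledge_check_report.csv")  # hypothetical export file name

q4_correct = df["q4 correct"].astype(str).str.upper().eq("TRUE")
clicked = df["button clicked"].astype(int)

# Share of correct/incorrect question 4 answers for clickers vs. non-clickers.
table = pd.crosstab(clicked, q4_correct, normalize="index")
print(table)

# Quick correlation check between the two binary variables.
print("correlation:", clicked.corr(q4_correct.astype(int)))
```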

Example 3: Make predictions.

Next, we predict that if we display the button more prominently on the slide and change its caption from “Show Examples” to “Continue,” more learners will click on it. We could simply restrict the course navigation, but we trust our learners and want to keep the navigation open. So we make the changes to the button and let some more learners take the course. When we check the report, we see that the changes have resulted in better question 4 scores. In other words, these changes made the elearning more effective than it was before.

Predictions can make elearning effective
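One way to verify that the prediction held is to compare the question 4 results of learners who saw the original button with those who saw the revised one. The sketch below assumes two hypothetical CSV exports, one captured before the change and one after.

```python
import pandas as pd

# Hypothetical exports taken before and after revising the button.
before = pd.read_csv("report_before_change.csv")
after = pd.read_csv("report_after_change.csv")

def correct_rate(df: pd.DataFrame, column: str = "q4 correct") -> float:
    """Share of learners who answered the given question correctly."""
    return df[column].astype(str).str.upper().eq("TRUE").mean()

print(f"q4 correct before the change: {correct_rate(before):.0%}")
print(f"q4 correct after the change:  {correct_rate(after):.0%}")
```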

Example 4: Draw conclusions and act on findings to make elearning effective.

Finally, we can draw conclusions and sum up the findings. We have discovered that displaying the button prominently on the slide and changing its caption to “Continue” resulted in better learning outcomes. Let’s now apply this finding to the rest of the course. We know that the content tested in question 2 is also partially hidden behind another “Show Examples” button, so we make the same changes to that button and test the course with learners one more time. The table below shows that these changes have also resulted in performance improvements on question 2.

Data leads to performance improvement and effective learning
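The same before/after comparison can be reused to confirm the change to the second “Show Examples” button and to summarize the improvement across all questions at once. Again, the export file names in this sketch are hypothetical.

```python
import pandas as pd

# Hypothetical exports taken before and after changing the second button.
before = pd.read_csv("report_before_q2_change.csv")
after = pd.read_csv("report_after_q2_change.csv")

def correct_rates(df: pd.DataFrame) -> pd.Series:
    """Correct-answer rate for every 'q# correct' column in the export."""
    cols = [c for c in df.columns if c.endswith(" correct")]
    return df[cols].apply(lambda col: col.astype(str).str.upper().eq("TRUE")).mean()

# Positive values indicate questions whose scores improved after the change.
improvement = correct_rates(after) - correct_rates(before)
print(improvement.sort_values(ascending=False))
```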

Of course, a record of what the learners are reading and seeing is just that: it does not measure how skilled the learners are at performing their job tasks. However, by paying attention to the learning data that learners generate, we are able to fine-tune our courses. This has the potential to make elearning more effective and to result in stronger knowledge transfer.

eLearning Company Blog | July 10, 2019