About those U Penn MOOC results reported at MRI13

e-Literate 2013-12-20

Michael and I have been at the MOOC Research Initiative conference in Arlington, TX (#mri13) for the past three days. Actually, thanks to the ice storm it turns out MRI is the Hotel California of conferences.

[Image: "13th Floor - Hotel California"; credit: Bailey Carter, assignment for Laura Gibbs’ class]

While I’m waiting to find out which fine Texas hotel dinner I might enjoy tonight, I thought it would be worthwhile to share more information from the University of Pennsylvania research that seems to be the focus of media reports on the conference (see Chronicle, Inside Higher Ed, and eCampusNews, for example). Penn has tracked approximately one million students through their 17 first-generation MOOCs on Coursera, which provided the foundation for this research.

Per IHE:

“Emerging data … show that massive open online courses (MOOCs) have relatively few active users, that user ‘engagement’ falls off dramatically especially after the first 1-2 weeks of a course, and that few users persist to the course end,” a summary of the study reads.

For anyone who has paid even the slightest bit of attention to the MOOC space over the past year, those conclusions hardly qualify as revelations. Yet some presenters said they felt the first day of the conference served as an opportunity to confirm some of those commonly held beliefs about MOOCs.

While it is accurate that these basic observations have been made in the past, there was some additional information from U Penn worth considering. The following slide images are courtesy of Laura Perna, a member of the research team.

The research team (but apparently not the faculty members) classified only two of the courses studied as targeted at college students (Single-variable Calculus and Principles of Microeconomics). There were seven courses targeted at “occupational” students (Cardiac Arrest, Gamification, Networked Life, Intro to Ops Management, Fundamentals of Pharmacology, Scarce Medical Resources, and Vaccines) and eight for “enrichment” (ADHD, Artifacts in Society, Health Policy and ACA, Genome Science, Modern American Poetry, Greek and Roman Mythology, Listening to World Music, and Growing Old). Update: I have changed the language in this paragraph based on commentary from one of the MOOC faculty; see the clarification at the end of this article.

As the Chronicle pointed out, there was a wide variation in these courses.

The courses varied widely in topic, length, intended audience, amount of work expected, and other details. The largest, “Introduction to Operations Management,” enrolled more than 110,000 students, of whom about 2 percent completed the course. The course with the highest completion rate, “Cardiac Arrest, Resuscitation Science, and Hypothermia,” enrolled just over 40,000 students, of whom 13 percent stuck with it to the end.

This variation included the use of teaching assistants.

[Slide image: UPenn 2 (teaching assistants by course)]

The research tracked several characteristics of the student population:

  • Users – these are all students who registered for the course, regardless of time frame.
  • Registrants – these are the subset of Users who registered any time from before the course began through its last week (i.e., during the official run). The difference is interesting, as quite a few Users registered well after the course was over, essentially opting for a self-paced experience. We have seen very little analysis of this difference.
  • Starters – these are the students who logged into the course and had some basic course activity.
  • Active Users – these are the students who watched at least one video (I’m not 100% sure if this is accurate, but it is close).
  • Persisters – these are the students who were still active within the last week of the course.

Using these categories, the Penn team showed percentages across all of the courses in question. The completion rate (the percentage of Registrants who were Persisters) ranged from 2% to 13%. More useful, in my opinion, was the view of all categories across all courses.

[Slide image: UPenn 4 (all categories across all courses)]
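To make the nesting of these buckets and the completion-rate calculation concrete, here is a minimal Python sketch of how one might compute them from per-learner activity records. The record fields, thresholds, and date cutoffs are my own illustrative assumptions; they approximate, but are not, the Penn team's actual definitions or data.

```python
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

# Hypothetical per-learner record for a single MOOC. Field names and
# thresholds are illustrative assumptions, not the Penn team's schema.
@dataclass
class Learner:
    registered_on: date
    logged_in: bool                      # any login / basic course activity
    videos_watched: int                  # number of lecture videos viewed
    last_active_on: Optional[date] = None

def funnel_counts(learners: List[Learner], course_end: date, last_week_start: date) -> dict:
    """Bucket learners into the nested categories described above."""
    users = learners                                                    # everyone who ever registered
    registrants = [l for l in users if l.registered_on <= course_end]   # registered during the official run
    starters = [l for l in registrants if l.logged_in]
    active_users = [l for l in starters if l.videos_watched >= 1]
    persisters = [l for l in active_users
                  if l.last_active_on is not None and l.last_active_on >= last_week_start]
    # Completion rate as reported: Persisters as a share of Registrants.
    completion_rate = len(persisters) / len(registrants) if registrants else 0.0
    return {
        "Users": len(users),
        "Registrants": len(registrants),
        "Starters": len(starters),
        "Active Users": len(active_users),
        "Persisters": len(persisters),
        "Completion rate (Persisters / Registrants)": completion_rate,
    }
```

The point of the sketch is simply that each category is a strict subset of the one before it, which is why the percentages shrink step by step across the funnel.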

And finally, they showed the pattern of MOOC activity over time, illustrated by this view of quiz activity in one course. The general pattern is a steep drop-off in week one, followed by a slower decline.

[Slide image: UPenn 5 (quiz activity over time in one course)]

Notes

1) Which Categories -  I think the team missed an opportunity to build on the work of the Stanford team, which identified different student patterns with more precision (see Stanford report here and my graphical mash-up here).

[Image: studentPatternsInMoocs20130930 (graphical mash-up of the Stanford student patterns)]


2) Self-Paced - As mentioned above, the separation between students who registered during the course’s official time frame (Registrants) and those who registered after the course was over is interesting. This latter group ranged from 2% to 23%, which is significant. Thousands and even tens of thousands of students are choosing to register and access course material when the course is not even “running”. They would have access to open materials, quizzes, and presumably assignments on a self-paced basis, but likely have no interaction with other students or the faculty.

3) Learner Goals - As was discussed frequently at the conference (but not in the news articles about it), when you open up a course’s enrollment, one result is that you get a variety of student types with different goals. Not everyone desires to “complete” a course, and it is a mistake to focus solely on “course completion” when referring to MOOCs. For future research, I would hope that U Penn and others find a way to determine learner goals near the beginning of a course and then measure whether students met those goals, whether they finished the course or dropped out.

Update (12/7): From the comments, one of the Penn professors who taught one of the MOOCs (Kevin Werbach) has provided some clarifications that I feel are important enough to include within the article.

I’m glad to see the Penn research getting so much attention, but it seems it primarily confirms what all other studies have shown.

As far as I know, the researchers didn’t have any contact with the faculty teaching the courses. So some of their statements are generalizations. E.g., I’m not sure what it means for a course to be “targeted at college students.” E.g., I teach the in-person version of my course (Gamification) to college students, and I would think most of the people who study modern poetry do so in college.

Also, I wouldn’t take the TA numbers too seriously. There’s a big difference between an undergrad and a PhD student in the field, for example, and those numbers don’t indicate how much time they worked or whether they were paid. And it looks like they confused the two sessions of my course. The first one (which seems to be what they looked at) had 1 TA. In the second session, I experimented with using two MBA students supervising 4 undergrads (hence the 6), which worked poorly.

Finally, including people who signed up after the course ended seems very odd, especially when one of the metrics is what percentage were in the course at the time it ended. Plus Coursera implemented their Watchlist feature somewhere in the middle of this process, which I think would significantly change the post-course registration behavior.

Full disclosure: Coursera has been a client of MindWires Consulting.
