Response from Babson Survey author on differences with IPEDS
e-Literate 2014-02-01
I have written a series of posts on the new IPEDS data, including two that showed how this data seems to be quite different from the pervasive Babson Survey Research Group (BSRG) data (formerly known as the Sloan Survey). In particular, there were two findings, one on the number of students taking online courses:
And no, there aren’t 7.1 million [from Babson] US higher ed students taking at least one online course. There are closer to 5.5 million [from IPEDS] as of Fall 2012.
and one on the number of institutions offering online education:
The big difference that should be obvious is that the Babson data shows fewer than half as many institutions with no online offerings as the IPEDS data – 15% compared to 31%.
Who has an online offering?
I have been in contact with Jeff Seaman, one of the two authors of the BSRG reports, to get his analysis on the differences, including sharing my spreadsheets used for analysis of the two data sets. Jeff has graciously reviewed the data and provided the following analysis of why BSRG data is so different from IPEDS data.
Preliminary IPEDS data includes an indicator of whether an institution offers distance education. The Babson Survey Research Group surveys of online learning have contained a similar measure for the past eleven years. The two are not measuring the same thing.
When the Alfred P. Sloan Foundation approached the Babson Survey Research Group to conduct the first of these reports in 2003, the hypothesis was that the most important transition point for an institution was when it moved from having NO online offerings to having ANY such offering. As such, the measure of “online offerings” was defined as broadly as possible – any online offering of any length to any audience at any time. IPEDS, on the other hand, only counts undergraduate courses for “A student enrolled in a 4- or 5-year bachelor’s degree program, an associate’s degree program, or a vocational or technical program below the baccalaureate.” Students who are not enrolled in a program but are just taking courses do not count in the IPEDS definition. Non-credit courses, continuing education courses, courses for alumni, and courses for students not registered for a degree program at the institution are not counted.
For larger institutions with well-developed online programs the difference in these definitions has little impact – they may have offerings outside of the IPEDS definition but they also have courses that do meet the IPEDS definition. The BSRG and IPEDS measures agree very well for institutions with more than 1,500 total enrollments. However, for schools with fewer than 1,500 total students, the BSRG measure includes far more institutions than does the IPEDS measure. Most of these differences occur at the lower end of this spectrum: schools with only a few hundred total enrollments.
Many of these institutions do not meet the IPEDS definition for providing distance offerings but claim to meet the BSRG definition. These schools typically lack the resources to launch significant online offerings, but have consistently reported that online education is critical for them and that they [offer] it. Their online offerings are often very small (sometimes only two or three students), and rarely part of their core program. Their offerings may be shorter than a full-length course and are rarely for credit. They might not even be a “course.” It is very rare for these to be part of a degree-program.
To understand the differences in the two measures it may be helpful to think of IPEDS as the measure of providing distance courses for those pursuing a degree and BSRG as the measure of providing ANY online offering of any type for any participant. Institutions that meet the IPEDS definition will also meet the BSRG definition, but the reverse is not necessarily true.
To help visualize the issue described by Jeff, it is useful to see these two charts comparing institutions with and without online offerings, broken down by enrollment (you can hover over bars to get actual numbers), followed by a summary table.
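For readers who want to reproduce this kind of breakdown themselves, here is a minimal sketch of how institutions could be grouped by enrollment band and the share without online offerings computed. The column names ("total_enrollment", "offers_online"), the band cut-points, and the sample numbers are all illustrative assumptions, not the actual IPEDS extract or the analysis behind the charts above.

```python
import pandas as pd

# Hypothetical institution-level records: total enrollment and whether the
# institution reports any online/distance offering under a given definition.
institutions = pd.DataFrame({
    "total_enrollment": [250, 800, 1200, 3000, 9000, 450, 15000, 700],
    "offers_online":    [False, True, False, True, True, False, True, True],
})

# Bucket institutions by size, mirroring the idea of a breakdown by enrollment.
bins = [0, 500, 1500, 5000, float("inf")]
labels = ["under 500", "500-1,499", "1,500-4,999", "5,000+"]
institutions["size_band"] = pd.cut(
    institutions["total_enrollment"], bins=bins, labels=labels, right=False
)

# Share of institutions in each band with no online offerings.
summary = (
    institutions.groupby("size_band", observed=True)["offers_online"]
    .agg(total="count", with_online="sum")
)
summary["pct_no_online"] = 100 * (1 - summary["with_online"] / summary["total"])
print(summary)
```

Running the same grouping against the IPEDS flag and against the BSRG measure is essentially what makes the divergence among the smallest schools visible.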
One clarification I would add is that IPEDS measures not only students enrolled in a degree program but also those in certificate programs. I believe Jeff’s point about students who are taking online courses but not seeking credit still stands; it is just that credit can encompass courses leading to either a degree or a certificate.
Definition of Online Course
The other issue that was raised in our discussions involved the differing BSRG and IPEDS definitions of what constitutes an online course, independent of degree- or certificate-seeking status. The BSRG definition (page 6 of the report):
An online course is defined as one in which at least 80 percent of the course content is delivered online. Face-to-face instruction includes courses in which zero to 29 percent of the content is delivered online; this category includes both traditional and web facilitated courses. The remaining alternative, blended (or hybrid) instruction, has between 30 and 80 percent of the course content delivered online.
While IPEDS defines a distance education course as follows:
A course in which the instructional content is delivered exclusively via distance education. Requirements for coming to campus for orientation, testing, or academic support services do not exclude a course from being classified as distance education.
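To make the contrast concrete, here is a minimal sketch of how the two definitions would classify the same course. The thresholds follow the quoted language; the function names are mine, and “exclusively” is simplified to 100 percent of instructional content, setting aside the allowed campus visits for orientation, testing, or support services.

```python
def bsrg_category(pct_online: float) -> str:
    """Classify a course by percent of content delivered online (BSRG scheme)."""
    if pct_online >= 80:
        return "online"
    if pct_online >= 30:
        return "blended/hybrid"
    return "face-to-face (traditional or web facilitated)"

def counts_as_ipeds_distance(pct_online: float) -> bool:
    """IPEDS counts a course only if instruction is delivered exclusively online."""
    return pct_online == 100

# A course with 85% of its content online is "online" for BSRG but not for IPEDS.
print(bsrg_category(85), counts_as_ipeds_distance(85))    # online False
print(bsrg_category(100), counts_as_ipeds_distance(100))  # online True
```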
Could this play a major role in the difference in data between the BSRG survey and IPEDS? Jeff answered that this distinction is secondary.
The 80% portion of the definition has an impact. [snip] The primary factor is the issue of audience – that IPEDS only counts courses for those enrolled in degree programs and most of these very small schools are using their online [offerings] for other purposes. The 80% factor plays a role, but only for those that would otherwise meet the IPEDS definitions.
Comment
First of all, this whole interaction has partially restored my confidence in university-based research. Michael and I (as well as Al Essa and Mike Caulfield) have described the disturbing trend where some universities are all too willing to present research findings when it helps the institutional image, but can be quite reluctant to be transparent when there are legitimate questions that should cause them to re-examine their findings. Jeff has been quite willing to admit the differences between IPEDS and BSRG and to spend time trying to understand them.
Jeff also mentioned that they are looking at questions to ask next year.
We have been thinking about what questions to put in next year’s survey to get at these differences. The number of schools counted as having online may be different for the two measures, but the impact is really very small when it comes to students – since these schools are so small and teach such small numbers of students online.
I encourage BSRG to follow through on this plan. The BSRG survey is the basis for a great deal of commentary and decision-making in US higher ed, and with the new data IPEDS will also play an influential role. It is important that there is coherence between the two data sources, combining the strengths of required, non-survey reporting with those of long-running trend analysis.