If you still watch TV with commercials, you may have seen an ad recently talking about using data to improve your business—the bakery that mined its sales data to discover that people buy more cake on rainy days, for example. Everybody’s talking about “big data” and “data science,” basically applying sophisticated analytic techniques to large datasets. And one of the things they’re doing is predictive modeling—using historical data to make predictions about the future.
These techniques have been in use in higher education since at least the 1960s. In enrollment management in particular, they’ve been employed in enrollment projections, retention projections, and predictions of the likelihood of prospective students or “inquiries” to apply for admission. In financial aid, predictive modeling has been employed in attempts to answer the question, “Are we making the best use of our institutional financial aid funds?”
To answer it, a statistical model is typically created from several years of institutional admissions and financial aid data for admitted students. The model can then be used to assess the price elasticity of the institution’s overall admit pool as well as segments within it—financial aid applicants vs. non-applicants, in-state vs. out-of-state students, and so on. Estimates can be derived for the impact of various factors on students’ decisions about whether to enroll—academic criteria, financial need, amount of institutional grant, and others.
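To make the idea concrete, here is a rough sketch—not any particular institution’s method—of the kind of model involved: a logistic regression fit to admit-pool data, relating the enroll/not-enroll decision to grant amount, financial need, and GPA. Everything here is invented for illustration, including the synthetic data and the “true” coefficients used to generate it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic admit pool -- all values invented for illustration.
grant = rng.uniform(0, 20, n)    # institutional grant, $ thousands
need = rng.uniform(0, 30, n)     # demonstrated need, $ thousands
gpa = rng.uniform(2.0, 4.0, n)   # high school GPA

# Assumed "true" behavior used to generate outcomes: bigger grants
# raise enrollment odds, higher need lowers them.
true_logit = -1.0 + 0.15 * grant - 0.05 * need + 0.3 * (gpa - 3.0)
enrolled = rng.random(n) < 1 / (1 + np.exp(-true_logit))

# Fit a logistic regression by Newton's method (IRLS).
X = np.column_stack([np.ones(n), grant, need, gpa])
y = enrolled.astype(float)
beta = np.zeros(4)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    gradient = X.T @ (y - p)
    hessian = X.T @ (X * (p * (1 - p))[:, None])
    beta += np.linalg.solve(hessian, gradient)

print("fitted coefficients (intercept, grant, need, gpa):", beta.round(3))
# A positive grant coefficient means each added $1,000 of grant raises
# the odds of enrolling -- the model's read on price sensitivity.
```

In practice an institution would fit a model like this to its own historical data—often with many more variables—and read off the estimated effect of each factor on enrollment, which is where the elasticity estimates described above come from.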
Additionally, in what can be the most useful phase of analysis, the model can be used to create simulations of alternative awarding strategies to illustrate how institutional goals interact; how optimizing for net tuition revenue, for example, affects enrollment, academic quality, diversity, and other class characteristics important to the institution.
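The simulation step can be sketched in miniature: given a fitted enrollment-probability model, sweep a range of award levels and compare expected yield against expected net tuition revenue. The coefficients, tuition figure, and student profile below are invented for the demo, standing in for a model fit to real institutional data.

```python
import numpy as np

def enroll_prob(grant_k, need_k=15.0, gpa=3.2):
    """Illustrative enrollment-probability model for one student profile.
    Coefficients are invented, standing in for a fitted model."""
    logit = -1.0 + 0.15 * grant_k - 0.05 * need_k + 0.3 * (gpa - 3.0)
    return 1 / (1 + np.exp(-logit))

tuition_k = 40.0                    # assumed sticker price, $ thousands
grants = np.arange(0.0, 25.0, 0.5)  # candidate award levels to simulate

prob = enroll_prob(grants)          # expected yield per admit
ntr = (tuition_k - grants) * prob   # expected net tuition revenue per admit

best = grants[np.argmax(ntr)]
print(f"award maximizing expected net revenue: ${best:.1f}k "
      f"(yield {enroll_prob(best):.0%})")
# Yield keeps rising with the award, but net revenue peaks at an interior
# point -- the enrollment-vs-revenue tradeoff the text describes.
```

Even in this toy version, the tension between goals is visible: the award level that maximizes expected net revenue is well below the one that maximizes yield, so a strategy optimized for one goal must be checked against its effect on the others.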
As Rick Shipman, director of financial aid for Michigan State University, puts it, “We have successfully used predictive modeling to evaluate the likely impact of various changes in awarding, including changes to existing programs and development of new programs. We have found it especially useful in modeling different financial aid award amounts.”
One of the great benefits, then, of predictive modeling is its ability to reduce uncertainty. Alternative awarding strategies can be tested before rolling them out to real students. Opportunities for saving money or investing additional funds can be assessed without actually committing the financial aid budget to real grant expenditures.
Modeling, though, is only one tool applied toward the goal of enrolling a new class of students. It only has value when combined with a clear understanding of the institution’s mission, priorities, and enrollment goals. Failure to understand these factors may lead to proposing a course of action that has no chance of adoption because it is incompatible with the campus history or culture.
Recommendations on alternative financial aid awarding strategies that result from simulations also have to pass the practicality test: Can the awarding policies suggested by the model be implemented? Can they be written into a set of auto-packaging rules? Can they be incorporated into a net price calculator? Can they be described in publications and on the website in a way that’s easily understood by prospective students and parents? Even perfect model results won’t be of much use if they can’t pass these reality checks.
Impractical model results, though, can be useful in rethinking policy decisions. For example, models run against a highly price-inelastic student population might suggest that eliminating institutional grants would be the best way to increase net tuition revenue without greatly affecting enrollment.
This would not be a viable solution for most institutions, but the result would be a useful piece of information if a policy were under consideration to increase enrollment by raising financial aid awards across the board. In that case, a more effective way to grow enrollment would be to invest in increasing demand and building up the applicant pool rather than spending more money on financial aid.
There’s the practical consideration of timing, as well. The perfect packaging policy won’t be of much use if it arrives two months after competitors’ financial aid awards have gone out. This is a case where it’s better to be 80 percent right and on time than 100 percent right and too late.
Even if all of these conditions are met, though, there are inherent limitations to predictive modeling. A model is no better than the variables chosen (or available) to be included. The quality of the data is also an issue; in particular, the consistency with which data are reported from one year to the next.
And even the best of models can only account for a portion of the variance to be explained in the enrollment behavior of any group of 18-year-olds. In addition, econometric models typically don’t take into account external factors like unanticipated changes in state grant funding or changes in competitors’ tuition pricing and aid policies.
Finally, predictive models are always built on prior years’ data. So major changes in the makeup of the admitted student pool will limit the predictive accuracy of the model. In a discussion on constraints on the use of predictive modeling in financial aid, economist Michael Rizzo, director of the Alexander Hamilton Institute at the University of Rochester (N.Y.), notes that “what matters is the financial aid award (and other features) relative to competitors’ awards, and this is very, very hard to measure. At best, what we are hoping to do is include enough variation in our models to enable the other variables to capture some of this.”
Despite all of these limitations, the science of predictive modeling, combined with the art of student recruitment, can bring a level of predictability to what may, at times, seem to be a random jumble of student decisions. There’s just no substitute for solid, data-based decision making.