And rankings for all: Questioning national college rating framework
College ranking systems are typically viewed as unreliable metrics, often accused of practicing favoritism based on questionable criteria that vary by publisher.
In an attempt to provide an unbiased and informed resource for prospective students and their families, the Obama administration has formulated its own college rating system.
Based on three areas—access, affordability and outcome—the initial rating system has been presented as a “framework” that’s still being crafted, with comments requested from academia and the general public. It is scheduled for debut before the beginning of the 2015-16 academic year.
According to a press release from the U.S. Department of Education, the federal system will:
- Offer U.S. colleges and universities a set of benchmarks against which to measure progress and identify areas for improvement.
- Give students and families reliable information about college costs to guide their selection.
- Generate data that will guide how the government spends $150 billion on financial aid each year.
The rating system continues to generate controversy, with critics including Senator Lamar Alexander (R-Tennessee), Representatives Virginia Foxx (R-North Carolina) and John Kline (R-Minnesota), and some for-profit universities, all of whom have found fault with the initial proposal.
Much of the criticism revolves around the system’s standards. “The essence of a college education is directly connected to the quality of its academic programs,” says Molly Corbett Broad, president of the American Council on Education. “The third component, outcome, relates more to earnings downstream and does not act as an assessment of the overall quality of the program.”
Sandy Baum, professor of higher education at The George Washington University, says the framework is still skeletal.
“Neither the conceptual grounding for measuring quality nor the data are really adequate,” she says. “There is not an easy way to measure learning and then compare institutions on this metric. Comparing institutions that have different missions and very different student bodies is problematic.”
The data on which the government intends to base its ratings are also flawed, says Broad. “They don’t have access to sufficient data to enable them to understand what outcomes they can anticipate.”
What they do have is data on completion rates of first-time, full-time students with no transfers. “This is now the minority of students in American higher education,” she says, noting that figures for recent-graduate earnings can be “misleading, even with very good data.”
Still, it’s difficult to offer constructive criticism, say both Baum and Broad, as the framework lacks concrete detail at this time.
“The goal is well-meaning, but the directive from the White House probably didn’t completely envision the limitations of data that the Department of Education has, and the complexity of these ratings,” says Broad. “The progress the administration is making on the framework is a reflection on how tough the assignment handed to them really is.”
The framework draft can be found at http://ubmag.me/ratingsframework.