In previous eras, clinching an argument simply required quoting an authority. Politicians cited Jefferson and Lincoln; academics frequently invoked Aristotle. Now, winning an argument often means citing numbers, not rhetoric, especially in higher education. Parents want more clarity about the outcomes of their children's education, and politicians' calls for accountability require more quantitative evidence. The Commission on the Future of Higher Education, for example, proposed both a student unit record database and increased public access to a wide variety of statistics on educational outcomes.
The question is no longer whether institutional performance and learning outcomes data should be collected and analyzed; the intensifying debate is about which data should be collected, who should do the collecting, and who should have access to the results. Colleges and universities are already undertaking efforts in these areas. But their leaders must overcome the fear of sharing data with constituents, thereby preempting a government-imposed system of data collection and sharing that would be more costly and intrusive.
There's no denying the sober intent of federal and state officials calling for accountability measures: questioning the effectiveness of colleges and universities and inventing methods to rate (even rank) them. But we should welcome efforts to use data to illuminate the circumstances and achievements of higher education, to enhance campus decision-making, and to build a case for institutional effectiveness.
One tool of particular utility is the National Survey of Student Engagement. Although an indirect measure, it is a helpful gauge of how well our rhetoric about educational effectiveness matches reality. Publicizing one's NSSE results can be risky: even if a college scores well, its main competitor may score even better. Even so, if most institutions shared their results, the risks and rewards of sharing would instantly be more widely dispersed.
Another cogent use of data to assess effectiveness is the Collegiate Learning Assessment, which measures students' cognitive growth during their college careers. Some three dozen smaller colleges voluntarily formed a consortium through The Council of Independent Colleges in 2004 to share their results. Surveys of alumni satisfaction, such as those conducted by the consultancy Hardwick-Day, are equally useful in assessing colleges' effects. Also revealing is the work of William G. Bowen and his colleagues on the connections between academic performance and participation in intercollegiate athletics.
Colleges eager to further professionalize their data operations can participate in workshops from The Association for Institutional Research, which enhance institutional capacities to use national databases for benchmarking. For smaller colleges, many with modest or nonexistent research offices, CIC offers a Key Indicators Tool and is adding a Financial Indicators Tool, two benchmarking reports in which the calculations and comparisons are prepared for the institution.
Most campus leaders are eager to learn from new data, but for some presidents it may take real courage to act on it. Some fear that reports intended to be confidential might be used in inappropriate ways. We must build trust in the importance of an evidence-based culture and the voluntary use of meaningful data in higher education. Government agencies should help develop new and varied assessment instruments, not impose a one-size-fits-all approach.
We have the opportunity to take advantage of a new climate that encourages measurement. The path to more and better use of data will probably continue to be mined with less-than-helpful efforts made in the name of accountability, such as the overly simplistic U.S. News & World Report rankings. It will not be easy to remain true to academic ideals of objectivity and candor in the use of new data on institutional effectiveness, but we must.
Richard Ekman is president of The Council of Independent Colleges, www.cic.edu.