Most Americans reportedly believe that colleges and universities do a good job of educating students, but that hasn't prevented a debate from intensifying about the best ways to demonstrate students' "learning outcomes."
Secretary of Education Margaret Spellings' Commission on the Future of Higher Education and her follow-up statements are only the most recent, if the most visible, calls for such evidence of institutional effectiveness. Over the past decade, all of the regional accrediting bodies have introduced steps to encourage institutions to focus on the results, rather than the "inputs," of a college education, and to good effect.
Unfortunately, the commission's early statements called for a government-run, mandatory, one-size-fits-all test of college students' learning outcomes. Commissioners and the secretary heard from most quarters that this unilateral approach was needlessly heavy-handed. No one should want a system of higher education in the United States that is run directly by the national government. Other countries with nationalized systems of higher education offer sufficiently negative examples for any sensible person to think twice about that.
In a letter to commission chair Charles Miller early last year on behalf of small and mid-sized private colleges and universities, I expressed concern about the prospect of a government-controlled testing regimen that would run roughshod over institutional autonomy and individual privacy. I emphasized the progress already being made by the voluntary actions of colleges and universities, individually and together, on increased accountability.
In 2002, long before Secretary Spellings convened her commission, the Council of Independent Colleges (CIC) became the first association to embrace the importance of nongovernmental approaches to measuring educational effectiveness. It was CIC's view that such evidence would help to spur institutional improvement and to provide valuable information of interest to students, parents, donors, and others.
More recently, the National Association of State Universities and Land Grant Colleges and the American Association of State Colleges and Universities have begun to develop guidelines for principles of outcomes assessment in public institutions.
CIC is working with three dozen of its member institutions that have been using the Collegiate Learning Assessment (CLA) for several years now. From an even earlier date, more than 300 CIC colleges and universities have used the National Survey of Student Engagement. NSSE measures how extensively an institution's students participate in pedagogical approaches that are proven to engage students actively in their academic programs. NSSE looks at how students spend their time rather than at results, namely what they wind up learning.
The CLA is one of the first assessment tools to provide direct evidence of the student learning that occurs over a typical four-year college program. Taken together, NSSE and CLA provide a useful picture of institutional effectiveness.
The CLA measures capabilities that cut across specific major fields of study: critical thinking, analytic reasoning, problem solving, and written communication. The CLA is fundamentally different from government-mandated testing under the No Child Left Behind (NCLB) Act. All of the CLA measures are open-ended, instead of consisting of multiple-choice questions. The institution, not the student, is what is being measured with the CLA. And, unlike with NCLB-mandated testing, there is no fixed standard of proficiency with the CLA; progress is relative, depending on the characteristics of the students enrolled at the institution. It is cognitive growth from freshman to senior year that is measured.
One may wonder why colleges and universities haven't been providing evidence of students' learning outcomes all along. Unlike primary and secondary education, where there is broad agreement on learning objectives and on core subject matter, in college the learning outcomes are more varied. Students not only choose different curricular tracks, but they are expected to integrate learning across disciplines.
Some in higher education suggest that college-level student learning is so hard to define that efforts to measure it (let alone compare outcomes across institutions) are impossible. This view exaggerates the difficulty. In any event, new assessment tools such as the CLA offer a way to overcome individual student differences by looking at institutional results. When its results are compared with an institution's prevailing pedagogies, the CLA offers a practical way to identify a college's most and least effective practices.
The CLA has gained recognition from funders, including the Teagle Foundation and the Carnegie Corporation of New York. The current 33-member CIC/CLA Consortium began with a 2003 pilot program. It will soon be expanded, thanks to a new grant from the Teagle Foundation.
The larger group of institutions to participate in the next phase of the CIC/CLA Consortium will work together to develop more comprehensive campus assessment plans for improving teaching and learning, such as using NSSE, campus-based learning portfolios, and classroom measures to provide multiple sources of evidence in addition to the CLA.
In fact, some institutions are already beginning to demonstrate the beneficial results of a comprehensive approach to student learning assessment. University of Charleston (W.Va.) President Edwin H. Welch initiated just such a university-wide planning effort a decade ago, and it has led to a change in the institution's mission, with much more emphasis on student learning. Faculty members' participation in this planning effort was critical to their unanimous approval, barely two years after its inception, of the shift in focus.
Now, 10 years later, the campus is noted for its "culture of assessment" in which faculty members play central roles in the implementation and revision of assessment activities. In 2005-2006, the university had the highest "value-added" score among the more than 100 colleges and universities that administered the CLA. The University of Charleston exemplifies how a college can take assessment seriously, using the CLA not only to improve student learning but also to produce evidence of educational progress by students.
The CLA is one of several assessment instruments promoted by the Spellings Commission. While commissioners initially sought a government-mandated approach to assessment, they softened their approach in response to vociferous opposition.
But that was last year. This March, Secretary Spellings convened a "summit." It was a grand occasion, held at the elegant Willard Hotel in Washington, D.C. I was one of about 275 invitees who learned on the day of the conference that a planning group had already been at work for several months and had crafted a set of draft recommendations that would form the basis for the day's discussions. Participants were assigned to one of a half dozen subgroups and were asked to discuss and modify one section of the draft recommendations.
I was assigned to the group that reviewed draft recommendations on the relationship between accreditation and assessment of learning outcomes. I was alarmed to find in the draft recommendations that, once again, a major role had been outlined for the federal government in both accreditation and learning outcomes assessment.
Fortunately, there was overwhelming consensus in our subgroup about modifying the draft recommendations. Our edits eliminated most of the proposed federal role in accreditation and in outcomes assessment, set strict limits on efforts to align state and voluntary standards for accreditation, and emphasized the need for outcomes assessment to be entirely voluntary. The degree of consensus on these points in our group was a surprise, given its diversity; participants included business leaders, state officials, accreditors, leaders of for-profit institutions, and presidents of public and private colleges and universities.
At the end of the day, in a plenary session, each of the subgroups reported to the whole, and participants were thanked for their hard work. We heard that the next step would be handled by a steering committee and by the secretary's staff in "distilling" the revised recommendations.
I hope that this distilled version will faithfully sustain the subgroups' revisions. Independent colleges and universities should be concerned that the secretary's steering committee, while composed of good people, does not include a single representative of the private, nonprofit sector of higher education.
During their deliberations, I hope Secretary Spellings and the steering committee keep in mind that American higher education continues to lead the world in quality and effectiveness. Fundamental to this success is the absence of federalization of our nation's colleges and universities. The secretary is correct in her view that we need to be vigilant in efforts to sustain America's leadership role in higher education, but there is no crisis at present that could justify a federal seizure of control of higher education as was the case when No Child Left Behind was introduced.
In working to increase the availability of information about learning outcomes, the secretary should be careful not to undermine the institutional autonomy that is a hallmark of U.S. higher education.
Richard Ekman is president of The Council of Independent Colleges, www.cic.edu.