The case against proctoring
The need for proctoring derives from the perceived need to prevent “academic dishonesty,” i.e., cheating. The issues with proctoring include 1) the presumption of guilt until proven innocent (all students are treated as potential cheaters), 2) the cost borne by the student, directly or indirectly, 3) the Orwellian loss of privacy, and 4) the fact that the vast majority of students are made to suffer because of a few perceived bad actors.
I claim that this problem is largely a creation of higher education (HE). Only in HE is working with others and seeking information from a variety of sources considered cheating. In the real world this behavior is recognized and rewarded as the mark of a resourceful team player. Even in HE, and especially in research, everyone understands that all progress stands on the shoulders of giants. We need to be teaching students how to be successful in the real world, not teaching them to rely only on themselves.
However, academic dishonesty still exists in the form of plagiarism and falsification. As we all know, plagiarism is taking credit for the work of others; it can be intentional or simply the result of failing to cite sources properly. Falsification is the intentional use of false statements or false data. We should not throw out the concept of academic dishonesty, but we should narrow its definition to align with today’s reality. Given such a modern definition, there is no need for proctoring.
Without proctoring, concerns about assessment are often raised. My view is that assessment pedagogy is often inappropriate to begin with. Rather than figuring out how to proctor, we need to figure out how to assess appropriately. Assessments might include performances, internships, research projects, peer reviews, crowd-sourced analyses, or thesis-defense-style examinations, to name a few. Assessment will likely need to be customized by discipline or subject.
An example of improper assessment is asking multiple-choice questions whose answers can be easily looked up online. This tests only the student’s ability to find answers (perhaps a proper assessment in itself). Given Google, YouTube, Wikipedia, and the imminent rise of intelligent agents (like Siri), asking a purely fact-based question will soon be pointless. One would no more do that than ask for the product of two numbers in the age of calculators. I suggest we start preparing for a time when knowledge is ubiquitous but the certified ability to use it is in demand. According to Laszlo Bock, senior vice president of people operations at Google, “Your degree is not a proxy for your ability to do any job. The world only cares about — and pays off on — what you can do with what you know (and it doesn’t care how you learned it). And in an age when innovation is increasingly a group endeavor, it also cares about a lot of soft skills — leadership, humility, collaboration, adaptability and loving to learn and re-learn.”1
Knowing people who can assist with problems should be recognized as an asset. The best innovative problem solving often comes from teams with diverse knowledge and experience. Crowd-sourcing a problem can similarly open a path to new solutions, and outsourcing a problem for a price may be useful in certain situations. Therefore, assessing the approach taken to solve a problem seems just as appropriate as, if not more appropriate than, assessing the solution presented.
The response to these suggestions is often that they don’t scale. Faculty generally don’t have time to do proper assessment at scale, so they fall back on automated scoring and need proctoring to make it work. I claim that if Amazon and eBay can put systems in place that control dishonest behavior without insulting their honest customers, then we can too. In the short run, however, it may be helpful to recognize that if we cannot assess properly at scale, perhaps we shouldn’t do it at all. Consider where big classes exist: largely in service courses and MOOCs. Perhaps those should be taught online using active-learning frameworks that are easily updated and can only be completed through iterations of mini-assessments. Those who complete the (self-paced) course (and pay the associated tuition) are awarded a badge. A badge or badges may then be a prerequisite for classes where assessment is more meaningful. That is, if you failed to achieve the learning outcomes of the subject, badge or no badge, there is little hope of faking your way through the next level of courses. The question becomes: “Are one’s assessment procedures so bad that a student could actually earn a degree without mastery of the learning outcomes?”
I often hear that addressing academic dishonesty is an accrediting-agency requirement. This is basically an “it’s not my fault” excuse for using poor assessment pedagogy and proctoring. Accrediting agencies, being generally member organizations, are usually willing to open a dialogue on member issues. I do realize this requires a cultural change, but we cannot allow proctoring to become another nail in HE’s coffin.
The bottom line is that if we expect to remain in business, we need to start treating students as customers as opposed to potential criminals. Remember, when we teach potential criminals, only potential criminals will have degrees.
Lawrence Frederick is currently employed at the University of Missouri–St. Louis as Associate Vice Chancellor for Information Technology and CIO. He previously worked at Emory University, Vanderbilt University, and the University of the Pacific in Stockton, CA, where he also served on the Sub-Change Committee for WASC. The views presented here are his alone and do not necessarily reflect the views of his current or previous institutions.