Autonomous agents of change in higher ed
Editor's note: University Business welcomes the insights and opinions of educators and administrators on all topics. If you would like to contribute a guest column, please contact Tim Goral at firstname.lastname@example.org.
Charles Isbell’s research passion is artificial intelligence, or AI. The senior associate dean for Georgia Tech’s College of Computing focuses on building “autonomous agents,” or programs that can interact with large numbers of other intelligent agents, including humans.
Isbell was one of the founders of Georgia Tech’s Online Master of Science in Computer Science, the first accredited degree that students can earn exclusively through a MOOC platform. As a UBTech keynote speaker in Las Vegas this June, Isbell will discuss the evolution of the online MS program and the important role AI will play in education equity.
How did you get started researching artificial intelligence and online learning?
It’s a confluence of two things. First, my Georgia Tech colleagues and I were motivated by the idea of building intelligent things. Later, we were motivated by the idea that human beings are actually a fundamental part of that loop. That’s the tie-in to AI. Much of the initial work with AI was about trying to capture what it means to be human.
More recently, there’s been a swing toward thinking that what it means to be intelligent has a lot to do with interacting with people, or at least other intelligent beings. That drive of thinking about people’s needs pushed a bunch of us in the same direction.
At the same time, we saw the explosion of Udacity, Coursera, edX, and so on. These two things come together in a fantastic living experiment. At the root, it’s about the potential of online education to help with problems of access and to reach people who currently aren’t being reached, but who are capable.
Online learning was supposed to revolutionize education, but it still gets mixed results.
There was a lot of hype that developed around that, but I think now we can actually begin to live up to the promise.
The dream, as it was expressed in the beginning, was about equality: “We’re going to make all this stuff available to everybody on the planet.” We’ve gotten to the point now, in a relatively short period of time, where we can start thinking about equity, which is different, right?
Equality is about treating everyone the same, giving everyone access to the same things. Equity is about giving people what they need in order to succeed. What we have is a technology that can allow for equality. And what we need is software technology, intelligent distribution of the resources and support that will actually allow for equity.
So your aim is to integrate AI into online courses so that it can improve the way a student learns?
Right. There are two different ways AI can help in this space. One is what you might think of as good, old-fashioned AI—answering people’s questions and helping them find the answers they’re looking for.
For example, one of our teams is working on a project that has to do with people asking questions in these big online forums. When you have hundreds of people interacting with one another in a forum, people miss questions, or the same question is asked at different times by many people. It can get very confusing.
This project will figure out that the question being asked has already been answered or discussed in some detail, and the program will push people toward that conversation. Think of it as a very smart, automatic, behind-the-scenes search agent driven by questions around a particular class or course.
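The core idea of such an agent can be sketched with a simple text-similarity check. This is a minimal illustration, not the Georgia Tech system: a real forum agent would use far richer language models, but the shape of the problem—compare a new question against existing threads and redirect when one is close enough—is the same. The thread titles, function names, and the 0.5 threshold below are all hypothetical.

```python
import math
import re
from collections import Counter

def bag_of_words(text):
    """Lowercased word-count vector for a short piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def find_existing_thread(question, threads, threshold=0.5):
    """Return the most similar prior thread, or None if nothing is close enough."""
    best, best_score = None, threshold
    q = bag_of_words(question)
    for thread in threads:
        score = cosine(q, bag_of_words(thread))
        if score > best_score:
            best, best_score = thread, score
    return best

threads = [
    "How is the midterm graded?",
    "Why does gradient descent diverge with a large learning rate?",
]
match = find_existing_thread(
    "Why does my gradient descent diverge with large learning rate?", threads)
print(match)  # the second thread: same question, already discussed
```

When a new question matches an existing thread, the agent points the student there instead of letting the discussion fragment; when nothing matches, the question stands on its own.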
The other way AI helps is more of what you would think of as big data and machine learning. In essence the AI says, “I’ve watched how you’re performing. I see that you’re spending a lot of time on this particular set of videos. I see that you’re taking longer to answer the quizzes. You’re going back and forth.”
When we have data like that, we can use it to drive people down a different path. Not a remedial path, not an accelerated path, but just a different path through the learning experiences that they would have.
Is that, to use a very basic example, the kind of thing Amazon and Netflix do when they suggest books or movies for you?
Yes, but there’s something very different about what they do. There’s something we call “The Beatles Problem.” And that is, if you spend too much time on it, Amazon eventually starts recommending The Beatles, because that’s the lowest common denominator.
Amazon doesn’t want to drive you away so they push you toward the lowest common denominator—the safest choice—because otherwise, you’ll stop paying attention.
That is not what you want in education. In education, the kinds of choices you’re making aren’t really about popularity; they’re about understanding. And we have objective means of determining whether you’re getting it or not.
You can describe it as the same problem—they both rely on AI—but the way in which you would slice up the pie and the way you would determine success are actually quite different.
Can you give an example?
Sure. In my classes, I try to explain something called the curse of dimensionality. The short version is, the more features you have or the more dimensions you have, the more data you need to learn something. It grows exponentially.
That means that as we get more possible ways of describing people or objects or things, we need increasing amounts of data just to be able to determine whatever it is we’re trying to learn about those objects, those people, those things.
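The exponential growth Isbell describes can be made concrete with a toy calculation. Assume, purely for illustration, that each feature is discretized into 10 bins and that we want at least one example in every cell of the resulting grid; the number of cells (and hence the data needed) is then 10 raised to the number of features.

```python
def cells_to_cover(num_features, bins_per_feature=10):
    """Number of grid cells a dataset must populate: bins ** dimensions."""
    return bins_per_feature ** num_features

# Each added feature multiplies the data requirement by 10.
for d in (1, 2, 3, 6):
    print(f"{d} feature(s): {cells_to_cover(d):,} cells")
```

One feature needs 10 examples to cover; six features already need a million. That is the curse of dimensionality in miniature.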
There are a couple of ways you can explain that to people. One way is very geometric. Another is more visual. Both methods achieve the same result, but I’ve found over the years that some people get one and some people get the other.
Of all the different ways to explain a concept, the artificial intelligence engine will pick one based upon how you seem to be performing.
The important point there is that neither one of those ways of describing the problem is better or worse than the other, and neither implies something about your intelligence or your capabilities or how much you actually know or understand. They’re just different ways in which you internalize those kinds of facts about the world.
Being able to provide the right method to the right person is important. And we finally have enough data and interaction with people that we have some hope of knowing how best to describe a particular concept to them.
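A selection engine of this kind can be sketched as a simple policy over engagement signals. This is a hypothetical toy, not Isbell’s system: the signal names (`quiz_retries`, `video_rewatches`) and thresholds are invented for illustration, and a real engine would learn them from data rather than hard-code them. The point it captures is that switching styles is not remediation; both explanations are equally valid paths.

```python
def pick_explanation(signals):
    """
    Toy policy for choosing between two equally valid explanations
    (geometric vs. visual) of the same concept. Signal names and
    thresholds are hypothetical; a real system would learn them.
    """
    # If the learner struggled after one style, try the other one.
    if signals["quiz_retries"] >= 2 or signals["video_rewatches"] >= 3:
        return "visual" if signals["last_style"] == "geometric" else "geometric"
    # Otherwise the current style seems to be working; keep it.
    return signals["last_style"]

print(pick_explanation(
    {"last_style": "geometric", "quiz_retries": 3, "video_rewatches": 0}))
# -> visual
```

Neither output labels the student; the policy only changes which of two equivalent presentations they see next.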
What will UBTech attendees take from your presentation?
Part of my talk will be on the background of Georgia Tech’s online Master of Computer Science and related degrees that we’re doing now. The other part will be on our AI research and what we can do with it now that we’ve demonstrated the basics.
We’ve shown, first, that you can provide an education online and, second, that there are a lot of people who would benefit. We’ve gone from providing the infrastructure and the content to being able to provide the algorithms that can individualize it to a particular person.
In the short term, we’re going to have some pretty significant impact, but this is a long-term project that’s going to take years to do right. It’s going to take a lot of iterations and quite a few failures along the way.
We’re perhaps five years away from being able to point to something and feel good about it, but we are at least 15 years away from being able to feel great about it.
Tim Goral is senior editor of UB.