
More than robo-markers: the possibilities of AI in education

To some, artificial intelligence (AI) sounds like a futuristic possibility. But it’s already here, and it’s prolific. Apple’s Siri and Amazon’s Alexa are just two examples of advanced AI embedding themselves into our lives.

Education, too, is not immune to AI’s creep. When it comes to applying it to learning, however, Simon Buckingham Shum, a professor of learning informatics at UTS and director of the university’s Connected Intelligence Centre, and Simon Knight, a lecturer in learning analytics at UTS, say computers marking essays is just one direction we could take.

Campus Review asked them about its other surprising uses, and of course, whether the robots will take over.

CR: How is artificial intelligence currently being used in education settings?

SBS: If we’re talking specifically about products, then you tend to see AI being designed to help people do the kinds of things that schools and students frequently want to do, which is usually passing tests and mastering the curriculum. The most common thing that we’ve seen recently is what’s called an adaptive learning platform, which sort of coaches you without ever getting tired or frustrated with you.

SK: Researchers at universities and big education companies like Pearson have developed tools that can analyse student writing, but often that’s being used to grade the writing, rather than to give feedback to the students. They’re using it in a much more summative way than we’re generally interested in.

The other thing in current classrooms and lecture theatres is that AI’s coming in indirectly, insofar as it’s in lots of other technologies that people are using. It’s in people’s mobile phones and various other devices that are pretty commonplace. They aren’t being used in an explicitly designed way, but the AI is still there.

SBS: It’s in smartphones. It’s in Google search. It’s in image searches. It’s just almost become part of the infrastructure. We don’t think of those as especially educational. Of course, though, they have huge educational uses.

You make a distinction between using AI as an educational tool instead of using it as an educational replacement. Can you explain this a bit further?

SK: Yes, so one way of thinking about AI is that we automate some things that are already going on: automatic marking, for example, or potentially even developing an artificial tutor instead of having a human one. Another approach is, instead of simply trying to replace things that are already there, to think about how technology can help us to develop new approaches to teaching and learning, or how it can support the best of our existing practices. This is called intelligence amplification, or augmented intelligence: using AI to support human intelligence and decision making.

Can you give an example of the use of such technology?

SK: Some of the work that we’re doing is thinking about how we can use various technologies, some of which build on elements of AI, to provide feedback to the students and academics. One of our colleagues is working with nursing educators to look at how we can collect data on how people interact with patients, and then provide feedback on the quality of the interaction and the quality of their teamwork. The idea here is not that we want to do away with human feedback, but that we add a new kind of feedback; it’s augmenting what’s already there.

SBS: A lot of the excitement about big data, which is the catchphrase that everybody’s talking about now, whether it’s finance or healthcare or manufacturing or social media, is that it provides feedback loops. These allow you to monitor the system that you’re interested in, in higher and higher fidelity and closer and closer to real time.

So, if we take that idea into education: feedback in this realm has a long and chequered history. We know how important feedback is and we know how bad feedback can be when you’ve got hundreds of students who would all like informative, quality feedback. We can never give enough good feedback to students. This is an area where educators, in our experience, can be quite open to the idea that there may be a role for AI. It could look at the data and, with careful design, give feedback to the learner, so that they can start to take more and more responsibility for their progress. They gain an increasingly real-time understanding of where their strengths and weaknesses are, and they can figure out how to adjust their course, rather than waiting for feedback that traditionally comes at much more spaced-out intervals. Better feedback, really, is key to the role we see AI playing in the future.

You’ve also mentioned that AI should be taught in conjunction with its use. Why is this important?

SK: If we want our students to go out in the world, then they really need to understand the basics of how society now functions, and that can’t ignore AI and the data it works from. Partly, that’s about the fact that we would like our students to get jobs in a changing professional world. The professions are becoming increasingly datafied. AI is entering more professions. But this shift also has an impact on civil society, as we saw in Donald Trump’s use of Facebook targeting in the election campaign or the recent ruling on Google’s access to patient data. You really need some basic understanding of how data is being used, what its potential is and what the risks are as well. So, we really need to be teaching our students to think critically about that, both in terms of their own professional development, but also in terms of their wider participation in society.

So, in a more positive light: you’ve mentioned that AI frees humans to focus more on innovation, because it can take over the lower-order tasks. Can you explain that?

SBS: One thing that we’re seeing around employment, for example, is that routine work – and that includes routine cognitive work, not just routine manual work – is under threat. It’s likely to become a less desirable thing to do because it will be so poorly paid, since machines will be able to do it so much more efficiently. In those areas, we’ve got a major challenge in thinking about the future of the workforce. That could exacerbate the divide between the haves and the have-nots. That should be a concern for us all.

One of our arguments is that the educational system cannot afford simply to teach people to do things that will be automatable in the future, because we’re essentially preparing people to be made redundant. What we’ve got to do is focus on cultivating curiosity, creativity, collaboration, ethical thinking, questioning — those thinking-outside-the-box qualities, which machines aren’t very good at, at least not at the moment. But our education systems are really poorly tuned for cultivating those qualities. There is a role for AI in cultivating those qualities, we think.

Now moving on to a related area, algorithmic accountability. Can you explain what this is and how it applies to an education context?

SBS: AI is partly about algorithms and partly about the data that it crunches. AI and algorithms are permeating society in ways we’ve never seen before. Increasingly, quite high-stakes decisions are being made about whether you’re going to get a job interview, whether you should get parole, whether your health insurance should go up or down, and whether you should even get health insurance. Machines are starting to make decisions, or make recommendations about those decisions, and there’s a growing social justice and accountability movement, quite rightly in our view, that asks: ‘who’s designing these rules?’, ‘who’s designing these pattern recognition algorithms?’, and ‘when they get it wrong, who do we call?’

Invariably, we don’t know who to call. If your credit card suddenly gets blocked because you’ve been apparently behaving suspiciously, when you call the company, they won’t know why your card got stopped. It will just be that the computer thought something looked a bit suspicious. So, when we think about education, one way of framing this is: how many black boxes do we want in our education system? Is that acceptable? And when somebody does want to know what’s going on, how are we going to get answers?

Now, obviously, we do trust our lives to black boxes all the time. Nobody expects or wants to understand the detailed engineering of their car, or how their television works. But somebody should be able to ask, in a university or a school, ‘why is this maths tutor recommending this for Johnny?’ And there needs to be a suitable answer. And like beauty, “transparency” and “accountability” are also in the eye of the beholder: a teacher will want a different kind of explanation to an educational researcher, or an AI researcher. Ultimately, it comes down to trust.

That sort of feeds on to my last question, which is: do you think that the widespread use of AI in education is inevitable, and if so, will this generally be a positive or a negative thing, based on where the technology currently stands?

SK: It’s hard to imagine it not increasing in some form. Whether it’s positive or negative will depend on a number of choices that society makes. There certainly could be negative consequences of the increased use of AI, but I don’t think either of us would say that has to be the case. We see huge potential for it as well. Huge potential for good.

SBS: It really comes down to what your vision of education is, right? We’ve got a turbo-charged engine here. We can put it in one of the existing vehicles and simply make it go faster and be more efficient or we can say, “well, this enables us to build completely new kinds of vehicles.” Perhaps we can now build an all-terrain vehicle, which equips the learner to deal with much more complexity and much more challenge than they could previously — because that’s just about the only thing we can be sure the future holds.

SK: Another point is that there is a potential for differential access to education around the use of AI. It might be that some students get exposed to high-quality AI in really innovative classrooms and lecture theatres, whilst others don’t. There’s potential for that to create a two-tier system. Of course, the flip side is the potential for some schools to combine really high-quality teachers with technology, while other students, and the schools they go to, are given scripted lessons delivered by AI. I think we should be concerned about that.

Is there anything else you’d like to add about AI in education?

SBS: We have to ask how these tools come to be conceived and designed. Right now, big publishing companies and educational technology entrepreneurs are pitching “learning dashboards” to schools and universities — but based on what notion of learning, and using what kinds of assessments? The question we’re actively looking at now is: ‘how do you involve educators and students in the design of these tools?’ This helps to get around a number of concerns that people rightly have about the use of data and AI. If educators are giving early feedback on the design of these tools, then it’s much more likely that the tools will fit in with how those educators want to teach, and much more likely that they will be accepted.
There’s a large graveyard of ed tech tools out there that never quite made it because nobody ever talked to the teachers. This involvement could also help calm the concerns that always pop up around automation: that teachers are going to be laid off in droves.

Secondly, what about the students who are going to be on the receiving end? We’ve failed if students feel that they’re paying high fees to come to university, only to be sat in front of a robot. We want students to be able to try the systems that we’re giving them, while still being engaged with by real people, and to say, “the fact that I can get feedback 24/7, for example, is really adding value to my studies. The fact that the system is tracking me, I’ve decided, is not a dystopic, big brother scenario; rather, it shows the university cares, and that if I’m struggling, it is better able to support me…” So guess what, we’re back to trust. It’s a hybrid intelligence future, so we need to co-evolve people and machines to create a future we want.

