It has been three months since the US company OpenAI introduced ChatGPT, the most user-friendly interface yet for accessing artificial intelligence. You can chat with it, even in Czech: it solves mathematical problems on command, translates between languages, and writes emails, poems and essays. No other online service has seen such rapid growth in users: a hundred million people tried it in its first two months, while even TikTok needed several times longer to attract such interest.
Universities have taken different approaches to the use of ChatGPT and similar AI tools, from banning them to recommending them for use in teaching. In February, the European University Association therefore issued a short set of recommendations on how to approach such tools. Read more in an interview with its author, Thomas Jørgensen, EUA's Director of Policy Coordination and Foresight.
ChatGPT and similar AI tools are, according to some, a game changer. Moreover, the possibilities they offer are developing rapidly, so we cannot predict exactly how they will affect our world. How should traditionally conservative institutions like universities approach their use?
The most important thing is to realise that the emergence of these tools is simply a reality we have to deal with. We can't ignore it, dismiss it as no big deal, or simply not worry about it. Moreover, universities have a responsibility to prepare learners to work with a variety of tools, including AI tools from different sources.
So, do you advise using ChatGPT at universities?
Universities have traditionally been in the business of producing texts. Now everyone has a tool that produces a certain type of text. So why not make some use of it?
The important thing is to understand how ChatGPT and similar tools actually work. ChatGPT puts words in a probable sequence, in an order it has already seen in the texts it was trained on. This means the facts it gives us may happen to be true: if you type in Columbus, you get the correct year, 1492, because those items of information most often occur together. But if you ask about something more uncommon, it usually cannot give you the right answer. Ask it for a list of literature on a particular research area and it produces names that are plausible but not correct. So the best way to deal with this tool is to play with it, because then you will understand its limitations. For example, you can tell students to compile a list of the literature in their field of study; they will then see that what ChatGPT presents them with is not quite right. It gathers all sorts of information from the past and tells you where we are. It can be helpful when you need standardised text, for instance when you are writing a reception invitation or a formal email. It also writes poems extremely well.
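To see what "putting words in a probable order" means in practice, consider the following toy sketch: a tiny bigram model in Python that picks each next word according to how often it followed the previous word in its training text. This is only an illustration of the principle, not how ChatGPT itself is built (ChatGPT uses a far larger transformer network); the miniature corpus and the helper name next_word are invented for the example.

```python
import random
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real model is trained on.
corpus = (
    "columbus reached america in 1492 . "
    "columbus sailed west in 1492 . "
    "columbus reached america in 1492 ."
).split()

# Count how often each word follows each other word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Pick a continuation with probability proportional to its observed frequency."""
    counts = follows[prev]
    words = list(counts)
    weights = list(counts.values())
    return random.choices(words, weights=weights)[0]

# "in" and "1492" co-occur in every sentence, so the model gets this "fact" right:
print(next_word("in"))        # -> "1492" every time in this corpus
print(next_word("columbus"))  # -> usually "reached", sometimes "sailed"
```

Because "in" is always followed by "1492" in the toy corpus, the model reliably reproduces that fact, just as ChatGPT reliably pairs Columbus with 1492; for word pairs it has seen rarely or never, the very same mechanism produces plausible but wrong continuations.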
Is it just much ado about nothing, then?
No, you can certainly use it creatively: say, okay, this is a sort of starting point and I need to add something to it. That is how it can be presented to students. For example, let them have ChatGPT write a 2,000-word essay on a topic in their field of study, then let them figure out what is wrong, what could be debated, and think about what it takes to create something better.
Do you have any idea how universities in Europe are dealing with this now?
At the moment we are seeing two main reactions. The first restricts the use of ChatGPT, arguing that it comes very close to plagiarism: we should know who wrote a text, and if it was a machine, that is fundamentally against academic values. The second reaction is less strict and says, let's experiment, let's try to adapt it to our teaching. We at EUA favour the latter. Of course, we need to talk about the risks, and they should not be underestimated, but we also need to see the benefits and prepare for the new reality, not ignore it.
But wouldn't you be wary of recommending it to teachers or students? Similar tools, ChatGPT probably among them, will sooner or later become paid services.
In this case, one might consider institutional subscriptions. But that is still in the future, and it is hard to predict. Why not also imagine a community of researchers building an academic ChatGPT from academic texts? The technology ChatGPT uses is not revolutionary, but it is very resource-intensive. The real challenge, I think, comes in the area of the curriculum.
In what sense?
We have to think about what we are teaching. Do we teach creativity, original thought, new ideas, respect for values? And what are we actually assessing? Is it creativity, or just the ability to write a text that conforms to a style a machine could easily reproduce?
In terms of integrity, should there be a common code of conduct at universities that prohibits the use of AI for certain aspects of students' and researchers' work?
I think it is difficult to predict anything; it is early days, and we have not yet seen the full impact of these innovations. But in the future, I think there certainly should be. Learners need to be taught how to use AI-based tools in accordance with academic integrity. For example, I like the idea that the AI only produces a rough draft of a text and then you finish it.
And declare AI a co-author?
No. AI is not a co-author; it is not responsible for anything. It only becomes interesting when AI can give you exactly the right answers. Then you can automate parts of the research process, and only then do you start asking questions such as, ‘What is the human contribution?’ and, ‘What does that actually mean?’
Do you think that now is also the time to bring research disciplines besides computer science into the discussion, so as to think about AI from different angles?
Of course. The role of universities is to look at AI in a holistic way. I would be thrilled to see researchers in the humanities collaborating with others and really trying to look at creativity, consciousness, information, and everything in between, and trying to understand the relationship between human and machine in an experimental way. A whole new world is opening up in research. For now, we're talking about ChatGPT, which can be a fun toy to play with, but I think the really tough questions are yet to come.
What kind of questions, for instance?
If a machine can put correct information together so that new knowledge emerges, that poses some very hard questions about what is uniquely human in the research process. It also raises the question of how much of our thought and behaviour is mere repetition of learned patterns that a machine can replicate.
Anyway, universities as institutions cannot be automated, so I am not afraid of that; on the contrary, I am curious to see what happens next. But at the same time, I am very glad that we will have EU legislation on what really should be forbidden. AI is a tool, and it can be used for bad things as well as good. Here it is also important to have precise guidelines and regulations for ethical and responsible AI research. For example, I think we need a good understanding of how AI can be used for subliminal manipulation, even though subliminal manipulation will, and should, be forbidden outside the laboratory. That aspect of AI at universities could prove more important than dealing with ChatGPT.