Since ChatGPT, a chatbot based on an artificial intelligence large language model, was launched in November 2022, the academic world has been abuzz with speculation about what the availability of tools like this could mean for the future of education. ChatGPT makes it extremely simple to generate convincing responses to prompt questions – which raises obvious questions over the risks to academic integrity.
In response, the Quality Assurance Agency for Higher Education (QAA) has published a briefing note on the “artificial intelligence threat to academic integrity”. The actual briefing note is a tad less doom-laden than the title suggests: they recommend engaging with students on the value of learning, and working with staff on designing authentic engagements rather than relying on software to detect automatically-generated text. Similarly, Jisc recommends considering how to educate on these tools and integrate them into education, rather than attempting to legislate against them.
So there has been much discussion of the impact of AI chatbots on learning and assessment, but what about information literacy in particular?
AI chatbots have the potential to impact information literacy in various ways, although there are significant downsides to their use. One concern is the limited accuracy of the information provided by AI chatbots, as they may not always have the most up-to-date or complete information and can sometimes make mistakes. Another concern is the potential for bias in the information provided, as these chatbots are only as unbiased as the data they were trained on. Over-reliance on AI chatbots for information can also lead to a decline in critical thinking and information literacy skills, as users may become less likely to seek out and evaluate information for themselves.
However, AI chatbots do offer some benefits that can improve information literacy. For example, they provide quick and easy access to information 24/7 and can be personalized to provide information tailored to a user’s needs and interests. Additionally, AI chatbots can use natural language processing (NLP) techniques to provide accurate information, reducing the spread of misinformation. By combining the convenience and personalization offered by AI chatbots with a critical eye and a strong understanding of information literacy principles, users can maximize the benefits and minimize the risks of using these tools.
Sounds promising? It should do, as the preceding two paragraphs were written by ChatGPT – which is unlikely to argue against its own usefulness! Having tested the bot on a number of topics, what I notice is that it comes up with reasonable-sounding arguments, but without any depth to them. There is nothing original or insightful in the above: had I asked anyone I know about the benefits and downsides of ChatGPT for finding information, they would probably have come up with a similar list. Which is understandable: despite giving the appearance of human-like intelligence, all the bot is really doing is repackaging the information it has been fed into a plausible-looking order. The writer Ted Chiang recently described large language models like this as “a blurry jpeg of the web”, which struck me as an apt metaphor!
Which brings me to the implications for information literacy. If ChatGPT is essentially presenting information back to you based on the data it has been fed, but explaining it as if you were speaking to a human being, that is worse for information literacy than a simple Google search – because there is no way of finding where that information comes from. If a person types their query into Google, they can (we hope) at least see the details of the website they ultimately take their information from. Through information literacy education, we can encourage people to ask questions like: who wrote and published this website? What was their purpose for doing so? What perspective are they coming from – and whose voice is missing?
Whereas when asking a question of ChatGPT, you just get the information, without any context. (Interestingly, ChatGPT will include references if you ask it to – however, these may be to non-existent sources!) ChatGPT presents its answers with all the confidence of someone in the bar late at night insisting that they remember Nelson Mandela dying in prison in the 1980s. If it tells you something you know to be false, you may fact-check it – but what if you are asking for information on a topic you don’t know enough about to spot errors?
It’s probably too soon to speculate on the long-term impact that widespread use of these tools may have on information literacy. And like any new technology, it may turn out to be overhyped. But I think it is worth considering now how AI-driven tools like this could change the way we discuss information literacy with learners, and the ways in which we encourage them to think about information.
Laura Woods is the Deputy Chair of the CILIP Information Literacy Group.