
AI in Higher Education: Recent news and resources

In this blog post, Laura Woods, Deputy Chair of the CILIP Information Literacy Group, gives a summary of Artificial Intelligence (AI) in Higher Education, highlighting recent news on the topic and some useful resources.


As the new term begins in higher education, all the conversations in the sector seem to be about generative AI and its impact on academic integrity and assessments. Here are some recent developments on the topic:

Like many universities, my own institution has been working on guidance for students and staff alike about acceptable uses of tools like ChatGPT. I’ve been looking around at what other universities have produced as well – so far I think my favourite is this guide to using AI in your studies from the University of Queensland, Australia. The guide walks through some potential use cases and advises on the risks of these tools, which I thought was a good, balanced approach.

One use of ChatGPT I’ve encountered a few times over the summer is using it to generate a list of suggested sources on a particular topic. I’ve had two separate members of staff approach me for help finding articles they’d been unable to track down. On investigation, it turned out these articles didn’t exist – they’d been generated by asking ChatGPT for recommendations, and the AI had simply made up some plausible-sounding titles. In both cases, the staff members who’d asked for help were unaware of the known problem of AI “hallucinations” (a fancy term for the tendency of AI tools to invent information).
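
For what it’s worth, the quickest check I’ve found for whether a suggested article actually exists is to search for the title in an open bibliographic index like Crossref. Below is a minimal sketch of how that could be scripted against the public Crossref REST API – my own illustration rather than anything from the conversations above, and the example title is invented.

```python
# Sketch: check whether a suggested article title resolves to anything real
# in Crossref. If none of the top matches resembles the suggested title,
# that's a strong hint the citation may be hallucinated.
import requests

def find_on_crossref(title: str, max_results: int = 5) -> list[dict]:
    """Search Crossref for works matching a title; return candidate matches."""
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": max_results},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    return [
        {
            "title": (item.get("title") or ["(no title)"])[0],
            "doi": item.get("DOI"),
        }
        for item in items
    ]

# Invented example title, for illustration only:
for match in find_on_crossref("Information literacy and generative AI in universities"):
    print(match["title"], "-", match["doi"])
```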

I find it interesting that in both cases, these were members of academic staff who were trying to familiarise themselves with new-to-them research fields, and had seen ChatGPT as a shortcut to finding the key literature. This seems like a use for AI that, if it worked, would be perfectly acceptable – after all, they had planned to actually find the articles and read them! It strikes me as similar to reading a Wikipedia article to grasp the basics, then following the references to find out more – something I regularly advise students to do. Given that these were academic staff who hadn’t realised that ChatGPT could just make things up (both assumed the fault was in their own search skills when they couldn’t find the articles), I wonder how widespread this behaviour might be among students. I think it’s pretty common for student work to be peppered with citations the learner hasn’t actually read, included just for the sake of the marking criteria (I heard this described recently as “drive-by citations”, a phrase I love and am 100% stealing). This habit may become more noticeable if lecturers and examiners start spotting reference lists containing supposedly seminal papers that they’ve somehow never come across in their own research!


Once I’d explained the problem of AI hallucinations, I had a long chat with one of the academics who’d come to me with non-existent articles, and gave some advice on how to spot them. One thing I pointed out was that every suggested article title was perfectly on-topic for the question prompt used – suspiciously perfect. I asked how often, when searching for articles on a topic, they came across even one paper whose title almost exactly answered their specific question. They paused for a moment, then said, sadly, “Oh. It’s just telling me what I want to hear, isn’t it?” I think that is the key to spotting AI hallucinations: a little critical thinking. If something seems too good to be true, it probably is!

I’ve attended a few webinars recently on AI in higher education, and at one (I’ve forgotten which one, and who said this – apologies!) an attendee pointed out that the skills to use AI thoughtfully and ethically are really just information literacy skills. It’s the same need to consider the source of information, how it was produced and why, and weigh up the credibility of any claims – just a new technology producing the information. 

I’d love to hear from other library folks on this. Have you encountered students or staff needing help finding articles that turn out not to exist? How about inter-library loan requests for non-existent articles or theses? I’ve heard that the “hallucination” problem is less of an issue with GPT-4 (the latest model underlying ChatGPT); however, as this model is only available in the paid service, I think most people will still be using GPT-3.5, so hallucinations will continue to be a problem.
