Beneath the Algorithm: Examining AI’s Grasp of Information Literacy

By Sarah Pavey

In today’s digital age, being information literate is more critical than ever. But can artificial intelligence (AI) language models be truly information literate themselves? That’s the question I explored by posing a series of questions to several leading AI chatbots: Claude, Pi, ChatGPT, Gemini, and Copilot. Their responses provide fascinating insights into the current capabilities and limitations of AI when it comes to information literacy – and what that could mean for educating the next generation.

Defining Information Literacy

To kick things off, I asked the simple question “Are you information literate?” Most of the AI models responded by defining information literacy as the ability to find, evaluate, understand, and effectively use information from various sources. ChatGPT took a more existential view, stating that as an AI, it doesn’t truly “know” things the way a human does, but rather generates responses based on statistical patterns in its training data. This highlighted a key distinction – AI models may have access to vast information, but they do not innately comprehend it like humans. As Claude wisely put it: “I do not have true sentient understanding, just superficial modelling based on my training data.”

Thus, the implication for schools is that students need to be taught about the limitations of AI models, emphasising the value of human critical thinking skills in information evaluation.

Media literacy skills are crucial for students to understand how AI models generate responses and the potential biases inherent in their programming. If students of the future will need to collaborate with AI models, combining AI’s information-gathering capabilities with their own critical thinking skills, then banning this technology in schools is neither productive nor helpful. How will these students learn the competencies they need?

Acknowledging Fallibility

I next pressed the models on how they can justify claims of being information literate when they admittedly make mistakes. Across the board, the AIs readily acknowledged their potential for errors and outlined factors like:

  • Data limitations and biases in their training sets
  • Difficulties with ambiguity and context in natural language
  • The inability to stay current on rapidly evolving information

Gemini, in particular, gave a thoughtful response about citing sources, flagging uncertainties, and learning from feedback to improve over time. The candid recognition of fallibility was refreshing and important – true information literacy involves understanding one’s limits.

This weakness in AI models offers important lessons for teaching information literacy in schools. Educators can encourage students to critically assess the reliability of information.

Additionally, by addressing the challenges AI models face in interpreting ambiguity and context in natural language, teachers can help students navigate these complexities and enhance their own critical thinking skills. The concept of continuous learning demonstrated by AI models provides a valuable opportunity to underpin a growth mindset in students, encouraging them to view mistakes as an essential part of the learning process.

AI models can also be used to teach students about the importance of crediting information to its original source, promoting academic integrity and reinforcing the idea of building knowledge upon reliable and credible information. Integrating these lessons into information literacy education will empower students to become responsible and discerning users of information, equipping them with the necessary skills to navigate an increasingly complex digital landscape.

Understanding Nuances vs. Definitions

The answers gained thus far from the AI models were very data driven, and yet information literacy, as we know, is hard to define. I was concerned that the AI algorithms produced answers such as you might receive from an autistic savant, and so posed a different, reflective question. I challenged the models on whether they truly grasp nuanced information literacy concepts beyond just reciting definitions. Most, like Gemini, acknowledged their limitations compared to human-level understanding. While they can highlight credibility criteria and identify potential biases, they struggle with deeper reasoning, real-world contextualisation, and critical thinking. Copilot admitted to simply “providing information based on my training data,” not possessing innate comprehension. Yet it still aimed to explain nuances by tying in perspectives from groups like UNESCO and academic libraries. This hints at AI’s potential to compile diverse viewpoints, even if true understanding remains elusive.

This inability to understand beyond defined, data-driven answers becomes even more apparent when you involve an AI tool that creates images. Students might draw the concept of information literacy using illustrations that depict the impact it could have on people’s lives, but this is beyond the capabilities of the generative AI programme. AI relies heavily on factual evidence. Humans may dwell more on the information content itself and its use, in contrast to AI’s interpretation of information literacy, which focusses on how to find the data and information needed.

A hand holding a phone, with words such as “disinformation”, “fake news”, and “propaganda” overlaid onto the image
Human interpretation of Information Literacy
A watercolour-style AI-generated image of a computer screen, pens, what looks like a clipboard, and a calculator
Canva interpretation of Information Literacy


Teaching Information Literacy in Schools

A core question was whether these AI models could effectively teach information literacy skills to schoolchildren. The responses revealed some ideas for lesson plans, some quite innovative, but all rooted in delivery by a teacher or librarian. I asked if the AI was able to conduct such a lesson itself, but here the AIs consistently emphasised that human teachers are irreplaceable for leading actual lessons. The rationale was that teachers and librarians provide real-time feedback and tailor instruction to individual students. As Claude stated, “For effective information literacy education, a human teacher is crucial.”

This led to my final question. I asked if AI would replace school librarians’ roles in nurturing information literacy competencies. Unanimously, the responses asserted that human librarians are irreplaceable for now. The models highlighted unique roles like:

  • Providing personalised guidance and creating supportive learning spaces
  • Curating physical book collections and cultivating a love of reading
  • Building community partnerships to enrich literacy initiatives
  • Adapting instruction to individual students’ needs in real-time

Nonetheless, the AIs saw potential for collaboration. As Gemini suggested, “Librarians can design engaging learning experiences, and language models can provide vast information resources and personalised assistance.” This combined human-AI approach could enhance information literacy efforts in meaningful ways.

The Importance of Cultural Context

What can we learn from this experiment? One aspect that received less emphasis from the AI models is the role of cultural context in information literacy. To be truly literate on a global scale requires understanding how different societies, histories, and values shape the production and dissemination of information. For example, evaluating the credibility of sources from an indigenous community’s oral traditions versus a Western academic journal involves very different contextual lenses. An AI trained primarily on data from one cultural viewpoint may struggle to appropriately validate information from another perspective. Encouraging multicultural literacy – the ability to engage with and understand information across diverse cultural contexts – is thus vital for empowering global digital citizens. Human educators experienced in navigating cultural complexities have an essential role to play here.

How Do School Students Cope in the Age of AI?

As these AI models illustrate, artificial intelligence is becoming increasingly literate when it comes to accessing and processing information. But true information literacy – understanding nuances, reasoning through ambiguities, integrating multicultural contexts – remains a uniquely human capability. In the age of AI and virtually unlimited information availability, education cannot just be about memorising facts. It must focus on cultivating literacies that AI cannot automate – creativity, emotional intelligence, ethical reasoning, systems thinking, and multicultural understanding. Yet this is sadly lacking in our education curricula and assessment protocols. It is compounded further by campaigns to ban AI technology through fears of cheating.

School librarians will be invaluable as guides through this new landscape. However, their roles may shift towards mentoring students in developing higher-order cognitive skills and cultural literacies. They will need to integrate AI assistance thoughtfully, while separating signal from noise.

As this exploration reveals, true information literacy is like venturing into an unknown landscape. Artificial intelligences are important explorers, but they cannot traverse the full terrain alone. By striking the right balance between AI’s potential and the indispensable role of educators and culturally attuned guidance, we can empower the next generation to be discerning global citizens … but by working with our new AI colleagues not closing the door!


Anthropic (2024) Claude Home Page. Available at:

Canva (2024) Canva Home Page. Available at:

Google (2024) Gemini Home Page. Available at:

Inflection AI (2024) Pi.ai Home Page. Available at:

Microsoft (2024) Copilot Home Page. Available at:

OpenAI (2024) ChatGPT Home Page. Available at:
