The Container Conundrum
Online, everything looks like a website. In a pre-internet context, information containers were easier to interpret at a glance. You could literally feel the physical difference between a newspaper and a scholarly book. But online, those sensory and embodied experiences are muted and flattened in a browser window. A magazine article, a research report, and a blog can all look more or less the same. And lately, it’s getting even trickier. When AI-generated content can be summoned on demand, the question of “What am I looking at?” becomes harder to answer at a glance.
Our research team has been calling this the “container conundrum” because, while students need to engage deeply with what a source says (the content), they also critically need to understand how that information is packaged and delivered (the container). Adding to the conundrum is the checklist approach to source evaluation. Many of the tools we librarians give students for source evaluation (those tidy acronyms we all know) don’t always map well onto the messy reality of online information.
So our team decided to do a study that embraced the tangle and complexity of source evaluation in the wild, that is, outside of the library and on the web. We asked students about their source evaluation practices and recorded their processes in real time.
Our Study (briefly)
We conducted a qualitative study with 15 undergraduate students at the University of New Mexico, asking them to search for topics of interest and “think aloud” as they evaluated websites. What we were most interested in wasn’t whether they arrived at the “right” answer, but how they made decisions.
Our analysis is still ongoing, but our early findings suggest that students are doing complex, nuanced evaluation work, even when their judgments miss the mark.
Links as evidence
One of the first things we noticed was how often students treated links within a webpage as a kind of built-in credibility marker. If a site included in-text links, students frequently interpreted that as evidence that the author had done research or that claims were supported. Occasionally they would click through, but often, the presence of links alone was enough.
“What is this?”
When we asked students if they could identify the information format or genre, they tended to describe everything as simply “a website” or “an article,” even when encountering very different kinds of information. One student described a National Geographic magazine article as “scholarly” because it seemed well-researched and accurate. This points to a disconnect we see in instruction all the time: “scholarly” becomes shorthand for “high quality” rather than a specific genre with a particular purpose, audience, and process.
Ads as red flags
Advertisements played a large role in how students made trust judgments. Students often reacted negatively to ads, especially if they were intrusive, irrelevant, or visually overwhelming. The reasoning went deeper than annoyance; ads were read with suspicion as signals of underlying predatory economic structures. Students latched onto ads as evidence that money was changing hands, which in turn raised questions about credibility. However, not all ads were treated equally. Relevant, unobtrusive ads were sometimes tolerated.
Purpose Matters
Students also made judgments based on what they inferred about a website’s purpose. In one case, a participant interpreted a site’s design and content as evidence of good intent. The student, reflecting on a well-organized health-related site that offered substantial information for free, suggested that the creator was “actually trying to help people,” contrasting it with other similar sites that felt less credible. Moments like these highlight how students use design, tone, and perceived intent as cues for evaluating purpose, even when those cues are subtle or inferred.
Personal stories
For some participants, personal stories were received as sincere and trustworthy. For these students, the absence of visible institutional or commercial pressure made these sources feel more authentic, even benevolent. One moment in particular stood out when a student described a blog as “less biased.” That caught our attention. When we asked her to explain, she framed bias not as subjectivity, but as persuasion, agenda, or commercial influence. In her view, the blog was simply a space where people shared personal experiences within a community of mutual interest.
Avoiding bias
The acceptance of personal stories stood out because it cut against a broader pattern: students generally framed credibility as a distinction between facts and opinions. Sources that appeared neutral, informational, and “just giving the facts” were preferred. This led many students to avoid blogs, social media, or user-generated content altogether because they associated those spaces with opinion. Only occasionally did a student take a more nuanced view, recognizing that bias is often unavoidable and something to be interpreted rather than eliminated.
The Situation
Students did not treat evaluation as a routine step. Instead, they described adjusting their level of scrutiny based on the situation. If the stakes felt high, like something related to work, health, or real-world consequences, students were more likely to verify information. If the stakes felt low, they were less concerned. In other words, evaluation wasn’t always automatic, but often contextual, shaped by the situation at hand.
So what does this mean for teaching?
If there’s one thing we’ve taken from this work so far, it’s that source evaluation is complex. Our teaching should reflect that complexity, not try to explain it away.
Many of the checklists we use aim to make evaluation more manageable. But in doing so, they can sometimes obscure the very thing students are grappling with: that evaluating information is contextual, interpretive, and often ambiguous.
We do have one enduring recommendation to hammer home. We observed that students needed a way to name and organize what they were seeing. One place we might start is with a more meta concept: information formats or genres. These concepts describe how texts are categorized based on shared conventions, structural patterns, and audience expectations. Consider asking students to begin source evaluation by answering questions like:
- What is this?
- What is it trying to do?
- How is this kind of information made?
This doesn’t solve the problem, but it gives students a way to orient themselves in an information environment where format containers are less visible and the signals around the shape of information are more subtle. Writing instruction makes genre explicit: students learn that different contexts demand different forms. Yet when it comes to reading and evaluating sources, genre recognition is often treated as intuitive rather than named.
And we can all stay curious, keep working on it, and find better ways to understand and teach interrelated concepts in information literacy like authority, credibility, and source evaluation. There’s a reason this problem hasn’t been solved with a snappy four- or five-letter acronym. It’s interesting, subjective, difficult work, and it’s fun too. We had fun with our students during the study, laughed with them over the various websites, and enjoyed their curiosity and their interesting, insightful observations.
This post by Alyssa Russo, Undergraduate Engagement Librarian, University of New Mexico, and Lori Townsend, Assessment Coordinator, University of New Mexico, is based upon their presentation at LILAC 2026. The full slide deck can be found in the archive here.

