
Fifty shades of bias

Danielle DeRise

Danielle DeRise is a Lecturer in the School of Writing, Rhetoric and Technical Communication at James Madison University in Harrisonburg, Virginia. In this guest post, she considers how perceptions of bias can influence our engagement with information.


For many semesters now, I’ve been engaging my university writing students in conversations about our increasingly digitalized lives. We’ve discussed an excerpt from American psychologist Jean Twenge’s 2017 book that connects smartphones to the high rates of anxiety and depression reported by “iGen”, Twenge’s term for the cohort to which today’s university students belong. We’ve also looked at a 2018 profile of a technologist who began worrying about deepfakes and other digital manipulations years before anyone else. I should say, I thought these texts would initiate conversations about our increasingly digitalized lives, and sometimes they do, but I’ve also noticed many students initially dismiss the articles as “too biased”—against younger generations in the first case and against technology in the second.

After much puzzling over this trend, I’ve decided it may be most useful as a window into the general public’s lack of consensus about what we even mean when we say something’s biased: everyday definitions seem to range from the fairly accurate—an observation of an author’s leanings—to, more interestingly, almost a reflection of the reader’s response to the material, such as, “I disagree; therefore, it’s biased.”

With students, I work on shifting ideas about bias from the static to the more dynamic, which accomplishes two goals: (1) it highlights the timeline of an information cycle, starting with a writer’s early handling of a topic, progressing to the investigative methods, and ending with a reader’s relationship to the published product; (2) it emphasizes that not all bias is equal—there are important differences in type and severity and intent. In other words, we try to establish when bias occurs, and at what stage(s) it poses the biggest problem. The hope is that if we spend time unpacking these “shades” of bias, we can work toward building a more comfortable relationship with the information that bombards us on a daily basis.

Of course, we might all worry less about bias if we vowed to read only peer-reviewed journal articles, though doing so wouldn’t completely solve our problems either: scholarship is often impenetrable to anyone outside an expert audience, and many people don’t have access to these sources in the first place. Unfortunately, the “all bias is bad and must be shunned” mantra means anything below the scholarly or factual-reporting threshold often gets too hastily rejected by students. While I don’t advocate relying on overly biased sources, I also encourage starting any information quest with what’s accessible, which, for most of us investigating most topics, means whatever we can look up online.

The problem is, vetting online sources involves considerable work. When credibility isn’t obvious, and it often isn’t, I provide students with some basic tools, the most user-friendly of which seems to be Sarah Blakeslee’s 2004 invention, “the CRAAP test”, which combines currency, relevance, authority, accuracy, and purpose to guide the source evaluation process. With many information literacy specialists questioning CRAAP’s usefulness in the digital era, I’ve also been presenting elements of Mike Caulfield’s SIFT, a method that calls for stopping, investigating the source, finding better coverage, and tracing claims back to their original context.

Digital content tests conventional boundaries in ways few of us could have anticipated; ultimately, any instrument can at best serve as a baseline from which to generate more nuanced observations, and no instrument can replace good old-fashioned critical thinking. For example, many students have heard that “.orgs” are less biased and more credible than “.coms,” which isn’t always wrong, but unless we dig into an entity’s purpose for dispensing information, this rule, taken at face value, might lead us to place more trust in a political party’s website than in a Reuters article for balanced reporting on gun laws or health care or any other issue on which partisan organizations might benefit from a certain presentation.

Regardless of the tool used, though, a dynamic view of bias also helps emphasize that reading and writing are symbiotic processes. Otherwise, it’s easy to regard evaluation as an action we perform only on other people’s sources, rather than recognizing that the authors we choose to cite reflect on our own eventual credibility as writers. To address the need for self-assessment in the process, my colleague Michael Hickman proposes “SCRAAP”, adding the S to direct attention to “self-bias,” something we all have. In a similar move, I teach an article called “How to Avoid Biased Thinking” as a way to discuss how our personal experiences and preferences may cloud our research processes if we are not mindful.

The amount of work involved in bias detection leads many students to believe if only everything were 100% unbiased—straight “reports on topics” of the sort found in encyclopedias—we would all know exactly what to trust. In class, many students wonder why we can’t read “just the facts” or why authors don’t always acknowledge “both sides” in absolutely equal measure. These are good impulses that benefit from some flexibility.

When deciding if omission of a “side” reveals bias, it can be useful for students to differentiate an inflammatory tone from an inconvenient fact. For example, the writer who labels Dr. X a “dangerous pathological liar” should sound pretty different from the one who states, “Dr. X lied on these ten occasions.” If we accuse the second of bias for simply failing to list Dr. X’s good deeds (or neglecting to state the positives of lying), then we perpetuate our current global slide into truth devaluation. It’s also important to think about when any purported bias might have occurred. Does the reporter fail to even investigate potentially exculpatory details? If so, that might be a problem, possibly explained by a personal bias. Assuming a reputable publication source, though (which, again, CRAAP or SIFT can help determine), it’s more likely there is enough solid information to dismiss the polar opposite side (that Dr. X is a noble truthteller) as no longer viable. Omitting that side, then, represents less a bias than an evidence-based judgment, one for which a paragraph lavishing praise on Dr. X would actually lessen the writer’s credibility.

Ultimately, I don’t know why bias anxiety seems so prevalent in classrooms these days. It would be easy to blame the 2016 election and the ensuing political polarization of American culture, but I’m aware that this may simply be when I started paying close attention. Am I falling into observational-selection bias by assuming students are mentioning bias more, when it’s actually just that I’m noticing it more?

Or maybe there is an increased focus on bias, reflecting, at least in part, Americans’ troubled relationship to the media. Even before the so-called “fake news” era, public trust in the media had plummeted to record lows. According to Gallup polling, in 1997, 53% of Americans reported they trusted the media a great deal or a fair amount, down from a record high of 72% after the Vietnam War. By 2016, that percentage was 32%. More recently, President Trump’s claims that certain media outlets [unfavourable to him] are the enemy of the people seem to have gained traction with certain audiences. A Pew Research study released in late January 2020 revealed that American Republicans and Democrats rely on nearly opposite sources for their news.

A whole other conversation could be had about whether it’s good for so many news sources to express partisan leanings, or about the extent to which cable giants like Fox and CNN have poisoned the well. We also have to confront the reality that clickbait and outrage and telling people what they want to hear all turn profits in a struggling industry.

That said, the dialogue about media bias is really not the one I want to prioritize, either here or with students, not because it’s trivial, but because “media bias” complaints seem to be exacerbating the confusion. For example, it’s common for students to conflate social media users posting false claims of a link between vaccines and autism with the adjacent conversation about media bias. The distinction between “media” and “social media” is straightforward enough, but the picture grows far more complicated when that familiar media-bias narrative gets equated with “fake news,” a term used indiscriminately to discredit content that ranges from the hyper-partisan to the accidentally misreported to the true but simply unflattering.

In a September 2019 Scientific American article, Claire Wardle, director of a nonprofit that analyzes misinformation, notes, “recent research shows that audiences increasingly connect [the term fake news] with the mainstream media. It is often used as a catchall to describe things that are not the same, including lies, rumors, hoaxes, misinformation, conspiracies and propaganda, but it also papers over nuance and complexity.” It seems like one risk of losing nuance and complexity is also losing a sense of perspective, particularly with respect to degree or intent. Good-faith writers who express their biases may sometimes deserve criticism, but such cases pale in comparison to deliberate attempts to wreak havoc in and on institutions that form the bedrock of democracies.

It’s also worth it for students to remember that “fact-reporting” is only one of writing’s myriad purposes. The American Press Institute reminds all of us that journalists aim to provide information so people “make the best possible decisions about their lives, their communities, their societies, and their governments.” Straight news can be every bit as incomprehensible as scholarship unless we’re already well versed in the “who, what, when, and where” of the issue. Well-written analysis, then, can help us wrap our heads around why something happened, what it may mean, and what can be done. Many articles I assign for class include writers’ subjective interpretations of events, so students are right that these are not completely neutral. My hope is that through our conversations, they can come to see value even in the “slightly biased,” and that as they deepen their awareness of a particular issue, student writers can become the ones who act as “checks” against more extreme biases they observe in other sources.

Moreover, since nothing touched by human hands is completely devoid of bias, much can be gained from engaging with—and challenging—even the most credible of experts. Some students, for example, notice that scholars have begun publishing questions about Twenge’s iGen study. Critiquing flawed design or imperfect research methods certainly demonstrates a more nuanced understanding of bias than simply dismissing a writer for having an inexplicable vendetta against teenagers or technology.

The amount of faulty information circulating throughout the 2016 election cycle seems to have been one catalyst for a sea change in American academic and public concern about evaluating information. Countless articles and books on the subject have been published in the last three years. Some of these authors, like the University of North Carolina’s Zeynep Tufekci, have recently become leaders of newly established centres for studying technology’s impact on democracy. Many other colleges and universities have been responding, too. Bias and misinformation were hardly new concepts as of 2016, but what does seem newer, at least from my vantage point as a university writing instructor, is the effort to unite conversations about information literacy, writing, and citizenship.

James Madison University, where I’ve been teaching for seven years, opened its Center for Civic Engagement in 2017. On its homepage are links to information literacy tools, including CRAAP and SIFT, as well as to tutorials created by the JMU libraries to prepare students for the information literacy test they must pass in their first year. In 2018, Piedmont Virginia Community College, where I taught for nine years, started requiring an information literacy component in all first-semester English classes. Some of this content had been introduced to students before, but housing it in a course that teaches students to find trustworthy sources for their own writing is a recent development.

I’m not naïve or optimistic enough to claim that a renewed consensus on the definition of bias can inoculate us against bad information or those with an agenda to dispense bad information. However, I don’t think it’s useful to succumb to cynicism, either. I’d like to think having these conversations may boost our immunity for the battle ahead, especially with many warnings that misinformation has already infected the 2020 American election cycle. After all, the stakes couldn’t be higher. For students, it’s about relying on sources that bolster their own credibility as writers and thinkers. For all of us, it’s about challenging a tide of fake news and fake charges of fake news, so that we’re better informed and better poised to participate meaningfully in civic life.


Thank you to Laura Skinner, Assistant Professor and Reference/Instructional Librarian at Piedmont Virginia Community College, for offering valuable insights.

