In January 2017, the UK government set up an inquiry into ‘fake news’, with the objective of investigating ways to combat ‘the growing phenomenon of widespread dissemination… and acceptance as fact of stories of uncertain provenance or accuracy’. In this article, we argue that any solution to this issue must include a substantial educational element, and that universities are excellently equipped to help provide this.
Dr Philip Seargeant, Senior Lecturer in Applied Linguistics, The Open University
Dr Caroline Tagg, Lecturer in English Language and Applied Linguistics, The Open University
Last month we were asked to provide oral evidence to the parliamentary select committee on ‘fake news’. We’d previously submitted a written report on the topic, outlining implications and suggestions from our research into the way that people interact on social media. On the same day as we presented our evidence, the government announced that it was setting up a task force to deal with the phenomenon, aimed at ‘combating disinformation by state actors and others’. From the government’s point of view, the issue of ‘fake news’ relates closely to national security. But it also has broader and more everyday implications than this. Not only has ‘fake news’ become one of the most high-profile issues in today’s political world, it’s also one of the most complex, and any solution aimed at addressing it needs to take this complexity into account.
What is fake news, and what can we do about it?
To reduce it to the essentials, there are two main categories of problem which have become associated with the term. The first is the propaganda and misinformation campaigns being waged by groups (or nations) trying to influence the public conversation around key public policy issues. This is partly a technology problem – propaganda, after all, has been around for a long time, and so the issue is more about the way it's being carried out in today's world than the mere fact that it's happening. To the extent that it is a technological issue, units such as the task force announced by the UK government are likely to have some success in combating it.
But technology is only one element of the equation. Our understanding of why people share fake stories, and what effect the stories have on people’s actions, is far less clear than the fact that it happens. An obvious point to make here is that social media is social – and thus the spread of misinformation is related to what people are doing on sites such as Facebook, as much as how the technology itself works. For this reason any solution to the problem needs to include not simply technological measures, but educational ones as well. Making people aware of how information is circulated, and how news is shaped and promoted, will give people the knowledge to make their own informed decisions.
Education versus regulation
In written evidence to the parliamentary select committee on ‘fake news’, education was the second most popular solution put forward. Education for children at primary and secondary level was seen as an important, although not immediate, solution. Self-regulation around advertising was viewed as the best short-term solution, but it was noted that this still does not tackle all aspects of the problem, especially the use of fake news in ‘information warfare’ and, we’d add, its role in polarising debate, reducing trust in journalism, and providing a means of denouncing mainstream media.
Based on the research we’ve been conducting into the way people interact on social media, we made two main recommendations to the committee. The first highlights the fact that the spreading of information online is, as noted above, a social activity. People on Facebook consume news in spaces where they’re also carrying out identity and relational work – work that can itself be quite challenging. One distinctive characteristic of Facebook is the complexity and breadth of most people’s social networks on the site. As a result, the potential audience addressed by any one user when writing a status update comprises a range of different social ties, from close family members to colleagues and acquaintances. Managing all these different relationships at the same time – whilst not offending or upsetting anyone – is a tricky and often very conscious process.
In our research, we found that people often respond to this challenging social context by trying to come across as likeable, inoffensive people, and by creating and maintaining an atmosphere of conviviality. One outcome of this is that, even when a user identifies a news story as fake or biased, they may not report it – and may even share it – because it comes from within their social network. Linked to this is our observation that users’ focus on maintaining social relationships and creating a convivial environment online may in fact contribute to the filter bubble effect, as does the belief that Facebook is not well suited to political argument. This happens because, in order to keep the peace, users often do not engage with views they find offensive, but instead quietly block offending posts or unfriend offenders, thus filtering out views with which they disagree.
The second aspect of our recommendations was the need for educational solutions, and the crucial role that higher education can play in this. Higher education institutions already teach a range of information and digital literacies, often as part of their academic literacies provision. These tend to focus on preparing students to use online sources for academic purposes, to locate and evaluate information, to collaborate online, and so on. Yet these same skills can easily be reframed for the context of how news and other media are circulated and consumed in society in general. In other words, the kinds of information literacy skills that universities now teach in order to improve students’ academic literacy can be equally useful in helping people to manage their social and public lives as digital citizens on social media.
But our argument goes further than this. Universities are also needed so that educational solutions to fake news can go beyond the provision of a check-list of information literacy skills, and instead educate students more roundly in the social dynamics of social media and the wider implications of their online actions. What is needed – we argue – is a greater critical awareness among the general public of how our social interactions and relationships influence our decisions regarding what to share or like, which in turn contributes to the circulation and visibility of news in the wider media environment.
Universities, like schools, have an obvious advantage in their expertise in the practice of effective teaching – and, unlike schools, through their core teaching and outreach programmes they can reach adult social media users who are already consuming and sharing news online. Companies such as Facebook have, over the past year, developed various initiatives which aim to educate their users about the issues that potentially lead to the spread of fake news. But the approach they’ve taken is often pedagogically limited, and thus unlikely to have much effect. In April 2017, for example, Facebook unveiled what it referred to as an ‘educational tool’ intended to help users identify false stories, which could then be flagged as unreliable. Working in association with the non-profit journalism coalition First Draft, Facebook came up with a list of bullet points to guide users in their online news consumption. This included advice on being sceptical of headlines, checking the URL and source of an article, looking at the evidence that’s cited, and cross-referencing with other articles on the same topic. All of this is highly sensible guidance, and is standard content for information literacy education. But it will only have any real impact if people bother to follow it. The problem is that, as an educational tool, it lacks any in-built motivational incentive, relying instead on the assumption that users of the site feel this to be an important issue and will take the time to follow up on these precepts. Shaping the way people think about their online behaviour, and fostering their desire to effect positive change, requires a more critical and reflective approach to digital literacy education.
These are early days in developing effective responses to the fake news phenomenon. But as an example of the type of initiative we are proposing, below is a short series of animations we have produced at the Open University, which aims to educate people about the role of social media in everyday life, and to help them both manage their own identities as digital citizens and understand the influence that sites like Facebook are having on civic debate.
We would like to see these initial attempts develop into something more substantial through collaboration between educational institutions and other players: digital intermediaries like Google and Facebook, as well as journalists. Each has a particular expertise which can contribute to a critical education for a networked online society in the era of fake news.
See our project webpage for more details of our research.
Our book, Taking Offence on Social Media: conviviality and communication on Facebook (2017, with Amy Aisha-Brown), is published by Palgrave.
Join the debate!
We would love to hear your thoughts on this topic. Please leave any comments below, and share this blog post with others who might be interested.