Getting the Public Behind the Fight Against Misinformation
Misinformation is false or inaccurate information spread regardless of any intent to deceive. Its spread undermines trust in politics and the media, a problem exacerbated by social media platforms that reward emotional responses: users often read only the headlines, engage with false posts, and share credible sources less. Once hesitant to respond, social media companies are increasingly taking steps to stop the spread of misinformation. But why have these efforts failed to gain greater public support?
A 2021 poll from the Pearson Institute found that 95% of Americans believed that the spread of misinformation was concerning, with over 70% placing blame on, among others, social media companies. If Americans overwhelmingly agree that misinformation must be addressed, why is there so little public consensus on the appropriate solution?
To address this question, we ran a national web survey of 1,050 respondents via Qualtrics, using gender, age, and regional quota sampling. Our research suggests several challenges to combating misinformation.
First, there are often misconceptions about what social media companies can do. As private entities, they have the legal right to moderate content on their platforms; the First Amendment restricts only government regulation of speech. When asked to evaluate the statement “social media companies have a right to remove posts on their platform,” a clear majority of 58.7% agreed. Yet a partisan divide emerges: 74.3% of Democrats agreed with the statement, compared to only 43.5% of Republicans.
Ignorance of the scope of the First Amendment may partially explain these findings, as may the belief that, even if companies have the legal right to remove content, they should not exercise it. A history of tech companies initially couching their policies in free speech principles only to later backtrack adds to the confusion. For example, Twitter once maintained “a devotion to a fundamental free speech standard” of content neutrality, but by 2017 it had shifted to a policy under which not only individual posts but entire accounts, even those without offensive tweets, could be removed.
Second, while most acknowledge that social media companies should do something, there is little agreement on what that something should be. Overall, 70% of respondents, including a majority of both Democrats (84%) and Republicans (57.6%), agreed with the statement that “social media companies should take steps to restrict false information online, even if it limits freedom of information.”
We then asked respondents whether they would support five different measures to combat misinformation. None of the five found majority support; the most popular option, placing factual information directly under posts labeled as misinformation, was supported by only 46.6% of respondents. It was also the only option that a majority of Democrats (56.4%) supported.
Moreover, over a fifth of respondents (20.6%) supported none of the options. Even among respondents who agreed that social media companies should take steps, most options failed to attract broad support.
So what might increase public buy-in to these efforts? Transparent policies are necessary so that responses do not appear ad hoc or inconsistent. While many users may not read the terms of service, consistently applied policies may counter perceptions that enforcement is selective or targets only certain ideological viewpoints.
Recent research finds that while almost half of Americans have seen posts labeled as potential misinformation on social media, they are wary of trusting fact-checks because they are unsure how the information was identified as inaccurate. Greater explanation of the fact-checking process, including the use of multiple third-party services, may also help address this concern.
Rather than relying solely on content moderation, social media companies may also wish to use subtle nudges that encourage users to evaluate their own posting behavior. Twitter and Facebook have already moved in this direction with prompts suggesting that users read articles before sharing them.
Various crowdsourcing efforts may also serve to signal the accuracy of posts or how frequently they have been fact-checked. Such efforts address the public’s underlying hesitancy about combating misinformation while offering an alternative to content moderation, which users may not see as transparent. While Americans overwhelmingly agree that misinformation is a problem, designing an effective solution requires a multi-faceted approach.
[Funding for this survey was provided by the Institute for Humane Studies.]
The views expressed in this article are the author’s own and do not necessarily reflect Fair Observer’s editorial policy.