Blind Share: Spreading Un-Seen Information

TL;DR: Users share content they haven't read, with even fewer viewers fact-checking its claims. The incentive structure on most social media drives individuals to share as quickly as possible and place faith in influencers to signal what can be trusted. Twitter and other platforms have attempted to make users feel accountable for what they share. But more can be done to reward and expose positive behavior.

In June, Twitter implemented a new feature in hopes of reducing users’ tendency to blindly share content: a warning prompt before retweeting. Whenever users attempt to retweet a tweet that contains an article they haven’t opened, Twitter asks whether they want to read it before retweeting.

This may have noteworthy effects on user behavior in the moments before they share. Let us explore the social incentives that drive blind sharing, consider how successful Twitter’s approach may be in reducing it, and discuss other possible remedies to the problem.

Misinformation and disinformation (collectively referred to as ‘misinformation’ in this piece) have become a notable hindrance to public discourse. When individuals share content they haven’t reviewed themselves, they increase the likelihood of spreading false facts, misleading information, or messages they never intended to endorse. Widespread misinformation erodes knowledge of basic facts, breaks down trust in institutions and traditional media, and corrodes public debate through false equivalencies. It can be used to justify giving a mainstream platform to unfounded claims for the sake of neutrality, and it can cause political polarization and unnecessary divisiveness. (Waldman) Many citizens wish to reduce misinformation yet act in ways that expand its reach.

Worse than judging a book by its cover is judging an article by its title.

One of the major phenomena of online social behavior is that users rarely read the content they share. In fact, Gabielkov et al. estimate that 59% of the URLs mentioned on Twitter are never clicked or read prior to being shared. Even fewer users take the time to fact-check that content. A 2017 ZignalLabs survey of over 2,000 adults indicated that 86% of Americans who read news articles on social media do not always fact-check the information they read, and 27% of those individuals admit they also share news articles that they haven’t fact-checked. Given how widespread blind sharing and how limited fact-checking are, there must be incentives driving this behavior.

The incentives that individuals weigh before sharing content are a direct cause of the swift propagation of misinformation. Fu et al.’s 2010 study found two primary incentives for sharing information on Facebook. Using both qualitative (focus group interviews) and quantitative (online surveys) methods, they identified self-interested (likes, status, self-expression, loneliness) and communal (altruism, connection, group joy) incentives as the most powerful. Focusing on status, one may posit that people share information that will earn them status and refrain from sharing information that will hurt it. In Status as a Service, Eugene Wei argues that online participants desire status and “seek out the most efficient path to maximizing social capital”. Taken together, these points suggest that users who share information online for status will do so in whatever way maximizes the status they earn.

Time plays a critical role in maximizing status. One way to increase one’s own status is to share new content earlier: information posted early in the lifecycle of a story diffuses more quickly than the same information shared just a few hours later. (Yoo) Assuming that greater reach translates into more likes, and therefore more status, on social networks like Twitter, it becomes clear that users are incentivized to share content as soon as possible in order to maximize their social capital. In other words, being early on a piece of information increases the likelihood of high engagement and social status. The desire to share content quickly may explain the notable number of users who do not read or fact-check information before sharing it; they want to share it before others do.

Retractions, despite their importance, do not deliver the same dose of status as the initial story. When a user or organization retracts or clarifies misleading information they’ve shared, they often receive little reach and minimal positive sentiment in return. Even when users are exposed to retractions or clarifications, they are very unlikely to change their minds once they have heard the original piece of information enough times. (Waldman)

Influence is another factor that affects the propagation of misinformation. Yoo et al.’s study suggests that the influence of early sharers strongly affects the rate of information propagation, especially during disasters. They state that “An originator's influence is particularly relevant to the context of cascades in social media networks during humanitarian crises since users previously reported having significant concerns about the credibility of disaster information they received through social media.” If someone is more influential and directly related to the information, it tends to spread faster. A contemporary example is Tom Hanks contracting Covid-19 in early March and sharing this information publicly, which led many Americans to take the virus more seriously. There may be three separate reasons for this effect. First, influencers are trusted by their followers, who assume the information they share is true without reading or verifying it; users may free-ride on the perceived fact-checking of the influencer, similar to the bystander effect at an online scale. Second, because fact-checks can signal skepticism, individuals may prefer to hold in their doubt about the trustworthiness of an influencer’s source in social situations. (Jun) It is not easy to point out falsehoods in the 5G conspiracy theory expressed by one’s favorite actor (such as Woody Harrelson in this case). Third, the influencer presumably has a large following, causing the individual to feel a communal incentive to support the whole group’s beliefs. This puts a great deal of power and responsibility in the hands of highly influential users online.

Another factor that affects the spread of information is the subject matter. Certain subjects, like humanitarian causes, emergency crises, and political disputes, tend to spread faster than commercial or lifestyle content. (Yoo) This is intuitive, but it also makes preventing the propagation of misinformation even more difficult: the most important subjects also present the highest-impact opportunities to spread falsehoods and confusion.

Twitter’s approach of prompting users to read what they are about to share could help slow the propagation of misinformation. Jun et al.’s research suggests that “[i]nducing vigilance immediately before evaluation increased fact-checking under social settings.” In other words, making users feel accountable for what they share can meaningfully reduce their likelihood of blindly sharing. Twitter could take this approach one step further by making it visible to viewers that the retweeter has not read the URL, similar to the “get the facts” banners shown on questionable content. A successful example of the inverse can be found on Amazon, where reviews from ‘previous purchasers’ are prioritized and that information is made public to give potential buyers better context. While it is unlikely that Twitter takes this feature to such extremes, one may argue that it would increase perceived accountability and reputation risk for the user and therefore reduce blind sharing. Others may point out that this approach is too aggressive and would only push people to open the article, not necessarily read it, which would make for a very weak signal.
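To make the mechanic concrete, here is a minimal sketch of what the “read before you retweet” check might look like. It assumes a simple in-memory click log; all names (RetweetAttempt, ClickLog, should_prompt) and the seven-day lookback window are illustrative assumptions, not Twitter’s actual implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional


@dataclass
class RetweetAttempt:
    user_id: str
    tweet_id: str
    article_url: Optional[str]  # URL embedded in the tweet, if any


class ClickLog:
    """Tracks which article links a user has opened recently (assumed store)."""

    def __init__(self, window: timedelta = timedelta(days=7)):
        self._clicks = {}  # (user_id, url) -> time of last click
        self._window = window

    def record_click(self, user_id: str, url: str) -> None:
        self._clicks[(user_id, url)] = datetime.utcnow()

    def has_opened(self, user_id: str, url: str) -> bool:
        clicked_at = self._clicks.get((user_id, url))
        return clicked_at is not None and datetime.utcnow() - clicked_at < self._window


def should_prompt(attempt: RetweetAttempt, clicks: ClickLog) -> bool:
    """Show the 'want to read this first?' prompt only when the tweet links
    to an article the sharer has not opened within the lookback window."""
    if attempt.article_url is None:
        return False
    return not clicks.has_opened(attempt.user_id, attempt.article_url)
```

Note that this only detects whether a link was opened, not whether it was read, which is exactly why critics call it a weak signal.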

Since the expected reward for sharing unread or unverified content outweighs the potential risk, one way to reduce the spread of misinformation is to increase the negative consequences of sharing false information. This could be achieved with economic incentives: rewarding users with money and status when content they share is found to be true under social consensus, and, conversely, penalizing users who share content that is discovered to be false. The penalty need not be directly financial; it could simply reduce the user’s opportunity to earn money or status in the future. Since more influential participants have a stronger effect on propagation, platforms could place more responsibility on highly influential sharers, punishing or rewarding them more based on the accuracy of what they share. Another approach is to scale the rewards and penalties by subject matter: if information about a rap song is shared, for example, there is less at stake than when time-sensitive information is shared about a humanitarian crisis.
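As a rough illustration of this weighting idea, the sketch below computes a status delta scaled by accuracy, influence, and subject sensitivity. The weights, subject categories, and the linear influence factor are assumed values for demonstration only; no platform is known to use these numbers.

```python
# Illustrative stakes per subject; values are assumptions, not real platform weights.
SUBJECT_STAKES = {
    "humanitarian_crisis": 3.0,  # time-sensitive, high impact
    "politics": 2.0,
    "entertainment": 0.5,        # e.g. a rap song: less at stake
}


def share_score_delta(verified_true: bool,
                      follower_count: int,
                      subject: str,
                      base_reward: float = 1.0) -> float:
    """Positive delta rewards accurate sharing; negative delta penalizes sharing
    content later found false. More influential sharers and more sensitive
    subjects scale the stakes in both directions."""
    influence_weight = 1.0 + (follower_count / 100_000)  # assumption: linear in reach
    stakes = SUBJECT_STAKES.get(subject, 1.0)
    magnitude = base_reward * influence_weight * stakes
    return magnitude if verified_true else -magnitude


# Example: an influencer sharing false crisis information loses far more status
# credit than a small account sharing an inaccurate music rumor.
print(share_score_delta(False, follower_count=2_000_000, subject="humanitarian_crisis"))
print(share_score_delta(False, follower_count=500, subject="entertainment"))
```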

Lastly, social media platforms could attempt to increase the exposure and status earned from retracting or correcting inaccurate statements. If news publications’ or individuals’ retractions were fully and directly distributed to their followers and readers, they would have more of an incentive to share them. This could come in the form of a Twitter notification or augmented organic reach for all users who engaged with a now-corrected piece of content. Platforms could even celebrate this kind of behavior, highlighting and giving more exposure at the algorithm level to users who regularly retract and fact-check themselves. Given users’ desire for status and exposure, this may be a successful way to incentivize and normalize corrections and retractions at a larger scale.
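A minimal sketch of the notification idea: fan a correction out to every user who engaged with the original post, so the retraction reaches roughly the same audience as the misinformation. The engagement log structure and the notify_correction function are hypothetical, not any platform’s actual API.

```python
from typing import Callable, Dict, Set


def notify_correction(engagements: Dict[str, Set[str]],
                      original_post_id: str,
                      correction_post_id: str,
                      send_notification: Callable[[str, str], None]) -> int:
    """Push the correction to every user who liked, retweeted, or replied to
    the original post, and report how many people it directly reached."""
    engaged_users = engagements.get(original_post_id, set())
    for user_id in engaged_users:
        send_notification(
            user_id,
            f"A post you engaged with has been corrected: {correction_post_id}",
        )
    return len(engaged_users)


# Example usage with an in-memory engagement log and a stand-in notifier.
if __name__ == "__main__":
    log = {"post-123": {"alice", "bob", "carol"}}
    reached = notify_correction(log, "post-123", "post-456",
                                lambda user, msg: print(f"to {user}: {msg}"))
    print(f"Correction delivered to {reached} users")
```

The returned count could then feed the status reward described above, so that well-distributed corrections earn back some of the exposure the original mistake received.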

As online influence becomes a more valuable and sought-after asset, the incentive models around information dissemination need to adapt to psychologically guide users to read and fact-check content before sharing it publicly. New features that address these 21st-century challenges should be supported but closely tested.
