A Tale of Three Rumors

About the Author:

Gilad Lotan

Gilad Lotan is the VP of R&D for SocialFlow, a New York-based startup that develops technology to optimize content for social media channels.

The more we use social media, the more seasoned we become at assessing the trustworthiness of the information we come across. With rumors constantly flying around, celebrities are regularly and mistakenly reported dead, while every little move Apple makes triggers an onslaught of buzz about new product features.

This post details three rumors, each with its own path, source, evolution and outcome. One makes it far, cascading through networks of users, fans and followers who decide to amplify and pass on the message; another makes it far but is found to be false; the third quickly dies. What can we learn from their differences? How can we improve our ability to recognize a false piece of information in realtime?

Adding Context, Gaining Trust

This is a classic example of a viral information flow: 1) a user provides an important piece of information to a hungry crowd, 2) it spreads like wildfire, 3) the information turns out to be true.

On the evening of May 1st, 2011, Keith Urbahn broke the news of Osama Bin Laden’s death, beating the official White House announcement by a full hour. He wasn’t the first to speculate about Bin Laden’s death, but he was the one who gained the most trust from the network. Within a minute of @KeithUrbahn’s original post, it was validated and placed in context by two important players. Politico’s Jake Sherman wrote:


and Brian Stelter of the New York Times added:


Both played a critical role in the flow, adding context about the source and lending it their audiences’ trust. A few others came out with speculation before Keith Urbahn, yet none with a trustworthy tone, and none drew trust from the network.

The Aspired Truth

On November 17th 2011, The New York City NBC Twitter handle (@NBCNewYork) posted the following tweet:
https://twitter.com/#!/nbcnewyork/status/137291910460096512
Some context: this comes at the height of #N17, Occupy Wall Street’s global day of action, six days after the NYPD evicted Zuccotti Park occupiers, and a symbolic two months after the first occupation (Sept 17th). The environment was incredibly heated and everyone expected violence to erupt. It was just a matter of when, who and how.

Within 5 minutes of NBC New York’s post, @HuffPostMedia, @CKanal (social media editor at CNBC) and @BrianStelter (NYTimes) reposted the message. @Skidder, the director of editorial operations at Gawker Media, added a confirmation a few minutes later, and the official @NBCNews account (158k followers at the time, roughly eight times the size of @NBCNewYork) posted it as well. Coming from so many reputable sources, an onslaught of retweets followed, along with many angry voices spewing “i told you so” and “this is what democracy looks like” type messages.

The correction came a few minutes later from the NYPD:


Immediately following that, both @NBCNewYork and @NBCNews came out with corrections. But at that stage, the original report was already at the peak of its cascade, and the published correction didn’t reach nearly as wide an audience. People are much more likely to retweet what they want to be true, their aspirations and values.

Does misinformation always spread further than the correction? Not necessarily. I’ve seen it go either way. But I can safely say that the more sensationalized a story, the more likely it is to travel far. Many times the story about the misinformation is what spreads, rather than the false information itself (for example: the false Steve Jobs death tweet, which cost Shira Lazar her CBS gig).

Lacking the Right Network

This is a very different case where information was fed to the network, but didn’t spread. Effectively nobody listened until the press came in.

Many claim that Aja Dior (@AjaDiorNavy) on Twitter broke the unfortunate news about Whitney Houston’s death. While it is true that @AjaDiorNavy’s tweet appeared on Twitter 42 minutes before the news hit the press, its content spread only to a handful of users. The story did not actually break until the AP and TMZ posted the announcement that came through formal channels.

The graph below shows the information flow during the first hour, as the story was publicized. Nodes represent Twitter users, and the connections between them represent the paths along which information flowed. The larger the node, the more retweets it generated; yellow nodes tweeted earlier. The siloed group at the center left (#1) was first to mention and respond to the news, but it stayed confined within that group. (source)

This is a classic case of information that had the potential to spread, yet did not.

Quantifying Trust

The examples above teach us a few things about information and misinformation flows. The first clearly shows that one does not need a large audience, but rather the right kind of people in place who will provide context and generate trust. The second highlights that attention is limited: once a message is published, especially in the context of a heated struggle, it is difficult to retract. Additionally, it is harder to persuade folks that an event is taking place when they don’t want to believe it is. The last teaches us that even one of the hottest pieces of information will not spread without the right network in place. So what can we do to assess the “truthiness” of information in realtime as events are unfolding?

A hybrid approach may be ideal. We can use algorithmic methods to quickly identify and track emerging events: model specific keywords that tend to show up around breaking news events (think “bomb”, “death”) and identify deviations from the norm. At the same time, it’s important to have humans constantly verifying information sources, partly based on intuition and partly by activating their networks.
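To make the algorithmic half concrete, here is a minimal sketch of that kind of deviation detection. Everything in it is an assumption for illustration: the keyword list, the per-hour mention counts, and the z-score threshold are all hypothetical, not part of any production system described here.

```python
import statistics

# Hypothetical list of keywords that tend to spike around breaking news.
BREAKING_KEYWORDS = {"bomb", "death", "shooting", "explosion"}

def is_anomalous(history, current, threshold=3.0):
    """Flag a keyword whose count in the current time window sits more
    than `threshold` standard deviations above its historical baseline.

    history   -- per-window mention counts for one keyword (e.g. hourly)
    current   -- mention count in the window being checked
    threshold -- z-score cutoff; 3.0 is an arbitrary illustrative choice
    """
    if len(history) < 2:
        return False  # not enough data to establish a baseline
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return current > mean  # flat baseline: any rise is a deviation
    return (current - mean) / stdev > threshold

# Usage: hourly counts of "death" mentions, then a sudden spike.
baseline = [12, 9, 14, 11, 10, 13, 12, 10]
print(is_anomalous(baseline, 11))   # normal chatter -> False
print(is_anomalous(baseline, 95))   # spike worth routing to a human -> True
```

In the hybrid setup the post describes, a flag like this wouldn’t publish anything on its own; it would only surface the keyword to the human editors doing the verification.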

Andy Carvin does a phenomenal job building up a network of informants. He famously tracks events and rumors early on, consistently leveraging his network to seek confirmation and validation. He teaches his network how to question and verify information, constantly learning which sources can be trusted, adding context where needed and pointing to problematic assumptions. Andy has built a complex mental model of his audience and uses it, along with his network of friends and followers, to verify rumors.

As our networks scale in size and complexity, there’s limited capacity to hold everything in our heads. But with a little help from the right type of data analysis tools, I am certain this can scale.