Detecting Deception in Social Media

It's bad enough when people are simply wrong in the facts and opinions they express on the Internet. Mistakes happen. But there's more going on. Some people are intentionally adding noise to the online world in an attempt to mislead users and analysts. Garbage in, garbage out, so how do we catch the garbage before it becomes part of the analysis?

This post is the second in a series. The first is Can You Trust Social Media Sources? Most of my posts aren't this long; the next will be nice and short.

Catching and deleting spam and other garbage in social media data is one side of an arms race, just like email spam and computer viruses. Developers of social media analysis platforms work to eliminate spam from their results, and spammers develop new tactics to dodge the filters. As long as the incentives remain, people will find ways to game the system.

For most analysts, the main response is to pick a platform that does a decent job of catching the undesirable content. Most do some sort of machine learning to identify and filter spam, and while the results are imperfect, they're useful as a first step. The second step is to allow users to flag content as spam, and it's good if the system learns from that action. A third step is to allow users to blacklist a site altogether; once you know it's not what you're looking for, there's no need to rely on the spam-scoring engine.
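
To make those three steps concrete, here's a minimal sketch of what a single filtering pass might look like. The field names, the 0.8 threshold, and the blacklist contents are assumptions made for illustration, not any particular platform's API.

```python
# Minimal sketch of a filtering pass that combines a model's spam score,
# user flags, and a site blacklist. Field names and the threshold are
# illustrative assumptions, not any vendor's actual API.
from urllib.parse import urlparse

SPAM_SCORE_THRESHOLD = 0.8                      # step 1: model score cutoff
user_flagged_ids = {"1042"}                     # step 2: posts a human marked as spam
blacklisted_domains = {"example-spamfarm.com"}  # step 3: sites ruled out entirely

def keep_post(post):
    """Return True if the post should stay in the analysis set."""
    domain = urlparse(post.get("url", "")).netloc.lower()
    if domain in blacklisted_domains:
        return False
    if post["id"] in user_flagged_ids:
        return False
    return post.get("spam_score", 0.0) < SPAM_SCORE_THRESHOLD

posts = [
    {"id": "1041", "url": "http://blog.example.org/post", "spam_score": 0.1},
    {"id": "1042", "url": "http://blog.example.org/other", "spam_score": 0.2},
    {"id": "1043", "url": "http://example-spamfarm.com/buy", "spam_score": 0.3},
]
clean = [p for p in posts if keep_post(p)]      # keeps only post 1041 here
```

The ordering matters: an explicit human judgment (a flag or a blacklist entry) overrides the model's score, which is the learning loop described above.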

Evaluating questionable data
This is where I'd love to give you the magic button that reveals deceptive content. I'd like to have the Liar Liar power, too, but that's not going to happen. Instead, I have some ideas of how to think about questionable results. Most of them are in the form of questions. Some are more probabilistic than definitive, but I think they could be helpful.

  • Consider your purpose
    Your sensitivity to garbage in your data depends on what you're doing with it. If you're monitoring for customer service purposes, flag the spam and move on. If you're reporting on broad trends, you might get better results through sampling, or by focusing on high-quality sources. If you're looking for weak signals, you may not have the luxury of ignoring the low signal-to-noise ratio of a wide search. As always, match the effort to the objective.

    Some people actually need to look at spam—consider the legal department. If a link leads to a site selling counterfeit merchandise and you're in a trademark protection role, the spam is what you're looking for.

  • Consider the source (person)
    Who posted the item in question, and what do you know about them? Is the poster a known person? What do you know from the individual profile? Who does the person work for? What groups is the person connected to? Does the person typically discuss the current topic? Is the person's location consistent with the information shared?

    If you're not sure whether the poster is a person or a persona, develop a profile. A persona is like a cover identity; it can be strong or weak. Does the persona have a presence on multiple networks, and for how long? Is it consistent across networks? Does it have depth, or is every post on the same topic? Who does the persona associate with online, and what do you know about them? Do the persona's connections reveal the complexity of relationship types that real people develop (school, work, family, etc.)? Do the profiles and connections give information about background that can be checked? (A rough sketch of turning these questions into a simple consistency check appears after this list.)

    For questionable sources, think about the different types of data that might reveal something through social network analysis.

    Back at the Social Media Analytics Summit, Tom Reamy described work by researchers to identify the political leanings of writers based on their language choices, even when they were writing about non-political topics. Can we use text analytics to add information about native language, regional differences, and subject-matter expertise to individual profiles?

  • Consider the source (site)
    Where was the data posted? What do you know about the site? Is it a known or probable pay-to-play or disinformation site? Is it a content-scraping site? Does it have information from a single contributor (such as a blog) or from many (such as a crowdsourcing site)? What else is posted to the site? Where is it hosted? Who owns it? Where are they based? What can you learn from the domain registration?

    What's the online footprint of the site? Is it linked to real people in social networks? Is it used as a source by other people? Credibility flows through networks; do known, credible (not necessarily influential) people link to it and share its content in their networks? Does it appear to have bought its followers, or are they real people?

  • Consider other sources
    If you're going to do something serious—and I'll leave the definition of serious as an exercise for the reader—don't trap yourself in a new silo for social media data. What else do you know? What do other online sources say? Does the questionable data fit with what you're getting from sources outside of social media? Are you getting similar information from credible sources, or are all of the sources for the questionable data unknown?

    A few months ago, I heard Craig Fugate, the Administrator of the (US) Federal Emergency Management Agency (FEMA), tell a story about government agencies and unofficial sources of information. The story involved a suspected tornado and unconfirmed damage reports in social media. Government agencies prefer official reports from first responders and other trained observers, so the question was how to evaluate reports in social media.

    In the case of severe weather, one answer is to compare the reports with official sources of weather data. If radar indicated a likely tornado passing over a location a few minutes before the damage reports, then you'd know something important that should help evaluate those reports. What's the analogy for your task? Is there a hard-data source that can add relevant information? Does a geospatial view add a useful dimension, as plotting the radar track, post locations, and photo metadata together would in the tornado example? (A rough sketch of that kind of cross-check appears after this list.)

  • Consider the incentives
    What does a potential adversary stand to gain by fooling you—or someone else looking at the same data—with false information? Who gains by leading you to an incorrect action? Who makes money on your decision? Who benefits from misleading other people with false information (think product reviews and propaganda)? Is questionable information in your system consistent with the aims of an interested party?

    Part of the challenge here is that false information could be intended to mislead anyone. The target could be an individual, a small group, or entire populations. Who gains? Is there a link from the source to an interested party?

  • Consider the costs
    Part of what makes spam so frustrating is the volume level—there's a lot of the stuff around. At some point, the signal-to-noise ratio gets so low that the source becomes useless, unless you can identify and eliminate the junk. In a way, all that junk adds up to a sort of denial-of-service attack at the content layer. Is there a way to deal with that?

    A denial-of-service (DoS) attack and its scaled-up variant, the distributed denial-of-service (DDoS) attack, overload the targeted web site with simultaneous requests, causing it to become unavailable to real visitors. In 2010, Amazon weathered a DDoS attack without losing service. The explanation was that their normal operation looks a lot like a DDoS attack: lots of people visiting the site simultaneously. Their system was built to handle that kind of load, so the attack failed. One answer to a DDoS attack, then, is to have the capacity to handle the load.

    The social media analysis equivalent is to process it all, so what would that look like? Would a deeper analysis of known junk and its sources help improve the identification of junk? Would it tell you something useful about the parties that post the junk?

  • Consider the consequences
    The final point is to revisit the first point. What are you trying to accomplish? What decision will you make based on the data, and what happens if the information turns out to be false? What if it was placed there to manipulate your response (even if the information itself is true)? Does the rest of the decision-making process have safeguards to prevent costly errors?
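
Picking up the persona questions from the "Consider the source (person)" item, here is a rough sketch of how those questions might be turned into a crude consistency score. Every field, weight, and threshold is an assumption made for illustration; real profile data will be messier, and the weights would depend on your own judgment.

```python
# Rough sketch: scoring how developed a persona looks across networks.
# All field names, weights, and thresholds are illustrative assumptions.
from datetime import date

def persona_consistency_score(profiles):
    """Return a 0-1 score; higher suggests a more developed identity."""
    if not profiles:
        return 0.0
    score = 0.0
    # Presence on multiple networks, and for how long.
    if len(profiles) >= 2:
        score += 0.25
    oldest = min(p["created"] for p in profiles)
    if (date.today() - oldest).days > 2 * 365:        # account age over ~2 years
        score += 0.25
    # Consistent claimed location across networks.
    locations = {p.get("location", "").lower() for p in profiles}
    if len(locations) == 1:
        score += 0.25
    # Topical depth: does the persona post about more than one subject?
    topics = set().union(*(p.get("topics", set()) for p in profiles))
    if len(topics) >= 3:
        score += 0.25
    return score

profiles = [
    {"network": "twitter", "created": date(2009, 5, 1),
     "location": "Raleigh, NC", "topics": {"weather", "barbecue", "analytics"}},
    {"network": "linkedin", "created": date(2008, 2, 10),
     "location": "Raleigh, NC", "topics": {"analytics", "consulting"}},
]
print(persona_consistency_score(profiles))   # 1.0 for this well-developed example
```

A low score doesn't prove a persona is fake; it just tells you where to spend your scarce human attention.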
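
And picking up the severe-weather example from "Consider other sources," here is a minimal sketch of corroborating unofficial reports against a hard-data source. The coordinates, times, and thresholds are made up for the example; the point is the pattern of checking whether a social media report sits close to an official signal in both space and time.

```python
# Minimal sketch: flag social media damage reports that are corroborated by
# a radar-indicated tornado signature. All data and thresholds are made up.
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Official source: radar indicated a possible tornado at this place and time.
radar_event = {"lat": 35.78, "lon": -78.64, "time": datetime(2011, 4, 16, 15, 5)}

# Unofficial reports pulled from social media, with location and time attached.
reports = [
    {"text": "roof torn off a house on our street", "lat": 35.80, "lon": -78.62,
     "time": datetime(2011, 4, 16, 15, 12)},
    {"text": "tornado damage!!", "lat": 36.90, "lon": -76.20,
     "time": datetime(2011, 4, 16, 15, 14)},
]

for r in reports:
    close = distance_km(r["lat"], r["lon"], radar_event["lat"], radar_event["lon"]) < 10
    soon = timedelta(0) <= r["time"] - radar_event["time"] <= timedelta(minutes=30)
    r["corroborated"] = close and soon    # report follows the radar signature

print([r["text"] for r in reports if r["corroborated"]])
```

The same pattern applies outside of weather: swap the radar event for whatever authoritative signal your domain offers.
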
The hard problem
One way to look at this is to go through the whole process while thinking "spam." Junk results are an annoyance if you're doing day-to-day monitoring for business, and they're a problem if you're doing quantitative analysis. The technology is improving, and you have options for dealing with spam in these settings.

Some junk isn't that hard to catch, especially once a person looks at it. Gibberish blog comments are easy to identify. Names and email addresses that don't match are fairly obvious, too. Content scrapers and other low-quality sites tend to have a certain look. If you have time to look at the spam that evades your filters, you can catch a lot of it.

The real challenge comes in looking for intelligence—whether in business, finance, politics, or government—in the presence of a motivated and well-funded adversary. If someone wants to fool you—or at least keep you from using an online source—they can improve their chances by better imitating the good data surrounding their junk. The quick glance to identify spam becomes a bigger effort, with more uncertainty.

Pay-to-play blogs may have original content from professional writers, so you can't just look for poor quality. False personas may be developed over time, with extensive material to create a convincing backstory. Networks of such personas could post disinformation, along with more normal-looking content, across multiple sites. With time and resources, personas can appear solid, which is why governments are investing in them.

I think some of the techniques above could help, but it's really a new arms race. The problem for the rest of us is that this arms race will tend to poison the social media well for anyone who wants to discuss the contested topics.

If your organization is interested in these topics, don't just read the blog. Call me. As long as this post is, it's the short version. Clients get the full story.

XKCD cartoon by Randall Munroe.

