
A military expert explains why social media is the new battlefield


‘Social media rewards not morality or veracity, but virality’


Graphic by Michele Doying / The Verge

After the 2016 US presidential election, social media came under scrutiny like never before, and what has come to light since hasn't been pretty: there is now widespread consensus that foreign government-backed groups used platforms like YouTube, Twitter, and Facebook to sow discord and division among the American public. In their new book, P. W. Singer and Emerson T. Brooking argue that what we witnessed was a new form of global conflict, one in which there are no bystanders.

LikeWar: The Weaponization of Social Media is a look at the role social media plays in modern conflict. Singer has written extensively about the future of warfare, covering robotics (Wired for War), cybersecurity (Cybersecurity and Cyberwar: What Everyone Needs to Know), private military companies (Corporate Warriors: The Rise of the Privatized Military Industry), and even speculative fiction (Ghost Fleet: A Novel of the Next World War). Now, he turns his attention to what warfare looks like when information can spread around the world instantly. Singer and Brooking examine how groups like ISIS have used platforms like YouTube and Twitter to spread their message worldwide, taunting their opponents and enticing new recruits, while bad actors like Russian-backed groups have found ways to game Facebook's design to spread misinformation and lies.

The Verge spoke with Singer about the state of social media and what solutions might exist.

Image: Houghton Mifflin Harcourt

You look at the history of communications infrastructure and how it has evolved over time to let people communicate more quickly. Why are speed and scale so important when looking at the rise of online harassment, propaganda, and the like?

The telegraph and then the telephone allowed us to connect personally from a distance at a speed not previously possible. Radio and then TV allowed one to broadcast out to many. What social media has done is combine the two, allowing simultaneous personal connection as never before, along with the ability to reach out to the entire world. The challenge is that this connection has been both liberating and disruptive. It has freed communication, but it has also been co-opted to aid its vilest parts. The speed and scale have allowed those vile parts to escape many of the firebreaks that society had built up to protect itself. Indeed, I often think about a quote in the book from a retired US Army officer, who described how every village once had an idiot. And now, the internet has brought them all together and made them more powerful than ever before.

There’s been a ton of focus on companies like Facebook and Twitter for spreading misinformation around the world. The degree to which they’ve connected people is certainly an element, but why these companies? What about their approach to communication makes them so ripe for misuse?

In the historic blink of an eye, the founders of these companies have become some of the most powerful players in war and politics when they never set out for this role. Mark Zuckerberg writes software in his Harvard dorm room to allow fellow students to rate who is hot or not. Twitter is literally named after the term for short bursts of “inconsequential” information. And suddenly, they are setting the rules of everything from whether Russian disinformation campaigns should be allowed to whether Myanmar generals have the right to free speech so that they can spur mass killings.

“Social media rewards not morality or veracity, but virality.”

But part of the problem is not just their understandable unpreparedness for such a role, or their less understandable early willingness to turn a blind eye to abuses on their networks, but also the very design of the platforms. The networks are for-profit businesses that create an attention economy. Social media rewards not morality or veracity, but virality. Their design is a perfect engine for the fast and wide spread of information, which is what makes them so wonderful. But there is a catch: unlike the truth, lies can be engineered to take advantage of that design and move faster and wider.

One thing that you covered that particularly interested me was how easy it is to co-opt an unwary participant in political discourse because algorithms reward outrage. How do you inoculate a population against this sort of behavior? 

Much like any other viral outbreak, we will have to draw upon everything from the equivalent of hygiene education, which in this space is digital literacy, to the targeting of superspreaders, the smaller subset of people who sit at the core of a viral outbreak.

Just as in public health, these education programs must not happen only in our schools, but be joined by a broader, whole-of-society effort to inoculate vulnerable citizens against harmful misinformation. A number of nations have created everything from public awareness campaigns to emergency alert systems, akin to warnings of dangerous storms or disease outbreaks, intended to slow the spread of such falsehoods before they can do too much damage. We also need the companies to pitch in more, helping to create firebreaks against the spread of misinformation and de-platforming those who deliberately and knowingly spread lies intended to harm society. You have a right to free speech. You do not have a right to spread falsehood after falsehood that harms society on a private company's network.

A big part of the problem with misinformation seems to be in scale: companies like Twitter and Facebook simply can’t keep up with stuff that clearly breaks the rules, and they’re putting their faith in machine learning. Do you think that this will work? Are there other approaches that they should try?

AI will aid in a lot of areas, such as by supplementing human content moderators, who will never be numerous enough to match the scale of the problem. Indeed, in the book, we cover how the early efforts at AOL couldn't keep up with the size of the internet policing problem even then, so it's folly to think Facebook or Twitter could hire their way around this problem given how much it has grown since.

“AI will still not be the silver bullet that too many believe.”

But AI will still not be the silver bullet that too many believe. The first reason is that this is a conflict. Each side keeps shifting its tactics. So much of the misinformation problem is about people gaming the system, something machines are particularly prone to fall prey to. The problem is also political. A machine won't solve all the challenges of politics and law, or the issues of bias, for you. It just introduces new wrinkles to them.

If you could change how social media companies operate, or alter something like a specific feature, what do you think would be the most effective thing to do?

Like everything, there is no one silver bullet solution to the challenge. But I think the biggest change would come from the acceptance of a hard truth: they have become not just massively successful companies, but also the stewards of the nervous system of the modern world. Like it or not, that also makes them the rule-makers of a battle space that shapes the outcomes of everything from elections to wars.

The needed efforts include stepped-up investment in content moderation; de-platforming proven superspreaders of harassment and foreign influence operations; wargaming their products before they are deployed into the world, not just for cybersecurity vulnerabilities, but also for LikeWar-style misuse by attackers; labeling bots so that humans know they are interacting with a machine (aka the “Blade Runner” rule); and implementing measures to foil the next generation of AI used for sophisticated chatbots and faked imagery. But we also have to recognize that none of this will end. That’s the very nature of politics and war. There will always be action and reaction.

Another element to all of this is how bad actors work to gain legitimacy with press coverage. What do you think journalists should take away from reading this when it comes to utilizing social media?

They still matter! Much of the success of the disinformation campaigns came from influencing and leveraging journalists to carry their water for them. This was the case whether it was WikiLeaks or the Russian sock puppet covered in the book, which posed as a young American woman, @Jenn_Abrams. “She” didn’t just amass 70,000 followers, but was also quoted in BBC News, BET, Breitbart, Business Insider, BuzzFeed, CNN, The Daily Caller, The Daily Dot, Daily Mail, Dallas News, Fox News, France24, Gizmodo, HuffPost, IJR, The Independent, Infowars, Mashable, National Post, New York Daily News, The New York Times, The Observer, Quartz, Sky News, The Times of India, The Telegraph, USA Today, U.S. News and World Report, The Washington Post, and Yahoo.

Oh, and one last pet peeve of mine: over half of people don’t read anything more than the headlines online. And yet headlines are still written in ways that let lies spread more easily, such as only quoting the person telling the lie. It is not 2016. There is no excuse for that anymore, headline writers!

When all is said and done, are you optimistic for the future at all?

I’m a realist. I think when things have gone awry, it has been from a mix of our own arrogance and ignorance. Hopefully, we can temper the arrogance by what we have seen happen over the last few years and tackle the ignorance by learning the new rules of the game.