Study: Social Media Easily Manipulated

26 December 2020

New research shows that social media companies differ in their ability to stop manipulation on their platforms.

The NATO Strategic Communications Center of Excellence carried out the study. Two American senators took part.

Researchers from the center, based in Riga, Latvia, paid three Russian companies for fake social media engagement. For around $368, researchers got 337,768 fake likes, views and shares of posts on social media, including Facebook, Instagram, Twitter, YouTube and TikTok.

Some of those fake likes, views, and shares appeared on the verified accounts of Senators Chuck Grassley and Chris Murphy. Verified accounts are those that social media companies have confirmed as owned and controlled by the individual or group named on the account.

Grassley's office confirmed that the Republican from Iowa took part in the study.

Murphy, a Democrat from the state of Connecticut, said in a statement that he agreed to take part in the study. The senator said it is important to understand that even verified accounts are at risk of manipulation. It is easy to use social media as a tool to interfere with election campaigns and incite political unrest, Murphy said.

"It's clear that social media companies are not doing enough to combat misinformation and paid manipulation...," he said.

NATO StratCom director Janis Sarts told The Associated Press that social media manipulation hurts business markets and is a threat to national security.

Sarts said that fake accounts are being employed to trick computer programs that decide what information is popular.

"That in turn deepens divisions and thus weakens us as a society," he explained.

More than 98 percent of the fake engagements - likes, views, shares - remained active after four weeks, researchers found. And 97 percent of the accounts they reported for fake activity were still active five days later.

NATO StratCom did a similar test in 2019 with the accounts of European officials. Researchers found that Twitter is now taking down fake content faster and that Facebook has made it harder to create fake accounts.

"We've spent years strengthening our detection systems against fake engagement with a focus on stopping the accounts that have the potential to cause the most harm," a Facebook company spokesperson said in an email.

But YouTube and Facebook-owned Instagram remain open to risk, researchers said, and TikTok appeared "defenseless."

Researchers said that for the study they pushed content that was not political. They wanted to avoid any possible interference in the U.S. election.

So, the researchers posted pictures of dogs and food.

Ben Scott is executive director of Reset.tech, a group that works to fight digital threats to democracy. Scott said the investigation showed how easy it is to manipulate political communication and how little social media companies have done to fix the problems.

In an email to The Associated Press, Yoel Roth, Twitter's head of site integrity, described the issue as an "evolving challenge."

Roth added that the study shows the big "effort that Twitter has made to improve the health of the public conversation."

YouTube said it has put in place safety measures to find fake activity on its site. It noted that more than 2 million videos were removed from the site recently for breaking its policies.

TikTok said it removes content or accounts that support fake engagement or other untrue information that may cause harm.

I'm John Russell.

Erika Kinetz reported on this story for the Associated Press. John Russell adapted it for Learning English. Caty Weaver was the editor.

_____________________________________________________________

Words in This Story

fake – adj. not true or real

engagement – n. the act or state of being involved with something

content – n. the ideas, facts, or images that are in a book, article, speech, movie, etc.

detection – n. the act or process of discovering, finding, or noticing something
