• "The Biology of #Disinformation," a paper by @rushkoff, @pesco, and @dunagan23 - @iftf


    My Institute for the Future colleagues Douglas Rushkoff, Jake Dunagan, and I wrote a research paper on the "Biology of Disinformation" and how media viruses, bots and computational propaganda have redefined how information is weaponized for propaganda campaigns. While technological solutions may seem like the most practical and effective remedy, fortifying social relationships that define human communication may be the best way to combat “ideological warfare” that is designed to push us toward isolation. As Rushkoff says, "adding more AI's and algorithms to protect users from bad social media is counterproductive: how about increasing our cultural immune response to destructively virulent memes, instead?" From The Biology of Disinformation:

     

    The specter of widespread computational propaganda that leverages memetics through persuasive technologies looms large. Already, artificially intelligent software can evolve false political and social constructs highly targeted to sway specific audiences. Users find themselves in highly individualized, algorithmically determined news and information feeds, intentionally designed to: isolate them from conflicting evidence or opinions, create self-reinforcing feedback loops of confirmation, and untether them from fact-based reality. And these are just early days. If memes and disinformation have been weaponized on social media, it is still in the musket stage. Sam Woolley, director of the Institute for the Future’s (IFTF) Digital Intelligence Lab, has concluded that defenders of anything approaching “objective” truth are woefully behind in dealing with computational propaganda. This is the case in both technological responses and neuro-cultural defenses. Moreover, the 2018 and 2020 US election cycles are going to see this kind of cognitive warfare on an unprecedented scale and reach.

     

    But these mechanisms, however powerful, are only as much a threat to human reason as the memetic material they transmit, and the impact of weaponized memetics itself on the social and political landscape. Memes serve both as probes of collective cultural conflicts and as ways of inflaming social divisions. Virulent ideas and imagery only take hold if they effectively trigger a cultural immune response, leading to widespread contagion. This is less a question of technological delivery systems and more a question of human vulnerability. The urgent question we all face is not how to disengage from the modern social media landscape, but rather: how do we immunize ourselves against media viruses, fake news, and propaganda?

    Read More

  • #Twitter #bots rampant in news, porn and sports links, Pew finds


    (Photo: Leon Neal, AFP/Getty Images)

    There are a lot of bots out there on Twitter.

    That's the message from a new Pew Research Center study, out Monday, which found that two-thirds of tweets that link to digital content are generated by bots — accounts powered by automated software, not real tweeters.

    Researchers analyzed 1.2 million tweets from last summer (July 27-Sept. 11), most of which linked to more than 2,300 popular websites devoted to sports, celebrities, news and business, as well as sites created by organizations.

    Two-thirds (66%) of those tweets were posted or shared by bots, and the share was even higher, 89%, for links that led to aggregation sites compiling stories posted online, the study says.

    The findings suggest that bots "play a prominent and pervasive role in the social media environment," said Aaron Smith, associate research director at Pew, which used a "Botometer" developed at the University of Southern California and Indiana University to analyze links and determine whether each was posted by an automated account.

    “Automated accounts are far from a niche phenomenon: They share a significant portion of tweeted links to even the most prominent and mainstream publications and online outlets," Smith said in comments accompanying the study. "Since these accounts can impact the information people see on social media, it is important to have a sense of their overall prevalence on social media.”

    The Pew researchers did not attempt to assess the accuracy of the material shared by the bots. Also not determined: whether the bots were “good” or “bad," or "whether the content shared by automated accounts is truthful information or not, or the extent to which users interact with content shared by suspected bots,” Stefan Wojcik, a computational social scientist, said in the study.
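    To make the method concrete, here is a minimal Python sketch of that kind of analysis: score each posting account with a bot classifier, then compute what share of tweeted links came from suspected bots. The bot_probability heuristic and its 0.5 threshold are illustrative stand-ins for a Botometer-style model, not Pew's actual pipeline.

    ```python
    from collections import Counter

    def bot_probability(account: dict) -> float:
        """Toy stand-in for a Botometer-style score (0 = human, 1 = bot).
        The features and weights are illustrative, not Pew's actual model."""
        score = 0.0
        if account["tweets_per_day"] > 50:               # unusually high posting rate
            score += 0.6
        if account["followers"] < account["following"]:  # follows far more than it is followed
            score += 0.3
        return min(score, 1.0)

    def share_of_links_from_bots(tweeted_links, threshold=0.5):
        """tweeted_links: iterable of (account_features, linked_domain) pairs.
        Returns the overall share of links posted by suspected bots
        and a per-domain breakdown."""
        total, from_bots = Counter(), Counter()
        for account, domain in tweeted_links:
            total[domain] += 1
            if bot_probability(account) >= threshold:
                from_bots[domain] += 1
        overall = sum(from_bots.values()) / max(sum(total.values()), 1)
        return overall, {d: from_bots[d] / total[d] for d in total}
    ```

    Run over a sample of tweeted links, the overall figure is the analogue of Pew's 66%, and the per-domain breakdown is the analogue of its 89% number for aggregation sites.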

    Other findings about bots:

    • Bots were responsible for about 90% of all tweeted links to popular adult content websites, 76% of links to popular sports sites, and 66% of links to news and current events sites.
    • Some bots do more work than others. The 500 most-active suspected bot accounts sent 22% of tweeted links to popular news and current events sites. In comparison, the 500 most-active human tweeters sent about 6% of links to the outlets.
    • Bot accounts that shared political content showed no clear partisan lean; suspected bots were about equally likely to link to sites with liberal and conservative audiences.

    Bots have long plagued Twitter, and other researchers have estimated that as many as 15% of all Twitter accounts could be fake. Twitter has said the number is lower.

    Twitter's rules allow automated software but ban the posting of misleading or abusive content and spam. In February, Twitter suspended multiple accounts following special counsel Robert Mueller's indictment of Russian nationals for meddling in the U.S. election, including by using fake Twitter accounts to wage "information warfare" against the U.S.

    The social network also attempts to remove deliberately manipulative tweets, and it offered details about that process Friday in an online post from Del Harvey, Twitter's vice president for trust and safety.

    During Tuesday's shooting at YouTube headquarters in San Bruno, Calif., Twitter began to see accounts "deliberately sharing deceptive, malicious information, sometimes organizing on other networks to do so," Harvey says. That activity is typical as information about tragedies emerges and presents "an especially difficult and volatile challenge" in how to respond to "people who are deliberately manipulating the conversation on Twitter in the immediate aftermath of tragedies like this," she says.

    Twitter "should not be the arbiter of truth," Harvey says, but the network does have rules against abusive behavior, hateful conduct, violent threats, spam and against suspended users creating new accounts.

    In recent months, Twitter has improved its tools and ability to respond to manipulative activity on the service, she says. After the YouTube shooting, "we immediately started requiring account owners to remove Tweets — many within minutes of their initial creation — for violating our policies on abusive behavior," she says. "We also suspended hundreds of accounts for harassing others or purposely manipulating conversations about the event."

    Automated systems also helped prevent suspended tweeters from creating new accounts and helped find "potentially violating Tweets and accounts" for the staff to review, she says.

    At the same time, the Twitter team "was also focused on identifying and surfacing relevant and credible content people could trust," Harvey says. "Moments highlighting reliable information were available in 16 countries and in five different languages — many within 10 minutes of the first Tweets — and also surfaced within top trends related to the situation."

    Twitter continues to deploy technology and people to improve the situation, she says. "We're committed to continuing to improve and to holding ourselves accountable as we work to make Twitter better for everyone," Harvey says. "We’re looking forward to sharing more soon."

    Follow USA TODAY reporter Mike Snider on Twitter: @MikeSnider.

    Read More

  • Here’s How Much Bots Drive Conversation During News Events - #socialmedia #propaganda #fakenews #Twitter #Bots #socialnetworks

     


     

    Casey Chin; Getty Images

    Last week, as thousands of Central American migrants made their way northward through Mexico, walking a treacherous route toward the US border, talk of "the caravan," as it's become known, took over Twitter. Conservatives, led by President Donald Trump, dominated the conversation, eager to turn the caravan into a voting issue before the midterms. As it turns out, they had some help—from propaganda bots on Twitter.

    Late last week, about 60 percent of the conversation was driven by likely bots. Over the weekend, even as the conversation about the caravan was overshadowed by more recent tragedies, bots were still driving nearly 40 percent of the caravan conversation on Twitter. That's according to an assessment by Robhat Labs, a startup founded by two UC Berkeley students that builds tools to detect bots online. The team's first product, a Chrome extension called BotCheck.me, allows users to see which accounts in their Twitter timelines are most likely bots. Now it's launching a new tool aimed at news organizations called FactCheck.me, which allows journalists to see how much bot activity there is across an entire topic or hashtag.

    Take the deadly shooting at the Tree of Life synagogue in Pittsburgh over the weekend. On Sunday, one day after the shooting, bots were driving 23 percent of the Twitter activity related to the incident, according to FactCheck.me.

    "These big crises happen, and there’s a flurry of social media activity, but it's really hard to go back and see what’s being spread and get numbers around bot activity," says Ash Bhat, a Robhat Labs cofounder. So the team built an internal tool. Now they're launching it publicly, in hopes of helping newsrooms measure the true volume of conversation during breaking news events, apart from the bot-driven din.

    "The impact of these bot accounts is still seen and felt on Twitter."

    Ash Bhat, Robhat Labs

    Identifying bots is an ever-evolving science. To develop their methodology, Bhat and his partner Rohan Phadte compiled a sample set of accounts they were highly confident were political propaganda bots. These accounts exhibited unusual behavior, like tweeting political content every few minutes throughout the day or amassing a huge following almost instantly. Unlike automated accounts that news organizations and other entities sometimes set up to send regularly scheduled tweets, the propaganda bots that Robhat Labs is focused on pose as humans. Bhat and Phadte also built a set of verified accounts to represent standard human behavior. They built a machine learning model that could compare the two and pick up on the patterns specific to bot accounts. They wound up with a model that they say is about 94 percent accurate in identifying propaganda bots. FactCheck.me does more than just track bot activity, though. It also applies image recognition technology to identify the most popular memes and images being circulated about a given topic by both bots and humans.
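    Robhat Labs hasn't published its model or features, but the general pattern, training a classifier to separate known propaganda bots from verified humans on behavioral signals, might look something like the sketch below. The features, the synthetic data, and the choice of logistic regression are all assumptions for illustration, not the team's actual system.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Illustrative per-account features: [tweets_per_hour, seconds_between_tweets,
    # new_followers_per_day, fraction_of_tweets_that_are_political].
    # Labels: 1 = known propaganda bot, 0 = verified human. All data is synthetic.
    rng = np.random.default_rng(0)
    n = 1000
    bots = np.column_stack([rng.normal(12, 3, n), rng.normal(300, 80, n),
                            rng.normal(900, 200, n), rng.uniform(0.7, 1.0, n)])
    humans = np.column_stack([rng.normal(1, 0.5, n), rng.normal(7200, 2000, n),
                              rng.normal(20, 10, n), rng.uniform(0.0, 0.4, n)])
    X = np.vstack([bots, humans])
    y = np.concatenate([np.ones(n), np.zeros(n)])

    # Hold out a test set and report accuracy, the analogue of the 94 percent claim.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
    ```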

    The tool is still in its earliest stages and requires Bhat and his eight-person team to pull the numbers themselves each time they get a request. Newsrooms interested in tracking a given event have to email Robhat Labs with the topic they want to track. Within 24 hours, the company will spit back a report. Reporters will be able to see both the extent of the bot activity on a given topic and the most shared pieces of content pertaining to that topic.

    There are limitations to this approach. It's not currently possible to view the percentage of bot activity over a longer period of time. FactCheck.me also doesn't indicate which way the bots are swaying the conversation. Still, it offers more information than newsrooms have previously had at their disposal. Plenty of researchers have studied bot activity on Twitter as a whole, but FactCheck.me allows for narrower analyses of specific topics, almost in real time. Already, Robhat Labs has released reports on the caravan, the shooting in Pittsburgh, and the Senate race in Texas.

    Twitter has spent the last year cracking down on bot activity on the platform. Earlier this year, the company banned users from posting identical tweets to multiple accounts at once or retweeting and liking en masse from different accounts. Then, in July, the company purged millions of bot accounts from the platform, and has booted tens of millions of accounts that it previously locked for suspicious behavior.

    But according to Bhat, the bots have hardly disappeared. They've just evolved. Now, rather than simply sending automated tweets that Twitter might delete, they work to amplify and spread divisive tweets written by actual humans. "The impact of these bot accounts is still seen and felt on Twitter," Bhat says.

     

    Read More

  • Social Media Bots Draw Public’s Attention and Concern


    While most Americans know about social media bots, many think they have a negative impact on how people stay informed

    About two-thirds of Americans have heard about social media bots, most of whom believe they are used maliciously

    Since the 2016 U.S. presidential election, many Americans have expressed concern about the presence of misinformation online, particularly on social media. Recent congressional hearings and investigations by social media sites and academic researchers have suggested that one factor in the spread of misinformation is social media bots – accounts that operate on their own, without human involvement, to post and interact with others on social media sites.

    This topic has drawn the attention of much of the public: About two-thirds of Americans (66%) have heard about social media bots, though far fewer (16%) have heard a lot about these accounts. Among those aware of the phenomenon, a large majority are concerned that bot accounts are being used maliciously, according to a new Pew Research Center survey conducted July 30-Aug. 12, 2018, among 4,581 U.S. adults who are members of Pew Research Center’s nationally representative American Trends Panel (the Center has previously studied bots on Twitter and the news sites to which they link). Eight-in-ten of those who have heard of bots say that these accounts are mostly used for bad purposes, while just 17% say they are mostly used for good purposes.

    To further understand some of the nuances of the public’s views of social media bots, the remainder of this study explores attitudes among those Americans who have heard about them (about a third – 34% – have not heard anything about them).

    While many Americans are aware of the existence of social media bots, fewer are confident they can identify them. About half of those who have heard about bots (47%) are very or somewhat confident they can recognize these accounts on social media, with just 7% saying they are very confident. In contrast, 84% of Americans expressed confidence in their ability to recognize made-up news in an earlier study.

    Most believe a fair amount of the news people see on social media comes from bots

    When it comes to the news environment specifically, many find social media bots’ presence pervasive and concerning. About eight-in-ten of those who have heard of bots (81%) think that at least a fair amount of the news people get from social media comes from these accounts, including 17% who think a great deal comes from bots. And about two-thirds (66%) think that social media bots have a mostly negative effect on how well-informed Americans are about current events, while far fewer (11%) believe they have a mostly positive effect.

    While the public’s overall impression of social media bots is negative, they have more nuanced views about specific uses of these accounts – with some uses receiving overwhelming support or opposition. For example, 78% of those who have heard about bots support the government using them to post emergency updates, the most popular function of the nine asked about in the survey. In contrast, these Americans are overwhelmingly opposed to the use of bots to post made-up news or false information (92%). They are also largely opposed to bots being used for political purposes and are more split when considering how companies and news organizations often use bots.

    Read More