• "The Biology of #Disinformation," a paper by @rushkoff, @pesco, and @dunagan23 - @iftf


    My Institute for the Future colleagues Douglas Rushkoff, Jake Dunagan, and I wrote a research paper on the "Biology of Disinformation" and how media viruses, bots and computational propaganda have redefined how information is weaponized for propaganda campaigns. While technological solutions may seem like the most practical and effective remedy, fortifying the social relationships that define human communication may be the best way to combat the “ideological warfare” that is designed to push us toward isolation. As Rushkoff says, "adding more AIs and algorithms to protect users from bad social media is counterproductive: how about increasing our cultural immune response to destructively virulent memes, instead?" From The Biology of Disinformation:


    The specter of widespread computational propaganda that leverages memetics through persuasive technologies looms large. Already, artificially intelligent software can evolve false political and social constructs highly targeted to sway specific audiences. Users find themselves in highly individualized, algorithmically determined news and information feeds, intentionally designed to: isolate them from conflicting evidence or opinions, create self-reinforcing feedback loops of confirmation, and untether them from fact-based reality. And these are just early days. If memes and disinformation have been weaponized on social media, it is still in the musket stage. Sam Woolley, director of the Institute for the Future’s (IFTF) Digital Intelligence Lab, has concluded that defenders of anything approaching “objective” truth are woefully behind in dealing with computational propaganda. This is the case in both technological responses and neuro-cultural defenses. Moreover, the 2018 and 2020 US election cycles are going to see this kind of cognitive warfare on an unprecedented scale and reach.


    But these mechanisms, however powerful, are only as much a threat to human reason as the memetic material they transmit, and the impact of weaponized memetics itself on the social and political landscape. Memes serve as both probes of collective cultural conflicts, and ways of inflaming social divisions. Virulent ideas and imagery only take hold if they effectively trigger a cultural immune response, leading to widespread contagion. This is less a question of technological delivery systems and more a question of human vulnerability. The urgent question we all face is not how to disengage from the modern social media landscape, but rather how to immunize ourselves against media viruses, fake news, and propaganda.

    Read More

  • “A Different Kind of #Propaganda”: Has #America Lost the #Information War?

    The most important public effort to counter Russian disinformation is understaffed, underfunded, and over-extended. Inside the battle Donald Trump doesn’t want to fight.

    The White House.
    By George Skadding/The LIFE Picture Collection/Getty Images.

    Secluded on the second floor of the State Department’s Harry S. Truman Building, just down the hall from the Counterterrorism Bureau, is the spy-proof chamber where the future of American warfare is being fought, one tweet at a time. The Global Engagement Center, or G.E.C., would be a mostly nondescript L-shaped office, if not for the hanging television monitors, tuned to the news, and high-walled cubicles where dozens of staffers labor over computers that have been retrofitted with special screens to prevent wandering eyes. No cell phones or electronic devices are allowed in or out.

    The G.E.C. is, in a sense, Washington’s answer to the Internet Research Agency, the St. Petersburg-based troll factory where Russian social-media specialists worked day and night to sway the course of the presidential election. Of course, it was probably already too late by the time the G.E.C. was established by President Barack Obama, in March 2016, as an alternative to the Center for Strategic Counterterrorism Communications. It wasn’t until December, weeks after Donald Trump had been elected, that the center’s mandate was expanded beyond combating terrorist propaganda to include state-sponsored disinformation under the National Defense Authorization Act. Even then, the G.E.C. struggled to catch up. Lacking support or direction from Secretary of State Rex Tillerson’s office, staffers’ work was handicapped for much of Trump’s first year in office. With fewer than 100 people working the disinformation beat, there were doubts both inside and outside the G.E.C. that the start-up-like group could overcome the bureaucratic obstacles of a White House with little interest in re-litigating the 2016 campaign, or even acknowledging the Russian threat.

    “The problem is that the secretary wasn’t going to wrap his head around this issue anytime soon, in the sense of I don’t even think he knew what the G.E.C. was until summer, and even then I would be surprised if he really knew,” one current State Department staffer told me. When Tillerson did acknowledge the G.E.C., the secretary and his top aides were perceived to be openly hostile toward its mission. In August, Politico reported that R.C. Hammond, who served as the State Department’s head of communications until December of last year, had urged Tillerson not to spend nearly $80 million that had been earmarked for the G.E.C. by Congress, including about $60 million from the Defense Department, because the effort would upset Moscow. (Hammond dismissed this narrative.) And while insiders have praised acting coordinator Daniel Kimmage, telling me that there is no better person to lead the effort than the foreign-service officer—who is fluent in both Russian and Arabic, among other languages—one source said he never gained the trust of the Trump partisans he reported to. He “specifically was seen as an Obama person,” the current State Department staffer said, noting that Kimmage was appointed to the post by a former John Kerry loyalist. “So people were suspicious. That also didn’t help.”

    Although the G.E.C. doesn’t represent the totality of U.S. efforts on the digital battlefield, its struggles are emblematic of larger intelligence failures spanning two administrations. Insiders familiar with the State Department’s counter-disinformation effort say the problem was exacerbated when Congress and the Obama administration pushed the G.E.C. to combat Russia and other state actors. “In my view, they took something that was working pretty well and they kind of broke it, because they tried to do too much with too little in the way of resources and with too little in the way of vision,” said a former State official in the Obama administration familiar with the situation.

    Thomas Hill, a former House Foreign Affairs Committee senior staffer who worked with Republican lawmakers to prevent the expansion of the G.E.C.’s mandate, argued that the center should have proved it could fight terrorist propaganda before taking on state actors, too. “It just misses the point because it demonstrates to me that people are less concerned with the G.E.C. being effective and actually achieving its mission, and more concerned with having something that they can point to and say, ‘See, we are doing something,’” Hill told me. “If people were serious about combating Russian propaganda, you have to be honest—$80 million and 50 people in the basement of the State Department is not going to cut it. That is not enough.”

    Others say that the G.E.C. suffers from fundamental flaws that extend beyond staffing and funding issues. While the current State Department staffer conceded that “the Seventh Floor wasn’t helpful—absolutely,” they argued that the G.E.C. struggled to deliver a proof of concept to win over Tillerson. “Tillerson was hyperfocused on analytics. He would want to know, ‘How do I know that this is effective?’—and if you are proposing something new and you don’t have any data . . . then that hurts your argument.”

    On a more prosaic level, however, as another election approaches, many State Department insiders fear that their new commander-in-chief might not want to solve the problem they have been tasked with fixing. For the last year, the United States intelligence community has been warning that Russia will seek to intervene in the 2018 midterm elections, too—and that the government is not prepared to stop it. “We’re taking steps, but we’re probably not doing enough,” Admiral Mike Rogers, the head of the National Security Agency, told Congress at the end of February when asked what the administration was doing to prevent further Kremlin interference. “President [Vladimir] Putin has clearly come to the conclusion that there’s little price to pay and that therefore ‘I can continue this activity.’”

    Donald Trump and Vladimir Putin hold a meeting on the sidelines of the G20 Summit in Hamburg, Germany, July 7, 2017.

    By Saul Loeb/AFP/Getty Images.

    But without buy-in from Trump, who reportedly chafes at any discussion of Russian election meddling, not much has changed. “The entire system gets energized and mobilized when the president is doing the mobilizing and energizing. If not, then you’re dependent on the different pieces of the system taking initiative. Some of them will, but others won’t,” a former senior national security and State Department official told me. For Putin, this person quipped, Trump was a “gift from the heavens.”

    In February, the Justice Department indicted the Internet Research Agency, along with two other Russian entities and 13 people, on criminal charges related to Moscow’s efforts to interfere in the 2016 election. According to the indictment, that operation had a budget of about $1.25 million a month—shoestring by American standards, but more than enough in Russia to employ hundreds of operatives controlling thousands of fake social-media accounts. Some stole real Americans’ identities; others flew to the United States to gather intelligence, produce videos, and organize rallies. The overarching goal, aided by the rank and file back in St. Petersburg, was to dial up the outrage on both sides, demoralizing Democrats and infuriating Republicans. The social-media strategy was ham-fisted but brutally effective: one I.R.A. account, @TEN_GOP, gained more than 100,000 followers and was re-tweeted by multiple Trump campaign officials before being shut down by Twitter in the summer of 2017.

    The G.E.C., meanwhile, is still struggling to get properly funded. While Tillerson approved a request for $40 million from the Defense Department last summer to fight state-sponsored disinformation, the Pentagon said that State had missed its opportunity, requesting the funds too close to a September 30 expiration date. Later, after months of haggling, Defense and State reached an agreement to transfer $40 million to the G.E.C., bringing the center’s 2018 funding to nearly $100 million. But the money hasn’t materialized. Last month, The New York Times reported that the State Department “has yet to spend any of the $120 million it has been allocated since late 2016.” State officials told the Times they hoped the sum would be made available by April.

    A bipartisan group of senators responded by demanding that State and Defense explain the delay. “We are particularly concerned that the apparent lack of urgency in transferring authorized funds from the Department of Defense to the Global Engagement Center has left the Center ill-equipped to carry out its mandate,” the lawmakers wrote. “The Department of State requested these funds from the Department of Defense in August 2017, and Congress authorized these funds because we see this mechanism as a critical component of a government-wide response to Russian malign influence.” But the letter appears to have fallen down a Kafka-esque rabbit hole. With the 2018 midterm elections fast approaching, a Defense Department spokesperson told me Friday that the G.E.C. is no closer to receiving the funding than it was in the beginning of March. A senior State Department official pushed back, saying the Pentagon had already initiated the process to transfer the funds. But the Pentagon official put the onus on Congress. “The Department of Defense cannot initiate a transfer of funds until a DoD’s fiscal year 2018 appropriations bill is passed that includes funding for the G.E.C. and applicable congressional notification procedures are completed,” they told me. Steve Goldstein, who served as undersecretary for public diplomacy until he was fired alongside Tillerson last month, told me that while Congress had authorized a transfer of up to $60 million under the National Defense Authorization Act, it never appropriated the funds. “That is a really key point,” he said.

    With Tillerson and Goldstein out, the future of the G.E.C. is even more up in the air. While Tillerson was skeptical of the effort, according to the current State staffer, Goldstein was a fierce advocate. “It definitely doesn’t help that Goldstein’s leaving, given how supportive he was” of the G.E.C., a Senate staffer told me when Goldstein was fired. While State Department spokesperson Heather Nauert is heading the office of Public Diplomacy and Public Affairs, there is currently no permanent head of the G.E.C. I am told that Diane Zeleny, the vice president of external relations at the United States Institute of Peace, is expected to be appointed to the post, but with dozens of top-level State Department positions still vacant, the timeline is foggy.

    Insiders hope new leadership will provide a shot in the arm. C.I.A. Director Mike Pompeo, who is expected to be confirmed by the Senate this week to take over the State Department, has expressed support for the G.E.C., testifying before the Senate Foreign Relations Committee earlier this month that he is committed to fighting disinformation. “We’ve had a small role at the Central Intelligence Agency pushing back against it,” he told Senator Rob Portman, who had pressed him for a funding commitment. “I know that there’s been lots of talk about the Global Engagement Center. In the event that I’m confirmed, I promise you I will put excellent foreign-service officers, excellent civil-service officers, on the task of developing out that capability—and using it in a robust way.”

    There are, of course, less public government efforts underway to combat disinformation, including within Cyber Command and Special Operations Command. According to the senior State Department official, the State Department contributed $1.3 billion during 2017 to bolster European resistance to Russian interference in the form of support for independent media, civil society, media literacy, and democratic institutions, among other things. “It is easy to criticize what we are trying to do, even though most people have no idea, because much of the work is classified and private,” Goldstein told me. “Listen, we take responsibility for what we are responsible for. It took too long to get the $40 million in, but it is not as if people were sitting around in their offices just waiting for the money to come in. That has just not been the case.”

    The problem, according to people familiar with these efforts, is that social media is a relatively new theater of war for the United States—and, potentially, one without any silver bullet. “I still haven’t seen anybody give me a good answer on how to combat disinformation,” said Rick Stengel, who had oversight of the G.E.C. as undersecretary for public diplomacy during the Obama administration. “It’s only been around since the Garden of Eden, and nobody’s ever come up with a good answer against it.” Obama-era staffers were flummoxed in 2014, when the Kremlin escalated so-called “special-war” tactics in Ukraine, manipulating the media to muddy international coverage of its invasion of Crimea. “There was a recognition at the White House that this was different, and that the kind of propaganda . . . that was different from what had been used before,” explained Brett Bruen, a former foreign-service officer who served as the White House director of global engagement from 2013 to 2015. “What was evident to me and to others was that we needed to get to a point where we had more tools that were preconstructed, and ready to deploy when and where they were needed, especially around principal vulnerabilities.”

    Yet the Obama administration was caught flat-footed. “There were people who were aware of the extent of Russian disinformation in the periphery, and around the annexation of Crimea and the invasion of Ukraine,” Stengel said. “In fact, the expansion or the creation of the G.E.C. was, in large part, due to that. Senators Portman and [Chris] Murphy were onto this. Their bill was about trying to make the U.S. more robust in responding to Russian disinformation,” he continued. “I was certainly aware of it and created the first counter-Russian disinformation entity at the State Department . . . but I’d be a liar if I told you I thought that the U.S. would be next.”

    Indeed, there was a widely held belief that the U.S. media landscape would be more resistant to Kremlin-backed propaganda, a former high-ranking N.S.C. and State Department official told me. Instead of scrutinizing the war playing out on Twitter and Facebook, government officials in the summer of 2016 were concerned that Russian operatives might seek to physically alter votes, and missed the forest for the trees. “I think we kind of missed fully understanding the social-media landscape and how that could be manipulated or abused,” the former official told me, noting that it wasn’t until after the election that the full picture of Russia’s interference came into focus. “I think there was a certain amount of wishful thinking that we are all subject to—that we live in a society that has been less prone to conspiracy theories and misinformation and so-called ‘fake news’ until recently. In our system, the truth usually rises to the top. Unfortunately, that is no longer the case, and Russia was able to take advantage of that.”

    There was also a belief within the administration that Russia wouldn’t dare meddle in a U.S. election. The idea that “they would anticipate that our retaliation would be so strong that it wouldn’t be worth it—the price was too high because we would come at them with everything if they ever attacked our democracy,” Moira Whelan, who served as the deputy assistant secretary for digital strategy at the State Department under Obama, told me. “That was one assumption that, sitting inside the Obama administration, I think I personally made—I think other people made.”

    The more terrifying concern now, among some former officials, is that President Trump has little interest in solving the problem—and may even have a personal stake in allowing it to fester. “I think the Obama administration should have been stronger,” the former State Department official told me, offering a neat summation of the dilemma facing the G.E.C. “The Trump administration could hardly be weaker.”

    Read More

  • #Facebook Is Still the Perfect #Propaganda Platform. These Sketchy Mexican Pages Show Why

    Credit: Mother Jones illustration; Jeff Roberson/AP

    After a covert Russian operation to fuel political divisions in the United States reached 146 million Facebook users in 2016, the company vowed to do more to stop fake news and propaganda in the lead-up to this November’s midterm elections. But a propaganda campaign underway in the Mexican presidential election is showing how easy it still is for unknown actors to use social media platforms to covertly influence elections.

    Four websites in Mexico are using Facebook and Twitter to mount what appear to be coordinated attacks on Andrés Manuel López Obrador, the left-wing candidate who is leading in the polls ahead of the country’s July 1 election. Three of them are designed to look like traditional news sites; the fourth is a transparently anti-López Obrador page that claims the founders of López Obrador’s political party want to install a totalitarian regime in Mexico. It’s unclear who’s behind the sites—it could be foreign actors, special interests, or political campaigns themselves.  

    Facebook CEO Mark Zuckerberg will testify before Congress on Tuesday and Wednesday and seek to reassure the American public that the company is working to stem the proliferation of political propaganda. But in the case of these four anonymous sites in Mexico, Facebook says they’re not violating its policies, and it seems to have no intention of stopping them or other sites that mount coordinated political attacks under the guise of news.

    Together, the four sites have more than 250,000 Facebook followers, plus another 35,000 on Twitter. Their content is not fake news in the vein of the 2016 story announcing falsely that the pope had endorsed President Donald Trump, which spread like wildfire on social media. It’s more like Breitbart News articles attacking Democrats and painting Trump in the most flattering light. The difference is that Breitbart and its biases are well known, whereas these sites provide no information about who writes their content and were set up just months before the election but are designed to look like news outlets.

    A Mexican Facebook user scrolling through her feed in search of political news might encounter what appears to be a news report from El Mexicano Digital, one of the four sites, that says López Obrador and the late Venezuelan strongman Hugo Chávez share “terrifying similarities.” Then, after seeing posts from established newspapers like La Jornada that look similar, she might come across a story from Cielo e Infierno (Heaven and Hell), the Facebook page for another of the four sites, Morena.mx, that refers to an old interview in which López Obrador said he had high blood pressure and claims ominously in a headline, “López Obrador Is Sick.” 

    Mother Jones has found distinct links among the four sites, suggesting they were created by the same person or people. Three of them registered with the same company within seconds of each other in December. The fourth was registered earlier, but it operates on the same servers as the other three, shares a Google Analytics account with them, and uses similar source code. 
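
    A shared Google Analytics account is a classic attribution signal, because the tracking ID embedded in a page’s source identifies a single analytics owner. Below is a minimal sketch of how such a check might work; the domains are placeholders and the script is an assumption about the general technique, not Mother Jones’s actual toolchain:

    ```python
    import re
    import urllib.request

    # Hypothetical domains standing in for the four sites; not the real URLs.
    SITES = [
        "https://example-political-site-1.mx",
        "https://example-political-site-2.mx",
    ]

    # Classic Google Analytics IDs look like "UA-12345678-1"; the middle
    # digits identify the analytics property, i.e. a single owner.
    GA_ID = re.compile(r"UA-(\d+)-\d+")

    def ga_properties(url: str) -> set:
        """Fetch a page and return the GA property numbers embedded in it."""
        html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "ignore")
        return set(GA_ID.findall(html))

    if __name__ == "__main__":
        found = {url: ga_properties(url) for url in SITES}
        # A property number appearing on several domains suggests common ownership.
        shared = set.intersection(*found.values())
        print("shared analytics properties:", shared or "none")
    ```

    Any one signal is weak on its own, but shared analytics properties, shared servers, and similar source code together point to a single operator, which is the inference drawn here.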

    A Facebook representative tells Mother Jones that all the sites comply with the company’s policies. The representative declined to disclose whether the sites pay Facebook to promote their content in users’ news feeds, but says Facebook would allow them to do so.

    In January, Zuckerberg touted the company’s effort to promote “high quality news” in order to prevent “sensationalism, misinformation, and polarization.” But Siva Vaidhyanathan, director of the University of Virginia’s Center for Media and Citizenship, believes Facebook’s problem with misleading news posts is essentially unsolvable in a society that protects people’s right to free speech. “There’s no way to regulate content discrimination in the United States of America,” he says. “That just won’t pass First Amendment muster. That’s just never going to happen.” 

    The ability to create anonymous websites posing as legitimate news outlets is, of course, not new. What’s changed is that these sites can reach hundreds of thousands of people on Facebook, sometimes by paying the company to promote their posts. There appears to be little to stop any special interest in the United States—or a malicious foreign actor—from creating a group of phony news outlets and then paying to push coordinated messages to large numbers of unsuspecting readers.

    Facebook is taking some steps to increase transparency before the midterm elections in the United States. The Facebook spokesperson says the company is planning to identify to users all the messages a particular page is paying to promote. The company also intends to tell users who is funding political ads, including issue ads that don’t mention a specific candidate, in the United States. Facebook announced on Friday that it will require pages with “large numbers of followers” to be authenticated, but it did not say what will be considered a large page or provide additional details about the authentication process. 

    After losing a disputed election in 2006 and coming in second again in 2012, López Obrador and his party, the National Regeneration Movement (known by its Spanish abbreviation Morena), are now the favorite to unseat the centrist Institutional Revolutionary Party (PRI), which has ruled Mexico for all but 12 years since 1929. Ricardo Anaya from the right-leaning National Action Party (PAN) is in second in the polls. José Antonio Meade, the PRI candidate, is currently in third.

    The four sites have dramatically expanded their reach in recent months. El Mexicano Digital had 18 Twitter followers in early January but now has more than 10,000. Some of its Facebook posts receive as many as 40,000 likes and other reactions, while many others receive only a handful, suggesting that the site may be paying Facebook, or an outside actor, to promote certain posts. Morena.mx’s Facebook videos have been viewed more than 8 million times, and some of its posts that read like press releases receive thousands of likes.

    Morena.mx is the most transparently biased site: It has featured an image of López Obrador with blood dripping from his mouth. The other three sites appear more like independent news outlets, but they’re similarly critical of López Obrador. El Zócalo de México claims to offer a “deep” look at Mexican news. Pinche Hemeroteca, roughly “Fucking Archive,” claims that it exposes Mexican politicians’ “lies” and “incoherent positions.” El Mexicano Digital provides no information about itself but has reportedly paid Google to promote its stories when people search for López Obrador. A spokesperson for Google declined to comment. 

    The four sites’ Google Analytics account is publicly linked to only one other website: Méxic-on, which until recently identified itself as a project of the Mexican business advocacy group Consejo de la Comunicación (Communications Council). After this story was first published, the Consejo de la Comunicación logo disappeared from the page; shortly after that, all of the site’s contents were deleted.* Roxana Núñez, the director of corporate affairs at the Consejo de la Comunicación, says her organization was not previously aware of Méxic-on and the four sites attacking López Obrador. Méxic-on, she says, is not a project of the Consejo de la Comunicación. 

    The Consejo de la Comunicación’s president, Federico López Otegui, said last year that his organization would not be backing a candidate in the 2018 election, and Núñez says that’s still the case. The organization reportedly opposed López Obrador when he ran for president in 2006. Raúl Benítez Manaut, a professor of North American security issues at the National Autonomous University of Mexico (UNAM), one of Mexico’s leading universities, says he would not be surprised if the Consejo de la Comunicación were opposing López Obrador behind the scenes. “The rich people in Mexico are really worried,” he says, noting proposals López Obrador has made that have concerned the business community. “And a big part of the middle class is really worried with [López Obrador] and they will try to stop him.”

    Last month, a group of 60 media outlets, civil society groups, and universities in Mexico launched a project called “Verificado 2018” to fact-check stories in the lead-up to the Mexican election. Google, Facebook, and Twitter are part of the effort. Facebook also took out full-page ads titled “Tips to Detect Fake News” in major Mexican newspapers last month. But the company’s efforts to promote reliable news outlets may be undermined by less trustworthy pages’ ability to stay within Facebook guidelines and purchase ads to spread propaganda.

    Twitter bots and trolls have often been seen as the main tool for manipulating Mexican voters through social media. Andrés Sepúlveda, a Colombian hacker, told Bloomberg Businessweek in 2016 that he received $600,000 to promote current Mexican President Enrique Peña Nieto and intercept his opponents’ communications in the 2012 election. As part of that effort, he said, he had a network of 30,000 Twitter bots at his disposal.

    A Twitter spokesperson told Mother Jones that the company was working to educate voters and parties “on best practices to inform, consume, and debate electoral information on Twitter.” The spokesperson added that Twitter is “thinking of working with” INE, Mexico’s National Electoral Institute, to recommend lists of accounts to follow, publish data on what users are discussing, and create election-related emoji.

    Sergio José Gutiérrez, a leading digital strategist for Mexican politicians, recently told the Spanish newspaper El País that there’s been a shift away from Twitter in 2018. “Parties have discovered more effective tactics,” he said, “such as generating fake news and publicity articles on supposed news sites that they sell to the highest bidder.” El Mexicano Digital and its likely affiliates appear to be part of that shift.

    Esteban Illades, a Mexican writer who recently published a book on fake news, is skeptical about how much these misinformation campaigns will influence Mexican voters, who already have strong opinions about López Obrador. More significant may be the precedent they set for would-be propagandists in other political campaigns, like the US midterms, who will see that Facebook is doing nothing to stop these efforts.

    Vaidhyanathan says Facebook’s struggles to promote high-quality news are likely to continue. “As long as Facebook is this big and Facebook behaves like Facebook, we’re kind of in for dark times,” he says, adding, “There’s nothing Facebook can do about it, frankly—unless it wants to completely alter what it is and what it does.”

    This story has been updated to include a response from the Consejo de la Comunicación and to note that the content of Méxic-on was deleted after this article was published.

    Read More

  • Here’s How Much Bots Drive Conversation During News Events - #socialmedia #propaganda #fakenews #Twitter #Bots #socialnetworks


    Casey Chin; Getty Images

    Last week, as thousands of Central American migrants made their way northward through Mexico, walking a treacherous route toward the US border, talk of "the caravan," as it's become known, took over Twitter. Conservatives, led by President Donald Trump, dominated the conversation, eager to turn the caravan into a voting issue before the midterms. As it turns out, they had some help—from propaganda bots on Twitter.

    Late last week, about 60 percent of the conversation was driven by likely bots. Over the weekend, even as the conversation about the caravan was overshadowed by more recent tragedies, bots were still driving nearly 40 percent of the caravan conversation on Twitter. That's according to an assessment by Robhat Labs, a startup founded by two UC Berkeley students that builds tools to detect bots online. The team's first product, a Chrome extension called BotCheck.me, allows users to see which accounts in their Twitter timelines are most likely bots. Now it's launching a new tool aimed at news organizations called FactCheck.me, which allows journalists to see how much bot activity there is across an entire topic or hashtag.

    Take the deadly shooting at the Tree of Life synagogue in Pittsburgh over the weekend. On Sunday, one day after the shooting, bots were driving 23 percent of the Twitter activity related to the incident, according to FactCheck.me.

    "These big crises happen, and there’s a flurry of social media activity, but it's really hard to go back and see what’s being spread and get numbers around bot activity," says Ash Bhat, a Robhat Labs cofounder. So the team built an internal tool. Now they're launching it publicly, in hopes of helping newsrooms measure the true volume of conversation during breaking news events, apart from the bot-driven din.

    "The impact of these bot accounts is still seen and felt on Twitter."

    Ash Bhat, Robhat Labs

    Identifying bots is an ever-evolving science. To develop their methodology, Bhat and his partner Rohan Phadte compiled a sample set of accounts they had high confidence were political propaganda bots. These accounts exhibited unusual behavior, like tweeting political content every few minutes throughout the day or amassing a huge following almost instantly. Unlike automated accounts that news organizations and other entities sometimes set up to send regularly scheduled tweets, the propaganda bots that Robhat Labs is focused on pose as humans. Bhat and Phadte also built a set of verified accounts to represent standard human behavior. They built a machine learning model that could compare the two and pick up on the patterns specific to bot accounts. They wound up with a model that they say is about 94 percent accurate in identifying propaganda bots. FactCheck.me does more than just track bot activity, though. It also applies image recognition technology to identify the most popular memes and images about a given topic being circulated by both bots and humans.
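
    Robhat Labs hasn’t published its model, but the general recipe described above (label known bots and known humans, extract per-account behavioral features, train a supervised classifier) is straightforward to sketch. The feature names and the random-forest choice below are illustrative assumptions, not the company’s actual implementation:

    ```python
    from dataclasses import dataclass, astuple
    from sklearn.ensemble import RandomForestClassifier

    @dataclass
    class AccountFeatures:
        tweets_per_hour: float     # posting cadence; propaganda bots post around the clock
        political_fraction: float  # share of tweets on political topics
        followers_per_day: float   # follower growth rate since account creation
        account_age_days: float

    # Toy labeled data: 1 = high-confidence propaganda bot, 0 = verified human.
    # A real system would use thousands of accounts and far richer features.
    examples = [
        (AccountFeatures(12.0, 0.95, 800.0, 30.0), 1),
        (AccountFeatures(8.0, 0.90, 400.0, 60.0), 1),
        (AccountFeatures(0.4, 0.10, 2.0, 2000.0), 0),
        (AccountFeatures(1.1, 0.25, 5.0, 900.0), 0),
    ]
    X = [list(astuple(account)) for account, _ in examples]
    y = [label for _, label in examples]

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Score an unseen account: the output is a bot probability, not a verdict.
    suspect = AccountFeatures(9.0, 0.9, 500.0, 45.0)
    print(clf.predict_proba([list(astuple(suspect))])[0][1])
    ```

    In practice the labeled sets and feature space would be far larger, and a figure like the reported 94 percent accuracy would be estimated on held-out accounts the model never saw during training.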

    The tool is still in its earliest stages and requires Bhat and his eight-person team to pull the numbers themselves each time they get a request. Newsrooms interested in tracking a given event have to email Robhat Labs with the topic they want to track. Within 24 hours, the company will spit back a report. Reporters will be able to see both the extent of the bot activity on a given topic, as well as the most shared pieces of content pertaining to that topic.

    There are limitations to this approach. It’s not currently possible to view the percentage of bot activity over a longer period of time. FactCheck.me also doesn’t indicate which way the bots are swaying the conversation. Still, it offers more information than newsrooms have previously had at their disposal. Plenty of researchers have studied bot activity on Twitter as a whole, but FactCheck.me allows for more narrow analyses of specific topics, almost in real time. Already, Robhat Labs has released reports on the caravan, the shooting in Pittsburgh, and the Senate race in Texas.

    Twitter has spent the last year cracking down on bot activity on the platform. Earlier this year, the company banned users from posting identical tweets to multiple accounts at once or retweeting and liking en masse from different accounts. Then, in July, the company purged millions of bot accounts from the platform, and has booted tens of millions of accounts that it previously locked for suspicious behavior.

    But according to Bhat, the bots have hardly disappeared. They've just evolved. Now, rather than simply sending automated tweets that Twitter might delete, they work to amplify and spread the divisive tweets written by actual humans. "The impact of these bot accounts is still seen and felt on Twitter," Bhat says.


    Read More

  • How on Earth do people fall for misinformation? To put it bluntly, they might not be thinking hard enough.

    Don’t Want To Fall For Fake News? Don’t Be Lazy.


    ON WEDNESDAY NIGHT, White House press secretary Sarah Huckabee Sanders shared an altered video of a press briefing with Donald Trump, in which CNN reporter Jim Acosta's hand makes brief contact with the arm of a White House intern. The clip is of low quality and edited to dramatize the original footage; it's presented out of context, without sound, at slow speed with a close-crop zoom, and contains additional frames that appear to emphasize Acosta's contact with the intern.

    And yet, in spite of the clip's dubious provenance, the White House decided to not only share the video but cite it as grounds for revoking Acosta's press pass. "[We will] never tolerate a reporter placing his hands on a young woman just trying to do her job as a White House intern," Sanders said. But the consensus, among anyone inclined to look closely, has been clear: The events described in Sanders' tweet simply did not happen.

    This is just the latest example of misinformation roiling our media ecosystem. The fact that it continues to not only crop up but spread—at times faster and more widely than legitimate, factual news—is enough to make anyone wonder: How on Earth do people fall for this schlock?

    To put it bluntly, they might not be thinking hard enough. The technical term for this is "reduced engagement of open-minded and analytical thinking." David Rand—a behavioral scientist at MIT who studies fake news on social media, who falls for it, and why—has another name for it: "It's just mental laziness," he says.

    Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

    The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

    Several of Rand's recent experiments support theory number two. In a pair of studies published this year in the journal Cognition, he and his research partner, University of Regina psychologist Gordon Pennycook, tested people on the Cognitive Reflection Test, a measure of analytical reasoning that poses seemingly straightforward questions with non-intuitive answers, like: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? They found that high scorers were less likely to perceive blatantly false headlines as accurate, and more likely to distinguish them from truthful ones, than those who performed poorly.
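
    (For the record, the reflexive answer of ten cents is wrong; writing the constraint out as algebra shows why:)

    ```latex
    % Let x be the price of the ball, so the bat costs x + 1.00.
    \[
    x + (x + 1.00) = 1.10
    \;\Longrightarrow\; 2x = 0.10
    \;\Longrightarrow\; x = 0.05
    \]
    % The ball costs five cents. Answering ten cents would make the
    % bat cost $1.10 and push the total to $1.20.
    ```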

    Another study, published on the preprint platform SSRN, found that asking people to rank the trustworthiness of news publishers (an idea Facebook briefly entertained earlier this year) might actually decrease the level of misinformation circulating on social media. The researchers found that, despite partisan differences in trust, the crowdsourced ratings did "an excellent job" distinguishing between reputable and non-reputable sources.

    "That was surprising," says Rand. Like a lot of people, he originally assumed the idea of crowdsourcing media trustworthiness was a "really terrible idea." His results not only indicated otherwise, they also showed, among other things, "that more cognitively sophisticated people are better at differentiating low- vs high-quality [news] sources." (And because you are probably now wondering: When I ask Rand whether most people fancy themselves cognitively sophisticated, he says the answer is yes, and also that "they will, in general, not be." The Lake Wobegon Effect: It's real!)

    His most recent study, which was just published in the Journal of Applied Research in Memory and Cognition, finds that belief in fake news is associated not only with reduced analytical thinking, but also—go figure—delusionality, dogmatism, and religious fundamentalism.

    All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

    Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it. This isn't baseless philosophizing; most folks just tend not to use social media to engage critically with whatever news, video, or sound bite is flying past. As one recent study shows, most people browse Twitter and Facebook to unwind and defrag—hardly the mindset you want to adopt when engaging in cognitively demanding tasks.

    But it doesn't have to be that way. Platforms could use visual cues that call to mind the mere concept of truth in the minds of their users—a badge or symbol that evokes what Rand calls an "accuracy stance." He says he has experiments in the works that investigate whether nudging people to think about the concept of accuracy can make them more discerning about what they believe and share. In the meantime, he suggests confronting fake news espoused by other people not necessarily by lambasting it as fake, but by casually bringing up the notion of truthfulness in a non-political context. You know: just planting the seed.

    It won't be enough to turn the tide of misinformation. But if our susceptibility to fake news really does boil down to intellectual laziness, it could make for a good start. A dearth of critical thought might seem like a dire state of affairs, but Rand sees it as cause for optimism. "It makes me hopeful," he says, "that moving the country back in the direction of some more common ground isn’t a totally lost cause."

    Read More

  • Online anger is gold to this #junknews pioneer

    Meet one of the Internet's most prolific distributors of hyper-partisan fare. From California, Cyrus Massoumi caters to both liberals and conservatives, serving up political grist through various Facebook pages. Science correspondent Miles O'Brien profiles a leading purveyor of junk news who has hit the jackpot exploiting the trend toward tribalism.

    Read the Full Transcript

    • Judy Woodruff:

      Now to our deep dive on the continuing problem of false or misleading news, or what you might call junk news.

      Much of the attention recently has centered on Facebook. And, yesterday, the company’s founder and CEO, Mark Zuckerberg, told “Wired” magazine that it may take up to three years to fully prevent all kinds of harmful content from affecting people’s news feeds.

      Tonight, Miles O’Brien’s latest report profiles a man who’s been a leading purveyor of junk news, and how he has been exploiting Facebook to reach an audience.

      It’s part of our weekly series on the Leading Edge of technology.

    • Man:

      There has been a shooting at a high school in Parkland.

    • Cyrus Massoumi:

      Right now, we have about 5,300 people and change on the Web site.

    • Miles O’Brien:

      It was a busy day at the office when we met one of the Internet’s most prolific distributors of hyperpartisan fare.

    • Cyrus Massoumi:

      Actually, in a story like this, we do actually beat the mainstream media for these sorts of breaking news events.

    • Miles O’Brien:

      It was the day of the high school shootings in Parkland, Florida, and as the horrific events unfolded, Cyrus Massoumi was spinning facts reported by others to fit the world view of his audience.

    • Cyrus Massoumi:

      You can see that, like, he is wearing a “Make America Great Again” hat.

    • Miles O’Brien:

      Right.

    • Cyrus Massoumi:

      And he has lots of photos of guns, so, obviously, this is going to be a very controversial issue.

    • Miles O’Brien:

      His site is called Truth Examiner. And it caters to liberals, with headlines like this designed to entice clicks on stories with little substance.

      His writers are among the five most successful at luring those clicks on Facebook.

      People want to read those lines to reaffirm their beliefs, right?

    • Cyrus Massoumi:

      Correct.

    • Miles O’Brien:

      And that is not rocket science, is it?

    • Cyrus Massoumi:

      It’s not rocket science, but doing it faster and better than your competitors is an art.

    • Miles O’Brien:

      Lately, Truth Examiner has added something else to the formula, a steady stream of conspiracy theories, ironically, accusing the Trump administration of peddling fake news.

      Massoumi has thrived in this murky world for eight years, hedging his bets, serving up grist for liberals and conservatives through various Facebook pages.

    • Cyrus Massoumi:

      They want like 250-word, like little hit them and go. It’s like — basically like a coke addict. Every hour, he just needs to get that little dopamine rush. Like, a fan on the conservative side or the liberal side needs to take out their phone, look at it, oh, Trump sucks. Trump sucks, so bad. All right, all right, I’m done, I’m done, and then, right?

      Like, that’s it. That’s it.

    • Miles O’Brien:

      People don’t care about the facts.

    • Cyrus Massoumi:

      Yes, of course. People don’t care about facts. Take it to the bank.

    • Miles O’Brien:

      He estimates he has spent over a million dollars in ads, reaching over 100 million people, and has made several million dollars by selling that audience to advertisers on his own site and on Facebook.

      Do you create fake news?

    • Cyrus Massoumi:

      No. No, I don’t.

    • Miles O’Brien:

      Tell me what it is then.

    • Cyrus Massoumi:

      Always inflammatory, like excluding facts from the other side, but never fake. My team, they don’t cover news angles which are favorable to the opposition, in the same way that CNN or MSNBC would never cover a favorable angle to Trump.

    • Miles O’Brien:

      He lives in the home where he grew up, on a nine-acre vineyard in Napa, California.

    • Cyrus Massoumi:

      We grow a brand of cabernet which is, I’m told, very nice although I’m not a wine person.

    • Miles O’Brien:

      He is a self-described cultural libertarian, free thinker and lover of politics. For him, it all started in high school. He was selling anti-Obama T-shirts and decided Facebook was a good way to reach more customers.

      It worked. He learned how to build an audience on Facebook, dropped the T-shirts and created Mr. Conservative, his first hyperpartisan site.

    • Cyrus Massoumi:

      So, I’m a marketer with a love of politics. And, you know, I contend that marketers will be the king of the future of media. I think that the danger is not the Russians or the Macedonians, but that the actual danger is when you have a marketer who doesn’t love politics.

    • Miles O’Brien:

      Producer Cameron Hickey found Cyrus Massoumi during our 16-month investigation of hyperpartisan misinformation on Facebook.

      Cameron’s key reporting tool? Software that he wrote that analyzes social media, looking for the sources of what we call junk news.

    • Cameron Hickey:

      It’s clear that a lot of the publishers are domestic, and I think we have given a lot of attention to Russian disinformation or Macedonian teenage profiteers, but both of those groups, I think, learned it from these guys.

      They have learned it from Americans, who have been long profiting on partisan information or other kinds of junk.

    • Miles O’Brien:

      Social networking allows us all to bypass the traditional arbiters of truth that evolved in the 20th century.

    • Danah Boyd:

      Historically, our information landscape has been tribal. We turn to the people that are like us, the people that we know, the people around us to make sense of what is real and what we believe in.

    • Miles O’Brien:

      Computer scientist Danah Boyd is president and founder of Data & Society.

    • Danah Boyd:

      And what we’re seeing now with the networked media landscape is the ability to move back towards extreme tribalism. And there are a whole variety of actors, state actors, non-state actors, who are happy to move along a path where people are actually not putting their faith in institutions or information intermediaries, and are instead turning to their tribes, to their communities.

    • Miles O’Brien:

      Cyrus Massoumi’s first big jackpot exploiting this trend toward tribalism was linked to yet another mass shooting at a school, this one in Sandy Hook, Connecticut, in 2012.

      In the midst of that horror, he bought a Facebook ad that asked a question: do you stand against the assault weapons ban? If so, click like. Those who did became subscribers to his page, ensuring his content would rise to the top of their news feeds. He had bought thousands of fans at a very low price.

    • Cyrus Massoumi:

      I felt subsequently that I built my first business, sort of if you want to call it, on the graves of young children who were killed.

    • Miles O’Brien:

      Well, how do you feel about that?

    • Cyrus Massoumi:

      I don’t know. How do people feel about things that they do badly? I feel bad about it, but, I mean, we do what we do to pay the mortgage, right?

    • Miles O’Brien:

      The strategy Massoumi helped pioneer spread like virtual wildfire. By 2016, marketers, political operatives and state actors were all using the same playbook of hyped headlines, political propaganda and outright falsehoods.

    • Danah Boyd:

      They were all in an environment together, a melting pot, if you will, and with a whole set of really powerful skills, when they saw a reality TV star start to run for president.

      And that’s pretty funny. That’s pretty interesting. And so it was fun to create spectacle.

    • Miles O’Brien:

      The stage was set for the 2016 presidential election and an unprecedented misinformation campaign waged on several fronts.

      Back in Napa, Cyrus Massoumi was doing well, running a conservative page called Truth Monitor, along with the liberal Truth Examiner. Massoumi says anger is what generates likes, and conservative stories were more lucrative.

    • Cyrus Massoumi:

      Conservatives are angrier people.

    • Miles O’Brien:

      Tell me about that.

    • Cyrus Massoumi:

      You ever seen a Trump rally on TV?

    • Miles O’Brien:

      Yes.

    • Cyrus Massoumi:

      Yes? It’s gold.

    • Miles O’Brien:

      But, since the election, the conservative side of Massoumi’s business has dried up. His site that used to offer that content has moved into feel-good stories.

      He says competition among conservative hyperpartisan sites created a junk news arms race, making the content too extreme to be ranked favorably by the Facebook news feed algorithm.

    • Cyrus Massoumi:

      On the conservative side, I think that we were at one point publishing low-quality clickbait. That’s what the conservative devolved into.

    • Miles O’Brien:

      Is it unpatriotic to do it?

    • Cyrus Massoumi:

      To publish low-quality clickbait? I think that people like what they like. And my goal at one point was to deliver to them what they like.

      And, unfortunately, the reality of that is, is that people are prone to go for the lowest common denominator.

    • Miles O’Brien:

      But, for Cyrus Massoumi, the target really doesn’t matter, so long as he hits the mark. Stirring up anger, no matter on which side, is very good for business.

      Ahead as we continue our series, you will meet two of the fans bought by Cyrus Massoumi, a deep blue liberal from Brooklyn and a Christian conservative from Indianapolis.

      For the “PBS NewsHour,” I’m Miles O’Brien in Napa, California.

    • Judy Woodruff:

      Miles’ series on Facebook and junk news continues next week. You can watch part one and find more reporting on our Web site, PBS.org/NewsHour.

    Read More

  • The hypodermic effect—how #propaganda manipulates our emotions


    Credit: CC0 Public Domain

    The scandal surrounding the improper use of data by Cambridge Analytica and Facebook in the 2016 U.S. election is reminiscent of the old debates about propaganda and its ability to "violate the minds of the masses," as Sergei Tchakhotin, an expert in the study of Nazi propaganda, put it.

    The Russian sociologist said that the masses were subjected to a sophisticated machinery of manipulation that could, through the strategic use of radio, film and well-orchestrated performances, touch on and influence the basic instincts of Germans.

    Decades later, we're once again back discussing the manipulation of emotions, this time via social media platforms.

    Of course, the communication ecosystem is very different from what existed for Joseph Goebbels, Hitler's propaganda minister. But the underlying principles for manipulating the masses do not seem to have changed much.

    Reports indicate that Cambridge Analytica developed a methodology that allowed them to establish psychographic profiles of Facebook users, and thus push emotional buttons that could influence their political preferences and voting behaviour.

    To some degree, this represents the return of what's known as the hypodermic effect in which the audience falls "victim" to powerful media that have the ability to manipulate our emotions and shape our understanding of the world.

    Research, however, indicates that how we respond to media does not adhere to what's known as a stimulus-response causality. There are other factors that intervene in the way people use, perceive and process what they consume in the media. They are known as "mediations" that, according to the Spanish-Colombian professor Jesús Martín Barbero, are the different ways people interpret the messages conveyed by the media.

    Using our data to influence us

    But today, governments, corporations and political parties have the unprecedented ability to process a litany of data and then, through sophisticated algorithms, broadcast messages and images to influence an increasingly segmented audience.

    One must ask, then, what role Martín Barbero's mediations—our cultural references, values, family, friends and other reference groups that influence our reading of the mediated messages—will play in how we consume information and entertainment on social networks.

    Are we condemned to live the "dystopian realism" presented by the British TV series Black Mirror in which digital media penetrate the intimacy of a human being too clumsy to resist the temptation of being manipulated, according to the show's creator Charlie Brooker?

    The debate about the influence of Facebook and unscrupulous companies like Cambridge Analytica reveals the importance of emotions not only in our private lives but also in our so-called "public lives" as citizens. The problem arises in terms not only of "emotional manipulation" but of the role emotions play in how we relate and understand the world around us.

    As the neuroscientist Antonio Damasio recently said: "Culture works by a system of selection similar to that of genetic selection, except that what is being selected is an instrument that we put into practice. Feelings are an agent in cultural selection. I think that the beauty of the idea is in seeing feelings as motivators, as a surveillance system, and as negotiators."

    If feelings are an integral part of this "cultural selection," are we facing a shift in this sociocultural evolutionary process due to the "algorithmization" of emotions?

    Is historian Yuval Noah Harari right when he says that "technological religion"—he calls it "dataism"—is transforming us in such a way that it will make homo sapiens irrelevant and put the human being on the periphery in a world dominated by algorithms?

    More isolation ahead?

    These are complex questions that are difficult to answer.

    In any case, it seems that our intellectual or even emotional laziness is transforming us into puppets of our emotions. Evidence is emerging that digital media is changing the configuration of our nervous system and our forms of socialization.

    Sherry Turkle, a professor at MIT, observes in her book Alone Together: Why We Expect More From Technology and Less From Each Other that there are already signs of dissatisfaction among young people who are obsessed with their image on social media while losing the capacity for introspection; mothers who feel that communication with their children via text messages is more frequent but less substantive; and Facebook users who think that the banalities they share with their "virtual friends" devalue the true intimacy between friends.

    If virtual relations replace face-to-face contact, we may see more isolation, individualism and less social cohesion, which does not bode well for the survival of democracy.

    It's also likely that the expansion of social media does not make us more rational. Although we have access to more information and participate in more public debates about issues that affect us as individuals and as a society, that doesn't mean we're doing so more rationally or based on arguments that are scientifically factual.

    The rise of religious fundamentalism, nationalism, and beliefs in all kinds of sects and New Age fads is a symptom of a "return of the sorcerers," or magical thinking, in our digital society.

    We deploy our egos on social media, sometimes with a compulsive need for recognition. This knowledge of our self, quantified in big data and transformed into affective algorithms, is exploited by corporations and political parties to give us, as Andy Warhol said, our 15 minutes of fame.

    The sorcerers of propaganda are back—this time with more powerful means than their predecessors.

    Read More

  • Why #Sinclair Made Dozens of Local News Anchors Recite the Same Script - #Propaganda #FakeNews #StateTelevision

    “Unfortunately, some members of the media use their platforms to push their own personal bias and agenda to control exactly what people think,” dozens of news anchors said last month, reading from a script provided by Sinclair Broadcast Group. Credit: via YouTube

    On local news stations across the United States last month, dozens of anchors gave the same speech to their combined millions of viewers.

    It included a warning about fake news, a promise to report fairly and accurately and a request that viewers go to the station’s website and comment “if you believe our coverage is unfair.”

    It may not have seemed strange until viewers began to notice that the newscasters from Seattle to Phoenix to Washington sounded very similar. Stitched-together videos on social media showed them eerily echoing the same lines:

    “The sharing of biased and false news has become all too common on social media.”

    “Some members of the media use their platforms to push their own personal bias.”

    “This is extremely dangerous to our democracy.”

    The script came from Sinclair Broadcast Group, the country’s largest broadcaster, which owns or operates 193 television stations.

    Last week, The Seattle Post-Intelligencer published a copy of the speech and reported that employees at a local news station there, KOMO, were unhappy about the script. CNN reported on it on March 7 and said Scott Livingston, the senior vice president of news for Sinclair, had read almost the exact same speech for a segment that was distributed to outlets a year ago.

    A union that represents news anchors did not respond immediately to requests for comment on Sunday.

    Dave Twedell of the International Cinematographers Guild, who is a business representative for photojournalists (but not anchors) at KOMO in Seattle and KATU in Portland, Ore., said Sinclair told journalists at those stations not to discuss the company with outside news media.

    Although it is the country’s largest broadcaster, Sinclair is not a household name and viewers may be unaware of who owns their local news station. Critics have accused the company of using its stations to advance a mostly right-leaning agenda.

    “We work very hard to be objective and fair and be in the middle,” Mr. Livingston told The New York Times last year. “I think maybe some other news organizations may be to the left of center, and we work very hard to be in the center.”

    Sinclair regularly sends video segments to the stations it owns. These are referred to as “must-runs,” and they can include content like terrorism news updates, commentators speaking in support of President Trump, or speeches from company executives like the one from Mr. Livingston last year.

    But asking newscasters to present the material themselves is not something that Kirstin Pellizzaro, a doctoral candidate at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, remembered from her experience as a producer at a Sinclair-owned news station in Kalamazoo, Mich., from 2014 to 2015.

    The station had to air “must-run” segments that came from Sinclair, which is based outside Baltimore. “Some of them were a little slanted, a little biased,” Ms. Pellizzaro said. “Packages of this nature can make journalists uncomfortable.”

    Sinclair representatives did not immediately respond to requests for comment on Sunday. But Mr. Livingston told The Baltimore Sun that the script was meant to demonstrate Sinclair’s “commitment to reporting facts,” adding that false stories “can result in dangerous consequences,” referring to the Pizzagate conspiracy as an example.

    “We are focused on fact-based reporting,” Mr. Livingston continued. “That’s our commitment to our communities. That’s the goal of these announcements: to reiterate our commitment to reporting facts in a pursuit of truth.”

    Ms. Pellizzaro said she can talk about Sinclair more freely now because she is working in academia, whereas journalists at stations owned by Sinclair might feel pressured not to bite the hand that feeds them.

    “I hope people realize that the journalists are trying their best, and this shouldn’t reflect poorly on them,” she said. “They’re just under this corporate umbrella.”

    Sinclair has been accused of using connections in the Trump administration to ease regulations on media consolidation. In an effort to expand its reach, the company is seeking approval from the Justice Department and the Federal Communications Commission for a $3.9 billion deal to buy Tribune Media.


    Read More