• Here’s How Much Bots Drive Conversation During News Events - #socialmedia #propaganda #fakenews #Twitter #Bots #socialnetworks

     

    Here’s How Much Bots Drive Conversation During News Events

     

    Casey Chin; Getty Images

    Last week, as thousands of Central American migrants made their way northward through Mexico, walking a treacherous route toward the US border, talk of "the caravan," as it's become known, took over Twitter. Conservatives, led by President Donald Trump, dominated the conversation, eager to turn the caravan into a voting issue before the midterms. As it turns out, they had some help—from propaganda bots on Twitter.

    Late last week, about 60 percent of the conversation was driven by likely bots. Over the weekend, even as the conversation about the caravan was overshadowed by more recent tragedies, bots were still driving nearly 40 percent of the caravan conversation on Twitter. That's according to an assessment by Robhat Labs, a startup founded by two UC Berkeley students that builds tools to detect bots online. The team's first product, a Chrome extension called BotCheck.me, allows users to see which accounts in their Twitter timelines are most likely bots. Now it's launching a new tool aimed at news organizations called FactCheck.me, which allows journalists to see how much bot activity there is across an entire topic or hashtag.

    Take the deadly shooting at the Tree of Life synagogue in Pittsburgh over the weekend. On Sunday, one day after the shooting, bots were driving 23 percent of the Twitter activity related to the incident, according to FactCheck.me.

    "These big crises happen, and there’s a flurry of social media activity, but it's really hard to go back and see what’s being spread and get numbers around bot activity," says Ash Bhat, a Robhat Labs cofounder. So the team built an internal tool. Now they're launching it publicly, in hopes of helping newsrooms measure the true volume of conversation during breaking news events, apart from the bot-driven din.

    "The impact of these bot accounts is still seen and felt on Twitter."

    Ash Bhat, Robhat Labs

    Identifying bots is an ever-evolving science. To develop their methodology, Bhat and his partner Rohan Phadte compiled a sample set of accounts they had a high confidence were political propaganda bots. These accounts exhibited unusual behavior, like tweeting political content every few minutes throughout the day or amassing a huge following almost instantly. Unlike automated accounts that news organizations and other entities sometimes set up to send regularly scheduled tweets, the propaganda bots that Robhat Labs is focused on pose as humans. Bhat and Phadte also built a set of verified accounts to represent standard human behavior. They built a machine learning model that could compare the two and pick up on the patterns specific to bot accounts. They wound up with a model that they say is about 94 percent accurate in identifying propaganda bots. Factcheck.me does more than just track bot activity, though. It also applies image recognition technology to identify the most popular memes and images about a given topic being circulated by both bots and humans.

    The tool is still in its earliest stages and requires Bhat and his eight-person team to pull the numbers themselves each time they get a request. Newsrooms interested in tracking a given event have to email Robhat Labs with the topic they want to track. Within 24 hours, the company will spit back a report. Reporters will be able to see both the extent of the bot activity on a given topic and the most shared pieces of content pertaining to that topic.

    There are limitations to this approach. It's not currently possible to view the percentage of bot activity over a longer period of time. FactCheck.me also doesn't indicate which way the bots are swaying the conversation. Still, it offers more information than newsrooms have previously had at their disposal. Plenty of researchers have studied bot activity on Twitter as a whole, but FactCheck.me allows for more narrow analyses of specific topics, almost in real time. Already, Robhat Labs has released reports on the caravan, the shooting in Pittsburgh, and the Senate race in Texas.

    Twitter has spent the last year cracking down on bot activity on the platform. Earlier this year, the company banned users from posting identical tweets to multiple accounts at once or retweeting and liking en masse from different accounts. Then, in July, the company purged millions of bot accounts from the platform, and has booted tens of millions of accounts that it previously locked for suspicious behavior.

    But according to Bhat, the bots have hardly disappeared. They've just evolved. Now, rather than simply sending automated tweets that Twitter might delete, they work to amplify and spread the divisive tweets written by actual humans. "The impact of these bot accounts is still seen and felt on Twitter," Bhat says.

     

    Read More

  • How on Earth do people fall for misinformation? To put it bluntly, they might not be thinking hard enough.

    Don’t Want To Fall For Fake News? Don’t Be Lazy.


    ON WEDNESDAY NIGHT, White House press secretary Sarah Huckabee Sanders shared an altered video of a press briefing with Donald Trump, in which CNN reporter Jim Acosta's hand makes brief contact with the arm of a White House intern. The clip is of low quality and edited to dramatize the original footage; it's presented out of context, without sound, at slow speed with a close-crop zoom, and contains additional frames that appear to emphasize Acosta's contact with the intern.

    And yet, in spite of the clip's dubious provenance, the White House decided to not only share the video but cite it as grounds for revoking Acosta's press pass. "[We will] never tolerate a reporter placing his hands on a young woman just trying to do her job as a White House intern," Sanders said. But the consensus, among anyone inclined to look closely, has been clear: The events described in Sanders' tweet simply did not happen.

    This is just the latest example of misinformation roiling our media ecosystem. The fact that it continues to not only crop up but spread—at times faster and more widely than legitimate, factual news—is enough to make anyone wonder: How on Earth do people fall for this schlock?

    To put it bluntly, they might not be thinking hard enough. The technical term for this is "reduced engagement of open-minded and analytical thinking." David Rand—a behavioral scientist at MIT who studies fake news on social media, who falls for it, and why—has another name for it: "It's just mental laziness," he says.

    Misinformation researchers have proposed two competing hypotheses for why people fall for fake news on social media. The popular assumption—supported by research on apathy over climate change and the denial of its existence—is that people are blinded by partisanship, and will leverage their critical-thinking skills to ram the square pegs of misinformation into the round holes of their particular ideologies. According to this theory, fake news doesn't so much evade critical thinking as weaponize it, preying on partiality to produce a feedback loop in which people become worse and worse at detecting misinformation.

    The other hypothesis is that reasoning and critical thinking are, in fact, what enable people to distinguish truth from falsehood, no matter where they fall on the political spectrum. (If this sounds less like a hypothesis and more like the definitions of reasoning and critical thinking, that's because they are.)

    Several of Rand's recent experiments support theory number two. In a pair of studies published this year in the journal Cognition, he and his research partner, University of Regina psychologist Gordon Pennycook, tested people on the Cognitive Reflection Test, a measure of analytical reasoning that poses seemingly straightforward questions with non-intuitive answers, like: A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost? They found that high scorers were less likely to perceive blatantly false headlines as accurate, and more likely to distinguish them from truthful ones, than those who performed poorly.
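
    For readers who want to check the "non-intuitive" part: the snap answer most people give is 10 cents, but that fails the stated conditions. Worked out explicitly:

```latex
% Let b be the ball's price in dollars; the bat then costs b + 1.00.
\begin{align}
  b + (b + 1.00) &= 1.10 \\
  2b &= 0.10 \\
  b &= 0.05
\end{align}
% The ball costs 5 cents. The intuitive answer of 10 cents would make the
% bat $1.10 and the total $1.20, contradicting the $1.10 total.
```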

    Another study, published on the preprint platform SSRN, found that asking people to rank the trustworthiness of news publishers (an idea Facebook briefly entertained earlier this year) might actually decrease the level of misinformation circulating on social media. The researchers found that, despite partisan differences in trust, the crowdsourced ratings did "an excellent job" distinguishing between reputable and non-reputable sources.

    "That was surprising," says Rand. Like a lot of people, he originally assumed the idea of crowdsourcing media trustworthiness was a "really terrible idea." His results not only indicated otherwise, they also showed, among other things, "that more cognitively sophisticated people are better at differentiating low- vs high-quality [news] sources." (And because you are probably now wondering: When I ask Rand whether most people fancy themselves cognitively sophisticated, he says the answer is yes, and also that "they will, in general, not be." The Lake Wobegon Effect: It's real!)

    His most recent study, which was just published in the Journal of Applied Research in Memory and Cognition, finds that belief in fake news is associated not only with reduced analytical thinking, but also—go figure—delusionality, dogmatism, and religious fundamentalism.

    All of which suggests susceptibility to fake news is driven more by lazy thinking than by partisan bias. Which on one hand sounds—let's be honest—pretty bad. But it also implies that getting people to be more discerning isn't a lost cause. Changing people's ideologies, which are closely bound to their sense of identity and self, is notoriously difficult. Getting people to think more critically about what they're reading could be a lot easier, by comparison.

    Then again, maybe not. "I think social media makes it particularly hard, because a lot of the features of social media are designed to encourage non-rational thinking," Rand says. Anyone who has sat and stared vacantly at their phone while thumb-thumb-thumbing to refresh their Twitter feed, or closed out of Instagram only to re-open it reflexively, has experienced firsthand what it means to browse in such a brain-dead, ouroboric state. Default settings like push notifications, autoplaying videos, algorithmic news feeds—they all cater to humans' inclination to consume things passively instead of actively, to be swept up by momentum rather than resist it. This isn't baseless philosophizing; most folks just tend not to use social media to engage critically with whatever news, video, or sound bite is flying past. As one recent study shows, most people browse Twitter and Facebook to unwind and defrag—hardly the mindset you want to adopt when engaging in cognitively demanding tasks.

    But it doesn't have to be that way. Platforms could use visual cues that call to mind the mere concept of truth in the minds of their users—a badge or symbol that evokes what Rand calls an "accuracy stance." He says he has experiments in the works that investigate whether nudging people to think about the concept of accuracy can make them more discerning about what they believe and share. In the meantime, he suggests confronting fake news espoused by other people not necessarily by lambasting it as fake, but by casually bringing up the notion of truthfulness in a non-political context. You know: just planting the seed.

    It won't be enough to turn the tide of misinformation. But if our susceptibility to fake news really does boil down to intellectual laziness, it could make for a good start. A dearth of critical thought might seem like a dire state of affairs, but Rand sees it as cause for optimism. "It makes me hopeful," he says, "that moving the country back in the direction of some more common ground isn’t a totally lost cause."

    Read More

  • Online anger is gold to this #junknews pioneer

    Meet one of the Internet's most prolific distributors of hyper-partisan fare. From California, Cyrus Massoumi caters to both liberals and conservatives, serving up political grist through various Facebook pages. Science correspondent Miles O'Brien profiles a leading purveyor of junk news who has hit the jackpot exploiting the trend toward tribalism.

    Read the Full Transcript

    • Judy Woodruff:

      Now to our deep dive on the continuing problem of false or misleading news, or what you might call junk news.

      Much of the attention recently has centered on Facebook. And, yesterday, the company’s founder and CEO, Mark Zuckerberg, told “Wired” magazine that it may take up to three years to fully prevent all kinds of harmful content from affecting people’s news feeds.

      Tonight, Miles O’Brien’s latest report profiles a man who’s been a leading purveyor of junk news, and how he has been exploiting Facebook to reach an audience.

      It’s part of our weekly series on the Leading Edge of technology.

    • Man:

      There has been a shooting at a high school in Parkland.

    • Cyrus Massoumi:

      Right now, we have about 5,300 people and change on the Web site.

    • Miles O’Brien:

      It was a busy day at the office when we met one of the Internet’s most prolific distributors of hyperpartisan fare.

    • Cyrus Massoumi:

      Actually, in a story like this, we do actually beat the mainstream media for these sorts of breaking news events.

    • Miles O’Brien:

      It was the day of the high school shootings in Parkland, Florida, and as the horrific events unfolded, Cyrus Massoumi was spinning facts reported by others to fit the world view of his audience.

    • Cyrus Massoumi:

      You can see that, like, he is wearing a “Make America Great Again” hat.

    • Miles O’Brien:

      Right.

    • Cyrus Massoumi:

      And he has lots of photos of guns, so, obviously, this is going to be a very controversial issue.

    • Miles O’Brien:

      His site is called Truth Examiner. And it caters to liberals, with headlines like this designed to entice clicks on stories with little substance.

      His writers are among the five most successful at luring those clicks on Facebook.

      People want to read those lines to reaffirm their beliefs, right?

    • Cyrus Massoumi:

      Correct.

    • Miles O’Brien:

      And that is not rocket science, is it?

    • Cyrus Massoumi:

      It’s not rocket science, but doing it faster and better than your competitors is an art.

    • Miles O’Brien:

      Lately, Truth Examiner has added something else to the formula, a steady stream of conspiracy theories, ironically, accusing the Trump administration of peddling fake news.

      Massoumi has thrived in this murky world for eight years, hedging his bets, serving up grist for liberals and conservatives through various Facebook pages.

    • Cyrus Massoumi:

      They want like 250-word, like little hit them and go. It’s like — basically like a coke addict. Every hour, he just needs to get that little dopamine rush. Like, a fan on the conservative side or the liberal side needs to take out their phone, look at it, oh, Trump sucks. Trump sucks, so bad. All right, all right, I’m done, I’m done, and then, right?

      Like, that’s it. That’s it.

    • Miles O’Brien:

      People don’t care about the facts.

    • Cyrus Massoumi:

      Yes, of course. People don’t care about facts. Take it to the bank.

    • Miles O’Brien:

      He estimates he has spent over a million dollars in ads, reaching over 100 million people, and has made several million dollars by selling that audience to advertisers on his own site and on Facebook.

      Do you create fake news?

    • Cyrus Massoumi:

      No. No, I don’t.

    • Miles O’Brien:

      Tell me what it is then.

    • Cyrus Massoumi:

      Always inflammatory, like excluding facts from the other side, but never fake. My team, they don’t cover news angles which are favorable to opposition, in the same way that CNN would never cover a favorable angle to Trump or MSNBC.

    • Miles O’Brien:

      He lives in the home where he grew up, on a nine-acre vineyard in Napa, California.

    • Cyrus Massoumi:

      We grow a brand of cabernet which is, I’m told, very nice although I’m not a wine person.

    • Miles O’Brien:

      He is a self-described cultural libertarian, free thinker and lover of politics. For him, it all started in high school. He was selling anti-Obama T-shirts and decided Facebook was a good way to reach more customers.

      It worked. He learned how to build an audience on Facebook, dropped the T-shirts and created Mr. Conservative, his first hyperpartisan site.

    • Cyrus Massoumi:

      So, I’m a marketer with a love of politics. And, you know, I contend that marketers will be the king of the future of media. I think that the danger is not the Russians or the Macedonians, but that the actual danger is when you have a marketer who doesn’t love politics.

    • Miles O’Brien:

      Producer Cameron Hickey found Cyrus Massoumi during our 16-month investigation of hyperpartisan misinformation on Facebook.

      Cameron’s key reporting tool? Software that he wrote that analyzes social media, looking for the sources of what we call junk news.

    • Cameron Hickey:

      It’s clear that a lot of the publishers are domestic, and I think we have given a lot of attention to Russian disinformation or Macedonian teenage profiteers, but both of those groups, I think, learned it from these guys.

      They have learned it from Americans, who have been long profiting on partisan information or other kinds of junk.

    • Miles O’Brien:

      Social networking allows us all to bypass the traditional arbiters of truth that evolved in the 20th century.

    • Danah Boyd:

      Historically, our information landscape has been tribal. We turn to the people that are like us, the people that we know, the people around us to make sense of what is real and what we believe in.

    • Miles O’Brien:

      Computer scientist Danah Boyd is president and founder of Data & Society.

    • Danah Boyd:

      And what we’re seeing now with the network media landscape is the ability to move back towards extreme tribalism. And there are whole variety of actors, state actors, non-state actors, who are happy to move along a path where people are actually not putting their faith in institutions or information intermediaries, and are instead turning to their tribes, to their communities.

    • Miles O’Brien:

      Cyrus Massoumi’s first big jackpot exploiting this trend toward tribalism was linked to yet another mass shooting at a school, this one in Sandy Hook, Connecticut, in 2012.

      In the midst of that horror, he bought a Facebook ad that asked a question, do you stand against the assault weapons ban? If so, click like. Those who did became subscribers to his page, ensuring his content would rise to the top of their news feeds. He had bought thousands of fans at a very low price.

    • Cyrus Massoumi:

      I felt subsequently that I built my first business, sort of if you want to call it, on the graves of young children who were killed.

    • Miles O’Brien:

      Well, how do you feel about that?

    • Cyrus Massoumi:

      I don’t know. How do people feel about things that they do badly? I feel bad about it, but, I mean, we do what we do to pay the mortgage, right?

    • Miles O’Brien:

      The strategy Massoumi helped pioneer spread like virtual wildfire. By 2016, marketers, political operatives and state actors were all using the same playbook of hyped headlines, political propaganda and outright falsehoods.

    • Danah Boyd:

      They were all in an environment together, a melting pot, if you will, and with a whole set of really powerful skills, when they saw a reality TV star start to run for president.

      And that’s pretty funny. That’s pretty interesting. And so it was fun to create spectacle.

    • Miles O’Brien:

      The stage was set for the 2016 presidential election and an unprecedented misinformation campaign waged on several fronts.

      Back in Napa, Cyrus Massoumi was doing well, running a conservative page called Truth Monitor, along with the liberal Truth Examiner. Massoumi says anger is what generates likes, and conservative stories were more lucrative.

    • Cyrus Massoumi:

      Conservatives are angrier people.

    • Miles O’Brien:

      Tell me about that.

    • Cyrus Massoumi:

      You ever seen a Trump rally on TV?

    • Miles O’Brien:

      Yes.

    • Cyrus Massoumi:

      Yes? It’s gold.

    • Miles O’Brien:

      But, since the election, the conservative side of Massoumi’s business has dried up. His site that used to offer that content has moved into feel-good stories.

      He says competition among conservative hyperpartisan sites created a junk news arms race, making the content too extreme to be ranked favorably by the Facebook news feed algorithm.

    • Cyrus Massoumi:

      On the conservative side, I think that we were at one point publishing low-quality clickbait. That’s what the conservative devolved into.

    • Miles O’Brien:

      Is it unpatriotic to do it?

    • Cyrus Massoumi:

      To publish low-quality clickbait? I think that people like what they like. And my goal at one point was to deliver to them what they like.

      And, unfortunately, the reality of that is, is that people are prone to go for the lowest common denominator.

    • Miles O’Brien:

      But, for Cyrus Massoumi, the target really doesn’t matter, so long as he hits the mark. Stirring up anger, no matter on which side, is very good for business.

      Ahead as we continue our series, you will meet two of the fans bought by Cyrus Massoumi, a deep blue liberal from Brooklyn and a Christian conservative from Indianapolis.

      For the “PBS NewsHour,” I’m Miles O’Brien in Napa, California.

    • Judy Woodruff:

      Miles’ series on Facebook and junk news continues next week. You can watch part one and find more reporting on our Web site, PBS.org/NewsHour.

    Read More

  • This PSA About #FakeNews From Barack #Obama Is Not What It Appears @JordanPeele

    Oscar-winning filmmaker Jordan Peele has a warning for viewers about trusting material they encounter online.

    Sitting before the Stars and Stripes, another flag pinned to his lapel, former president Barack Obama appears to be delivering an important message about fake news — but something seems slightly...off.

    “We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time — even if they would never say those things,” says “Obama,” his lips moving in perfect sync with his words as they become increasingly bizarre. “So, for instance, they could have me say things like, I don’t know, [Black Panther’s] Killmonger was right! Or Ben Carson is in the sunken place! Or, how ’bout this: Simply, President Trump is a total and complete dipshit.”

    As the video soon reveals, the man speaking is not the former commander-in-chief, but rather Oscar-winning filmmaker Jordan Peele with a warning for viewers about trusting material they encounter online.

    “This is a dangerous time. Moving forward, we need to be more vigilant with what we trust from the internet,” says Peele as Obama.

    The PSA for the Internet Age was a project first imagined by Peele and BuzzFeed CEO Jonah Peretti, the filmmaker’s brother-in-law.

    The pair wanted to warn the public about the rapidly evolving threat posed by digital misinformation after discussions between them about new technologies and the erosion of a shared reality.

    “I always enjoy talking with Jordan, and he’s actually very interested in news and the news business and understanding how information spreads,” said Peretti. “We were talking about deepfake [artificial intelligence] that can create things like that guy who put his wife’s face on Anne Hathaway's body for a late-night interview.”

    The Peele video comes after BuzzFeed News reported in February on what the future of fake news could look like: “a slew of slick, easy-to-use, and eventually seamless technological tools for manipulating perception and falsifying reality, for which terms have already been coined — ‘reality apathy,’ ‘automated laser phishing,’ and ‘human puppets.’”

    Aviv Ovadya, a technologist who predicted that misinformation would spread during the 2016 election, told reporter Charlie Warzel that technology is advancing to allow users to distort audio or video and make it seem real. Such tools could be used to create pornographic videos with celebrities’ faces superimposed or have world leaders appear to make outrageous or potentially dangerous statements.

    “What happens when anyone can make it appear as if anything has happened, regardless of whether or not it did?” Ovadya told BuzzFeed News.

    For example, University of Washington computer scientists last year produced a video of Obama that demonstrated a program they had developed capable of “turning audio clips into a realistic, lip-synced video of the person speaking those words.”

    Peretti said he wanted to use BuzzFeed as a platform for the PSA because of BuzzFeed News’ extensive past reporting on fake news.

    “We’ve covered counterfeit news websites that say the pope endorsed Trump that look kinda like real news, but because it’s text people have started to become more wary,” he said. “And now we’re starting to see tech that allows people to put words into the mouths of public figures that look like they must be real because it’s video and video doesn’t lie!”

    For the project, Peretti enlisted BuzzFeed video producer Jared Sosa, who was able to manipulate and digitally alter the footage of Obama to a script written and performed by Peele.

    The fakery was built using Adobe After Effects, a readily available piece of video software, and FakeApp, an artificial intelligence program that made headlines in January when it was used to transplant actor Nicolas Cage’s face into several movies in which he hadn’t appeared.

    Sosa first pasted Peele’s mouth over Obama’s, then replaced the former president’s jawline with one that moved with Peele’s mouth movements. He then used FakeApp to smooth over and refine the footage — a rendering that took more than 56 hours of automatic processing.

    “What I learned from this whole thing is that while it will still require a good deal of human intervention, this kind of thing is not only possible but going to get a lot better,” Sosa said.

    Peele, who last month won an Oscar for his film Get Out, memorably impersonated Obama on multiple occasions on his Comedy Central show Key & Peele with Keegan-Michael Key.

    The PSA ends with Peele urging people to “stay woke” by being vigilant to media sources. “It may sound basic,” Peele says as Obama, “but how we move forward in the Age of Information is gonna be the difference between whether we survive or whether we become some kind of fucked-up dystopia.”

    Peretti, who cofounded the Huffington Post before launching BuzzFeed, said he remains optimistic about the future of the internet, but said media literacy and trusted reporters “will be more important than ever."

    “I think by and large the internet has been amazingly beneficial to the world and to democracy,” he said, “and simultaneously it’s always had a dark side that’s objectionable, with people who are either trolls or hackers or scammers or politically motivated.”

    Read More

  • Western Europeans Under 30 View News Media Less Positively, Rely More on Digital Platforms Than Older Adults


    Across eight Western European countries, adults ages 18 to 29 are about twice as likely to get news online as from TV. They also tend to be more critical of the news media's performance and coverage of key issues than older adults.

    (iStock by Getty Images)
    Younger Europeans are more critical of how the news media covers immigration

    People of all ages in Western Europe value the importance of the news media in society. Yet, younger adults – those under 30 – are less trusting of the news media and less likely to think the news media are doing a good job in their key responsibilities. And while younger adults rarely read the news in print, they often name established newspaper brands as their main source of news.

    This new analysis builds off Pew Research Center’s earlier findings about news media and political identities to understand age dynamics in eight Western European countries – Denmark, France, Germany, Italy, the Netherlands, Spain, Sweden and the United Kingdom. Together, these eight European Union (EU) member states account for roughly 69% of the EU population and 75% of the EU economy.

    Across the eight Western European countries surveyed, broad majorities in each of the three age groups say that news media are important to society. Among those under 30, the share who holds this view ranges from 75% in Italy to 94% in Sweden.

    Younger Western Europeans, however, are less approving of the news media. In five of eight countries polled, younger adults, defined here as those ages 18 to 29, are less likely to trust the news media than the oldest age group (those 50 and older). And when it comes to how the news media perform on key functions, in six countries adults under 30 give the news media lower ratings across at least three of the five performance areas measured than do those ages 50 and older.

    One issue where younger Europeans are noticeably less satisfied with the news media’s performance is coverage of immigration. In Denmark, for example, about half of those under 30 (49%) say the news media are doing a good job covering immigration, compared with 74% of those 50 and older, a gap of 25 percentage points. Similar but narrower gaps in how younger and older Europeans rate immigration coverage are evident in six of the seven other countries surveyed. Modest differences also emerge in ratings for coverage of the economy and crime, with younger adults giving the news media lower marks.

    These general patterns notwithstanding, the survey finds that Western Europeans under 30 can be more trusting of specific news outlets than older adults. For example, in the Netherlands, 59% of those ages 18-29 generally trust the news media, compared with 65% of those 30-49 and 72% of those 50 and above. Yet, about half of younger Dutch adults (53%) trust the specific newspaper De Telegraaf, compared with 36% of those 50 and older.

    Additionally, younger Europeans in these countries are almost twice as likely to get news online as they are from television. This stands in stark contrast to those 50 years and older, for whom television is the main pathway to news. At the same time, those ages 30 to 49, who bridge the gap between the youngest and the oldest age groups, also bridge the news consumption gap on these two platforms, with 61% getting news from TV and 68% getting it online. The greater appeal of digital among younger adults and television among the oldest age group is consistent across all eight countries studied, with majorities of those ages 18 to 29 getting news online daily. Within the digital realm, younger adults are also about twice as likely to get news daily through social media as those ages 50 and older.

    Younger Europeans more likely to get news online than from TV

    Younger Europeans also get news in print at much lower rates than those older than them. Those under 30 are about half as likely as those ages 30-49 to read print news sources on a daily basis – and the gap is even larger when compared to those 50 and older. But younger Europeans rely on – and trust – newspaper brands, suggesting that their consumption of news is more likely to be through newspaper websites or social media accounts.

    These are among the key findings of a new analysis of a Pew Research Center public opinion study that maps the media landscape in these eight Western European countries. The analysis is based on a survey of 16,114 adults across all eight countries conducted from Oct. 30 to Dec. 20, 2017, including 2,970 people under the age of 30.

    The survey also asked respondents to name the specific outlet they rely on most for news. Responses to the open-ended question vary by country, but some consistent differences by age emerge across the eight nations studied.

    Younger Europeans, for instance, are less likely than those 50 and older to name a public media outlet as their main source of news. This contrast is particularly pronounced in the three southern countries polled – France, Spain and Italy.

    For example, in Spain, younger adults name the newspaper El País as their top main news source, while those ages 30-49 and those 50 and older name the public broadcaster RTVE. The UK is the one country surveyed where a public broadcaster (the BBC) dominates as the main news source across all age groups.

    Second, younger Europeans are much more likely to name social media and search engine sites as main sources of news. In seven of the eight countries, Facebook is named by at least 5% of younger adults. Twitter is also named as a main source by younger adults in one country (Spain), and Google is named in three countries (Spain, Germany and Italy). Across the eight countries, these sites are rarely named by those 50 and older as a main source: Just one site, Google, is named by at least 5% of this age group, and only in Italy.

    Younger Europeans are less likely than older adults to name public news media as top news source

    Read More

  • Why #AI isn’t going to solve #Facebook’s #fakenews problem (Full disclosure: ALIKE likes humans) @jjvincent


    Illustration by Alex Castro / The Verge

    Facebook has a lot of problems right now, but one that’s definitely not going away any time soon is fake news. As the company’s user base has grown to include more than a quarter of the world’s population, it has (understandably) struggled to control what they all post and share. For Facebook, unwanted content can be anything from mild nudity to serious violence, but what’s proved to be most sensitive and damaging for the company is hoaxes and misinformation — especially when it has a political bent.

    So what is Facebook going to do about it? At the moment, the company doesn’t seem to have a clear strategy. Instead, it’s throwing a lot at the wall and seeing what works. It’s hired more human moderators (as of February this year it had around 7,500); it’s giving users more information in-site about news sources; and in a recent interview, Mark Zuckerberg suggested that the company might set up some sort of independent body to rule on what content is kosher. (Which could be seen as democratic, an abandonment of responsibility, or an admission that Facebook is out of its depth, depending on your view.) But one thing experts say Facebook needs to be extremely careful about is giving the whole job over to AI.

    So far, the company seems to be just experimenting with this approach. During an interview with The New York Times about the Cambridge Analytica scandal, Zuckerberg revealed that for the special election in Alabama last year, the company “deployed some new AI tools to identify fake accounts and false news.” He specified that these were Macedonian accounts (an established hub in the fake-news-for-profit business), and the company later clarified that it had deployed machine learning to find “suspicious behaviors without assessing the content itself.”

    This is smart because when it comes to fake news, AI isn’t up to the job.

    AI can’t understand fake news because AI can’t understand writing

    The challenges of building an automated fake news filter with artificial intelligence are numerous. From a technical perspective, AI fails on a number of levels because it just can’t understand human writing the way humans do. It can pull out certain facts and do a crude sentiment analysis (guessing whether a piece of content is “happy” or “angry” based on keywords), but it can’t understand subtleties of tone, consider cultural context, or ring someone up to corroborate information. And even if it could do all this, which would knock out the most obvious misinformation and hoaxes, it would eventually run up against edge cases that confuse even humans. If people on the left and the right can’t agree on what is and is not “fake news,” there’s no way we can teach a machine to make that judgement for us.
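
    To make the "crude sentiment analysis" point concrete, here is a toy keyword-based scorer. It is an illustrative sketch, not any platform's real system: it counts words from fixed lists, so sarcasm, context, and factual accuracy are invisible to it.

```python
# Toy keyword-based sentiment scorer, showing why this kind of analysis is crude:
# it counts words from fixed lists and cannot register sarcasm, tone, or accuracy.
POSITIVE = {"great", "happy", "love", "win", "good"}
NEGATIVE = {"angry", "hate", "bad", "fraud", "disaster"}

def crude_sentiment(text: str) -> str:
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "happy" if score > 0 else "angry" if score < 0 else "neutral"

# Both lines score as "happy", even though the second is sarcastic and
# neither is checked against reality in any way.
print(crude_sentiment("Great news, we love this win"))
print(crude_sentiment("Oh great, another win for the good guys, how happy we all are"))
```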

    In the past, efforts to deal with fake news using AI have quickly run into problems, as with the Fake News Challenge — a competition to crowdsource machine learning solutions held last year. Dean Pomerleau of Carnegie Mellon University, who helped organize the challenge, tells The Verge that he and his team soon realized AI couldn’t tackle this alone.

    “We actually started out with a more ambitious goal of creating a system that could answer the question ‘Is this fake news, yes or no?’ We quickly realized machine learning just wasn’t up to the task.”

    Pomerleau stresses that comprehension was the primary problem, and to understand why exactly language can be so nuanced, especially online, we can turn to the example set by Tide Pods. As Cornell professor James Grimmelmann explained in a recent essay on fake news and platform moderation, the internet’s embrace of irony has made it extremely difficult to judge sincerity and intent. Facebook and YouTube found this out for themselves when they tried to remove Tide Pod Challenge videos in January this year.


    A YouTube thumbnail of a video that may either be endorsing the Tide Pod Challenge, or warning against it, or some combination of the two.
    Image: YouTube / Leonard

    As Grimmelmann explains, when it came to deciding which videos to delete, the companies would have been faced with a dilemma. “It’s easy to find videos of people holding up Tide Pods, sympathetically noting how tasty they look, and then giving a finger-wagging speech about not eating them because they’re dangerous,” he says. “Are these sincere anti-pod-eating public service announcements? Or are they surfing the wave of interest in pod-eating by superficially claiming to denounce it? Both at once?”

    Grimmelmann calls this effect “memetic kayfabe,” borrowing the pro-wrestling term for the willing suspension of disbelief by audience and wrestlers. He also says this opacity in meaning is not limited to meme culture, and has been embraced by political partisans — often responsible for creating and sharing fake news. Pizzagate is the perfect example of this, says Grimmelmann, as it is “simultaneously a real conspiracy theory, a gleeful masquerade of a conspiracy theory, and a disparaging meme about conspiracy theories.”

    So if Facebook had chosen to block any pizzagate articles during the 2016 election, they would likely have been hit with complaints not only about censorship, but also protests that such stories were “only a joke.” Extremists exploit this ambiguity frequently, as was best shown in the leaked style guide of neo-Nazi website The Daily Stormer. Founder Andrew Anglin advised would-be writers “the unindoctrinated should not be able to tell if we are joking or not,” before making it clear they’re not: “This is obviously a ploy and I actually do want to gas kikes. But that’s neither here nor there.”

    Considering this complexity, it’s no wonder that Pomerleau’s Fake News Challenge ended up asking teams to complete a simpler task: make an algorithm that can simply spot articles covering the same topic. Something they turned out to be pretty good at.

    With this tool a human could tag a story as fake news (for example, claiming a certain celebrity has died) and then the algorithm would knock out any coverage repeating the lie. “We talked to real-life fact-checkers and realized they would be in the loop for quite some time,” says Pomerleau. “So the best we could do in the machine learning community would be to help them do their jobs.”
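
    That scaled-back task, flagging other articles that cover the same claim once a human has tagged one as false, can be approximated with off-the-shelf text similarity. The sketch below uses TF-IDF vectors and cosine similarity with an arbitrary threshold; the headlines and cut-off are made up for illustration, and this is a rough stand-in rather than the challenge's actual winning system.

```python
# Sketch of claim matching via TF-IDF cosine similarity. Sample headlines and
# the similarity threshold are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A story a human fact-checker has already tagged as false:
tagged_fake = "Celebrity X died in a car crash last night, family confirms"

incoming = [
    "BREAKING: Celebrity X dead after late-night car crash",
    "Celebrity X denies dying in car crash, slams death hoax",
    "City council approves new budget for road repairs",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([tagged_fake] + incoming)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

THRESHOLD = 0.3  # arbitrary cut-off for "covers the same claim"
for headline, score in zip(incoming, scores):
    flag = "possible repeat of tagged claim" if score >= THRESHOLD else "unrelated"
    print(f"{score:.2f}  {flag} :: {headline}")
```

    Note that topical similarity alone cannot tell a repeat of the hoax from a debunking of it, which is one reason human fact-checkers stay in the loop.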

    Even with human fact-checkers in tow, Facebook relies on algorithms

    This seems to be Facebook’s preferred approach. For the Italian elections this year, for example, the company hired independent fact-checkers to flag fake news and hoaxes. Problematic links weren’t deleted, but when shared by a user they were tagged with the label “Disputed by 3rd Party Fact Checkers.” Unfortunately, even this approach has problems, with a recent report from the Columbia Journalism Review highlighting fact-checkers’ many frustrations with Facebook. The journalists involved said it often wasn’t clear why Facebook’s algorithms were telling them to check certain stories, while sites well-known for spreading lies and conspiracy theories (like InfoWars) never got checked at all.

    However, there’s definitely a role for algorithms in all this. And while AI can’t do any of the heavy lifting in stamping out fake news, it can filter it in the same way spam is filtered out of your inbox. Anything with bad spelling and grammar can be knocked out, for example; or sites that rely on imitating legitimate outlets to entice readers. And as Facebook has shown with its targeting of Macedonian accounts “that were trying to spread false news” during the special election in Alabama, it can be relatively easy to target fake news when it’s coming from known trouble-spots.
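
    As a rough illustration of that spam-filter analogy, the sketch below checks two of the simple signals mentioned here: domains that imitate legitimate outlets and copy with an unusually high rate of misspellings. The outlet list, word list, and thresholds are made-up assumptions for demonstration only, not how any platform actually does this.

```python
# Sketch of simple, spam-filter-style signals: lookalike domains and a high
# misspelling rate. All lists and thresholds are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_OUTLETS = {"nytimes.com", "washingtonpost.com", "bbc.co.uk"}
DICTIONARY = {"the", "senate", "passed", "a", "new", "bill", "on", "tuesday",
              "president", "says", "economy", "is", "strong"}

def looks_like_known_outlet(domain: str) -> bool:
    """Flag domains that are close to, but not equal to, a legitimate outlet's domain."""
    return any(domain != real and SequenceMatcher(None, domain, real).ratio() > 0.8
               for real in KNOWN_OUTLETS)

def misspelling_rate(text: str) -> float:
    """Fraction of words not found in the (toy) dictionary."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    return sum(w not in DICTIONARY for w in words) / max(len(words), 1)

article = {"domain": "nytlmes.com",
           "body": "The Senate pased a new bil on Tusday, presidant says economy is strong"}

suspicious = (looks_like_known_outlet(article["domain"])
              or misspelling_rate(article["body"]) > 0.3)
print("flag for review" if suspicious else "pass")
```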

    Experts say, though, that is the limit of AI’s current capabilities. “This kind of whack-a-mole could possibly help filtering out get-rich-fast teenagers from Tbilisi, but is unlikely to effect consistent but large-scale offenders like InfoWars,” Mor Naaman, an associate professor of information science at Cornell Tech, tells The Verge. He adds that even these simpler filters can create problems. “Classification is often based on language patterns and other simple signals, which may ‘catch’ honest independent and local publishers together with producers of fake news and misinformation,” says Naaman.

    And even here, there is a potential dilemma for Facebook. To avoid accusations of censorship, the social network should be open about the criteria its algorithms use to spot fake news; but if it's too open, people could game the system and work around its filters.

    For Amanda Levendowski, a teaching fellow at NYU Law, this is an example of what she calls the “Valley Fallacy.” Speaking to The Verge about Facebook’s AI moderation, she suggests this is a common mistake, “where companies start saying, ‘We have a problem, we must do something, this is something, so we must do this,’ without carefully considering whether this could create new or different problems.” Levendowski adds that despite these problems, there are plenty of reasons tech firms will continue to pursue AI moderation, ranging from “improving users’ experiences to mitigating the risks of legal liability.”

    These are surely temptations for Zuckerberg, but even then, it seems that leaning too hard on AI to solve its moderation problems would be unwise. And not something he would want to explain to Congress next week.

    Read More

  • Why #Sinclair Made Dozens of Local News Anchors Recite the Same Script - #Propaganda #FakeNews #StateTelevision

    “Unfortunately, some members of the media use their platforms to push their own personal bias and agenda to control exactly what people think,” dozens of news anchors said last month, reading from a script provided by Sinclair Broadcast Group.

    On local news stations across the United States last month, dozens of anchors gave the same speech to their combined millions of viewers.

    It included a warning about fake news, a promise to report fairly and accurately and a request that viewers go to the station’s website and comment “if you believe our coverage is unfair.”

    It may not have seemed strange until viewers began to notice that the newscasters from Seattle to Phoenix to Washington sounded very similar. Stitched-together videos on social media showed them eerily echoing the same lines:

    “The sharing of biased and false news has become all too common on social media.”

    “Some members of the media use their platforms to push their own personal bias.”

    “This is extremely dangerous to our democracy.”

    The script came from Sinclair Broadcast Group, the country’s largest broadcaster, which owns or operates 193 television stations.

    Last week, The Seattle Post-Intelligencer published a copy of the speech and reported that employees at a local news station there, KOMO, were unhappy about the script. CNN reported on it on March 7 and said Scott Livingston, the senior vice president of news for Sinclair, had read almost the exact same speech for a segment that was distributed to outlets a year ago.

    A union that represents news anchors did not respond immediately to requests for comment on Sunday.

    Dave Twedell of the International Cinematographers Guild, who is a business representative for photojournalists (but not anchors) at KOMO in Seattle and KATU in Portland, Ore., said Sinclair told journalists at those stations not to discuss the company with outside news media.

    Although it is the country’s largest broadcaster, Sinclair is not a household name and viewers may be unaware of who owns their local news station. Critics have accused the company of using its stations to advance a mostly right-leaning agenda.

    “We work very hard to be objective and fair and be in the middle,” Mr. Livingston told The New York Times last year. “I think maybe some other news organizations may be to the left of center, and we work very hard to be in the center.”

    Sinclair regularly sends video segments to the stations it owns. These are referred to as “must-runs,” and they can include content like terrorism news updates, commentators speaking in support of President Trump, or speeches from company executives like the one from Mr. Livingston last year.

    But asking newscasters to present the material themselves is not something that Kirstin Pellizzaro, a doctoral candidate at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, remembered from her experience as a producer at a Sinclair-owned news station in Kalamazoo, Mich., from 2014 to 2015.

    The station had to air “must-run” segments that came from Sinclair, which is based outside Baltimore. “Some of them were a little slanted, a little biased,” Ms. Pellizzaro said. “Packages of this nature can make journalists uncomfortable.”

    Sinclair representatives did not immediately respond to requests for comment on Sunday. But Mr. Livingston told The Baltimore Sun that the script was meant to demonstrate Sinclair’s “commitment to reporting facts,” adding that false stories “can result in dangerous consequences,” referring to the Pizzagate conspiracy as an example.

    “We are focused on fact-based reporting,” Mr. Livingston continued. “That’s our commitment to our communities. That’s the goal of these announcements: to reiterate our commitment to reporting facts in a pursuit of truth.”

    Ms. Pellizzaro said she can talk about Sinclair more freely now because she is working in academia, whereas journalists at stations owned by Sinclair might feel pressured not to bite the hand that feeds them.

    “I hope people realize that the journalists are trying their best, and this shouldn’t reflect poorly on them,” she said. “They’re just under this corporate umbrella.”

    Sinclair has been accused of using connections in the Trump administration to ease regulations on media consolidation. In an effort to expand its reach, the company is seeking approval from the Justice Department and the Federal Communications Commission for a $3.9 billion deal to buy Tribune Media.

     

    Read More