• #Americans’ complicated feelings about #socialmedia in an era of #privacy concerns

    Amid public concerns over Cambridge Analytica’s use of Facebook data and a subsequent movement to encourage users to abandon Facebook, there is a renewed focus on how social media companies collect personal information and make it available to marketers.

    Pew Research Center has studied the spread and impact of social media since 2005, when just 5% of American adults used the platforms. The trends tracked by our data tell a complex story that is full of conflicting pressures. On one hand, the rapid growth of the platforms is testimony to their appeal to online Americans. On the other, this widespread use has been accompanied by rising user concerns about privacy and social media firms’ capacity to protect their data.

    All this adds up to a mixed picture about how Americans feel about social media. Here are some of the dynamics.

     

    People like and use social media for several reasons

    About seven-in-ten American adults (69%) now report they use some kind of social media platform (not including YouTube) – a nearly fourteenfold increase since Pew Research Center first started asking about the phenomenon. The growth has come across all demographic groups and includes 37% of those ages 65 and older.

    The Center’s polls have found over the years that people use social media for important social interactions like staying in touch with friends and family and reconnecting with old acquaintances. Teenagers are especially likely to report that social media are important to their friendships and, at times, their romantic relationships.

    Beyond that, we have documented how social media play a role in the way people participate in civic and political activities, launch and sustain protests, get and share health information, gather scientific information, engage in family matters, perform job-related activities and get news. Indeed, social media is now just as common a pathway to news for people as going directly to a news organization website or app.

    Our research has not established a causal relationship between people’s use of social media and their well-being. But in a 2011 report, we noted modest associations between people’s social media use and higher levels of trust, larger numbers of close friends, greater amounts of social support and higher levels of civic participation.

    People worry about privacy and the use of their personal information

    While there is evidence that social media works in some important ways for people, Pew Research Center studies have shown that people are anxious about all the personal information that is collected and shared and the security of their data.

    Overall, a 2014 survey found that 91% of Americans “agree” or “strongly agree” that people have lost control over how personal information is collected and used by all kinds of entities. Some 80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms, and 64% said the government should do more to regulate advertisers.

A 2017 report found that just 9% of social media users were “very confident” that social media companies would protect their data. About half of users were not at all or not too confident their data were in safe hands.

    Moreover, people struggle to understand the nature and scope of the data collected about them. Just 9% believe they have “a lot of control” over the information that is collected about them, even as the vast majority (74%) say it is very important to them to be in control of who can get information about them.

Six-in-ten Americans (61%) have said they would like to do more to protect their privacy. Additionally, two-thirds have said current laws are not good enough at protecting people’s privacy, and 64% support more regulation of advertisers.

Some hope that the European Union’s General Data Protection Regulation, which takes effect on May 25, 2018, will give users – even Americans – greater protections over what data tech firms can collect and how that data can be used, along with more opportunities to see what is happening with their information.

    People’s issues with the social media experience go beyond privacy

    In addition to the concerns about privacy and social media platforms uncovered in our surveys, related research shows that just 5% of social media users trust the information that comes to them via the platforms “a lot.”

    Moreover, social media users can be turned off by what happens on social media. For instance, social media sites are frequently cited as places where people are harassed. Near the end of the 2016 election campaign, 37% of social media users said they were worn out by the political content they encountered, and large shares said social media interactions with those opposed to their views were stressful and frustrating. Large shares also said that social media interactions related to politics were less respectful, less conclusive, less civil and less informative than offline interactions.

    A considerable number of social media users said they simply ignored political arguments when they broke out in their feeds. Others went steps further by blocking or unfriending those who offended or bugged them.

    Why do people leave or stay on social media platforms?

    The paradox is that people use social media platforms even as they express great concern about the privacy implications of doing so – and the social woes they encounter. The Center’s most recent survey about social media found that 59% of users said it would not be difficult to give up these sites, yet the share saying these sites would be hard to give up grew 12 percentage points from early 2014.

    Some of the answers about why people stay on social media could tie to our findings about how people adjust their behavior on the sites and online, depending on personal and political circumstances. For instance, in a 2012 report we found that 61% of Facebook users said they had taken a break from using the platform. Among the reasons people cited were that they were too busy to use the platform, they lost interest, they thought it was a waste of time and that it was filled with too much drama, gossip or conflict.

    In other words, participation on the sites for many people is not an all-or-nothing proposition.

    People pursue strategies to try to avoid problems on social media and the internet overall. Fully 86% of internet users said in 2012 they had taken steps to try to be anonymous online. “Hiding from advertisers” was relatively high on the list of those they wanted to avoid.

    Many social media users fine-tune their behavior to try to make things less challenging or unsettling on the sites, including changing their privacy settings and restricting access to their profiles. Still, 48% of social media users reported in a 2012 survey they have difficulty managing their privacy controls.

    After National Security Agency contractor Edward Snowden disclosed details about government surveillance programs starting in 2013, 30% of adults said they took steps to hide or shield their information and 22% reported they had changed their online behavior in order to minimize detection.

    One other argument that some experts make in Pew Research Center canvassings about the future is that people often find it hard to disconnect because so much of modern life takes place on social media. These experts believe that unplugging is hard because social media and other technology affordances make life convenient and because the platforms offer a very efficient, compelling way for users to stay connected to the people and organizations that matter to them.

    Read More

  • #Facebook knew #Android #callscraping would be ‘high-risk,’ new documents reveal

    Internal emails show Facebook weighing the privacy risks of collecting call records — then going ahead anyway

    In March, many Android users were shocked to discover that Facebook had been collecting a record of their call and SMS history, as revealed by the company’s data download tool. Now, internal emails released by the UK Parliament show how the decision was made internally. According to the emails, developers knew the data was sensitive, but they still pushed to collect it as a way of expanding Facebook’s reach.

The emails show Facebook’s growth team looking to call log data as a way to improve Facebook’s algorithms as well as to locate new contacts through the “People You May Know” feature. Notably, the project manager recognized it as “a pretty high-risk thing to do from a PR perspective,” but that risk seems to have been outweighed by the potential for user growth.

    Initially, the feature was intended to require users to opt in, typically through an in-app pop-up dialog box. But as developers looked for ways to get users signed up, it became clear that Android’s data permissions could be manipulated to automatically enroll users if the new feature was deployed in a certain way.

In another email chain, the group developing the feature seems to see the Android permissions screen as a point of unnecessary friction, to be avoided if possible. When testing revealed that call logs could be collected without a permissions dialog, developers treated that option as the obvious choice.

    “Based on our initial testing,” one developer wrote, “it seems that this would allow us to upgrade users without subjecting them to an Android permissions dialog at all.”
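
The mechanism described here was widely reported at the time: apps targeting Android API levels below 23 received all of their manifest permissions at install or update time, with no runtime dialog, and the call-log permission historically sat in the same permission group as the contacts permission many users had already granted. Below is a minimal Python sketch of that grant logic under those assumptions; it is a toy model, not Android code, and the function and mapping names in it are hypothetical.

```python
# Simplified, illustrative model of pre-Android-6.0 permission behavior.
# The real logic lives in Android's PackageManager; the names and the
# reduced group mapping here are hypothetical.

RUNTIME_PERMISSIONS_API = 23  # Android 6.0 introduced runtime permission dialogs

# READ_CALL_LOG and READ_CONTACTS historically shared one permission group.
PERMISSION_GROUPS = {
    "READ_CONTACTS": "CONTACTS",
    "READ_CALL_LOG": "CONTACTS",
    "CAMERA": "CAMERA",
}

def needs_dialog(target_sdk: int, granted: set, requested: str) -> bool:
    """Would requesting `requested` show the user a permissions dialog?"""
    if target_sdk < RUNTIME_PERMISSIONS_API:
        # Legacy apps get all manifest permissions at install/update time,
        # so an app update can add permissions without any prompt at all.
        return False
    # On modern targets, a permission is auto-granted when the user has
    # already approved another permission from the same group.
    group = PERMISSION_GROUPS[requested]
    return all(PERMISSION_GROUPS[p] != group for p in granted)

# An app targeting an old SDK picks up call-log access on upgrade
# without the user ever seeing a dialog:
print(needs_dialog(target_sdk=22, granted={"READ_CONTACTS"},
                   requested="READ_CALL_LOG"))  # False, i.e. no dialog
```

On this model, keeping an app’s target SDK below the runtime-permissions threshold is what makes a silent upgrade path possible, which is consistent with what the developer quoted above describes.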

    After the story broke in March, Facebook insisted that it had not collected any call logs without permission, and that any affected users had opted in to the feature. This contradicted the experience of many Facebook users, who reported installing Messenger with the bare minimum of permissions and nonetheless having logs collected.

    Facebook’s People You May Know feature has been the source of significant controversy for the company, often identifying connections through location or other obscure data sources. Most notably, the feature inspired Facebook to create so-called “shadow profiles” for contacts who haven’t signed up for Facebook, a practice some have criticized as overly aggressive.

    Reached for comment, Facebook said it stood by its original statement. “We of course discuss the options of keeping, removing, or changing features we offer,” a representative said. “This specific feature allows people to opt into giving Facebook access to their call and text messaging logs in Facebook Lite and Messenger on Android devices. We use this information to do things like make better suggestions for people to call in Messenger and ranking contact lists in Messenger and Facebook Lite.”

    Read More

  • #Facebook, This Is Not What “Complete User Control” Looks Like - #privacy #socialnetworks

    If you watched even a bit of Mark Zuckerberg’s ten hours of congressional testimony over the past two days, then you probably heard him proudly explain how users have “complete control” via “inline” privacy controls over everything they share on the platform. Zuckerberg’s language here misses the critical distinction between the information a person actively shares, and the information that Facebook takes from users without their knowledge or consent.

    Zuckerberg’s insistence that users have “complete control” neatly overlooks all the ways that users unwittingly “share” information with Facebook.

    Of course, there are the things you actively choose to share, like photos or status updates, and those indeed come with settings to limit their audience. That is the kind of sharing that Zuckerberg seemed to be addressing in many of his answers to Congressmembers’ questions.

But that’s just the tip of the iceberg. Below the surface are Facebook’s often-invisible methods for collecting and generating information on users without their knowledge or consent, including (but not limited to) tracking across other websites via cookies and embedded widgets, harvesting uploaded contact lists and call or text logs, and building “shadow profiles” of people who never signed up.

Users don’t share this information with Facebook. It’s been actively—and silently—taken from them.

    This stands in stark contrast to Zuckerberg’s claim, while on the record with reporters last week, that “the vast majority of data that Facebook knows about you is because you chose to share it.” And he doubled down on this talking point in his testimony to both the Senate and the House, using it to dodge questions about the full breadth of Facebook’s data collection.

    Zuckerberg’s insistence that users have complete control is a smokescreen. Many members of Congress wanted to know not just how users can control what their friends and friends-of-friends see. They wanted to know how to control what third-party apps, advertisers, and Facebook itself are able to collect, store, and analyze. This goes far beyond what users can see on their pages and newsfeeds.

    Facebook’s ethos of connection and growth at all costs cannot coexist with users' privacy rights. Facebook operates by collecting, storing, and making it easy to find unprecedented amounts of user data. Until that changes in a meaningful way, the privacy concerns that spurred these hearings are here to stay.

    Read More

  • #Facebook’s #Targeting System Can Divide Us on More Than Just #Advertising - @rachelegoodman1 @ACLU

    It’s heartening to see, in the wake of the Cambridge Analytica revelations, growing skepticism about how Facebook handles data and data privacy. But we should take this opportunity to ask the bigger, harder questions, too — questions about discrimination and division, and whether we want to live in a society where our consumer data profile determines our reality.

    In the spring of 2016, a Facebook executive gave a presentation about the success of Facebook’s then-new “ethnic affinity” advertising categories. Facebook had grouped users as white, Black, or Latino based on what they had clicked, and this targeting had allowed the movie “Straight Outta Compton” to be marketed as two completely different films. For Black audiences, it was a deeply political biopic about the members of N.W.A. and their music, framed by contemporary reflections from Dr. Dre and Ice Cube. For white audiences, it was a scripted drama about gangsters, guns, and cops that barely mentioned the names of its real-life characters. From the perspective of Universal Pictures, this dual marketing had been wildly successful. “Straight Outta Compton” earned over $160 million at the U.S. box office.

When we saw this news in 2016, it immediately raised alarm bells about the effect of such categories on civil rights. We went straight to Facebook with our immediate concern: How was the company ensuring that ads for jobs, housing, and credit weren’t targeted by race, given that such targeting is illegal under the civil rights laws? Facebook didn’t have an answer. We worked with officials from the company for more than a year on solutions that, as it turned out, were not properly implemented. Facebook still makes it possible for advertisers to target based on categories closely linked to gender, family status, and disability, and the company has recently been sued for it.

    To make matters worse, the government is actively turning a blind eye. The New York Times reported on Thursday that, under Secretary Ben Carson, the federal Department of Housing and Urban Development dropped its investigation into whether Facebook’s ad targeting system violated the Fair Housing Act. That means that HUD, on the eve of the 50th anniversary of that law, is choosing to put its head in the sand rather than investigate whether civil rights laws have been broken.

    It’s not illegal to market “Straight Outta Compton” differently based on race (as opposed to say, a housing or employment ad). Nonetheless, that tactic creates a distinction among people and treats them differently as a result. And these kinds of distinctions have real-world effects: Think about what it means to white teenagers to see a trailer with yet another image of criminal Black men, instead of hearing Dr. Dre reflect on police brutality in the 1980s and today.

    Then magnify that effect hundreds and thousands of times. In today’s world, a huge proportion of the advertising and media that we see reaches us based on accumulated data about us. If ad targeting means that my family and yours hear and read about different movies and TV shows, will that make it impossible for America to have another cross-racial Roots moment? (In 1977, 130 million Americans watched at least part of the famous miniseries tracing a Black family’s journey from Africa to slavery to the present day.)

    Targeting, of course, does enable advertisers — including the ACLU — to efficiently reach particular audiences with messages that are tailored to them, and that can sometimes be a good thing. But that doesn’t mean we shouldn’t acknowledge what’s lost with that efficiency: that people outside of the expected audiences won’t see these messages or know they exist.

    Ad targeting can make the world look different to different people. Some find the web full of job ads for high-paying CEO jobs, while others see mostly ads for sneakers or payday loans. Our news also reaches us and our networks through ad targeting. How can this not have huge implications for our ability to exist in a cohesive society? How can we agree on the policies that should govern our world when there are no common reference points for what that world looks like?

    It’s not just foreign interference and voter suppression campaigns that make this kind of targeting so dangerous for democracy.

    Read More

  • #Facebook’s #Tracking Of Non-Users Sparks Broader #Privacy Concerns - #ShadowTracking

Facebook CEO Mark Zuckerberg testifies for a House Energy and Commerce Committee hearing regarding the company’s use and protection of user data on Capitol Hill in Washington, U.S., April 11, 2018. (REUTERS/Leah Millis)

    By David Ingram

    SAN FRANCISCO (Reuters) - Concern about Facebook Inc’s respect for data privacy is widening to include the information it collects about non-users, after Chief Executive Mark Zuckerberg said the world’s largest social network tracks people whether they have accounts or not.

    Privacy concerns have swamped Facebook since it acknowledged last month that information about millions of users wrongly ended up in the hands of political consultancy Cambridge Analytica, a firm that has counted U.S. President Donald Trump’s 2016 electoral campaign among its clients.

    Zuckerberg said on Wednesday under questioning by U.S. Representative Ben Luján that, for security reasons, Facebook also collects “data of people who have not signed up for Facebook.”

    Lawmakers and privacy advocates immediately protested the practice, with many saying Facebook needed to develop a way for non-users to find out what the company knows about them.

    “We’ve got to fix that,” Representative Luján, a Democrat, told Zuckerberg, calling for such disclosure, a move that would have unclear effects on the company’s ability to target ads. Zuckerberg did not respond. On Friday Facebook said it had no plans to build such a tool.

    Critics said that Zuckerberg has not said enough about the extent and use of the data. “It’s not clear what Facebook is doing with that information,” said Chris Calabrese, vice president for policy at the Center for Democracy & Technology, a Washington advocacy group. 

    COOKIES EVERYWHERE

    Facebook gets some data on non-users from people on its network, such as when a user uploads email addresses of friends. Other information comes from “cookies,” small files stored via a browser and used by Facebook and others to track people on the internet, sometimes to target them with ads.
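
To make the cookie mechanism concrete, here is a small, self-contained Python sketch of how a third-party tracker embedded on many sites can link one browser’s visits together, whether or not the visitor has an account. It is a toy model under simple assumptions, not Facebook’s actual implementation; the class, sites, and identifiers are all hypothetical.

```python
import uuid

class ToyTracker:
    """Toy model of a third-party tracker whose widget (a button,
    a pixel) is embedded on many unrelated websites."""

    def __init__(self):
        self.visits = {}  # tracking ID -> list of pages seen

    def handle_embed_request(self, cookie, page_url):
        # First contact: assign a persistent ID (the Set-Cookie step).
        if cookie is None:
            cookie = uuid.uuid4().hex
        # On every later page that embeds the widget, the browser
        # replays the cookie, linking visits across unrelated sites.
        self.visits.setdefault(cookie, []).append(page_url)
        return cookie

tracker = ToyTracker()
browser_cookie = None  # a visitor who never signed up for anything
for url in ["news.example/article", "shop.example/shoes", "blog.example/post"]:
    browser_cookie = tracker.handle_embed_request(browser_cookie, url)

# One ID now ties together a browsing history the visitor never "shared":
print(tracker.visits[browser_cookie])
```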

    “This kind of data collection is fundamental to how the internet works,” Facebook said in a statement to Reuters.

    Read More

  • #Privacy Rules Won't Fix Real Problem in #Facebook #Scandal

The Facebook-Cambridge Analytica revelations have caused renewed focus on privacy rules, although investors don’t seem too concerned, judging by the nearly $50 billion increase in Facebook’s market value after the company announced its first-quarter earnings. What should concern us all, however, is not Wall Street’s views on privacy rules, but that Congress and so many others believe privacy is the key policy area to focus on in response to the incident.

By its own admission, Facebook did not show sufficient interest in policing the privacy standards it promised its users. But the policy responses generally being considered — new privacy rules based on the European Union’s stricter standards — are not directly relevant to the issue at hand because they would have done little, if anything, to prevent the damage the Cambridge Analytica incident helped foster.

Nobody among the 87 million users whose data Cambridge Analytica obtained appears to have been harmed directly. That shouldn’t be surprising, since Facebook does not generally collect sensitive information like health and financial data.

    But that does not mean the incident was harmless. 

    The real harm came from the ability of bad actors — the Russian government in particular — to use social media to promote misinformation in an effort to suppress voter turnout and change votes, as Robert Mueller so carefully laid out in his indictment of three organizations and 13 Russian nationals. For example, Mueller notes that in July 2016, “[d]efendants and their co-conspirators purchased advertisements on Facebook to promote the ‘March for Trump’ and ‘Down with Hillary’ rallies.”

     Thus, while the users whose data was taken were not directly harmed, anyone whose voting behavior changed because of misinformation targeted with the help of that data was harmed. This private harm aggregates to larger social harms if it affected the outcomes of any elections. This includes not just the presidential election but state and local elections as well. 

    Regardless of whether one believes European-style privacy rules would be a net benefit, they are not a response to the problem at hand. After all, strict privacy rules did not prevent similar election interference in Europe. 

    To its credit, Facebook has announced its intention to require more transparency in the identity of buyers of political ads, much like political ads on old media include a line saying, “I am politician so-and-so, and I approve this message.” But this change, beneficial though it may be, may be difficult to enforce, especially if political messages are disguised as news or other supposedly non-political posts. We may also see pushback against this rule from U.S. politicians themselves when they find themselves unable to instantly post campaign ads in the next election cycle. 

    A famous cliché says that it takes a theory to beat a theory. And I have no good suggestions for what the right policy solutions are. Still, it is useful to reframe the debate so that it focuses on ways to address the issue rather than on ways to implement a separate agenda that is only tangentially related. 

    We will probably never know if the misinformation campaigns affected the outcomes of any elections. But we want to make such campaigns more difficult to carry out in the future. Economic regulation was never intended to be a tool that protects our social choice mechanisms from well-financed targeted attacks, and we should not allow the Facebook-Cambridge Analytica incident to eclipse the reluctance of the Trump administration and Congress to properly respond to attacks on election integrity. 

    Let Facebook eat crow. Let’s have a robust debate on privacy based on empirical evidence on how much people truly value their privacy, in word and deed.  Conversations need to include the costs and benefits of different policy approaches to regulate the data-driven economy. But that is a separate debate. 

    We must remember that what the Facebook-Cambridge Analytica incident reveals is how easy it was for the Russian government and others to rapidly spread misinformation through advertisement channels in attempting to affect an election’s outcome. This problem is larger than the ad network of a single platform, but Facebook should be responsible for the potential and dangers of its own technology, and the administration and Congress should not feign ignorance of election interference in the information age.

    Scott Wallsten is president and senior fellow at the Technology Policy Institute.

    Read More

  • Facebook Allowed Some Tech Companies To Read And Delete Users’ Private Messages: NYT

    Since 2010, the tech giant has reportedly granted over 150 companies deeper access to users’ personal data than it has admitted.

Facebook CEO Mark Zuckerberg testifying before Congress in April. The tech giant’s privacy policies have been under scrutiny for months. (ASSOCIATED PRESS)

    Facebook reportedly gave some of the world’s largest tech companies access to users’ personal data, including allowing some firms to read and delete users’ private messages and obtain contact information through their friends, without users’ knowledge or consent.

    The New York Times on Tuesday detailed how Facebook, through data-sharing “business partnerships,” shared and traded user data with more than 150 companies, including Amazon, Microsoft, Netflix, Spotify, Yahoo and the Russian search engine Yandex.

    These partnerships, the oldest of which dates to 2010 and all of which were active in 2017, “effectively exempt[ed] those business partners” from Facebook’s usual privacy rules, the Times reported, citing hundreds of pages of internal Facebook documents. 

    Microsoft’s Bing search engine, for example, was reportedly allowed to see the names of nearly all Facebook users’ friends without their consent; Spotify, Netflix and the Royal Bank of Canada were able to read, write and delete users’ private messages; and Amazon, Microsoft and Sony could obtain users’ contact information through their friends.

    Yahoo and Yandex reportedly retained access to Facebook user data even after such access was supposed to have been halted. And Facebook gave Apple the power to see Facebook users’ contacts and calendar entries even in cases where users had disabled all data sharing.

    In all, the data of “hundreds of millions of people” were sought monthly by applications made by these Facebook business partners, according to the Times. Some of these partnerships reportedly remain in effect today.

    Responding to the Times’ report, Facebook, whose privacy policies have come under intense scrutiny in recent months, said it had neither violated users’ privacy agreements nor a deal with the Federal Trade Commission that made it illegal for the social network to share user data without explicit consent.

    “None of these partnerships or features gave companies access to information without people’s permission, nor did they violate our 2012 settlement with the FTC,” Konstantinos Papamiltiadis, Facebook’s director of developer platforms and programs, said in a Tuesday blog post. 

    Facebook’s primary argument was that it did not need explicit consent from users because its business partners, which it refers to as “integration partners,” were “functionally extensions of Facebook itself,” Times reporter Nick Confessore explained.
     
    Still, Facebook acknowledged that it’s “got work to do to regain people’s trust.”

    “Protecting people’s information requires stronger teams, better technology, and clearer policies, and that’s where we’ve been focused for most of 2018,” Steve Satterfield, Facebook’s director of privacy and public policy, said in a statement, noting that partnerships “are one area of focus.” 

    Papamiltiadis said most of the features described in the Times’ article are “now gone.”

    At least two U.S. senators have called for more federal oversight in the wake of the Times report. 

Sen. Amy Klobuchar (D-Minn.) lambasted Facebook’s reported data sharing as “unacceptable” and called for Congress to pass the data privacy bill that she and Republican Sen. John Kennedy of Louisiana introduced in April.

    Sen. Brian Schatz (D-Hawaii) said he was angered by the report. 

    “It has never been more clear. We need a federal privacy law. They are never going to volunteer to do the right thing. The FTC needs to be empowered to oversee big tech,” he tweeted.

An early investor in Facebook told the Times that “no one should trust Facebook until they change their business model.”

    “I don’t believe it is legitimate to enter into data-sharing partnerships where there is not prior informed consent from the user,” Roger McNamee said.

    Facebook did not immediately respond to HuffPost’s request for comment.

    Read More

  • Here’s What #Google and #Facebook Know About You—And What You Can Do About It

    If you use Google or Facebook, you may have wondered just how much of your personal data these big internet giants have access to. This is a good question to ask in our modern era of Big Data, constant connectivity and rapidly decreasing personal privacy. Some people, like Washington State Chief Privacy Officer Alex Alben, even argue that your personal data isn’t really “personal” at all. In other words, you may have unwittingly agreed to give your deepest information to third-party vendors through websites and apps simply by agreeing to their lengthy and frequently skimmed Terms of Service.

By the looks of it, Google seems to hold some of the most extensive data on its users. This isn’t to say the company is using personal data on people for malicious and nefarious purposes. But the frequency, detail and amount it has amassed over the years are beginning to put people on edge. Let’s start off with location. If you have Google Maps enabled (like many of us), your physical movements and the time you take to get from Point A to Point B, wherever that may be, have been logged into its database. If you want to see proof of this activity, look at your Google timeline.

Then there’s your search history. Google maintains a database of your search entries as a way to learn more about you and your preferences. But if you fear that this constant logging of your personal search history is a dash too deep for your taste, you need to delete your search history from all the devices you own. That’s not all. Ads, too, factor into Google’s profiles of its users. To give you an example, Google has an advertisement profile on me; its algorithm asserts that I’m a female between the ages of 25 and 34 and that I might like computers, hair care and politics. Google presents ads based on the personal information you give the website, including your age, gender, location, and other metrics. Plus, Google stores your YouTube search history and maintains a log of information on the apps you use. From the amount you spend on these apps to the people you talk to, Google stores that information in its database.

Suppose you’re not exactly excited about your digital footprint being so minutely tracked. When it comes to Google, you can do two things. For starters, you can download a copy of everything Google has on you through the Google Takeout option. Depending on how often you’ve used it, the amount of information can range from kilobytes to gigabytes and more.

    After that, and perhaps more importantly, you can opt out of the Google Analytics program. Google Analytics lets website owners see traffic, number of clicks, time spent on a page, and a lot more for their own analyses. You can refuse to be part of this data collection by using the Google Analytics opt-out add-on for your browser. These little steps can restore some element of privacy to your online activity.

Then there’s Facebook. Amid the Cambridge Analytica scandal, the social network giant is under massive fire from observers who say its practices on privacy are reprehensible. With many people joining the #DeleteFacebook movement, the company recently announced an update to its security settings, saying those controls would be more readily accessible to users. But if you’re interested in knowing just how much Facebook has on you in terms of personal data, check out its download feature. Go to your general account settings and look for “Download a copy of your Facebook data” at the bottom of the options.

It might be slightly jarring to see just how much Facebook logs about its users. From personal conversations, phone numbers, apps, photos, videos, events, locations, and a whole lot more, Facebook’s data can be converted into tons of documents on individual users. I’ll give you my example. Since 2008, Facebook has collected 430.1 megabytes of personal data on me. To make sense of such a colossal amount, conversion to a Word document helps. Since one megabyte is almost 500 character-filled pages, that’s about 215,050 pages of text on yours truly. Put differently, that’s several hundred novels’ worth of text.
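
The page arithmetic is easy to verify. Here is a quick back-of-the-envelope check in Python, using the article’s own assumption of roughly 500 pages per megabyte (about 2,000 characters of plain text per page):

```python
# Back-of-the-envelope check of the figure above.
archive_mb = 430.1        # size of the author's Facebook data download
chars_per_mb = 1_000_000  # ~1 byte per character of plain text
chars_per_page = 2_000    # ~2,000 characters per "character-filled" page

pages = archive_mb * chars_per_mb / chars_per_page
print(f"{pages:,.0f} pages")  # 215,050 pages, matching the article
```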

    While Facebook tries to figure out how to respond to growing concern over its privacy settings, you can do your (small) part in tightening your profile. You can opt out of Facebook’s API sharing feature so that third-party websites, games and applications don’t have access to your data.

    All of this information on information is to say that when you decide to use a website or program, read about it attentively to see what you’re getting into. Most of the time, the most unnerving aspects about Google and Facebook are actually part of their openly stated business models. As a Google spokesperson told CNBC News, “In order to make the privacy choices that are right for them, it's essential that people can understand and control their Google data. Over the years, we've developed tools like My Account expressly for this purpose, and we'd encourage everyone to review it regularly.”

    Mehreen Kasana is a news writer for AlterNet. Previously, she worked as the front-page editor for the Huffington Post. Follow her on Twitter at @mehreenkasana. 

    Read More

  • Kids on the Internet: Why parenting must keep up with the digital revolution - @page88

    Read More

  • Why #Zuckerberg’s 15-Year Apology Tour Hasn’t Fixed #Facebook

Facebook CEO Mark Zuckerberg’s constant apologies aren’t a promise to do better. They’re a symptom of a profound crisis of accountability.

    In 2003, one year before Facebook was founded, a website called Facemash began nonconsensually scraping pictures of students at Harvard from the school’s intranet and asking users to rate their hotness. Obviously, it caused an outcry. The website’s developer quickly proffered an apology. "I hope you understand, this is not how I meant for things to go, and I apologize for any harm done as a result of my neglect to consider how quickly the site would spread and its consequences thereafter,” wrote a young Mark Zuckerberg. “I definitely see how my intentions could be seen in the wrong light.”

    In 2004 Zuckerberg cofounded Facebook, which rapidly spread from Harvard to other universities. And in 2006 the young company blindsided its users with the launch of News Feed, which collated and presented in one place information that people had previously had to search for piecemeal. Many users were shocked and alarmed that there was no warning and that there were no privacy controls. Zuckerberg apologized. “This was a big mistake on our part, and I'm sorry for it,” he wrote on Facebook’s blog. "We really messed this one up," he said. "We did a bad job of explaining what the new features were and an even worse job of giving you control of them."

    Zeynep Tufekci (@zeynep) is an associate professor at the University of North Carolina and an opinion writer for The New York Times. She recently wrote about the (democracy-poisoning) golden age of free speech.

    Then in 2007, Facebook’s Beacon advertising system, which was launched without proper controls or consent, ended up compromising user privacy by making people’s purchases public. Fifty thousand Facebook users signed an e-petition titled “Facebook: Stop invading my privacy.” Zuckerberg responded with an apology: “We simply did a bad job with this release and I apologize for it." He promised to improve. “I'm not proud of the way we've handled this situation and I know we can do better,” he wrote.

    By 2008, Zuckerberg had written only four posts on Facebook’s blog: Every single one of them was an apology or an attempt to explain a decision that had upset users.

    In 2010, after Facebook violated users' privacy by making key types of information public without proper consent or warning, Zuckerberg again responded with an apology—this time published in an op-ed in The Washington Post. “We just missed the mark,” he said. “We heard the feedback,” he added. “There needs to be a simpler way to control your information.” “In the coming weeks, we will add privacy controls that are much simpler to use,” he promised.

    I’m going to run out of space here, so let’s jump to 2018 and skip over all the other mishaps and apologies and promises to do better—oh yeah, and the consent decree that the Federal Trade Commission made Facebook sign in 2011, charging that the company had deceptively promised privacy to its users and then repeatedly broken that promise—in the intervening years.

Last month, Facebook once again garnered widespread attention with a privacy-related backlash when it became widely known that, between 2008 and 2015, it had allowed hundreds, maybe thousands, of apps to scrape voluminous data from Facebook users—not just from the users who had downloaded the apps, but detailed information from all their friends as well. One such app was run by a Cambridge University academic named Aleksandr Kogan, who apparently siphoned up detailed data on up to 87 million users in the United States and then surreptitiously forwarded the loot to the political data firm Cambridge Analytica. The incident caused a lot of turmoil because it connects to the rolling story of distortions in the 2016 US presidential election. But in reality, Kogan’s app was just one among many, many apps that amassed a huge amount of information in a way most Facebook users were completely unaware of.

    At first Facebook indignantly defended itself, claiming that people had consented to these terms; after all, the disclosures were buried somewhere in the dense language surrounding obscure user privacy controls. People were asking for it, in other words.

    But the backlash wouldn’t die down. Attempting to respond to the growing outrage, Facebook announced changes. “It’s Time to Make Our Privacy Tools Easier to Find”, the company announced without a hint of irony—or any other kind of hint—that Zuckerberg had promised to do just that in the “coming few weeks” eight full years ago. On the company blog, Facebook’s chief privacy editor wrote that instead of being “spread across nearly 20 different screens” (why were they ever spread all over the place?), the controls would now finally be in one place.

    Zuckerberg again went on an apology tour, giving interviews to The New York Times, CNN, Recode, WIRED, and Vox (but not to the Guardian and Observer reporters who broke the story). In each interview he apologized. “I’m really sorry that this happened,” he told CNN. “This was certainly a breach of trust.”

    But Zuckerberg didn’t stop at an apology this time. He also defended Facebook as an “idealistic company” that cares about its users and spoke disparagingly about rival companies that charge users money for their products while maintaining a strong record in protecting user privacy. In his interview with Vox’s Ezra Klein, Zuckerberg said that anyone who believes Apple cares more about users than Facebook does has “Stockholm syndrome”—the phenomenon whereby hostages start sympathizing and identifying with their captors.

    This is an interesting argument coming from the CEO of Facebook, a company that essentially holds its users' data hostage. Yes, Apple charges handsomely for its products, but it also includes advanced encryption hardware on all its phones, delivers timely security updates to its whole user base, and has largely locked itself out of user data—to the chagrin of many governments, including that of the United States, and of Facebook itself.

    Most Android phones, by contrast, gravely lag behind in receiving security updates, have no specialized encryption hardware, and often handle privacy controls in a way that is detrimental to user interests. Few governments or companies complain about Android phones. After the Cambridge Analytica scandal, it came to light that Facebook had been downloading and keeping all the text messages of its users on the Android platform—their content as well as their metadata. “The users consented!” Facebook again cried out. But people were soon posting screenshots that showed how difficult it was for a mere mortal to discern that’s what was going on, let alone figure out how to opt out, on the vague permission screen that flashed before users.

    On Apple phones, however, Facebook couldn’t harvest people’s text messages because the permissions wouldn’t allow it.

    In the same interview, Zuckerberg took wide aim at the oft-repeated notion that, if an online service is free, you—the user—are the product. He said that he found the argument that “if you’re not paying that somehow we can’t care about you, to be extremely glib and not at all aligned with the truth.” His rebuttal to that accusation, however, was itself glib; and as for whether it was aligned with the truth—well, we just have to take his word for it. “To the dissatisfaction of our sales team here,” he said, “I make all of our decisions based on what’s going to matter to our community and focus much less on the advertising side of the business.”

    As far as I can tell, not once in his apology tour was Zuckerberg asked what on earth he means when he refers to Facebook’s 2 billion-plus users as “a community” or “the Facebook community.” A community is a set of people with reciprocal rights, powers, and responsibilities. If Facebook really were a community, Zuckerberg would not be able to make so many statements about unilateral decisions he has made—often, as he boasts in many interviews, in defiance of Facebook’s shareholders and various factions of the company’s workforce. Zuckerberg’s decisions are final, since he controls all the voting stock in Facebook, and always will until he decides not to—it’s just the way he has structured the company.

    This isn’t a community; this is a regime of one-sided, highly profitable surveillance, carried out on a scale that has made Facebook one of the largest companies in the world by market capitalization.

    Facebook’s 2 billion users are not Facebook’s “community.” They are its user base, and they have been repeatedly carried along by the decisions of the one person who controls the platform. These users have invested time and money in building their social networks on Facebook, yet they have no means to port the connectivity elsewhere. Whenever a serious competitor to Facebook has arisen, the company has quickly copied it (Snapchat) or purchased it (WhatsApp, Instagram), often at a mind-boggling price that only a behemoth with massive cash reserves could afford. Nor do people have any means to completely stop being tracked by Facebook. The surveillance follows them not just on the platform, but elsewhere on the internet—some of them apparently can’t even text their friends without Facebook trying to snoop in on the conversation. Facebook doesn’t just collect data itself; it has purchased external data from data brokers; it creates “shadow profiles” of nonusers and is now attempting to match offline data to its online profiles.

    Again, this isn’t a community; this is a regime of one-sided, highly profitable surveillance, carried out on a scale that has made Facebook one of the largest companies in the world by market capitalization.

There is no way to interpret Facebook’s privacy-invading moves over the years—even if it’s time to simplify! finally!―as anything other than decisions driven by a combination of self-serving impulses: namely, profit motives, the structural incentives inherent to the company’s business model, and the one-sided ideology of its founders and some executives. All these are forces over which the users themselves have little input, aside from the regular opportunity to grouse through repeated scandals. And even the ideology—a vague philosophy that purports to prize openness and connectivity with little to say about privacy and other values—is one that does not seem to apply to people who run Facebook or work for it. Zuckerberg buys houses surrounding his and tapes over his computer’s camera to preserve his own privacy, and company employees were up in arms when a controversial internal memo that made an argument for growth at all costs was recently leaked to the press—a nonconsensual, surprising, and uncomfortable disclosure of the kind that Facebook has routinely imposed upon its billions of users over the years.

    This isn’t to say Facebook doesn’t provide real value to its users, even as it locks them in through network effects and by crushing, buying, and copying its competition. I wrote a whole book in which I document, among other things, how useful Facebook has been to anticensorship efforts around the world. It doesn’t even mean that Facebook executives make all decisions merely to increase the company valuation or profit, or that they don’t care about users. But multiple things can be true at the same time; all of this is quite complicated. And fundamentally, Facebook’s business model and reckless mode of operating are a giant dagger threatening the health and well-being of the public sphere and the privacy of its users in many countries.

So, here’s the thing. There is indeed a case of Stockholm syndrome here. There are very few other contexts in which a person would be allowed to make a series of decisions that have obviously enriched them while eroding the privacy and well-being of billions of people; to make basically the same apology for those decisions countless times over the space of just 15 years; and then to profess innocence, idealism, and complete independence from the obvious structural incentives that have shaped the whole process. This should ordinarily cause all the other educated, literate, and smart people in the room to break into howls of protest or laughter. Or maybe tears.

    Facebook has tens of thousands of employees, and reportedly an open culture with strong internal forums. Insiders often talk of how free employees feel to speak up, and indeed I’ve repeatedly been told how they are encouraged to disagree and discuss all the key issues. Facebook has an educated workforce.

    By now, it ought to be plain to them, and to everyone, that Facebook’s 2 billion-plus users are surveilled and profiled, that their attention is then sold to advertisers and, it seems, practically anyone else who will pay Facebook—including unsavory dictators like the Philippines’ Rodrigo Duterte. That is Facebook’s business model. That is why the company has an almost half-a-trillion-dollar market capitalization, along with billions in spare cash to buy competitors.

    These are such readily apparent facts that any denial of them is quite astounding.

And yet, it appears that nobody around Facebook’s sovereign and singular ruler has managed to convince their leader that these are blindingly obvious truths whose acceptance may well provide us with some hints of a healthier way forward. That the repeated use of the word “community” to refer to Facebook’s users is not appropriate and is, in fact, misleading. That the constant repetition of “sorry” and “we meant well” and “we will fix it this time!” to refer to what is basically the same betrayal over 14 years should no longer be accepted as a promise to do better, but should instead be seen as but one symptom of a profound crisis of accountability. When a large chorus of people outside the company raises alarms on a regular basis, it’s not a sufficient explanation to say, “Oh we were blindsided (again).”

    Maybe, just maybe, that is the case of Stockholm syndrome we should be focusing on.

    Zuckerberg’s outright denial that Facebook’s business interests play a powerful role in shaping its behavior doesn’t bode well for Facebook’s chances of doing better in the future. I don’t doubt that the company has, on occasion, held itself back from bad behavior. That doesn’t make Facebook that exceptional, nor does it excuse its existing choices, nor does it alter the fact that its business model is fundamentally driving its actions.

    At a minimum, Facebook has long needed an ombudsman’s office with real teeth and power: an institution within the company that can act as a check on its worst impulses and to protect its users. And it needs a lot more employees whose task is to keep the platform healthier. But what would truly be disruptive and innovative would be for Facebook to alter its business model. Such a change could come from within, or it could be driven by regulations on data retention and opaque, surveillance-based targeting—regulations that would make such practices less profitable or even forbidden.

    Facebook will respond to the latest crisis by keeping more of its data within its own walls (of course, that fits well with the business of charging third parties for access to users based on extensive profiling with data held by Facebook, so this is no sacrifice). Sure, it’s good that Facebook is now promising not to leak user data to unscrupulous third parties; but it should finally allow truly independent researchers better (and secure, not reckless) access to the company’s data in order to investigate the true effects of the platform. Thus far, Facebook has not cooperated with independent researchers who want to study it. Such investigation would be essential to informing the kind of political discussion we need to have about the trade-offs inherent in how Facebook, and indeed all of social media, operate.

    Even without that independent investigation, one thing is clear: Facebook’s sole sovereign is neither equipped to, nor should he be in a position to, make all these decisions by himself, and Facebook’s long reign of unaccountability should end.



    Read More