You don’t have to delete Facebook, but you could definitely be using it better | Popular Science

You don’t need to stop using Facebook, but you should stop using it so poorly.

Source: You don’t have to delete Facebook, but you could definitely be using it better | Popular Science


Facebook’s Privacy Message Undermined by the Times—Again | WIRED

Facebook has spent much of 2018 apologizing to people. A recent New York Times investigation calls all those apologies into question.

Source: Facebook’s Privacy Message Undermined by the Times—Again | WIRED

IF THERE IS one message Facebook has been trying to send to the world in 2018, it’s that it understands that it needs to rethink the way it operates as a company. It says it understands that it must better police the content that appears on its platforms. And as a result of the Cambridge Analytica scandal early this year, it says that it must be more effective in how it protects user data, more transparent about all the data it collects on them, and more clear about who has access to it. CEO and co-founder Mark Zuckerberg said fixing Facebook was his project for 2018, and he said earlier this year that he was dedicating enough resources to the problem that we should expect to see tangible progress as we approached 2019.

But facts have proven to be inconvenient things for Facebook in 2018. Every month this year—and in some months, every week—new information has come out that makes it seem as if Facebook’s big rethink is in big trouble. The billions the company is spending to fix itself, along with slowing advertising growth in Europe and North America, have stalled revenues. Its once highflying stock price is down 35 percent. Well-known and well-regarded executives, like the founders of Facebook-owned Instagram, Oculus, and WhatsApp, have left abruptly. And more and more current and former employees are beginning to question whether Facebook’s management team, which has been together for most of the last decade, is up to the task.

Technically, Zuckerberg controls enough voting power to resist and reject any moves to remove him as CEO. But the number of times that he and his number two Sheryl Sandberg have overpromised and underdelivered since the 2016 election would doom any other management team. And so for the first time in Facebook’s storied history as a public company, employees, investors, and users are beginning to wonder if the only way to solve Facebook’s current spate of problems is to replace them.

Just since the end of September, Facebook has announced the biggest security breach in its history, affecting more than 30 million accounts. Meanwhile, investigations in November revealed, among other things, that the company had hired a Washington firm to spread its own brand of misinformation on other platforms, including borderline anti-Semitic stories about financier George Soros. And only two weeks ago, a cache of internal Facebook emails dating back to 2012 revealed that at times Facebook thought a lot more about how to make money off its users’ data than it did about protecting it.

Now, according to a New York Times investigation into Facebook’s data practices published Tuesday, long after Facebook said it had taken steps to protect user data from the kinds of leakages that made Cambridge Analytica possible, it continued special, undisclosed data sharing arrangements with more than 150 companies—some into this year. And unlike with Cambridge Analytica, the Times says, Facebook provided access to its users’ data knowingly, and on a greater scale.

Some companies, like Microsoft’s Bing search engine, had access to virtually all of a Facebook user’s friends without consent. Apple devices had access to the contact numbers and calendar entries of people who had changed their account settings to disable all sharing. Spotify and Netflix had the ability to read users’ private messages. The search engine Yandex was among the companies with special access, even though it has long been suspected of having close ties to the Kremlin. The Times itself retained access into 2017 to users’ friend lists through an article-sharing application it had discontinued in 2011; Facebook told the paper’s reporters that the application was no longer obtaining any data. Apple, Spotify, Yandex, and Netflix all told the Times that they were unaware Facebook had granted them such broad access.

There have been murmurings all year over whether Congress might pass new data protection laws akin to the GDPR in Europe, or whether the FTC would fine Facebook for violating its 2011 consent decree with the agency. Now it would not be a stretch to wonder if both those things aren’t imminent when the new Congress convenes in January. Already—today—the attorney general for the District of Columbia decided to sue Facebook for alleged data misuse stemming from Cambridge Analytica. It’s likely to have company in that effort.

Facebook told the Times that no data was mismanaged or misused, that the data was all available publicly, that it considered its partners to effectively be part of Facebook and therefore subject to the same strict rules of conduct, and that as a result of all this it was not in violation of any statutes or its consent decree with the FTC.

Facebook posted further comment in a blog post, authored by Konstantinos Papamiltiadis, Facebook’s director of developer platforms and programs. “Today, we’re facing questions about whether Facebook gave large tech companies access to people’s information and, if so, why we did this. To put it simply, this work was about helping people do two things. First, people could access their Facebook accounts or specific Facebook features on devices and platforms built by other companies like Apple, Amazon, Blackberry and Yahoo. These are known as integration partners. Second, people could have more social experiences—like seeing recommendations from their Facebook friends—on other popular apps and websites, like Netflix, The New York Times, Pandora and Spotify,” Papamiltiadis wrote.

“We’ve been public about these features and partnerships over the years because we wanted people to actually use them – and many people did. They were discussed, reviewed, and scrutinized by a wide variety of journalists and privacy advocates. But most of these features are now gone. We shut down instant personalization, which powered Bing’s features, in 2014 and we wound down our partnerships with device and platform companies months ago, following an announcement in April.

“Still, we recognize that we’ve needed tighter management over how partners and developers can access information using our APIs. We’re already in the process of reviewing all our APIs and the partners who can access them.”

Zuckerberg and his executives are such masters of this kind of sincere apology, it should have a special name like “apolozuck,” or perhaps just “zucked.” It’s truly rhetoric as art. “We’re sorry. We’re as upset as you are. But that thing you are angry at us about happened a few years ago, and we’ve fixed the problems. They happened because we were trying to make Facebook better for you. But we now see how it left your data vulnerable to bad things too. We care more about your data and your privacy than anyone. It won’t happen again. We promise.”

What has enabled them to deliver these apologies year after year is that these monologues were always true enough to be believable. The Times’ story calls into question every one of those apologies—especially the ones issued this year.

All year long, Facebook has encouraged the world to believe that a Cambridge Analytica–style data leakage couldn’t happen anymore—that, as Zuckerberg told lawmakers in April, users had “complete control” over what happened to their data. Two weeks ago, after those scheming emails Zuckerberg exchanged with executives about data sharing arrangements were released by the UK Parliament, Zuckerberg said in a Facebook post that they were taken out of context.

Except, now it appears Facebook has had all manner of data sharing relationships it hasn’t been telling the world about. “We’ve never sold anyone’s data,” Zuckerberg wrote in his post, and has insisted at various other times this year. But Zuckerberg saying that Facebook has never sold user data is an answer that only an engineer could love. It is technically correct but practically false. Sure, Facebook has never given other companies user data in exchange for cash. But it’s quite obvious to the world now that Facebook for a long time has been giving user data to other companies in exchange for other equally or more valuable things.

There’s a simple takeaway from all this, and it’s not a pretty one: Facebook is either a mendacious, arrogant corporation in the mold of a 1980s-style Wall Street firm, or it is a company in much more disarray than it has been letting on. Think about almost everything bad that’s happened to Facebook since the 2016 election: Russian interference, Cambridge Analytica, data sharing, astroturfing. Facebook could have kept all of them from becoming scandals, or at least kept them from growing so large, had it just leveled with the world when it had the chance. The fact that it didn’t suggests either that it didn’t want to, or that it is simply not well managed enough to pull it off.

It’s all hard to read without finally realizing what it is that’s made us so angry with Silicon Valley, Facebook in particular, in 2018: We feel lied to, like these companies are playing us, their users, for chumps, and laughing at us for being so naive.

We’d expect such deceptions from banks, or oil companies, or car makers, or tobacco firms. But companies like Facebook built their brands by promising something different. They told us, “It’s not about the money and the power of being a billionaire and running one of the richest companies on the planet, it’s about making the world a better place—making it more open and connected.” And we fell for the ruse hook, line, and sinker.

Americans are weird about their tycoons. We have a soft spot for success, especially success from people as young as Zuckerberg was when he started Facebook. But we hate it when they become super rich and powerful like he is now and seem accountable to no one. We’ll tolerate rogues like Larry Ellison, founder and CEO of Oracle, who once happily admitted to hiring investigators to search Bill Gates’ trash. Ellison makes no effort to hide the fact that he’s in it for the money and the power. But what people despise more than anything is what we have now with the companies in Silicon Valley, especially with Facebook: Greed falsely wrapped in sanctimony.

Facebook gave the world a great new tool for staying connected. Zuckerberg even pitched it as a better internet—a safe space away from the anonymous trolls lurking everywhere else online. But it’s now very debatable whether Facebook is really a better internet that is making the world a better place, or just another big powerful corporation out to make as much money as possible. Perhaps the world would be happier with Zuckerberg and Facebook, and the rest of his Silicon Valley brethren, if they stopped pretending to be people and businesses they are not.

Chuck Schumer’s Facebook ties came with donations and a job for his daughter

Do you still believe that your concerns about Facebook privacy matter at all to those in power?


What does it take to friend a U.S. senator? If you’re Facebook, all you need is about $50,000 in donations – and a cushy job for the politician’s daughter.

Source: Chuck Schumer’s Facebook ties came with donations and a job for his daughter

Facebook employees, including some at the top of its corporate pyramid, have helped fill Schumer’s campaign coffers – and he’s returned the favor by carrying water for the social media giant in Congress, according to a recent report.

And Alison Schumer, the younger of the senator’s two daughters, works as a Facebook product marketing manager – a job that pays an average of $160,000, according to Glassdoor.com.

“It sure looks hinky,” political strategist Susan Del Percio told The Post. “This is an industry that’s been trying for years to fend off heavy government regulation by actively cultivating relationships with senators and House members.”

Last week, it emerged that Schumer has been a strong advocate of Facebook on Capitol Hill. He pressured Sen. Mark Warner (D- Virginia), one of Facebook’s most aggressive challengers in Congress, to back off from investigating the company, according to The New York Times.

Schumer’s support of Facebook remained steadfast even as it emerged that Russian trolls were using the social media platform to interfere with the 2016 presidential election. The company’s also come under fire for lax privacy standards leading to the exposure of users’ personal data.

“Facebook is a very powerful force,” Schumer said in March, as the problems began coming to light. “I think, overall, it’s been a very positive force.”

Top Facebook execs have contributed thousands to Schumer’s campaign fund for years.

Founder and CEO Mark Zuckerberg gave the senator $5,200 in 2013.

Sheryl Sandberg, the company’s high-profile chief operating officer, kicked in $5,400 – the maximum legal amount – to Schumer’s 2016 re-election campaign.

Facebook general counsel Colin Stretch gave the same sum in 2015.

Newly appointed board member Kenneth Chenault has been a loyal Schumer supporter since 1995. Most recently, he gave $1,200 to the senator’s 2016 primary election campaign and $2,700 in that year’s general election. Chenault has contributed a total of $6,900.

Critics have called out the cozy relationship. In April, right-leaning street artist Sabo plastered the city with posters reading “Conflict of Interest? The daughter of Chuck is working for Zuck,” The Post reported.

Alison Schumer worked for Facebook from 2011 to 2013 and rejoined the company in 2017, according to her LinkedIn profile. She could not be reached Saturday; she is set to marry Elizabeth Weiland in Brooklyn on Sunday.

“Sen. Schumer has worked aggressively to push Facebook to do more to purge fake accounts and bots used by the right wing and Russians to perpetuate a disinformation campaign and interfere with our elections,” Schumer spokesman Justin Goodman said Thursday.

But evidence of the senator’s direct efforts to grease the skids for the company could fuel Republican proposals to clamp down on Facebook.

Conservatives say the company has suppressed right-leaning news sites – one conservative publisher said his traffic cratered by 90% after Facebook introduced a new algorithm in 2017 – and has banned Trump supporters unjustly.

“It will force some action,” Del Percio said. “Facebook will see a lot more aggressive questioning and oversight after this.”

Surveillance Kills Freedom By Killing Experimentation | WIRED

When we’re being watched, we conform. We don’t speak freely or try new things. But social progress happens in the gap between what’s legal and what’s moral.

Source: Surveillance Kills Freedom By Killing Experimentation | WIRED

In my book Data and Goliath, I write about the value of privacy. I talk about how it is essential for political liberty and justice, and for commercial fairness and equality. I talk about how it increases personal freedom and individual autonomy, and how the lack of it makes us all less secure. But this is probably the most important argument as to why society as a whole must protect privacy: it allows society to progress.

We know that surveillance has a chilling effect on freedom. People change their behavior when they live their lives under surveillance. They are less likely to speak freely and act individually. They self-censor. They become conformist. This is obviously true for government surveillance but is true for corporate surveillance as well. We simply aren’t as willing to be our individual selves when others are watching.

Let’s take an example: hearing that parents and children are being separated as they cross the U.S. border, you want to learn more. You visit the website of an international immigrants’ rights group, a fact that is available to the government through mass internet surveillance. You sign up for the group’s mailing list, another fact that is potentially available to the government. The group then calls or emails to invite you to a local meeting. Same. Your license plates can be collected as you drive to the meeting; your face can be scanned and identified as you walk into and out of the meeting. If instead of visiting the website you visit the group’s Facebook page, Facebook knows that you did and that feeds into its profile of you, available to advertisers and political activists alike. Ditto if you like their page, share a link with your friends, or just post about the issue.

Maybe you are an immigrant yourself, documented or not. Or maybe some of your family is. Or maybe you have friends or coworkers who are. How likely are you to get involved if you know that your interest and concern can be gathered and used by government and corporate actors? What if the issue you are interested in is pro- or anti-gun control, anti-police violence or in support of the police? Does that make a difference?

Maybe the issue doesn’t matter, and you would never be afraid to be identified and tracked based on your political or social interests. But even if you are so fearless, you probably know someone who has more to lose, and thus more to fear, from their personal, sexual, or political beliefs being exposed.

This isn’t just hypothetical. In the months and years after the 9/11 terrorist attacks, many of us censored what we spoke about on social media or what we searched on the internet. We know from a 2013 PEN study that writers in the United States self-censored their browsing habits out of fear the government was watching. And this isn’t exclusively an American event; internet self-censorship is prevalent across the globe, China being a prime example.

Ultimately, this fear stagnates society in two ways. The first is that the presence of surveillance means society cannot experiment with new things without fear of reprisal, and that means those experiments—if found to be inoffensive or even essential to society—cannot slowly become commonplace, moral, and then legal. If surveillance nips that process in the bud, change never happens. All social progress—from ending slavery to fighting for women’s rights—began as ideas that were, quite literally, dangerous to assert. Yet without the ability to safely develop, discuss, and eventually act on those assertions, our society would not have been able to further its democratic values in the way that it has.

Consider the decades-long fight for gay rights around the world. Within our lifetimes we have made enormous strides to combat homophobia and increase acceptance of queer folks’ right to marry. Queer relationships slowly progressed from being viewed as immoral and illegal, to being viewed as somewhat moral and tolerated, to finally being accepted as moral and legal.

In the end, it was the public nature of those activities that eventually slayed the bigoted beast, but the ability to act in private was essential in the beginning for the early experimentation, community building, and organizing.

Marijuana legalization is going through the same process: it’s currently sitting between somewhat moral, and—depending on the state or country in question—tolerated and legal. But, again, for this to have happened, someone decades ago had to try pot and realize that it wasn’t really harmful, either to themselves or to those around them. Then it had to become a counterculture, and finally a social and political movement. If pervasive surveillance meant that those early pot smokers would have been arrested for doing something illegal, the movement would have been squashed before inception. Of course, the story is more complicated than that, but the ability for members of society to privately smoke weed was essential for putting it on the path to legalization.

We don’t yet know which subversive ideas and illegal acts of today will become political causes and positive social change tomorrow, but they’re around. And they require privacy to germinate. Take away that privacy, and we’ll have a much harder time breaking down our inherited moral assumptions.

The second way surveillance hurts our democratic values is that it encourages society to make more things illegal. Consider the things you do—the different things each of us does—that portions of society find immoral. Not just recreational drugs and gay sex, but gambling, dancing, public displays of affection. All of us do things that are deemed immoral by some groups, but are not illegal because they don’t harm anyone. But it’s important that these things can be done out of the disapproving gaze of those who would otherwise rally against such practices.

If there is no privacy, there will be pressure to change. Some people will recognize that their morality isn’t necessarily the morality of everyone—and that that’s okay. But others will start demanding legislative change, or using less legal and more violent means, to force others to match their idea of morality.

It’s easy to imagine the more conservative (in the small-c sense, not in the sense of the named political party) among us getting enough power to make illegal what they would otherwise be forced to witness. In this way, privacy helps protect the rights of the minority from the tyranny of the majority.

This is how we got Prohibition in the 1920s, and if we had had today’s surveillance capabilities in the 1920s it would have been far more effectively enforced. Recipes for making your own spirits would have been much harder to distribute. Speakeasies would have been impossible to keep secret. The criminal trade in illegal alcohol would also have been more effectively suppressed. There would have been less discussion about the harms of Prohibition, less “what if we didn’t…” thinking. Political organizing might have been difficult. In that world, the law might have stuck to this day.

China serves as a cautionary tale. The country has long been a world leader in the ubiquitous surveillance of its citizens, with the goal not of crime prevention but of social control. It is about to further enhance that system by giving every citizen a “social credit” rating. The details are still unclear, but the general concept is that people will be rated based on their activities, both online and off. Their political comments, their friends and associates, and everything else will be assessed and scored. Those who are conforming, obedient, and apolitical will be given high scores; those with low scores will be denied privileges like access to certain schools and foreign travel. If the program is half as far-reaching as early reports indicate, the pressure to conform will be enormous. This is precisely the sort of surveillance designed to maintain the status quo.

For social norms to change, people need to deviate from these inherited norms. People need the space to try alternate ways of living without risking arrest or social ostracization. People need to be able to read critiques of those norms without anyone’s knowledge, discuss them without their opinions being recorded, and write about their experiences without their names attached to their words. People need to be able to do things that others find distasteful, or even immoral. The minority needs protection from the tyranny of the majority.

Privacy makes all of this possible. Privacy encourages social progress by giving the few room to experiment free from the watchful eye of the many. Even if you are not personally chilled by ubiquitous surveillance, the society you live in is, and the personal costs are unequivocal.


From The End of Trust (McSweeney’s 54), out November 20th, a collection featuring over thirty writers investigating surveillance, technology, and privacy, with special advisors The Electronic Frontier Foundation.

Facebook’s Massive Security Breach: Everything We Know | WIRED

Up to 50 million Facebook users were affected—and possibly 40 million more—when hackers compromised the social network’s systems.

FACEBOOK’S PRIVACY PROBLEMS severely escalated Friday when the social network disclosed that an unprecedented security issue, discovered September 25, impacted almost 50 million user accounts. Unlike the Cambridge Analytica scandal, in which a third-party company erroneously accessed data that a then-legitimate quiz app had siphoned up, this vulnerability allowed attackers to directly take over user accounts.

The bugs that enabled the attack have since been patched, according to Facebook. The company says that the attackers could see everything in a victim’s profile, although it’s still unclear if that includes private messages or if any of that data was misused. As part of that fix, Facebook automatically logged out 90 million Facebook users from their accounts Friday morning, accounting both for the 50 million that Facebook knows were affected, and an additional 40 million that potentially could have been. Later Friday, Facebook also confirmed that third-party sites that those users logged into with their Facebook accounts could also be affected.

Source: Facebook’s Massive Security Breach: Everything We Know | WIRED