In January 2012, Facebook conducted a study on nearly 700,000 users to determine how emotionally positive and negative posts in people’s feeds affect their own emotions. In essence, the findings show that if your friends write happier posts, your own posts tend to be happier as well, and the same holds for negative posts. Facebook did not add any content; it merely modified the algorithms that determine what content gets displayed. The study has proven controversial with the public since Facebook’s findings were published in the Proceedings of the National Academy of Sciences. The legality of the research depends on the extent of the involvement of Cornell University and the University of California, as federally funded research is required to follow the Common Rule for human subjects, which requires researchers to obtain informed consent from participants. Facebook states in its data use policy that “we may use the information we receive about you… for internal operations, including… research”. The UK Information Commissioner’s Office is currently investigating the issue and has the power to levy fines of up to £500,000 (858,145 USD).
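
To make the mechanism concrete, the sketch below shows, in rough terms, the kind of filtering the paper describes: posts automatically detected as containing positive or negative emotional words are withheld from a user’s feed with some probability, and nothing is ever added. The word lists, function names, and probabilities are illustrative assumptions only; the published study reportedly used the LIWC word-counting software and per-user omission rates, and Facebook’s actual news feed code is not public.

```python
import random

# Illustrative word lists only; the published study reportedly relied on the
# LIWC word-counting software to classify posts as positive or negative.
POSITIVE_WORDS = {"happy", "great", "love", "wonderful"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def contains_any(post, words):
    """Return True if the post contains any word from the given set."""
    return any(token in words for token in post.lower().split())

def filter_feed(posts, suppress="positive", omit_prob=0.5, seed=None):
    """Withhold a random fraction of posts detected as carrying the targeted
    emotion; all other posts pass through unchanged.

    This mirrors the design described in the paper only at a very rough
    level: nothing is added to the feed, some emotional content is simply
    less likely to be shown.
    """
    rng = random.Random(seed)
    target = POSITIVE_WORDS if suppress == "positive" else NEGATIVE_WORDS
    shown = []
    for post in posts:
        if contains_any(post, target) and rng.random() < omit_prob:
            continue  # this post is silently withheld from the rendered feed
        shown.append(post)
    return shown

# Example: reduce positive content, roughly the "positivity reduced" condition.
sample = ["Feeling great today!", "Traffic was awful", "Lunch at noon?"]
print(filter_feed(sample, suppress="positive", omit_prob=0.5, seed=7))
```

Seeding the random generator is only for reproducibility of the example; the point the sketch makes is simply that omission, not injection, was the manipulation.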

Opinions

Tal Yarkoni via [citation needed]

Facebook simply removed a variable proportion of status messages that were automatically detected as containing positive or negative emotional words. It did not, as many people seem to be assuming, add content specifically intended to induce specific emotions. Second, it’s not clear what the notion that Facebook users’ experience is being “manipulated” really even means, because the Facebook news feed is, and has always been, a completely contrived environment. The reality is that Facebook–and virtually every other large company with a major web presence–is constantly conducting large controlled experiments on user behavior. Typically, these manipulations aren’t conducted in order to test basic questions about emotional contagion; they’re conducted with the explicit goal of helping to increase revenue.

Fourth, it’s worth keeping in mind that there’s nothing intrinsically evil about the idea that large corporations might be trying to manipulate your experience and behavior. Everybody you interact with–including every one of your friends, family, and colleagues–is constantly trying to manipulate your behavior in various ways.

Consider: by far the most likely outcome of the backlash Facebook is currently experiencing is that, in future, its leadership will be less likely to allow its data scientists to publish their findings in the scientific literature. The fact that Facebook is willing to allow its data science team to spend at least some of its time publishing basic scientific research that draws on Facebook’s unparalleled resources is something to be commended, not criticized.

thatnameagain via /r/worldnews on Reddit

What’s funny is that this is only legally questionable because it was conducted as a legitimate scientific study, to which regulations such as data-privacy rules can apply. As a private company to which you pay no money, Facebook can manipulate your mood and mess with your feed all it wants, so long as it doesn’t bring in outsiders. Legitimizing the scientific accuracy of the study is what caused it to be (maybe) illegal.

Paige Brown via SciLogs

Given the many examples of other experimental manipulations of online content, including previous studies conducted using Facebook, it seems the biggest reason people are complaining about the ethics of this study in particular is that Facebook apparently intentionally made us sad. But did they really? They didn’t make our Facebook friends post any more negative statuses than they would have otherwise. Those of us who got the ‘negative’ experimental manipulation simply saw fewer of our friends’ positive statuses that week, by a variable amount. It certainly wasn’t all ‘puppies’ for one group and all ‘death’ for the other. Most Facebook users are probably aware that their news feed isn’t simply a collection of everything their friends post. As the PNAS study pointed out, “[b]ecause people’s friends frequently produce much more content than one person can view, the News Feed filters posts, stories, and activities undertaken by friends.” Facebook has probably experimented with many different ways to filter the content that shows up on your news feed.

brilliantjoe via /r/worldnews on Reddit

The study doesn’t show any personal information, just metadata collected over the course of the study, so there aren’t any privacy issues here. Also: Facebook owns the data set, they own the algorithms used to produce the data set, and you agree to allow Facebook to show you information via their algorithms (and they don’t tell you, in any great detail, HOW they choose what to display and when). There’s nothing to see here. Facebook did something scummy, but it’s not illegal.

Dan Diamond via Forbes

But let Facebook’s study serve as a wake-up call: If you’re actively surfing the Web, you’re a high-tech lab rat. At least 15% of the top 10,000 websites conduct A/B testing at any given time, the MIT Technology Review reported earlier this year. Facebook’s just the only site to publish an academic paper about it. Meanwhile, don’t expect the company to stop this sort of testing, no matter how loud the outcry over this study. There are too many business-driven reasons for Facebook to keep tweaking its platform and learning more about how its users respond to different triggers. And if that concept bothers you—if you find Facebook’s artificial environment somehow less friendly today than yesterday—there’s a simple solution: Quit Facebook.
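
As an editorial aside on the practice Diamond describes: an A/B test usually amounts to deterministically assigning each user to a variant and then comparing an outcome metric between the groups. The sketch below is a generic illustration with assumed names (assign_variant, the experiment label), not any particular site’s system.

```python
import hashlib

def assign_variant(user_id, experiment="feed_ranking_test", variants=("A", "B")):
    """Deterministically bucket a user into a test variant by hashing the
    experiment name together with the user ID, so the assignment is stable
    across visits without storing any per-user state.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Example: split a few hypothetical user IDs between two variants.
for uid in ["alice", "bob", "carol", "dave"]:
    print(uid, assign_variant(uid))
```

Hashing the experiment name together with the user ID keeps assignments stable across visits and independent across experiments; the interesting part of any real test is the metric comparison that follows, which is exactly the routine experimentation Diamond is pointing at.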

Katy Waldman via Slate

As Grimmelmann observes, nothing in the data use policy suggests that Facebook reserves the right to seriously bum you out by cutting all that is positive and beautiful from your news feed. Emotional manipulation is a serious matter, and the barriers to experimental approval are typically high.

Jacob Silverman via The Wire

What’s disturbing about how Facebook went about this, though, is that they essentially manipulated the sentiments of hundreds of thousands of users without asking permission (blame the terms of service agreements we all opt into). This research may tell us something about online behavior, but it’s undoubtedly more useful for, and more revealing of, Facebook’s own practices.