It has emerged that a study carried out by Facebook, the University of California, San Francisco, and Cornell University monitored the emotional responses of nearly 700,000 users after their news feeds were deliberately manipulated with both positive and negative information.
Researcher Adam Kramer says the purpose of the 2012 study was to understand how Facebook can affect people’s emotions, and what the implications of that might be, though critics argue it was unethical because participants were neither informed nor asked for their consent.
The research found that when users’ news feeds were manipulated in January 2012 to show mainly negative information, they made more negative posts themselves, whereas when they viewed mainly positive information, their own posts were more positive too.
This helps to confirm the concept of massive-scale emotional contagion, or more simply, that emotions can be passed on without conscious awareness. It might also suggest that “Facebook depression”, the feeling of glumness after seeing that your friends are having a better time than you are, is more complex than it seems, because in this case happy content produced happy posts, not the other way around.
What it also might show, however, is that Facebook is not open with its users about how they are being manipulated and what their data is being used for.
While the paper notes that the experiment was consistent with Facebook’s Data Use Policy, we’re sure that if people were expressly told when signing up that their emotions might be manipulated in secret experiments, many would have had second thoughts.
Some lawyers are even arguing that the social media site and the researchers broke the law, because “informed consent” cannot be pushed through the backdoor of a terms and conditions page, especially not when public funding is involved: Adam Kramer is from Facebook, but Jamie Guillory is from UCSF and Jeffrey Hancock is from Cornell, two federally funded research universities.
It is also not so much a privacy issue (researchers used a bot to manipulate news feeds and didn’t read them personally); it’s that people were, for all intents and purposes, made to feel shitty.
“My co-authors and I are very sorry for the way the paper described the research and any anxiety it caused,” apologised Kramer in a blog post. “In hindsight, the research benefits of the paper may not have justified all of this anxiety.”
The experiment was conducted only on English-speaking users. Posts were classified as positive or negative if they contained at least one positive or negative word, as defined by Linguistic Inquiry and Word Count (LIWC) software, and were hidden or shown accordingly over a one-week period.
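To make that criterion concrete, here is a minimal Python sketch of a word-count test of this kind. The word lists and function name are placeholders, not the actual LIWC dictionaries or Facebook’s code; the paper only states that a post counted as positive or negative if it contained at least one word from the relevant category.

```python
# Minimal sketch of "contains at least one positive/negative word" classification.
# The word sets below are illustrative placeholders, NOT the LIWC dictionaries.

POSITIVE_WORDS = {"happy", "great", "love", "awesome"}
NEGATIVE_WORDS = {"sad", "awful", "hate", "terrible"}

def classify_post(text: str) -> str:
    """Label a post by simple word matching against placeholder word lists."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    has_pos = bool(words & POSITIVE_WORDS)
    has_neg = bool(words & NEGATIVE_WORDS)
    if has_pos and not has_neg:
        return "positive"
    if has_neg and not has_pos:
        return "negative"
    if has_pos and has_neg:
        return "mixed"
    return "neutral"

if __name__ == "__main__":
    print(classify_post("Had an awesome day at the beach!"))    # -> positive
    print(classify_post("Traffic was terrible this morning."))  # -> negative
```

Crude as it looks, this one-word threshold is essentially the level of granularity the study describes; the filtering of which posts appeared in a feed was then applied on top of labels like these.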
“They are manipulating material from people’s personal lives and I am worried about the ability of Facebook and others to manipulate people’s thoughts in politics or other areas,” said UK Labour MP Jim Sheridan. “If people are being thought-controlled in this kind of way there needs to be protection and they at least need to know about it.”
This sentiment has been echoed across the web by other politicians, activists, and concerned users. However, a Facebook spokeswoman said that part of the reason the study was commissioned was to understand whether fears about the power of Facebook are warranted, and to help the company “improve the service.”
“A big part of this is understanding how people respond to different types of content, whether it’s positive or negative in tone, news from friends, or information from pages they follow.”
What do you think: is this another example of Facebook crossing the ethical line? Or is it all just hype?