Research Ethics: The 2014 Facebook Emotion Study

  • Writer: Cayenne
  • Sep 28, 2018
  • 3 min read

Updated: May 20, 2020

This was a reflection essay I wrote for a research ethics module about the 2014 Facebook emotion study, which many in academia have deemed ethically questionable.


According to the principle of autonomy, research participants must give informed consent before participating. In this case (Kramer et al., 2014), Facebook claimed that by agreeing to its Terms of Service (ToS), including its data use policy, users had implicitly given informed consent. However, one cannot expect users to remember, let alone challenge, the specific clause of the ToS (shown to them possibly years earlier) that allowed Facebook to alter their feeds. Moreover, although the ToS was shown to all users, most would not have read and understood the entire document (Obar & Oeldorf-Hirsch, 2018). The claim that agreeing to Facebook's ToS is equivalent to giving informed consent is therefore invalid. Furthermore, Facebook made no effort to inform users of the ongoing study or to explain its aims, and users were unaware that they could decline to participate or withdraw at any time. Hence, this study can be considered unethical on the grounds of autonomy.


During the experiment, Facebook showed unsuspecting users either neutral-to-positive or neutral-to-negative posts (Waldman, 2014). Whether users suffered emotionally from the deliberately negative posts could only be inferred, because the researchers took no follow-up action to ensure participants' well-being. As the results showed, people became sadder because of the study, yet the researchers did not debrief the users to minimise the emotional harm suffered. For this reason, nonmaleficence was not upheld. In addition, users were treated unequally: each group was shown either negative or positive posts. Not only was the positive group made happier, a benefit the negative group was denied, but the negative group was also made unhappier, compounding the unequal treatment. Consequently, on the criterion of justice, the study would also be deemed unethical.


The results of the study showed that people's emotions were reinforced by what they saw, a phenomenon the researchers called 'emotional contagion' (Meyer, 2014). Weighing the benefit of demonstrating emotional contagion against the emotional harm suffered by users in the negative group lets one judge whether the study upheld beneficence. Consider this: instead of hiring private researchers and keeping the data as proprietary information, Facebook consulted academic researchers from Cornell University and published the findings publicly in the Proceedings of the National Academy of Sciences (PNAS) (Meyer, 2014). This supports its claim that it wanted to improve its services by understanding people's emotions. Overall, the research had good intentions (despite its problematic execution), and it was ethical in the matter of beneficence.


In conclusion, despite its valuable results, this Facebook study was unethical because of the lack of informed consent and the emotional harm inflicted on some participants. Here are some suggestions to make such a study more ethical. Facebook could disclose upfront that it is conducting a research study and explain the study's aims and potential risks. Users could then opt not to participate; those who choose to participate should be reminded of their right to withdraw at any time during the study. If there are insufficient willing participants, Facebook could incentivise users by framing the study as personality profiling, giving participants access to the analysis of their own data so they can learn more about themselves (Schocker, 2014). Alternatively, Facebook could offer more concrete incentives, such as two weeks of ad-free usage (Albertson & Gadarian, 2014). If reactivity is a concern, the study's aims could be disguised so that participants act more naturally during the experiment; a debriefing message could subsequently be broadcast, explaining the use of deception. Finally, an Institutional Review Board (IRB) that supervises research using personal data from social media could prevent such unethical practices from being carried out in the first place.



References

Albergotti, R., & Dwoskin, E. (2014). Facebook study sparks soul-searching and ethical questions. Retrieved September 20, 2018, from https://www.wsj.com/articles/facebook-study-sparks-ethical-questions-1404172292

Albertson, B., & Gadarian, S. (2014). Was the Facebook emotion experiment unethical? Retrieved September 20, 2018, from https://www.washingtonpost.com/news/monkey-cage/wp/2014/07/01/was-the-facebook-emotion-experiment-unethical/?utm_term=.93edf73fcceb

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111

Meyer, R. (2014). Everything we know about Facebook's secret mood manipulation experiment. Retrieved September 20, 2018, from https://www.theatlantic.com/technology/archive/2014/06/everything-we-know-about-facebooks-secret-mood-manipulation-experiment/373648/

Obar, J. A., & Oeldorf-Hirsch, A. (2018). The biggest lie on the internet: Ignoring the privacy policies and terms of service policies of social networking services. TPRC 44: The 44th Research Conference on Communication, Information and Internet Policy, 2016. Retrieved September 25, 2018, from https://ssrn.com/abstract=2757465

Schocker, L. (2014). This is why those online personality quizzes are so irresistible. Retrieved September 25, 2018, from https://www.huffingtonpost.com/2014/02/05/psychology-online-quiz_n_4725593.html



Cover image originally from here, edited by me to fit the display dimensions of this site.

