Facebook Users or Lab Rats: Ethical Research in the Age of Big Data

Editor’s Note: This post is written by Jenny Stromer-Galley, Associate Professor here at the iSchool and Vice President of the Association of Internet Researchers; it originally appeared on her blog on July 1, 2014.

A firestorm, in the academic world anyway, erupted this week when the AV Club posted a short review of and commentary on a new research study on Facebook users.

The article, written by Adam Kramer, Jamie Guillory, and Jeffrey Hancock and published in the journal Proceedings of the National Academy of Sciences, reports on a nearly 700,000-person study. For a week, one set of participants experienced news feeds that had fewer positive emotional posts, and another set had news feeds with fewer negative posts. There was also a control condition—that set of participants saw news feeds in which some of the posts were randomly removed. The study lends support to the idea that there is emotional contagion in social networks: if people see more positive emotional messages, they produce positive messages in kind; if they see negative messages, they write more negative messages.

Research Ethics

The firestorm centers around research ethics. Researchers whose work is funded by the federal government, or who work for organizations that receive federal dollars, must submit their research proposals to review by a board of members from the community who consider the ethical implications of the research. These boards are often known as institutional review boards or IRBs. The review board ensures research participants are treated with respect and with regard for their welfare. It also helps researchers do the necessary work to inform participants of their involvement and their rights, and to consent to be a participant in a research study if they wish, or to opt out if they don’t want to participate. The reasons for this go back to unethical research conducted through the early 1970s, from medical to psychological experiments, in which people were subjected, usually without their knowledge, to what are now viewed as inhumane, injurious, and physically and psychologically damaging experiences.

I’m sure that the first author at Facebook, the other two academic researchers, and the journal that published the research were not particularly concerned about ethics and the principles that guide basic research for several reasons. Facebook isn’t an organization that explicitly receives federal dollars (at least not that I was able to ascertain in my quick search). Moreover, as an online business, Facebook routinely conducts what is called A/B testing, a form of research in which different messages, images, or layouts are randomly put in front of users and the results, like clicks or purchases, are recorded so that businesses can determine the most effective communication strategy. Finally, Facebook has a terms of use legal agreement (not that anyone reads it) that notes that they will collect data from users for a variety of purposes, including research.

Facebook Business or Facebook Research?

Except, there are two problems with these lines of reasoning. First, it’s true that Facebook is a business, but the experiment they conducted with heavy involvement by faculty from the University of California, San Francisco and Cornell University wasn’t conducted for business purposes, at least not obviously. It was conducted to inform our understanding of human behavior, which is the core of basic research. And basic research is subject to core ethical principles of human research. In the words of the Belmont report (the document all researchers must abide by if they do research with people), all generalizable research conducted with humans must ensure that “persons are treated in an ethical manner not only be respecting their decisions and protecting them from harm, but also by making efforts to secure their well-being.”


Second, burying a clause about research in the terms of use is not in any way informed consent. Although it appears Facebook changed its terms of use to include “research” after this particular study commenced, to me that’s beside the point. The issue is that people don’t read terms of use documents, and ethical principles mandate that people involved in basic research must be informed of their rights as participants. Even burying “research” among the many things Facebook can do to users and with their data doesn’t come anywhere close to informed consent. Moreover, legally binding people to be subject to psychological experiments should trigger everyone’s ethical spidey sense that we have a problem here.


I know that the lure of big data—the chance to experiment on thousands of people who are a captive audience on a platform like Facebook—is too tempting for researchers like me to pass up. But just because researchers can experiment on people doesn’t mean we should, and we certainly should not be conducting mass experiments on the unwitting without a purposeful, objective ethical review, with careful consideration of the balance of knowledge gained at the expense of human participants (as suggested by the Association of Internet Researchers ethics guide). Human subjects review boards need to extend not only to federally funded research but to any research that aims at generalizable knowledge. Without such protections, unfortunately, researchers in their quest to reveal the truth of human behavior will repeat history. They will fail to ensure that people are treated with dignity and respect and instead will treat them as lab rats.

Please share your thoughts in the comments below or via Twitter to @profjsg.


Kelly Lux

Kelly is the former Executive Editor of Information Space. Kelly currently teaches courses on Social Media, Online Community Management, and Content Strategy and Application, and she is currently the Assistant Director of the Communications@Syracuse program.
