That our interactions with media can affect our emotional state is hardly a new phenomenon. If you’ve ever cried at the death of Bambi’s mother, or got angry at a story in a tabloid, you’ve experienced the impact that media can have.

So, the response in both old and new media to the recent paper from Facebook on its psychological experiment on 600,000 users has been interesting to say the least. From the dismissive (are you surprised?) to the offended (unethical, corrupt...), the reaction has been ‘contagious’, to use a word from the paper.

In the digital world, there may be such a thing as a free lunch, but if so it would appear that you are on the menu.

Has a boundary been crossed in this case? If you pay to have your emotions manipulated by a Spielberg blockbuster, or choose to read the Daily Mail, that is your choice. Here, however, 600,000 people had their personal feeds modified for a week in 2012 without their knowledge or consent. Facebook’s argument is that it’s ‘in the terms of service’, which it is, given how broadly ‘research purposes’ is drafted.

Best practice in psychological research clearly requires informed consent. Personally, I’m surprised that this got past an ethics committee in the first place. So a good case can be made on grounds of dubious ethics. It may be that the terms of service will need to be modified to avoid legal or regulatory challenge. It will be interesting to see what pressure is applied to clarify this or whether the whole thing will blow over.

The question of how sinister this type of manipulation could become with time is worthy of debate. I would caution against knee-jerk responses to create a legal straitjacket. The unintended consequences might be worse than the crime claimed.

Using the film analogy, consider the following. 600,000 people watch a film. Each of them sees a slightly different story that has been personally tailored to their emotional state and to what is known about their psychological profile. Half of them are manipulated to hate a group of people (religious, ethnic, disabled, migrant) and the other half to feel relaxed and supportive towards those same groups. If that were possible, would it be acceptable or not? Would it matter if they did not know that this was an experiment, because they had bought a ticket and it was in the small print?

(Note how my choice of subject is deliberately manipulative!)

The economics of such an experiment mean that today it would not be feasible. In time, however, how close do you think we could get to this type of targeting? Is the Facebook experiment something that we should stop dead in its tracks, or do we take a more tolerant, if wary, approach to such developments?

Given how litigious the US is, consider the following. If, out of the 600,000 users, one person in the negative group committed suicide in the week after this experiment, is there a case to answer? Sadly, given the suicide statistics, there will in reality have been multiple such cases in a population of 600,000.
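That claim can be checked with a back-of-envelope calculation. The baseline rate below is my own illustrative assumption (roughly the 2012 US figure of about 13 suicides per 100,000 people per year), not a number from the experiment or the paper:

```python
# Rough expected number of suicides in one week among 600,000 people.
# ANNUAL_RATE_PER_100K is an assumed illustrative baseline, not a
# figure taken from the Facebook study.
POPULATION = 600_000
ANNUAL_RATE_PER_100K = 13
WEEKS_PER_YEAR = 52

expected_per_week = POPULATION * (ANNUAL_RATE_PER_100K / 100_000) / WEEKS_PER_YEAR
print(f"Expected cases in one week: {expected_per_week:.1f}")  # roughly 1.5
```

So over the experiment week, and the weeks that follow, one would indeed expect more than one such case in a population this size, purely by baseline chance.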

To add to Bad Science, we now need to consider a class of Bad Data Science. In this case the research is badly designed, and the tool used is inappropriate to the task at hand. On top of that, the claims are wholly disproportionate to the measured effects. For a critique of the paper I recommend John Grohol’s analysis. What surprises me is how little effect the research demonstrates: the response is just 0.07 per cent. Whether that is due to the tools, the research design or the medium itself I cannot say. It is probably an order of magnitude less than the emotional response to a well-made film.
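To get a feel for how small 0.07 per cent is, here is a rough sketch. It assumes the figure describes a 0.07 percentage-point shift in emotional word use, which is my reading rather than something spelled out above:

```python
# How many words would a user need to write before a 0.07% shift
# amounts to a single changed word? (Interpretation of the figure
# is an assumption; the arithmetic is just 1 / effect.)
effect = 0.0007  # 0.07 per cent expressed as a fraction

words_for_one_word_change = 1 / effect
print(f"~{words_for_one_word_change:.0f} words per one changed word")  # ~1429
```

On that reading, a user would have to post well over a thousand words before the manipulation changed a single word of output.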

For what it’s worth, this is my position. I have decided not to make another post on Facebook until they agree not to conduct such psychological research without informed consent in the future. If they don’t accept this, I will close my account in due course.

Research of this nature could be hugely valuable, but I’d like to see it conducted with openness and transparency and in a proper ethical framework. That way we may all benefit.

About the author

Chris Yapp is a technology and policy futurologist. Chris has been in the IT industry since 1980. His roles have spanned Honeywell, ICL, HP, Microsoft and Capgemini. He is a Fellow of the BCS and a Fellow of the RSA.