The need to control psychological targeting
Das Magazin N°48 – 3 December 2016
An important contribution by Sandra Matz to the discussion about the impact of big-data methods, specifically psychological targeting, on democratic decision-making.
Sandra Matz is a researcher specializing in psychometrics at the University of Cambridge in England. Together with Prof. Michal Kosinski of Stanford University, she has conducted a series of tests to measure the effectiveness of methods like those used by Cambridge Analytica.
We Need to Learn How to Control Psychological Targeting
Psychological targeting refers to the practice of tailoring marketing content to the psychological characteristics of individuals or small groups of people. It has recently become front-page news, thanks to an article in Das Magazin discussing how the UK-based company Cambridge Analytica implemented such technology to support Donald Trump in the 2016 U.S. Presidential Election.
As a researcher who is closely involved in the development and evaluation of psychological targeting methods, I would like to thank the authors Grassegger and Krogerus for their insightful and thought-provoking piece on the dark side of such technologies. The lively and controversial discussion it has sparked is much needed and was long overdue. I would like to take the opportunity to contribute to this discussion by addressing two questions that have been raised again and again. First, to what extent did psychological targeting really decide the Presidential Election? And second, what can we do to protect ourselves (and society at large) from such forms of opaque influence?
Before I share my thoughts on these questions, let me briefly talk about the concept of psychological targeting. Because in a way, psychological targeting is nothing new. Marketers have long tailored their placement of advertisements based on their target group, for example by placing ads aimed at conservative consumers in magazines read by conservative audiences. What is new about the psychological targeting methods implemented by Cambridge Analytica, however, is their precision and scale. According to CEO Alexander Nix, the company holds detailed psycho-demographic profiles of more than 220 million US citizens and used over 175,000 different ad messages to meet the unique motivations of their recipients. This is huge!
Now back to the question of whether – and to what extent – psychological targeting has played a role in the presidential election. Here is what we know: Cambridge Analytica, a UK-based company headed by Alexander Nix, implemented psychological targeting methods to support the Trump/Pence 2016 Presidential Campaign. But were those campaigns really responsible for Trump’s election as president? When faced with a difficult question like the one above, researchers tend to search for answers in two ways. First, we consult existing research that allows us to make an educated guess at what the answer might be. Second, we look for empirical evidence that proves (or disproves) our educated guess.
So what can we learn from existing research? We know, for example, that psychological traits such as personality are strongly correlated with a person’s political orientation, and that they predict voting patterns above and beyond sociodemographic variables such as age, gender, or education. We also know that marketing messages in a wide variety of contexts – including political attitudes and voting intentions – can be made significantly more persuasive if they are tailored to the psychological characteristics of their recipients. And finally, my own work demonstrates the power of real-life psychological targeting on Facebook. People were more likely to install an app that was matched to their personality characteristics than to install an app that was contrary to their characteristics. Similarly, people were more likely to buy from an online beauty retailer if the marketing message used to promote the brand was in line with, rather than opposed to, their personality. Taken together, these existing scientific findings suggest that the psychological targeting methods implemented by Cambridge Analytica may indeed have contributed to making Trump’s campaigns more effective.
However, without the hard, scientific, and concrete empirical evidence Cambridge Analytica has so far failed to produce, there cannot be a conclusive answer to the question of whether – or at least, to what extent – their psychological targeting campaigns were responsible for Trump’s victory. Without any information on the details of their psychological targeting campaigns – such as who was targeted, with which materials, when, and through which channels – we can make an educated guess, but cannot know for sure.
Given this lack of empirical evidence, I believe that the fierce and highly opinionated discussion around the question of whether or not Cambridge Analytica made Trump the 45th president of the United States of America is of relatively little value. The discussion I would like to see occur instead is one about the broader ethical implications of the fact that Cambridge Analytica was able – and legally authorized – to (a) collect the personal data of millions of unsuspecting US citizens, (b) use this data to make inferences about people’s highly intimate traits, such as their personality, and ultimately (c) employ large-scale psychological targeting to influence people’s voting behaviors without their awareness (an aspect that is very nicely covered in the original article).
The problem here is not the existence of technologies such as psychological targeting. Indeed, as with most scientific inventions, psychological targeting itself constitutes a fairly neutral tool that can be used to both benefit and harm society. Take the case of “nudging” for example, which refers to the increasingly widespread application of insights from the behavioral sciences, now adopted by over thirty governments around the world. By understanding how citizens make decisions, governments have been able to save taxpayers billions of dollars while improving society’s well-being on a number of important dimensions (such as health and retirement savings). Similar to the concept of nudging, psychological targeting can provide a powerful tool to encourage people to eat healthier, donate more to charity, or reduce their energy consumption.
The real problem is the lack of legal regulations to control and restrict the application of such technologies. Without such regulations, we are all at the mercy of the powerful who have the means to develop and obtain such technologies. And when I say all, I mean all – not just the younger generation that all too willingly shares information about themselves and their friends on social media platforms such as Facebook. Yes, you might not have a Facebook account, and you might even make sure to browse the web in private mode and to use search engines that don’t track your search queries. But what about the credit card you use to do your daily shopping, or simply the fact that the smartphone you carry for most of the day has its location and microphone services switched on? Our lives today are so dependent on digital devices that it is almost impossible for anyone to escape the data-sucking, predictive machinery of companies such as Cambridge Analytica.
The question is not whether psychological targeting will be applied in the future – inevitably, it already has been and it will be – but whether we can learn how to use it in the best interest of both individuals and society at large. So let’s stop arguing about whether Cambridge Analytica really decided the US Presidential Election and instead start discussing how we can use the impetus this story has created to make a meaningful change in the future. Let’s push for research that will help develop scientifically based recommendations on how psychological targeting can be implemented in an ethical way – one that is transparent to its users and gives them back ownership of their data. And most importantly, let’s all – yes, that’s me, you, your family and friends – show policy-makers and companies that we care about our privacy and do not tolerate the secretive and opaque application of predictive technologies such as psychological targeting.