Lessons from Cambridge Analytica – Using Digital Data to Drive Decision Making

In a world being digitally transformed, the fear for many is that the digitally empowered will increasingly exploit technology for their own personal gain, manipulate laws in their favour, and influence politics and society through data-driven targeting of individuals and groups. The poster child for many of these fears is the Cambridge Analytica scandal. Much has been written and discussed about it, but the results of a 3-year investigation released this month provide an opportunity to reflect on two of the broader lessons.

From the earliest definitions and discussions of the digital economy in the 1990s, concerns have been expressed about the influence digital technologies may have on our behaviours, and the impact they might be able to exert on our society and institutions. These fears came to the forefront for many with the Cambridge Analytica scandal during the 2016 US election. The company had been using data gathering and analysis to influence elections in all parts of the world for several years. However, with its activities in 2013–16 around the Brexit referendum and the US election, it was accused of obtaining large amounts of data from Facebook without users’ consent to target political advertising and influence the results.

After reports of these activities surfaced in 2018, there was a great deal of discussion about the sophisticated technologies employed by Cambridge Analytica and the way these had changed the behaviour of many people. Can the results of elections be changed by those with the digital know-how to nudge voters’ thinking in one direction or another? A new report on Cambridge Analytica casts doubt on this.

The report is a great reminder that in the world of AI and data science, not everything is as it seems. The Cambridge Analytica story illustrates this perfectly. The analysis of large amounts of personal data to learn more about an individual’s behaviour is combined with a barrage of prompts to nudge opinions and convert indecision into action. Far from being a trivial matter of influencing purchasing decisions or getting us to like a video, these techniques have huge implications when technology, data surveillance, and politics collide.

However, as outlined in Laurie Clark’s article summarizing the report in the New Statesman, the substantial 3-year investigation by the UK Information Commissioner’s Office (ICO) concluded that there was no evidence that Cambridge Analytica had misused data to influence Brexit or to aid Russian interference in elections. What is more, the investigation cast a great deal of doubt on Cambridge Analytica’s claims in two key areas, with important implications for our broader understanding of the digital economy.

The first key finding was that Cambridge Analytica was not doing anything novel in its application of data science to understand and analyze personal data. The investigation found that the company relied on commonly used data science techniques; there was no magic in how they gathered data, examined it, and made predictions. What they achieved is possible for anyone using commonly available technology, infrastructure, and skills.
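To make that point concrete, here is a minimal sketch of the kind of off-the-shelf pipeline the report describes as “commonly available”: a stock propensity model built with scikit-learn. The synthetic data, the feature set, and the idea of ranking “persuadable” profiles are illustrative assumptions on my part, not a reconstruction of Cambridge Analytica’s actual models.

```python
# Illustrative sketch only: synthetic profile data scored with a stock
# logistic-regression model, showing that this kind of propensity scoring
# needs nothing beyond widely available open-source tooling.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical profile features (e.g. age bands, engagement counts, interest flags).
n_people, n_features = 5_000, 12
X = rng.normal(size=(n_people, n_features))

# Hypothetical label: did this person respond to an earlier prompt? (synthetic)
true_weights = rng.normal(size=n_features)
y = (X @ true_weights + rng.normal(scale=2.0, size=n_people)) > 0

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Fit a completely standard classifier -- the "commonly used technique".
model = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# Score unseen profiles and rank the highest-propensity ones for targeting.
scores = model.predict_proba(X_test)[:, 1]
top_targets = np.argsort(scores)[::-1][:10]
print("Held-out accuracy:", round(model.score(X_test, y_test), 3))
print("Indices of ten highest-scoring profiles:", top_targets)
```

Nothing in this sketch goes beyond an introductory data science course, which is precisely the investigation’s point: the barrier to this kind of profiling is access to data, not access to exotic technology.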

The second critical point from the review was to cast doubt on whether Cambridge Analytica’s approach of data-driven micro-targeting influenced the election results at all. The evidence is mixed. However, there is certainly no data showing that these techniques were able to change voting habits so that Cambridge Analytica helped US states “turn red instead of blue”, as claimed by their former director in the Netflix documentary The Great Hack. We just don’t know enough about the triggers for nudging voting behaviour to make such claims.

The outcome of this review is therefore both disturbing and reassuring. What Cambridge Analytica did to understand and target individuals with state-of-the-practice approaches is something happening to us all the time. These attempts to influence us will only become more commonplace, and their range will undoubtedly grow. Even so, as individuals with different experiences and ways of seeing the world, it is good to know that we are not so easily pushed, prodded, and persuaded as we had feared.

It is comforting to conclude that in a world increasingly instrumented, automated, and guided by sophisticated data-driven algorithms, it is the fact that we are human that may well be our best hope.

Originally posted on Alan Brown’s Dispatches
