Researcher Julian Jaursch calls for governments to invest more in promoting the digital news literacy of their citizens. He shares his doubts about microtargeting in campaigns and highlights the dangers of "dark ads."
DW: Elections have become a digital arms race in recent years. US President Donald Trump prioritized microtargeted ads on digital platforms as part of his 2016 campaign. From the voters' perspective, what exactly is involved in these methods of campaigning?
Julian Jaursch: For a potential voter, being microtargeted online usually involves three steps, roughly speaking: First, campaigns and platforms gather personal data from your web searches, likes, clicks, shares and comments. Second, they use that data to infer your identities, preferences, fears and hopes, and put you in a small group of people that the data says share those attributes. Third, they try to reach this rather homogenous, small group of people with their paid messages.
The goal of microtargeting is for campaigns to know better what content voters are susceptible to: it could be a message in support of a certain cause, or an image invoking a certain feeling. Or, in a more nefarious manner, it could be a message to dissuade certain people from voting or to spread lies. One issue that arises here is that microtargeting can involve “dark ads”, meaning paid messages that only the targeted group sees.
Is the use of microtargeting a fair instrument of election campaigning, or does it rather pose a danger to democracies?
It's both. Political campaigns across the globe pay to target people, which can be very useful to reach niche audiences and build followers. For example, candidates for public office can reach folks within their electoral districts this way. But there are big risks involved in microtargeting, too.
The danger for democracy really lies in the way in which the citizenry can potentially be divided into certain groups — based on personal data — and then be shown exactly those messages that the data shows they want to hear. This can strengthen existing in-group dynamics and heighten already existing polarization.
It can also distort political debates and undermine people’s opinion formation. It is hard to monitor these potential risks, because an ad can be shown to a small, homogeneous group of people, without others knowing. Neither other citizens nor other political campaigns might ever see the ad to call out potentially misleading or discriminatory statements.
Can political online advertising be improved through more transparency?
Yes, meaningful transparency is a great first step to keep a check on political ads online. Citizens, journalists and regulators need a chance to scrutinize what ads are out there, who is behind the ads, who is being targeted and who is maybe also left out of paid political messaging. And that’s only possible with meaningful transparency.
The EU and some big platforms have taken steps in this direction. For example, platforms have established what they call ad archives: databases of all political ads that allow some insights into targeting. But these ad archives need to be vastly improved; there should be mandatory standards, as well as reporting requirements on the targeting and ad delivery criteria.
Political advertisers, too, need to be transparent about the source of their funding and the expenditure for digital advertising, including influencers. Such transparency rules are really outdated in some European countries, including Germany.
Do companies like Facebook need to be regulated to prevent them from amplifying fake news and false reports?
Yes, this is a primary reason for regulating big tech companies like Facebook/Instagram, Google/YouTube, Twitter and TikTok. Just look at the issues a lot of societies all over the world are facing: the fast and wide spread of conspiracy myths surrounding a public health crisis, disinformation on voting that threatens to disenfranchise many people, discriminatory content that can prevent some people from taking part in online discussions. By no means do tech companies cause these issues, but they might amplify them.
That is why regulation is necessary, but it can't be the only measure: Platform regulation will not solve the issue of conspiracy myths, and it will not prevent people from bullying and threatening others. But clear rules for platforms could establish a transparency and accountability regime, so that it is not only the market logic of selling ever more targeted ads and individual CEOs' decisions that determine what people's online news and media spaces look like.
What can governments do to prevent social divisions from solidifying in the digital realm?
Decision-makers should take a long-term approach focusing on education and regulation: Everyone in the digital news and media space needs a set of skills that goes beyond merely knowing how to post something online. It's about knowing how to fact-check, how to engage in fruitful debates, how to have a healthy sense of skepticism about sources — but without being paranoid. Governments and the EU could support and fund more journalists and civil society organizations to strengthen citizens' digital news literacy.
On the regulation side, the EU can use the proposed Digital Services Act to create transparency reporting obligations, to establish independent oversight of platforms that have the power and means to audit various platforms’ algorithms, and to find common rules for paid political advertising in the digital public sphere. Considering the crucial role personal behavioral data plays in the amplification engines at big platforms, governments could also invest more in enforcing data protection rules.
Julian Jaursch is a project director at Stiftung Neue Verantwortung (SNV), a not-for-profit, non-partisan tech policy think tank in Berlin. Working on platform regulation in a German, EU and transatlantic context, he analyzes and develops policy options to tackle issues such as disinformation and political online advertising.
This interview was conducted by Martina Bertram.