Countering disinformation on the eve of elections – is regulation of online platforms a viable solution?

Vittoria Zanellati


Disinformation has been cropping up on online platforms for years, but Europe’s recent election season has highlighted the potentially disruptive impact of digital misinformation on voters’ choices. Elections in Germany, France, the Netherlands and the United Kingdom, as well as in the United States, have been marred by a wave of online falsehoods – shared with potential voters either by traditional media outlets or through social media. Indeed, there is reason to believe that the use of disinformation to sway people’s opinions will continue to pose ominous risks to democracies. The spread of this phenomenon imposes broader social costs: consumers who mistake a false outlet for a legitimate one hold less accurate beliefs and are worse off as a result, and these less accurate beliefs may in turn undermine the ability of the democratic process to select high-quality candidates. At the same time, consumers may also become more sceptical of legitimate media outlets, to the extent that these are increasingly perceived as being on the same level as producers of false news. These effects may be reinforced in the long run by supply-side responses: reduced demand for high-precision, low-bias reporting weakens the incentives to invest in accurate journalism. It is therefore important that individuals have a firm understanding of what reputable journalism is and are proactive in distinguishing real news from disinformation.

The growing impact of false news is inherently linked to changes in media technology. On the one hand, barriers to entry in the media industry have dropped precipitously, both because it is now easy to set up websites and because it is easy to monetize web content through advertising platforms. On the other hand, social media are well suited to spreading disinformation, since content can be relayed among users with no significant third-party filtering, fact-checking, or editorial judgment. While traditional media companies struggle to define a new business model amid falling advertising revenues and declining readership, online platforms such as Google and Facebook have become a kind of influential editor-in-chief, curating users’ news intake through algorithms that favour posts that engage an audience – exactly what much false news is specifically designed to do.

Russia, notably, has been repeatedly linked to disinformation campaigns at election time. Sophisticated hacking and false news operations to influence voters’ choices in the US, France, the Netherlands and the UK were largely associated with hackers linked to the Kremlin, as well as with Facebook accounts with apparent Russian ties purchasing significant amounts of political advertising aimed at voters. Given the challenge of countering digital misinformation, in the run-up to elections fact-checking programs were launched by local publishers to debunk myths almost in real time, while tech giants like Facebook and Google had to take action to raise public awareness of online misinformation and introduce new tools to help counter it. As far as educational initiatives are concerned, Facebook – which has been harshly criticized in the U.S. for failing to combat disinformation on the social network – has provided its users with a set of tips to help them identify false reports. Eager to show it is tackling the problem at a technical level, Facebook has also rolled out a tool that lets journalists spot false news when it is flagged by Facebook users and has partnered with investigative journalists to fact-check material shared on the social network. Moreover, Facebook has recently announced efforts to boost the transparency of its advertising, including making political advertisers verify their identity and creating new graphics that let users click on ads to find out more about who is behind them. In a similar vein, Twitter has announced the launch of an advertising transparency centre with stricter rules for political ads and has banned all accounts owned by Russia Today (RT) and Sputnik from advertising. Despite these initiatives proposed by online platforms themselves – from partnering with fact-checkers to depriving suspicious sites of advertising income – tech giants are still inadvertently promoting online misinformation.

The establishment of a regulatory framework that forces online platforms to take greater responsibility for their content has also been discussed recently as a way to counter disinformation. It is worth noting that false news per se, however harmful, is not considered ‘illegal content’. Consequently, it is impossible to sue its disseminators, or the online platforms that facilitate its spread, unless it includes illegal elements (e.g. hate speech, incitement to violence, copyright infringement or defamation). So far, the European Commission has proposed several initiatives aimed at enforcing the quick recognition and removal of illegal hate speech by online platforms (the Audiovisual Media Services Directive reform and the Code of Conduct on Fighting Illegal Hate Speech Online). The EU’s East StratCom Task Force also works to counter a rising tide of false news and anti-EU propaganda designed to destabilize people’s trust in institutions – although with limited personnel and financial resources. Similarly, the Czech Republic has set up a police agency to scan social media for disinformation and other “hybrid threats”, while French President Macron banned two Russian news organizations, RT and Sputnik, from his campaign headquarters in April.

These efforts notwithstanding, the strictest position has been adopted by the German government in the run-up to the September 2017 elections – which, in the event, appeared to have been largely unaffected by disinformation. Under the Network Enforcement Act – passed by the Bundestag in June 2017 and entering into force in October – online platforms will face fines of up to €50 million for failing to remove “obviously illegal” content – including hate speech, defamation, and incitement to violence – within 24 hours. Web companies will have up to one week to decide on cases that are less clear-cut. However, the law has fuelled concerns among many politicians, lobbyists and tech companies over its potentially drastic implications, which could muzzle free speech online. The Act gives social networks a new, weightier role in the political debate as well as a task of legal enforcement, since they will be entrusted with establishing what constitutes false news, defamation or propaganda. It is worth noting that, on the one hand, the German approach might strengthen online platforms’ accountability, since actors such as Facebook can no longer claim to be neutral channels of communication, indifferent to what their users say. On the other hand, concerns about freedom of expression are valid, as the line between policy makers’ views on hate speech and misleading reports and what counts as legitimate freedom of expression is a thin one. It is all too easy to imagine a government using the blurred fake news/hate speech discourse to censor inconvenient views – the most recent example being Donald Trump’s embrace of the “fake news” label to discredit the media. At the same time, obligations to take down illegal content should be subject to proper judicial oversight or to transparency and reporting obligations, so as to contain the concerns raised by privatized governance solutions, whereby online platforms govern the conduct of billions of users through their terms of service and non-transparent editorial practices.

Online platforms clearly influence public life, just as online falsehoods shape public debate. Although voluntary measures to fight disinformation – such as increasing the firepower of fact-checking initiatives and boosting the transparency of online platforms’ advertising – should be encouraged, they are not sufficient. State-imposed legislation can sometimes enhance the protection of the public interest, and the German government has embraced this regulatory approach. However, only time will tell whether the law eventually results in a safer, more truthful internet or in a more censored one. In the meantime, there is also a clear need to improve media literacy among internet users, many of whom appear unable to distinguish between facts and fiction online, or unaware of the consequences of their online activity for their audience. Education would address the problem of disinformation on the demand side: too many users forward content without reading anything but the headline, and too few care about its source. This is a cultural problem, one shaped by disconnections in values, relationships and the social fabric. It is therefore vital that debates on online misinformation and its impact do not prevent us from examining the underlying concerns of those who believe false stories and who do not trust mainstream media.

Vittoria Zanellati is currently hosted by ISFED as a research intern with the support of a grant from the Lazio Region aimed at developing her own research project.