
Vast increase in algorithmic systems in Europe: Transparency, oversight and competence still lacking

The deployment of automated decision-making (ADM) and AI-based systems in Europe has vastly increased over the last two years. Used by public authorities and private companies alike, they affect the lives of millions of citizens. But transparency, oversight and competence are still lacking. There is an urgent need for policy makers at EU and member state level to address this gap, otherwise they risk jeopardizing the potential benefits of such systems. This is the result of the most comprehensive research on the issue conducted in Europe to date, which AlgorithmWatch and we compiled for the 2020 edition of the Automating Society report.

The deployment of automated decision-making (ADM) and AI-based systems in Europe has vastly increased over the last two years, affecting the access of millions of citizens to rights and services, and thus to opportunities in life. And yet, most such systems are adopted without meaningful democratic debate, and either lack transparency as to their actual goals, workings and efficacy or, when such evidence is available, demonstrably fail to benefit individuals and society in practice.

These are the main results of the 2020 edition of the 'Automating Society' report, an unprecedented research effort to assess how automated processes and decisions are currently impacting all aspects of everyday life, ranging from welfare to health, education, justice and policing, in 16 European countries.

Through the research and investigations of a network of researchers located across the continent, the project has been able to document and illustrate that, in certain cases, ADM systems can actually be a force for good. For example, automation helps assess the risk of gender-based violence in Spain, and has reduced medical prescription fraud in Portugal by 80% in a single year.

Detailed examination of more than 100 examples of ADM systems

But the report's detailed examination of more than 100 examples of ADM systems deployed all over Europe demonstrates that the vast majority of uses tend to put people at risk rather than help them – failing, for example, to fairly assess students' grades in the UK, to correctly detect social welfare fraud in the Netherlands, or to accurately predict the prospects of the unemployed in Denmark.

"Automated decision-making systems surely have the potential to positively contribute to society", says Fabio Chiusi, project manager at AlgorithmWatch and lead researcher of the report, "but our work shows that in practice this has so far been the exception, rather than the norm".

Good intentions – poorly implemented

We see many cases of good intentions that are then poorly implemented. To realize the full potential of such software systems, we need a European framework with coherent rules on transparency, oversight and enforcement mechanisms, resulting from an informed and inclusive democratic debate.

Sarah Fischer, expert on algorithmic decision-making at Bertelsmann Stiftung

The report also clearly shows that it is possible to challenge the emerging unfair and opaque algorithmic status quo, and to right the ADM wrongs. In many of the analyzed countries, journalists, academics and civil society organizations succeeded in bringing an increasing number of opaque and rights-infringing ADM systems to a halt, effectively operating as watchdogs of the automated society.

Policy recommendations for greater benefit

While essential, this is of course not enough to structurally enable and guarantee the beneficial use of ADM systems. This is why the report details a set of actionable policy recommendations that, once implemented, would make it possible both to reap the benefits of such systems and to minimize their shortcomings. AlgorithmWatch and we recommend to:

  • increase the transparency of ADM systems by establishing public registers for ADM systems used within the public sector and by introducing legally-binding data access frameworks to support and enable public interest research;
  • create a meaningful accountability framework for ADM systems by developing approaches to effectively audit them, by supporting civil society organizations as watchdogs, and by banning "high-risk" ADM systems, such as face recognition, that might amount to mass surveillance;
  • enhance algorithmic literacy and strengthen public debate on ADM systems by establishing independent centers of expertise on ADM and by promoting an inclusive and diverse democratic debate around ADM systems.

The country-specific reports can also be accessed separately in the respective local languages. They are translated excerpts of the Automating Society Report 2020: