Predictive policing, credit assessments, job recruitment – algorithmic systems are impacting many areas of life today. If machine-based decisions are to serve people and their needs, society must shape how those decisions are made, for example by establishing criteria that ensure algorithmic predictions are truly beneficial. For this to be the case, predictions must be applied objectively, must be open to challenge and must be independently verifiable.
For decades, films and novels have featured artificial beings that are both conscious and pursue their own agendas. That does not describe the systems which currently assess creditworthiness, calculate insurance rates, pre-select job applicants or suggest the neighborhoods where police officers should patrol. Our working paper When Machines Judge People presents nine examples of how algorithmic decision-making (ADM) processes are currently being deployed.
One finding: Algorithmic systems can have a significant impact on the opportunities that many people have to participate in society. One such area is the selection of job candidates. On the one hand, automated processes can highlight competencies and, by identifying a person's abilities, lead them to their dream job even if they do not have all the formal qualifications that are typically required. On the other hand, widespread use of ADM processes can exacerbate discrimination and errors. If, for instance, an algorithmic system used by many different employers rejects certain types of job applicants because of faulty programming, what is at stake for these individuals is not just one job, but access to a major part of the labor market.
If the opportunities inherent in ADM processes are to be used to increase participation, this goal must be taken into account when the processes are planned, programmed and implemented. Otherwise, these tools can lead to greater social inequality – something that must be avoided at all costs.
The case examples presented in the paper show the opportunities and risks inherent in such processes. One example: Algorithmic systems make very consistent decisions, reliably applying the predetermined decision-making logic in each individual case. That is beneficial since, in contrast to human beings, software does not have good and bad days and does not arbitrarily apply new criteria which might be unsuitable or discriminatory. At the same time, however, precisely this consistency entails risks, since faulty or ethically inappropriate decision-making logic is also applied unfailingly in every case. Another potential danger: in unusual situations, the system lacks the flexibility to consider information that is important to the specific case at hand and to weigh the relevant factors differently than it otherwise would.
Action is required not only on a technical level. Based on numerous case studies, the analysis shows that every stage of the socioinformatic process must be shaped to promote participation. For example, the selection of data and the steps taken at the beginning of the development process to make concepts such as "creditworthiness" measurable can themselves be informed by normative attitudes that reflect fundamental ethical values. A broad-based social discourse must therefore take place to address these issues.
An important – indeed indispensable – mark of quality is the possibility of scrutinizing ADM processes. In many of the cases discussed, the social consequences are known only because independent third parties invested the time and money needed to collect and evaluate data and, occasionally, to take the legal steps required to gain access to the relevant information. Whether a societal debate about the impact of certain algorithmic processes is even possible currently depends on such institutions. If a solution-oriented discourse is to take place – one that ultimately ensures machine-based decisions shape participation and serve people – it must therefore be possible both to check those decisions and to understand how they were reached.
The working paper is an initial interim assessment made as part of a larger exploration of "Participation in the Age of Algorithms and Big Data," which we are using to examine how digital developments are impacting social participation.