SHOULD WE TRUST ARTIFICIAL INTELLIGENCE?

Margit Sutrop
  1. Introduction

    Trust is believed to be a cornerstone for artificial intelligence (AI). In April 2019 the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG) published its Ethics Guidelines for Trustworthy AI, stressing that human beings will be able to confidently and fully reap the benefits of AI only if they have trust in it. The Guidelines call for the development of 'Trustworthy AI', featuring a human-centric approach and emphasizing two components: (1) respect for fundamental rights, applicable regulations, and core principles and values, ensuring an 'ethical purpose', and (2) technical robustness and reliability, to avoid unintentional harm caused by a lack of technological mastery (EU Commission 2019b: I). Trustworthy AI is, according to the Guidelines, ethical, lawful, and robust AI.

    The formulation of the Guidelines is a pioneering initiative, since for the first time they set forth a normative framework for developing, deploying, and using AI in the EU, while also aspiring to offer guidance to discussions taking place outside the EU. Arguing that the EU should follow a human-centric approach, in which humans have primacy in the civil, political, economic, and social fields, the Guidelines employ an individual rights-based approach. They ground Trustworthy AI in fundamental rights and four principles (respect for human autonomy, prevention of harm, fairness, and explicability), using these to formulate specific requirements in the AI context. The document also describes technical and non-technical methods for achieving Trustworthy AI.

    However, three things strike me about the Guidelines. Firstly, although there is much talk about trust, surprisingly little is said about what constitutes trust and what it depends upon. Yet how we understand the nature of trust makes a difference to what we can say about the conditions under which trust is justified, and about how trust should be built and maintained.

    Trust seems to be understood in terms of trustworthiness, as there is an implicit assumption that being demonstrably worthy of trust will create trust. The Guidelines identify Trustworthy AI as the European Commission's foundational ambition, since trustworthiness is "a prerequisite for people and societies to develop, deploy and use AI systems" (2019b: 4). Ideally, those whom we trust will be trustworthy, and those who are trustworthy will be trusted. In reality, however, possessing the property of trustworthiness is no guarantee of being trusted: sometimes those whom we trust are not trustworthy, and those who are trustworthy are not actually trusted. Thus, if it is important that people trust AI systems, it is not enough to establish and articulate the goal of achieving trustworthy AI. It is imperative that we also think about how to build trust in AI.

    Secondly, it is surprising that the document talks about trust in AI rather than reliance on it. Since Annette Baier's study (1986) it has been commonplace in the philosophical literature to distinguish between trust and reliance. Trust is thought to be an interpersonal relationship between two peers, while our attitudes towards inanimate objects, such as cars, computers, or alarm clocks, involve the mental attitude of reliance. An important condition for trust is the possibility of betrayal, whereas the corresponding condition for trustworthiness is the power to betray. The standard view in the philosophical literature is that people considered trustworthy have the power to betray us, whereas people and inanimate objects considered merely reliable can only disappoint us (Baier 1986, Holton 1994, Wright 2010, McLeod 2015).

    The way in which the European Commission's Ethics Guidelines for Trustworthy AI talk about trust in AI and trustworthy AI raises the question of whether AI is being treated on a par with humans, or whether the document simply ignores this conceptual distinction as it is made in the philosophical literature. One should also ask whether there are any consequences to talking about the trustworthiness of AI instead of its reliability or accountability.

    Thirdly, it is noteworthy that the Guidelines employ an individual rights-based approach, ignoring the fact that liberal individualism, with its conceptual base of autonomy, dignity, and privacy, has, after a long period of dominance in research ethics, increasingly come under attack from communitarian ideologies, which promote a more salient role for the concepts of solidarity, community, and public interest (Sutrop 2011a; 2011b). The Guidelines list four principles: respect for human autonomy, prevention of harm, fairness, and explicability, the source of which seems to lie in existing legal requirements, with the admission that "adherence to ethical principles goes beyond formal compliance with existing laws" (EU Commission 2019b: 12). There is also some inconsistency: on the one hand, the Guidelines stress that AI systems should respect the plurality of values and choices of individuals (EU Commission 2019b: 11); on the other hand, they claim that certain fundamental rights and principles, such as human dignity, are absolute and cannot be subject to a balancing exercise (EU Commission 2019b: 13). Moreover, the Guidelines offer no advice on what should be done if the principles conflict.

    In the following I will first provide a brief overview of the definition of AI and of how the benefits and risks of AI are envisioned in the scholarly literature. I will then conduct a philosophical analysis of trust and distinguish between its different forms. On the basis of this conceptual analysis of trust, I hope to be able to answer the question of whether we should avoid talking about trust in AI and limit the concept of trust to peer relationships, thereby also reserving the concept of trustworthiness for the people and institutions who design, deploy, and govern AI. The alternative, if it is warranted, is to continue to speak of trusting AI and of its trustworthiness. In the last part of the article I shall point out that metaphorical talk about trustworthy AI and ethically aligned AI ignores the real disagreements that we have about ethical values.

  2. The definition of AI

    Artificial intelligence has been described in various ways. AI can refer to certain systems designed by humans, or to a scientific discipline that includes several approaches and techniques, such as machine learning, machine reasoning, and robotics. In this paper I will draw upon the definition of AI developed by AI HLEG in the document "A Definition of AI: Main Capabilities and Disciplines," made public in April 2019.

    AI is defined by AI HLEG as follows: "Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions" (EU Commission 2019a).

    Currently, AI systems are narrowly dedicated to specific tasks such as face recognition, spam filtering, or self-driving, and they are not capable of setting their own goals or choosing the best courses of action across domains. An AI which, with human-level ability or beyond, is able to find a solution when presented with an unfamiliar task is called Artificial General Intelligence (AGI). The achievement of AGI is hypothesised to lead to an intelligence explosion, facilitated by the recursive self-improvement of the AGI, eventually attaining the level of Superintelligence (SAI) (Kurzweil 2005, Tegmark 2017). SAI is "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest" (Bostrom 2014: 26).

    Many researchers now take seriously the possibility that, within the current century, intelligence equal to our own will be created in computers. Freed from biological constraints such as limited memory and slow biochemical processing speeds, machines may eventually become more intelligent than we are, with profound implications for us all. As the famous physicist Stephen Hawking puts it in his posthumously published book Brief Answers to the Big Questions: "... the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. [...] Our future is a race between the growing power of our technology and the wisdom with which we use it. Let us make sure that the wisdom wins" (Hawking 2018).

    The impact of AI will permeate all spheres of our lives, from commercial and social interactions to relationships with the state, including dramatic structural transformations in the public sphere (cf. Habermas [1962] 1991, Beck 1992, 2016). Let us look at some specific examples. (1) If AI gets increasingly better at performing tasks towards a given goal (e.g. good governance, better healthcare), then we ought to let AI work on that goal. This requires figuring out the proper distribution of control between humans and AI, which is, in turn, premised on humans trusting AI. (2) The power and effect of bots to influence social media discussion have been well documented and researched (e.g. Suarez-Serrato et al. 2016, Hwang and Rosen 2017, Bessi and Ferrara 2016). The more advanced AI becomes, the easier it will be to influence public discussion. Thus, AI has the potential to erode trust in publicly available information. (3) In the context of healthcare, AI is already playing a significant role in triaging and diagnosing patients (Varun et al. 2018). Given that we already have several diagnostic devices that outperform human...
