Governing artificial intelligence: ethical, legal and technical opportunities and challenges

Abstract

This paper is the introduction to the special issue entitled: ‘Governing artificial intelligence: ethical, legal and technical opportunities and challenges’. Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like urban infrastructure, law enforcement, banking, healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and techniques like machine learning, can improve economic and social welfare and the exercise of human rights. Owing to the proliferation of AI in high-risk areas, the pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved and through which frameworks? This is one of the central questions addressed in this special issue, in which eight authors present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems. This introduction also gives a brief overview of recent developments in AI governance and of how much of the agenda for defining AI regulation, ethical frameworks and technical approaches is already set, and it offers some concrete suggestions to further the debate on AI governance.

This article is part of the theme issue ‘Governing artificial intelligence: ethical, legal, and technical opportunities and challenges’.

Keywords: artificial intelligence, law, ethics, technology, governance, culture

1. Introduction

Artificial intelligence (AI) increasingly permeates every aspect of our society, from the critical, like healthcare and humanitarian aid, to the mundane, like dating. AI, including embodied AI in robotics and techniques like machine learning, can enhance economic and social welfare and the exercise of human rights, and the various sectors mentioned can all benefit from these new technologies. At the same time, AI may be misused or behave in unpredicted and potentially harmful ways. Questions on the role of the law, ethics and technology in governing AI systems are thus more relevant than ever before. Or, as Floridi [1] argues: ‘because the digital revolution transforms our views about values and priorities, good behaviour, and what sort of innovation is not only sustainable but socially preferable – and governing all this has now become the fundamental issue’ (p. 2).

AI systems, most of which apply learning techniques from statistics to find patterns in large sets of data and make predictions based on those patterns, are used in a wide variety of applications. Owing to the proliferation of AI in high-risk areas, pressure is mounting to design and govern AI to be accountable, fair and transparent. How can this be achieved and through which frameworks? This is one of the central questions addressed by the various authors in this special issue, who present in-depth analyses of the ethical, legal-regulatory and technical challenges posed by developing governance regimes for AI systems.

Societies are increasingly delegating complex, risk-intensive processes to AI systems, such as granting parole, diagnosing patients and managing financial transactions. This raises new challenges, for example around liability regarding automated vehicles, the limits of current legal frameworks in dealing with ‘big data's disparate impact’ [2] or preventing algorithmic harms [3], social justice issues related to automating law enforcement or social welfare [4], or online media consumption [5]. Given AI's broad impact, these pressing questions can only be successfully addressed from a multi-disciplinary perspective.

This theme issue collects eight original articles, written by internationally leading experts in the fields of AI, computer science, data science, engineering, ethics, law, policy, robotics and social sciences. The articles are revised versions of papers presented at three workshops organized in 2017 and 2018 by Corinne Cath, Sandra Wachter, Brent Mittelstadt and Luciano Floridi (the editors) at the Oxford Internet Institute and the Alan Turing Institute. The workshops were titled ‘Ethical auditing for accountable automated decision-making’; ‘Ethics & AI: responsibility & governance’; and ‘Explainable and accountable algorithms’. This special issue proposes new ideas on how to develop and support the ethical, legal and technical governance of AI. It focuses on the investigation of three specific areas of research: ethical governance, explainability and interpretability, and ethical auditing.

A growing body of literature covers questions of AI and ethical frameworks [1,6–10], laws [3,11–14] to govern the impact of AI and robotics [15], technical approaches like algorithmic impact assessments [16–18], and building trustworthiness through system validation [19]. These three guiding forces in AI governance (law, ethics and technology) can be complementary [1]. However, the debate on when which approach (or combination of approaches) is most relevant remains unresolved, as Nemitz and Pagallo expertly highlight in this issue [13,17].

Across the globe, industry representatives, governments, academics and civil society are debating where legal-regulatory frameworks are needed and when, if ever, ethical or technical approaches suffice. Even if those questions are answered, the issue remains of the extent to which our existing ethical and regulatory frameworks sufficiently cover the impact of these technologies. Pagallo, for instance, highlights this conundrum by analysing the debate on the legal status of embodied AI (robots) in the EU [13]. Veale et al. [3] argue here that European data protection provides robust principles but that ‘many socio-technical challenges presented by machine learning and algorithmic systems more broadly are not wholly dealt with using the provisions in regulations such as the General Data Protection Regulation, which are the result of a slow evolution in definitions and concerns’ (p. 17). Winfield & Jirotka [9] specifically consider the role of technical standards in the ethical and agile governance of robotics and AI systems.

Academia is also debating its own approach to AI governance. In a recent article on ‘troubling trends in machine learning scholarship’, Lipton & Steinhardt [20], for example, warned against technical solutionism through the misuse of concepts like ‘fairness’ and ‘discrimination’. They argue that borrowing these complicated social concepts to talk about ‘simple statistics’ is dangerous because it is ‘confusing researchers who become oblivious to the difference, and policy-makers who become misinformed about the ease of incorporating ethical desiderata into machine learning’ (p. 5). Various academics have expertly questioned the imaginaries underlying data-driven technologies like AI [21] in current debates and highlighted the risks of using AI systems [22–24]. More work needs to be done to apply these critical lenses to the ethical, legal and technical solutions proposed for AI governance.

The articles in this special issue reflect the nuanced and advanced state of the debate. At the same time, the authors show that some of the legal governance solutions proposed are too limited in scope, and that particular ethical solutions suffer from conceptual ambiguity and a lack of enforcement mechanisms. Likewise, some technical approaches run the risk of narrowing down complicated social concepts, like fairness, beyond recognition or turning transparency into a box-ticking exercise. Hence, in addition to suggesting further ethical, legal and technical refinements, the articles in this special issue also critically assess the status quo of AI governance. In doing so, the authors highlight the importance of considering who is driving AI governance and what these individuals and organizations stand to gain, because, as Harambam et al. [5, p. 1] state: ‘Technology is, after all, never an unstoppable or uncontrollable force of nature, but always the product of our making, including the course it may take. Even with AI’.

There are clearly outstanding questions regarding what good AI governance should look like [2,25–27]. These questions are currently debated by political institutions across the globe, including the UK [28], South Korean [29], Indian [30] and Mexican [31] governments, as well as the European Commission [32]. Through the articles in this special issue, we hope to contribute to shaping these debates. To situate the various articles, the next section gives a brief overview of recent developments in AI governance and of how agendas for defining AI regulation, ethical frameworks and technical approaches are set.

2. Setting the agenda for AI governance

Academics and regulators alike are scrambling to keep up with the number of articles, principles, regulatory measures and technical standards produced on AI governance. In the first six months of 2018 alone, at least a dozen countries put forward new AI strategies [33], several pledging up to 1.8 billion [34] in government funding. Industry, meanwhile, is developing its own AI principles1 or starting multistakeholder initiatives to develop best practices. It is also involved in developing regulation for AI, whether through direct participation or lobbying efforts. These industry efforts are laudable, but it is important to position them in light of three important questions. First, who sets the agenda for AI governance? Second, what cultural logic is instantiated by that agenda? And third, who benefits from it? Answering these questions is important because it highlights the risks of letting industry drive the agenda and reveals blind spots in current research efforts.

Excellent work exists on the problematic developments in machine learning research regarding the conflation of complicated social concepts with simple statistics [20,35]. Similarly, various authors highlight how the unchecked use of ‘black box’ systems in finance [36], education and criminal justice [37], search engines [38] or social welfare [4] can have detrimental effects. Beer [39] aims to focus the debate on the ‘social power of algorithms’. He argues that the cultural notion of the algorithm serves ‘as part of the discursive reinforcement of particular norms, approaches and modes of reasoning’ (p. 11). As mentioned, it is not just how AI systems work, but also how they are understood and imagined [21], that fundamentally shapes AI governance. The next paragraphs will highlight some concerns and invite closer scrutiny of the cultural logic put forward by having industry actively shape the debate.

Many of the industry leaders in the field of AI are incorporated in the USA. An obvious concern is the extent to which AI systems mirror societies in the image of US culture and the predilections of American tech behemoths. AI programming does not necessarily require massive resources; much of an AI system's value comes from the data that is held. As a result, most of the technical innovation is led by a handful of American companies.2 As these companies are at the forefront of various regulatory initiatives,3 it is essential to ensure this particular concern is not exacerbated. An American, corporate needs-driven agenda will not naturally be a good fit for the rest of the world: the EU, for instance, has very different privacy regulations than the USA. But this is not the only concern.

AI systems are often presented as ‘black boxes’ [36] that are very complex and difficult to explain [23]. Kroll [19] shows that these arguments obscure the fact that algorithms are fundamentally understandable. He argues that ‘rather than discounting systems which cause bad outcomes as fundamentally inscrutable and therefore uncontrollable, we should simply label the application of inadequate technology what it is: malpractice, committed by a system's controller’ (p. 5). Yet, the cultural logic of ‘complicated, inscrutable’ technology is often used to justify the close involvement of the AI industry in policy-making and regulation [40]. Generally, the industry players involved in these policy processes represent the same select group that leads the business of online marketing and data collection. This is not a coincidence: companies like Google, Facebook and Amazon are able to gather large quantities of data, which can be used to propel new AI-based services. The ‘turn to AI’ thus both further consolidates these companies' market position and lends legitimacy to their inclusion in regulatory processes.

A related concern is the influence companies exert over AI regulation. In some instances, they act as semi-co-regulators. For example, after the Cambridge Analytica scandal, Facebook's CEO testified before a joint hearing of the US Senate Commerce and Judiciary Committees about his company's role in the data breach. During the hearing, he was explicitly asked [41] by multiple Senators to provide examples of what regulation for his company should look like. Likewise, the European Commission recently appointed a High-Level Expert Group on AI [42]. The group is mandated to work with the Commission on the implementation of a European AI strategy. The group's 52 members come from various backgrounds and, even though not all affiliations are apparent, it appears that almost half of the members are from industry, 17 are from academia and only four are from civil society. Marda, in this issue, highlights the importance of ensuring that civil society—often closest to those affected by AI systems—has an equal seat at the table when developing AI governance regimes. She shows that the current debate in India is heavily focused on governmental and industry concerns and goals of innovation and economic growth, at the expense of social and ethical questions [27].

Nemitz [17], likewise, focuses on how a limited number of corporations wield considerable power in the field of AI. He states in this issue: ‘The critical inquiry into the relationship of the new technologies like AI with human rights, democracy and the rule of law must therefore start from a holistic look on the reality of technology and business models as they exist today, including the accumulation of technological, economic and political power in the hands of the ‘frightful five’, which are at the core of the development and systems integration of AI into commercially viable services’. Industry's influence is also visible in the creation of various large-scale global initiatives on AI and ethics. There are clear advantages to having open norm-setting venues that aim to address AI governance by developing technical standards, ethical principles and professional codes of conduct. However, the solutions presented could do more to go beyond current voluntary ethical frameworks or narrowly defined technical interpretations of fairness, accountability and transparency. The various articles in this issue clearly indicate why it is vital to further address questions of hard regulation and the internet's business model of advertising and attention. If we are serious about AI governance, then these issues must be contended with holistically.

3. Concluding remarks

The argument presented in this article should not be read as a dismissal of the work done by industry, or of the relevance of current ethical and technical solutions and regulatory AI governance frameworks. Rather, much can be learnt from this ongoing work, but only if we carefully assess its aims, impact and process. It is crucial to remain critical of the underlying aims of AI governance solutions, as well as of their (unforeseen) collateral cultural impacts, especially in terms of legitimizing private-sector-led norm development around ethics, standards and regulation. Likewise, we must remain cognizant of the concerns not, or only partially, covered by phrases like fairness, accountability and transparency. In focusing on these issues, what is not discussed? Are we assuming that issues around AI and equity, social justice or human rights are automatically caught by these popular acronyms? Or are these concerns out of scope for the organizations pushing the agenda? Asking these hard questions matters because these concepts are increasingly making their way into regulatory initiatives [43] across the globe.

The authors in this special issue expertly engage with these various hard questions. From the articles, it becomes clear that the authors are unsatisfied with the current state of AI governance. Nemitz, for instance, argues in favour of fostering a new culture of technology and business development grounded in the rule of law, human rights and democratic principles [17]. Pagallo highlights the importance of pragmatism and of testing new forms of accountability and liability through methods of legal experimentation [13]. Veale et al. explore how machine learning models could be considered personal data under European data protection law and argue that ‘enabling users to deploy local personalization tools might balance power relations in relation to large firms hoarding personal data’ [3, p. 5]. Winfield and Jirotka argue that creating strong ethical principles is only the first step and that more should be done to assure implementation and accountability, because the real test of good governance of AI systems comes when the rubber hits the road, or rather, the robot.

Harambam et al. explore the notion of ‘voice’, both as a way of allowing individuals to exert more control over the algorithms in the news industry and as a way of mitigating the pitfalls of attempts at achieving algorithmic transparency [5]. The editors argue here, and in other pieces [26], that it is important to ensure equitable stakeholder representation when regulating AI. Furthermore, there is a need for more non-US-led initiatives like the Europe-based AI4People4 and the Council of Europe's Expert Committee on AI and Human Rights.5 Even though it is important to have more Europe-led initiatives, we must also incorporate concerns from the Global South. Marda's article about India highlights why these voices are especially relevant [27]. Similarly, it is essential to go beyond the fairness, accountability and transparency rhetoric to formulate what additional fundamental values should be included. Nemitz, Floridi and Marda, for example, argue for the inclusion of human rights principles [1,17,27].

Overall, the critical perspectives offered in this special issue highlight the nuances of the debate on AI, ethics, technology and the law and pave the road for a broader, more inclusive, AI governance agenda. Or as Kroll reminds us: ‘In general, opacity in socio-technical systems results from power dynamics between actors that exist independent of the technical tools in use. No artefact is properly comprehended without reference to its human context, and software systems are no different’ [19, p. 11].

The editors would like to thank the authors for their thoughtful engagement with the topics of this special issue. Their contributions are exemplary of the kind of multi-disciplinary research needed. In this introductory article, an attempt was made to highlight the various topics covered by the authors, but the short summaries included do not do justice to the rich and invigorating arguments made in the individual articles. The articles both reflect the three central themes of this special issue (ethical governance, explainability and interpretability, and ethical auditing) and critically assess the current state of AI governance. Throughout this special issue, the reader is invited to, as Floridi argues, resist the distracting narrative that ‘digital innovation leads, and everything else lags behind, or follows at best: business models, working conditions, standards of living, legislation, social norms, habits, expectations and even hope’ [1, p. 2].

Acknowledgements

We thank the Oxford Internet Institute (OII), the Alan Turing Institute (ATI) and in particular the ATI's Data Ethics Group (DEG) for supporting the workshops that led to this Special Issue. We also express our gratitude to the PETRAS Internet of Things research hub for their support. The author would also like to thank Vidushi Marda, Joris van Hoboken, Andrew Selbst, Kate Sim, Mariarosaria Taddeo and Robert Gorwa for their excellent feedback on this article.

Footnotes

2. We recognize that there are various major technical players in China and other Asian countries that play a significant role in furthering technological developments in the field of machine learning. However, these companies play a less prominent role in global policy development regarding AI governance than American companies.

Data accessibility

This article does not contain any additional data.

Competing interests

I declare I have no competing interests.

Funding

Cath's and Floridi's contributions to the editing of this theme issue have been funded as part of the Privacy and Trust Stream—Social lead of the PETRAS Internet of Things research hub. PETRAS is funded by the Engineering and Physical Sciences Research Council (EPSRC), grant agreement no. EP/N023013/1.


