Digital Ethics Lab Yearbook
Jakob Mökander
Marta Ziosi Editors
The 2021
Yearbook
of the Digital
Ethics Lab
Digital Ethics Lab Yearbook
Series Editors
Luciano Floridi, Oxford Internet Institute, Digital Ethics Lab,
University of Oxford, Oxford, UK
Department of Legal Studies, University of Bologna, Bologna, Italy
Mariarosaria Taddeo, Oxford Internet Institute, Digital Ethics Lab,
University of Oxford, Oxford, UK
The Alan Turing Institute, London, UK
The Digital Ethics Lab Yearbook is an annual publication covering the ethical
challenges posed by digital innovation. It provides an overview of the research from
the Digital Ethics Lab at the Oxford Internet Institute. Volumes in the series aim to
identify the benefits and enhance the positive opportunities of digital innovation as
a force for good, and avoid or mitigate its risks and shortcomings. The volumes
build on Oxford’s world leading expertise in conceptual design, horizon scanning,
foresight analysis, and translational research on ethics, governance, and
policy making.
Jakob Mökander • Marta Ziosi
Editors
The 2021 Yearbook
of the Digital Ethics Lab
Editors
Jakob Mökander
Oxford Internet Institute
University of Oxford
Oxford, UK
Center for Information Technology Policy
Princeton University
Princeton, NJ, USA

Marta Ziosi
Oxford Internet Institute
University of Oxford
Oxford, UK
ISSN 2524-7719 ISSN 2524-7727 (electronic)
Digital Ethics Lab Yearbook
ISBN 978-3-031-09845-1 ISBN 978-3-031-09846-8 (eBook)
https://doi.org/10.1007/978-3-031-09846-8
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Switzerland AG 2022
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of
illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the
editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Preface
The field of digital ethics – whether understood as an academic discipline or an area
of practice – is maturing. This process has both been propelled and reflected by two
long-term trends. First, and most importantly, the focus of the discourse concerning
how to design and use digital technologies is increasingly shifting from ‘soft ethics’
to ‘hard governance’. The second trend is an ongoing shift from ‘what’ to ‘how’,
whereby abstract or ad-hoc approaches to AI governance are giving way to more
concrete and systematic solutions. While these trends are neither new nor surprising,
the maturing of the field of digital ethics has, as we shall see, been accelerated
by a series of recent events.
Consider the shift in focus from soft to hard governance. While the latter is
enforced by government institutions, the former relies on mechanisms that allow for
some contextual flexibility, such as cultural norms and economic incentives. The
plethora of ‘AI ethics’ guidelines or principles produced by regulators and technology
providers alike in recent years, including the Ethics Guidelines for Trustworthy AI
(AI HLEG, 2019), the Montreal Declaration for a Responsible Development of AI
(University of Montreal, 2018), and the Beijing AI Principles (Beijing Academy of
Artificial Intelligence, 2019), constitute soft governance. In contrast, the Artificial
Intelligence Act (AIA) published by the European Commission (2021) is an example
of hard governance (Mökander et al., 2021).
The AIA is a unique milestone insofar as it is the first attempt to elaborate a
general legal framework for AI carried out by any major economy. Yet, the AIA did
not come as a surprise. Several recent initiatives and publications have foreshadowed
the arrival of hard legislation.1 Moreover, the need to manage the ethical challenges
posed by autonomous and self-learning systems has been pressing and clear
for a long time, and the fact that soft and hard mechanisms complement and reinforce
each other is well established in the governance literature (Erdelyi &
Goldsmith, 2018; Floridi, 2018). The AIA can thus be viewed as one example of
how the focus in the field of digital ethics is shifting from soft to hard governance
(Floridi, 2021). Following the same logic, the eventual imposition of hard
legislation on the design and use of digital technologies is to be expected outside the
EU as well – although the shape such legislation will take is likely to vary between
different jurisdictions. A step in that direction was taken with the Algorithmic
Accountability Act of 2022 (AAA), which was put before the U.S. Senate by the
Office of Senator Ron Wyden (2022). The AAA calls on companies to conduct
impact assessments for bias, accuracy, and other issues when designing or deploying
automated systems that make critical decisions with little or no human intervention
(Mökander et al., 2022).
1 For example, the European Commission’s Whitepaper on AI (2020).
That brings us to the second long-term trend, from theoretical to implemented
solutions. While playing an important role in raising awareness of the ethical challenges
associated with specific technologies, early works in the field of digital ethics
remained largely abstract. Of course, the convergence around a set of high-level
ethics principles to guide the design and use of digital technologies was a significant
achievement in and of itself. However, researchers quickly established that technology
providers lacked both incentives and translational tools to interpret, implement,
and demonstrate adherence to abstract ethics principles; that is, a link was missing
from the ‘what’ to the ‘how’ (Morley et al., 2020; Taddeo & Floridi, 2018). In response
to this critical knowledge gap, a rich literature has emerged on how organisations can
ensure that the technologies they design or deploy are ethical, legal, and technically
robust in practice (see e.g., AIEIG, 2020; Ayling & Chapman, 2021; Mökander &
Axente, 2021; Morley et al., 2021). In these attempts to provide more detailed guidance,
both policymakers and academic researchers have drawn upon established best
practices for providing assurance in adjacent fields, including quality management
in systems engineering, auditing in the financial sector, and pre-market testing
and approval procedures in safety-sensitive areas such as food and medical devices.
Both trends discussed above are reflected in this volume: the fourth edition of the
Digital Ethics Lab Yearbook. The shift from soft ethics to hard governance runs like
a red thread through the first half of this volume and binds together seven chapters
that otherwise cover a wide range of domains and geographic areas. In ‘The
European Legislation on AI: A Brief Analysis of Its Philosophical Approach’,
Luciano Floridi highlights some foundational aspects of the AIA and analyses the
regulatory approach underpinning it; in ‘Informational Privacy with Chinese
Characteristics’, Huw Roberts discusses the emergence of a new privacy protection
regime in China; in ‘Lessons Learned from Co-governance Approaches – Developing
Effective AI Policy in Europe’, Caitlin Corrigan demonstrates that addressing the
ethical challenges posed by AI systems will require close collaboration between
state and non-state actors; in ‘State-Firm Coordination in AI Governance’, Noah
Schöppl discusses the role of states in digital governance and argues that national
governments need to increase and coordinate their regulatory capabilities; in ‘The
Impact of Australia’s News Media Bargaining Code on Journalism, Democracy, and
the Battle to Regulate Big Tech’, Emmie Hine analyses the new Australian legislation
designed to provide financial support to publishers and journalists in terms of
its compatibility with the business models of big tech giants; in ‘App Store
Governance: The Implications and Limitations of Duopolistic Dominance’, Josh
Cowls and Jessica Morley discuss the challenges and tensions inherent to app store
governance; and in ‘A Legal Principles-Based Framework for AI Liability
Regulation’, Massimo Durante and Luciano Floridi review the work of the European
Commission’s Expert Group on Liability and New Technologies (2019) to show
how it has started to lay the basis for a set of legal principles for an AI liability
regime.
Similarly, the nine chapters in the latter half of this volume are linked insofar as
they concern concrete procedures for implementing digital governance in practice.
In ‘The New Morality of Debt’, Nikita Aggarwal argues that the existing
regulatory frameworks governing consumer lending are inadequate to alleviate harms to
privacy, autonomy, and dignity; in ‘Site of the Living Dead: Clarifying Our Moral
Obligations Towards Digital Remains’, Mira Pijselman assesses the absence of a
unified roadmap for how digital remains ought to be managed; in ‘The Statistics of
Interpretable Machine Learning’, David Watson provides an in-depth survey of the
affordances and constraints in the plethora of existing interpretable machine learning
approaches; in ‘Formalising Trade-Offs Beyond Algorithmic Fairness: Lessons
from Ethical Philosophy and Welfare Economics’, Michelle Lee and colleagues
introduce the use of Key Ethics Indicators (KEIs) as a way towards understanding
whether or not an algorithmic system is aligned to a decision-maker’s ethical values;
in ‘Ethics Auditing Framework for Trustworthy AI: Lessons from the IT Audit
Literature’, Nathaniel Zinda explores how the emerging field of ‘AI auditing’ can
learn from and build on traditional IT audits; in ‘Ethics Auditing: Lessons from
Business Ethics for Ethics Auditing of AI’, Noah Schöppl and colleagues conduct a
similar review of the business ethics literature to establish best practices for how
auditing – as a governance mechanism – can help organisations (a) design AI systems
in ways that are ethical and (b) make verifiable claims about those systems; in
‘AI Ethics and Policies: Why European Journalism Needs More of Both’, Guido
Romeo and Emanuela Griglié argue that policymakers can help newsrooms manage
the ethical issues raised by the use of AI in journalism by supporting the development
of tools like checklists and guidance on how to use such tools; in ‘Towards
Equitable Health Outcomes Using Group Data Rights’, Gal Wachtel proposes a
framework for practically implementing group data rights in a healthcare setting;
and, finally, in ‘Ethical Principles for Artificial Intelligence in National Defence’,
Mariarosaria Taddeo and colleagues propose a framework consisting of five principles
and issue-related recommendations to foster ethically sound uses of AI for
national defence purposes.
From its very start in 2018, the main purpose of the Digital Ethics Lab Yearbook
has been to give a non-exhaustive snapshot of the diverse and cutting-edge research
agendas being pursued within our research group at the Oxford Internet Institute.
However, the 2020/21 Yearbook marks the maturation not only of the field of digital
ethics but also of the Digital Ethics Lab itself. As the discipline develops in terms of
thematic focus and methodological rigour, so do our ways of working. The Digital
Ethics Lab has served its purpose of identifying the opportunities and enhancing the
benefits of digital innovation whilst showing how to avoid or mitigate the associated
risks. To reflect the shifting focus – from soft to hard governance and from abstract
to more concrete solutions – our future efforts will be directed towards supporting
the newly formed Centre for Digital Ethics and Governance at the University of
Bologna and the Digital Governance Research Group at Exeter College, Oxford.
Oxford, UK
Princeton, NJ, USA
Jakob Mökander

Oxford, UK
Marta Ziosi
References
AI HLEG. (2019). European Commission’s ethics guidelines for trustworthy Artificial Intelligence
(Issue May). https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1
AIEIG. (2020). From principles to practice – An interdisciplinary framework to operationalise
AI ethics (pp. 1–56). AI Ethics Impact Group, VDE Association for Electrical Electronic &
Information Technologies e.V., Bertelsmann Stiftung. https://doi.org/10.11586/2020013
Ayling, J., & Chapman, A. (2021). Putting AI ethics to work: Are the tools fit for purpose? AI and
Ethics. https://doi.org/10.1007/s43681-021-00084-x
Beijing Academy of Artificial Intelligence. (2019). The Beijing AI principles.
Erdelyi, O. J., & Goldsmith, J. (2018). Regulating artificial intelligence: Proposal for a global
solution. In AAAI/ACM conference on Artificial Intelligence, ethics and society.
http://www.aies-conference.com/wp-content/papers/main/AIES_2018_paper_13.pdf
European Commission. (2020). White Paper on Artificial Intelligence – A European approach to
excellence and trust.
European Commission. (2021). Proposal for regulation of the European parliament and of the
council – Laying down harmonised rules on artificial intelligence (artificial intelligence act)
and amending certain Union legislative acts.
Floridi, L. (2018). Soft ethics and the governance of the digital. Philosophy and Technology, 31(1).
https://doi.org/10.1007/s13347-018-0303-9
Floridi, L. (2021). The end of an Era: From self-regulation to hard law for the digital industry.
Philosophy and Technology, 34(4), 619–622. https://doi.org/10.1007/s13347-021-00493-0
Mökander, J., & Axente, M. (2021). Ethics-based auditing of automated decision-making systems:
Intervention points and policy implications. AI & SOCIETY.
https://doi.org/10.1007/s00146-021-01286-x
Mökander, J., Axente, M., Casolari, F., & Floridi, L. (2021). Conformity assessments and
post-market monitoring: A guide to the role of auditing in the proposed European AI regulation.
Minds and Machines, 1–27. https://doi.org/10.1007/s11023-021-09577-4
Mökander, J., Juneja, P., Watson, D.S. et al. (2022). The US Algorithmic Accountability Act of
2022 vs. The EU Artificial Intelligence Act: what can they learn from each other?. Minds and
Machines. https://doi.org/10.1007/s11023-022-09612-y
Morley, J., Floridi, L., Kinsey, L., & Elhalal, A. (2020). From what to how: An initial review
of publicly available AI ethics tools, methods and research to translate principles into prac-
tices [Article]. Science and Engineering Ethics, 26(4), 2141. https://doi.org/10.1007/
s11948-019-00165-5
Morley, J., Elhalal, A., Garcia, F., Kinsey, L., Mökander, J., & Floridi, L. (2021). Ethics as a
service: A pragmatic operationalisation of AI ethics. Minds and Machines, 31(2), 239–256.
https://doi.org/10.1007/s11023-021-09563-w
Office of U.S. Senator Ron Wyden. (2022). Algorithmic Accountability Act of 2022. 117th Congress,
2d Session.
Taddeo, M., & Floridi, L. (2018). How AI can be a force for good. Science, 361(6404), 751–752.
https://doi.org/10.1126/science.aat5991
The European Commission’s Expert Group on Liability and New Technologies-New Technologies
Formation. (2019). Liability for Artificial Intelligence and other emerging digital technologies.
https://doi.org/10.2838/25362
University of Montreal. (2018). Montréal Declaration responsible AI.
https://www.montrealdeclaration-responsibleai.com/the-declaration
Contents
The European Legislation on AI: A Brief Analysis of Its Philosophical Approach  1
Luciano Floridi
Informational Privacy with Chinese Characteristics  9
Huw Roberts
Lessons Learned from Co-governance Approaches – Developing Effective AI Policy in Europe  25
Caitlin C. Corrigan
State-Firm Coordination in AI Governance  47
Noah Schöppl
The Impact of Australia’s News Media Bargaining Code on Journalism, Democracy, and the Battle to Regulate Big Tech  63
Emmie Hine
App Store Governance: The Implications and Limitations of Duopolistic Dominance  75
Josh Cowls and Jessica Morley
A Legal Principles-Based Framework for AI Liability Regulation  93
Massimo Durante and Luciano Floridi
The New Morality of Debt  113
Nikita Aggarwal
Site of the Living Dead: Clarifying Our Moral Obligations Towards Digital Remains  119
Mira Pijselman
The Statistics of Interpretable Machine Learning  133
David S. Watson