AI Ethics & Governance - A Legal Perspective Vol. 2

Ebook · 1,028 pages · 7 hours


About this ebook

As AI rapidly transforms society, urgent questions have emerged on governing these powerful technologies responsibly. In AI Ethics & Governance: A Legal Perspective Vol. 2, technology lawyer and arbitrator Mark Wright offers an expert analysis of the complex legal and ethical issues surrounding AI. He explores proposals for AI governance through a pragmatic legal lens, delving into critical questions around privacy, bias, accountability, and more. Wright combines legal insights with computer science, philosophy and other disciplines to put forth concrete recommendations for aligning AI's trajectory with human rights and values. This essential and timely book will inform prudent policies on maximizing AI's benefits while minimizing its harms, as societies worldwide grapple with steering this technology towards the common good.

Language: English
Publisher: Mark Wright
Release date: Dec 20, 2023
ISBN: 9798223654896


    Book preview

    AI Ethics & Governance - A Legal Perspective Vol. 2

    Mark Wright

    About the author

    Mark Wright is a barrister and an arbitrator with extensive expertise in international, commercial, and IT law. While studying finance and technology at Saïd Business School, University of Oxford, Mark was a student of Professor David Shrier, one of the world's leading futurists and an authority on technology. Mark holds first-class master’s degrees in both business administration and international commercial law. Since 1997, he has worked as an international lawyer in England, Oman, Bahrain, and Saudi Arabia, holding several Head of Legal and senior in-house appointments, including at HSBC, Capital One, Gulf International Bank in Bahrain, Bahrain Steel and Sciensus. Mark has given several newspaper and magazine interviews and published multiple articles in leading academic journals on IT law, intellectual property, company and commercial law, and finance law.

    Mark also serves on the IT Panel and the AI and Emerging Technologies Discussion Forum of the Bar Council of England & Wales, providing guidance to the bar, influencing IT law and regulation through direct participation in government consultations on new IT laws and policies, and reporting on and debating matters affecting all industries.

    Mark can be contacted via his website www.markwrightarbitration.com, linked in the QR code below, or via LinkedIn.

    Foreword

    In the era of rapid technological advancements, the intersection of artificial intelligence (AI) and the legal landscape has ushered in a new frontier, bringing with it a myriad of legal, ethical and governance challenges. As the digital realm expands, so too does the need for a comprehensive understanding of the ethical implications and legal frameworks that govern AI technologies. In this book, AI Ethics & Governance: A Legal Perspective, Mark Wright provides a beacon for professionals and policymakers navigating the intricate terrain of AI ethics, offering profound insights into the relevant ethical considerations, challenges and regulatory frameworks.

    Mark, an employed Barrister, a Member of the Chartered Institute of Arbitrators (CIArb) and Senior In-House Legal Counsel at Sciensus, Burton upon Trent, UK, weaves together his legal expertise, commercial pragmatism, and global perspectives to provide a comprehensive guide for practitioners grappling with the profound implications of AI. He is also a member of the IT Panel and the AI and Emerging Technologies Discussion Forum of the Bar Council of England and Wales.

    Mark’s insights are drawn from a wealth of commercial experience, which elevates the discourse beyond theory and grounds it in the day-to-day realities faced by society. He has previously held several Head of Legal and senior in-house appointments, including at HSBC, Capital One, Gulf International Bank in Bahrain and Foulath Holding in Bahrain, as well as positions in private practice with the Banking & Finance practice groups at Eversheds in London and at Addleshaw Goddard in Oman. Mark has particular experience with the contracts of financial institutions and of companies in the property, technology, retail and mining sectors in England, Oman, Bahrain and Saudi Arabia, and has advised leading financial institutions on the legal aspects of their technology and the development of their Fintech solutions.

    As architects of the legal framework, lawyers in particular play a crucial role in sculpting a future where AI is ethically harnessed for the betterment of society. In the pages that follow, you will find a roadmap for understanding, navigating, and shaping the ethical landscape of artificial intelligence. This book serves as a testament to the dynamic interplay between law and technology, offering invaluable insights into the ethical considerations that define our digital future. As we collectively navigate the ethical frontiers of AI, may this work empower professionals, policymakers and technologists to champion the principles of justice, fairness, accountability, and transparency in the ever-evolving landscape of artificial intelligence and the broader Web 3.0.

    I extend my heartfelt gratitude to Mark for his invaluable contribution to this critical discourse. May this book serve as a cornerstone in the ongoing dialogue surrounding AI ethics, providing readers with the tools they need to navigate the evolving intersection of law and technology.

    Barrister FCIArb

    Vice-Chair of the Legal Services Committee, and Chair of the AI and Emerging Technologies Discussion Forum, for the Bar Council of England and Wales.

    Review of AI Ethics & Governance - A Legal Perspective

    "A powerful and thought-provoking book approaching AI through a diverse range of topics made relevant by the rapid advances in, and potential for the application of, AI. Written by an experienced and long serving legal professional with an obvious passion for the emerging technology and its uses. This book is a much-needed voice to give structure and guidance on how to approach the ethical considerations of AI use covering ethics, governance, and application on a global scale with a holistic approach.

    Mark discusses public awareness and trust, high-risk processing, cultural differences and their significance in AI uses and governance, and the case for global governance approaches.

    Woven throughout is the need for transparency, accountability, fairness, security and human oversight with the positioning of lawyers as Guardians of Justice in AI.

    Mark highlights the impact of AI upon the legal profession and the need for extensive AI education for lawyers going forward, to enable them to advise their clients, governments and lawmakers with comprehensive, knowledgeable expertise.

    As a privacy and data protection professional, it was particularly interesting to read the discourse on data governance, bias concerns, accountability, the need for human intervention, possibilities of applying opt-outs, privacy concerns, anonymisation of data, consent, and regulatory approaches. It was interesting and reassuring to note the similarities between privacy and data protection principles and the key principles common in AI ethics. I also enjoyed the insight into global perspectives on ethics in AI and the varying cultural approaches and influences in AI uses and governance.

    An interesting discourse on regulating AI without stifling creativity, technological advancements, and the wealth of benefits it may bring to all sectors, such as medicine, finance, education, employment, transportation, the military, criminal justice and law enforcement, and social services.

    Drawing on a diverse range of guidance and publications on approaches to ethics in AI, regulations, and principles from across the world and in different cultures, Mark covers the rewards and risks associated with AI.

    As Mark points out, the key to successful and safe AI application, benefitting not just the corporate world but society and the individual, is "instilling ethics intrinsically throughout technical education and training, not just retrospectively reviewing finished products".

    A great read not just for those in the legal profession but also for the conscientious privacy professional."

    First published in Great Britain in 2023 by Mark Wright

    Copyright © 2023 Mark Wright

    The moral right of the author has been asserted. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form, or by any means, without the prior permission in writing of the publisher, nor be otherwise circulated in any form of binding or cover other than that in which it is published and without a similar condition including this condition being imposed on the subsequent purchaser.

    ISBN: 978-1-7384517-1-5

    To my wife Renata, my daughter Charlotte, my parents Maria and Brian, my brother Steve and my sister-in-law Mariko.

    Table Of Contents

    Need for Realistic Validation and Testing

    Importance of Linking Autonomous Weapon Design to Military Strategic Objectives

    The Proliferation of Autonomous Weapons

    Potential for Limits on Autonomous Targeting Decisions

    Chapter 1: AI in the workplace and labour laws

    The integration of artificial intelligence (AI) into work processes is transforming organizations and labour markets worldwide. As intelligent algorithms and robots assume a growing range of workplace tasks, companies stand to realize efficiency gains and productivity increases [1]. However, these technological disruptions also necessitate adapting legal frameworks and ethical perspectives to protect workers' rights and human dignity. This analysis examines philosophical, legal, policy and practical considerations around balancing workplace automation with labour rights and protections.

    Philosophical Perspectives

    Philosophically, the proliferation of workplace AI technologies raises complex debates regarding values like privacy, autonomy, justice, and the meaning of work. Some argue that rather than a technology issue, workplace AI reflects deeper philosophical tensions around autonomy, dignity and justice rooted in capitalism and instrumentalist management paradigms [150]. AI monitoring and algorithmic control epitomize tendencies to reduce workers to optimized production inputs [151]. Realizing technology's emancipatory potential relies on reforming these ideological foundations, which value exploitation and efficiency over humanistic ethics [152]. However, critics contend structural critiques overlook people's pragmatic need for livelihoods [153]. Transformative change cannot ignore immediate material impacts on vulnerable populations [154].

    Potential Reform Models

    Citizen juries or panels represent one model for democratizing AI governance [224]. These deliberative bodies comprising diverse public members study issues and provide input into policies. For workplace AI, panels could gather evidence on impacts, weigh trade-offs and formulate recommendations to balance productivity, equity, and welfare. However, critics argue most citizens lack sufficient context for fully informed opinions [225]. Panels must access comprehensive, balanced information and technical training. Outcomes could be advisory rather than binding, but inclusive participation foregrounds community values over corporate or elite interests [226]. The process aims to build collective capacity for navigating AI's societal implications.

    Worker Privacy and Autonomy

    A major area of contention is the impact of extensive data gathering and algorithmic monitoring enabled by AI systems on employee privacy and autonomy. Corporate deployment of techniques like people analytics, which applies big data and algorithms to manage the workforce, allows for unprecedented tracking of workers through badges, sensors, computer-vision monitoring and analysis of communication metadata [2].

    Critics contend such pervasive surveillance represents a fundamental violation of privacy and exertion of management control over employees [3]. They argue that constant algorithmic monitoring exceeds what is required for reasonable security or productivity needs and enables micromanagement with dehumanizing effects [4]. AI-enabled sensors and analytics pierce the boundary between workers' professional persona at the workplace and their personal lives [5]. Persistent monitoring pressures employees to conform to prescribed behaviours, infringing their basic freedom and self-determination [6].

    While some argue that extensive monitoring and algorithmic management enabled by AI increases organizational efficiency and safety [190], critics contend this comes at the cost of infringing employee dignity and self-determination [191]. Persistent performance tracking and automated directives pressure personnel to conform behaviours to prescribed metrics, eroding professional discretion [192]. Rather than empowering workers, pervasive surveillance may simply enhance managerial control [193]. Ethically implementing workplace AI necessitates respecting reasonable boundaries around oversight and preserving spaces for autonomous decision-making based on contextual factors not captured by algorithms.

    In response, advocates of workplace AI monitoring contend it is both necessary and beneficial when implemented responsibly [7]. Organizations require data to optimize processes and output, necessitating some oversight of employees. Further, monitoring can enhance occupational safety, such as in hazardous environments where AI can detect dangerous behaviours or conditions [8]. Employees allegedly consent to monitoring in exchange for continued employment and benefits like superior healthcare [9]. Workers can purportedly override algorithms determining factors like assignments or evaluations [10].

    However, critics dispute whether consent is truly voluntary and informed given power imbalances [11]. Workers may have limited understanding of monitoring mechanisms or feel obligated to accept implicit conditions [12]. Alternatives like union representation are increasingly scarce amid declining organized labour [13]. Significant asymmetries between employers and employees persist, necessitating updated regulations even with mutual agreements [14]. The Information Commissioner's Office in the United Kingdom cautions that excessive monitoring can infringe upon basic human rights like privacy and freedom of expression [296]. It urges employers to carefully assess monitoring's impact on autonomy and only implement systems that are proportionate to legitimate aims like productivity or security [296]. However, critics argue principles alone cannot prevent abuses of power, necessitating binding laws governing workplace surveillance and employee consent processes [297].

    Overall, while some monitoring and analytics may be justifiable, extensive surveillance and algorithmic control enabled by AI appear to violate fundamental privacy boundaries and coerce worker behaviours [15]. Ethical implementation necessitates transparent consent procedures and preserving spaces free from constant oversight.

    Meaning of Work

    AI automation also surfaces more foundational debates over the role and meaning of work itself. Marxian philosophy views alienating, exploitative labour under capitalism as antithetical to human flourishing [16], but work holds instrumental value in providing income and societal status, and intrinsic value in furnishing meaning, dignity, and self-actualization [17]. AI threatens both instrumental and inherent worth by potentially displacing huge workforces. Algorithms excel at routinized tasks, jeopardizing manual and clerical jobs [18]. Reallocating redundant workers is challenging, risking chronic underemployment or unemployment [19]. Persistent joblessness deprives individuals of financial means and the social identities conferred by occupations [20].

    Post-work philosophies consider liberating people from obligatory labour altogether. With AI performing most necessary functions, societies could implement universal basic incomes to provide for citizens’ needs [21]. Individuals would pursue inherently meaningful activities like caregiving, creativity and learning rather than corporatized busywork [22]. However, critics argue most humans require structured work to thrive, regardless of material necessity [23]. Work furnishes purpose, relationships, and achievement. Long-term unemployment breeds psychological distress and deteriorating skills [24]. People may lack motivation if basic needs are satisfied. Post-work advocates concede roles for limited labour where humans intrinsically enjoy applying their faculties or helping others [25]. This debate reveals tensions in technology’s potential to both displace jobs and liberate humanity from drudgery. Ethically balancing these outcomes requires ensuring access to dignified, self-actualizing work.

    Emerging Policy Experiments

    A few preliminary experiments demonstrate early policy options to address workforce impacts of automation. The city of Barcelona launched a trial of universal basic income, providing payments to disadvantaged residents without work requirements [276]. Initial results found improved wellbeing, but the trial's limited duration precluded assessing long-term sustainability [277]. The Indian state of Kerala ran a basic income program focused on women, finding increased educational attainment and work participation [278]. While informative, short-term basic income trials lack sufficient duration for comprehensive evaluation [279]. Some urge more extensive national-level pilots [280]. Policy experiments with job guarantees, conditional transfers and revised taxation also require exploration [281,282]. Comprehensive assessments integrating cultural attitudes, incentive structures and macroeconomic factors could guide balanced, evidence-based solutions [283].

    Distributive Justice

    Allocating AI’s economic impacts also raises issues of justice. Concentrated technology ownership disproportionately rewards small investor and inventor classes [26], but labour and resources creating AI come from broader society. Utilitarian philosophies generally emphasize net positive outcomes regardless of distributional equity [27]. However, uneven returns can distort political systems and exclude masses from technology benefits, violating egalitarian notions of justice [28]. Proposed solutions include redistributive taxation, profit sharing, worker representation on boards and technology dividends or sovereign wealth funds to socialize benefits [29], but companies resist redistributing gains they consider rightfully theirs. Implementing inclusive policies requires contesting prevailing philosophical norms around maximizing output and shareholder value [30].

    Some propose using taxation policy to redistribute gains from AI and counter growing wealth divides. The trade union UNI Global advocates taxing robotics use and establishing a robot levy fund to finance new training programs and social benefits protecting workers from displacement [298]. Analysts estimate a robot tax of 5-10% could generate government revenues of billions [299]. However, corporations staunchly resist such policies, arguing productivity losses from tax disincentives outweigh speculative future benefits [300]. Imposing focused taxes only on robotics also risks definitional ambiguities given automation's complexity [301]. More gradualist reforms like broadening capital gains taxes may garner wider political viability while still promoting equitable benefit sharing as AI transforms work processes [302].
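    As a rough illustration of the arithmetic behind such revenue estimates, the sketch below computes hypothetical annual proceeds from a flat levy over the 5-10% range discussed above. The installed robot count and average robot value are invented assumptions, not figures from this chapter:

```python
# Illustrative arithmetic for a hypothetical robot levy.
# All input figures are assumptions for demonstration, not sourced estimates.

def robot_levy_revenue(installed_robots: int,
                       avg_robot_value_usd: float,
                       levy_rate: float) -> float:
    """Annual revenue from a flat levy on the capital value of installed robots."""
    return installed_robots * avg_robot_value_usd * levy_rate

# Assume 500,000 installed industrial robots averaging $100,000 in value.
for rate in (0.05, 0.10):  # the 5-10% range discussed above
    revenue = robot_levy_revenue(500_000, 100_000.0, rate)
    print(f"Levy rate {rate:.0%}: ${revenue / 1e9:.1f} billion per year")
```

    Even under these modest assumptions the levy reaches the billions cited by analysts, though the chosen base and rate clearly dominate the outcome.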

    Overall, realizing AI’s benefits for all of society necessitates reimagining economic justice. Technology should not further concentrate capital in the hands of the few.

    Legal Frameworks and Precedents

    Adapting workplace laws and regulations to protect workers in the AI era remains nascent globally. Notable precedents and provisions demonstrate the challenges and possibilities of regulation. Recent court rulings demonstrate growing scrutiny of algorithmic decisions affecting individuals' rights and opportunities. In the United States, judges have overturned automated tenant screening that denied housing applications [284] and AI-predicted risk assessments used to determine criminal sentences [285]. These cases suggest an emerging legal willingness to contest unregulated automated systems' impacts. However, technical opacity poses ongoing barriers to accountability, necessitating specialist expertise [286]. Impacted groups urgently require improved legal resources and representation to effectively challenge detrimental algorithmic determinations under existing laws [287]. Longer-term legislative reforms governing transparency, due process and recourse also appear necessary to fully protect citizens from harmful applications [288].

    Recent Developments in Europe

    Several European countries have implemented new laws and regulations governing AI and automation in the workplace. In 2022, Spain passed legislation giving delivery app workers recognized employment status, requiring transparency around automated management systems, and restricting real-time tracking [102]. The law aims to protect workers' rights in the on-demand digital economy. The EU is also currently debating a proposed Directive on improving working conditions in platform work, which contains protections around algorithmic management and surveillance [103]. These developments demonstrate growing European efforts to regulate and provide oversight over workplace AI through binding legal frameworks.

    England and Wales

    The United Kingdom, comprising England, Wales, Scotland, and Northern Ireland, lacks a unified national AI strategy. England and Wales share a common legal system and policy approaches. Guidelines from the UK's Information Commissioner's Office, the Alan Turing Institute and other bodies promote ethical AI practices, but compliance remains voluntary [95].

    The UK government champions pro-innovation policies to foster an AI industry, resists significant regulation of technologies like facial recognition, and believes existing laws largely suffice [96]. Critics argue this disregard of potential harms from biased and opaque algorithms threatens citizens' rights [97]. England and Wales focus more on enabling economic applications rather than restrictive governance of AI systems' societal impacts. However, accumulating discrimination cases highlight algorithmic harms. In 2021, the Welsh Government became the first UK public body found to have unlawfully used an algorithmic recruitment tool disadvantaging women applicants [98]. Other lawsuits allege biased facial recognition and discriminatory tenant screening algorithms [99]. Courts increasingly scrutinize automated decisions affecting individuals' liberties and livelihoods.

    Parliamentary committees like the Lords AI Committee recommend codes of conduct, workforce training in algorithmic literacy, and meaningful consent procedures for data collection [100]. Activists pressure government to address AI's threats to equality and democracy through laws mandating transparency, accountability, and rights-based impact assessments [101], but progress remains incremental amid influential innovation advocates. England and Wales' predominant ethos supports commercial development and light-touch governance of AI systems. However, ethical guidelines, court judgments and civil society activism increasingly contest uncritical technological optimism and call for human rights protections. The ideal equilibrium between AI innovation and regulation remains contested and evolving.

    European Union

    The EU’s General Data Protection Regulation (GDPR), in effect since 2018, contains several provisions extending privacy rights around automated decisions based on personal data [31]. For example, Article 22 states:

    The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. [32]

    This empowers EU citizens to request human involvement in, an explanation of, and the ability to contest algorithmic decisions affecting them. Other provisions grant workers rights such as restrictions on workplace monitoring and mandates for transparent algorithmic management [33]. The GDPR places pressure on employers to implement ethical, accountable AI. However, numerous barriers persist in enforcing and fully realizing these rights. Workers may lack the expertise in opaque algorithms to meaningfully contest their determinations [34]. Companies can claim exemptions like contractual necessity [35]. They may subtly pressure employees to relinquish rights. The scale of data collection makes full human oversight infeasible [36]. While recognition of risks is progress, practical protections remain inadequate.
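    As a minimal sketch of how an Article 22-style safeguard might be wired into an employer's decision pipeline, the code below treats a solely automated, legally significant decision as non-compliant until a human reviews it, and lets the data subject contest the outcome. The class, field names and example decision are illustrative assumptions, not a real compliance implementation:

```python
# Sketch of an Article 22-style safeguard: legally significant decisions
# must not rest on automation alone, and the data subject can contest them.
# All names and fields here are hypothetical.

from dataclasses import dataclass

@dataclass
class AutomatedDecision:
    subject_id: str
    outcome: str               # e.g. "reject_shift_request"
    explanation: str           # plain-language rationale for the data subject
    legally_significant: bool  # produces legal or similarly significant effects
    human_reviewed: bool = False
    contested: bool = False

def contest(decision: AutomatedDecision) -> AutomatedDecision:
    """Data subject exercises the right to contest, triggering human review."""
    decision.contested = True
    decision.human_reviewed = True  # a human must re-examine the outcome
    return decision

def is_compliant(decision: AutomatedDecision) -> bool:
    """A significant decision based solely on automated processing fails."""
    return (not decision.legally_significant) or decision.human_reviewed

d = AutomatedDecision("worker-42", "reject_shift_request",
                      "Score below an assumed attendance threshold",
                      legally_significant=True)
print(is_compliant(d))           # False: solely automated
print(is_compliant(contest(d)))  # True once a human has reviewed it
```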

    United States

    In contrast to the EU’s proactive governance approach, US law lacks comprehensive protections against workplace algorithmic harms. Illinois’ Artificial Intelligence Video Interview Act prohibits unfair discrimination based on protected classes in automated video analysis of job applicants [37]. This represents a modest step toward algorithmic accountability. Federal anti-discrimination laws offer further grounds for recourse, as algorithms replicating existing biases against minorities or women could illegally worsen hiring disparities [38], but advancing claims remains challenging absent direct evidence that factors like race or gender unlawfully determined outcomes [39]. Workers must conclusively link statistical gaps to discriminatory algorithms rather than other causes. Broader legislation governing workplace AI usage generally is non-existent at the federal level.
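    One established statistical yardstick for such gaps is the "four-fifths rule" used in US disparate impact analysis: a selection rate for one group below 80% of the most-favoured group's rate signals potential adverse impact. A minimal sketch, using hypothetical applicant counts:

```python
# Adverse-impact screen based on the four-fifths (80%) rule.
# Applicant and selection counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratios(rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the most-favoured group."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

rates = {
    "group_a": selection_rate(48, 100),  # 48% of group A selected
    "group_b": selection_rate(30, 100),  # 30% of group B selected
}
for group, ratio in adverse_impact_ratios(rates).items():
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} -> {flag}")
```

    Such a screen only flags a statistical disparity; as noted above, claimants must still tie the gap to the algorithm rather than other causes.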

    Overall, US legal frameworks lag behind technology deployment and ethical concerns. However, evolving case law could strengthen protections as evidence of biases and harms mounts [40].

    Singapore

    Singapore has promoted extensive economic integration of AI through both public- and private sector initiatives [41], but governance around worker impacts remains underdeveloped. The government’s voluntary Model AI Governance Framework contains limited references to workplace issues like harassment, safety, job losses and discrimination [42]. Adherence relies on voluntary corporate adoption rather than binding regulations or penalties. Singapore similarly lacks comprehensive laws like the EU’s GDPR safeguarding workers from adverse algorithmic decisions or invasive monitoring. The city-state’s values of innovation, flexibility and minimal regulation currently supersede labour protections in AI usage and outcomes.

    India

    India recently enacted the Occupational Safety, Health, Working Conditions and Social Security Code, 2020, which contains provisions requiring certain companies to provide a 60-day notice period and governmental approval before retrenching 50% or more of employees because of AI and automation [43]. This law aims to protect workers and provide transition periods to adapt to technological disruptions. However, practical effects remain uncertain. The law does not guarantee job security or alternative employment. Requirements like retraining workers may dissuade investment. Blanket thresholds for notification do not account for variations in firm resources. Like other labour regulations in India, enforcement also remains challenging [44]. Nonetheless, the law signals increasing policy prioritization of worker welfare amid automation.

    Overall, these examples demonstrate some nascent but still minimal governance frameworks specifically addressing workplace AI risks. More evidence, litigation and civil society pressures will likely be required before substantial binding legislation materializes globally.

    Policy Approaches

    Some scholars propose AI justice frameworks assessing how automation's material impacts distribute across social hierarchies like race, gender, and class [155,177]. They argue pure efficiency-based deployment overlooks disadvantaged populations disproportionately affected by job losses [156,178]. Policies must specifically target inclusion of marginalized groups in any wealth or opportunities from AI [157,179]. However, some contend AI justice risks divisive identity politics if not united with universal welfare policies [158,180]. Holistic approaches address job losses, build worker power, and democratize technology ownership [159,181], but pragmatic proposals must wrestle with the complex diversity of global contexts [160,182].

    Some urge differentiating the rights of humans from those of intelligent machines to crystallize accountable governance [238,315]. While emergent AI may one day evince free will like humans [239,316], current technologies enact predetermined instructions and lack cognition [240,317]. Rights frameworks recognize beings with intrinsic moral worth, not replicable by code [241,318]. As machine capabilities advance, rights require careful delineation [242,319]. However, regulations holding innovators responsible for AI outcomes could sufficiently protect human welfare for now [243,320]. Codifying overarching principles like transparency and accountability facilitates just integration across diverse global contexts [244,245,321,322].

    Proposals to tax automation and subsidize human employment require examination [246]. Some urge targeted taxes on robotics and AI to fund worker welfare schemes [247]. While responding to economic anxieties, taxing productivity risks stifling innovation [248]. Subsidizing industries resistant to automation helps maintain jobs but could prove expensive if prolonged [249]. Policy experiments modelling various taxation and subsidy formulas could assess impacts on growth, government budgets and public attitudes [250]. Findings would inform balanced approaches equitably distributing the benefits of intelligent automation [251].

    While proponents of basic income cite its potential to liberate workers and stimulate entrepreneurship [259], critics argue it risks entrenching gender inequalities if implemented in isolation [260]. Analysts warn unconditional cash transfers alone cannot address systemic barriers women face in labour force participation, asset ownership, and career advancement [261]. Comprehensive policies would need to couple basic incomes with investments in accessible childcare, paid parental leave, and other supportive programs enabling women's economic empowerment [262]. Singapore's more holistic approach combining income, training, and career supports demonstrates the importance of multi-layered initiatives to uplift those most vulnerable to displacement [263].


    European Union

    The EU’s policy vision seeks responsible, ethical and human-centric AI aligned with European values [45]. Regulations like the GDPR, coupled with major funding initiatives, emphasize governing AI to protect citizens. New proposed laws would impose requirements including algorithmic transparency, fairness, accuracy, and human oversight [46]. The EU aims to harness AI’s benefits while ensuring human rights and democratic control.

    However, critics contend excessively stringent requirements could hamper European technology competitiveness against rivals [47]. AI development could shift to more permissive jurisdictions, hurting rather than protecting European workers [48]. The EU argues ethical technology uplifts everyone rather than engendering a race to the bottom [49], but forging global consensus on AI governance will prove challenging.

    China

    In contrast to the EU’s caution and regulation, China actively deploys automation throughout its industrial and service sectors with minimal protections for workers [50]. Intelligent algorithms manage employees at major technology and ecommerce firms [51]. Retail outlets utilize automated cashiers and robots to cut labour costs [52]. AI-enabled video surveillance is ubiquitous. The central government considers driving technological disruption forward necessary for economic advancement and societal progress [53].

    This philosophy values rapid innovation and efficiency over pre-emptive regulation or moralizing. Chinese officials expect society to naturally adapt jobs and skills to technological changes rather than resisting automation or burdening companies [54]. However, civil society and academics increasingly question this laissez-faire approach as mass unemployment looms [55]. How Beijing balances economic priorities, social stability and workers' welfare remains uncertain.

    While broadly permissive of workplace automation, China faces rising pressures to address detrimental impacts on vulnerable migrant labourers. State planning agencies increasingly discuss strategies to optimize, not simply maximize, automation [303]. Some urge strengthening protections and social security programs for displaced workers, while providing targeted retraining toward higher-skilled roles [304]. However, pro-growth factions resist reforms that could discourage foreign investment or hamper industrial upgrading [305]. Reconciling tensions between development goals, business demands and social stability will test Chinese policymakers as automation's labour effects accumulate [306]. The lack of independent unions leaves few channels for workers to influence the state's reform agenda [307].

    Gulf States

    Several Gulf countries like Saudi Arabia and the United Arab Emirates have promoted visions of extensive automation across private industries and public administration [56,138]. With small domestic labour pools, these states rely heavily on millions of vulnerable immigrant workers, who risk displacement by AI and robots [57,139].

    Developing Countries

    Most developing countries prioritize industrialization and economic catch-up before regulating AI impacts [60]. India and ASEAN members focus policies on growth, technology infrastructure and human capital development [61]. They aim to climb the value chain from basic manufacturing toward advanced automation. Populous developing countries expect to leverage young demographics and large workforces as automation still requires human input [62]. However, wilful ignorance of automation’s risks jeopardizes livelihoods. Sudden workforce exclusions could provoke unrest [63]. At minimum, developing countries must prepare workers for technological transitions even if not impeding investment.

    International Coordination

    While AI development and governance remain largely national in orientation, some scholars urge greater international cooperation and coordination [194]. Universal guidelines could encourage responsible innovation and prevent a race to the bottom in ethics and rights protections [195]. However, competitive economic incentives impede concrete global accords [196]. The Council of Europe's efforts represent an important step, but translating principles into national policies and organizational practices remains complex across diverse contexts [197]. Regional partnerships on joint oversight mechanisms tailored to local needs may offer more feasible paths to coordinated governance.

    All nations balance retaining domestic policy autonomy with recognizing AI’s global impacts. Calls exist for transnational principles or accords governing AI ethics [64]. The OECD advocates public consultation, transparency, accountability, and international cooperation [65]. However, each country ultimately prioritizes its own economic interests. Competing national advantages impede concrete binding agreements. This risks a race to the bottom in social protections absent shared ethical foundations [66]. While some positive initiatives like the Global Partnership on AI promote collaboration, improving international AI governance remains an immense challenge [67].

    Regional bodies are beginning to pioneer shared principles for governing AI impacts. The Council of Europe adopted the first modern international legal framework on AI ethics in 2021 comprising the Protocol on AI and Convention 108+ [123]. These instruments obligate 47 diverse member nations to uphold human rights and rule of law in AI usage, including for workplace technologies. The conventions demonstrate the viability of transnational collaboration around emerging technologies given sufficient political will. However, translating high-level principles into national policies and corporate practices remains complex [124]. Tangible enforcement and accountability mechanisms are needed alongside aspirational declarations.


    The UN's International Labour Organization could provide a forum for establishing global accords on protecting workers' rights and welfare amid AI transitions. The ILO's recent Global Commission on the Future of Work proposes international governance cooperation to ensure automation complements rather than displaces human labour [308]. Key principles include transparent and democratic oversight of workplace AI, augmenting workers through upskilling rather than full substitution, and strengthening collective bargaining powers [309]. However, competitive pressures and diverse national contexts impede concrete multilateral commitments [310]. While an important agenda setter, the ILO lacks regulatory authority. Achieving harmonized global action relies on rallying political will among member states.

    Worker Reskilling and Retraining

    As automation alters job requirements, proactive policies around worker reskilling and mobility will be critical. Singapore’s SkillsFuture initiative subsidizes skills upgrading and career coaching across occupations vulnerable to disruption [109]. The EU’s Skills Agenda aims to boost digital skills through partnerships between industry, education, and government [110]. Critics argue reskilling policies overly emphasize individual responsibility rather than systemic reforms [111]. However, skills support combined with strengthened social protections could aid workforce transitions. The International Labour Organization advocates balancing employer needs, government protections, and worker participation in shaping AI transitions [112], but sustainable, equitable training programs rely on long-term resource commitments some governments resist.

    Worker retraining programs necessitate circumspect curricula reflecting projected occupational transitions [232]. Governments face pressures to make swift reforms before public resentment towards automation festers [233]. However, accurately anticipating jobs at highest risk of automation remains challenging [234]. Rapidly implementing programs around uncertain projections risks mismatching skills training with actual industry needs [235]. Deep involvement of companies and unions could provide vital workforce insights, but also introduces interests conflicting with workers’ welfare [236]. A period of small-scale pilots and incremental reforms may enable carefully tailored, widely accessible reskilling initiatives [237].

    Critics argue reskilling places undue burdens on already disadvantaged populations to adapt [161]. Structural barriers like discrimination persist regardless of acquired skills [162]. Effective programs necessitate wraparound services addressing multiple needs to enable mobility [163]. Singapore's SkillsFuture supplements training subsidies with career guidance, wage support and tailored coaching [164]. The EU provides job search assistance, relocation grants and guaranteed social protection alongside reskilling [165]. Holistic transition policies combining multiple interventions demonstrate greater success [166], but such comprehensive initiatives require long-term resource commitments some governments resist.

    Successful worker reskilling hinges on aligning training programs with actual occupational transitions. However, accurately projecting automation's impacts across industries remains challenging [311,323]. Analysts urge policymakers to shift focus from specific technological capabilities toward overall job tasks and activities that may be feasibly automated [312,324]. This functional approach provides more pragmatic insights for targeting threatened roles and skills. Further research is vital to mapping automation potentials across diverse work tasks, not just occupations [313,325]. Reskilling initiatives also require built-in flexibility and continuous environmental scanning to adjust training as technology and adoption trajectories evolve [314,326].
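    To make this task-based framing concrete, one simple way to operationalize it is a time-weighted exposure score over an occupation's constituent tasks. The sketch below is purely illustrative; the task list, time weights and automatability judgments are invented:

```python
# Task-weighted automation exposure for one occupation (illustrative).
# Weights = assumed share of working time per task; automatability = an
# assumed 0-1 judgment of how feasibly the task could be automated.

def automation_exposure(tasks: list[tuple[str, float, float]]) -> float:
    """Time-weighted average automatability across an occupation's tasks."""
    total_weight = sum(weight for _, weight, _ in tasks)
    return sum(weight * score for _, weight, score in tasks) / total_weight

paralegal_tasks = [
    ("document review",             0.40, 0.8),
    ("drafting standard contracts", 0.25, 0.6),
    ("client interviews",           0.20, 0.1),
    ("court filings",               0.15, 0.5),
]
print(f"Exposure score: {automation_exposure(paralegal_tasks):.2f}")
# Roughly half this hypothetical role's time is automatable, pointing
# retraining toward the interpersonal tasks machines handle poorly.
```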

    Practical Evidence and Experiments

    Beyond conceptual debates, accumulating workplace evidence reveals both benefits and challenges of AI adoption along with promising approaches.

    Some companies are trying to improve transparency and accountability through internal audits and algorithmic impact assessments. For example, Google asked external reviewers to audit its health care AI for fairness, privacy, and efficacy issues [142,227]. Microsoft has an internal Aether oversight committee monitoring its AI systems [143,228]. However, critics argue that internal governance cannot substitute for external regulation and scrutiny [144,229]. Independent algorithmic auditing by accredited third parties could verify AI systems’ compliance with social values [145,230]. External audits avoid conflicts of interest associated with companies judging their own technologies' ethics, but truly comprehensive oversight still requires enabling meaningful participation by impacted communities beyond just technical experts [146,231].
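    As a flavour of one check an accredited third-party audit might run, the sketch below computes a demographic parity gap in favourable outcome rates across groups. The decision data and the tolerated gap are hypothetical, and a real audit would examine many further criteria:

```python
# One fairness check an external algorithmic audit might run: demographic
# parity, i.e. whether favourable outcome rates differ across groups.
# The decision records and the 0.10 tolerance are hypothetical.

from collections import defaultdict

def positive_rates(records: list[tuple[str, int]]) -> dict[str, float]:
    """Favourable-outcome rate per group; outcome 1 = favourable."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {group: positives[group] / totals[group] for group in totals}

decisions = ([("a", 1)] * 60 + [("a", 0)] * 40 +  # group a: 60% favourable
             [("b", 1)] * 45 + [("b", 0)] * 55)   # group b: 45% favourable
rates = positive_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
print("flag for human review" if gap > 0.10 else "within assumed tolerance")
```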

    While some industries like agriculture increasingly deploy autonomous technologies, most workplaces exhibit moderate automation penetration currently [327]. Gradual integration allows evaluating impacts on productivity, employment levels and service quality over time [328]. Small-scale pilots in limited operational areas provide contained opportunities to iteratively optimize human-robot collaboration [329]. Longitudinal studies tracking employee perceptions, stress levels and skill development can reveal positive and negative effects [330]. Such evidence informs ethical scaling focused on complementing human strengths [331], but gradual automation should not engender complacency around transformations in work and power dynamics [332].

    Cultivating Public Discourse

    Advancing AI for societal good requires cultivating broad public discourse and literacy. Citizen panels, embedded ethnography and
