Document Name: Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models
Date Uploaded to OECD.AI: 2024-01-24
Objectives: Fairness
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Sector: Health Group: Civil Society
Related Lifecycle Stage(s): 1. Build and Interpret Model 2. Plan and Design
Type of Approach: Procedural
Country (Origin): Switzerland
Country (Scope): International
Principles: Key points:
- Provides guidance on the ethical and governance challenges posed by large multi-modal models (LMMs) for health-related applications
- Provides recommendations for the governance of LMMs to address the potential challenges while maximizing benefits during the development, provision, and deployment phases
- Emphasizes the importance of involving diverse stakeholders in designing the governance of AI
- Highlights international cooperation and the collective development of international rules for the governance of AI as necessary to ensure that LMMs are developed in a responsible and ethical manner
Ethical principles highlighted:
- Protect autonomy
  - Humans to remain in control of healthcare systems and medical decisions
  - Data privacy and confidentiality to be protected by valid informed consent through appropriate legal frameworks for data protection
- Promote human well-being, human safety, and the public interest
  - Designers of AI to satisfy regulatory requirements for safety, accuracy, and efficacy for well-defined uses or indications
- Ensure transparency, explainability, and intelligibility
  - Government regulators to require transparency of certain aspects of AI technology, while accounting for proprietary rights, to improve oversight and assessment of safety and efficacy
- Foster responsibility and accountability
  - AI developers and providers to have and to enforce strong data protection laws and data protection impact assessments
- Ensure inclusiveness and equity
  - Governments to involve diverse stakeholders in design
- Promote responsive and sustainable AI
  - Governments to support the collective development of international rules for the governance of AI
Document Description and Other Notes (from OECD.AI Website): Artificial Intelligence (AI) refers to the capability of algorithms integrated into systems and tools to learn from data so that they can perform automated tasks without explicit programming of every step by a human. Generative AI is a category of AI techniques in which algorithms are trained on data sets that can be used to generate new content, such as text, images or video. This guidance addresses one type of generative AI, large multi-modal models (LMMs), which can accept one or more types of data input and generate diverse outputs that are not limited to the type of data fed into the algorithm. It has been predicted that LMMs will have wide use and application in health care, scientific research, public health and drug development. LMMs are also known as “general-purpose foundation models”, although it is not yet proven whether LMMs can accomplish a wide range of tasks and purposes.
Organization(s): World Health Organization
Document Name: Digital Catapult: Ethics upskilling and tool development for startups
Date Uploaded to OECD.AI: 2023-09-14
Objectives: Reskill or Upskill
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Users: 1. Business Leaders 2. Developers 3. Researchers
Related Lifecycle Stage(s): 1. Build and Interpret Model 2. Plan and Design
Type of Approach: Not Defined
Country (Origin): United Kingdom
Country (Scope): United Kingdom
Principles: Key points:
- Showcases how the Machine Intelligence Garage program has helped more than 100 startups accelerate the development of their industry-leading machine learning and artificial intelligence solutions
- Highlights the success of the program in providing startups with access to critical computation resources through partnerships with leading tech companies like Amazon Web Services and Google Cloud Platform
- Stresses the importance of ethical considerations in the implementation of AI, as well as the role of the Machine Intelligence Garage program in supporting responsible and ethical AI by providing startups with AI Ethics Deep Dives, ethics consultations, and the Digital Catapult AI Ethics Framework
Ethical principles highlighted:
- Legal and regulatory risks
- Developing responsible and ethical products and services
Document Description and Other Notes (from OECD.AI Website): The Ethics Upskilling Programme is focused on providing startups with practical and impactful ethics tools and the space to discover and learn from each other. The workshops are focused on a specific area of ethics and provide resources to help startups advance in that specific area, and the complementary peer mentoring sessions allow startups to discuss the resources and activities and to learn from each other.
Organization(s): Digital Catapult
Document Name: Fujitsu AI Ethics Impact Assessment
Date Uploaded to OECD.AI: 2023-08-31
Objectives: 1. Accountability - 2. Fairness - 3. Respect of Human Rights
Tool Type: 1. Audit Process 2. Risk Management Framework
Target Sector(s)/ Group(s)/ User(s): Groups: 1. Civil Society 2. Private Sector 3. Public Sector --- Users: 1. Business Leaders 2. Data Scientists 3. Project Managers
Related Lifecycle Stage(s): 1. Operate and Monitor 2. Verify and Validate 3. Plan and Design
Type of Approach: Procedural
Country (Origin): United Kingdom, Japan, European Union
Country (Scope): All Countries
Principles: Key points:
- Highlights the critical role of developing ethical principles and guidelines for promoting trustworthy AI and preventing AI from causing serious social problems
- Proposes the AI Ethics Impact Assessment as a method to evaluate the ethical impact of AI on people and society based on AI ethics guidelines
Ethical principles highlighted:
- Development of AI Ethical Principles
  - AI ethical principles (transparency, accountability, and privacy) as proposed by the European Commission
  - AI Utilization Guidelines as published by the Japan Ministry of Internal Affairs and Communications
  - Governance Guidelines for Implementation of AI Principles as published by the Japan Ministry of Economy, Trade, and Industry
- Trustworthy AI
  - Human oversight, technical robustness, and safety and accuracy as provided in the guidelines for Trustworthy AI by the European High-Level Expert Group on AI
- Avoidance of Fairness and Other Biases
  - AI as prone to making biased decisions based on training data, including past cases of discrimination and unfairness
- Ethics Impact Assessment
  - Conducting AI Ethics Impact Assessments during the design and auditing of AI systems as proactive measures to prevent AI from causing serious social problems
Document Description and Other Notes (from OECD.AI Website): Fujitsu AI Ethics Impact Assessment can be used to future-proof AI systems so that they comply with ethics principles and legal requirements. It can also be used to create evidence to engage with auditors, approvers, and stakeholders. This technology is used to assess the ethical impact and risks of AI systems based on international AI ethics guidelines and previous AI incidents. The methodological approach is freely available through Fujitsu’s website. Fujitsu AI Ethics Impact Assessment includes the following steps:
1) Based on the list of stakeholders relevant to an AI system, an AI system model is generated to map the interactions among the components of the AI system (e.g. training dataset, AI model, output); the stakeholders directly involved in the system (e.g. business users); and the stakeholders indirectly involved in the system (e.g. citizens).
2) Fujitsu AI Ethics Impact Assessment automatically assesses the ethical requirements and potential risks emerging from each interaction.
3) Fujitsu AI Ethics Impact Assessment automatically develops a risk analysis table containing a comprehensive list of ethical risks. The risk analysis is based on the seven requirements of Trustworthy Artificial Intelligence (AI) published by the European Commission’s High-Level Expert Group on Artificial Intelligence. Fujitsu has further developed these requirements into over a hundred sub-categories of ethical risks, which are consistent with the Assessment List for Trustworthy Artificial Intelligence (ALTAI) developed by the European Union and with other assessment lists for software quality.
4) Fujitsu AI Ethics Impact Assessment, through the AI Ethics Risk Comprehension technology, automatically contextualises the identified risks based on the AI use case under examination and provides prompts that can help the user define effective countermeasures.
Organization(s): Fujitsu
Document Name: Data Ethics Framework
Date Uploaded to OECD.AI: 2023-05-24
Objectives: 1. Accountability - 2. Fairness - 3. Transparency and Explainability
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United Kingdom
Country (Scope): Not Defined
Principles: Key points:
- Provides guidance on appropriate and responsible use of data in the government and public sector, helping public servants understand ethical considerations and address them within their projects
- Encourages responsible innovation through seven specific actions:
  - Defining public benefit and user need
  - Involving stakeholders
  - Complying with the law
  - Being aware of the quality and limitations of the data and mitigating against bias
  - Ensuring privacy and security by protecting personal data
  - Evaluating and considering wider policy implications
  - Ensuring skills, training, and maintenance for the longevity of the project
Ethical principles highlighted:
- Promotes transparency, fairness, and accountability, the three overarching principles
- Promotes inclusivity
- Encourages mitigation of biases that may influence AI models' outcomes
  - To ensure respect for the dignity of individuals
  - To ensure non-discrimination and consistency with public interests, including human rights and democratic values
Document Description and Other Notes (from OECD.AI Website): The Data Ethics Framework guides appropriate and responsible data use in government and the wider public sector. It helps public servants understand ethical considerations, address these within their projects, and encourages responsible innovation.
Organization(s): United Kingdom Government Digital Services
Document Name: Data Ethics Canvas
Date Uploaded to OECD.AI: 2023-05-24
Objectives: Fairness
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United Kingdom
Country (Scope): Not Defined
Principles: Key points:
- Emphasizes a comprehensive and iterative approach to data ethics by ensuring that the principles are prioritized throughout the lifecycle of a data-driven project
Ethical principles highlighted:
- Considering the rights around data sources
- Identifying limitations and potential bias in data sources
- Being aware of ethical codes and legislative context
- Considering the potential impact of the project on individuals and society
- Mitigating negative impact
- Engaging with people who are impacted by the project
- Being transparent and open about the project
- Being responsible when sharing data with others
Document Description and Other Notes (from OECD.AI Website): The Data Ethics Canvas is a tool for anyone who collects, shares or uses data. It helps identify and manage ethical issues – at the start of a project that uses data, and throughout. It encourages you to ask important questions about projects that use data, and reflect on the responses. These might be: What is your primary purpose for using data in this project? Who could be negatively affected by this project? The Data Ethics Canvas provides a framework to develop ethical guidance that suits any context, whatever the project’s size or scope.
Organization(s): Open Data Institute
Document Name: Australia’s AI Ethics Principles
Date Uploaded to OECD.AI: 2023-05-24
Objectives: 1. Accountability - 2. Fairness - 3. Human Wellbeing - 4. Privacy and Data Governance - 5. Safety
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): Australia
Country (Scope): Australia
Principles: Key points:
- Provides eight voluntary AI Ethics Principles that aim to help achieve safer, more reliable, and fairer outcomes for all Australians (businesses and governments alike) by promoting ethical AI practices
Ethical principles highlighted:
- Human, societal, and environmental well-being: AI systems to benefit individuals, society, and the environment
- Human-centered values: AI systems to respect human rights, diversity, and the autonomy of individuals
- Fairness: AI systems to be inclusive and accessible, and not to involve or result in unfair discrimination against individuals, communities, or groups
- Privacy protection and security: AI systems to respect and uphold privacy rights and data protection and to ensure the security of data
- Reliability and safety: AI systems to operate reliably in accordance with their intended purpose
- Transparency and explainability: need for transparency and responsible disclosure so that people can understand when AI is significantly impacting them and find out when an AI system is engaging with them
- Contestability: need for a timely process to allow people to challenge the use or outcomes of an AI system
- Accountability: need for the people responsible for the different phases of the AI system lifecycle to be identifiable and accountable for the outcomes of AI systems, with human oversight of AI systems
Document Description and Other Notes (from OECD.AI Website): Australia’s 8 Artificial Intelligence (AI) Ethics Principles are designed to ensure AI is safe, secure and reliable. They will help:
- achieve safer, more reliable and fairer outcomes for all Australians
- reduce the risk of negative impact on those affected by AI applications
- businesses and governments to practice the highest ethical standards when designing, developing and implementing AI.
A voluntary framework: the principles are voluntary. We intend them to be aspirational and to complement, not substitute, existing AI regulations and practices. By applying the principles and committing to ethical AI practices, you can:
- build public trust in your product or organisation
- drive consumer loyalty in your AI-enabled services
- positively influence outcomes from AI
- ensure all Australians benefit from this transformative technology.
Organization(s): Australian Government: Department of Industry, Science, and Resources
Document Name: SIENNA: Technology, ethics and human rights
Date Uploaded to OECD.AI: 2023-05-22
Objectives: 1. Performance - 2. Robustness and Digital Security
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Netherlands
Country (Scope): Not Defined
Principles: Key points:
- Provides insight into the unique challenges that come with the development of new and emerging technologies (human genomics, human enhancement, and AI and robotics), how these technologies challenge human rights, and what is considered ethical today
- Includes key messages and recommendations from the EU-funded Horizon 2020 SIENNA project (2017-2021)
- Promotes the need for an Ethical Impact Assessment
- Emphasizes the need for stakeholders and developers of technologies to develop frameworks, guidelines, and regulations that place ethics and human rights at the center of managing and regulating these technologies: research ethics assessment, Ethics by Design, technical standards, regulations, and Corporate Social Responsibility (CSR) policies
Ethical principles highlighted:
- Note: does not provide a comprehensive list of AI ethics principles, but highlights the development of a framework for AI ethics that emphasizes the need for AI development to focus on non-technical components such as social, economic, legal, ethical, and environmental policies, and multi-stakeholder perspectives
Document Description and Other Notes (from OECD.AI Website): SIENNA (Stakeholder-Informed Ethics for New technologies with high socio-ecoNomic and human rights impAct) has developed ethical frameworks, recommendations for better regulation and operational tools for the ethical management of human genomics, human enhancement and AI & robotics.
Organization(s): SIENNA
Document Name: MIGarage Ethics Framework
Date Uploaded to OECD.AI: 2023-05-22
Objectives: Transparency and Explainability
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United Kingdom
Country (Scope): Not Defined
Principles: Key points:
- Provides a practical framework for ethical considerations in AI development and implementation
- Emphasizes the importance of proactive ethical considerations in ensuring the positive impact of AI on society and its adoption by users
- Highlights seven key concepts that businesses should consider to ensure the positive impacts of AI while avoiding negative consequences:
  - Use AI in ways that benefit individuals, groups, and society as a whole
  - Know and manage the risks
  - Use data responsibly
  - Be worthy of trust
  - Communicate clearly, honestly, and directly
  - Consider the impact on society and the environment
  - Consider the business model
Ethical principles highlighted:
- Importance of questioning, assessing, and mitigating risks at all stages of development
- Importance of transparency and communication in building trust between businesses, users, and stakeholders in the AI development process
Document Description and Other Notes (from OECD.AI Website): The responsible use of algorithms and data is paramount for the sustainable development of machine intelligence applications. Digital Catapult's Ethics Committee has created an Ethical Framework consisting of seven concepts, along with corresponding questions intended to inform how they may be applied in practice.
Organization(s): Digital Catapult
Document Name: Ethics Lab - The Box
Date Uploaded to OECD.AI: 2023-05-22
Objectives: Not Defined
Tool Type: Toolkit/Software
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Germany
Country (Scope): Not Defined
Principles: Key points: No specified document for reference on the website. Ethical principles highlighted: No specified document for reference on the website.
Document Description and Other Notes (from OECD.AI Website): The Box is a tool from our toolkit, Dynamics of AI Principles. Read our manual to learn about the idea behind the Box, how to use it (with a use-case), and which questions to ask to understand each principle.
Organization(s):
Document Name: An interdisciplinary framework to operationalise AI ethics
Date Uploaded to OECD.AI: 2023-05-22
Objectives: 1. Accountability - 2. Privacy and Data Governance - 3. Robustness and Digital Security - 4. Transparency and Explainability
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: 1. Technical 2. Procedural
Country (Origin): Europe
Country (Scope): Not Defined
Principles: Key points:
- Proposes an interdisciplinary framework to operationalize AI ethics by addressing the practical challenges of implementing ethical principles
- The framework includes a values, criteria, indicators, and observables (VCIO) model for rating the ethical characteristics of AI applications and a classification approach for handling AI ethical challenges
- Highlights the challenges of context dependency, the socio-technical nature of AI systems, and the different ease-of-use requirements of system developers, users, oversight bodies, policymakers, and consumers
- Introduces a classification of different AI application contexts based on the risk they pose to affected individuals and society overall
- Proposes six values that form the basis for AI ethics, bringing the ethical principles into actionable practice when designing, implementing, and evaluating AI systems:
  - Accountability
  - Justice
  - Privacy
  - Reliability
  - Transparency
  - Environmental sustainability
Ethical principles highlighted:
- Note: incorporated in the six values
Document Description and Other Notes (from OECD.AI Website): Artificial intelligence (AI) increasingly pervades all areas of life. To seize the opportunities this technology offers society, while limiting its risks and ensuring citizen protection, different stakeholders have presented guidelines for AI ethics. Nearly all of them consider similar values to be crucial and a minimum requirement for “ethically sound” AI applications – including privacy, reliability and transparency. However, how organisations that develop and deploy AI systems should implement these precepts remains unclear. This lack of specific and verifiable principles endangers the effectiveness and enforceability of ethics guidelines. To bridge this gap, this paper proposes a framework specifically designed to bring ethical principles into actionable practice when designing, implementing and evaluating AI systems. We have prepared this report as experts in spheres ranging from computer science, philosophy, and technology impact assessment via physics and engineering to social sciences, and we work together as the AI Ethics Impact Group (AIEI Group). Our paper offers concrete guidance to decision-makers in organisations developing and using AI on how to incorporate values into algorithmic decision-making, and how to measure the fulfilment of values using criteria, observables and indicators combined with a context-dependent risk assessment. It thus presents practical ways of monitoring ethically relevant system characteristics as a basis for policymakers, regulators, oversight bodies, watchdog organisations and standards development organisations. The framework thus works towards better control, oversight and comparability of different AI systems, and also forms a basis for informed choices by citizens and consumers.
The report does so in four steps: In chapter one, we present the three main challenges for the practical implementation of AI ethics: (1) the context-dependency of realising ethical values, (2) the sociotechnical nature of AI usage and (3) the different requirements of different stakeholders concerning the ‘ease of use’ of ethics frameworks. We also explain how our approach addresses these three challenges and show how different stakeholders can make use of the framework. In chapter two, we present the VCIO model (values, criteria, indicators, and observables) for the operationalisation and measurement of otherwise abstract principles and demonstrate the functioning of the model for the values of transparency, justice and accountability. Here, we also propose context-independent labelling of AI systems, based on the VCIO model and inspired by the energy efficiency label. This labelling approach is unique in the field of AI ethics at the time of writing. For the proposed AI Ethics Label, we carefully suggest six values, namely justice, environmental sustainability, accountability, transparency, privacy, and reliability, based on contemporary discourse and operability. Chapter three introduces the risk matrix, a two-dimensional approach for handling the ethical challenges of AI, which enables the classification of application contexts. Our method simplifies the classification process without abstracting too much from the given complexity of an AI system’s operational context. Decisive factors in assessing whether an AI system could have societal effects are the intensity of the system’s potential harm and the dependence of the affected person(s) on the respective decision. This analysis results in five classes which correspond to increasing regulatory requirements, i.e. from class 0 that does not require considerations in AI ethics to class 4 in cases where no algorithmic decision-making system should be applied. 
Chapter four then reiterates how these different approaches come together. We also make concrete propositions to different stakeholders concerning the practical use of the framework, while highlighting open questions that require a response if we ultimately want to put ethical principles into practice. The report does not have all the answers but provides valuable concepts for advancing the discussion among system developers, users and regulators. Coming together as AI Ethics Impact Group, led by VDE Association for Electrical, Electronic & Information Technologies and Bertelsmann Stiftung and presenting our findings here, we hope to contribute to work on answering these open questions, to refine conceptual ideas to support harmonisation efforts, and to initiate interdisciplinary networks and activities.
Organization(s): AI Ethics Impact Group led by VDE | Bertelsmann Stiftung
Document Name: Understanding artificial intelligence ethics and safety
Date Uploaded to OECD.AI: 2023-05-20
Objectives: 1. Accountability - 2. Transparency and Explainability
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Sector: Public Governance
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United Kingdom
Country (Scope): Not Defined
Principles: Key points:
- Provides guidance on the responsible design and implementation of AI systems in the public sector
- Offers end-to-end guidelines that consider aspects of application type and domain context
Ethical principles highlighted:
- Accountability of AI systems for their decisions and actions
- Transparency, so that AI processes are open, fair, and understandable
- Explainability, to be able to understand and justify algorithmically supported decisions
- Inclusivity, to serve diverse communities in the development and deployment of AI systems
- Safety, security, and functionality of AI-driven systems as intended
- Privacy and protection of personal data throughout the lifecycle of AI-powered projects
Document Description and Other Notes (from OECD.AI Website): A remarkable time of human promise has been ushered in by the convergence of the ever-expanding availability of big data, the soaring speed and stretch of cloud computing platforms, and the advancement of increasingly sophisticated machine learning algorithms. Innovations in AI are already leaving a mark on government, by improving the provision of essential social goods and services from healthcare, education, and transportation to food supply, energy, and environmental management. These bounties are likely just the start. The prospect that progress in AI will help government to confront some of its most urgent challenges is exciting, but legitimate worries abound. As with any new and rapidly evolving technology, a steep learning curve means that mistakes and miscalculations will be made and that both unanticipated and harmful impacts will occur. In order to manage these impacts responsibly and to direct the development of AI systems toward optimal public benefit, The Alan Turing Institute's public policy programme partnered with the Office for Artificial Intelligence and the Government Digital Service to produce guidance on the responsible design and implementation of AI systems in the public sector. The guide, Understanding Artificial Intelligence Ethics and Safety, is the most comprehensive guidance on the topic of AI ethics and safety in the public sector to date. It identifies the potential harms caused by AI systems and proposes concrete, operationalisable measures to counteract them. The guide stresses that public sector organisations can anticipate and prevent these potential harms by stewarding a culture of responsible innovation and by putting in place governance processes that support the design and implementation of ethical, fair, and safe AI systems. 
The guidance is relevant to everyone involved in the design, production, and deployment of a public sector AI project: from data scientists and data engineers to domain experts, delivery managers and departmental leads. Our aim -- and hope -- in writing the guide is to encourage civil servants interested in conducting AI projects to make considerations of AI ethics and safety a first priority.
Organization(s): The Alan Turing Institute
Document Name: The AI Ethics Playbook: Implementing Ethical Principles Into Everyday Business
Date Uploaded to OECD.AI: 2023-03-20
Objectives: 1. Accountability - 2. Privacy and Data Governance - 3. Reskill or Upskill - 4. Transparency and Explainability
Tool Type: 1. Risk Management Framework 2. Awareness Building 3. Governance Framework 4. Education and Training
Target Sector(s)/ Group(s)/ User(s): Sector: Corporate Governance
Related Lifecycle Stage(s): 1. Operate and Monitor 2. Deploy 3. Plan and Design
Type of Approach: Procedural
Country (Origin): United Kingdom
Country (Scope): All Countries, France, Australia, Spain
Principles: Key points:
- Provides a comprehensive guide to help organizations ethically design, develop, and deploy AI systems
- Provides a pragmatic approach for organizations with varying levels of AI adoption and familiarity with ethical principles
- Emphasizes the importance of involving a diversity of experts and stakeholders in the decision-making process: ethicists, social scientists, legal experts, human rights advocates, and end-users
Ethical principles highlighted:
- Fairness
- Accountability
- Responsibility
- Transparency
- Safety
- Privacy
- Inclusivity
- Respect for human rights
Document Description and Other Notes (from OECD.AI Website): This playbook is intended as a practical tool to help organisations consider how to ethically design, develop and deploy artificial intelligence (AI) systems. It can be read cover to cover, like a report, or you can find and use the appropriate chapter for your role and purpose. It is designed to cater to organisations with varying levels of maturity with regards to AI adoption and familiarity with ethical principles. Some chapters may, therefore, be more relevant than others.
Chapter 1: An overview of common ethical principles. This chapter will help introduce some of the ethical issues presented by AI to interested readers throughout an organisation. It can be used to develop a basic level of familiarity with these topics.
Chapter 2: A proposed organisational structure for dealing with ethical issues. This chapter is designed to help organisations think through the governance of a system. It is useful for people who make decisions about the structure and key resources of an organisation.
Chapter 3: A Self-Assessment Questionnaire designed to help establish ethical risks, and a series of tools to help you answer and address issues raised in the questionnaire. This chapter contains a Self-Assessment Questionnaire that can help establish the risks presented by a system, and tools to identify and address those risks. It is most relevant to people working directly with AI systems who are aiming to implement ethical principles, such as product managers or responsible AI champions.
Chapter 4: Key themes and recommendations from the report. This chapter will be useful for anyone hoping to get a sense of the playbook without reading it in detail. It might be used as an educational tool for people who are less directly involved in AI projects or for senior leadership.
Organization(s): GSM Association
Document Name: AI Ethics Self-Assessment Questionnaire
Date Uploaded to OECD.AI: 2023-04-04
Objectives: 1. Accountability - 2. Fairness - 3. Human Wellbeing - 4. Privacy and Data Governance - 5. Respect of Human Rights - 6. Safety - 7. Sustainability (Help the Planet) - 8. Transparency and Explainability
Tool Type: Risk Management Framework
Target Sector(s)/ Group(s)/ User(s): Sector: Science and Technology --- Group: Private Sector --- Users 1. Business Leaders 2. Developers 3. Project Managers
Related Lifecycle Stage(s): Plan and Design
Type of Approach: Procedural
Country (Origin): United States
Country (Scope): All Countries
Principles: Key points: Provides guidance to employers, psychologists, investigators, and compliance officers on how to interpret and comply with the Uniform Guidelines on Employee Selection Procedures Types of employers covered by the guidelines Requirements necessary for validation of employee selection procedures Different methods of validation Adverse impact that selection procedures can have on certain demographic groups and how to minimize it Documentation requirements for the validation process Ethical principles highlighted: Note: Does not explicitly highlight principles of AI ethics since it predates the widespread use of AI in employment selection procedures but incorporates principles of fair treatment, non-discrimination, and transparency
Document Description and Other Notes (from OECD.AI Website): AI has the potential to truly change the world. However, this represents not only an opportunity but also a risk. As the adoption of AI accelerates, organisations and governments around the world are considering how best to harness this technology for the benefit of people and planet. It is vital, therefore, that AI is designed, developed and deployed ethically. This self-assessment questionnaire is designed to help you bridge the gap between ethical AI principles and ethical AI practice. It asks questions structured around the principles outlined in the AI Ethics Playbook developed by the GSMA. The questions are differentiated depending on the project's risk level, so it will be quicker to complete for lower risk projects. We hope it helps you to operationalise ethical AI principles and do good business responsibly. The questionnaire is constantly evolving and regularly updated. The primary objectives of conducting this assessment are to: Evaluate the overall risk level for a specific use-case, classifying it as high, medium or low. Please also bear in mind that use cases with unacceptable risk levels are prohibited under a current Proposal for Regulation of the European Parliament and of the Council*. Answer the relevant ethical questions. The assessment works through each of the principles, with the specific questions differentiated depending on the risk level. The lower the risk, the fewer questions you will be asked to complete. Depending on your answer to the question, you may receive suggestions for further actions. Record information to help you track status, report, conduct future planning and potential auditing. You should carry out the assessment in the following order: Complete the AI use case's general information in the 'Pre-Assessment' tab. In the 'Pre-Assessment' tab, you will also find three questions under 'Risk Assessment.'
Your answers to these questions will determine the risk level of your AI system and hence, the questions you will be later asked. Work through the AI ethics principles listed in the 'Ethical Dimensions' menu. A series of relevant baseline questions will be established for each of the principles based on the risk level. Review the proposed further actions. At the end of each question list, you will find a text box to record status, evidence and any other notes, as well as to support prioritisation and planning. Review a summary of your answers across dimensions in the 'Results' section. Repeat the process as necessary. The first assessment should happen in the design phase, with re-assessments conducted at key stages of the product lifecycle - for example, development and deployment. Further reassessments might also be triggered periodically, with the time period depending on the risk level; or by significant changes to the deployment, such as an increase in scale. All versions of the assessment should be saved for posterity.
Organization(s): GSM Association
Company Logo(s):
Document Name: Appen Crowd Code of Ethics to Build Better AI
Date Uploaded to OECD.AI: 2019-10-23
Objectives: 1. Fairness - 2. Human Wellbeing - 3. Privacy and Data Governance - 4. Transparency and Explainability
Tool Type: Sectoral Code of Conduct
Target Sector(s)/ Group(s)/ User(s): Sector: Business Group
Related Lifecycle Stage(s): Collect and Process Data
Type of Approach: Procedural
Country (Origin): Australia
Country (Scope): International
Principles: Key points: Presents the company’s commitment to providing opportunities to a global workforce with a focus on equity, inclusivity, and well-being Ethical principles highlighted: Equity - “Fair play” as part of Appen’s Code of Ethics, ensuring its contributors are paid fairly and above minimum wage compensation across its global markets, and built-in technology capabilities are available in its platform to benefit its Crowd and communities Inclusivity - “Diversity and inclusion” and “Crowd voice” as part of Appen’s Code of Ethics, offering opportunities for individuals of all abilities and backgrounds to contribute to its workforce while also valuing a feedback system for continuous improvement Well-being - “Well-being” as part of Appen’s Code of Ethics, promoting wellness, community, and connections through online fora and best practices Privacy - “Privacy and Confidentiality” as part of Appen’s Code of Ethics, ensuring any information collected about the Crowd is requested solely for the purposes of the project and that private data on individuals is not released to third parties without consent
Document Description and Other Notes (from OECD.AI Website): Appen Limited, the leading provider of high-quality datasets needed by companies and governments to train AI systems quickly and at scale, has published its Crowd Code of Ethics. The initiative reflects the importance of the Crowd’s well-being in creating the data for AI systems and applications that people worldwide increasingly use and depend on every day. Moreover, it formalizes the best practices Appen has developed over the past several years to support and promote contractor wellness. Appen is also now a proud member of the Global Impact Sourcing Coalition (GISC), a global network of businesses creating jobs for those most in need.
Organization(s): Appen Limited
Company Logo(s):
Document Name: IBM Everyday Ethics for Artificial Intelligence
Date Uploaded to OECD.AI: 2022-02-22
Objectives: 1. Accountability - 2. Fairness - 3. Respect for Human Rights - 4. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Sector: Private Sector --- Group: Business --- User: Developers
Related Lifecycle Stage(s): Plan and Design
Type of Approach: Procedural
Country (Origin): United States
Country (Scope): International
Principles: Key points: Provides guidance to team discussions and daily practices for designers and developers of AI systems Outlines five practices of everyday ethics - Consider outcomes - Align with norms and values - Minimize bias and improve inclusivity - Ensure explainability - Protect user data Encourages designers and developers to build and use AI systems in a manner that aligns with the values and principles of the society or community the system affects Ethical principles highlighted: Explainability Fairness Robustness Transparency Privacy
Document Description and Other Notes (from OECD.AI Website): This document represents the beginning of a conversation defining Everyday Ethics for AI. Ethics must be embedded in the design and development process from the very beginning of AI creation. Rather than strive for perfection first, we’re releasing this to allow all who read and use this to comment, critique and participate in all future iterations. So please experiment, play, use, and break what you find here and send us your feedback. Designers and developers of AI systems are encouraged to be aware of these concepts and seize opportunities to intentionally put these ideas into practice. As you work with your team and others, please share this guide with them.
Organization(s): IBM
Company Logo(s):
Document Name: Samsung Principles for AI Ethics
Date Uploaded to OECD.AI: 2022-02-23
Objectives: 1. Accountability - 2. Fairness - 3. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Sector: Industry and Entrepreneurship --- Group: Private Sector --- Users: 1. Business Leaders 2. Developers
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): South Korea
Country (Scope): International
Principles: Key points: Provides a brief yet meaningful set of principles for AI ethics that companies should consider and follow to promote corporate citizenship and ensure compliance with ethical values and legal frameworks Ethical principles highlighted: Fairness - Based on respecting human rights, equality, and diversity in AI throughout its entire lifecycle, avoiding negative or unfair bias, and providing easy access to all users Transparency - Awareness of users that they are interacting with AI, clearly explaining the technology of AI to users and making the process of collecting or utilizing personal data in AI services transparent Accountability - Prioritization of social and ethical responsibility to prevent any infringement on human rights and ensuring strong security measures to protect AI from vulnerabilities and cyber-attacks
Document Description and Other Notes (from OECD.AI Website): AI technology has enormous positive potential but Samsung believes in taking a robust social and ethical approach that implements the technology in a sustainable and ethical way. To support this, they have established a set of AI ethics principles, ‘Fairness’, ‘Transparency’ and ‘Accountability’. These principles are designed to fulfill our social and ethical responsibilities as well as to comply with applicable laws. They have also set up AI ethics guidelines for their employees to ensure that they put their AI ethics principles into practice. Samsung also plans to promote employee awareness of AI ethics through training programs.
Organization(s): Samsung
Company Logo(s):
Document Name: Bias, AI ethics and the HireVue approach
Date Uploaded to OECD.AI: 2022-02-23
Objectives: 1. Fairness - 2. Human Wellbeing - 3. Performance - 4. Privacy and Data Governance
Tool Type: Sectoral Code of Conduct
Target Sector(s)/ Group(s)/ User(s): Sector: Employment and Labour --- Group: Private Sector --- Users: 1. Business Leaders 2. Developers 3. HR Managers 4. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United States
Country (Scope): United States
Principles: Key points: Highlights the company’s commitment to ethical principles surrounding AI technology in their products Discusses how HireVue works to prevent and mitigate bias in assessment algorithms by using a model development process that removes data from consideration by the algorithm that contributes to adverse impact Ethical principles highlighted: Benefiting society - Focus on building systems that augment and improve human decision-making while understanding the impact of their software on individuals, companies, and society at large Designing to promote diversity and fairness - Use of AI technology in HireVue’s products and active work to prevent the introduction or propagation of bias against any group or individual - Review of datasets being used so as to ensure diversity - Building teams from diverse backgrounds to represent the people their systems serve Designing to help people make better decisions - Development of solutions that combine the strengths of machines and people together to improve decision-making Designing for privacy and data protection - Incorporation of privacy of data at each step of their process of technology development and deployment - Provision of transparency and control over the use of data consistent with best practices and legal standards Validating and testing continuously - Testing algorithms prior to use to ensure that unbiased outputs are generated - Ongoing monitoring – real-world behavior vis-à-vis expected behavior.
Document Description and Other Notes (from OECD.AI Website): HireVue recognizes the impact that their software can have on individuals and on society, and they act upon this responsibility with deep commitment. The following principles guide their thoughts and actions as they develop artificial intelligence (AI) technology and incorporate it into their products and technologies. 1. They are committed to benefiting society 2. They design to promote diversity and fairness 3. They design to help people make better decisions 4. They design for privacy and data protection 5. They validate and test continuously
Organization(s): Hirevue
Company Logo(s):
Document Name: Sony Group AI Ethics Guidelines
Date Uploaded to OECD.AI: 2022-02-22
Objectives: 1. Fairness - 2. Privacy and Data Governance - 3. Reskill or Upskill - 4. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Sector: Industry and Entrepreneurship --- Group: Private Sector
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Japan
Country (Scope): International
Principles: Key points: Provides principles of AI Ethics, which Sony aims to follow to contribute to the development of a peaceful and sustainable society while delivering a sense of excitement, wonder, or emotion to the world through the utilization of AI Outlines seven principles that Sony must follow to ensure and promote a dialogue with various stakeholders and the proper utilization and research and development (R&D) of AI within Sony Group - Supporting Creative Lifestyles and Building a Better Society - Stakeholder Engagement - Provision of Trusted Products and Services - Privacy Protection - Respect for Fairness - Pursuit of Transparency - The Evolution of AI and Ongoing Education Ethical principles highlighted: Note: Incorporated in the seven principles
Document Description and Other Notes (from OECD.AI Website): To operate business based on Sony’s Purpose to ”Fill the world with emotion, through the power of creativity and technology.”, Sony Group AI Ethics Guidelines are hereby set forth below to ensure and promote a dialogue with various stakeholders and the proper utilization and research and development of AI within Sony Group.
Organization(s): Sony Group
Company Logo(s):
Document Name: Ethics & Algorithms Toolkit
Date Uploaded to OECD.AI: 2022-03-17
Objectives: 1. Accountability - 2. Fairness - 3. Transparency and Explainability
Tool Type: Toolkit/Software
Target Sector(s)/ Group(s)/ User(s): Groups: 1. Private Sector 2. Public Sector --- Users: 1. Data Scientists 2. Developers 3. Government System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Technical
Country (Origin): United States
Country (Scope): International
Principles:
Document Description and Other Notes (from OECD.AI Website): GovEx, the City and County of San Francisco, Harvard DataSmart, and Data Community DC have collaborated on a practical toolkit for cities to use to help them understand the implications of using an algorithm, clearly articulate the potential risks, and identify ways to mitigate them.
Organization(s): 1. Center for Government Excellence, Johns Hopkins University 2. Datacommunity DC 3. Harvard DataSmart
Company Logo(s):
Document Name: Deutsche Telekom Digital Ethics Guidelines on AI
Date Uploaded to OECD.AI: 2022-02-24
Objectives: 1. Accountability - 2. Fairness - 3. Human Wellbeing - 4. Performance - 5. Reskill or Upskill - 6. Respect for Human Rights - 7. Robustness and Digital Security - 8. Safety - 9. Sustainability (Help the Planet)
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Sector: Industry and Entrepreneurship --- Users: 1. Data Scientists 2. Developers 3. HR Managers 4. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Germany
Country (Scope): Germany
Principles: Key points: Provides thought-provoking and valuable insights into ethical AI use in corporate processes and products Details the steps taken by Deutsche Telekom to develop its digital ethics guidelines on AI, which involved discussions with high-tech companies, universities, and startups during a business trip to Israel Emphasizes the need to consider the impact on society and the environment when developing and using AI Ethical principles highlighted: Need for AI to meet human-defined rules and laws, comply with ethical values and societal conventions, and be developed using deep analysis and evaluation that is transparent, auditable, fair, and fully documented AI vis-à-vis diversity of human experience and values
Document Description and Other Notes (from OECD.AI Website): Deutsche Telekom addresses such questions and many others in public discussions of relevance to the growth of digitalization. That’s why Manuela Mackert, their Chief Compliance Officer, and her team contacted all units at Deutsche Telekom that use AI and contribute to the development and design of this technology. These units are comprised of specialists from Technology and Innovation, as well as Telekom Innovation Laboratories, IT Security, Data Privacy, Finance, and Service, not to mention T-Systems. The objective: To develop a digital ethics policy governing the use of AI.
Organization(s): Deutsche Telekom
Company Logo(s):
Document Name: Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multisociety Statement
Date Uploaded to OECD.AI: 2022-02-23
Objectives: Not Defined
Tool Type: Sectoral Code of Conduct
Target Sector(s)/ Group(s)/ User(s): Sector: Health --- Group: Academia --- Users: 1. Business Leaders 2. Developers 3. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United States
Country (Scope): United States Canada Europe
Principles: Key points: Discusses the ethical considerations associated with the implementation of AI within radiology Highlights the need to start developing codes of ethics and practice for AI that promote any use for helping patients and promoting common good while prohibiting the use of radiology data and algorithms for financial gain without those two attributes Ethical principles highlighted: Ethical use of AI in radiology to promote well-being, minimize harm, and ensure that the benefits and harms are distributed among the possible stakeholders in a just manner Ethical use of AI in radiology to respect human rights and freedoms, including dignity and privacy Ethical use of AI in radiology to promote transparency, responsibility, and accountability of human designers or operators
Document Description and Other Notes (from OECD.AI Website):
Organization(s): 1. American College of Radiology 2. Radiological Society of North America 3. Society for Imaging Informatics in Medicine 4. Canadian Association of Radiologists 5. European Society of Medical Imaging Informatics 6. American Association of Physicists in Medicine 7. European Society of Radiology
Company Logo(s):
Document Name: Kakao Algorithm Ethics
Date Uploaded to OECD.AI: 2022-02-24
Objectives: 1. Fairness - 2. Human Wellbeing - 3. Privacy and Data Governance - 4. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Group: Private Sector --- Users: 1. Developers 2. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): South Korea
Country (Scope): South Korea
Principles: Key points: Emphasizes the importance of considering ethical and societal implications when developing AI Provides specific principles to follow to ensure responsible algorithm development Outlines Kakao’s eight principles of Algorithm Ethics Enhancing mankind’s benefit and well-being and keeping all efforts for algorithm development within the ethical framework of society Ensuring that algorithms do not generate biased results Collecting and managing data for algorithm learning in accordance with social and ethical norms Ensuring that the algorithm shall not be manipulated internally or externally Providing explanations on the algorithm to strengthen users’ trust to the extent that it does not compromise corporate competitiveness Ensuring that algorithm-based technology and services embrace our society Putting a priority on protecting children and youth from inappropriate information and threats from the stages of algorithm development and service design Making best efforts to protect users’ privacy throughout the entire process of designing and operating services and technologies that utilize algorithms Ethical principles highlighted: Note: Incorporated in the eight principles
Document Description and Other Notes (from OECD.AI Website): Kakao is committed to enhance the quality of life of our service users and to create a better society through the development of ethical algorithm. Kakao’s efforts in the algorithm development and management process will be in line with the ethical principles of our society.
Organization(s): Kakao
Company Logo(s):
Document Name: Statement of Ethics for the Design, Development, and Use of Artificial Intelligence
Date Uploaded to OECD.AI: 2022-02-23
Objectives: 1. Fairness - 2. Privacy and Data Governance - 3. Respect for Human Rights - 4. Sustainability (Help the Planet)
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Group: Private Sector --- Users: 1. Developers 2. Researchers 3. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Latin America
Country (Scope): Latin America
Principles: Key points: Highlights a variety of ethical considerations that must be considered when developing and utilizing AI in Latin America and beyond Ethical principles highlighted: Ensuring the development of AI at the service of individuals and society AI developers taking responsibility for their projects Establishing a system of constant testing and validations Protecting the right to privacy of personal data Promoting equity Avoiding biases Respecting and protecting intellectual property Taking care of the environment Promoting continuous ethical evaluation and collaboration with society, stakeholders, and other relevant parties Promoting accountability Promoting education Crafting legal compliance Obeying local laws and regulations Fostering continuous review and improvement
Document Description and Other Notes (from OECD.AI Website): As technologies advance, the ethical ramifications will become more relevant, where the conversation is no longer based on “comply” but rather on “we are doing the right thing and getting better”. For this reason, IA-LATAM is committed to Innovation and Responsible Evolution, and they present their Declaration of Ethical Principles for AI in Latin America, which might be a starting point and of great help to all.
Organization(s): IA-LATAM
Company Logo(s):
Document Name: Guiding Principles on Trusted AI Ethics
Date Uploaded to OECD.AI: 2022-02-24
Objectives: 1. Accountability - 2. Fairness - 3. Privacy and Data Governance - 4. Respect for Human Rights - 5. Safety - 6. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Group: Private Sector -- Users: 1. Developers 2. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Sweden
Country (Scope): International
Principles: Key points: No specified document for reference on the website Ethical principles highlighted: No specified document for reference on the website
Document Description and Other Notes (from OECD.AI Website): Telia Company provides a Guiding Principles to its operations and employees for proactive design, implementation, testing, use and follow-up of AI.
Organization(s): Telia
Company Logo(s):
Document Name: IEEE Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS)
Date Uploaded to OECD.AI: 2022-02-14
Objectives: 1. Accountability - 2. Fairness - 3. Transparency and Explainability
Tool Type: Sectoral Code of Conduct
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United States
Country (Scope): Not Defined
Principles: Key points: Presents an ethical and transparent approach to contact tracing applications/technology that accounts for the values of individuals and communities Recognizes that technologies alone cannot solve the crisis and underscores the importance of ethical principles that call for a balance between the use of technology and human-centered values when operationalizing technology Recognizes that the COVID-19 pandemic presents a unique situation that demands technological solutions with ethical considerations that go beyond traditional academic considerations Ethical principles highlighted: Human dignity and Human autonomy Privacy Transparency Robustness Contextual integrity Explainability Responsibility Agility and Flexibility
Document Description and Other Notes (from OECD.AI Website): The goal of The Ethics Certification Program for Autonomous and Intelligent Systems (ECPAIS) is to create specifications for certification and marking processes that advance transparency, accountability and reduction in algorithmic bias in Autonomous and Intelligent Systems (AIS). The value of this certification process in the marketplace and society at large cannot be underestimated. The proliferation of systems in the form of smart homes, companion robots, autonomous vehicles or any myriad of products and services that already exist today desperately need to easily and visually communicate to consumers and citizens whether they are deemed “safe” or “trusted” by a globally recognized body of experts providing a publicly available and transparent series of marks.
Organization(s): IEEE
Company Logo(s):
Document Name: Salesforce AI Ethics
Date Uploaded to OECD.AI: 2022-04-13
Objectives: 1. Fairness - 2. Respect for Human Rights - 3. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Sector: Science and Technology -- Group: Private Sector --- Users: 1. Developers 2. Management System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United States
Country (Scope): International
Principles: Key points: Provides an overview of a maturity model for building an ethical AI practice as divided into four stages - Ad Hoc stage, establishment of a formal Ethics team to initiate executive buy-in to support development best practices for creating ethical AI systems - Organized and Repeatable stage, attention to employee education, such as mandatory training and meetings - Managed and Sustainable stage, ethics checkpoints through metrics and minimum thresholds for monitoring - Optimized and Innovative stage, attention to privacy, legal, user research, design, and accessibility considerations to develop a holistic approach to AI development Ethical principles highlighted: Responsibility and accountability Transparency Empowerment Inclusivity Safeguarding human rights Data protection
Document Description and Other Notes (from OECD.AI Website): Salesforce believes the benefits of AI should be accessible to everyone. But it is not enough to deliver only the technological capabilities of AI – they also have an important responsibility to ensure that AI is safe and inclusive for all. They take that responsibility seriously and are committed to providing their employees, customers, and partners with the tools they need to develop and use AI safely, accurately, and ethically.
Organization(s): Salesforce
Company Logo(s):
Document Name: References on Machine Learning Ethics
Date Uploaded to OECD.AI: 2022-09-20
Objectives: Reskill or Upskill
Tool Type: Toolkit/Software
Target Sector(s)/ Group(s)/ User(s): Users: 1. All Employees 2. Business Leaders 3. Data Scientists 4. Developers 5. Government 6. HR Managers 7. IT Specialists 8. Management 9. Project Managers 10. Researchers
Related Lifecycle Stage(s): Not Defined
Type of Approach: 1. Technical 2. Educational
Country (Origin):
Country (Scope):
Principles: See below.
Document Description and Other Notes (from OECD.AI Website):
Organization(s):
Company Logo(s):
Document Name: The Ethics of Artificial Intelligence
Date Uploaded to OECD.AI: 2022-09-20
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): England
Country (Scope): Not Defined
Principles: Key points: Discusses the many ethical considerations that arise from the creation of thinking machines and other AI technologies - Potential harm that AI could cause to humans and other morally relevant beings - Moral status of machines themselves - How to ensure that AI operates safely as it approaches human levels of intelligence - How to determine whether AI has moral status Ethical principles highlighted: Fairness Transparency and accountability in AI decision-making Safety
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Draft for Cambridge Handbook of Artificial Intelligence Authors: Nick Bostrom, Eliezer Yudkowsky
Company Logo(s):
Document Name: The Robot’s Dilemma
Date Uploaded to OECD.AI: 2022-09-20
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): United Kingdom
Country (Scope): Not Defined
Principles: Key points: Discusses various challenges in developing ethical robots - Difficulty in encoding logical statements that lead to ethical decisions - Challenge of using counterfactuals in resolving ethical dilemmas - Difficulty in codifying explicit rules - Incomplete nature of explicit rules Proposes that building ethical robots could have major consequences for the future of robotics, particularly in the field of autonomous transport Ethical principles highlighted: Relates to the Three Laws of Robotics proposed by Asimov Ethical decision-making using rule-based or machine-learning approaches Need for explicit rules that allow robots to make ethical decisions Potential consequences of autonomous robots in warfare
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Feature News, Macmillan Publishers Limited Author: Boer Deng
Company Logo(s):
Document Name: Ethics of Artificial Intelligence
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): United Kingdom
Country (Scope): Not Defined
Principles: Key points: Raises concerns about the use of autonomous weapons systems and the need for AI researchers and developers to engage in ethical debates surrounding their development and use Ethical principles highlighted: Safety - Argument of experts that AI must be designed with safety in mind from the outset to ensure that adverse outcomes don’t happen - Emphasis that AI systems should be accountable, transparent, and subjected to independent oversight Human control - Ethical implications of using lethal force when humans don’t make the decisions - Support for the development of lethal autonomous weapons systems (LAWS)
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Feature News, MacMillan Publishers Limited Authors: Stuart Russell, “Take a stand on AI weapons” (Professor of Computer Science, University of California, Berkeley) Sabine Hauert, “Shape the debate, don’t shy from it” (Lecturer in Robotics, University of Bristol) Russ Altman, “Distribute AI benefits fairly” (Professor of Bioengineering, Genetics, Medicine and Computer Science, Stanford University) Manuela Veloso, “Embrace a robot-human world” (Professor of Computer Science, Carnegie Mellon University)
Company Logo(s):
Document Name: Ethics as attention to context: recommendations for ethics of artificial intelligence
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): Ireland United Kingdom
Country (Scope): Not Defined
Principles: Key points: Explores the ethics of AI and the existing approaches in AI ethics Suggests practical recommendations - Engaging with social scientists and their research on the social impacts of AI in the short, medium, and long term - Ensuring diversity and inclusion of social scientists - Taking into consideration the environmental impact, among others Ethical principles highlighted: Risk that current AI ethics guidance and initiatives are dominated by a principled approach to ethics Need to shift attention in AI ethics away from high-level abstract principles to concrete practice, context, and social, political, and environmental materialities
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Paper, Open Research Europe Authors: Anais Resseguier, Rowena Rodrigues
Company Logo(s):
Document Name: WHITE PAPER: On Artificial Intelligence – A European approach to excellence and trust
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): Belgium
Country (Scope): European Union
Principles: Key points: Addresses the opportunities and challenges linked to the development of AI Highlights AI’s potential benefits in sectors ranging from healthcare and security to climate change adaptation and mitigation Raises concerns related to its use, such as opaque decision-making, discrimination, and intrusion into privacy Ethical principles highlighted: Human-centric approach Respect for fundamental rights Safety Transparency Accountability
Document Description and Other Notes (from OECD.AI Website):
Organization(s): European Commission
Company Logo(s):
Document Name: The Ethics of Genetic Cognitive Enhancement: Gene Editing or Embryo Selection?
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): Switzerland
Country (Scope): France, Germany, Greece, the Netherlands, Poland, Spain, Sweden, Brazil, South Africa, South Korea, United States
Principles: Key points: Examines the ethical implications of genetic human enhancement, with a focus on the prospect of pursuing cognitive enhancement using embryo selection Argues that the philosophical debate on the ethics of enhancement should consider public attitudes to research on human genomics and human enhancement technologies and suggests that philosophical investigation should neither ignore public opinion nor pander to it Argues that regulatory frameworks for new technologies such as CRISPR and IVG should recognize and incorporate people’s varying attitudes towards human enhancement technologies and research on the genetics of human intelligence across different parts of the world Ethical principles highlighted: Transparency Accountability Inclusivity
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Article, Philosophies 2020 Author: Marcelo de Araujo
Company Logo(s):
Document Name: AI ethics should not remain toothless! A call to bring back the teeth of ethics
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): Ireland
Country (Scope): Not Defined
Principles: Key points: Highlights the need for a renewed focus on the critical and constantly evolving nature of ethics to ensure that AI deployment and usage uphold societal values and norms Characterizes ethical principles, norms, and values as an “end of ethics,” not ethics itself Ethical principles highlighted: Ethical and responsible development and deployment of AI in society Problematic character of a legalistic approach to ethics
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Paper, Big Data & Society Authors: Anais Resseguier, Rowena Rodrigues
Company Logo(s):
Document Name: Ethics of Using Smart City AI and Big Data: The Case of Four Large European Cities
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): Netherlands
Country (Scope): Not Defined
Principles: Key points: Examines the ethical challenges associated with using smart city systems, particularly big data and AI, in four large European cities Highlights the diversity of approaches and implementations of smart information systems (SIS) and the infancy of smart cities Stresses the need for policy frameworks and guidelines on how to effectively and ethically implement SIS in smart cities to ensure a more ethical and responsible approach towards AI technology Ethical principles highlighted: Responsible innovation to involve all relevant stakeholders in the development of major innovative projects like smart cities Ethical SIS algorithms and governance to ensure that decisions made with the technology are sound Transparency in data processing Safety and security of data Building trust between the municipality and its citizens on voicing their opinions about the use of SIS technology Balance between using big data and AI to develop efficient and effective urban systems while protecting the privacy, security, and well-being of citizens
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Article, ORBIT Journal Authors: Mark Ryan, University of Twente Anya Gregory, European Business Summit
Company Logo(s):
Document Name: Ethical issues related to research on genome editing in human embryos
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): Sweden
Country (Scope): Not Defined
Principles: Key points: Provides an overview of the debate on potential clinical uses of germline genome editing (GGE) Discusses ethical issues related to the studies necessary to evaluate potential clinical uses of GGE - Challenges related to the evaluation of the safety and efficacy of GGE - Current technical hurdles in human GGE - Destruction of human embryos used in the experiments - Involvement of egg donors - Genomic sequencing performed on the samples of the research participants - Use of animals for research purposes Ethical principles highlighted: Note: Does not specifically mention AI Ethics principles but discusses informed consent, privacy, confidentiality, and the importance of balancing benefits and harms as relevant to AI Ethics since AI researchers and practitioners need to consider such principles when developing and deploying AI technologies that involve human subjects or their personal data
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Article, Computational and Structural Biotechnology Journal Authors: Emilia Niemiec, Heidi Carmen Howard
Company Logo(s):
Document Name: DataSF Research Brief: Ethics and Accountability in Algorithms
Date Uploaded to OECD.AI: 2022-09-20
Website:
Objectives: Not Defined
Tool Type: Not Defined
Target Sector(s)/ Group(s)/ User(s): Not Defined
Related Lifecycle Stage(s): Not Defined
Type of Approach: Not Defined
Country (Origin): United States
Country (Scope): Not Defined
Principles: Key points: Emphasizes the need for transparency in algorithmic decision-making to ensure that decisions are accountable and responsible Contends that government has two primary roles to play in regulating the algorithmic space and implementing its responsible usage Ethical principles highlighted: Accountability Transparency Fairness
Document Description and Other Notes (from OECD.AI Website):
Organization(s): Article, DataSF.org Author: Charlie Moffett
Company Logo(s):
Document Name: Engineering Ethics into AI
Date Uploaded to OECD.AI: 2022-04-19
Objectives: 1. Accountability - 2. Fairness - 3. Reskill or Upskill - 4. Robustness and Digital Security - 5. Safety - 6. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Sector: Science and Technology --- Groups: 1. Academia 2. Private Sector --- Users: 1. Developers 2. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): United Kingdom
Country (Scope): International
Principles: Key points: Aims to start a much-needed conversation about AI ethics and advocate for an ethical framework to steer the development of AI in a way that benefits everyone and contributes to society’s well-being Ethical principles highlighted: Ethical principles proposed by Arm in the Trust Manifesto for the development of an ethical framework for AI that enhances trust All AI systems should employ state-of-the-art security Every effort should be made to eliminate discriminatory bias in designing and developing AI decision systems AI systems should record and explain their results Users of AI systems have the right to know who is responsible for the consequences of AI decision-making Human safety must be the primary consideration in the design of any AI system Every effort should be made to retrain people from all backgrounds to develop the skills needed for an AI world
Document Description and Other Notes (from OECD.AI Website): This tool describes the principles and challenges of building a strong and sustainable framework for Artificial Intelligence systems, and is intended to drive debate across the industry. Our guiding objective: AI must be ethical by design. Arm would like to see the technology sector come together to create an ethical framework to ensure that AI is developed in a fair and responsible way. Without such a framework, there is a risk that regulation will become onerous and fragmented and will not allow AI to succeed. We believe that ethics should be incorporated into the key design principles for AI products, services and components. However, at present there is no defining set of ethics to follow, and so we call for the formation of an industry-wide working group to define and standardize a set of ethics that can be adopted by anyone deploying AI technologies. Since ethics are so critical to AI, it is essential that anyone working in the field has a solid foundation in the issues. We call on all universities and colleges that teach AI to include mandatory courses on issues relevant to ethics in AI at undergraduate and graduate level. Further, we believe that all businesses developing AI technologies should ensure that their staff complete mandatory professional training in the field of AI ethics. Ethical principles of trust in AI systems. There are many issues that must be addressed in the development of an ethical framework for AI that enhances trust. As a starting point, Arm proposes the Arm AI Trust Manifesto.
Organization(s): ARM
Company Logo(s):
Document Name: BMW Group code of ethics for artificial intelligence
Date Uploaded to OECD.AI: 2022-04-27
Objectives: 1. Accountability - 2. Fairness - 3. Privacy and Data Governance - 4. Respect of Human Rights - 5. Robustness and Digital Security - 6. Safety - 7. Transparency and Explainability
Tool Type: Guidelines
Target Sector(s)/ Group(s)/ User(s): Sector: Transport --- Group: Private Sector --- Users: 1. Developers 2. Management System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Germany
Country (Scope): International
Principles: Key points: Considers AI a central element of its digital transformation process, already implemented throughout its value chain Supports the transparent, rapid development and scaling of smart data and AI technologies Outlines seven principles in the Code of Ethics to guide the use of AI in the company - Human agency and oversight - Technical robustness and safety - Privacy and data governance - Transparency - Diversity, non-discrimination, and fairness - Environmental and societal well-being - Accountability Ethical principles highlighted: Note: Incorporated in the seven principles
Document Description and Other Notes (from OECD.AI Website): Building on the fundamental requirements formulated by the EU for trustworthy AI, the BMW Group has worked out seven basic principles covering the use of AI within the company. These will be continuously refined and adapted as required according to the multi-layered application of AI across all areas of the company. In this way, the BMW Group will pave the way for extending the use of AI and increase awareness among its employees of the need for sensitivity when working with AI technologies.
Organization(s): BMW
Company Logo(s):
Document Name: Open Ethics Label
Date Uploaded to OECD.AI: 2022-04-28
Objectives: Transparency and Explainability
Tool Type: Process-Related Documentation
Target Sector(s)/ Group(s)/ User(s): Private Sector --- Users: 1. Data Scientists 2. Developers 3. System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): Estonia
Country (Scope): International
Principles: Key points: Addresses why labeling is needed and what the Open Ethics Label is for – to provide information about a solution’s training data, algorithms, and decision space to bring transparency and gain trust with users Provides information on the basic elements of disclosure – training data, source code, decision space, and ecosystem partners Ethical principles highlighted: Alignment with the requirement for “voluntary labeling for no-high risk AI applications” proposed in the EU Commission’s white paper on Artificial Intelligence Transparency and disclosure for consumers and other AI product stakeholders as part of Open Ethics’ core values
Document Description and Other Notes (from OECD.AI Website): Open Ethics Label brings standard of self-disclosure to digital products. Open Ethics Label is the first level of Open Ethics Maturity Model (OEMM), aimed to provide information about your solution to bring transparency and gain trust with your users.
Organization(s): Open Ethics
Company Logo(s):
Document Name: Capgemini Code of Ethics for AI
Date Uploaded to OECD.AI: 2022-04-27
Objectives: 1. Accountability - 2. Fairness - 3. Human Wellbeing - 4. Privacy and Data Governance - 5. Respect of Human Rights - 6. Robustness and Digital Security - 7. Safety - 8. Sustainability (Help the Planet) - 9. Transparency and Explainability
Tool Type: Process-Related Documentation
Target Sector(s)/ Group(s)/ User(s): Group: Private Sector --- Users: 1. Business Leaders 2. Data Scientists 3. Developers 4. Management System Operators
Related Lifecycle Stage(s): Not Defined
Type of Approach: Procedural
Country (Origin): France
Country (Scope): International
Principles: Key points: Promotes the adoption of AI technology that delivers clear benefits within a trusted, human-centric framework Aligns its principles with the “Ethics Guidelines for Trustworthy AI” issued in 2019 by the independent High-Level Expert Group on AI set up by the European Commission - AI with Carefully Delimited Impact - Sustainable AI - Fair AI - Transparent and Explainable AI - Controllable AI with Clear Accountability - Robust and Safe AI - AI Respectful of Privacy and Data Protection Ethical principles highlighted: Note: Incorporated in the seven principles
Document Description and Other Notes (from OECD.AI Website): Capgemini Code of Ethics for AI guides our organization on how to embed ethical thinking in our business. It is illustrated by concrete examples from projects or solutions that we deliver. Reference to its principles stimulates ethical reasoning and is intended to launch an open-ended process of discussion within the company, with our clients, and with all stakeholders. Capgemini Code of Ethics for AI concerns both the intended purpose of the AI solution, and the way we embed ethical principles in the design and delivery of AI solutions and services to our clients.
Organization(s): Capgemini
Company Logo(s):