Emerging principles
Given the global nature of the challenges posed by harnessing emerging technologies and data, the Organisation for Economic Co-operation and Development (OECD) has led the development of a core set of ethical principles and recommendations for policymakers for wider international adoption. The following values-based principles and recommendations for policymakers have been adopted by the UK and all the other G20 countries, are being taken up by a growing number of international bodies and UN member states (via the UNESCO AI ethics consultation), and are informing the World Economic Forum AI and Robotics forward programme. They provide a common ethical baseline for the development and adoption of Smart Information Systems, which combine Artificial Intelligence with Big Data to analyse, describe and predict information.
OECD values-based principles
Inclusive growth, sustainable development and well-being
This principle highlights the potential for trustworthy AI to contribute to overall growth and prosperity for all – individuals, society, and planet – and advance global development objectives.
Recommended supporting actions:
- Stakeholders should proactively engage in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for people and the planet, such as augmenting human capabilities and enhancing creativity, advancing inclusion of underrepresented populations, reducing economic, social, gender and other inequalities, and protecting natural environments, thus invigorating inclusive growth, sustainable development and well-being.
Human-centred values and fairness
AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.
Recommended supporting actions:
- AI actors should respect the rule of law, human rights and democratic values, throughout the AI system lifecycle. These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognised labour rights.
- To this end, AI actors should implement mechanisms and safeguards, such as capacity for human determination, that are appropriate to the context and consistent with the state of the art.
Transparency and explainability
This principle covers transparency and responsible disclosure around AI systems to ensure that people understand when they are engaging with them and can challenge outcomes.
Recommended supporting actions:
- AI actors should commit to transparency and responsible disclosure regarding AI systems. To this end, they should provide meaningful information, appropriate to the context, and consistent with the state of the art:
- to foster a general understanding of AI systems,
- to make stakeholders aware of their interactions with AI systems, including in the workplace,
- to enable those affected by an AI system to understand the outcome, and,
- to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.
Robustness, security and safety
AI systems must function in a robust, secure and safe way throughout their lifetimes, and potential risks should be continually assessed and managed.
Recommended supporting actions:
- AI systems should be robust, secure and safe throughout their entire lifecycle so that, in conditions of normal use, foreseeable use or misuse, or other adverse conditions, they function appropriately and do not pose unreasonable safety risk.
- To this end, AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of the art.
- AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.
Accountability
Organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning in line with the other values-based principles for AI.
Recommended supporting actions:
- AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of the art.
OECD recommendations for policy makers
Investing in AI research and development
Governments should facilitate public and private investment in research and development to spur innovation in trustworthy AI.
Recommended supporting actions:
- Governments should consider long-term public investment, and encourage private investment, in research and development, including inter-disciplinary efforts, to spur innovation in trustworthy AI that focus on challenging technical issues and on AI-related social, legal and ethical implications and policy issues.
- Governments should also consider public investment and encourage private investment in open datasets that are representative and respect privacy and data protection to support an environment for AI research and development that is free of inappropriate bias and to improve interoperability and use of standards.
Fostering a digital ecosystem for AI
Governments should foster accessible AI ecosystems with digital infrastructure and technologies, and mechanisms to share data and knowledge.
Recommended supporting actions:
- Governments should foster the development of, and access to, a digital ecosystem for trustworthy AI. Such an ecosystem includes in particular digital technologies and infrastructure, and mechanisms for sharing AI knowledge, as appropriate. In this regard, governments should consider promoting mechanisms, such as data trusts, to support the safe, fair, legal and ethical sharing of data.
Shaping an enabling policy environment for AI
Governments should create a policy environment that will open the way to deployment of trustworthy AI systems.
Recommended supporting actions:
- Governments should promote a policy environment that supports an agile transition from the research and development stage to the deployment and operation stage for trustworthy AI systems. To this effect, they should consider using experimentation to provide a controlled environment in which AI systems can be tested, and scaled-up, as appropriate.
- Governments should review and adapt, as appropriate, their policy and regulatory frameworks and assessment mechanisms as they apply to AI systems to encourage innovation and competition for trustworthy AI.
Building human capacity and preparing for labour market transformation
Governments should equip people with the skills for AI and support workers to ensure a fair transition.
Recommended supporting actions:
- Governments should work closely with stakeholders to prepare for the transformation of the world of work and of society. They should empower people to effectively use and interact with AI systems across the breadth of applications, including by equipping them with the necessary skills.
- Governments should take steps, including through social dialogue, to ensure a fair transition for workers as AI is deployed, such as through training programmes along the working life, support for those affected by displacement, and access to new opportunities in the labour market.
- Governments should also work closely with stakeholders to promote the responsible use of AI at work, to enhance the safety of workers and the quality of jobs, to foster entrepreneurship and productivity, and aim to ensure that the benefits from AI are broadly and fairly shared.
International co-operation for trustworthy AI
Governments should co-operate across borders and sectors to share information, develop standards and work towards responsible stewardship of AI.
Recommended supporting actions:
- Governments, including developing countries and with stakeholders, should actively cooperate to advance these principles and to progress on responsible stewardship of trustworthy AI.
- Governments should work together in the OECD and other global and regional fora to foster the sharing of AI knowledge, as appropriate. They should encourage international, cross-sectoral and open multi-stakeholder initiatives to garner long-term expertise on AI.
- Governments should promote the development of multi-stakeholder, consensus-driven global technical standards for interoperable and trustworthy AI.
- Governments should also encourage the development, and their own use, of internationally comparable metrics to measure AI research, development and deployment, and gather the evidence base to assess progress in the implementation of these principles.
The G20 AI principles draw from and align with the OECD principles and recommendations, and a new Global Partnership on Artificial Intelligence (GPAI), an international multi-stakeholder initiative, has been launched to guide the responsible development and use of AI, grounded in human rights, inclusion, diversity, innovation, and economic growth. The founding members intend to support the responsible and human-centric development and use of AI in a manner consistent with human rights, fundamental freedoms, and our shared democratic values, as elaborated in the OECD Recommendations on AI.
Emerging common values
The UK is playing a leading role in these agendas. Official bodies such as the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, the Government Digital Service and the Information Commissioner’s Office are working closely with the Digital Ethics Lab, the Alan Turing Institute, the Open Data Institute, Doteveryone and Digital Catapult to champion digital ethical practice across the UK public sector.
A key part of this work has been the mapping of a core set of values underpinning the ethical use of emerging technologies and data (see diagram), built on the traditional bioethics principles of Beneficence, Non-maleficence, Autonomy and Justice, together with Explicability, a new enabling principle for AI.
As a result, we are seeing an emerging digital ethics framework, built around the following core values or attributes:
Beneficence: “Do Good”: That work is to the benefit, not the detriment, of individuals and society. The benefits of the work should outweigh the potential risks, and it should address an unmet need.
Non-maleficence: “Do no Harm”: To avoid harm, including harm from malicious or unexpected uses, by recognising that the moral quality of a technology depends on its consequences. Risks and benefits must therefore be weighed so as to avoid potential harms.
Autonomy: “Preserve Human Agency”: To enable people to make choices. To allow people to modify or override solutions when appropriate. This also requires people to have sufficient knowledge and understanding to decide.
Justice: “Be Fair”: That all benefits and risks should be distributed fairly. Solutions should promote fair treatment and equitable outcomes.
Explicability: “Operate transparently”: To ensure intelligibility, transparency, trustworthiness and accountability around how and why digital, data and technology solutions generate the outcomes they do. Workings and outputs should be explicable, and should explain how to use solutions and approaches and when to trust them.
Principles into practice
The OECD has established a Policy Observatory to shape and share public policies for responsible, trustworthy and beneficial AI. It has also produced a primer on artificial intelligence and its use in the public sector that sets out a range of approaches to the ethical use of emerging technologies and data.
More widely, a number of key resources are available to help embed Principles into Practice at the heart of adopting emerging technologies and data solutions:
- The AI4People initiative is the first multi-stakeholder forum bringing together all actors interested in shaping the social impact of new applications of AI. With a three-year roadmap, the goal of AI4People is to create a common public space for laying out the founding principles, policies and practices on which to build a “good AI society”. The programme has established a supporting ethical framework and a governance and regulatory toolbox to help organisations address key ethical issues.
- Likewise, the SHERPA project analyses how AI and data analytics impact ethics and human rights. In dialogue with stakeholders, the project is developing novel ways to understand and address these challenges, to find desirable and sustainable solutions that can benefit both innovators and society. It offers a series of workbooks and practical case studies on ethical use, design and implementation.
- Following on from this work, the High-Level Expert Group on Artificial Intelligence (AI HLEG) has published the Ethics Guidelines for Trustworthy Artificial Intelligence and the Assessment List for Trustworthy AI (ALTAI). The ALTAI guidance distils the key stages of ethical practice into an accessible and dynamic checklist that guides developers and deployers of AI in implementing these principles in practice. ALTAI and its accompanying web-based tool are aimed at helping to ensure that users benefit from AI without being exposed to unnecessary risks, by setting out a set of concrete steps for self-assessment.
- In addition to the UK guidance materials, a number of OECD member states have started to build up their own digital ethics resources, offering models that local practitioners and policymakers may find helpful. The Australian Federal and State Governments have established a core set of future-strategy and technology human rights resources that offer a basis for local-level adoption around ethical use and investigation. The Canadian government has established an initiative on the responsible use of AI in the public sector, and New Zealand has developed the world’s first Algorithm Charter, which demonstrates a commitment to ensuring public confidence in how government agencies use algorithms.
- Closer to home, the Nordic Council has developed a common vision on ethical AI and digitalisation, and Germany’s Tübingen International Centre for Ethics in the Sciences and Humanities and its partners have produced a From Principles to Practice practitioner framework. Likewise, the French data protection authority’s policy paper, How Can Humans Keep The Upper Hand, addresses some of the key ethical questions raised by algorithms and artificial intelligence.
- Elsewhere, the Oxford Commission on AI and Good Governance (OxCAIGG), which brings together academics, technology experts and policymakers to analyse the AI implementation and procurement challenges faced by governments around the world, has published an inaugural think piece, “Four Principles for Integrating AI & Good Governance”, underscoring the urgent need for inclusive design, informed procurement, purposeful implementation and persistent accountability.
- In a similar vein, the Confederation of British Industry has developed AI: Ethics into practice – Steps to navigate emerging ethical issues, and the World Economic Forum has created an Empowering AI Leadership toolkit to help corporate officers identify the benefits of artificial intelligence for their organisations and learn how to operationalise it in a responsible way.