A range of international standards organisations are developing and publishing AI-related standards. With differing degrees of granularity, these:
- identify foundational areas for ongoing technical definition and refinement,
- codify existing good practice, drawing on broader ICT-focused standards, and
- engage with questions of ethics and responsible development, deployment and evaluation of AI.
In support of these developments, the joint technical committee (JTC 1) of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) has established Subcommittee (SC) 42 to focus on standards development for AI systems. The UK is represented by the British Standards Institution (BSI), which, in partnership with ISO/IEC, has identified the following emerging standards as a baseline for Smart Information Systems combining artificial intelligence and data analytics.
Standard | Title |
---|---|
PD ISO/IEC/TR 24372 | Information technology — Artificial intelligence (AI) — Overview of computational approaches for AI systems |
PD ISO/IEC/TR 24368 | Information technology — Artificial intelligence — Overview of ethical and societal concerns |
ISO/IEC JTC 1/SC 42 N 649, ISO/IEC NP 5394 | Information technology — Artificial intelligence — Management system |
ISO/IEC JTC 1/SC 42 N 647, ISO/IEC NP 5392 | Information technology — Artificial intelligence — Reference architecture of knowledge engineering |
ISO/IEC JTC 1/SC 42 N 642, ISO/IEC NP 5339 | Information technology — Artificial intelligence — Guidelines for AI applications |
ISO/IEC JTC 1/SC 42 N 640, ISO/IEC NP 5338 | Information technology — Artificial intelligence — AI system life cycle processes |
ISO/IEC JTC 1/SC 42 N 634, ISO/IEC NP 5259-1 | Data quality for analytics and ML — Part 1: Overview, terminology, and examples |
ISO/IEC JTC 1/SC 42 N 636, ISO/IEC NP 5259-3 | Data quality for analytics and ML — Part 3: Data quality management requirements and guidelines |
ISO/IEC JTC 1/SC 42 N 638, ISO/IEC NP 5259-4 | Data quality for analytics and ML — Part 4: Data quality process framework |
ISO/IEC JTC 1/SC 42 N 537 | Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model for AI-based systems |
ISO/IEC JTC 1/SC 42 N 478, ISO/IEC NP 24029-2 | Artificial intelligence (AI) — Assessment of the robustness of neural networks — Part 2: Formal methods methodology |
ISO/IEC JTC 1/SC 42 N 474, ISO/IEC NP TS 4213 | Assessment of classification performance for machine learning models |
BS ISO/IEC 5392 | Information technology — Artificial intelligence — Reference architecture of knowledge engineering |
BS ISO/IEC 5339 | Information technology — Artificial intelligence — Guidelines for AI applications |
BS ISO/IEC 5338 | Information technology — Artificial intelligence — AI system life cycle processes |
BS ISO/IEC 5259-1 | Data quality for analytics and ML — Part 1: Overview, terminology, and examples |
ISO/IEC NP 5259-2 | Data quality for analytics and ML — Part 2: Data quality measures |
BS ISO/IEC 5259-3 | Data quality for analytics and ML — Part 3: Data quality management requirements and guidelines |
BS ISO/IEC 5259-4 | Data quality for analytics and ML — Part 4: Data quality process framework |
BS ISO/IEC 25059 | Software engineering — Systems and software Quality Requirements and Evaluation (SQuaRE) — Quality model for AI-based systems |
BS ISO/IEC 42001 | Information technology — Artificial intelligence — Management system |
BS ISO/IEC 24668 | Information technology — Artificial intelligence — Process management framework for big data analytics |
BS ISO/IEC 23894 | Information technology — Artificial intelligence — Risk management |
In addition, the Institute of Electrical and Electronics Engineers (IEEE) is facilitating a global consultation on the development of a comprehensive set of standards for ‘Ethically Aligned Design’ of Smart Information Systems/Autonomous and Intelligent Systems (A/IS).
Positioning ‘human well-being’ as a central precept, the IEEE initiative explicitly seeks to reposition robotics and AI as technologies for improving the human condition rather than simply as vehicles for economic growth.
The IEEE work is focused on the following five areas for investigation, which aim to educate, train and empower AI/robot stakeholders to ‘prioritise ethical considerations so that these technologies are advanced for the benefit of humanity’:
- Human Rights: How can we ensure that A/IS do not infringe upon human rights?
- Prioritizing Well-being: Traditional metrics of prosperity do not take into account the full effect of A/IS technologies on human well-being.
- Accountability: How can we assure that designers, manufacturers, owners, and operators of A/IS are responsible and accountable?
- Transparency: How can we ensure that A/IS are transparent?
- A/IS Technology Misuse and Awareness of It: How can we extend the benefits and minimize the risks of A/IS technology being misused?
There are currently 14 IEEE working groups drafting so-called ‘human’ standards that have implications for artificial intelligence (see table below).
Standard | Aim/objective(s) |
---|---|
IEEE P7000™ – Model Process for Addressing Ethical Concerns During System Design | To establish a process for ethical design of Autonomous and Intelligent Systems. |
IEEE P7001™ – Transparency of Autonomous Systems | To ensure the transparency of autonomous systems to a range of stakeholders. It specifically will address: Users: ensuring users understand what the system does and why, with the intention of building trust; Validation and certification: ensuring the system is subject to scrutiny; Accidents: enabling accident investigators to undertake investigation; Lawyers and expert witnesses: ensuring that, following an accident, these groups are able to give evidence; and Disruptive technology (e.g. driverless cars): enabling the public to assess technology (and, if appropriate, build confidence). |
IEEE P7002™ – Data Privacy Process | To establish standards for the ethical use of personal data in software engineering processes. It will develop and describe privacy impact assessments (PIA) that can be used to identify the need for, and effectiveness of, privacy control measures. It will also provide checklists for those developing software that uses personal information. |
IEEE P7003™ – Algorithmic Bias Considerations | To help algorithm developers make explicit the ways in which they have sought to eliminate or minimise the risk of bias in their products. This will address the use of overly subjective information and help developers ensure they are compliant with legislation regarding protected characteristics (e.g. race, gender). It is likely to include: Benchmarking processes for the selection of data sets; Guidelines on communicating the boundaries for which the algorithm has been designed and validated (guarding against unintended consequences of unexpected uses); and Strategies to avoid incorrect interpretation of system outputs by users. |
IEEE P7004™ – Standard on Child and Student Data Governance | Specifically aimed at educational institutions, this will provide guidance on accessing, collecting, storing, using, sharing and destroying child/student data. |
IEEE P7005™ – Standard for Transparent Employer Data Governance | Similar to P7004, but aimed at employers. |
IEEE P7006™ – Standard for Personal Data Artificial Intelligence (AI) Agent | Describes the technical elements required to create and grant access to personalised AI. It will enable individuals to safely organise and share their personal information at a machine-readable level, and enable personalised AI to act as a proxy for machine-to-machine decisions. |
IEEE P7007™ – Ontological Standard for Ethically Driven Robotics and Automation Systems | This standard brings together engineering and philosophy to ensure that user well-being is considered throughout the product life cycle. It intends to identify ways to maximise benefits and minimise negative impacts, and will also consider the ways in which communication can be clear between diverse communities. |
IEEE P7008™ – Standard for Ethically Driven Nudging for Robotic, Intelligent, and Automation Systems | Drawing on ‘nudge theory’, this standard seeks to delineate current or potential nudges that robots or autonomous systems might undertake. It recognises that nudges can be used for a range of reasons, but that they seek to affect the recipient emotionally, change behaviours and can be manipulative, and seeks to elaborate methodologies for ethical design of AI using nudge. |
IEEE P7009™ – Standard for Fail-Safe Design of Autonomous and Semi-Autonomous Systems | To create effective methodologies for the development and implementation of robust, transparent and accountable fail-safe mechanisms. It will address methods for measuring and testing a system’s ability to fail safely. |
IEEE P7010™ – Wellbeing Metrics Standard for Ethical Artificial Intelligence and Autonomous Systems | To establish a baseline for metrics used to assess well-being factors that could be affected by autonomous systems, and for how human well-being could proactively be improved. |
IEEE P7011™ – Standard for the Process of Identifying and Rating the Trustworthiness of News Sources | Focusing on news information, this standard sets out to standardise the processes for assessing the factual accuracy of news stories, which will be used to produce a ‘trustworthiness’ score. It seeks to address the negative effects of unchecked ‘fake’ news and to restore trust in news purveyors. |
IEEE P7012™ – Standard for Machine Readable Personal Privacy Terms | To establish how privacy terms are presented and how they could be read and accepted by machines. |
IEEE P7013™ – Inclusion and Application Standards for Automated Facial Analysis Technology | To provide guidelines on the data used in facial recognition, the requirements for diversity, and benchmarking of applications and situations in which facial recognition should not be used. |
IEEE P7014™ – Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems | This standard defines a model for ethical considerations and practices in the design, creation and use of empathic technology, incorporating systems that have the capacity to identify, quantify, respond to, or simulate affective states such as emotions and cognitive states. This includes coverage of ‘affective computing’, ‘emotion Artificial Intelligence’ and related fields. |