Digital ethics collection | Article

Explicability: Operate transparently

Authored by William Barker, Martin Ferguson

Explicability: “Operate transparently”: ensure intelligibility, transparency, trustworthiness and accountability around how and why Digital, Data and Technology (DDaT) solutions generate the outcomes they do. Workings and outputs should be explicable, making clear how to use solutions and approaches and when to trust them. Be ready to explain a system’s workings as well as its outputs, and make all stages of the implementation process open to public and community scrutiny.

Key themes

Intelligibility, transparency, trustworthiness and accountability

Transparency: traceability, explainability and communication. Because Smart Information Systems can be involved in high-stakes decision-making, it is important to understand how a system reaches its decisions. Transparency, together with concepts such as explainability, explicability and traceability, concerns having (or being able to gain) information about a system (transparency), and being able to understand or explain why it behaves as it does (explainability).

Accountability: auditability, minimisation and reporting of negative impact, internal and external governance frameworks, redress, and human oversight. Because Smart Information Systems act as agents in the world, someone must be accountable for their actions. An individual must be able to receive adequate compensation if a system causes harm (redress), and it must be possible to evaluate the system, especially after a bad outcome (auditability). There must also be processes for minimising and reporting negative impacts, internal and external governance frameworks (e.g. whistleblowing), and human oversight.

Areas of focus

Traceability: the datasets and the processes that yield the AI system’s decision should be documented.

Explainability: the ability to explain both the technical processes of an AI system and the related human decisions.

Interpretability: helping to give users confidence in AI systems, safeguarding against bias, meeting regulatory standards or policy requirements, and improving overall system design.

System Accountability: Any system, and those who design it, should be accountable for the design and impact of the system.
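As an illustration of how traceability and explainability can be operationalised, the sketch below records a single automated decision together with the datasets behind it and a plain-language explanation. The schema, field names and example values are hypothetical assumptions for illustration only, not a standard drawn from the guidance above.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """Traceability record for one automated decision.

    All field names are illustrative, not a standard schema.
    """
    decision_id: str
    model_version: str
    dataset_ids: list   # datasets that trained/informed the model
    inputs: dict        # the inputs this decision was based on
    outcome: str
    explanation: str    # plain-language reason for the outcome
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = DecisionRecord(
    decision_id="2024-000123",
    model_version="risk-model-1.4.2",
    dataset_ids=["census-2021-extract", "service-usage-q3"],
    inputs={"household_size": 3, "benefit_claim": True},
    outcome="refer for manual review",
    explanation="Claim flagged because income data was missing.",
)

# Serialise the record so it can be stored and later audited.
print(json.dumps(asdict(record), indent=2))
```

Keeping such records alongside each decision gives auditors both the technical process (datasets, model version, inputs) and the related human-readable explanation that the areas of focus above call for.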

Specific operational ethics requirements

The amount of transparency a system needs is a function of (1) the severity of the potential impacts on humans and society of decisions taken or recommended by the system; and (2) the importance of accountability for system errors or failures. Accountability is crucial, for example, for systems that can strongly affect the rights and wellbeing of individuals, because it allows them to seek redress. In this regard, the requirement of transparency is closely related to the requirement of accountability. The requirement of transparency includes three sub-requirements:

  • Ensure that the system has a sufficient level of Traceability;
  • Ensure that the system has a sufficient level of Explainability; and
  • Ensure that the relevant functions of the system are Communicated to stakeholders.

In addition, any system, and those who design it, should be accountable for the design and impact of the system as follows:

  • Ensure that systems with significant impact are designed to be auditable;
  • Ensure that negative impacts are minimised and reported;
  • Ensure internal and external governance frameworks;
  • Ensure redress in cases where the system has significant impact on stakeholders; and
  • Ensure human oversight when there is a substantial risk of harm to human values.
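The accountability requirements above can be sketched as a minimal, append-only decision log with escalation to human review. All names, the threshold and the reporting shape are illustrative assumptions, not part of any framework cited in this article.

```python
# Hypothetical sketch: every decision is appended to an auditable log,
# high-risk cases are escalated to a human reviewer, and negative
# impacts are summarised for internal/external governance.

AUDIT_LOG = []  # append-only log, retained for later external audit

def record_decision(case_id, risk_score, harm_threshold=0.7):
    """Log a system decision; escalate to human oversight when the
    risk of harm to the individual is substantial (illustrative rule)."""
    needs_human = risk_score >= harm_threshold
    entry = {
        "case_id": case_id,
        "risk_score": risk_score,
        "human_review": needs_human,
    }
    AUDIT_LOG.append(entry)
    return entry

def negative_impact_report():
    """Summarise escalated cases for governance reporting."""
    flagged = [e for e in AUDIT_LOG if e["human_review"]]
    return {"total": len(AUDIT_LOG), "escalated": len(flagged)}

record_decision("A-1", 0.3)
record_decision("A-2", 0.9)      # substantial risk: routed to a human
print(negative_impact_report())  # {'total': 2, 'escalated': 1}
```

The append-only log supports auditability and redress (each case can be traced), while the report function supports minimisation and reporting of negative impacts under a governance framework.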

Specific operational ethics considerations

Data sources

Name/describe your project’s key data sources, whether you are collecting data yourself or accessing via third parties. Is any personal data involved, or data that is otherwise sensitive?

Sharing data with others

Are you going to be sharing data with other organisations? If so, who? Are you planning to publish any of the data? Under what conditions?

Rights around data sources

Where did you get the data from? Is it produced by an organisation or collected directly from individuals? Was the data collected for this project or for another purpose? Do you have permission to use this data, or another basis on which you’re allowed to use it? What ongoing rights will the data source have?

Your reason for using data

What is your primary purpose for collecting and using data in this project? What are your main use cases? What is your business model? Are you making things better for society? How and for whom? Are you replacing another product or service as a result of this project?

Explicability scrutiny questions

Q: How do we establish clear governance around ethical responsibility and lines of accountability?

Effective ethical responsibility for DDaT approaches and solutions can only be sustained across an organisation by establishing clear chains of responsibility and accountability, ensuring that everyone involved focuses on their ethical duty of care towards both outcomes and the wider community.

This means:

Clearly define what leaders, policy-makers and practitioners need to do to deliver ethical accountability in practice, and who is responsible for each aspect of ethical risk management and prevention of harm in each relevant area of risk-laden activity (data collection, use, security, analysis, disclosure, etc.).

Q: How do we uphold the values of Transparency and Trustworthiness?

Central to maintaining public confidence in emerging approaches and solutions is respect for transparency, autonomy and trustworthiness.

This means:

Ensure that, as far as practically possible, the design, development, deployment and delivery processes of DDaT outcomes are open to public and community scrutiny, offer clear routes of redress, and are mindful of people’s autonomous right to choose.

Explicability resources: supporting principles into practice

Socitm’s existing resource hub collections on smart places, location intelligence and harnessing data each demonstrate place-based ethical change that reflects the practical application of explicability attributes. Likewise, from an operational perspective, the Planting the flag – a new local normal initiative draws on the ideas and experience of members of Socitm and its partner associations, and can similarly be seen to reflect the attributes in practice.

Similar approaches are at the heart of Doughnut Economics, the Doughnut Economics Action Lab (DEAL), City Portraits and their supporting database of project materials, which model ethical ways in which “people and planet can thrive in balance”. They build on earlier related concepts such as the International City/County Management Association (ICMA) “Building Digitally Inclusive Communities”.

Likewise, the SHERPA project, which analyses how AI and data analytics impact ethics and human rights, reflects many of the attributes in practice. In dialogue with stakeholders, the project is developing novel ways to understand and address these challenges, seeking desirable and sustainable solutions that can benefit both innovators and society. It offers a series of workbooks and practical case studies on ethical use, design and implementation in step with an explicability approach.

Following on from this work, the High-Level Expert Group on Artificial Intelligence (AI HLEG) has published the Ethics Guidelines for Trustworthy Artificial Intelligence and the Assessment List for Trustworthy AI (ALTAI). The ALTAI guidance and web-based tool outline the key stages of ethical practice in an accessible, dynamic checklist that guides developers and deployers of AI in implementing such principles in practice.

The OECD Principles resources linked to transparency and explainability cover responsible disclosure around AI systems, ensuring that people understand when they are engaging with them and can challenge outcomes. Those linked to accountability address how organisations and individuals developing, deploying or operating AI systems should be held accountable for their proper functioning, in line with the other values-based principles for AI.

The Royal Society’s explainable AI project and supporting briefing summarise current discussions about the development of explainable AI, setting out some of the reasons why such AI methods might be desirable, the different approaches available, and some of the considerations involved in creating these systems. In particular, the briefing describes a range of tools and methods that allow computer systems to carry out complex tasks or act in challenging environments, which organisations can look to when applying the attributes of ethical practice operationally.

The Digital Catapult outlines a set of resources based around the explicability theme, identified in partnership with the Digital Ethics Lab and the Alan Turing Institute as part of its wider ethical typology and tools mapping. In addition, because the responsible use of algorithms and data is paramount for the sustainable development of machine intelligence applications, Digital Catapult’s Ethics Committee has created a Machine Intelligence Garage Ethics Framework consisting of the following seven concepts, along with corresponding questions intended to inform how they may be applied in practice.

  1. Clear benefits
  2. Know and manage the risks
  3. Use data responsibly
  4. Be worthy of trust
  5. Diversity, equality and inclusion
  6. Transparent communication
  7. Business model

The Information Commissioner’s Office and the Alan Turing Institute are developing guidance on Explaining decisions made with AI. The draft guidance and supporting consultation aim to give organisations practical advice to help explain the processes, services and decisions delivered or assisted by AI to the individuals affected by them. The draft guidance comprises the following three parts:

  • Part 1: The basics of explaining AI defines the key concepts and outlines a number of different types of explanations. It will be relevant for all members of staff involved in the development of AI systems.
  • Part 2: Explaining AI in practice helps you with the practicalities of explaining these decisions and providing explanations to individuals. This will primarily be helpful for the technical teams in your organisation; however, your Data Protection Officer and compliance team will also find it useful.
  • Part 3: What explaining AI means for your organisation goes into the various roles, policies, procedures and documentation that you can put in place to ensure your organisation is set up to provide meaningful explanations to affected individuals. This is primarily targeted at your organisation’s senior management team; however, your Data Protection Officer and compliance team will also find it useful.

The Institute for Ethical AI & Machine Learning’s Responsible Machine Learning Principles are a practical framework put together by domain experts to provide guidance for technologists to develop machine learning systems responsibly. The framework covers the following eight areas of ethical practice:

  1. Human augmentation
  2. Bias evaluation
  3. Explainability by justification
  4. Reproducible operations
  5. Displacement strategy
  6. Practical accuracy
  7. Trust by privacy
  8. Data risk awareness

The Institute of Electrical and Electronics Engineers (IEEE) Ethics in Action resources provide a range of materials on ethical standards development around the theme of explicability, as follows:

Case Studies