

Non-maleficence: do no harm

Authored by William Barker, Martin Ferguson

Non-maleficence, “do no harm”: to avoid harm, including harm from malicious or unexpected uses, by recognising that the moral quality of a technology depends on its consequences. Conditional risks and benefits must therefore be weighed so as to avoid potential harms. Risks and harms need to be considered holistically, rather than just for the individual or organisation, and privacy and security risks require social and organisational design, not just technological measures.

Key themes

Safety, reliability, robustness, data provenance, privacy, and cybersecurity.

Technical robustness and safety in support of privacy and data governance: including resilience to attack and security, fallback plans and general safety, and accuracy, reliability and reproducibility.
Because we value humans, human life, and human resources, it is important that the system and its use are safe (often defined as an absence of risk) and secure (often defined as protection against harm, i.e. something which achieves safety). Under this attribute we should also include the quality of system decisions in terms of their accuracy, reliability, and precision.
In particular, as systems use data that is private or sensitive, it is important to make sure that the system does not violate or infringe upon the right to privacy, and that private and sensitive data (such as data linked to an individual’s ability to have a private life) are well protected. This requires due diligence over the quality and integrity of data (i.e. whether the data is representative of reality), access to data, and the wider set of data rights, such as ownership.

Areas of focus

Resilience to attack and security: systems should be protected against vulnerabilities that could allow them to be exploited by adversaries.

Fallback plan and general safety: systems should have safeguards that enable a fallback plan in case of problems.

Accuracy: evidence, such as documentation of evaluations, that demonstrates whether the system is classifying results correctly.

Privacy and data protection: systems should guarantee privacy and data protection throughout their entire lifecycle.

Reliability and reproducibility: systems should work the same way across a variety of scenarios.

Quality and integrity of the data: gathered data may contain socially constructed biases, inaccuracies and errors; these need to be identified and addressed (a minimal automated check is sketched after this list).

Social impact: the effects of systems on people’s physical and mental wellbeing should be carefully considered and monitored.
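
The data-quality attribute above lends itself to simple automation. Below is a minimal sketch, in Python with pandas, of the kind of completeness and duplication checks a team might run before analysis; the basic_quality_report helper, the column names and the sample data are illustrative assumptions, not part of any framework cited in this article.

```python
# Minimal sketch of automated data-quality checks supporting the
# "quality and integrity of the data" attribute. Column names and
# sample data are illustrative assumptions.
import pandas as pd

def basic_quality_report(df: pd.DataFrame, expected_columns: list[str]) -> dict:
    """Return simple completeness and duplication measures for a dataset."""
    return {
        # Missing columns suggest the extract does not match its specification.
        "missing_columns": [c for c in expected_columns if c not in df.columns],
        # High null rates can hide systematic gaps in data collection.
        "null_rate_by_column": df.isna().mean().round(3).to_dict(),
        # Exact duplicates often indicate pipeline errors rather than real records.
        "duplicate_rows": int(df.duplicated().sum()),
        "row_count": len(df),
    }

if __name__ == "__main__":
    sample = pd.DataFrame(
        {"ward": ["A", "A", None, "B"], "service_requests": [3, 3, 5, None]}
    )
    print(basic_quality_report(sample, ["ward", "service_requests"]))
```

A report like this does not resolve socially constructed bias by itself; it surfaces basic defects so that representativeness can then be examined against the population the service actually covers.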

Specific operational ethics requirements

It is essential that technical systems are robust, resilient, safe, and secure. Those designing and deploying them should focus on the following three requirements:

  • Ensure that the system is secure and resilient against attacks;
  • Ensure that the system is safe in case of failure (illustrated in the sketch below);
  • Ensure the accuracy, reliability, and reproducibility of the system.
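
As an illustration of the second requirement, the sketch below shows one common pattern: degrading safely to human review. It is a hypothetical example rather than a prescribed Socitm design, and the score_fn callable and the confidence threshold are assumptions for illustration.

```python
# Minimal sketch of "safe in case of failure": fall back to human review
# whenever the automated component fails or lacks confidence.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # illustrative; set via the service's risk assessment

@dataclass
class Decision:
    outcome: str
    source: str  # "model" or "human_review"

def classify_with_fallback(score_fn, case) -> Decision:
    """Use the model only when it runs cleanly and is sufficiently confident."""
    try:
        label, confidence = score_fn(case)
    except Exception:
        # Any runtime failure degrades safely instead of producing a guess.
        return Decision(outcome="needs_review", source="human_review")
    if confidence < CONFIDENCE_THRESHOLD:
        return Decision(outcome="needs_review", source="human_review")
    return Decision(outcome=label, source="model")
```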

Ethical data handling considerations

Ethical and legislative contexts

What existing ethical codes apply to your sector or project? What legislation, policies, or other regulation shape how you use data? What requirements do they introduce? Consider: the rule of law; human rights; data protection; IP and database rights; anti-discrimination laws; and data-sharing policies, regulation and ethics codes/frameworks specific to particular sectors (e.g. health, employment, taxation).

Negative effects on people

Who could be negatively affected by this project? Could the way that data is collected, used or shared cause harm or expose individuals to the risk of being re-identified? Could it be used to target, profile or prejudice people, or unfairly restrict access (e.g. through exclusive arrangements)? How are limitations and risks communicated to people? Consider: people the data is about; people impacted by its use; and organisations using the data.

Minimising negative impact

What steps can you take to minimise harm? How could you reduce any limitations in your data sources? How are you keeping personal and other sensitive information secure? How are you measuring, reporting and acting on potential negative impacts of your project? What benefits will these actions bring to your project?
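
One concrete step towards keeping personal information secure, and reducing the re-identification risk raised above, is keyed pseudonymisation of direct identifiers before data is analysed or shared. The sketch below uses only Python’s standard library; the field names and the PSEUDONYMISATION_KEY environment variable are illustrative assumptions, and a real deployment would need managed key storage and a documented re-identification risk assessment.

```python
# Minimal sketch of keyed pseudonymisation: replace a direct identifier
# with a keyed, non-reversible token before analysis or sharing.
import hashlib
import hmac
import os

def pseudonymise(identifier: str, key: bytes) -> str:
    """Return an HMAC-SHA256 token in place of the raw identifier."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative key handling only; production keys belong in a managed secret store.
key = os.environ.get("PSEUDONYMISATION_KEY", "dev-only-key").encode("utf-8")

record = {"resident_id": "AB123456", "ward": "Central"}
safe_record = {**record, "resident_id": pseudonymise(record["resident_id"], key)}
print(safe_record)
```

Pseudonymisation alone is not anonymisation: combinations of the remaining fields can still re-identify people, which is why the scrutiny questions below treat risks and harms holistically.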

Non-maleficence scrutiny questions

Q: How do we consider ethical risks and harms?

Remember:

Engagement with the ethical issues raised by emerging technologies and big data does not end once we have undertaken our own particular tasks and fulfilled our immediate responsibilities in the delivery chain.

This means:

We need to view ethical risks and harms holistically: taking a comprehensive, end-to-end view of the ethical issues across all stages of design, development, deployment and delivery, and of how systems interact with people and society.

Q: How do we ensure design processes support privacy and security?

Remember:

Ethical design is not only technical design (of networks, databases, devices, platforms, websites, tools, or apps) but also social and organisational design of groups, policies, procedures, incentives, resource allocations, and techniques that promote privacy and security objectives.

This means:

Whilst digital, data and technology (DDaT) implementation approaches and solutions will vary depending on context, it is an ethical imperative that the values of privacy and security are always at the forefront of operational design, planning, execution, and oversight.

Non-maleficence resources: supporting principles into practice

Socitm’s resource hub collections on smart places, location intelligence and harnessing data each address place-based ethical change that reflects the practical application of the non-maleficence attributes. Likewise, from an operational perspective, the Planting the flag – a new local normal initiative, drawing on the ideas and experience of members of Socitm and its partner associations, reflects the attributes in practice.

Similar approaches are at the heart of Doughnut Economics, the Doughnut Economics Action Lab (DEAL), City Portraits and their supporting database of project materials, which model ethical ways in which “people and planet can thrive in balance”. They build on earlier related concepts such as the International City/County Management Association (ICMA) “Building Digitally Inclusive Communities” framework.

Likewise, the SHERPA project, which analyses how AI and data analytics impact ethics and human rights, reflects many of the attributes in practice. In dialogue with stakeholders, the project is developing novel ways to understand and address these challenges and to find desirable and sustainable solutions that can benefit both innovators and society. It offers a series of workbooks and practical case studies on ethical use, design and implementation, in step with a non-maleficence approach.

Following on from this work, the High-Level Expert Group on Artificial Intelligence (AI HLEG) has published the Ethics Guidelines for Trustworthy Artificial Intelligence and the Assessment List for Trustworthy AI (ALTAI). The ALTAI guidance and web-based tool set out the key stages of ethical practice in an accessible, dynamic checklist that guides developers and deployers of AI in putting these principles into practice.

The OECD principle resources linked to robustness, security and safety address how AI systems must function in a robust, secure and safe way throughout their lifetimes, with potential risks continually assessed and managed.

The Digital Catapult outlines a set of resources based around the non-maleficence theme, identified in partnership with the Digital Ethics Lab and the Alan Turing Institute as part of its wider ethical typology and tools-mapping work.

The National Cyber Security Centre (NCSC) Intelligent security tools guidance covers the use of AI tools in the security sector. Consideration of intelligent security tools is divided into four sections, each with a series of questions designed to aid users. The wider Think Cyber Think Resilience resources offer a range of practical insights into developing ethical, place-based cyber resilience.

The Local Government Association research report Using predictive analytics in local public services acknowledges that the use of predictive analytics in local government is still at an early stage, although it is becoming more common. While there are some sophisticated examples of predictive analytics being used across a range of local public services, much of the sector is just starting to consider the opportunities and risks of this type of technology.

The Midlands Accord and East Accord Digital Ethics Charter is a set of common principles for digital professionals and those working with “technology for public use”.

Delft University’s Ethical tools for designers can help you uncover, explore and discuss the ethical aspects of your designs. The tools are grouped based on the skill they help you develop and can be used in different phases of the design process.

The Institute for the Future’s Ethical Operating System Toolkit comprises: a checklist of eight risk zones to help you identify the emerging areas of risk and social harm most critical for your organisation to start considering now; 14 scenarios to spark conversation and stretch your imagination about the long-term impacts of the technology you are building today; and seven future-proofing strategies to help you take ethical action.

The ADAPT Ethics Canvas has been developed to encourage public/private sector educators, entrepreneurs, engineers and designers to engage with ethics in their research and innovation projects. Research and innovation foster great benefits for society, but also raise important ethical concerns.

The Institute of Electrical and Electronics Engineers (IEEE) Ethics in Action resources provide a range of materials on ethical standards development around the theme of non-maleficence (“do no harm”), as follows:

Case Studies