Understanding digital ethics
Taking a step back can help us focus on the real role of ethics. Quite simply, these are moral principles that govern a person’s or group’s behaviour and inform decision making. The basic role of ethics can be described as:
- Establishing a foundation for decision making based on a core set of values committed to “doing no harm” and supporting human flourishing and environmental sustainability.
- Providing a structure to address value dilemmas and conflicts that are specific to each context, on a case-by-case basis.
- Drawing on the ideas built around values, deliberations and recommendations that support the common good.
- Acting as a basis for choosing how best to consider an issue and then act on a case-by-case basis by rationally considering different and conflicting values/principles.
At a basic level it’s possible to see digital ethics as “principles” and “concepts” that can be used to govern technology and data including factors like risk management and individual rights. In effect, they can be used to understand and resolve moral issues that have to do with the development and application of differing technology and data solutions and approaches across a range of ethical challenges such as:
- Access rights: access to empowering technology as a right
- Accountability: deciding who is responsible for the successes and harms of technological advancements
- Digital rights: protecting intellectual property rights and privacy rights
- Environment: how to produce technology without harming the environment
- Existential risk: technologies that threaten global quality of life, up to and including human extinction
- Freedom: technology that is used to control a society raising questions related to freedom and independence
- Health & safety: health and safety risks that are increased and imposed by technologies
- Human enhancement: human genetic engineering and human-machine integration
- Human judgement: when can decisions be made by automation and when do they require a reasonable human?
- Over-automation: when does automation decrease quality of life and start affecting society?
- Precaution principle: who decides that developing this new technology is safe for the world?
- Privacy: protection of privacy rights
- Security: is due diligence required to ensure information security?
- Self-replicating technology: should self-replicating technology be the norm?
- Technology transparency: clearly explaining how a technology works and what its intentions are
- Terms of service: ethics related to legal agreements
Source: Ethics of technology (Wikipedia)
Defining digital ethics
In effect, the ethics of digital technology and data can be said to focus on the ethical aspects of technological design and use, together with the ethical impacts of digital technology on society as a whole, as follows:
- Ethics by design: These focus on the design phase of digital and data tools. They directly concern technology in all its technical complexity and the know-how of engineers, programmers, etc. These ethics therefore touch in particular on the deontology (duty-based ethics) of digital creators of all kinds (developers, digital designers, project managers, etc.). Indeed, such creators carry an ethical responsibility from the design stage onwards, insofar as data or algorithms may or may not reproduce human biases, reveal new discriminations (or reproduce them on a larger scale), give rise to injustices, etc.
- Ethics of use: These aim to examine how the service users and employees as well as the managers and partners of an organisation use emerging technology and data. This entails conducting an ethical evaluation of how people use the technological resources at their disposal.
- Societal ethics: These examine the impacts of digital technology and data analytics on wider society. They thus deal with the acceptability of digital innovations and solutions, human rights and agency, the environmental/energy footprints of digital tools, and the wider issue of social inclusion.
These three categories of ethical focus are of course interconnected, but (as outlined in the other pages in this topic area) we need to understand the context in which they operate in order to better define the basic issues at stake and how they can start to be addressed. The following sections take each of these three categories in turn; working through the introductory questions and points to consider in each can help organisations and individuals build a clearer understanding of the digital ethical landscape and how to navigate it, as these and related issues emerge in their wider examinations.
Responding to the challenge
How do we understand and promote the ethical use of emerging technologies? This includes the data they generate and store, the public service designs, processes and interactions they enable, and the outcomes they produce, all while ensuring public benefit and minimising unintended consequences. What do we mean by ethics, and how do we apply them?
Given the global nature of these challenges, international bodies such as the Organisation for Economic Co-operation and Development (OECD) and the Institute of Electrical and Electronics Engineers (IEEE) have identified broad principles and standards on digital ethics. These principles have been adopted by the UK and all the other G20 countries, are being taken up by a growing number of international bodies and UN member states (via the UNESCO AI ethics consultation), and are informing the World Economic Forum's AI and Robotics forward programme. They also form the basis of the new Global Partnership on Artificial Intelligence (GPAI), which the UK, US, EU and other partners have established to champion responsible AI and data governance. At an operational level, the European High-Level Expert Group and the Sherpa programme have likewise identified key requirements for assessing ethical practice across smart information systems.
Alongside this, in the UK official bodies such as the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, the Government Digital Service and the Information Commissioner's Office are working closely with the Digital Ethics Lab, the Alan Turing Institute, the Open Data Institute, Doteveryone and the Digital Catapult to champion digitally ethical practice across the UK public sector.
In support of this, Socitm's new resource hub collection (see quick links), together with our digital ethics and Planting the Flag policy briefings, highlights how places can harness emerging technologies and data ethically through the adoption of leading-edge practice and thinking built around the Socitm principles of Simplify, Standardise, Share and Sustain, focusing on:
- Reset: the collective mindset to ensure adherence to ethical principles, respecting social foundations and ecological constraints locally and globally
- Reform: public services by embracing innovation and modernisation
- Renew: communities by collaborating across place and encouraging self-sufficiency
- Resilient: reset, reform and renew communities and places to be resilient to disruptive changes, to thrive and to achieve better, sustainable and inclusive outcomes for everyone