“Societal ethics” examines the impacts of digital technology and data analytics on wider society. It thus deals with the acceptability of digital innovations and solutions, human rights and agency, the environmental/energy footprints of digital tools, and the wider issue of social inclusion.
Responding to the challenge
The rise of Smart Information Systems (SIS, the combination of Artificial Intelligence (AI) and big data analytics) increasingly affects societal and human rights issues. As Sir Tim Berners-Lee noted when speaking about the ethical impact of emerging technologies and data: “Everything has to be put out there as something that we think will be a good idea as a component of our society.” As a consequence, we need to embed ethical resilience at the very heart of our response to these unprecedented digital opportunities and challenges.
In response, Socitm’s Planting the flag – a new local normal neatly illustrates the wider societal and ethical focus of applying emerging technologies and data in the wake of Covid-19. The model draws directly on ideas and experience from members of Socitm and its partner associations (the Linked Organisation of Local Authority ICT Societies and Major Cities of Europe) around the world. The following eight key areas of place-based activity, which are being transformed by local responses to Covid-19, potentially offer a model of place-based ethical change built around the key “principles/concepts” of Simplify – Standardise – Share – Sustain:
- Distributed, place-based leadership and delivery
- Democratic engagement and renewal
- Service design
- Workstyles
- Virtual infrastructure
- Data insights
- Asset rationalisation
- Living spaces.
Alongside this, Socitm recognises that understanding and promoting the ethical use of emerging technologies; proactively looking afresh at how we use the data they generate and store; promoting the public service designs, processes and interactions they enable; and considering how this generates better outcomes and public benefit all need to be purposeful. This higher purpose can be encapsulated in successfully addressing societal challenges, supporting planetary sustainability and minimising unintended consequences, as advocated in Kate Raworth’s concept of Doughnut Economics and City Portraits.
These model ethical ways in which “people and planet can thrive in balance” and link readily to related concepts such as the International City/County Management Association (ICMA) “Building Digitally Inclusive Communities” framework and Socitm’s five key policy themes. These themes are presented visually below with reference to the doughnut framework. A central pillar is the theme of ethical use of emerging technologies and data, which includes our digital ethics collection.
More widely, a number of key resources are available to help embed ethical practice at the heart of societal adoption of specific emerging technologies and data solutions:
The Organisation for Economic Co-operation and Development (OECD) has identified broad principles on societal digital ethics. These principles have been adopted by the UK and all the other G20 countries, are now being taken up by a growing number of international bodies and UN member states (via the UNESCO AI ethics consultation), and are informing the World Economic Forum’s AI and Robotics forward programme.
The Institute of Electrical and Electronics Engineers (IEEE) has developed a comprehensive set of ‘Ethically Aligned Design’ standards for smart information systems/Autonomous and Intelligent Systems. Positioning ‘human well-being’ as a central precept, the IEEE initiative explicitly seeks to reposition robotics and AI as technologies for improving the human condition rather than simply vehicles for economic growth.
At an operational level, the European High-Level Expert Group and the SHERPA programme have identified the key areas for assessing societal impact and ethical practice across smart information systems. The SHERPA programme report on Ethical Tensions and Social Impacts outlines these key areas of ethical challenge as: Dignity and Care for the Elderly; Digital Divide; Unemployment; Privacy and Data Protection; Accountability/Liability; Bias and Discrimination; Democracy; Freedom of Thought; Security; Dual Use/Misuse; Health; Environment; and Human Rights.
KTH Royal Institute of Technology and De Montfort University have subsequently developed an overview guide for developers and users that sets out the wider body of literature and ethical approaches to these issues in more detail, offering a starting point for deeper investigation.
The Markkula Center for Applied Ethics, based at Santa Clara University, has produced a comprehensive set of Technology Ethics in practice resources that addresses societal issues arising from artificial intelligence, cybersecurity, information technology, biotechnology and other emerging fields.
Alongside these resources, official bodies in the UK such as the Office for Artificial Intelligence, the Centre for Data Ethics and Innovation, the Government Digital Service and the Information Commissioner’s Office are working closely with the Digital Ethics Lab, the Alan Turing Institute, the Open Data Institute, Doteveryone and Digital Catapult to champion digitally ethical practice across society. As a result, we are seeing an emerging set of common core values or attributes that can help inform digital ethical practice:
- Beneficence: do good. Benefits of work should outweigh potential risks.
- Non-maleficence: do no harm. Risks and harms need to be considered holistically, rather than just for the individual or organisation.
- Autonomy: preserve human agency. To make choices, people need to have sufficient knowledge and understanding.
- Justice: be fair. Specific issues include algorithmic bias and equitable treatment.
- Explicability: operate transparently, so that how a system works and the outputs it produces can be explained.
Wider points to consider
The following points to consider are by no means exhaustive and should be seen as a starting point for further discovery.
Has an approach designed to reduce the environmental footprint of the IT system been introduced?
- Identify a responsible manager and draw up an action plan, building on recognised standards, that includes raising the awareness of all IT department staff and users
- Factor in environmental impact when entering into any contract that has consequences for the environmental footprint of the IT system
Does the evaluation of the IT system’s environmental impacts cover primary energy, greenhouse gas emissions, water, the depletion of abiotic resources, paper and WEEE (waste electrical and electronic equipment)?
- Conduct a regular assessment of the environmental footprint of the IT system based on recognised and auditable indicators
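To make the indicator point above more concrete, here is a minimal, hedged sketch (not part of the Socitm guidance) of how one auditable indicator, greenhouse gas emissions from metered IT electricity use, might be estimated each year; the grid emission factor and the consumption figure are assumptions chosen purely for illustration.

```python
# Illustrative sketch only: estimating a single auditable indicator, greenhouse
# gas emissions from metered IT electricity use, for a regular environmental
# footprint assessment. The emission factor below is an assumed placeholder;
# a real assessment would use the current published conversion factor for the
# relevant grid and reporting year.

GRID_EMISSION_FACTOR_KGCO2E_PER_KWH = 0.2  # assumed placeholder value


def ghg_from_electricity(kwh_per_year: float,
                         factor: float = GRID_EMISSION_FACTOR_KGCO2E_PER_KWH) -> float:
    """Return estimated emissions in tonnes CO2e for a year of electricity use."""
    return kwh_per_year * factor / 1000  # convert kg CO2e to tonnes


if __name__ == "__main__":
    # Assumed example figure: an IT estate metered at 350,000 kWh over the year.
    print(f"Estimated emissions: {ghg_from_electricity(350_000):.1f} tCO2e")
```

The same pattern can be repeated for the other indicators listed above (water, abiotic resource depletion, paper and WEEE), each tracked against its own recognised conversion factors and data sources.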
Is the societal impact of projects (origin of materials, partners’ good practices) taken into account?
- Carry out a societal impact assessment of projects
Is the impact of innovations on jobs within the organisation assessed, especially where automation occurs?
- Forecast, with the help of teams specialising in forward planning and strategy, the impacts of technological change on the organisation’s jobs and activities.
- Include the impacts of automation and more broadly of digital technology in strategic workforce planning
Are addiction phenomena factored into the design of digital solutions?
- Formally discourage the use of “dark patterns” (interface design techniques intended to deceive or manipulate the user)
Are the risks of human cognitive biases factored into the design of digital solutions?
- Ensure that digital applications and solutions have not been designed in such a way as to deliberately manipulate users by exploiting cognitive biases