
Autonomy: preserve human agency

Authored by William Barker, Martin Ferguson

Autonomy: preserve human agency. This means enabling people to make choices, and allowing them to modify or override solutions when appropriate. To make those choices, people need sufficient knowledge and understanding to decide, which is why it is important to involve stakeholders and interest groups in ethical risk assessment and design.

Key themes

Consent, choice, enhancing human agency and self-determination

We value humans’ ability to be autonomous and self-governing (positive liberty), as well as their freedom from external restrictions (negative liberties, such as freedom of movement or freedom of association).

Underpinning this is the fact that each individual has inherent worth and that we should not undermine respect for human life (human dignity). We therefore need to ensure that AI and big data systems do not negatively affect human agency, liberty or dignity.

Areas of focus

Human agency: users should be able to make informed, autonomous decisions regarding Smart Information Systems.

Human oversight: may be achieved through governance mechanisms such as human-in-the-loop, human-on-the-loop and human-in-command approaches (a minimal sketch of these modes follows below).
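
The distinction between these oversight modes is easiest to see in a small sketch. The following Python is purely illustrative and is not drawn from any Socitm or AI HLEG tooling; the names (OversightMode, recommend, run_case) and the 0.7 score threshold are hypothetical assumptions, showing only where the human sits relative to an automated decision in each mode:

```python
from enum import Enum

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = "hitl"  # a human must approve every individual decision
    HUMAN_ON_THE_LOOP = "hotl"  # decisions run automatically; a human monitors and can intervene
    HUMAN_IN_COMMAND = "hic"    # a human decides whether and how the system is used at all

def recommend(case: dict) -> str:
    """Hypothetical automated recommendation; stands in for a real model."""
    return "approve" if case.get("score", 0.0) >= 0.7 else "refer"

def run_case(case: dict, mode: OversightMode, ask_human) -> str:
    """Route an automated recommendation through the chosen oversight mode."""
    recommendation = recommend(case)
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # Nothing takes effect without explicit human sign-off on this decision.
        return ask_human(f"System recommends '{recommendation}'. Confirm or override: ")
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        # The decision proceeds automatically, but is logged so a monitoring
        # human can review and reverse it after the fact.
        print(f"AUDIT: case {case['id']} -> {recommendation} (open to human override)")
        return recommendation
    # HUMAN_IN_COMMAND: oversight sits above individual decisions; the human
    # chose to deploy the system and can withdraw or reconfigure it.
    return recommendation

# Usage: human-in-the-loop, with a stand-in for a real operator prompt.
result = run_case({"id": 1, "score": 0.9},
                  OversightMode.HUMAN_IN_THE_LOOP,
                  ask_human=lambda prompt: "approve")
print(result)  # "approve" is the human's answer, not the system's
```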

Specific operational ethics requirements

It is essential that any technology respects and promotes human liberty and dignity. We recommend the following approaches:

  • Ensure the protection of the stakeholders’ human agency and positive liberty by keeping them informed, ensuring that they are neither deceived nor manipulated, and that they can meaningfully control the system
  • Ensure the protection of the stakeholders’ negative liberty by ensuring that they are free to use the system and are not restricted in functionality or opportunity
  • Ensure the protection of the stakeholders’ human dignity by ensuring that the system is not used to directly or indirectly undermine or reduce their autonomy or freedom, and does not violate their self-respect
  • Ensure the protection of and respect for the stakeholders’ privacy
  • Ensure the protection of the quality and integrity of data
  • Ensure the protection of access to the data
  • Ensure the protection of data rights and ownership

Ethical data handling considerations

Engaging with people

How can people engage with you about the project? How can people correct information, appeal or request changes to the product/service? To what extent? Are appeal mechanisms reasonable and well understood?

Ongoing implementation

Are you routinely building the thoughts, ideas and considerations of the people affected into your project? How? What information or training might be needed to help people understand data issues? Are systems, processes and resources available for responding to data issues that arise in the long term?

Autonomy scrutiny questions

Q: How do we ensure that relevant stakeholders and interest groups are consulted in design and decision-making?

Remember:

Stakeholder and interest group involvement in ethical risk assessment and design is key to maintaining public and community confidence and trust in Digital, Data and Technology (DDaT) outcomes.

This means:

It is important that stakeholder input does not simply reflect the same ethical perspectives as those already held within your organisation. External input is required from a body that is genuinely representative of those likely to be impacted by specific DDaT outcomes.

Q: How do we balance user expectations and the reality of what DDaT can deliver?

Remember:

In creating DDaT approaches, consider how stakeholders’ expectations of a particular solution may diverge from the reality of what can be delivered.

This means:

We always have an ethical duty of care to ensure stakeholders are properly informed about not just the benefits but also the limitations and risks of a particular emerging approach or solution.

Autonomy resources: supporting principles into practice

Socitm’s resource hub collections on smart places, location intelligence and harnessing data each address place-based ethical change that reflects the practical application of the autonomy attributes. Likewise, from an operational perspective, the Planting the flag – a new local normal initiative, drawing on the ideas and experience of members of Socitm and its partner associations, reflects the attributes in practice.

Similar approaches are at the heart of Doughnut Economics, the Doughnut Economics Action Lab (DEAL), City Portraits and their supporting database of project materials that model ethical ways in which “people and planet can thrive in balance”. They build on earlier related concepts such as the International City/County Management Association (ICMA) “Building Digitally Inclusive Communities” framework.

Likewise, the SHERPA project, which analyses how AI and data analytics impact ethics and human rights, reflects many of the attributes in practice. In dialogue with stakeholders, the project is developing novel ways to understand and address these challenges, and to find desirable and sustainable solutions that can benefit both innovators and society. It offers a series of workbooks and practical case studies on ethical use, design and implementation, in step with an autonomy approach.

Following on from this work, the High-Level Expert Group on Artificial Intelligence (AI HLEG) has published the Ethics Guidelines for Trustworthy Artificial Intelligence and the Assessment List for Trustworthy AI (ALTAI). The ALTAI guidance and web-based tool outline the key stages of ethical practice in an accessible and dynamic checklist that guides developers and deployers of AI in putting these principles into practice.

The OECD Principle Resources linked to Human-centred values and fairness address how AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.

The Digital Catapult outlines a set of resources based around the autonomy theme that it has identified in partnership with the Digital Ethics Lab and the Alan Turing Institute, as part of its wider ethical typology and tools mapping work.

Understanding artificial intelligence ethics and safety is the Alan Turing Institute’s guide to the responsible design and implementation of AI systems in the public sector, aimed at everyone involved in the design, production and deployment of a public sector AI project.

The Responsible Research and Innovation toolkit examines issues related to research and innovation, anticipating their consequences and involving society in identifying how science and technology can help create human and environmental wellbeing-focused outcomes.

AI Now – Algorithmic Accountability Policy Toolkit provides practitioners with a basic understanding of global public sector use of algorithms, including a breakdown of key concepts and questions that may come up when engaging with this issue.

The Data Ethics Decision Aid (DEDA) helps data analysts, project managers and policy makers to recognise ethical issues in data projects, data management and data policies. Developed in close cooperation with data analysts from the City of Utrecht, DEDA is a toolkit that facilitates initial brainstorming sessions to map ethical issues in data projects, documents the deliberation process and furthers accountability towards the various stakeholders and the public.

The Institute of Electrical and Electronics Engineers (IEEE) Ethics in Action resources provide a range of materials on ethical standards development around the theme of “autonomy: preserve human agency”, as follows:

Case studies