
Justice: be fair

Explore thought leadership, tools and practical insights to support local public service organisations in championing leading-edge practice in the ethical use of emerging technologies and data, as part of their wider digital placemaking and service transformation activities.

Authors and contributors: William Barker, Martin Ferguson

Justice: “Be fair”: all benefits and risks should be distributed fairly. Solutions should promote fair treatment and equitable outcomes. Specific issues include algorithmic bias and equitable treatment. Consider whether a technology could produce or magnify unequal outcomes and, if so, how to mitigate this.

Key themes

Combating algorithmic bias, equitable treatment, consistency, shared benefits, shared prosperity, fair decision outcomes.

Diversity, non-discrimination, and fairness: avoidance and reduction of bias, ensuring fairness and avoidance of discrimination, and inclusive stakeholder engagement.

Because bias can occur at every level of an AI or data analytics system (in datasets, in algorithms, or in users’ interpretation of outputs), it is vital that it is identified and removed. Systems should be designed, deployed and used with an inclusive, fair and non-discriminatory agenda. Ways to bring diverse views into the systems lifecycle include requiring development teams to include people from diverse backgrounds (e.g. different ethnicities, genders, disabilities, ideologies and belief systems), engaging stakeholders, and producing diversity analysis reports and product testing.
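As a minimal illustration of checking a dataset for one such bias, the Python sketch below (pandas assumed; the group column and population figures are illustrative, not drawn from any particular framework) compares each group's share of a dataset with its share of the wider population, flagging under-representation for further investigation.

    import pandas as pd

    def representation_report(df: pd.DataFrame, group_col: str,
                              population_shares: dict) -> pd.DataFrame:
        """Compare each group's share of the dataset with its population share."""
        data_shares = df[group_col].value_counts(normalize=True)
        rows = []
        for group, expected in population_shares.items():
            observed = float(data_shares.get(group, 0.0))
            rows.append({
                "group": group,
                "population_share": expected,
                "dataset_share": round(observed, 3),
                # A ratio well below 1 flags under-representation to investigate.
                "representation_ratio": round(observed / expected, 3) if expected else None,
            })
        return pd.DataFrame(rows)

    # Hypothetical usage:
    # report = representation_report(train_df, "ethnicity",
    #                                {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10})

A report like this only surfaces representation gaps; deciding what counts as acceptable, and how to remedy a gap, remains a human judgement.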

Areas of focus

Avoidance of unfair bias: take care to ensure that data sets used by AI systems do not suffer from the inclusion of inadvertent historic bias, incompleteness or bad governance models (a minimal audit sketch follows this list).

Accessibility and universal design: systems should be user-centric and designed in a way that allows all people to use solutions and services, regardless of their age, gender, abilities or characteristics.

Society and democracy: the impact of the system on institutions, democracy and society at large should be considered.

Auditability: enabling the assessment of algorithms, data and design processes.

Minimisation and reporting of negative impacts: measures should be taken to identify, assess, document, minimise and respond to potential negative impacts of AI systems.

Trade-offs: when trade-offs between requirements are necessary, a process should be put in place to explicitly acknowledge the trade-off and evaluate it transparently.

Redress: mechanisms should be in place to respond when things go wrong.
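To make the "avoidance of unfair bias" and "auditability" items concrete, the sketch below (Python with pandas; the column names, groups and 5% tolerance are illustrative assumptions, and any real threshold is a local policy decision) computes the gap between each group's positive-decision rate and the overall rate, logging a flag wherever the gap exceeds the tolerance.

    import pandas as pd

    def demographic_parity_gaps(df: pd.DataFrame, group_col: str,
                                decision_col: str) -> pd.Series:
        """Gap between each group's positive-decision rate and the overall rate."""
        return df.groupby(group_col)[decision_col].mean() - df[decision_col].mean()

    def audit_decisions(df: pd.DataFrame, group_col: str,
                        decision_col: str, max_gap: float = 0.05) -> None:
        # Recording every check, not just failures, supports later audit.
        for group, gap in demographic_parity_gaps(df, group_col, decision_col).items():
            status = "FLAG" if abs(gap) > max_gap else "ok"
            print(f"{status}: {group} positive-rate gap = {gap:+.3f}")

    # Hypothetical usage with a binary 'approved' decision column:
    decisions = pd.DataFrame({"gender": ["f", "f", "m", "m", "m"],
                              "approved": [0, 1, 1, 1, 0]})
    audit_decisions(decisions, "gender", "approved")

Demographic parity is only one fairness measure among several; which measure is appropriate, and what trade-offs it implies, should be acknowledged and evaluated transparently as described above.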

Specific operational ethics requirements

This requirement is important in preventing the harmful discrimination against individuals or groups in society that can arise from a lack of diversity when organisations use AI and big data systems. It also takes a proactive approach, proposing that organisations should aim to do good with their systems in relation to fairness, diversity and non-discrimination by focusing on the following:

  • Ensure the avoidance of discrimination and the reduction of harmful bias;
  • Ensure fairness and diversity; and
  • Ensure the inclusion and engagement of stakeholders.

Ethical data handling considerations

Limitations in data sources

Are there limitations that could influence your project’s outcomes? Consider the points below (a short data-quality sketch follows the list):

  • bias in data collection, inclusion/exclusion, analysis, algorithms
  • gaps or omissions in data
  • provenance and data quality
  • other issues affecting decisions, such as team composition
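As one way of surfacing gaps and omissions before analysis begins, the sketch below (Python with pandas; comparing per-group missingness is an illustrative technique, and the column names are assumptions) reports how often each column is missing, overall and for each group.

    import pandas as pd

    def missingness_report(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
        """Missing-value rate per column, overall and broken down by group."""
        overall = df.isna().mean().rename("overall")
        per_group = df.groupby(group_col).apply(lambda g: g.isna().mean()).T
        # A column missing far more often for one group than overall suggests
        # data gaps that fall unevenly on particular communities.
        return per_group.join(overall)

Provenance questions (who collected the data, when, and why) cannot be automated away; a report like this simply tells you where to start asking them.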

Ethical and legislative contexts

What existing ethical codes apply to your sector or project? What legislation, policies, or other regulation shape how you use data? What requirements do they introduce? Consider: the rule of law; human rights; data protection; IP and database rights; anti-discrimination laws; and data sharing, policies, regulation and ethics codes/frameworks specific to sectors (e.g. health, employment, taxation).

Justice scrutiny questions

Q: How can we ensure that DDaT helps promote equitable outcomes?

Remember:

Emerging approaches need to mitigate the risk of producing or magnifying disparate impacts: inequitable outcomes that make some people and communities better off and others worse off.

This means:

The ethical risk that solutions might produce disparate impacts needs to be actively considered: such impacts should be anticipated, actively monitored for, carefully examined and mitigated, in order to achieve acceptable ethical and equitable outcomes.
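A minimal monitoring sketch, assuming batched decision data in pandas and using the well-known "four-fifths" rule of thumb purely as an illustrative trigger (the period, group and outcome column names are hypothetical), might look like this:

    import pandas as pd

    def disparate_impact_alerts(df: pd.DataFrame, period_col: str,
                                group_col: str, outcome_col: str,
                                reference_group: str,
                                threshold: float = 0.8) -> list:
        """Flag periods where a group's positive-outcome rate falls below
        `threshold` times the reference group's rate."""
        alerts = []
        for period, batch in df.groupby(period_col):
            rates = batch.groupby(group_col)[outcome_col].mean()
            ref_rate = rates.get(reference_group)
            if not ref_rate:  # reference group absent, or its rate is zero
                continue
            for group, rate in rates.items():
                if group != reference_group and rate / ref_rate < threshold:
                    alerts.append((period, group, round(rate / ref_rate, 3)))
        return alerts

Alerts of this kind are a prompt for the careful examination described above, not a verdict on their own.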

Q: How do we factor in the “bigger picture” to ensure just outcomes?

Remember:

Always keep in mind the wider context in which emerging approaches exist and the purpose they are intended to serve, and consider the direction in which the technology we introduce today may head in the future.

This means:

Our operational and design considerations should never be isolated from the “bigger picture” of social and technological ecosystems that encompass factors and risks that are not always under our control.

Justice resources: supporting principles into practice

Socitm’s resource hub collections on smart places, location intelligence and harnessing data each address place-based ethical change that reflects the practical application of justice. Likewise, from an operational perspective, the Planting the flag – a new local normal initiative, drawing on the ideas and experience of members of Socitm and its partner associations, reflects these attributes in practice.

Similar approaches are at the heart of Doughnut Economics, the Doughnut Economics Action Lab (DEAL), City Portraits and their supporting database of project materials, which model ethical ways in which “people and planet can thrive in balance”. They build on earlier related concepts such as the International City/County Management Association (ICMA) “Building Digitally Inclusive Communities”.

Likewise, the SHERPA project, which analyses how AI and data analytics impact ethics and human rights, reflects many of these attributes in practice. In dialogue with stakeholders, the project is developing novel ways to understand and address these challenges and to find desirable and sustainable solutions that can benefit both innovators and society. It offers a series of workbooks and practical case studies on ethical use, design and implementation, in step with a justice approach.

Following on from this work, the High-Level Expert Group on Artificial Intelligence (AI HLEG) has published the Ethics Guidelines for Trustworthy Artificial Intelligence and the Assessment List for Trustworthy AI (ALTAI). The ALTAI guidance and web-based tool outline the key stages of ethical practice in an accessible and dynamic checklist that guides developers and deployers of AI in implementing these principles in practice.

The OECD Principle Resources linked to Human-centred values and fairness address how AI systems should be designed in a way that respects the rule of law, human rights, democratic values and diversity, and should include appropriate safeguards to ensure a fair and just society.

The Digital Catapult outlines a set of resources based around the justice theme, identified in partnership with the Digital Ethics Lab and the Alan Turing Institute as part of their wider ethical typology and tools mapping.

Doteveryone has created a Consequence Scanning Manual, an iterative development tool, to help organisations think about the potential impact of their solutions or service on people and society. It is aimed at anyone directly or indirectly involved with the design of public sector digital and data solutions or services.

The Data Justice Lab data literacy guidebook provides an overview of different types of tools designed to educate citizens about datafication and its social consequences. It is aimed at anyone working directly or indirectly with data in the public sector, including data practitioners (statisticians, analysts and data scientists), policymakers, operational staff and those helping to produce data-informed insight, and is designed to help them hold their projects to the highest ethical standards.

Fairness, Accountability, and Transparency in Machine Learning Principles and Best Practices outlines the basis of a Social Impact Statement for Algorithms that should be revisited and reassessed (at least) three times during the design and development process: at the design stage, pre-launch and post-launch.

Aequitas is an open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive tools.
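A sketch of what an audit with Aequitas can look like, based on the Group/Bias/Fairness workflow in its documentation (the API has changed between releases, so treat the exact calls as an assumption to verify against the current docs; the data here is illustrative):

    import pandas as pd
    from aequitas.group import Group
    from aequitas.bias import Bias
    from aequitas.fairness import Fairness

    # Aequitas expects binary 'score' (model decision) and 'label_value'
    # (ground truth) columns, plus string-typed attribute columns.
    df = pd.DataFrame({
        "score":       [1, 0, 1, 1, 0, 1, 0, 1],
        "label_value": [1, 0, 0, 1, 0, 1, 1, 1],
        "group":       ["a", "a", "b", "b", "b", "a", "b", "a"],
    })

    crosstabs, _ = Group().get_crosstabs(df)        # per-group confusion metrics
    bias_df = Bias().get_disparity_predefined_groups(
        crosstabs, original_df=df,
        ref_groups_dict={"group": "a"},             # disparities vs. group 'a'
    )
    fairness_df = Fairness().get_group_value_fairness(bias_df)
    print(fairness_df.head())                       # per-group parity results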

The Institute of Electrical and Electronics Engineers Ethics in Action resources provide a range of materials on ethical standards development around the theme of justice = be fair, as follows: