
Involving communities in system design can lessen bias

Public sector organisations should involve communities in designing systems as a way to avoid biased algorithms, Wednesday’s session on ethics heard.

William Barker, Socitm’s associate director and practice lead on digital ethics, said this means changing the way consultations work. “We ask people to say something but the decision has already been made on what will happen,” he said, adding that the public sector needs to switch from “we’ll listen to you but will do this anyway” to engagement that genuinely influences decisions.

Greater public involvement should lessen bias in areas such as race or gender, as diverse groups are more likely to spot problems than homogeneous ones. Such involvement can support core ethical principles including justice, the autonomy of users and the explicability of systems (see below).

Work to tackle algorithmic bias is part of both ‘ethics by design’ and ‘ethics of use’, the latter focusing on how service users and staff use digital technology. In both, it makes sense to consider ethics early on, although this often does not happen: “We tend to apply the technology then ask the questions,” said Barker.

A third area, societal ethics, involves considering how digital technology affects society as a whole. Tim Berners-Lee, the inventor of the world wide web, has made the case for universal internet access, an argument that is particularly relevant given widespread dependence on remote learning and working during the coronavirus pandemic. Barker said that local public services can play a particularly important role here by undertaking ‘ethical digital place-making’, building systems tailored to their areas, such as Barking and Dagenham’s use of data to target resources and engage people within the London borough.

Despite the phenomenal pace of change and the demands of coping with the pandemic, Barker said that ethical considerations have to be embedded in how organisations develop and use technology. “It’s more than just a tick-list,” he said. “It is how we can get these ethical considerations into our operational, intellectual and systematic DNA.”

Five core ethical principles

  1. Beneficence: do good.
    Benefits of work should outweigh potential risks. Digital, data and technology that interact with people’s wellbeing, finances, relationships and health particularly require robust ethical principles.
  2. Non-maleficence: do no harm.
    Risks and harms need to be considered holistically, rather than just for the individual or organisation. Privacy and security risks require social and organisational design, not just technological.
  3. Autonomy: preserve human agency.
    To make choices, people need to have sufficient knowledge and understanding. It is important to involve stakeholders and interest groups in ethical risk assessment and design.
  4. Justice: be fair.
    Specific issues include algorithmic bias and equitable treatment. Consider whether a technology could produce or magnify unequal outcomes, and if so how to mitigate this (a minimal illustrative check appears after this list).
  5. Explicability: operate transparently.
    Be ready to explain a system’s working as well as its outputs. Make all stages of the implementation process open to public and community scrutiny.
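
The ‘justice’ principle is often put into practice as a disparity check: compare an algorithm’s positive-outcome rates across demographic groups and flag large gaps. The sketch below is a minimal illustration, not anything presented at the session; the data, the group labels and the 0.8 threshold (the common ‘four-fifths’ rule of thumb) are all assumptions.

```python
# Minimal sketch of a disparate-impact check on an algorithm's decisions.
# All data below is hypothetical; 1 = a positive outcome (e.g. approved).
from collections import defaultdict

def selection_rates(decisions, groups):
    """Share of positive decisions for each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose rate is below `threshold` times the highest rate."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

decisions = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B", "A", "A", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)                          # approx. {'A': 0.83, 'B': 0.17}
print(disparate_impact_flags(rates))  # {'A': False, 'B': True} - group B flagged
```

A check like this is only a starting point for the scrutiny Barker describes; it does not replace involving affected communities in the design itself.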

See and read more

A recording of today’s events is available here: https://vimeo.com/427742830/0d6e8144fc

Tim Berners-Lee’s article on universal internet access: Covid-19 makes it clearer than ever: access to the internet should be a universal right

Socitm report: Barking and Dagenham: using data and technology to improve people’s lives

Oxford Internet Institute Digital Ethics Lab

The Alan Turing Institute