Ethical framework for Smart Information Systems

Explore thought leadership, tools and practical insights that support local public service organisations in championing leading-edge practice in the ethical use of emerging technologies and data, as part of their wider digital placemaking and service transformation activities.

Authors and contributors: William Barker, Martin Ferguson

The term Smart Information Systems (SIS) covers the interplay of emerging technology solutions and platforms that use Artificial Intelligence to analyse, describe and predict information from Big Data, as outlined below.

Image: Smart Information Systems graph
  • Emerging “enabling” technologies – The range of new technologies that organisations are seeking to adopt and develop to change the way they approach digital transformation.
  • Artificial intelligence – A set of sciences, theories and techniques whose purpose is to reproduce by a machine the cognitive abilities of a human being. Current developments aim, for instance, to entrust a machine with complex tasks previously delegated to a human.
  • Big Data – A large, heterogeneous data set (open data, proprietary data, commercially purchased data) that enables predictive analytics: the process of extracting information from data and using it to predict trends and behaviour patterns. Predictive web analytics, for example, calculates the statistical probabilities of future events online (a minimal pipeline is sketched below).

Source: BCS SHERPA workshop: Investigating the Ethical and Human Rights Implications of Smart Information Systems (SHERPA project)
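To make the interplay of Big Data and predictive analytics concrete, the sketch below shows one minimal pipeline in Python: heterogeneous sources are combined and a model estimates the probability of a future behaviour pattern. The file names, column names and target are hypothetical placeholders for illustration, not part of the SHERPA material.

```python
# Minimal illustrative sketch of a Smart Information System pipeline:
# heterogeneous data in, statistical prediction out. File and column
# names are hypothetical placeholders, not a real dataset.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Combine heterogeneous sources: open data joined with proprietary records.
open_data = pd.read_csv("open_service_usage.csv")       # published usage stats
proprietary = pd.read_csv("internal_case_records.csv")  # internal case records
data = open_data.merge(proprietary, on="area_code")

features = data[["visits_last_year", "avg_wait_days", "population"]]
target = data["needs_follow_up"]  # the behaviour pattern being predicted

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Predictive analytics": statistical probabilities of a future event.
probabilities = model.predict_proba(X_test)[:, 1]
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```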

At an operational and place-based level, the European High-Level Expert Group and the SHERPA programme have identified the following key requirements for assessing ethical practice in the application of an SIS. The seven requirements are all inter-related, equally important and mutually supportive, and should be implemented and evaluated throughout the lifecycle of an SIS:

  • Human Agency, Liberty and Dignity: Because we value the ability for humans to be autonomous and self-governing (positive liberty), humans’ freedom from external restrictions (negative liberties, such as freedom of movement or freedom of association), and because we hold that each individual has an inherent worth and we should not undermine respect for human life (human dignity), we need to ensure that AI and big data systems do not negatively affect human agency, liberty, and dignity.
  • Technical Robustness and Safety: Because we value humans, human life, and human resources, it is important that the system and its use is safe (often defined as an absence of risk) and secure (often defined as a protection against harm, i.e., something which achieves safety). Under this category we also include the quality of system decisions in terms of their accuracy, reliability, and precision.
  • Privacy and Data Governance: Because AI and big data systems often use information or data that is private or sensitive, it is important to make sure that the system does not violate or infringe upon the right to privacy, and that private and sensitive data is well-protected. While the definition of privacy and the right to privacy is controversial, it is closely linked to the importance of an individual’s ability to have a private life, which is a human right. Under this requirement we also include issues relating to quality and integrity of data (i.e., whether the data is representative of reality), and access to data, as well as other data rights such as ownership.
  • Transparency: Because AI and big data systems can be involved in high-stakes decision-making, it is important to understand how the system achieves its decisions. Transparency, and concepts such as explainability, explicability, and traceability relate to the importance of having (or being able to gain) information about a system (transparency), and being able to understand or explain a system and why it behaves as it does (explainability).
  • Diversity, Non-discrimination and Fairness: Because bias can be found at all levels of AI and big data systems (datasets, algorithms, or users’ interpretation), it is vital that it is identified and removed. Systems should be deployed and used with an inclusionary, fair, and non-discriminatory agenda. Requiring developers to include people from diverse backgrounds (e.g., different ethnicities, genders, disabilities, ideologies, and belief systems), engaging stakeholders, and producing diversity analysis reports and product testing are all ways to include diverse views in these systems (a minimal fairness check is sketched after this list).
  • Individual, Societal and Environmental Wellbeing: Because AI and big data systems can have huge effects on individuals, society, and the environment, systems should be trialled, tested, and monitored for anomalies to ensure that harm to individual, societal and environmental well-being is reduced, eliminated or reversed.
  • Accountability: Because AI and big data systems act like agents in the world, it is important that someone is accountable for the systems’ actions. Furthermore, an individual must be able to receive adequate compensation in the case of harm from a system (redress). We must be able to evaluate the system, especially in the situation of a bad outcome (auditability). There must also be processes in place for the minimisation and reporting of negative impacts, with internal and external governance frameworks (e.g., whistleblowing), and human oversight (an illustrative audit-logging pattern is sketched after this list).

Source: Guidelines for the ethical use of AI and Big Data systems (SHERPA project)
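As a concrete illustration of the fairness requirement, the sketch below computes a simple demographic parity gap: the difference in the rate of favourable automated decisions between groups. The decisions, group labels and threshold are hypothetical stand-ins; a real deployment would use an established fairness toolkit and a wider set of metrics.

```python
# Minimal sketch of a demographic parity check: does the system grant
# favourable outcomes at similar rates across groups? Inputs are
# hypothetical stand-ins for real model outputs and protected attributes.
import numpy as np

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])  # 1 = favourable outcome
group = np.array(["a", "a", "a", "b", "b", "b", "a", "b", "a", "b"])

rates = {g: decisions[group == g].mean() for g in np.unique(group)}
parity_gap = max(rates.values()) - min(rates.values())

print(f"Favourable-outcome rates by group: {rates}")
print(f"Demographic parity gap: {parity_gap:.2f}")
if parity_gap > 0.1:  # illustrative threshold, not a regulatory standard
    print("Warning: review the system for potential discrimination.")
```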
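Similarly, the accountability requirement's call for auditability and human oversight can be supported technically by logging every automated decision with enough context to reconstruct and challenge it later. The pattern below is one minimal approach with hypothetical field names; it is not drawn from the SHERPA guidance itself.

```python
# Minimal audit-trail sketch for accountability: record each automated
# decision with its inputs, outcome and timestamp so it can be audited
# and, if necessary, challenged later. Field names are illustrative.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="decision_audit.log", level=logging.INFO)

def log_decision(case_id: str, inputs: dict, outcome: str, model_version: str) -> None:
    """Append one auditable record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "inputs": inputs,
        "outcome": outcome,
        "model_version": model_version,
    }
    logging.info(json.dumps(record))

log_decision("case-0001", {"avg_wait_days": 12}, "needs_follow_up", "v1.3")
```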

Wider points to consider

The following points to consider are by no means exhaustive and should be seen as a starting point for further discovery around building up an ethical framework for Smart Information Systems (one way to put them into practice is sketched after the list).

  • Privacy: Does the use of the technology raise concerns that people’s privacy might be at risk or endangered?
  • Personal Data: Does the technology or its use presume that a particular group or person “owns” the data? If so, who?
  • Security: Does the technology use personally identifying data? If so, is this data stored and treated securely?
  • Inclusion of stakeholders: Are people affected by the technology involved in any way with its use or implementation? Do they have an opportunity to have a say in how the technology impacts them?
  • Consent of stakeholders: Have people affected by the technology been given an opportunity to consent to that technology existing or having the impact that it does on their lives?
  • Loss of employment: Does the use of the technology put people’s jobs at risk, either directly or indirectly?
  • Autonomy/agency: Does the use of the technology impact in any way on people’s freedom to choose how to live their lives?
  • Discrimination: Can/does the technology or its use lead to discriminating behaviour in any way? Does the technology draw on data sets that are representative of those stakeholders affected by the technology?
  • Potential for criminal use: Could the technology be used for criminal or other ends which were not envisaged or intended by its developers?
  • Trust: Does the technology impact people’s trust in organisations, other people, or the technology itself?
  • Power asymmetries: Can or does the technology exacerbate existing power asymmetries by, for instance, giving a large amount of power to those already holding power over other people?
  • Inequality: Can or does the technology reduce inequalities in society or exacerbate them?
  • Fairness: Is the technology fair in the way in which it treats those affected by it? Are there unfair practices which arise in relation to the technology?
  • Justice: Does the technology or its use raise a feeling of injustice on the part of one or more groups affected?
  • Freedom: Does the technology or its use raise questions regarding freedom of speech, censorship, or freedom of assembly?
  • Sustainability: Is the technology or its use sustainable, or does it draw on limited natural resources in some way?
  • Environmental impact: Does the technology have any impact on the environment, and if so what?

Adapted from: Understanding ethics and human rights in Smart Information Systems – SHERPA analysis (SHERPA project)
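One way to put these points to work is to encode them as a machine-readable checklist that a review team can walk through, flagging any concern raised (or left unassessed) for follow-up. The sketch below is an illustrative structure of our own, abridged from the list above; it is not a SHERPA tool.

```python
# Illustrative sketch: the "wider points" as a checklist. Each entry is a
# concern statement; anything not explicitly cleared gets flagged.
CHECKLIST = {
    "privacy": "People's privacy might be put at risk.",
    "personal_data": "Ownership of the data is presumed or contested.",
    "security": "Personally identifying data may not be stored securely.",
    "consent": "Affected people have not had an opportunity to consent.",
    "discrimination": "The system could lead to discriminatory behaviour.",
}

def run_assessment(concerns: dict) -> list:
    """Return items needing follow-up: any concern raised or left unassessed."""
    return [key for key in CHECKLIST if concerns.get(key) is not False]

answers = {"privacy": True, "security": False, "consent": None}
for item in run_assessment(answers):
    print(f"Follow up on '{item}': {CHECKLIST[item]}")
```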

Principles into practice

Image: SHERPA project logo

More widely, the SHERPA programme has developed a number of key resources to help embed principles into practice when adopting emerging technologies and data solutions:

Image: SIENNA project logo

See also the SIENNA project, which looks at ethical issues in three new and emerging technology areas: human genomics, human enhancement and human-machine interaction. The SIENNA team is currently developing methodological handbooks for the ethical, legal and social analysis of human genomics, human enhancement, and artificial intelligence and robotics (see below).

SIENNA typology: how AI and robotics will have a significant impact on society, values and human rights

Effect on values and people

Values affected:

  • Accountability
  • Autonomy
  • Equality
  • Fairness
  • Human dignity
  • Justice
  • Integrity of the person
  • Privacy
  • Security
  • Solidarity

People highly affected:

  • Consumers
  • Disfavoured or ‘excluded’ people
  • First responders
  • Healthcare providers
  • Inhabitants of poor countries
  • Patients
  • Recipients of insurance & social benefits
  • Regulators & policy-makers
  • Tenants (e.g. minority ethnic groups)
  • The elderly
  • Workers

Types of impact

Positive:

  • Improve medical diagnostics
  • Advance cybersecurity
  • Improve decision-making via data analysis
  • Enhance healthcare
  • Improve elder care
  • Advance language translation
  • Improve voting security
  • Introduce new forms of co-operation & inclusion
  • Reduce repetitive tasks

Dual:

  • Alter human relations
  • Alter legal frameworks
  • Alter moral conceptions
  • Change understandings of personhood
  • Create unintended consequences
  • Intensify big data analytics
  • Escalate surveillance
  • Increase profiling
  • Increase leisure

Negative:

  • Harm/threat of harm from autonomous weapons
  • Loss of control over privacy & personal data
  • Bias & discrimination
  • Cyber warfare
  • Increase class/wealth domination
  • Diminish media pluralism
  • Greater energy consumption

See the AI and Robotics infographic (PDF) (SIENNA project)

Case Studies