Digital ethics collection | Article

Ethics and data protection in Artificial Intelligence

Authored by William Barker, Martin Ferguson

Given the global significance of the challenges posed by harnessing emerging technologies and data, the International Conference of Data Protection and Privacy Commissioners (which includes the UK ICO) has initiated a declaration on ethics and data protection in artificial intelligence. It sets out the following guiding principles as its core values to preserve human rights in the development of AI and data analytics.

ICDPPC declaration on ethics and data protection in AI

  1. Artificial intelligence and machine learning technologies should be designed, developed and used in respect of fundamental human rights and in accordance with the fairness principle, in particular by:
    • considering individuals’ reasonable expectations by ensuring that the use of artificial intelligence systems remains consistent with their original purposes, and that the data are used in a way that is not incompatible with the original purpose of their collection;
    • taking into consideration not only the impact that the use of artificial intelligence may have on the individual, but also the collective impact on groups and on society at large; and
    • ensuring that artificial intelligence systems are developed in a way that facilitates human development and does not obstruct or endanger it, thus recognizing the need for delineation and boundaries on certain uses.
  2. Continued attention and vigilance, as well as accountability, for the potential effects and consequences of artificial intelligence systems should be ensured, in particular by:
    • promoting accountability of all relevant stakeholders to individuals, supervisory authorities and other third parties as appropriate, including through the realization of audit, continuous monitoring and impact assessment of artificial intelligence systems, and periodic review of oversight mechanisms;
    • fostering collective and joint responsibility, involving the whole chain of actors and stakeholders, for example with the development of collaborative standards and the sharing of best practices;
    • investing in awareness raising, education, research and training in order to ensure a good level of information on and understanding of artificial intelligence and its potential effects in society; and
    • establishing demonstrable governance processes for all relevant actors, such as relying on trusted third parties or the setting up of independent ethics committees.
  3. Artificial intelligence systems transparency and intelligibility should be improved, with the objective of effective implementation, in particular by:
    • investing in public and private scientific research on explainable artificial intelligence;
    • promoting transparency, intelligibility and reachability, for instance through the development of innovative ways of communication, taking into account the different levels of transparency and information required for each relevant audience;
    • making organizations’ practices more transparent, notably by promoting algorithmic transparency and the auditability of systems, while ensuring meaningfulness of the information provided;
    • guaranteeing the right to informational self-determination, notably by ensuring that individuals are always informed appropriately when they are interacting directly with an artificial intelligence system or when they provide personal data to be processed by such systems; and
    • providing adequate information on the purpose and effects of artificial intelligence systems in order to verify continuous alignment with expectation of individuals and to enable overall human control on such systems.
  4. As part of an overall “ethics by design” approach, artificial intelligence systems should be designed and developed responsibly, by applying the principles of privacy by default and privacy by design, in particular by:
    • implementing technical and organizational measures and procedures – proportional to the type of system that is developed – to ensure that data subjects’ privacy and personal data are respected, both when determining the means of the processing and at the moment of data processing;
    • assessing and documenting the expected impacts on individuals and society at the beginning of an artificial intelligence project and for relevant developments during its entire life cycle; and
    • identifying specific requirements for ethical and fair use of the systems and for respecting human rights as part of the development and operations of any artificial intelligence system.
  5. Empowerment of every individual should be promoted, and the exercise of individuals’ rights should be encouraged, as well as the creation of opportunities for public engagement, in particular by:
    • respecting data protection and privacy rights, including where applicable the right to information, the right to access, the right to object to processing and the right to erasure, and promoting those rights through education and awareness campaigns;
    • respecting related rights including freedom of expression and information, as well as non-discrimination;
    • recognizing that the right to object or appeal applies to technologies that influence personal development or opinions and guaranteeing, where applicable, individuals’ right not to be subject to a decision based solely on automated processing if it significantly affects them and, where not applicable, guaranteeing individuals’ right to challenge such decision; and
    • using the capabilities of artificial intelligence systems to foster an equal empowerment and enhance public engagement, for example through adaptable interfaces and accessible tools.
  6. Unlawful biases or discriminations that may result from the use of data in artificial intelligence should be reduced and mitigated, including by:
    • ensuring the respect of international legal instruments on human rights and non-discrimination;
    • investing in research into technical ways to identify, address and mitigate biases;
    • taking reasonable steps to ensure the personal data and information used in automated decision making is accurate, up-to-date and as complete as possible; and
    • elaborating specific guidance and principles in addressing biases and discrimination, and promoting individuals’ and stakeholders’ awareness.

Source: ICDPPC Declaration on Ethics and Data Protection in Artificial Intelligence
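
Principle 6 calls for investment in technical ways to identify and mitigate bias. As a purely illustrative sketch of what such a check might look like (the function, the synthetic data and the group labels are all hypothetical examples, not part of the declaration or any regulator's methodology), the following computes the demographic parity difference, i.e. the gap in positive-outcome rates between groups in an automated decision system:

```python
# Illustrative sketch: one simple technical check for bias in automated
# decisions, computing the demographic parity difference (the gap in
# positive-outcome rates between groups). All data here is synthetic.

from collections import defaultdict

def demographic_parity_difference(decisions):
    """decisions: iterable of (group_label, outcome) pairs, outcome in {0, 1}.
    Returns the largest gap in positive-outcome rate between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Synthetic example: group A approved 8/10, group B approved 4/10.
sample = [("A", 1)] * 8 + [("A", 0)] * 2 + [("B", 1)] * 4 + [("B", 0)] * 6
print(f"Demographic parity difference: {demographic_parity_difference(sample):.2f}")
# prints "Demographic parity difference: 0.40"
```

A metric like this is only a starting point: a large gap flags a system for the kind of human review, impact assessment and documentation the declaration describes; it does not by itself establish unlawful discrimination.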

National Data Strategy: adopting a responsible data approach

Within the UK public sector, the focus on these issues has been on establishing a strategic framework and supporting guidance to underpin ethics and data protection in artificial intelligence. The Government’s National Data Strategy commits public sector organisations to adopting a responsible data approach, which means “data is handled in a way that is lawful, secure, fair, ethical, sustainable and accountable, while also supporting innovation and research”.

The strategy notes that the Government has a responsibility to ensure that there is a clear and predictable legal framework for data use, one that can both spur the innovative use of data, especially for purposes in the public interest, and earn people’s trust. It also recognises a further responsibility to ensure that the infrastructure on which data relies is secure, sustainable and resilient enough to support ongoing digitalisation, economic growth and changes to the way that we live and work. Alongside this sits a clear commitment that the public sector must be transparent and prepared to open itself up to scrutiny over its own use of data.

At an operational level, organisations are seen as having responsibilities to upskill themselves so that they can both manage and use data efficiently as a strategic resource, and ensure such use is lawful, secure, unbiased and explainable. Likewise, organisations are expected to place a greater value on ensuring that they have the right skills to collect, organise and manage data. In order to be effective, organisations are encouraged also to ensure that they account for biases arising from data or algorithm use, as identified in the Centre for Data Ethics and Innovation (CDEI) interim report on the issue.

The responsible data approach advocated in the strategy is built on wider research suggesting that transparency about how data is used is important for building public trust, and that trust is an enabler of public sector data sharing. The strategy notes that “whilst new technologies may help to create safe and secure environments for sharing data, including personal data; nevertheless, ethical and legal questions remain”.

The Government strategy recognises that the public sector will only be able to build and maintain public trust by ensuring and clearly demonstrating that its approach to data is rooted in appropriate levels of transparency, robust safeguards and credible assurances. Public sector organisations will need to open themselves up to scrutiny, increase public engagement and improve the publishing of data by which progress can be measured.

Principles into practice: what can help

At an official level, the recently refreshed Data Ethics Framework guides appropriate and responsible data use in government and the wider public sector. In the research and statistics community, the UK Statistics Authority has established the National Statistician’s Data Ethics Advisory Committee and developed a self-assessment tool to help researchers and statisticians consider the ethics of their use of data.

More widely, the Information Commissioner’s Office has published new Guidance on AI and data protection aimed at two audiences:

  • those with a compliance focus, such as data protection officers (DPOs), general counsel, risk managers, senior management, and the ICO’s own auditors; and
  • technology specialists, including machine learning experts, data scientists, software developers and engineers, and cybersecurity and IT risk managers.

The guidance clarifies how to assess, from a data protection perspective, the risks to rights and freedoms that AI can pose, and the appropriate measures you can implement to mitigate them. It corresponds to the data protection principles and is structured as follows:

  • Part one addresses accountability and governance in AI, including data protection impact assessments (DPIAs);
  • Part two covers fair, lawful and transparent processing, including lawful bases, assessing and improving AI system performance, and mitigating potential discrimination;
  • Part three addresses data minimisation and security; and
  • Part four covers compliance with individual rights, including rights related to automated decision-making.

The guidance emphasises the importance of the wider accountability principle, which makes individuals and organisations responsible for complying with data protection and for demonstrating that compliance in any AI system. In an AI context, it sees accountability as requiring you to:

  • be responsible for the compliance of your systems;
  • assess and mitigate their risks; and
  • document and demonstrate how your systems are compliant and justify the choices you have made.
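
One way to make "document and demonstrate" concrete is to keep a structured compliance record per AI system. The sketch below is a hypothetical illustration in that spirit only: the record structure, field names and risk categories are invented examples, not a format mandated by the ICO or the guidance.

```python
# Illustrative sketch only: a minimal record an organisation might keep
# to document and demonstrate compliance decisions for an AI system,
# in the spirit of the accountability principle. The fields and example
# values are hypothetical, not an ICO-mandated format.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class ComplianceRecord:
    system_name: str
    lawful_basis: str              # e.g. "public task", "consent"
    dpia_completed: bool           # has a data protection impact assessment been done?
    risks: list = field(default_factory=list)        # identified risks, in order
    mitigations: list = field(default_factory=list)  # mitigation for each risk, in order
    last_reviewed: date = field(default_factory=date.today)

    def outstanding_risks(self):
        """Risks documented without a recorded mitigation yet."""
        return self.risks[len(self.mitigations):]

record = ComplianceRecord(
    system_name="benefit-triage-model",
    lawful_basis="public task",
    dpia_completed=True,
    risks=["indirect discrimination", "inaccurate source data"],
    mitigations=["quarterly bias audit"],
)
print(record.outstanding_risks())  # prints "['inaccurate source data']"
```

Keeping risks and mitigations paired in a single auditable record is what lets an organisation both justify its choices and show, on request, where mitigation work is still outstanding.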

The ICO has also developed a framework for auditing AI focusing on best practices for data protection compliance, whether you design your own AI system or implement one from a third party. It provides a clear methodology to audit AI applications and ensure they process personal data fairly. It comprises:

  • auditing tools and procedures that the ICO will use in audits and investigations;
  • the detailed guidance on AI and data protection; and
  • a forthcoming toolkit designed to provide further practical support to organisations auditing the compliance of their own AI systems.

The ICO Accountability Framework, which is divided into 10 categories, contains expectations and examples of how your organisation can demonstrate its accountability. The ICO sees accountability as one of the key principles in data protection law – it makes you responsible for complying with the legislation and says that you must be able to demonstrate your compliance.

The ICO guidance is also supplemented by further online resources.

In addition, a number of expert institutions have produced guidance around the need for greater algorithmic transparency, particularly within the public sector.

  • The Open Data Institute Data Ethics Canvas and ODI Open design patterns help to identify and manage ethical issues at the start of a project that uses data, and throughout it. They encourage you to ask important questions about projects that use data, and to reflect on the responses.
  • The Alan Turing Institute publication Understanding artificial intelligence ethics and safety is a guide for everyone involved in the design, production, and deployment of a public sector AI project, from data scientists and data engineers to domain experts and delivery managers.
  • Doteveryone has created a Consequence Scanning Manual, which is an iterative development tool to help organisations think about the potential impact of their solutions or service on people and society. It is designed for use by anyone directly or indirectly involved with the design of public sector digital and data solutions or services.
  • The Data Justice Lab data literacy guidebook provides an overview of different types of tools that aim to educate citizens about datafication and its social consequences. It is targeted at anyone working directly or indirectly with data in the public sector, including data practitioners (statisticians, analysts and data scientists), policymakers, operational staff and those helping produce data-informed insight, to ensure the highest ethical standards in their projects.
  • Decision-making in the Age of the Algorithm is a NESTA guide for public sector organisations on how to introduce AI tools so that they are embraced and used wisely by practitioners.
  • A Royal Statistical Society (RSS) and Institute and Faculty of Actuaries (IFoA) Guide for Ethical Data Science is intended to complement existing ethical and professional guidance and is aimed at addressing the ethical and professional challenges of working in a data science setting.

Case Studies