Those living in fear of artificial intelligence (AI) going horribly awry, be calmed: a 10-point code of conduct for the fantastical/terrifying stuff has been published, so everything might be OK.
Penned by ‘innovation foundation’ Nesta, the Asimov-esque guide is specifically aimed at the application of AI in the public sector – and implores Westminster to be as transparent as can be about how algorithms are constructed.
Nesta director Eddie Copeland has kindly prepared his AI-behaviour directives in this blog, in which he notes: ‘While debate may continue on the pros and cons of creating more robust codes of practice for the private sector, a stronger case can surely be made for governments and the public sector.
‘After all, an individual can opt-out of using a corporate service whose approach to data they do not trust. They do not have that same luxury with services and functions where the state is the monopoly provider.’
Anyway, here’s the man’s code of conduct:
- Every algorithm used by a public sector organisation should be accompanied by a description of its function, objectives and intended impact, made available to those who use it
- Public sector organisations should publish details describing the data on which an algorithm was (or is continuously) trained, and the assumptions used in its creation, together with a risk assessment for mitigating potential biases
- Algorithms should be categorised on an algorithmic risk scale of 1-5, with 1 indicating a very minor impact on an individual and 5 a very high one
- A list of all the inputs used by an algorithm to make a decision should be published
- Citizens must be informed when their treatment has been informed wholly or in part by an algorithm
- Every algorithm should have an identical sandbox version for auditors to test the impact of different input conditions
- When using third parties to create or run algorithms on their behalf, public sector organisations should only procure from organisations able to meet principles 1-6
- A named member of senior staff (or their job role) should be held formally responsible for any actions taken as a result of an algorithmic decision
- Public sector organisations wishing to adopt algorithmic decision making in high-risk areas should sign up to a dedicated insurance scheme that provides compensation to individuals negatively impacted by a mistaken decision made by an algorithm
- Public sector organisations should commit to evaluating the impact of the algorithms they use in decision making and publishing the results
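Several of the principles above (publishing an algorithm's function, training data, inputs and risk rating) amount to keeping a public register entry per algorithm. As a purely illustrative sketch, and not anything Nesta or Copeland propose, here is what such a record might look like in code; all field names are assumptions:

```python
from dataclasses import dataclass, field


@dataclass
class AlgorithmRegisterEntry:
    """Hypothetical transparency record for one public sector algorithm.

    Field names are illustrative, loosely mapped to principles 1-4 of
    the code of conduct; this is a sketch, not a proposed standard.
    """
    name: str
    function: str                 # principle 1: what the algorithm does
    objectives: str               # principle 1: what it is meant to achieve
    training_data: str            # principle 2: data it was trained on
    bias_risk_assessment: str     # principle 2: mitigation of potential biases
    risk_level: int               # principle 3: 1 (very minor) to 5 (very high)
    inputs: list = field(default_factory=list)  # principle 4: decision inputs

    def __post_init__(self):
        # Enforce the 1-5 risk scale described in the code of conduct.
        if not 1 <= self.risk_level <= 5:
            raise ValueError("risk_level must be on the 1-5 scale")


# Example entry for a fictional benefits-triage algorithm
entry = AlgorithmRegisterEntry(
    name="benefits-triage-v1",
    function="Ranks benefit claims for manual review",
    objectives="Reduce processing backlog",
    training_data="Anonymised claims, 2015-2017",
    bias_risk_assessment="Checked for postcode and age skew",
    risk_level=4,
    inputs=["claim_amount", "claim_history_length"],
)
print(entry.risk_level)
```

The `__post_init__` check is one simple way to make the risk scale from principle 3 self-enforcing rather than a matter of documentation alone.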
Seems like a pretty sensible list to me. But what do YOU think? Do you agree with all 10 conditions? Are there important points missing? Do you not care at all because you believe the earth is soon to be slain by the cosmic entity Cthulhu, thus AI is the least of our problems? It would be good to hear from you.