The government needs to identify and embed authoritative ethical principles and issue accessible guidance on AI governance to those using it in the public sector, the Committee on Standards in Public Life has said.
In a report, Artificial Intelligence and Public Standards, the standards watchdog said that government and regulators must also establish a coherent regulatory framework that sets clear legal boundaries on how AI should be used in the public sector.
Jonathan Evans, Chair of the Committee, said: “Artificial intelligence – and in particular, machine learning – will transform the way public sector organisations make decisions and deliver public services. Demonstrating high standards will help realise the huge potential benefits of AI in public service delivery. However, it is clear that the public need greater reassurance about the use of AI in the public sector.
“Public sector organisations are not sufficiently transparent about their use of AI and it is too difficult to find out where machine learning is currently being used in government.”
Lord Evans added: “Explanations for decisions made by machine learning are important for public accountability. Explainable AI is a realistic and attainable goal for the public sector, so long as public sector organisations and private companies prioritise public standards when they are designing and building AI systems.”
The CSPL report suggests that data bias remains a serious concern. It said further work was needed on measuring and mitigating the impact of bias to prevent algorithmic discrimination in public services.
One of the report's 15 recommendations (see below) says that providers of public services, both public and private, "must consciously tackle issues of bias and discrimination by ensuring they have taken into account a diverse range of behaviours, backgrounds and points of view".
Lord Evans said: “Our message to government is that the UK’s regulatory and governance framework for AI in the public sector remains a work in progress and deficiencies are notable. The work of the Office for AI, the Alan Turing Institute, the Centre for Data Ethics and Innovation (CDEI), and the Information Commissioner’s Office (ICO) is commendable. But on transparency and data bias in particular, there is an urgent need for practical guidance and enforceable regulation.”
The CSPL report concludes that the UK does not need a new AI regulator, but that all regulators must adapt to the challenges that AI poses to their specific sectors. It endorses the government’s intentions to establish CDEI as an independent, statutory body that will advise government and regulators in this area.
Lord Evans added: “All public bodies using AI to deliver frontline services must comply with the law surrounding data-driven technology and implement clear, risk-based governance for their use of AI. Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards.”
The CSPL’s recommendations to government, national bodies and regulators
The Committee makes eight recommendations to government, national bodies and regulators to help create a strong and coherent governance and regulatory framework for AI in the public sector.
Recommendation 1: Ethical principles and guidance
There are currently three different sets of ethical principles intended to guide the use of AI in the public sector – the FAST SUM Principles, the OECD AI Principles, and the Data Ethics Framework. It is unclear how these work together and public bodies may be uncertain over which principles to follow.
a. The public needs to understand the high level ethical principles that govern the use of AI in the public sector. The government should identify, endorse and promote these principles and outline the purpose, scope of application and respective standing of each of the three sets currently in use.
b. The guidance by the Office for AI, the Government Digital Service and the Alan Turing Institute on using AI in the public sector should be made easier to use and understand, and promoted extensively.
Recommendation 2: Articulating a clear legal basis for AI
All public sector organisations should publish a statement on how their use of AI complies with relevant laws and regulations before those systems are deployed in public service delivery.
Recommendation 3: Data bias and anti-discrimination law
The Equality and Human Rights Commission should develop guidance in partnership with both the Alan Turing Institute and the CDEI on how public bodies should best comply with the Equality Act 2010.
Recommendation 4: Regulatory assurance body
Given the speed of development and implementation of AI, we recommend the establishment of a regulatory assurance body that identifies gaps in the regulatory landscape and advises individual regulators and government on the issues associated with AI. We do not recommend the creation of a specific AI regulator; instead, all existing regulators should consider and respond to the regulatory requirements and impact of the growing use of AI in the fields for which they have responsibility. The Committee endorses the government’s intention for CDEI to perform a regulatory assurance role. The government should act swiftly to clarify the overall purpose of CDEI before setting it on an independent statutory footing.
Recommendation 5: Procurement rules and processes
Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements.
Recommendation 6: The Crown Commercial Service’s Digital Marketplace
The Crown Commercial Service should introduce practical tools as part of its new AI framework that help public bodies, and those delivering services to the public, find AI products and services that meet their ethical requirements.
Recommendation 7: Impact assessment
Government should consider how an AI impact assessment requirement could be integrated into existing processes to evaluate the potential effects of AI on public standards. Such assessments should be mandatory and should be published.
Recommendation 8: Transparency and disclosure
Government should establish guidelines for public bodies about the declaration and disclosure of their AI systems.
Recommendations to front-line providers, both public and private, of public services
The Committee makes seven recommendations to front-line providers of public services to help establish effective risk-based governance for the use of AI.
Recommendation 9: Evaluating risks to public standards
Providers of public services, both public and private, should assess the potential impact of a proposed AI system on public standards at project design stage, and ensure that the design of the system mitigates any standards risks identified. Standards review will need to occur every time a substantial change to the design of an AI system is made.
Recommendation 10: Diversity
Providers of public services, both public and private, must consciously tackle issues of bias and discrimination by ensuring they have taken into account a diverse range of behaviours, backgrounds and points of view. They must account for the full diversity of the population and provide a fair and effective service.
Recommendation 11: Upholding responsibility
Providers of public services, both public and private, should ensure that responsibility for AI systems is clearly allocated and documented, and that operators of AI systems are able to exercise their responsibility in a meaningful way.
Recommendation 12: Monitoring and evaluation
Providers of public services, both public and private, should monitor and evaluate their AI systems to ensure they always operate as intended.
Recommendation 13: Establishing oversight
Providers of public services, both public and private, should set oversight mechanisms that allow for their AI systems to be properly scrutinised.
Recommendation 14: Appeal and redress
Providers of public services, both public and private, must always inform citizens of their right to appeal against automated and AI-assisted decisions, and of the method for doing so.
Recommendation 15: Training and education
Providers of public services, both public and private, should ensure their employees working with AI systems undergo continuous training and education.
Source: Artificial Intelligence and Public Standards – A Review by the Committee on Standards in Public Life.