On 20 May 2020, the Information Commissioner’s Office (ICO) in the United Kingdom published a guide titled ‘Explaining decisions made with AI’. The guide aims to provide practical advice to organisations and help explain the processes, services and decisions delivered or assisted by artificial intelligence (AI).
As noted in the guide, organisations are increasingly using AI to support or make decisions about individuals. The guide is divided into three parts:
- Part I: The basics of explaining AI.
- Part II: Explaining AI in practice.
- Part III: What explaining AI means for your organisation.
AI can be defined as an umbrella term for a range of algorithm-based technologies that solve complex tasks by carrying out functions that previously required human thinking. Decisions made using AI are either fully automated or made with a ‘human in the loop’. As with any other form of decision-making, those impacted by an AI-supported decision should be able to hold someone accountable for it.
As noted in the guide, giving individuals explanations of AI-assisted decisions helps to ensure that the use of AI is human-centric. Where there are well-designed processes for contesting decisions and for continuously improving AI systems based on customer feedback, people are more likely to have the confidence to express their point of view.
The guide recommends six main types of AI explanations:
- Rationale explanation: The reasons that led to a decision, delivered in an accessible and non-technical way.
- Responsibility explanation: Who is involved in the development, management and implementation of an AI system, and who to contact for a human review of a decision.
- Data explanation: What data has been used in a particular decision and how.
- Fairness explanation: Steps taken across the design and implementation of an AI system to ensure that the decisions it supports are generally unbiased and fair, and whether or not an individual has been treated equitably.
- Safety and performance explanation: Steps taken across the design and implementation of an AI system to maximise the accuracy, reliability, security and robustness of its decisions and behaviours.
- Impact explanation: Steps taken across the design and implementation of an AI system to consider and monitor the impacts that the use of an AI system and its decisions has or may have on an individual, and on wider society.
The ICO’s guide is accessible on the ICO’s website.
Please note: The information contained in this note is for general guidance on matters of interest, and does not constitute legal advice. For any enquiries, please contact us at [email protected].