The European Commission’s High-Level Expert Group on Artificial Intelligence (AI) has published the Draft Ethics Guidelines for Trustworthy AI (Draft Guidelines on AI) for consultation. As explained in the Draft Guidelines on AI:
> [AI] is one of the most transformative forces of our time, and is bound to alter the fabric of society. It presents a great opportunity to increase prosperity and growth, which Europe must strive to achieve. Over the last decade, major advances were realised due to the availability of vast amounts of digital data, powerful computing architectures, and advances in AI techniques such as machine learning. Major AI-enabled developments in autonomous vehicles, healthcare, home/service robots, education or cybersecurity are improving the quality of our lives every day. Furthermore, AI is key for addressing many of the grand challenges facing the world, such as global health and wellbeing, climate change, reliable legal and democratic systems and others expressed in the United Nations Sustainable Development Goals.
>
> Having the capability to generate tremendous benefits for individuals and society, AI also gives rise to certain risks that should be properly managed. Given that, on the whole, AI’s benefits outweigh its risks, we must ensure to follow the road that maximises the benefits of AI while minimising its risks. To ensure that we stay on the right track, a human-centric approach to AI is needed, forcing us to keep in mind that the development and use of AI should not be seen as a means in itself, but as having the goal to increase human well-being. Trustworthy AI will be our north star, since human beings will only be able to confidently and fully reap the benefits of AI if they can trust the technology.
According to the Draft Guidelines on AI, trustworthy AI has two components: (i) it should respect fundamental rights, applicable regulation, and core principles and values, ensuring an “ethical purpose”; and (ii) it should be technically robust and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm. The Draft Guidelines on AI accordingly set out a framework for trustworthy AI:
- Chapter I deals with ensuring AI’s ethical purpose, by setting out the fundamental rights, principles and values with which it should comply.
- Chapter II lists the requirements for trustworthy AI and offers an overview of the technical and non-technical methods that can be used for its implementation, tackling both ethical purpose and technical robustness.
- Chapter III operationalises the requirements by providing a concrete assessment list for trustworthy AI, which is then adapted to specific use cases.
The deadline for submissions is 1 February 2019, and submissions can be made via the online form here. The call for consultation is addressed to all stakeholders, including companies, organisations, researchers, public services, institutions, individuals and other entities. The final version is expected to be published in March 2019, and will include a mechanism for voluntary endorsement by stakeholders.
The consultation form is accessible here.
The Draft Guidelines on AI are accessible here.
Please note: The information contained in this note is for general guidance on matters of interest, and does not constitute legal advice. For any enquiries, please contact us at [email protected].