The policy recommendations of the EU High Level Group on AI were published on 26 June 2019. DataEthics.eu's co-founder Gry Hasselbalch was one of the 52 experts tasked with creating ethical guidelines (published earlier this year) and policy recommendations for the development of AI in Europe. In this blog post she presents what she sees as the most important recommendations and why.
Over the last year I took part in the EU High Level Expert Group on AI. A couple of months ago we published the group's first deliverable, a set of ethics principles and requirements for AI. Today, we launched the Policy and Investment Recommendations for Trustworthy AI. In this last stretch of work on the policy recommendations, I invested particular effort in the following key areas because of their significance for the implementation of the ethics framework:
- Ensuring investment in and policy on Trustworthy AI
AI ethics needs action. Many AI ethics principles and guidelines have been published lately, including the OECD AI ethics principles that 42 countries have adopted. While these frameworks are often said to be very similar, I would argue that the EU guidelines differ in one central way: they come with an actual framework for enforcement and implementation. Firstly, the guidelines' core foundation is the European Fundamental Rights framework, which forms part of the core principles of European law. Secondly, they are followed up by a piloting phase that will be used to test the assessment list included in the guidelines. Thirdly, they are supplemented with a set of policy recommendations aimed not only at creating a competitive space for AI innovation, but at creating the space for the development of Trustworthy AI embedded with human rights principles and ethical norms.
Thus, the publication of an ethical framework for AI in Europe should not be seen as an end result; the guidelines do not stand alone. They should rather be seen as the first step in a process. The policy recommendations now published reiterate throughout the document that the realization of Trustworthy AI requires a genuine investment in all its components: technical implementation of the GDPR (e.g. state-of-the-art anonymisation, privacy-preserving traceability of data, etc.), ethics and social impact assessment, training of businesses and staff, education of the public and policymakers, adaptation and reform of regulatory frameworks, innovation and research, and much more.
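To give a small, concrete sense of what "technical implementation" can mean in practice, here is a minimal sketch (my own hypothetical example, not part of the recommendations) that pseudonymises a user identifier with a keyed hash before it enters an analytics pipeline. Note that under the GDPR this is pseudonymisation rather than full anonymisation, since whoever holds the key can still re-identify the record:

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it would be stored securely,
# separate from the analytics data store.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(user_id: str) -> str:
    """Return a stable keyed hash (HMAC-SHA256) of a user identifier.

    This is pseudonymisation, not anonymisation: the key holder can
    link records back to the original identifier, so the output is
    still personal data under the GDPR.
    """
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# The same input always maps to the same pseudonym, so records can be
# joined for analysis without exposing the raw identifier.
assert pseudonymise("alice@example.com") == pseudonymise("alice@example.com")
assert pseudonymise("alice@example.com") != pseudonymise("bob@example.com")
```

The design choice here is deliberate: a keyed hash allows longitudinal analysis across datasets while keeping re-identification under the control of the key holder, which is exactly the kind of trade-off between utility and protection that the recommendations ask implementers to assess.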
Why does AI ethics need all this? Because AI ethics doesn't come in a bottle. It cannot be injected in any easy manner.
AI ethical dilemmas and questions are, for example, very often found in the "in between": in the complementarity between humans and AI systems, between formal requirements and one's own social responsibility, and so on. AI ethics requires that we assess the potential "trickle-down effects" of the new relations between humans and AI, society and AI, etc. That we understand AI technologies as part of a "whole" – a whole organisation, a whole society (with different requirements and demands) – and always place the human being at the centre. AI ethics requires a multifaceted, interconnected response and preventative action: technical, organisational, legal, economic, cultural/interpersonal. And even when we try to do ethics with our full concentrated awareness, we need to always consider that there are trade-offs between different values, interests and even ethical principles.
- A data infrastructure based on personal control
AI is data hungry. It evolves, learns, predicts and decides on data. As such, many ethical aspects of AI are data-ethical in nature. An overarching issue here is the data of people used for AI innovation. To ensure a new paradigm in which data asymmetries between institutions, businesses and individuals are redressed, we need a technological personal data infrastructure based on individual empowerment and personal data control. Privacy-by-design approaches, personal data protection and the like are emphasised throughout the recommendations, and specifically in recommendation 18.6 we call for an active strategy on this type of infrastructure based on personal data control: "Develop mechanisms for the protection of personal data, and individuals to control and be empowered by their data."
A current movement in technology and business development is actually addressing exactly that with the creation of personal data management systems and services. These are interoperable services that enable individuals to share data and either donate or activate their data for personal benefits (such as personalised finance or medicine) while remaining in control of the use of their data. Examples can be found in the Mydata.org network of entrepreneurs, activists, academics, corporations, public agencies and developers, and in technological initiatives such as Solid.
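The core idea behind such services can be sketched in a few lines. The example below is hypothetical and not drawn from any specific MyData or Solid implementation; it simply illustrates the principle that the individual, not the service, holds the switch on each purpose-specific use of their data:

```python
from dataclasses import dataclass, field

@dataclass
class PersonalDataStore:
    """Hypothetical personal data store: the individual grants and
    revokes purpose-specific consent, and a service can only read
    data for purposes currently allowed by the individual."""
    data: dict
    consents: set = field(default_factory=set)

    def grant(self, purpose: str) -> None:
        # The individual opts in to a specific use of their data.
        self.consents.add(purpose)

    def revoke(self, purpose: str) -> None:
        # Consent can be withdrawn at any time.
        self.consents.discard(purpose)

    def read(self, key: str, purpose: str):
        # Every access is checked against the individual's consents.
        if purpose not in self.consents:
            raise PermissionError(f"no consent for purpose: {purpose}")
        return self.data[key]

store = PersonalDataStore(data={"heart_rate": 62})
store.grant("personalised-medicine")
print(store.read("heart_rate", "personalised-medicine"))  # prints 62
store.revoke("personalised-medicine")
# A further read for "personalised-medicine" would now raise PermissionError.
```

In a real interoperable infrastructure the consent registry and the data would of course live behind authenticated protocols rather than in a single object, but the inversion of control is the same: services request purposes, individuals decide.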
See also Dataethics.eu's data ethics FAQ and principles
- Protecting and empowering children
AI feeds on data; personalised AI on data profiling. While most adults grew up in a fairly unmonitored environment, with limited public archives to evidence their different developmental paths into adulthood, childhood today is increasingly turned into data, data analytics and data monitoring. AI solutions can do good things for personalised learning. They can do good things for the inclusion of socially marginalised children or children with disabilities. And so on. But if introduced without a thorough data ethics assessment (not to mention legal tests), the risks are far greater than the benefits. One important issue addressed in the AI policy recommendations is the idea of an "untouched childhood". I've raised this as an area of critical concern, not just because I consider children a vulnerable group that needs special protection and the freedom to grow in a healthy, unmonitored environment, but also to safeguard the very future of Europe by considering the values invested in the technological aspects of European children's childhoods. We urgently need a more reflective approach to the introduction of data-intensive technologies in children's lives at home and in school. Recommendation 4.1 therefore says the following:
“Establish a European Strategy for Better and Safer AI for Children, in line with the European Strategy for a Better Internet for Children, designed to empower children, while also protecting them from risks and harm. The integrity and agency of future generations should be ensured by providing Europe’s children with a childhood where they can grow and learn untouched by unsolicited monitoring, profiling and interest invested habitualisation and manipulation. Children should be ensured a free unmonitored space of development and upon moving into adulthood should be provided with a “clean slate” of any public or private storage of data related to them. Equally, children’s formal education should be free from commercial and other interests.”
This also calls for enhanced vigilance about the various interests in the technology environments of children's education. Recently, a school teacher sighed and told me how she had suddenly found herself at a full-day marketing session for a new smart online tool she could use for teaching. The course was imposed on her by the headmaster, who had not seen the interests embedded in this "free" training course. I have previously worked for many years in close proximity to educators and educational systems in Europe (to be precise, 10 years in the Insafe network under the EC Better Internet for Kids strategy) and have myself experienced how big technology companies (yes, Facebook, Microsoft and Google in particular were very present here) were moving in on this space in various more or less transparent ways. One issue is the mismatch between commercial and educational purposes; the other is the data ethics. Most educators who use the smart educational tools introduced have no idea (how could they?) about the data profiles created on their students, for what purposes, and in whose interest. In addition to commercial data profiling, there is the increased use of various forms of public-institution and state data analytics and data profiling of children, used not just to improve children's learning and wellbeing at school, but often also to control and audit them and their families (e.g. this case, in Danish).
- A distributed ethics approach: A cross-disciplinary network of Trustworthy AI research
We need a distributed, cross-disciplinary ethics approach; one that is not siloed within one academic discipline or one all-embracing institution. Ethics research and awareness should be diffused and integrated into all disciplines related to the development of AI, from computer science and mathematics to philosophy, the humanities and the social sciences. I have supported the idea of a European research network for Trustworthy AI (rather than just one centre), as in recommendation 16.6: "Develop a cross-cutting network focused on Trustworthy AI across European universities and research institutions."
I followed a Twitter thread the other day discussing a major financial investment in a single ethics research centre. One concern raised was the centralisation of ethics research, which could potentially turn into a centralisation of interests. Although the investment went to a very credible, world-class research institution, I do share some of the concerns. Any concentration of resources holds the potential for a concentration of power, which could have consequences for the diversity and distribution of knowledge and ideas.
In addition, although ethics does indeed originally stem from philosophy, it is not exclusively a philosopher's task. Philosophy was a general framework for understanding human existence in the sciences before it became an academic discipline. Thus, ethical theories, critical ethical thinking, and ethical awareness and its application are core to several other academic disciplines, such as the humanities and the legal and social sciences, among others. It is therefore pivotal that we include and recognise the voices of all relevant disciplines that may contribute essential methodologies, theories and empirical research to the development of ethics awareness in the evolution of AI. For that reason, I have also supported the idea that the network of research centres should not only be an "ethics" network in the traditional sense of the word, but a Trustworthy AI research network covering all aspects of Trustworthy AI: not only the ethics, but also the law, the technology, the culture and the power dynamics.
What about the "ethics washing"?
A lot has been said about industry's role in the AI High Level Expert Group and in particular in the development of the ethics guidelines and the concept of "Trustworthy AI". At some point, the group's ethics guidelines were even described as "ethics washing", primarily because AI areas of high ethical risk changed title from "red lines" to "critical concerns" (no, the areas were not removed from the guidelines at the last minute; they just changed title, and the one on AI consciousness was put in a footnote). As I've argued before, it is incredibly important that we are vigilant about the different interests that shape these processes, especially at a time when stories about big tech's not very transparent lobbying of policy and investments in academic research emerge almost daily. However, we need to be observant not just of industry interests, but of interests in general that do not serve the human interest – companies', states', and even individual people's personal interests (see also: Making sense of data ethics). The power dynamics of processes like these are much more complex than what has, I would argue, been simplistically presented as a clear-cut division of interests between civil society and industry. Even within civil society there are individual power struggles, and of course within industry too. Personally, in this very process, I was never that concerned with the labelling ("red lines" or "critical concerns") of the critical areas of concern in the ethics guidelines, as long as they were there. I even argued against including the "AI consciousness" critical concern, as I think it diverts our attention from very present, real critical areas of concern (and because I also believe that only humans can have consciousness).
Massive citizen scoring, one critical concern mentioned in the ethics guidelines, is indeed a very problematic use of AI; the development of autonomous weapons is another extremely critical concern. These have continued to be subject matters of the High Level Group's work on the policy recommendations, along with a few additional ones, because, as I've also argued before, these are areas that need more than an ethics guideline: they need a reflective strategy, research, policy and a regulatory framework. I myself raised all the critical concerns in this last stretch of developing the policy recommendations and added a few new ones, vocally and in writing, during the process. And they are all there in the policy recommendations now. They are even mentioned under the label "red lines" (recommendation 29.3: "Institutionalise a dialogue on AI policy with affected stakeholders to define red lines and discuss AI applications that may risk generating unacceptable harms"), with reference to possible prohibition.
My hope now is that the concept of Trustworthy AI will become a real European investment, and that the EU and member states will follow through on the recommendations, in particular on the requirements and the new types of innovation that spring from a concept of Trustworthy AI in which the human interest is always at the centre.