BLOG: If we want to design ethical AI that benefits human evolution, we need a way of talking about it that respects our human values and quirks.

AI is everywhere. And nowhere. Because what do we actually mean when we talk about AI? Is it a sophisticated improvement on our outdated human software? Is it a possible sci-fi scenario, where a machine beyond human control outcompetes humankind? Or is it a commercial trade secret?

The way we talk about AI also defines what we think we can do with it and ask of it. In the current public debate, the words used to describe AI also justify specific ways of looking at the world – the ideologies and interests held by prominent people and companies. Here are a few musings:

Ray Kurzweil (Singularity) – Humans are machines: “Biology is a software process. Our bodies are made up of trillions of cells, each governed by this process. You and I are walking around with outdated software running in our bodies, which evolved in a very different era.”

Stephen Hawking – AI is a free agent: “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Larry Page – AI is a (Google) service: “Artificial intelligence would be the ultimate version of Google. The ultimate search engine that would understand everything on the web. It would understand exactly what you wanted, and it would give you the right thing. We’re nowhere near doing that now. However, we can get incrementally closer to that, and that is basically what we work on.”

I would argue that in spite of these figures’ (real or perceived) greatness in the field, they fail to address the core issue at stake in this debate: What does it mean to be human? And how do we actually preserve the human in this fast-paced AI evolution?

One might argue that they describe AI in a way that clouds our judgement and limits what we think we can do with AI as humans: If humans are just software, then of course we need an update. All software does. Doesn’t it? So just shut up. If the machines are our superiors, then it is already too late. We are doomed. Let it go. If AI is just one company’s great business adventure (a better search engine, a smarter health care solution, etc.), then it is also the greatest trade secret. So keep your nose out of it. All these ways of describing AI leave us powerless.

Before we can move on to a constructive discussion of the ethical implications of AI, we need to choose our words with care. The ethical design of AI that benefits human evolution needs a language that not only describes AI, but more importantly also takes its point of departure in what it means to be human.

I strongly believe that humans are not just data processing software. AI, on the other hand, is: by and large, man-made data processing systems that sift through our societies, defining, representing and reorganising culture, politics, our economy and our identities. Powerful indeed, but if we address them for what they are – man-made data processing systems – they are not unmanageable.

So let’s start from this:

Let’s respect ourselves for what we are: humans with specific qualities, not predictable software. We have will, creativity, unpredictability, intuition and consciousness – things that science still hasn’t been able to fully understand, so how can we even think that we can replicate them in a machine? (Just please read the French philosopher Henri Bergson.)

Let’s approach AI as man-made data processing systems that can be managed and directed, not as an uncontrollable free agent. (Please follow @FrankPasquale for invaluable insights on this perspective.)

And lastly, let’s treat AI as a shared good in society – not a trade secret, one company’s success and property.

And then there is a myriad of questions we might ask of AI, as individuals and as a community!

The Data Ethics AI Debate at DataEthicsForum 2017

John C. Havens and Katryna Dow visited us in Copenhagen for the last session of Data Ethics Forum 2017. We set the scene and talked about AI constructively, with a view to a future where humans are not only in the loop, but in control of the loop.

See their introductory talks here:

Keynote Katryna Dow, Meeco, European DataEthics Forum 2017 from DataEthics EU on Vimeo.

Keynote John C Havens, IEEE Global Initiative for Ethically Aligned Design of Artificial Intelligence and Autonomous Systems, European DataEthics Forum 2017 from DataEthics EU on Vimeo.

Does the A in AI stand for Artificial, or does it stand for Autonomous? Well, maybe neither. Perhaps it stands for “Anthro” (human)! Watch our debate.

For more information on what we are working on to develop an “anthro” evolution of AI see:

The IEEE Global Initiative for Ethically Aligned Design of AI and Autonomous Systems

The IEEE P7006 standard for Personal Data AI Agents