Ethics of AI: Building Technology that Benefits People and Society

Edited By – Akash Goel

Humans are intrinsically wired to resist new technology. For hundreds of years, technology has created tension between the need for innovation and the pressure to maintain continuity and social order.

The debates and concerns surrounding artificial intelligence, robotics, gene editing and other emerging technologies mirror the social and economic reasoning that fueled resistance to the printing press, farm mechanization, electricity and the automobile.

Despite centuries of protest, history has repeatedly shown that most technologies initially resisted evolve into many of the world’s most consequential innovations. Those once feared as destroying jobs or dulling human intellect often end up doing the exact opposite. As we now know, the cellphone is not only a tool for communication but a global conduit for banking, education, medicine, transportation, social engagement and more.

 

Rethinking our association with AI

Public perception of AI is mainly driven by Hollywood, thanks to decades of films about robots with mixed intentions: Metropolis (1927), Star Wars (1977), The Terminator (1984), Short Circuit (1986), I, Robot (2004), WALL-E (2008) and Ex Machina (2014).

Yet billions of people around the world already interact with AI on a daily basis through their phones and computers. AI powers maps and search engines (Google), voice assistants (Apple, Tencent), social networks (Facebook, Twitter), eCommerce (Amazon) and financial services (Visa, Stripe). The products these companies build for millions of people rely heavily on machine learning, natural language processing, computer vision and other components of AI.

As with the evolution of the cellphone, we need to widen our lens to see the true, beneficial potential of AI.

This means using technology to lessen poverty, improve nutrition, eradicate diseases like cancer, stop or reverse climate change and, most importantly yet most often forgotten, distribute resources and human rights more equally and fairly.

 

The foundation for the Singularity

People are understandably concerned, given that experts cannot agree on whether AI will benefit or harm society. Some also claim that the post-silicon era will begin in about 10 to 15 years, with silicon replaced by optical, quantum, DNA, protein or other technologies that will provide the next level of computing power for the infrastructure of AI.

Regardless, we must be actively involved now, because the foundation we lay today determines how AI behaves later. As with a child, if we fail to educate and train AI properly now, teaching it respect for humans, our environment and every living being, inclusion and diversity, moral and free will, and the consequences of any action taken, we will have difficulty compensating for that as time progresses.

On the other hand, consider the two decades since IBM Research’s Deep Blue beat humans at chess in 1997, followed by Google DeepMind beating humans at Go, then systems beating humans at poker, at Civilization and now at Dota 2. We can certainly appreciate the scientific challenge behind these feats. Yet every current attempt feels as if the AI is trained specifically to understand human weaknesses and find ways to beat us, rather than to help us understand the game better or to augment our own abilities.

 

Two key steps to applying ethical principles and moral values to AI:

  1. Involve and educate all sectors of society
  2. Define a unified set of values and principles that will guide the further development of AI

Unified Set of Principles & Values

A group within the Future of Life Institute (https://futureoflife.org/ai-principles/) came up with 23 AI principles spanning research issues, ethics and values, and longer-term issues. All of them are relevant and provide a framework to act upon, depending on the area in which AI is developed and applied.

In essence, it comes down to two things:

1)   Democratizing AI by educating as many people as possible about the impact and reality of AI technologies.

2)   Sharing the prosperity created with AI. This means prioritizing efforts that address the world’s most pressing challenges, including poverty, hunger and health.

Conclusion

Machines will reflect the values and principles of their creators and trainers, and they will act according to the goals they have been given.

There should be ethical design principles for all who develop and apply AI. They won’t guarantee concrete ethical values, but they would limit the harm that undirected intelligence could cause and push development toward beneficial AI. Therefore, as practitioners developing and applying AI, we should always answer the following questions:

  • What is really desirable about it?
  • For whom is it desirable?
  • For whom is it not desirable?

An AI-led future is as inevitable as those brought by electricity and farm mechanization. Stoking fear and reluctance toward AI is the opposite of what we should do; determining the fundamental cornerstones of our future society must be our key priority. We must educate and enable as many people as possible on AI technologies. Everyone should study the impact of AI and contribute to its ethical development and implementation. In the meantime, we must adapt to and embrace the new world of opportunities that technology enables.