Technology, however, is only one part of the equation. A very important question is: are people ready for the coming transformation? Do we have answers to the legal and ethical quandaries that will certainly arise from the increasing integration of AI into our daily lives? Are we even asking the right questions?
A panel of academics and industry thinkers has tried to look ahead to 2030 and forecast how advances in AI might affect life in a typical North American city. The goal: to spark discussion about how to ensure the safe, fair and beneficial development of rapidly advancing technologies such as AI.
“Artificial Intelligence and Life in 2030” is the first product of the One Hundred Year Study on Artificial Intelligence (AI100). This ongoing project, hosted by Stanford University, will inform debate and provide guidance on the ethical development of smart software, sensors and machines. Every five years for the next 100 years, the AI100 project will release a report evaluating the status of AI technologies and their potential impact on the world.
The AI100 Standing Committee is chaired by Barbara Grosz, Higgins Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences. She believes now is the time to consider the design, ethical and policy challenges that AI technologies raise: “If we tackle these issues now and take them seriously, we will have systems that are better designed in the future and more appropriate policies to guide their use.”
The report investigates eight areas of human activity in which AI technologies are already affecting urban life and will become even more pervasive by 2030: transportation, home/service robots, health care, education, entertainment, low-resource communities, public safety and security, and employment and the workplace.
Some of the biggest challenges in the next 15 years will be:
Creating safe and reliable hardware for autonomous cars and health care robots;
Gaining public trust for AI systems, especially in low-resource communities;
Overcoming fears that the technology will marginalize humans in the workplace.
Issues of liability and accountability also arise, raising questions such as: who is responsible when a self-driving car crashes or an intelligent medical device fails? How can we prevent AI applications from being used for racial discrimination or financial fraud? The report doesn’t offer solutions but is intended to start a conversation among scientists, ethicists, policy makers, industry leaders and the general public.