“The Driver in the Driverless Car” – An Interview with Vivek Wadhwa
Vivek Wadhwa is a Distinguished Fellow at Carnegie Mellon University’s College of Engineering and a Director of Research at Duke University’s Pratt School of Engineering. He is a globally syndicated columnist for The Washington Post and the author of The Immigrant Exodus: Why America Is Losing the Global Race to Capture Entrepreneurial Talent, which The Economist named a Book of the Year for 2012, and of Innovating Women: The Changing Face of Technology, which documents the struggles and triumphs of women in technology. Wadhwa has held appointments at Stanford Law School, Harvard Law School, and Emory University and is a faculty member at Singularity University.
We caught up with him to discuss his latest work, The Driver in the Driverless Car: How Our Technology Choices Will Create the Future.
Our future is a choice between Star Trek and Mad Max, as you say in the book. Can you give us a quick glimpse into each world?
As I say in the book, the future hasn’t happened yet; it will be what we make it. As humans we are going to have to rise to the challenge. My own view is that we can do it, because we must. The Mad Max world cannot become normalized. Industry disruption will happen. Tens of millions of jobs will disappear, and our lives will change forever. What it means to be human will change forever as well. But this isn’t all bad: we can also build the utopia of the TV series Star Trek, in which the purpose of our existence shifts from making money to sustain ourselves to seeking enlightenment and helping uplift humanity.
You frame the book with three questions. Why and how did you pick those questions? Are these questions that matter to our leaders?
The three questions that matter are (1) Does the technology have the potential to benefit everyone equally? (2) What are the risks and the rewards? and (3) Does the technology more strongly promote autonomy or dependence? Asking these three questions gives us a lens for assessing the value of a new technology in terms of its impact on society. These questions are interlinked, and the answers are not black and white. Nor are these all the questions we need to ask; they are just the three that I thought were most relevant given where we are today.
In the book you explain some of the transformational shifts that are happening with A.I., avatars, and personalized learning. Won’t these make us more unequal and create bigger divisions in our already polarized society?
The broad promise of this shift is breathtaking. We’re moving toward a technology-enabled era of learning in which every individual gets what he or she specifically needs and in which the pupils, with A.I. help, largely teach themselves.
These are not new concepts. Socrates also wanted his students to teach themselves. When avatars, A.I., and connected learning can radically improve the learning process through digitization and personalization, anyone in the world with an Internet connection will be able to gain access not only to information and coursework (as we can now) but also to a top-notch education. The children of the richest and of the poorest will learn using the same tools and the same A.I., just as the children of the richest and of the poorest use similar smartphones for communications and social media. When the role of human professionals shifts from broadcasting to guiding, the guides will be able to work with far more pupils, and to do it remotely, too. In fact, parts of this have been happening for years. British grandmothers have been teaching Indian kids using Skype. A number of Skype-based language and teaching businesses are operating right now.
There will always be benefits to physical presence, to being in the same room with fellow students and a teacher. But video-based learning and VR avatars can and will replace many in-class elements. What’s amazing, though, is that research showed more than a decade ago that crude versions of this approach work. And they work even for the poorest of the poor with exceedingly modest resources.
How are humans becoming data? What will this mean for our lives? How can we trust an A.I. physician when we can’t trust anyone (see the Edelman Trust Barometer!)?
This coming era of personalized health care not only will let us effectively treat previously mysterious conditions but also will offer medical laypeople far broader capabilities to diagnose and treat themselves.
A.I.-based tools will be able to provide patients with the information and judgment needed to interpret their own blood results, taking into account their genes and the latest advances in medicine. This is something that physicians themselves struggle to do today, as the body of medical knowledge is growing far more rapidly than doctors’ ability to assimilate new information. Interestingly, one of the recurrent objections to self-diagnosis is that patients can’t handle all the information and will be overwhelmed and confused. With shrinking testing costs and the availability of technology for self-guided medical care, the medical world will be forced to lift its game; and, in the shorter term, those savvy enough to understand this new type of medicine might be at an advantage.
We urgently need to fix the comprehensibility problem. Test results today are written in medical hieroglyphics; simple language could easily offer the average patient a clearer understanding and a better footing for asking relevant questions. That is part of what HealthCubed, a maker of medical diagnostic devices based in New Delhi, does: simplify the outputs and make healthcare products resemble consumer products in their simplicity and their cost.
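To make the comprehensibility point concrete, here is a minimal Python sketch of the kind of plain-language translation such a tool might perform. The test names, reference ranges, and wording are illustrative assumptions, not medical guidance and not HealthCubed’s actual output.

```python
# Illustrative sketch: translating raw lab numbers into plain language.
# Reference ranges and phrasing are hypothetical placeholders.

REFERENCE_RANGES = {
    # test: (low, high, unit, plain-language description)
    "fasting glucose": (70, 99, "mg/dL", "blood sugar after not eating"),
    "LDL cholesterol": (0, 100, "mg/dL", "the 'bad' cholesterol"),
}

def explain(test: str, value: float) -> str:
    """Return one plain-language sentence for a single lab result."""
    low, high, unit, note = REFERENCE_RANGES[test]
    if value < low:
        status = "below"
    elif value > high:
        status = "above"
    else:
        status = "within"
    return (f"Your {test} ({note}) is {value} {unit}, which is {status} "
            f"the typical range of {low}-{high} {unit}.")

print(explain("fasting glucose", 112))
# -> Your fasting glucose (blood sugar after not eating) is 112 mg/dL,
#    which is above the typical range of 70-99 mg/dL.
```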
What are your personal views on autonomous killing machines? (I’m still haunted by the Mechanical Hound in Fahrenheit 451)
The military supporters of autonomous lethal force argue that robots on the battlefield might prove to be far more moral than their human counterparts. A robot programmed not to shoot women and children would not freak out under the pressure of battle. There would have been no My Lai massacre if robots had been in charge, they say. Furthermore, they argue, programmatic logic has an admirable ability to reduce a core moral issue to a binary decision. For example, a robot might decide in a second that it is indeed better to save the lives of a school bus full of children than the life of a single driver who has fallen asleep at the wheel.
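To show what that reduction to a binary decision can look like, here is a deliberately crude Python sketch; the options, casualty estimates, and minimize-harm rule are assumptions made up for illustration, not anyone’s actual targeting or driving logic.

```python
# A deliberately crude sketch of "moral logic as a binary decision".
# The options and casualty estimates are hypothetical; no real system
# reduces ethics to a lookup this clean.

def choose_action(options: dict[str, int]) -> str:
    """Pick the option with the lowest estimated casualties."""
    return min(options, key=options.get)

options = {
    "swerve_away_from_bus": 1,   # estimated casualties: the lone driver
    "stay_on_course": 20,        # estimated casualties: a bus of children
}
print(choose_action(options))  # -> swerve_away_from_bus
```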
I’m also a bit cynical: I don’t think the public really cares much about whether robots will be allowed to kill people, the notion seeming too abstract. The American public has never taken much interest in whether drones should be equipped for autonomous kill shots. In fact, the public has taken little interest in the question of robots being used to kill people even in the United States. In Dallas, police used a bomb carried on a robot to kill Micah Johnson, the shooter who had killed five officers at a protest rally. Few questioned this use of force. And the first use of autonomous robots on the battlefield would likely be far away, as were the battlefields on which drones first killed, in Afghanistan and Pakistan.
The Open Roboethics initiative is advocating an outright ban on autonomous lethal robots, a call echoed by nearly every civil rights organization and by many politicians. The issue will play out over the next few years. It will be interesting to see not only what final decision comes from world governance bodies such as the United Nations but also the decision of the U.S. military establishment and its willingness to sign an international accord on the matter.
Will robots eat my job? In societies with over 25% unemployment, we see violent insurrection, failure of law and order, and even famine. How can we survive this shift without a breakdown in civilization?
Humanity as a whole can benefit from having intelligent computer decision makers helping us. A.I., if developed correctly, will not discriminate between rich and poor, or between black and white. Through smartphones and applications, A.I. is more or less equally available to everyone. The medical and legal advice that A.I. dishes out will surely turn on circumstance, but it won’t be biased as human beings are; it could be an equalizer for society. So we truly can share the benefits equally.
That is the good thing about software-based technologies: once you have developed them, they can be inexpensively scaled to reach millions, or billions.
In fact, the more people who use the software, the more revenue it produces for the developers, so they are motivated to share it broadly. This is how Facebook has become one of the most valuable companies in the world: by offering its products for free, and reaching billions.

In considering benefits, we may make the mistake of forgetting that an A.I., no matter how well it emulates the human mind, has no genuine emotional insight or feeling. There are many times when it is important that somebody performing something we classify as a job be connected emotionally with us. We are known to learn and heal better because of the emotional engagement of teachers, doctors, nurses, and others.
The good news is that engineers and policy makers are working on regulating A.I. to minimize the risks. The tech luminaries who are developing A.I. systems are devising safeguards such as kill switches and discussing ethical guidelines. The White House has hosted workshops to help it develop possible policy and regulations, and it has released two papers offering a framework for how government-backed research into artificial intelligence should be approached and what those research initiatives should look like.
The central tenet is the same as my book’s: that technology can be used for good and evil and that we must all learn, be prepared, and guide it in the right direction.
I found it particularly interesting that the White House acknowledged that A.I. will take jobs but also help solve global problems. The report concluded: “Although it is very unlikely that machines will exhibit broadly applicable intelligence comparable to or exceeding that of humans in the next 20 years, it is to be expected that machines will reach and exceed human performance on more and more tasks.”
How should marketers start to prepare themselves, and their companies, in this age of continuous disruption?
What happens when your refrigerator talks to your toothbrush, your gym shoes, your car, and your bathroom scale? They will all have a direct line to your smartphone and tell your digital doctor whether you have been eating right, exercising, brushing your teeth, or driving too fast.
I have no idea what they will think of us or gossip about; but I know that many more of our electronic devices will soon be sharing information about us, with each other and with the companies that make or support them.
The Internet of Things (I.o.T.) is a fancy name for the increasing array of sensors embedded in our commonly used appliances and electronic devices, our vehicles, our homes, our offices, and our public places. Those sensors will be connected to each other via Wi-Fi, Bluetooth, or mobile-phone technology. Using wireless chips that are getting smaller and cheaper, the sensors and tiny co-located computers will upload collected data via the Internet to central storage facilities managed by technology companies. Their software will warn you if your front door is open, if you haven’t eaten enough vegetables this week, or if you have been brushing your teeth too hard on the left side of your mouth. The I.o.T. will be everywhere, from heart-rate monitors in your watches to breathing monitors stitched into your child’s pajamas. It will help us learn from our behaviors, manage our environment, and live a richer life.
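As a minimal sketch of that pattern, assuming hypothetical device names, metrics, and thresholds, the rule-checking layer might look something like this in Python:

```python
# Minimal sketch of the I.o.T. pattern described above: devices report
# readings, and central software applies simple rules and raises warnings.
# Device names, metrics, and thresholds are hypothetical.

from dataclasses import dataclass

@dataclass
class Reading:
    device: str   # e.g. "front_door", "toothbrush"
    metric: str   # e.g. "open", "left_side_pressure"
    value: float

def check(readings: list[Reading]) -> list[str]:
    """Apply simple rules to the latest readings and return warnings."""
    warnings = []
    for r in readings:
        if r.device == "front_door" and r.metric == "open" and r.value == 1:
            warnings.append("Your front door is open.")
        if (r.device == "toothbrush" and r.metric == "left_side_pressure"
                and r.value > 0.8):
            warnings.append("You are brushing too hard on the left side.")
    return warnings

print(check([Reading("front_door", "open", 1),
             Reading("toothbrush", "left_side_pressure", 0.9)]))
```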
But there is a really dark side to this machine vigilance. The Internet of Things will offer unprecedented spying possibilities, from the insurance company monitoring how you drive by using an accelerometer device in your car (which insurance giant Generali is already doing, under a scheme it calls Pago como conduzco, “Pay as I drive”!) to the little Samsung Paddle placed under your pillow that records your sleep cycles and vital signs, to the camera in your TV that gets hacked and allows people to watch you.
Marketers are going to have to understand how technology impacts the everyday lives of their customers and where to draw the line. The Wild West seems tame by comparison.
Our government has seemingly abdicated all responsibility for global warming. Do you think they will step up to face the challenges you highlight in the book? What must be done?
Increases in government regulation are rarely productive and often harm innovation. But it may be prudent to expand the equipment authorization program of the FCC, which requires the testing of radio-frequency devices used in the United States to ensure that they operate effectively without causing harmful interference and that they meet certain technical requirements. In the future, these requirements could include the encryption of data and other security safeguards. This is particularly important given that our Internet of Things devices are mostly manufactured in China. The security holes could allow snooping on an unprecedented level, in homes as well as offices.
I have another radical thought: What if we mandated that businesses create systems that allow customers to control their own data, to see what is being collected, and to be alerted when those data are stolen?
This has long been a pipe dream of privacy activists and an ideal of defenders of electronic civil society such as the Electronic Frontier Foundation. But we are actually tantalizingly close to having the capability to create such a system. My colleagues at Stanford Law School, along with many others, have been researching how this would work. Roland Vogl, who heads CodeX, the Stanford Center for Legal Informatics, envisages a system that will allow people to manage and analyze all of their structured data, including those generated by Internet of Things devices. End users will connect their devices to a “personal dashboard,” through which they will be able to monitor and control their data. They will select which data can be shared and with which companies. Vogl says there are already some implementations of these technologies, such as OpenSensors and the Wolfram Connected Devices Project. The solutions aren’t difficult. We just need the motivation, regulation, and coordination.
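The core of such a dashboard is a consent check. Here is a hypothetical Python sketch of that idea; the class, method names, and stream labels are illustrative assumptions, not a description of how OpenSensors or the Wolfram Connected Devices Project actually work.

```python
# Hypothetical sketch of a "personal dashboard": the user decides which
# data streams each company may see, and every share is consent-checked.

class PersonalDashboard:
    def __init__(self) -> None:
        # {company: set of data streams the user has approved}
        self.consents: dict[str, set[str]] = {}

    def grant(self, company: str, stream: str) -> None:
        self.consents.setdefault(company, set()).add(stream)

    def revoke(self, company: str, stream: str) -> None:
        self.consents.get(company, set()).discard(stream)

    def may_share(self, company: str, stream: str) -> bool:
        return stream in self.consents.get(company, set())

dash = PersonalDashboard()
dash.grant("insurer", "driving.accelerometer")
print(dash.may_share("insurer", "driving.accelerometer"))  # True
print(dash.may_share("insurer", "sleep.heart_rate"))       # False
```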
The alternative, in today’s wild, wild west of Internet of Things development, is a runaway increase in security nightmares. It will be better to set the standards now and ensure a safer cyber world for our children and ourselves than to try locking the door once all the wrong people already have our data. This again is where you must be involved: we need the public to demand these protections. But first we must understand the key issues. You can also make the same choice I have: until I am convinced that there is enough security, I am not going to buy an I.o.T. home device.
Whew, I hope we wake up. Thanks for your time.
INTERVIEW by Christian Sarkar.