Leadership in the Time of AI

Fritzky Chair James Whittaker on the inevitability of artificial intelligence: will you profit or perish?

The Foster School has never seen a Fritzky Chair like James Whittaker. The visionary engineer, acclaimed author and irreverent raconteur comes to Foster by way of Google and Microsoft, where he currently serves as distinguished engineer and technical evangelist. As the Edward V. Fritzky Visiting Chair in Leadership for the 2018-19 academic year, Whittaker views himself as a kind of high-tech prophet, come to ready the school for artificial intelligence and its inevitable transformation of lives and livelihoods around the world. Foster Business asked him to elaborate.


What keeps you up at night?
James Whittaker: Knowing that we are the first generation of humans smart enough to build artificial life more intelligent than we are, and the first generation stupid enough to actually do it. I lie awake at night trying to figure out how we can get this right.

How do you define artificial intelligence?
Traditional software is deterministic. Given data, it does exactly what it is programmed to do. Its code paths and output are predictable. AI is nondeterministic. Given data, it learns what to do based on a model of behavior. The more data it gets, the more it learns. We can program traditional software. AI, we simply supervise. It’s a fundamentally new way to compute.
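To make that distinction concrete, here is a minimal sketch in Python: the same task handled by a deterministic rule a programmer wrote, and by a model that learns the rule from examples. The spam-filter framing, the tiny dataset and the use of scikit-learn are assumptions for illustration, not anything from the interview.

```python
# Illustrative sketch: the same task handled two ways.

# Traditional software: the programmer writes the rule; the behavior is fixed.
def is_spam_rule(message: str) -> bool:
    return "free money" in message.lower()

# AI-style software: the rule is learned from labeled examples.
# Assumes scikit-learn is installed; the four training messages are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

messages = ["free money now", "win free money today",
            "meeting moved to noon", "lunch tomorrow?"]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)  # reduce text to data (word counts)
model = MultinomialNB().fit(features, labels)  # learn a model of behavior from that data

def is_spam_learned(message: str) -> bool:
    # The behavior came from data; retrain with more examples and it changes.
    return bool(model.predict(vectorizer.transform([message]))[0])

print(is_spam_rule("FREE MONEY inside"), is_spam_learned("FREE MONEY inside"))
```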

How does AI work?
By removing the human user and their decision-making and replacing them with machine automation. Let’s take the self-driving car to illustrate. First, think about how you’d teach your kid to drive. The first part is mechanics: how to operate the brakes, the steering wheel, the turn signals, etc. The second stage is awareness: what to look out for when you’re driving and how to avoid breaking any traffic laws. Like your kid, a self-driving car is trained to master the mechanics and awareness, using sensors to see and hear the environment around it and to react to what it encounters. But here’s where self-driving cars really leave humans behind: they can communicate with every other autonomous car through the “Internet of Things.” When one of them learns something, they all learn something. Humans can’t compete. Machines will be much better drivers.
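A minimal sketch of that fleet-learning idea, assuming a toy shared "hazard map" rather than any real vehicle system: when one car records a hazard, every car in the fleet knows about it. The class names and road segment are invented for illustration.

```python
# Illustrative sketch (names and data structures are assumptions, not a real vehicle API).

class SharedKnowledge:
    """What the whole fleet has learned; every car reads from the same object."""
    def __init__(self):
        self.hazards = set()  # e.g., road segments where cars had to brake hard

class Car:
    def __init__(self, car_id: str, knowledge: SharedKnowledge):
        self.car_id = car_id
        self.knowledge = knowledge

    def encounter_hazard(self, road_segment: str) -> None:
        # One car's experience becomes every car's knowledge.
        self.knowledge.hazards.add(road_segment)

    def plans_to_slow_down(self, road_segment: str) -> bool:
        return road_segment in self.knowledge.hazards

# One car hits black ice; a car that has never driven that road now knows to slow down.
fleet_knowledge = SharedKnowledge()
car_a, car_b = Car("A", fleet_knowledge), Car("B", fleet_knowledge)
car_a.encounter_hazard("I-5 northbound, mile 164")
print(car_b.plans_to_slow_down("I-5 northbound, mile 164"))  # True
```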

That doesn’t sound so scary.
Most AI scenarios aren’t scary. The scary part comes once machines learn enough to have minds of their own. If those cars ever tire of driving us… what then?

So that is scary. Could that actually happen?
Well, we’re conscious, aren’t we? Truth is, we aren’t certain exactly how humans became conscious. At some point in our evolution, our brains got complicated enough that we evolved a conscious mind. AI brains grow in a similar manner. If it is only complexity that spawns consciousness, then yes, machines could develop their own will and create their own wants and desires.

Of course, this is still in the realm of theory. In fact, it’s my area of inquiry right now. But if machines become conscious, it will likely be learned, not programmed, which means we won’t be able to directly control it.

Why is AI inevitable?
There is so much data in the world—people were generating 10 exabytes of data per day in 2015, and an exabyte is a billion gigabytes—that there’s no way that humans or even software can handle it all. Only AI can. On top of that, AI promises enormous efficiencies in every business and in every facet of life. For instance, a swarm of tiny connected robots could keep your house clean. Micro sensors in your body could monitor your nutrition and health. On the downside, if software was good at killing blue-collar jobs, AI is going to be good at killing both blue- and white-collar jobs.

How can businesses survive in the age of AI?
By rethinking all our software processes from an artificial intelligence point of view. The currency of AI is data. We need to reduce all our business processes to data and develop sensors and devices that generate this data. And then we need to teach our machines to learn from the data. Think about autonomous cars: they’re not programmed to drive themselves, they’re trained. We need to train our businesses to drive themselves. The people who do this best and most thoughtfully are the ones who are going to thrive in the economy of the future.
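As a rough sketch of “reduce a process to data, then train on it”: below, a shipping decision a person used to make is recorded as rows of data and a model is trained to make the call. The order-delay scenario, the feature columns and the scikit-learn model choice are invented for illustration.

```python
# Illustrative sketch (the scenario and features are made up): a business process
# reduced to rows of data, then a model trained to flag risky orders.
from sklearn.tree import DecisionTreeClassifier

# Each row is one past order: [distance_km, package_weight_kg, warehouse_backlog]
orders = [
    [12, 1.0, 3],
    [250, 8.5, 40],
    [30, 2.0, 5],
    [400, 12.0, 55],
]
arrived_late = [0, 1, 0, 1]  # what actually happened, captured as data

model = DecisionTreeClassifier().fit(orders, arrived_late)

# The trained model now makes the call; more historical data makes it better.
print(model.predict([[300, 9.0, 50]]))  # e.g., [1]: flag as at risk of arriving late
```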

How can individuals survive in the age of AI?
Develop creativity. Be innovative. If something can be reduced to data, it’s going to be done by a machine far better than by any human being. Creativity can’t be so easily reduced to data. It’s the process of turning nothing into something. I’m not saying that machines will never learn creativity, but it’s one of the last things they will learn.

Is this transformation going to hurt?
In the 1990s, the transition from a world powered by humans to a world powered by software shook the foundations of our society. Companies that got ahead of it succeeded. Companies that fell behind it went to their graves. The same will be true when society shifts from traditional software to AI. Some companies are going to rise and others are going to fall. It’s just that the stakes are much higher now. If we don’t think through AI, we’re in trouble. And this isn’t just me warning about the future. This is Bill Gates, Elon Musk, Stephen Hawking. It’s a fundamentally new species that we’re going to be dealing with, one that’s smarter and more capable than we are. It changes the game so fundamentally that humans may not even be a part of the equation if we’re not careful.

Why is it so important to address the impact of a technology that doesn’t fully exist?
We failed to think through the software revolution. We ceded our future to technologists like Gates, Mark Zuckerberg, Larry Page and Steve Jobs. These individual humans made a bunch of decisions about how we all live today, for good and bad. They gave us productivity and security vulnerabilities. They gave us search and sold results to the highest bidder. They gave us the power to share… fake news and cyberbullying. This time, business leaders need to help drive us forward and not leave it to the tech industry alone. The more people who understand AI and its legal, ethical and moral ramifications, the more voice we will have in how we live in the future.

We’re heading into unexplored territory where there is huge opportunity but also serious risks to humanity. The ship has already sailed, and there’s no bringing it back into harbor. We need to steer it the best we can.