Peter Norvig is a legendary A.I. researcher who co-authored Artificial Intelligence: A Modern Approach, the leading textbook in the field, and co-taught an online class on A.I. that enrolled 160,000 students and helped kick off the current wave of massive open online courses. He was previously the head of Google’s core search algorithms group and NASA’s senior computer scientist.
Peter is a featured speaker in a fireside chat at the annual BayBrazil conference on August 22 at SRI International in Menlo Park, which will bring together leaders from Brazil and the U.S. to discuss science, technology and innovation. Our team sat down with him to get his point of view on where machine learning is headed, how it will affect the future of work and society, the current shortage of A.I. experts and the best approach to training the field’s leaders of tomorrow in Brazil and beyond.
There are still some tickets available to join us for BayBrazil’s 8th annual conference here.
BayBrazil: Back in 1956, scientists John McCarthy and Marvin Minsky proposed that a “two-month, 10-man study of artificial intelligence” would unlock the secrets of the human mind. More than six decades later, A.I. researchers still struggle with how to replicate the problem-solving capabilities of an adult human mind. What are some of the complexities involved in making general artificial intelligence a reality in the future?
Peter: Yes, in the early days there were proposals about being able to solve all of A.I. in a matter of months. I don’t think we should look at it as if the founders of the field were naive and ignorant and didn’t know how hard it was. I think it was more like: they were proposing the maximum amount that funders would pay for. If we ask funders for a billion dollars a year for 50 years, most researchers know the answer would be no, so instead we ask for a small amount at a time and see how far we can get.
A.I. is really the study of decision making under uncertainty, and so A.I. does its best when it has lots of examples to draw on. If you have millions of cat pictures, then it’s really good at identifying cats. If it can play millions of games of chess, it becomes really good at playing chess. The hard part is when there is no common experience to draw upon, and many of those things are the everyday, general types of activities, not the specialized ones. It’s actually easier for us to build an A.I. system that’s an expert radiologist and can read X-rays and diagnose diseases as well as or better than a doctor than it is, say, to make a robot dishwasher.
The reason is that all X-rays are similar: they come from similar devices and they show similar things. Whereas everybody’s kitchen is different and everybody’s individual dishes are different, and it’s hard to adapt to that. These general, everyday tasks are harder because there’s more variety in them, and when we make something technical we make it more specialized, more exacting. Those are the things that are easy for A.I. The things that seem easy are often the things that are harder.
BayBrazil: There’s been a lot of buzz around self-driving cars, but many experts in the field now acknowledge it will likely be many years before we can remove humans from behind the steering wheel. Have there been recent advances that might speed up this process, and how long do you think it will be before autonomous vehicles are generally available?
Peter: On one hand, self-driving cars seem like an easy target because humans make lots of mistakes. They get drunk and drive, for example. More than one million people a year die in car accidents and the vast majority of those are due to human error. We should be able to eliminate that.
On the other hand, maybe we’re not so bad. A death occurs really just once in about 10 million miles of driving. The leading self-driving car companies have driven just about that amount; somewhere around that 10 million-mile mark. But, there’s just not enough data to show that they’re better at avoiding deaths yet because you would expect zero or one death from the amount of driving that they’ve done so far. We need 10 or 100 or 1,000 times more experience before we can show for sure that it’s better.
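To make the statistics behind that argument concrete, here is a rough back-of-the-envelope sketch in Python. It treats fatalities as a Poisson process and uses the one-death-per-10-million-miles figure cited above; the mileage values and the model itself are illustrative assumptions, not a calculation from the interview.

```python
import math

# Back-of-the-envelope sketch (not from the interview): treat fatalities as a
# Poisson process and use the one-death-per-10-million-miles figure cited above.
HUMAN_RATE = 1 / 10_000_000  # fatalities per mile (figure cited in the interview)

for miles in (10_000_000, 100_000_000, 1_000_000_000):
    expected = HUMAN_RATE * miles   # deaths expected at human-level safety
    p_zero = math.exp(-expected)    # Poisson probability of observing zero deaths
    print(f"{miles:>13,} miles: expect {expected:6.1f} deaths; "
          f"P(zero deaths even at human-level safety) = {p_zero:.2e}")
```

At the roughly 10-million-mile mark, about one death would be expected even at human-level safety, so a clean record by itself proves little; only with 10 or 100 times more mileage does a zero-fatality record become strong statistical evidence, which is the gap Norvig is pointing at.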
By that definition, there can’t be a single breakthrough that immediately proves they’re better; it can only happen over the course of a long time spent trying to figure that out. Now, I think we are on a good path. One thing that has been good recently is the commoditization of some of the required technologies. But one big problem that still remains is simply the expense.
I’ve talked to many car manufacturers and they’ve said: “Self-driving cars are great, and we’d be willing to invest a couple of thousand dollars’ worth of hardware to make our cars able to do that.” Yet the companies developing and testing the new technology are thinking more in terms of tens of thousands of dollars’ worth of hardware for specialized cameras, LIDAR range-sensing devices and so on. That’s still a big gap.
While there’s still a big gap, I think we’re closing in on it. We’re starting to see LIDAR sensors that are 10 or 100 times cheaper than the older versions. We’re also seeing very cheap, very small radar and similar sensing devices that can augment the cameras. I think that will be important.
If you compare this to an earlier revolution, the computer industry of the 1960s, the first IBM mainframes cost millions of dollars. But how many of those could they really sell? How much could they affect our everyday lives? The answer is that they had a relatively small impact on society at first, but as soon as everything got 100 or 1,000 times cheaper, we suddenly had PCs on everybody’s desk. That made a huge difference. It wasn’t that there was one big breakthrough in software; it was that we were able to deploy computers in a much broader way. I think we’re on that path for self-driving cars today.
BayBrazil: What are some of the most intriguing examples of pairing up humans and machines to do something that neither can do on their own today?
Peter: This idea of pairing humans and machines is really interesting, and I think we’re seeing more and more of it. If you look back to, say, the 1980s, we expected that A.I. was going to replace humans. Now we’re thinking more along the lines that A.I. is an augmentation, and there are different and new ways of pairing or partnering between humans and machines.
For a while, there was a lot of talk of doing that for chess. There were tournaments where humans could be in charge of a team of programs. Those teams of a human paired with a computer were better than any individual human or any individual computer. That was true for a number of years. We called these centaurs because centaurs are half-human and half-horse.
Then AlphaZero came along, and a single computer could wipe out these centaur teams along with everybody else. So that superior man-machine combination was short-lived. I think that was easy to predict: whatever the humans were contributing to centaur teams in terms of higher-level strategy was not a hard skill for a computer to develop. The reason it’s not hard is that chess is a really constrained game. You know exactly what the possible moves are. It’s a literal black-and-white game, and that’s what computers excel at.
A more recent example of a powerful man-machine combination is in intelligence operations: defending against things like cyber attacks, terrorist attacks or military attacks. In this setting, there are only tens or hundreds of past examples of the kinds of things we’re trying to protect against, out of millions of possible attacks.
The human knows the history of those and has formed some generalization, but there just isn’t enough data for a computer to form a good model. Yet, computers are really good at looking at everything. They’re good at monitoring every port and every access point coming in. While the humans are good at saying: “That’s funny. Let’s examine that a little bit more.”
Then, the human and the machine go back and forth. The machines do what they’re good at: sorting through and presenting data insights. The humans do what they’re good at like saying: “Hey, I never thought about it before, but if I were the bad guy, here is a particularly new kind of attack that nobody has ever seen before, but maybe they’re trying this new thing. Let’s try to figure out if that’s what’s actually going on.”
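As a purely illustrative sketch of that division of labor, the machine’s half of the pairing might look something like the Python below: scan every record, compare it against a baseline of normal activity, and hand a short list of anomalies to a human analyst. The log format, baseline and “never seen before” rule are hypothetical assumptions, not anything Norvig describes in detail.

```python
from collections import Counter

# Illustrative sketch only: the log format, baseline and "never seen before"
# rule are hypothetical, chosen to show the division of labor, not a real tool.
baseline_log = [  # (source address, destination port) pairs from a normal week
    ("10.0.0.5", 443), ("10.0.0.5", 443), ("10.0.0.7", 22),
    ("10.0.0.9", 443), ("10.0.0.7", 22), ("10.0.0.5", 80),
]
todays_log = [
    ("10.0.0.5", 443), ("10.0.0.7", 22), ("203.0.113.4", 3389),
    ("10.0.0.5", 443), ("203.0.113.4", 3389), ("10.0.0.9", 6667),
]

usual_ports = {port for _, port in baseline_log}
usual_sources = {src for src, _ in baseline_log}

# The machine's half: exhaustively check every record against the baseline.
review_queue = Counter()
for src, port in todays_log:
    if port not in usual_ports or src not in usual_sources:
        review_queue[(src, port)] += 1

# The human's half: inspect the short list and ask whether it is a new kind of attack.
for (src, port), count in review_queue.most_common():
    print(f"review: {src} -> port {port} ({count} events)")
```

The machine does the exhaustive part, checking every event; the short review queue is where the human’s “that’s funny” judgment comes in.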
BayBrazil: There’s been concern about what many people have referred to as an “A.I. skills crisis.” One report at the end of 2017 estimated there were about 300,000 A.I. professionals worldwide, but millions of roles available. How can we close that gap?
Peter: I talk to a lot of startup companies. That’s one of the first things they always complain about. They say: “We want to go out, and we want to hire a top Ph.D. in A.I. from Stanford or M.I.T. or one of those good schools, but we can’t because Google and Facebook already got them all. What are we going to do?”
One of my colleagues has a good analogy for that. She says: “If you were opening a restaurant and you told me, ‘What I really need now is a Ph.D. in stove design from M.I.T.,’ I’d say, ‘No, you don’t need that. What you need now is a chef who knows how to cook tasty food, and the chef will tell you what stove to buy. You don’t need to design a new stove.’”
I think that’s where we are in A.I. today. In terms of the skills gap, we have enough Ph.D.s who are inventing new algorithms, techniques and tools. The real gap now is in the number of people who know how to use these tools. That is a different type of skill, and some of it is industry-specific.
I don’t think we’re really in a worse position with A.I. than we were with the introduction of any other similar type of technology. I’m sure that there was, say, a bulldozer skills gap when bulldozers were first introduced, and nobody knew how to drive them. We learned how to fill that gap. We’ll do the same for A.I.
BayBrazil: The public education system in Brazil is deficient when compared to those of countries such as China, Germany, Japan, the U.K. and the U.S. What do you think is the best approach to making quality A.I. education more accessible in countries like Brazil? A.I. seems like a leapfrog technology that is critical for more Brazilians to learn.
Peter: Certainly, you can send some people to schools in other countries, and they can come back and bring with them what they learned. I spent some time working on online classes, in large part to help people in countries or situations where they didn’t have access to education in the traditional way. I think we’ve developed a pretty good solution that works well in certain subject areas, particularly S.T.E.M. or technical subjects where there are more black-and-white, right-or-wrong answers. These online classes work for highly motivated, self-starting people. We can now reach the type of person who may never before have had access to this kind of top-level education. That works well.
However, online classes often don’t provide one-on-one interaction with a teacher or other mentor, and that turns out not to be an adequate solution for the person who is less self-motivated or unsure about what they want to focus on in the field. All the information can be there, but if you’re not motivated to study and work hard, it’s not going to help you. So far, we have only a partial solution for making quality education more accessible in countries such as Brazil. We can certainly train a lot of the top students who are very self-motivated, but we’re not going to reach everybody.
BayBrazil: What do you think will help attract more young people to enter the A.I. field?
Peter: I do think having someone to coach and motivate young people is an important aspect. Letting them know that they can help change the world in great ways is motivating to many young people. I see that more and more with my own daughters. It’s not just that they want to have a good job and a comfortable life. It’s that they want to do something good to change the world. And, showing what A.I. can do in that way, I think, is important.
Then, I think the next step is really to show how relevant A.I. is. A lot of people are exposed to computer technology and see how useful it is. They’re carrying a phone around with them all the time and using all of these different apps. But a lot of times they don’t really understand where that comes from, or that it’s relevant to them, or that they could participate in it. They know they can participate as users, but they don’t often think: “This is the kind of thing that I could make, or if I wanted to make something new, I could do that.” I think that’s the next step.
Hear more from Peter Norvig on Aug 22, when he will join the BayBrazil annual conference for a chat with Vicente Silveira, BayBrazil Chairman.
Details at https://www.