Intel subsidiary

Mobileye CEO on solving the self-driving car problem

Amnon Shashua, co-founder of Mobileye, at the Jerusalem office of his second high-tech company, OrCam. If a self-driving car gets into an accident, is the hard drive to blame? Source: Reuters

Even before Intel completed its $15 billion purchase of Israeli sensor and chipmaker Mobileye in August, the company was the talk of the town in the car industry. Mobileye had already struck a deal with BMW and Intel in 2016 to develop self-driving cars and bring them to market by 2021, and it has been working with many other big names in the car industry, including VW, Audi, Ford and Renault-Nissan.

Founded by Amnon Shashua in 1999, Mobileye developed a special camera technology that comes installed in most new cars. The cameras measure the distance to the nearest obstacles and use the car’s speed to calculate whether an accident is likely; alarm signals then help prevent collisions. Carmakers see it as crucial technology for autonomous cars that let passengers, including the person behind the wheel, relax, watch a movie or take a conference call.
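The underlying principle can be illustrated with a simple time-to-collision check: divide the gap to the obstacle ahead by the closing speed, and warn when the result falls below a reaction-time threshold. The sketch below is a minimal illustration of that idea, not Mobileye’s actual implementation, and the two-second threshold is an assumed value:

```python
def time_to_collision(gap_m: float, own_speed_mps: float,
                      obstacle_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    closing_speed = own_speed_mps - obstacle_speed_mps
    if closing_speed <= 0:
        # The gap is constant or opening: no collision on current course.
        return float("inf")
    return gap_m / closing_speed


def should_warn(gap_m: float, own_speed_mps: float,
                obstacle_speed_mps: float,
                threshold_s: float = 2.0) -> bool:
    # 2 s is an illustrative threshold, roughly a driver's reaction
    # time plus a margin; production systems tune this carefully.
    return time_to_collision(gap_m, own_speed_mps,
                             obstacle_speed_mps) < threshold_s


# Example: a 30 m gap, own car at 20 m/s, car ahead at 12 m/s:
# TTC = 30 / 8 = 3.75 s, so no alarm is raised yet.
print(should_warn(30.0, 20.0, 12.0))  # False
```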

Mr. Shashua, a professor of computer science at the Hebrew University of Jerusalem, is still Mobileye’s boss. He sat down with WirtschaftsWoche, a sister publication of Handelsblatt, to discuss the challenges self-driving cars must overcome before they are widely adopted. He believes better regulations and standards are needed to make clear who bears the blame when accidents happen.

WirtschaftsWoche: Mr. Shashua, sceptics like VW CEO Matthias Müller have already declared that the self-driving car is nothing but hype. Customers are also apparently not prepared to let algorithms drive them from A to B. Your growth story is based on the expectation that autonomous vehicles will soon be reality. Did you miscalculate?

Mr. Shashua: I don’t know how one could make that assessment. However, I can tell you exactly what we plan to do, and even exactly when we will do it: Together with our parent company Intel and our partner BMW, we will be offering fully autonomous vehicles with the highest levels of autonomy, 4 and 5, starting in 2021. This means that you will no longer be driving yourself. Instead, you can read or write e-mails while sitting in a moving car.

Audi has only recently started selling a vehicle at level 3, which means that the driver must still be able to intervene if necessary. Most manufacturers are not even naming any specific dates for fully autonomous cars. Are you too ambitious?

Of course, the last percentage points on the way to 100 percent autonomy are the most difficult. But all the technical ingredients for a completely self-driving car are there. In your Audi example, which uses Mobileye technology, by the way, a traffic-jam pilot relieves the driver of the tiring task of driving in stop-and-go traffic.

The autopilot is currently only allowed at speeds of up to 60 kilometers per hour (37 mph).

We are working on the next step with Japanese and US manufacturers. As early as 2019, we will be driving autonomously in their countries at highway speeds.

You recently expanded your long-standing development partnership with BMW to include Fiat-Chrysler. Do you intend to attract other manufacturers to your platform?

Absolutely, if the technology and the markets are a good fit for us. And there are a number of such cases.

What are they?

I can’t name any names yet. However, we are in very intensive talks with seven other car manufacturers, and we will soon be welcoming some of them as new partners. It is up to each customer whether to join the BMW platform or enter into a bilateral partnership with Mobileye/Intel.

What are the biggest challenges on the road to the autonomous car in a network like that?

The real challenge is not of a technological but of a legal and social nature. The regulatory authorities play a key role here. We have noticed that the authorities in most countries are determined not to stand in the way of technical development, so they issue only a few simple standards. However, we need not fewer regulations and standards, but more, and more concrete ones.

You, of all people, are saying that?

Yes. Because otherwise it won’t work. People will not accept autonomous driving if the course of an accident and the question of guilt cannot be clarified beyond doubt, especially when there are deaths involved.

Artificial intelligence will control cars with complex, multilayered algorithms. If an accident happens, nobody will be able to decipher the billions of computing operations to understand what went wrong with the car.

That’s exactly why we need clear rules that establish a framework. It will not be enough to say that the technology is in principle 90 percent safer than human drivers, but that when an accident occurs we don’t know why the artificial intelligence decided one way and not another. Society will not accept that. Such a system lacks the rules created by humans. And if we did without such rules, test cars would have to cover more than 30 billion kilometers (18.6 billion miles) to generate enough empirical data to perfect the artificial intelligence to the point that accidents can be virtually ruled out. That isn’t feasible.
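Where does a number of that magnitude come from? A back-of-envelope reconstruction (ours; the interview does not spell out the arithmetic): human driving is commonly estimated at roughly one fatality per million driving hours, so proving from road data alone that a system is a thousand times safer, about one fatality per billion hours, would take on the order of a billion test hours. At an assumed average speed of 30 kilometers per hour, that works out to

$$
10^{9}\ \text{hours} \times 30\ \frac{\text{km}}{\text{hour}} = 3 \times 10^{10}\ \text{km} = 30\ \text{billion kilometers}.
$$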

A driverless car demonstrated by Continental at the IAA car show in Frankfurt, September 2017. The future is already here. Source: Arne Dedert/DPA

But that’s precisely the approach taken by Google and Tesla. They have their test cars drive as many miles as possible in order to train their AI systems to perfection with the resulting data. Are you saying this doesn’t work?

It would work technologically, if you could actually drive that many miles. But it doesn’t work socially, because the problem of the black box remains. When people cause accidents, we accept that; we know our species is not infallible. But there will be a huge outcry if a computer kills a person. That is not anticipated in our system of values. Without reliable rules, accidents could put an end to the development of autonomous driving. And there will be accidents, because self-driving cars and cars with human drivers will coexist for many years to come, and people make mistakes. So we need rules that help rule out the robot car as the party to blame.

What could these rules look like?

They have to be programmed as algorithms that absolutely ensure that if the autonomous vehicle has adhered to them, there is no chance whatsoever that it has caused an accident. This must then be incorporated into the law. The only rules we have today are traffic regulations.

And what’s wrong with them?

Nothing. But it is not enough for manufacturers to program their algorithms according to the traffic regulations in such a way that cars do not run red lights or drive faster than 50 kilometers per hour in urban areas. In between, there is a plethora of variables and situational considerations that people master with knowledge of the rules, but also with experience and discretionary powers. It is precisely this scope of discretion that has to be formalized for the machine — and then made binding for all manufacturers.

Do you seriously want to try to predict all critical traffic situations and translate them into formulas?

It is possible. In fact, we have already developed such a collection of formulas. To this end, we analyzed 6 million accidents in detail. Some 99.4 percent of them fall into one of 37 typical scenarios. Our mathematical models cover them all.

Can you give us an example?

The driver of the rear car is to blame in a rear-end collision, because the assumption is that he or she did not keep sufficient distance. With self-driving cars, it is relatively easy to calculate whether the car responsible for the rear-end collision kept a sufficient distance from the car in front: All you need are the speeds of both vehicles, the road conditions, visibility and braking distances. This is all data that we can collect. If the car that hit the one in front were a robot car, we would simply check whether it was keeping the minimum distance and driving within the speed limit, and whether all its functions were intact. This data could be used to rule out the possibility that the robot car was to blame.
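Mobileye later formalized rules of this kind in its published Responsibility-Sensitive Safety (RSS) model. The sketch below renders the rear-end rule in simplified form; the structure follows an RSS-style safe longitudinal distance, but the function names and all parameter values are illustrative choices, not Mobileye’s published constants:

```python
def rss_min_safe_gap(v_rear: float, v_front: float,
                     rho: float = 1.0,      # reaction time, s (assumed)
                     a_accel: float = 3.0,  # max accel during reaction, m/s^2
                     b_min: float = 4.0,    # rear car's guaranteed braking, m/s^2
                     b_max: float = 8.0) -> float:
    """Minimum following distance (m) under an RSS-style rear-end rule.

    Worst case assumed: the rear car accelerates throughout its reaction
    time and then brakes only moderately, while the front car brakes as
    hard as physically possible. Parameter values are illustrative.
    """
    v_reacted = v_rear + rho * a_accel
    gap = (v_rear * rho
           + 0.5 * a_accel * rho ** 2
           + v_reacted ** 2 / (2 * b_min)
           - v_front ** 2 / (2 * b_max))
    return max(0.0, gap)


def rear_car_blameless(logged_gap_m: float, v_rear: float,
                       v_front: float) -> bool:
    # If the recorded gap never fell below the mandated minimum, the
    # rear (robot) car can be ruled out as the cause of the collision.
    return logged_gap_m >= rss_min_safe_gap(v_rear, v_front)


# Example: both cars at 14 m/s (about 50 km/h), 25 m apart.
print(round(rss_min_safe_gap(14.0, 14.0), 1))  # 39.4 -> 25 m is too close
print(rear_car_blameless(25.0, 14.0, 14.0))    # False
```

Because every quantity in such a check comes from logged sensor data, blame could be settled deterministically after the fact, which is exactly the kind of clear rule Mr. Shashua is calling for.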

Even if it works, you still have to convince a lot of partners that your collection of formulas is sound. How do you intend to do that?

We are already talking to numerous well-known manufacturers that are currently developing self-driving cars. The initial reactions are very encouraging. After all, manufacturers also have a vital interest in a reliable legal framework.

Stefan Hajek is editor at WirtschaftsWoche, a sister publication of Handelsblatt. To contact the author: stefan.hajek@wiwo.de
