The promise of a true self-driving car on public roads is no longer the stuff of futuristic sci-fi movies. We’ve all seen the streaming video of the Google self-driving car, the egg-shaped electric pod, motoring around the company's corporate campus.


But how close is the Google self-driving car to being ready to navigate the nation’s highways?

Consumer Reports welcomed Ron Medford, the director of safety for Google’s self-driving car project, to our headquarters to chat about the vehicle's progress.

Google self-driving car from behind.

Did Google build these prototype cars?

We designed them in-house and built them with our partners from scratch. We provided the sensors and software, and worked with Roush [Enterprises, a prototype-car manufacturer], Continental, Bosch, and many others to assemble it. We also developed some of our own sensors, such as the long-range 200-meter and medium-range lasers on the roof.

Is the Google Car street legal?

Yes. It’s a low-speed vehicle, and it can travel on roads with speed limits up to 35 mph under California law. Keep in mind that these are prototypes, designed for learning and rapid iteration. This is the first incarnation of something that could go in a lot of different directions; it’s our long-term intention that’s important here, not the specifics of this vehicle. (Watch the Google self-driving car in action.)

There’s some concern that a self-driving computer cannot anticipate bad or impaired drivers’ behavior on the road. How does Google address that?


In terms of anticipatory driving, the car detects and reacts quickly. It isn’t programmed to conclude, “Oh, that’s a drunk driver”; instead it watches distances and behavioral patterns. Self-driving cars never get sleepy or distracted the way humans do, and our car’s ability to see 360 degrees around it, out to 200 meters, while simultaneously tracking many objects means it can potentially respond more quickly than a human in many scenarios.
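To make that idea concrete, here is a minimal, purely illustrative Python sketch of one way a system might rank many simultaneously tracked objects by time-to-collision; the object list, speeds, and 200-meter cutoff are assumptions made up for the example and are not taken from Google's software.

```python
# Illustrative sketch only: ranking simultaneously tracked objects by
# time-to-collision so the most urgent one is considered first. The object
# list, speeds, and 200 m cutoff are assumptions for this example.

def time_to_collision(distance_m: float, closing_speed_mps: float) -> float:
    """Seconds until contact if nothing changes; infinite if not closing."""
    return distance_m / closing_speed_mps if closing_speed_mps > 0 else float("inf")

# Each tracked object: (label, distance in meters, closing speed in m/s).
tracked = [
    ("cyclist", 35.0, 4.0),
    ("pedestrian", 120.0, 1.2),
    ("car", 180.0, 20.0),
]

# Keep objects within the assumed 200 m sensing range, then attend to
# whichever one would reach the vehicle soonest.
in_range = [obj for obj in tracked if obj[1] <= 200.0]
most_urgent = min(in_range, key=lambda o: time_to_collision(o[1], o[2]))
print(most_urgent)  # ('cyclist', 35.0, 4.0) has the lowest time-to-collision
```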

Some self-driving systems already on the road react differently to different objects in the road. In some cases, if the object is not shaped like a car or pedestrian or cyclist, it won’t react at all. How does the Google system react?

When in self-driving mode, our software recognizes hundreds of objects with distinct shapes—such as cyclists and pedestrians—and detects and interprets things like traffic signals and signs. With data collected from driving nearly two million miles on public roads, we’ve developed models for identifying objects and predicting what they are likely to do in a given situation. For instance, if a car is approaching a four-way stop at high speed, there are various probabilities that the other car will stop normally, screech to a stop, or run the stop sign. We then respond accordingly.
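As a rough illustration of that kind of probabilistic prediction, here is a minimal Python sketch; the behavior labels, braking rate, thresholds, and probabilities are invented for the example and do not reflect Google's actual model.

```python
# Illustrative sketch only: a toy behavior-prediction model for another car
# approaching a four-way stop. All numbers here are assumptions made up for
# this example, not values from Google's software.

def predict_stop_behavior(speed_mps: float, distance_m: float) -> dict:
    """Return rough probabilities for how the other car will behave."""
    comfortable_decel = 3.0  # m/s^2, assumed comfortable braking rate
    time_to_line = distance_m / max(speed_mps, 0.1)
    time_to_stop = speed_mps / comfortable_decel

    if time_to_stop <= time_to_line:
        # Plenty of room: most likely a normal stop.
        return {"stops_normally": 0.90, "hard_stop": 0.08, "runs_sign": 0.02}
    elif time_to_stop <= 1.5 * time_to_line:
        # Braking late: a hard, screeching stop becomes much more likely.
        return {"stops_normally": 0.30, "hard_stop": 0.55, "runs_sign": 0.15}
    else:
        # Approaching far too fast to stop: assume it may run the sign.
        return {"stops_normally": 0.05, "hard_stop": 0.25, "runs_sign": 0.70}

def choose_response(probabilities: dict) -> str:
    """Yield if there is a meaningful chance the other car won't stop."""
    return "yield" if probabilities["runs_sign"] > 0.10 else "proceed_when_clear"

# Example: a car 20 m from the stop line doing 15 m/s (about 34 mph).
probs = predict_stop_behavior(speed_mps=15.0, distance_m=20.0)
print(probs, choose_response(probs))
```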

Asking the car to make a necessary maneuver that runs contrary to the rules of the road would seem to be a tricky software problem. How do you approach that?


We’ve taught the car to make decisions by combining existing models of how objects in the world are expected to behave with real-time information about how they’re actually behaving. So, for example, if the car comes upon a construction site that requires it to follow signs and cones across a double yellow line, it is programmed to do so, given all the information it has about its situation and the applicable rules of the road.
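Here is a minimal, purely illustrative Python sketch of that idea of combining a default rule of the road with real-time evidence; the observation fields and decision logic are assumptions made up for the example, not Google's actual decision-making code.

```python
# Illustrative sketch only: a default "never cross a double yellow line" rule
# that can be overridden by real-time observations of a construction zone.
# Field names and logic are invented for this example.

from dataclasses import dataclass

@dataclass
class Observation:
    cones_redirect_across_centerline: bool  # cones funnel traffic over the line
    construction_signage_present: bool      # e.g., a "LANE SHIFT AHEAD" sign
    oncoming_lane_clear: bool               # no oncoming traffic detected

def may_cross_double_yellow(obs: Observation) -> bool:
    """The default prohibition is overridden only when real-time evidence
    shows traffic is being directed across the line and it is safe to follow."""
    default_allowed = False  # prior model: crossing is prohibited

    directed_across = (obs.cones_redirect_across_centerline
                       and obs.construction_signage_present)
    return default_allowed or (directed_across and obs.oncoming_lane_clear)

# A construction zone funneling traffic across the centerline, lane clear:
print(may_cross_double_yellow(Observation(True, True, True)))    # True
# No construction evidence: the default prohibition stands.
print(may_cross_double_yellow(Observation(False, False, True)))  # False
```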

What if the computer senses a “no win” situation? There’s going to be an accident, no matter what path is taken. Then what?


We get asked about a lot of hypothetical scenarios. But there are far too many variables involved for us to be able to answer hypothetical questions. Our vehicle is programmed to try to avoid hitting any object, so in any particular situation the vehicle’s performance would depend entirely on a large number of facts specific to that situation.

Ron Medford, the director of safety for the Google self-driving car project