In what was once an industrial-age foundry along the Allegheny River in Pittsburgh, Carnegie Mellon University has created a long-standing hub for the development of autonomous vehicle technology—the National Robotics Engineering Center.

The university’s pioneering work with the government’s Defense Advanced Research Projects Agency dates back to 1984 and has led to the creation of many of the vehicles that occupy the facility today.

The building’s high bay has a crane capable of lifting 10 tons, and the huge open space is littered with prototypes with names like Crusher (an unmanned military ground vehicle), Chimp (a robot with thumbs that can grasp tools), and Boss (a 2007 Chevy Tahoe modified to drive itself).

Some of the center’s major clients—including NASA, Caterpillar, Ford Motor Co., John Deere, and multiple arms of the Defense Department—are underwriters of advanced autonomous vehicle technology.

Although much of this technology was originally intended for the battlefield, it has become increasingly clear in recent years that self-driving cars and trucks—animated by computer code—will be sharing the roads with ordinary drivers in the near future. And in places like Mountain View, Calif., Pittsburgh, and Phoenix, this is already happening in the form of on-the-road testing. Pittsburgh was also the place Uber chose to launch its prototype test fleet of self-driving taxis last year.



Philip Koopman is a computer and electrical engineering professor at Carnegie Mellon who often spots Uber’s self-driving taxis while riding a bus downtown. These days, his job at NREC is to stress-test the software that guides the center’s self-driving car prototypes. He and his team of computer wizards throw dilemmas at the vehicles in the form of confounding nuggets of code.

One day they might try to ensure a programmed speed limit holds steady in self-driving mode. On another, they’ll corrupt map data to see how the vehicles respond. Do the cars stop entirely, or crash? Or act confused?
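To make the flavor of that work concrete, here is a minimal, purely illustrative sketch of one such fault-injection check; the simple planner function, fallback value, and test below are hypothetical stand-ins, not NREC's actual code.

```python
# Illustrative fault-injection check (hypothetical code, not NREC's).
# A planner reads a speed limit from map data; the test corrupts that
# data and verifies the planner degrades safely instead of misbehaving.

SAFE_FALLBACK_MPH = 25  # assumed conservative default when data is suspect


def commanded_speed(map_speed_limit_mph):
    """Return the speed the planner should command, given map data."""
    # Treat missing or implausible map values as untrustworthy.
    if map_speed_limit_mph is None or not (5 <= map_speed_limit_mph <= 85):
        return SAFE_FALLBACK_MPH
    return map_speed_limit_mph


def test_corrupted_map_data_does_not_raise_speed():
    # Inject faults the way a stress tester might: missing, garbage,
    # and wildly out-of-range speed limits.
    for corrupted_value in (None, -1, 0, 900, float("nan")):
        assert commanded_speed(corrupted_value) <= SAFE_FALLBACK_MPH


if __name__ == "__main__":
    test_corrupted_map_data_does_not_raise_speed()
    print("Planner held a safe speed despite corrupted map data.")
```

The point of a test like this is not the specific numbers but the habit: feed the software inputs it should never see, and check that it fails toward caution rather than toward speed.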

Sometimes Koopman's team runs its experiments in simulation inside the enormous facility. Other times the researchers travel to clients' testing facilities around the country, riding alongside the vehicles they are trying to befuddle.

It’s a lopsided competition, for sure. “We’ve broken everything we’ve touched,” says Koopman.

Although that sounds like bad news, Koopman’s crew prefers to interpret the failures it causes as counterintuitive moments of progress for the evolution of machine-driven vehicles.

This kind of work takes time, Koopman says, and is important, especially when you’re dealing with “a shiny toy that can kill people.”

Are We There Yet?

Fully self-driving technology is at a critical juncture in its development. For a few years now, test fleets have been operating on public roads, and, for the most part, those fleets have coexisted fairly well with human drivers and pedestrians. That alone can seem like such a miracle of modern engineering that people might assume the full deployment of self-driving cars is all but inevitable, and near-term.

Most experts consulted for this story tend to agree that, technologically, we are about 85 to 90 percent of the way to perfecting the hardware, guidance systems, and software to make vehicles that can reliably and safely drive themselves. Almost all of the fully autonomous vehicles currently allowed on public roads are still under the direct supervision of human pilots, and they’re only driving on roads that have been heavily studied and mapped in three dimensions.

Ford Motor Co. executive vice president Raj Nair says you get to 90 percent automation pretty quickly once you understand the technology you need. “It takes a lot, lot longer to get to 96 or 97,” he says. “You have a curve, and those last few percentage points are really difficult.”

Almost every time auto executives talk about the promise of self-driving cars, they cite the National Highway Traffic Safety Administration statistic that shows human error is the “critical reason” for all but 6 percent of car crashes.

But that’s kind of misleading, says Nair. “If you look at it in terms of fatal accidents and miles driven, humans are actually very reliable machines. We need to create an even more reliable machine.”

Some of the gnarliest issues are still to be solved. There are technical hurdles for the industry to overcome, like perfecting the sensors that enable cars to “see” in all conditions. There are legal questions, such as whether a car company will accept liability when the driver is its software. Ethical challenges may prove even harder. Should a self-driving car swerve to avoid a young child, risking the life of its owner-occupant? And for every real-life situation researchers like Koopman and his team identify, there are likely hundreds or thousands of others no one has yet thought of.

As daunting as that sounds, there has been a lot of progress in the technological foundation of autonomous driving in the last few years. And even more hype. So how long will it take to get from test cars to real-world autonomous vehicles?

Most industry analysts believe it will take many more years—even decades— before they replace human-driven cars in significant numbers. Market forecaster Moody’s projects they won’t be a majority of active cars before 2045.

Still, driverless technology is one of the major trends in the auto world, along with the rise of electric vehicles, the growth of ride sharing, and increasing Internet connectivity.

It seems an automotive revolution is upon us.

Mike Ableson, the vice president of global strategy at General Motors, says GM expects to see more industry change over the next five years than in the last 50. “We’ve solved a lot of the really hard problems as far as the environment we operate in,” he says of reaching the threshold of full driving automation. “There’s not a lot of fundamental invention that’s got to go on. It’s more development and refinement and validation.”

For all the uncertainty, there’s a good deal of agreement on the biggest technical issues that still need to be solved, which boil down to three main areas: sensor technology (for “seeing” the road and any potential obstacles), mapping (for spatial orientation), and software (for thinking and problem-solving).

1. Sensor Technology

Just like a human driver uses eyes to see the road ahead and transfers visual data to the brain, an automated vehicle will have to use a combination of sensors to transmit data about the nearby environment to its computer processors. Think how much safer a human driver would be if she had eight eyes, not two.

Prototype vehicles today are equipped with bulky equipment on the roof, where it’s easier for sensors to get a 360-degree view of the vicinity. All that gear is basically a collection of two different types of sensors. First is an array of cameras, which takes in the same type of visual information that the human eye does—only in multiple directions at the same time—then feeds that information to a computer. With enough cameras, blind spots are eliminated. Narrow-field cameras can see clearly at distances beyond human vision. Wide-angle cameras offer superior peripheral vision.

Mobileye, a company based in Israel that develops cameras, hardware, and software for much of the auto industry, is marketing systems that use eight cameras spaced around the vehicle, along with chips and software to process that visual data.

The second type of object-detecting sensor includes radar and lidar, which use radio waves or light pulses, respectively, to scan the road ahead for potential obstacles. These can work in tandem with cameras, with the strengths of each technology offsetting the weaknesses of the others. Researchers and automakers are still working out which combination of sensors creates the best balance of capability, complexity, and cost.
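As a rough illustration of that complementarity, the sketch below pairs a camera detection (good at recognizing what an object is) with a radar return (good at measuring range and closing speed); the data structures and matching rule are invented for illustration, not any automaker's actual fusion stack.

```python
# Toy sensor-fusion sketch (hypothetical, for illustration only).
# The camera contributes the object's class; the radar contributes
# range and closing speed; the fused track combines both.
from dataclasses import dataclass


@dataclass
class CameraDetection:
    label: str          # e.g. "pedestrian", "vehicle"
    bearing_deg: float  # angle from the car's heading


@dataclass
class RadarReturn:
    bearing_deg: float
    range_m: float
    closing_speed_mps: float


def fuse(cam: CameraDetection, radar: RadarReturn, max_bearing_gap_deg=3.0):
    """Pair a camera detection with a radar return pointing the same way."""
    if abs(cam.bearing_deg - radar.bearing_deg) > max_bearing_gap_deg:
        return None  # the two sensors are not describing the same object
    return {
        "label": cam.label,                        # camera's strength
        "range_m": radar.range_m,                  # radar's strength
        "closing_speed_mps": radar.closing_speed_mps,
    }


print(fuse(CameraDetection("pedestrian", 1.5), RadarReturn(2.0, 18.0, 1.2)))
```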

Human vision at night is limited, reduced to whatever headlights can illuminate. Meanwhile, some sensors, such as lidar and more traditional radar, don’t need light to see, according to Michael Jellen, president and chief operating officer of Velodyne LiDAR, a leading industry supplier.

“Driving into the sunrise or sunset, of course, night driving—these are all extremely tough challenges for anything that’s not lidar-based,” Jellen says.

Radar is adept at calculating speed and distance. But it still has some limitations, including not being able to distinguish whether an upcoming obstacle is a living thing or a similarly sized rock, or whether a traffic light is red or green.

Lidar has been perhaps the most exotic, costly, and important technological piece of the self-driving puzzle. Pulsed lasers bounce off surrounding objects to generate a three-dimensional map. The third dimension is key because it gives the car the depth perception we humans naturally have, which is necessary to avoid crashes. Lidar systems are accurate but cost as much as $7,500 per car. And they can be easily flummoxed by commonly occurring events such as rain and snow.
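In rough terms, each lidar return is a distance measured along a known firing direction, and simple trigonometry turns those ranges into x, y, z points. The sketch below shows that geometry with made-up sample returns; it is a toy illustration, not a vendor's processing pipeline.

```python
# How lidar ranges become a 3-D point cloud (illustrative only).
# Each return is a distance measured at a known azimuth (horizontal
# angle) and elevation (vertical angle); trigonometry converts that
# into an (x, y, z) point relative to the sensor.
import math


def return_to_point(range_m, azimuth_deg, elevation_deg):
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = range_m * math.cos(el) * math.cos(az)  # forward
    y = range_m * math.cos(el) * math.sin(az)  # left/right
    z = range_m * math.sin(el)                 # up/down
    return (x, y, z)


# A few made-up returns: (range in meters, azimuth, elevation)
sample_returns = [(12.0, 0.0, -2.0), (12.1, 1.0, -2.0), (35.5, 90.0, 0.5)]
point_cloud = [return_to_point(*r) for r in sample_returns]
for point in point_cloud:
    print(tuple(round(coordinate, 2) for coordinate in point))
```

Collect millions of such points per second and the result is the detailed 3-D picture of the surroundings that the car reasons about.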

Some newer vehicles, such as the 2017 Cadillac CTS, are already equipped with a radio technology known as vehicle-to-vehicle or vehicle-to-infrastructure, which lets vehicles communicate with the infrastructure around them or directly with other cars on the road. An Audi system on some Q7 and A4 models already communicates with certain “smart stoplights” and tells drivers the seconds until a light will turn green.

Federal safety regulators see enormous potential safety benefits to V2V and V2I technologies, and have proposed that all cars eventually come with V2V equipment, enabling cars to talk to one another by broadcasting a stream of speed, acceleration, location, and braking information.
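A minimal sketch of what such a broadcast payload could contain appears below; the field names and values are illustrative assumptions, not the standardized message format regulators have in mind.

```python
# Illustrative vehicle-to-vehicle status broadcast (hypothetical fields,
# not the actual standardized message format). Each car repeatedly
# shares where it is, how fast it's going, and whether it's braking.
import json
import time


def build_v2v_message(vehicle_id, lat, lon, speed_mps, accel_mps2, braking):
    return {
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),
        "position": {"lat": lat, "lon": lon},
        "speed_mps": speed_mps,
        "accel_mps2": accel_mps2,
        "braking": braking,
    }


# A nearby car could use a message like this to anticipate hard braking
# it cannot yet see, such as two vehicles ahead in the same lane.
message = build_v2v_message("car-42", 40.4406, -79.9959, 13.4, -4.8, True)
print(json.dumps(message, indent=2))
```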



2. Mapping

Right now, GPS systems can pinpoint locations of phones and cars to within about 2 meters roughly 95 percent of the time. That’s accurate enough to navigate in traffic, but not good enough to let the car drive on its own.

That’s why researchers and carmakers are embarking on a massive endeavor to create high-definition 3D maps of the nation’s roads. Some of this mapping is already under way in cities where self-driving fleets and research vehicles have scanned roads using lidar.

These high-definition maps have been shown to be accurate to within a few centimeters. They can help self-driving cars navigate when conditions make it difficult for sensors to see the road. And they can assist self-driving cars in cutting through the chaos while merging onto a highway entrance ramp, joining a traffic circle, or traversing a bridge.

Ford’s test vehicles, for example, scan every road they drive to pinpoint locations of trees, fire hydrants, buildings, stop signs, and traffic lights—anything within 200 meters of the moving car. Once roads—and larger areas such as towns and cities—are fully mapped, cars will know whether a crosswalk exists even when painted lines are worn thin or covered by snow.
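As a hedged sketch of the payoff, the toy lookup below shows how a car could ask a stored map what features lie within 200 meters of its position, whether or not its cameras can see them; the map contents and function are invented for illustration.

```python
# Illustrative high-definition map lookup (invented data and functions).
# Even if snow hides the painted lines, the car can ask the map what
# features lie within a given distance of its current position.
import math

# A tiny hand-made "HD map": feature type plus x/y position in meters,
# expressed in a local map frame.
HD_MAP_FEATURES = [
    {"type": "crosswalk",    "x": 120.0, "y": 45.0},
    {"type": "stop_sign",    "x": 118.5, "y": 40.2},
    {"type": "fire_hydrant", "x": 300.0, "y": -12.0},
]


def features_near(car_x, car_y, radius_m=200.0):
    """Return mapped features within radius_m of the car."""
    nearby = []
    for feature in HD_MAP_FEATURES:
        dist = math.hypot(feature["x"] - car_x, feature["y"] - car_y)
        if dist <= radius_m:
            nearby.append((feature["type"], round(dist, 1)))
    return nearby


# With the car at (100, 40), the crosswalk and stop sign show up
# even if the cameras cannot see them.
print(features_near(100.0, 40.0))
```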

Some companies have begun crowdsourcing the job of gathering data for 3D maps instead of doing it all themselves. Newer-model cars already have built-in cameras for active-safety systems such as automatic braking and lane-change assist that are generating huge amounts of this kind of data.

Even temporary construction zones, potholes, and sinkholes could be identified and marked very quickly with crowdsourcing of real-time conditions, says Jim Zizelman, a vice president for electronics and safety at Delphi, an automotive and technology company.
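One way such crowdsourcing might work, sketched with invented names and thresholds: individual cars report what they observe, and a hazard is added to the shared map only once several independent cars confirm it.

```python
# Hypothetical crowdsourced road-condition reporting (illustrative only).
# Cars report observations; the map service waits for several independent
# confirmations before flagging a hazard for everyone.
from collections import defaultdict

CONFIRMATIONS_NEEDED = 3  # assumed threshold, purely illustrative

reports = defaultdict(set)  # (hazard type, rounded location) -> reporting cars


def report_hazard(car_id, hazard_type, lat, lon):
    key = (hazard_type, round(lat, 4), round(lon, 4))  # coarse location bin
    reports[key].add(car_id)
    if len(reports[key]) >= CONFIRMATIONS_NEEDED:
        return f"CONFIRMED: {hazard_type} near ({lat}, {lon})"
    return f"pending ({len(reports[key])} of {CONFIRMATIONS_NEEDED} reports)"


for car in ("car-1", "car-2", "car-3"):
    print(report_hazard(car, "pothole", 40.44061, -79.99589))
```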

This is one reason companies like Tesla are willing to roll out cars equipped with autonomous-vehicle hardware before having corresponding software written. By using the cars’ cameras to record data about accidents and near misses, Tesla says it can evolve and validate self-driving technology before activating it.

Other companies like Mobileye are planning a similar effort to gather data from the nearly 14 million newer-model cars already using their sensors for other semi-autonomous features.

One of the challenges of 3D maps built with lidar is that they require storage and processing power well beyond what could fit in a car today. Crowdsourced maps would transmit data roughly 100,000 times faster than lidar-based 3D maps, says Zizelman, and they’ll be continually updated.

3. Software

Today’s self-driving cars are sometimes compared to teenage drivers: relatively safe in limited situations but not nearly as safe as an experienced human driver.

Making cars safe enough to be let loose on busy roads requires painstaking programming of real-life situations, along with machine learning and artificial intelligence, so that they can recognize what’s happening in every conceivable circumstance. They have to process their environment and make safe decisions even about things they’re encountering for the first time.

There are essentially two ways to train a vehicle to anticipate the unexpected: Program in every possible eventuality, or teach a vehicle to learn and think for itself.
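The difference between those two approaches can be sketched in miniature (a toy example with made-up numbers, not production code): one function encodes a hand-written braking rule, while the other derives its rule from labeled examples.

```python
# Two toy ways to decide "should the car brake?" (illustration only).
# 1) Hand-coded rule: an engineer writes the condition explicitly.
# 2) Learned rule: the threshold comes from labeled example situations.

def should_brake_rule_based(gap_m, closing_speed_mps):
    # Explicitly programmed eventuality: brake if time-to-contact < 2 s.
    return closing_speed_mps > 0 and gap_m / closing_speed_mps < 2.0


def learn_brake_threshold(examples):
    """Pick the time-to-contact threshold that best fits labeled data."""
    best_threshold, best_correct = 0.0, -1
    for candidate in (t / 10 for t in range(5, 60)):  # 0.5 s to 5.9 s
        correct = sum(
            ((gap / speed) < candidate) == braked
            for gap, speed, braked in examples
        )
        if correct > best_correct:
            best_threshold, best_correct = candidate, correct
    return best_threshold


# Made-up labeled situations: (gap_m, closing_speed_mps, human braked?)
training_data = [(40, 5, False), (10, 8, True), (6, 4, True), (60, 10, False)]
print("rule-based decision:", should_brake_rule_based(12, 8))
print("learned threshold (s):", learn_brake_threshold(training_data))
```

Real systems learn far richer behavior from far more data, but the trade-off is the same: hand-written rules are predictable yet limited, while learned rules generalize but must be trained and validated against situations no one wrote down.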

The system has to be one that sees pedestrians, bicyclists, and lanes and understands driver behavior, says Ford’s Nair. “It’s not just recognizing there’s a vehicle in front of it,” he says, explaining that it learns through driving experience, like people do.

A lot of the progress has come via analysis of test cars on actual roads and in programmed simulations of road driving. Programmers have worked out a lot of the basics of ordinary driving.

The trick is getting those rare situations, ones that might occur only once in a lifetime, written into code.

Researchers scan for weird incidents among the millions of test-car miles. This has yielded oddities, such as the Google car that stopped cold while a woman in an electric wheelchair did circles in the road ahead. She was chasing a duck with a broom. The poor car had never seen anything like it.

The vehicles in Ford’s test fleet of self-driving Fusion sedans are all learning from one another at the same time, Nair says. When one vehicle encounters a situation and a software engineer figures out a solution, that solution is then learned by every other self-driving car Ford owns, he says.
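A loose sketch of that fleet-wide loop, with invented class and situation names: once engineers craft a response to one car's logged edge case, the same behavior is pushed to every vehicle.

```python
# Hypothetical sketch of fleet-wide learning (names invented).
# One car logs an unfamiliar situation; engineers ship a handler for it;
# every car in the fleet receives the same updated behavior.

class FleetCar:
    def __init__(self, car_id):
        self.car_id = car_id
        self.handlers = {}  # situation name -> how to respond

    def receive_update(self, situation, response):
        self.handlers[situation] = response

    def respond(self, situation):
        return self.handlers.get(situation, "slow down; hand off to safety driver")


fleet = [FleetCar(f"fusion-{i}") for i in range(3)]

# One car encounters something new; engineers craft a response and
# push it to the whole fleet.
new_situation, new_response = "wheelchair_circling_in_road", "stop and wait"
for car in fleet:
    car.receive_update(new_situation, new_response)

print(fleet[2].respond("wheelchair_circling_in_road"))  # learned from another car's encounter
```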

It’s a daunting task imagining and writing all that code, then testing it in labs and on roads. “We’re trying to replicate your brain,” Nair says. The auto industry seems unfazed that the human brain, which works so well to process data it absorbs, evolved over millions of years. It won’t take anywhere near that long to get a functional computer brain for self-driving cars, Nair says, because engineers will rely on experimenting and testing, and not random genetic mutation.

“Mother Nature, as good as she is, does a lot of work by accident,” Nair says. “If something doesn’t work, we can make a design change right away.”

The challenge might be easier if self-driving cars only had to worry about other predictable self-driving cars. But that won’t be the case. They’ll be sharing the road with unpredictable human drivers for decades, at least.

That’s where Waymo, a new company that was formerly Google’s self-driving car project, believes it has an edge. With deep roots in software, it has experimented with machine learning in projects like Google Translate and Google Photos. “To navigate city streets, we’ve had to train our software to be able to understand and predict how drivers and other users of the road will behave,” says Johnny Luu, a spokesman for Waymo.

Ford’s recent partnership with a small visual-software company echoes the industry’s growing interest in artificial intelligence. Nirenberg Neuroscience, a New York-based company founded by Weill Cornell Medical College professor Sheila Nirenberg, has developed software meant to mimic the code transmitted from the human eye to the brain. Nirenberg’s research is already helping robots recognize objects, read faces, and navigate complex situations. Her company has also developed a “bionic eye” to restore sight to patients with degenerative retinal diseases.

Ford hopes to use its partnership with Nirenberg to bring humanlike intelligence to driverless cars.

Nirenberg says she figured out that the human eye transmits to the brain only what it needs to know. The key to the artificial intelligence she’s developing is being just as effective while working with a much smaller amount of data.

Evolution built that editing process over millions of years, Nirenberg told CR. “I figured out what evolution did and turned it into equations.” It’s nearly impossible to map all circumstances in all weather conditions, she says. Humans didn’t evolve with highly detailed maps in their heads. We can function in places we’ve never been before because our brains focus only on what they need to know at the moment.

Likewise, cars must be able to respond to something on the fly, to handle the unexpected, she says. “You want the flexibility to be like we are; you don’t have to know everything.”

Editor's Note: This article also appeared in the April 2017 issue of Consumer Reports magazine.