Driverless Cars Face Hiccups
In an effort to create driverless cars, Google has run into an odd safety conundrum: its cars don't make enough mistakes. Recently, one of Google's self-driving cars approached a crosswalk and did what it was supposed to do, slowing to allow a pedestrian to cross while its safety driver applied the brakes. The pedestrian was fine, but Google's car was not: it was hit from behind by a human-driven sedan.
The search giant's autonomous test cars are programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. In a 2009 test, one Google car couldn't get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for an advantage, paralyzing Google's robot.
But it's not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don't behave by the book. Donald Norman, director of the Design Lab at the University of California, San Diego, said the cars must learn to be aggressive in the right amount, and that the right amount depends on the culture.
The way humans often deal with these situations is by making eye contact, negotiating on the fly about who has the right of way. Traffic wrecks and deaths could well plummet in a world without any drivers, as some researchers predict. But wide use of self-driving cars is still many years away, and testers are still sorting out hypothetical risks, like hackers, and real-world challenges, like what happens when an autonomous car breaks down on the highway.

For now, there is the nearer-term problem of blending robots and humans. Already, cars from several automakers have technology that can warn the driver or even take over, whether through advanced cruise control or brakes that apply themselves.