All around the world, automakers and tech companies are hard at work creating and refining autonomous vehicles for the public in the coming years. Governments, meanwhile, are deliberating over how best to ensure public safety around these vehicles and the technology behind them. Tellingly, safety is a large part of why self-driving cars are being built in the first place: the hope is that they can be programmed to make fewer mistakes than us silly human drivers. Engineers believe the technology can respond more quickly to the problems drivers face, and since robots don’t get tired, distracted, or drunk (as we learned from the Terminator films), the appeal of this development is unmistakable. Google, Apple, and many others have been testing self-driving cars for some time now, sporadically releasing reports both as required and as marketing. In fact, this month Google pressed the National Highway Traffic Safety Administration (NHTSA) to fast-track rules governing the testing of autonomous vehicles.
That all sounds well and good, but some remain skeptical, chiefly nonprofit consumer safety organizations like Consumer Watchdog, which has major concerns of its own and has just made its own requests of the NHTSA. First and foremost, Consumer Watchdog is asking that the rules require a driver behind the wheel at all times, and that all autonomous cars be equipped with a steering wheel, brake, and accelerator in case a human needs to take over the controls.
The organization has reviewed Google’s reports and noted that over 15 months and 424,331 miles of testing, the autonomous system failed 272 times, and the human drivers felt the need to intervene an additional 69 times.
Consumer Watchdog’s Privacy Project Director, John M. Simpson, issued a statement claiming the reports prove there are routine traffic situations that a self-driving system cannot handle but a human driver can. He believes it is important that an actual human driver stay behind the wheel, ready to take control in those instances. He, along with the group, believes self-driving robot cars are not yet able to safely manage enough traffic situations without the human element. Among the questions the group wants Google to answer:
- Will Google publish a complete list of the situations its self-driving cars cannot yet comprehend or handle, and how will the NHTSA deal with such matters?
- What will happen if Google’s computer “driver” suddenly goes offline with a passenger in the car, especially if the car is built without a steering wheel or pedals that would let the passenger steer, accelerate, or brake?
- Will Google agree to publicly publish the software algorithms that determine how the company’s “artificial car intelligence” is programmed to “decide” what happens in the event of a collision or potential collision? Does the Google car prioritize the safety of the vehicle’s occupants or of the pedestrians it encounters, and how?
- Will Google publish all video from the car and technical data, including but not limited to radar and lidar reports, associated with accidents or other anomalous situations?
- Is Google going to publish every piece of data in its possession that discusses and makes predictions regarding the safety of driverless vehicles?
- Does Google ever expect its cars to be involved in a fatal crash? If so, when its system is found responsible for causing the accident, how would Google be held accountable?
- What evidence is Google using to prove that self-driving cars are safer than traditional modern vehicles?
- Will Google store, market, sell, or transfer the data gathered by the self-driving car, or use it for any purpose other than navigating the vehicle and protecting the passenger?
- The NHTSA’s performance standards exist to encourage companies to prioritize safety technology. Why does it seem like Google is trying to circumvent them by not yet providing the data in its possession concerning the time required to comply with the current NHTSA safety process?
- Does Google have the technology to prevent criminals and hackers from seizing control of a driverless vehicle or any of its systems?
With this latest development, and with companies like Google hopefully responding in kind, driverless cars may indeed make the roads safer (and less congested, and more accessible) for everybody once they reach the public. It is up to companies like Google to take the time to figure out how to improve safety through vehicle autonomy, and it is absolutely in our mutual interest for them to actually share that information. We’ve looked into what a possible “safety net” of human controls or other fail-safes might look like, but it’s still unknown whether they will make it into the final designs. A complete pushback, however, is more likely to keep us all in the dark longer, leaving the future of our roads all the more murky and uncertain.