Self-Driving Cars Here By 2025, Mandatory By ?
Most of us suck at driving. Soon, we won't be allowed to drive.
WSJ (“The Future, Coming Soon: Self-Driving Cars Mainstream by 2025”):
The consensus among auto industry technologists, gathered in Detroit this week for the Society of Automotive Engineers World Congress, is that by the middle of this decade, cars that can largely pilot themselves through traffic jams will be offered for sale. By 2020, cars capable of taking over most of the work of high speed driving could debut, and by 2025, fully autonomous vehicles might hit the streets in meaningful numbers.
[A]uto makers – and safety regulators in the U.S. and Europe – say they’re serious about pushing more autonomous braking and steering systems into cars and trucks, for one overriding reason: Most humans are depressingly bad drivers.
The National Highway Traffic Safety Administration, in a study of crash data harvested from black boxes installed in cars, found that just 1% of drivers involved in the collisions applied the brakes at full force before the collision. About 33% of the drivers in the crashes NHTSA analyzed didn’t apply the brakes at all, NHTSA researcher W. Riley Garrott told attendees at a session Wednesday.
About 220 people were killed in the 910,000 rear-end crashes the agency analyzed to learn about driver braking behavior, he said. Crashes in which braking mistakes were a factor cost society $45 billion, based on a survey of 2006-2008 data, Mr. Garrott said.
A separate study of driver behavior presented Wednesday found that in a sample of crashes involving a car swerving out of the lane, 65% of the drivers were speeding, and 67% of drivers either sped up or made no significant change in speed just prior to hitting a car, tree, guard rail or roadside wall.
Driving is at once absurdly dangerous, complicated, and yet remarkably boring. Those of us who spend a significant period of time every day fighting traffic on a repetitious commute naturally zone out, paying more attention to our in-car entertainment system and to life’s many issues than to the task at hand. That we’re allowed to do these things is a function of the sheer lack of good alternatives. That appears to be coming to an end, and soon.
Nissan Motor Corp., for example, says it’s working on a system that will take control of the steering if the driver fails to respond to an object, such as a parked car or pedestrian, detected by forward-looking radar sensors.
Other auto makers, and technology suppliers such as Continental, are expanding the capability of so-called “active safety” systems already on board many new cars and trucks. These systems are built around sensors that can look ahead, to the side or even behind the car to detect obstacles.
Onboard computers can calculate whether the car they’re in is closing too fast with the objects outside, and use the cruise control to slow the car down, or order up a warning signal to the driver, or if the driver still doesn’t react, engage the brakes.
Luxury car makers – and some mass market brands – are starting to present these driver assistance technologies as desirable safety upgrades. Marketing a car that can manage its own way through a 25-mile-per-hour rush hour crawl is a short step away.
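The warn-then-brake escalation the article describes can be sketched as a simple time-to-collision calculation. This is a minimal illustration, not any manufacturer’s actual algorithm; the thresholds and function names are invented for the example.

```python
# Sketch of the "active safety" logic described above: estimate time to
# collision (TTC) from the gap and closing speed reported by the sensors,
# then escalate from no action, to a warning, to automatic braking.
# Thresholds here are illustrative, not from any real system.

def ttc_seconds(gap_m, closing_speed_mps):
    """Time to collision; infinite if we are not closing on the obstacle."""
    if closing_speed_mps <= 0:
        return float("inf")
    return gap_m / closing_speed_mps

def safety_action(gap_m, closing_speed_mps):
    ttc = ttc_seconds(gap_m, closing_speed_mps)
    if ttc > 4.0:
        return "none"   # ample margin
    if ttc > 2.0:
        return "warn"   # alert the driver
    return "brake"      # driver hasn't reacted; engage the brakes

print(safety_action(100.0, 10.0))  # 10 s to impact -> "none"
print(safety_action(30.0, 10.0))   # 3 s -> "warn"
print(safety_action(15.0, 10.0))   # 1.5 s -> "brake"
```

Real systems add sensor filtering, driver-reaction models, and brake-force staging, but the core decision is this kind of threshold on closing time.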
It’ll be a while before this trickles down. While I’d be willing to pay some reasonable surcharge for this technology, I doubt I’ll ever buy a brand new car again. I’ve only done it for myself twice and not since 2001. (My late wife, on the other hand, insisted on having all the latest gadgets, so we bought two brand new vehicles for her.) Still, it tends to take a decade or so for the gee-whiz gear to migrate from an upcharge on a BMW to standard on a Kia.
Regardless, once these technologies are perfected, there’s going to be a heavy impetus to make them mandatory. Once robo-cars become standard, it’s going to be difficult to justify letting humans drive themselves in traffic.
Of course, Geddy Lee and company predicted this thirty-odd years ago.
Red Barchetta, where have I heard that name recently?
What happens when the self driven car you’re sitting in makes a decision that ends up causing an accident?
I think that would kill the US economy.
I wonder what kind of economic hit there would be if there were no crashes?
I still think the basic premise has a problem. Instead of large hunks of metal being spirited through dynamic, ever-changing roads, piloted by largely insolvent drivers with minimum insurance coverage, we would have large hunks of metal being spirited through dynamic, ever-changing roads, piloted by multinational corporations with deep pockets.
@markm: I guess I assume crashes will continue, probably much fewer, though not necessarily less dangerous.
I’ll repeat the prediction I posted earlier:
In 2028 the first self-driving cars will be introduced on the public roads.
By 2068, they will be standard.
By 2108, it will be illegal to drive on public roads. People can only legally drive in special “driving” parks.
PD Shaw, above, touches on what I think is the fly in the ointment here. There is a big gap between something being technically feasible and actually happening. The problem will be liability.
“Thou shalt not make a machine in the likeness of a human mind.” We’re going to learn that lesson after the Butlerian Jihad.
I’m personally dubious that self-driving cars will become ubiquitous or mandatory. Public transportation just seems easier. Indeed, I think if this system is implemented, it will be implemented as a hybrid system – with a mix of trains, buses, and cars – and human hands will never be far from the controls.
One idea to improve defensive driving: Install a six-inch blade coming out of the steering wheel, aimed at the driver.
Seriously, the Google cars are very cool. I read that one problem of robot cars is feedback to pedestrians at crosswalks. I think I read that Google was working on a robot head that would turn to ‘look’ at the walker, blink a pair of ‘eyes’ and nod its ‘head’, all to give the pedestrian the knowledge that they had been seen and can cross safely.
As I’ve said, I think that general purpose self-driving cars, especially in the sense of let the driver sleep (or pour himself a drink) is much further off than the general public believes.
Demonstrations have been on insanely well-mapped roads and with attentive monitors.
2025 is too close. It still has a science fiction ring to it, but it runs up against Gibson’s “the future is here, but not evenly distributed.” Today’s Google car is not ready to be evenly distributed. It isn’t good enough for sleeping drivers, or even for drivers yakking happily on their telephones. 2035? Maybe.
@PD Shaw: Your posts are good and related to each other. Personally, I think we’re probably pretty close RIGHT NOW to an autonomous car that would, on average, have fewer accidents than a human-driven car, and could easily get better gas mileage. But it would likely still do some really, really stupid things in complex/unexpected situations. (I’m saying this as someone who has experience programming autonomous cars in a virtual environment.)
The fly in the ointment is liability, as mentioned above. Yes, there might be fewer overall accidents, but you could sue a mega-corporation for each of them instead of some average schmuck.
When you can let me pick the road, anywhere in the US, and you can supply a car to drive it, then we will by definition be “there.” As I understand it, we are nowhere near, with tests limited to roads which have been laser mapped down to the inch.
(Even if you think such mapping is a reasonable solution to the problem, consider the effort it would take to keep every road in the country mapped to the inch. Many counties are still mapping roads with GPS down to the meter.)
What we DO have today is pretty good driver assist: automatic braking, override on back-up, etc. Small changes with big safety gains and collision-cost reductions.
The one thing the magic box can’t do is handle unexpected or deteriorating conditions. Sure, some humans are bad at it but even those people won’t want to be stranded by some robot in a dangerous situation. So the best you can hope for is an autonomous-assist system.
Not to mention hacking into the system to make the vehicle a weapon or a means to kill the occupant. Scientists have already hacked into cars via the mandated tire-pressure monitoring system. And scientists aren’t the best hackers out there.
Liability has been mentioned.
In current, monitored semi-autonomous systems with professional operators, like airliners, when the system fails or an unexpected situation occurs, the professional often has trouble coming up to speed (realizing the failure, apprising himself of the conditions, etc.) fast enough to avert disaster.
You have to look at these systems from the point of view of how they can fail, whether people will accept being killed by a programmer’s bug, and who will pay for the failures.
But let’s look at it from an alteration of society point of view. Autonomous vehicles would most likely be required to query and follow local speed limits, etc. A good portion of the economy would grind to a halt if everyone followed the speed limit. Not only would municipalities lose speed trap revenue, but daily accomplishment by individuals would decline due to more time in transit. Plus, for all the automation, there will still be government vehicles which will not be autonomous, at least, fire, police. But then the mayor, and city council members, etc. then their cronies, all whizzing by the “subjects” who are subject to the laws the elite are exempted from.
I think this may be moved mostly by insurers. But then I work for one, so I would think that wouldn’t I?
Was that intentional? Probably a good descriptor though.
@Franklin: One of the issues is that a driverless car violates the Geneva Conventions that require motor vehicles to have drivers and for drivers to have the ability to control their cars.* Of course, treaties can be renegotiated or ignored, but I think there will need to be a reorientation about our assignment of morality or responsibility. I think part of the controversy over the use of drones is the lack of a human veto at the point of killing. Why do we have astronauts with pilot-like controls? IIRC from “The Right Stuff,” they weren’t necessary, we could have shot people up in rockets like cargo.
There is computer technology that is available on cars today that does exactly this. My car has some, and it is eight years old. What are you talking about?
I have several thoughts about this. Newer jets can be “flown by wire”: a teenager with mid-level tech skills (and many have advanced skills) could handle these jets if an emergency came up, since it involves following directions by radio, entering directional information into the plane’s computer, and operating some basic controls such as the flaps lever. It seems that we are close to that now with cars and GPS systems. The downside of this would be that the governments (federal and state) could access and use this system to control where and how far a driver can go in their own car! Mileage per year could be reported and controlled. The newer cars are coming equipped with a black box that only the car dealers, manufacturers, and possibly the police can access. You, the car owner, are not allowed entry into the “black box.”
Many of you already know the hassle that those engine lights can cause, many of which are false.
It’s interesting to think about the changes this will cause. In behavior: what if you could sleep on your way to work? Would a 3 or 4 hour commute be unthinkable, as it is now? In car design: if everyone is a passive passenger, would there be a need to have a car with a forward-facing elevated seat? Would there be a need for windows?
Now the radio is broken, the computer has crashed and oh yeah, the air traffic controllers are offline.
Or everything works but the plane suddenly goes into a steep dive? Or take a bird strike just after takeoff. Could the computer decide to land in the East River, saving lives, or would it turn into other traffic or crash into the eastern side of Manhattan?
The point is, there will always be situations that require judgment: the choice of the lesser of two bad options.
Imagine a cop in the road, and a sign that says “black ice on bridge.” How good does a self-driving car have to be to not simply avoid the cop as a “pedestrian” and then know the road is closed?
Basically to make a self-driving car as good as a slow witted driver it would have to recognize clothes that are “uniformish” and then recognize that two hands held palm outward mean “stop.”
The cop’s badge emits a signal that identifies him to the system as a law enforcement officer. Someone has to make a decision to close a road. It’s not very difficult to create a method that would enable them to make that data available to the larger system. A cell phone app could do it. I can look at my phone and get data on road conditions now. A lot of this technology already exists. To get to a fully automated system, it would need to be improved, but we are not talking rocket science here.
Think 21st century, not 20th.
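The commenter’s idea of publishing road-status data for vehicles to query can be sketched in a few lines. Everything here – the feed, the segment names, the statuses – is invented for illustration; no real protocol or agency system is implied.

```python
# Hypothetical sketch of an authority-published road-status feed that an
# autonomous car queries before entering a road segment. All names and
# data structures are made up for this example.

# Feed keyed by road segment; an authority updates entries when it
# closes a road (e.g., the black-ice-on-bridge scenario above).
closures = {
    ("Main St", "Bridge Rd"): {"status": "closed", "reason": "black ice"},
}

def route_allowed(segment):
    """Return True if the car may enter the segment per the feed."""
    entry = closures.get(segment)
    return entry is None or entry["status"] != "closed"

print(route_allowed(("Main St", "Bridge Rd")))  # False: car must reroute
print(route_allowed(("Oak Ave", "5th St")))     # True: no closure on record
```

The hard part is not the lookup but the institutional side the thread debates: keeping such a feed authoritative, current, and tamper-resistant for every jurisdiction in the country.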
@john personna: What you’re suggesting is already largely covered by some of those DARPA challenges and in various other situations – using computer vision to find the path. Yes it would help greatly to have a rough map of where the road *should* be. And it won’t be particularly difficult for autonomous vehicles to often be better at this than humans, particularly in low-visibility situations.
As I said, though, they’ll most certainly come across a situation that they can’t handle, and they’ll do something stupid. Preferably it would be to stop without causing another vehicle to rear end them, probably requesting operator override or something.
@john personna: We’ve already been over those types of situations in some thread on autosport.com. You’re not the first or last to think of these situations. Yes, police-directed traffic is something that will require a solution.
If it didn’t handle a policeman at all, yes it could result in a catastrophic collision (and seem exceedingly stupid). But around here, police-directed traffic is so rare that I still must insist the average number of accidents could easily be reduced.
@PD Shaw: Somehow the issue with the Geneva Conventions had escaped my notice. Thanks for the heads-up.
As a 30 year veteran in computer solutions design, the first thing I do is look for corner cases that “break it.”
When it isn’t hard to think of them, there are big design holes.
Someone else having thought of corner cases, and waved them away, is a good way to send a project to ruin. You solve these things. You do not say, “well, this, that, the other, and the other other conditions are rare, so let’s ship it.”
(How good is the google system at distinguishing between running dogs and running children? Does it know to crash the car to save the child, but not to save the dog?)
Well, they have different shapes and a dog is almost certainly moving faster, so there are two things to work with to attack that problem.
I wonder what percentage of drivers actually crash the car to avoid hitting a kid. There is also the problem that making a radical maneuver to avoid hitting something can actually result in something worse, like hitting the three kids that you did not see off to the side.
@john personna: Again, you’re not the only person around (not even the only person in this thread) who has been programming for 30+ years. I’m not suggesting they skip the cases that are rare. But I *am* suggesting that even if they did, the average number of collisions would be reduced.
I don’t understand your statement about “waving them away.” Nobody I’ve seen has done that.
Don’t expect the traffic cops to be humans when the drivers are not.
@Tyrell: Check engine lights are problematic because of the amount of information connected to them. Minor things are routed to the same light as extreme things. It’s kind of ridiculous that in this day and age you’re still forced to use a scanner to read the codes. My old ’89 Honda CRX at least had a blinking light on the ECU on the driver’s side firewall/floor.
@anjin-san: The issue you bring up with the “no win” situation is why I am confident it’ll be a while before we have autodriving cars. Answering questions like that might actually be one of the toughest parts.
I had a friend who went to Carnegie and he was involved with one of the DARPA contests involving automated vehicles back around 2004.
So the roots of this run rather deep even at this stage.
Another thing I can pontificate about since I used to work in this area (Ran a group devoted to this in a Japanese corporation and also represented our company in one of those Japanese quangos developing the regulations around the technology.)
Bluntly: Not. Gonna. Happen. Well, I can see it possibly occurring in city-states like Singapore, which could put an insane amount of infrastructure into place for the car to interact with and control what is and isn’t getting onto the road. AND mandate that all cars have such technology installed. But considering the sort of stuff the average driver has to deal with, there’s no way that an idiot signal-response loop (or many of them) is going to be able to handle all the situations that pop up, and to know that a deliberate crash is sometimes the best response in a situation (when otherwise you’ll go over the side of a cliff, for instance).
Now, it might be possible to “mechanize” a section of highway and allow cars to go onto it with sensors, so that that section of highway keeps the cars at non-dangerous speeds and at good distances from each other. But ALL of them would have to be outfitted, and driving-by-human would have to be totally shut off for that section. And somehow you’d have to keep the deer/dogs/cats/raccoons/rabbits out of it. Good luck.
Been there, done that…..
Obviously if you had a reliable, disaster-resistant communication system in place, from every bridge and railroad crossing to every car, then self-driving cars would be that much easier.
Ten years to put them in place on every bridge and train crossing in America?
I’ve said “no way by 2025.”
You’ve waved away a lot to make that schedule.
@john personna: On another blog, somebody pointed out that such objections assume a very high level of awareness and skill on the part of the driver.
Gee, guys… at what point does the Nanny state go too far, huh?
@Eric Florack: The negative externalities of driving are tremendous. We accept them now because we gain so much in return. Driving is easily the top preventable cause of death. If we reach the point where machines can eliminate most of that risk while keeping almost all of the benefit, I think we’ll decide that it’s not reasonable to continue to allow humans to drive where they can kill other humans. The nanny state protects us from ourselves. This would be protecting us from others.