
Uber’s Self-Driving Car Didn’t Know Pedestrians Could Jaywalk

https://www.wired.com/story/ubers-self-driving-car-didnt-know-pedestrians-could-jaywalk/


The software inside the Uber self-driving SUV that killed an Arizona woman last year was not designed to detect pedestrians outside of a crosswalk, according to new documents released as part of a federal investigation into the incident. That’s the most damning revelation in a trove of new documents related to the crash, but other details indicate that, in a variety of ways, Uber’s self-driving tech failed to consider how humans actually operate.

The National Transportation Safety Board, an independent government safety panel that more often probes airplane crashes and large truck incidents, posted documents on Tuesday regarding its 20-month investigation into the Uber crash. The panel will release a final report on the incident in two weeks. More than 40 of the documents, spanning hundreds of pages, dive into the particulars of the March 18, 2018 incident, in which the Uber testing vehicle, with 44-year-old Rafaela Vasquez in the driver's seat, killed a 49-year-old woman named Elaine Herzberg as she crossed a darkened road in the city of Tempe, Arizona. At the time, only one driver monitored the experimental car’s operation and software as it drove around Arizona. Video footage published in the weeks after the crash showed Vasquez reacting with shock during the moments just before the collision.


The new documents indicate that some mistakes were clearly related to Uber’s internal structure, what experts call “safety culture.” For one, the self-driving program didn’t include an operational safety division or safety manager.

The most glaring mistakes were software-related. Uber’s system was not equipped to identify or deal with pedestrians walking outside of a crosswalk. Uber engineers also appear to have been so worried about false alarms that they built in an automated one-second delay between a crash detection and action. In addition, the company chose to turn off a built-in Volvo braking system that the automaker later concluded might have dramatically reduced the speed at which the car hit Herzberg, or perhaps avoided the collision altogether. (Experts say the decision to turn off the Volvo system while Uber’s software did its work did make technical sense, because it would be unsafe for the car to have two software “masters.”)

Much of that explains why, despite the fact that the car detected Herzberg with more than enough time to stop, it was traveling at 43.5 mph when it struck her and threw her 75 feet. When the car first detected her presence, 5.6 seconds before impact, it classified her as a vehicle. Then it changed its mind to “other,” then to vehicle again, back to “other,” then to bicycle, then to “other” again, and finally back to bicycle.
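
For a rough sense of the margin those numbers imply, here is a back-of-the-envelope check in Python. The 43.5 mph and 5.6 seconds come from the article; the 0.7 g braking deceleration is an assumed, typical dry-road emergency figure, not something taken from the NTSB documents.

MPH_TO_FTPS = 5280 / 3600                                     # 1 mph is about 1.467 ft/s

speed_ftps = 43.5 * MPH_TO_FTPS                               # ~63.8 ft/s, the speed at impact
seconds_from_detection = 5.6                                  # first detection, per the NTSB documents
travel_before_impact = speed_ftps * seconds_from_detection    # ~357 ft covered after detection

assumed_decel = 0.7 * 32.2                                    # assumed 0.7 g emergency braking, in ft/s^2
stopping_distance = speed_ftps ** 2 / (2 * assumed_decel)     # ~90 ft needed to stop from 43.5 mph

print(f"Travelled between first detection and impact: {travel_before_impact:.0f} ft")
print(f"Stopping distance at an assumed 0.7 g: {stopping_distance:.0f} ft")

Under those assumptions the car covered roughly 357 feet after first detecting the object but needed only about 90 feet to stop, which is why the classification churn and the braking hold-off, rather than sensing range, are the focus of the documents.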

It never guessed Herzberg was on foot for a simple, galling reason: Uber didn’t tell its car to look for pedestrians outside of crosswalks. “The system design did not include a consideration for jaywalking pedestrians,” the NTSB’s Vehicle Automation Report reads. Every time it tried a new guess, it restarted the process of predicting where the mysterious object—Herzberg—was headed. It wasn’t until 1.2 seconds before the impact that the system recognized that the SUV was going to hit Herzberg, that it couldn’t steer around her, and that it needed to slam on the brakes.
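
The report's description suggests that a change in classification started a fresh track, discarding the motion history that path prediction depends on. Here is a minimal sketch of how that kind of design produces the behavior described, using hypothetical names and a toy one-dimensional position, not anything from Uber's actual code:

from dataclasses import dataclass, field

@dataclass
class Track:
    label: str
    history: list = field(default_factory=list)    # past positions used to predict a path

    def predicted_next_position(self):
        # Needs a few observations under the same label before it can extrapolate.
        if len(self.history) < 3:
            return None                             # "no prediction yet"
        step = self.history[-1] - self.history[-2]
        return self.history[-1] + step              # crude linear extrapolation

def update(track, new_label, position):
    if new_label != track.label:
        # Reclassification throws the history away and starts a new track.
        return Track(label=new_label, history=[position])
    track.history.append(position)
    return track

Each flip between "vehicle," "other," and "bicycle" would put a design like this back at "no prediction yet," which matches the report's account of the system repeatedly restarting its guess about where Herzberg was headed.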

That triggered what Uber called “action suppression,” in which the system held off braking for one second while it verified “the nature of the detected hazard”—a second during which the safety operator, Uber’s most important and last line of defense, could have taken control of the car and hit the brakes herself. But Vasquez wasn’t looking at the road during that second. So with 0.2 seconds left before impact, the car sounded an audio alarm, and Vasquez took the steering wheel, disengaging the autonomous system. Nearly a full second after striking Herzberg, Vasquez hit the brakes.
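
As described, "action suppression" amounts to a one-second gate between deciding a collision is imminent and doing anything about it. A hedged sketch of that kind of gate follows; the one-second figure and the 0.2-second alarm timing come from the article, while the names and the exact hand-off behavior are illustrative.

ACTION_SUPPRESSION_S = 1.0    # hold-off between detecting an imminent collision and acting

def imminent_collision_step(time_since_decision, verify_hazard, sound_alarm):
    """Illustrative only, not Uber's code: braking is suppressed for one second
    while the hazard is re-verified; only then is the safety driver alerted."""
    if time_since_decision < ACTION_SUPPRESSION_S:
        verify_hazard()            # keep checking "the nature of the detected hazard"
        return "suppressed"        # no braking, no alarm during the hold-off
    sound_alarm()                  # hand the problem to the safety driver
    return "awaiting operator"     # in the crash, this point came 0.2 seconds before impact

With the imminent-collision decision arriving only 1.2 seconds before impact, a one-second hold-off leaves 0.2 seconds for anyone, human or machine, to do anything useful.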

In a statement, an Uber spokesperson said that the company “regrets the 2018 crash,” and emphasized that its Advanced Technologies Group has made changes to its safety program. According to Uber documents submitted to the NTSB as part of the investigation, Uber has changed its safety driver training in the 20 months since, and now puts two safety operators in each car. (Today, Uber tests self-driving cars in Pittsburgh, and will launch testing in Dallas this month.) The company has also changed the structure of its safety team and created a system where workers can anonymously report safety issues. “We deeply value the thoroughness of the NTSB’s investigation,” the spokesperson added.

Another factor in the crash was the Tempe road structure itself. Herzberg, wheeling a bicycle, crossed the street near a pathway that appeared purpose-built for walkers, but was 360 feet from the nearest crosswalk.

On November 19, the NTSB will hold a meeting on the incident in Washington, DC. Investigators will then release a comprehensive report on the crash, detailing what happened and who or what was at fault. Investigators will also make recommendations to federal regulators and to companies like Uber that are building the tech, outlining how to prevent crashes like this in the future.

For Herzberg, of course, it’s too late. Her family settled a lawsuit with Uber just 11 days after the crash.
 
Driverless cars are coming, and the tech will save thousands of lives as it advances. It will allow the elderly far more mobility than they have today. It will allow drunks to get home safely, as well as protect others on the road from drunk drivers. It will allow people to become more productive during their commutes.

There is no downside to this technology. Mistakes will be made and there will be growing pains, but let's not pretend that people don't die every day from human drivers.
 
So...the software detected a “vehicle othervehicleotherbicycleotherbicycle” in its path but figured it could just drive through it?

That's the other disturbing part of this, in addition to not programming it to see pedestrians outside of crosswalks: apparently, if it couldn't figure out what something was, it was told to just drive right through it?
 
If everybody remembered the things we were taught in kindergarten, this world would be a better place.

Keep your hands to yourself.

Don't take others' belongings.

Look both ways before crossing the street.

It's so simple.
 
Driverless cars are coming, and the tech will save thousands of lives as it advances. It will allow the elderly far more mobility than they have today. It will allow drunks to get home safely, as well as protect others on the road from drunk drivers. It will allow people to become more productive during their commutes.

There is no downside to this technology. Mistakes will be made and there will be growing pains, but let's not pretend that people don't die every day from human drivers.

How much longer do you think you're going to be saying this? What's it been by now, 5 years?
 
That's the other disturbing part of this, in addition to not programming it to see pedestrians outside of crosswalks: apparently, if it couldn't figure out what something was, it was told to just drive right through it?

With the technology still in its infancy, it's unfortunately going to happen - I don't begin to understand the programming error involved. Aside from patiently waiting for the technology to develop, I don't know what the solution is, other than the obvious increased caution from the pedestrians involved.
 
A 20-month investigation into this one incident? The software now is probably nothing close to what it was when it happened.
 
Driverless cars are coming, and the tech will save thousands of lives as it advances. It will allow the elderly far more mobility than they have today. It will allow drunks to get home safely, as well as protect others on the road from drunk drivers. It will allow people to become more productive during their commutes.

There is no downside to this technology. Mistakes will be made and there will be growing pains, but let's not pretend that people don't die every day from human drivers.
I have no problem with driverless cars. I’ll welcome them. But why in the hell would you not program an experimental vehicle to err on the side of safety? Here's the code: Detect an object in path...f'n STOP!
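
In spirit, that policy really is only a few lines. A deliberately naive sketch, with hypothetical brake/continue hooks rather than any real planner API:

def plan_step(objects_in_path, brake, keep_driving):
    # "Detect an object in path... STOP": any unresolved object ahead triggers braking,
    # including objects the classifier can't identify.
    if objects_in_path:
        brake()
    else:
        keep_driving()

The trade-off is that a car this cautious brakes hard for every false lidar return and wind-blown bag, which is presumably the false-alarm worry the article says led Uber to add the one-second delay. The NTSB documents suggest the pendulum swung much too far the other way.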
 
How much longer do you think you're going to be saying this? What's it been by now, 5 years?

Have people been saying autonomous vehicles are going to be here in under five years? Or have they been saying we're going to see a large percentage of autonomous vehicles within a decade or two?

They are going to be here, and that's a good thing. Our roads will be safer.
 
Driverless cars are coming, and the tech will save thousands of lives as it advances. It will allow the elderly far more mobility than they have today. It will allow drunks to get home safely, as well as protect others on the road from drunk drivers. It will allow people to become more productive during their commutes.

There is no downside to this technology. Mistakes will be made and there will be growing pains, but let's not pretend that people don't die every day from human drivers.
There are huge downsides to self-driving cars. Number one on the list is that they are going to drastically change the economy around the world by putting millions out of work.
 
https://www.wired.com/story/ubers-self-driving-car-didnt-know-pedestrians-could-jaywalk/


So do the computer programmers go up on manslaughter charges? It is pretty egregious to assume all pedestrians only cross the street perpendicularly at official crossings.
 
So do the computer programmers go up on manslaughter charges? It is pretty egregious to assume all pedestrians only cross the street perpendicularly at official crossings.

That's another really tricky thing about autonomous vehicles: assigning liability and fault for accidents. Who needs car insurance if you have nothing to do with how the car is operating? Also, somebody at Uber should be charged with something for that death.
 
That's not the subject at hand. If someone is jaywalking and they get hit (by a human driver or a computer driver), they should not be surprised.

That's insane. People have eyes, and even if jaywalking is illegal, one is expected not to run people over if they're jaywalking. If you were jaywalking and saw a car coming at you from 500 yards, I would expect you to be surprised if it did not slow down and ran right over you.
 
You guys in this thread are acting like jaywalking is so bad while you're probably running red lights and breaking the law 10 times a day. "Serves the b*tch right!" lol. Got some real tough guys in this thread.
 
That's insane. People have eyes, and even if jaywalking is illegal, one is expected not to run people over if they're jaywalking. If you were jaywalking and saw a car coming at you from 500 yards, I would expect you to be surprised if it did not slow down and ran right over you.

You cannot bank on people going out of their way to cover up your illegal actions.
 
You guys in this thread are acting like jaywalking is so bad while you're probably running red lights and breaking the law 10 times a day. "Serves the b*tch right!" lol. Got some real tough guys in this thread.

But if I ran a red light and someone hit me, it's my fault. I would not be shocked to end up with a ticket, a broken bone, or dead. I broke the law.
 
https://www.wired.com/story/ubers-self-driving-car-didnt-know-pedestrians-could-jaywalk/



Whoopsie daisy.
 
That's insane. People have eyes, and even if jaywalking is illegal, one is expected not to run people over if they're jaywalking. If you were jaywalking and saw a car coming at you from 500 yards, I would expect you to be surprised if it did not slow down and ran right over you.

You know, there was a human in the car, supposedly trained to identify these situations and prevent a problem. Sometimes humans make mistakes.

The important thing is that eventually the AI in these vehicles is going to make way fewer mistakes than humans. Even then, there will be instances in which fatal accidents occur, traceable to a shortcoming in the vehicle’s capabilities.

The question is, would you prefer 5,000 people killed each year, all traced to programming errors, or 30,000 killed each year due to human error?
 
You know, there was a human in the car, supposedly trained to identify these situations and prevent a problem. Sometimes humans make mistakes.

The important thing is that eventually the AI in these vehicles is going to make way fewer mistakes than humans. Even then, there will be instances in which fatal accidents occur, traceable to a shortcoming in the vehicle’s capabilities.

The question is, would you prefer 5,000 people killed each year, all traced to programming errors, or 30,000 killed each year due to human error?

That depends on whether humans become prohibited from driving themselves and become slaves to AI in order to reach that goal.
 
That depends on whether humans become prohibited from driving themselves and become slaves to AI in order to reach that goal.

I know that the navigation software sometimes gets confused and can't find its way. Imagine if you're stuck in a car circling the block and you can't take over and drive the last 50 yards yourself....
 
Driverless cars are coming, and the tech will save thousands of lives as it advances. It will allow the elderly far more mobility than they have today. It will allow drunks to get home safely, as well as protect others on the road from drunk drivers. It will allow people to become more productive during their commutes.

There is no downside to this technology. Mistakes will be made and there will be growing pains, but let's not pretend that people don't die every day from human drivers.
So idiotic mistakes like this are acceptable, because people die anyway?

:confused:
 