I feel a $21,000-per-day fine sounds a lot like "we want to shut you down". He never presented comma.ai as an autopilot, but as a very smart lane assist and adaptive cruise control.
"As you are undoubtedly aware, there is a high likelihood that some drivers will use your product in a manner that exceeds its intended purpose."
The government appears well aware of the hand-waving garbage that companies are trying with "it isn't really autopilot, keep your hands on the wheel, wink." This is exactly the government's job, and a fabulous example of a government reacting to ongoing events that have proven that so-called autopilot systems are inherently a danger to the public at this stage.
Yes. There are a number of federal agencies that are a shit-show at best, but I can't remember a time in my adult life when I was disappointed in the NHTSA.
NTSB has awesome investigators. Some of the smartest people I've met and worked with in my life. They are very serious and dedicated, while having to put up with a lot of roadblocks, nonsense, and other issues from both corporate America and other government agencies.
Can confirm that, at least as of many years ago, the internal NTSB IT staff/programmers/DBAs were a joke. I guess that's just government, though, and like much of government they relied heavily on outside contractors. For years they were a ColdFusion shop, so I'll let your mind start there.
Can you expand on how they are an inherent danger, rather than a possible danger? I only challenge you because Tesla has published several occasions where Autopilot has possibly saved lives by auto-braking.
> Can you expand on how they are an inherent danger, rather than a possible danger?
I do not believe those words (inherent & possible) are in any way mutually exclusive. They might even mean the same thing when describing a latent danger. So I do not quite understand the basis of your challenge. Also, I'm not the parent poster.
My understanding of inherent might not be right, then. I think of it as "it is definitely a danger, and thus can only be described as dangerous," which I would argue is not mutually exclusive with "possible" but is mutually exclusive with "not always dangerous and maybe even anti-dangerous."
So what I mean is, the autopilot could turn out to be safer than manual, in my mind, so how is it "inherently" dangerous.
A swimming pool in your backyard is an inherent danger. That doesn't mean anyone will be injured or killed, but a deep body of water with sheer walls is inherently dangerous.
It would nonetheless also be accurate to describe it as a possible danger, because the danger only manifests in certain circumstances.
> So what I mean is, the autopilot could turn out to be safer than manual, in my mind, so how is it "inherently" dangerous.
That could well be the case but I don't think it eliminates the inherent danger. Maybe it's less dangerous, but cars on the road are just kind of dangerous in general.
I think it's also very generous to assume that the early autopilot systems we have today are necessarily safer than human drivers. Emergency braking is almost certainly a net gain. Self-driving might or might not be. A self-driving add-on cooked up by geohot? I'm doubtful it has yet gone through the kind of engineering rigor we'd expect of a system like this.
"So what I mean is, the autopilot could turn out to be safer than manual, in my mind, so how is it "inherently" dangerous."
In the same way that the ocean is still inherently dangerous even if you're wearing a life jacket, or how airplanes are still inherently dangerous even though aircraft autopilot systems are effectively mainstream at this point.
It's inherently dangerous because it's a bunch of hardware and software that takes over a 2-ton vehicle and turns it into a software-guided missile. That's an inherently dangerous setup that will need a lot of thinking to keep it safe for general use.
Just like a piano suspended from a rope is inherently dangerous. The danger doesn't have to manifest at all for that to be the case.
My statement was that "ongoing events that have proven that so-called autopilot systems are inherently a danger to the public at this stage". The important part to note is "at this stage", where "at this stage" refers to this stage of development and implementation.
Set against the several occasions Tesla has published where Autopilot possibly saved lives, there have been concrete incidents where people have died while Autopilot was in use. Tesla has backed away from these incidents with the same hand-waving that the NHTSA specifically touches on here: that the user was exceeding the system's design purpose.
In this context ("at this stage"), where consumers have historically and will most likely continue to exceed these systems' design purposes and operate them outside of their intended scope, it seems very clear to me that these systems are inherently dangerous to the public. That is, there is no way to make these systems not a danger to the public in their current form.
The NHTSA are fine with auto-braking - indeed, they want it to be standard on all cars in the future[1]. What they're not OK with is companies marketing glorified cruise control as a self-driving feature. Elon Musk likes to lump his implementation of both under the same "Autopilot" umbrella for PR reasons, but you can have the former without the latter - indeed, I'm pretty sure the most prominent example he gave of the former saving a life was with the latter switched off.
Proof that so-called autopilot systems are a danger to the public would be evidence that they perform worse than the average driver. I haven't seen such evidence, what are you referencing?
It's the other way around - you need proof that they perform comparably to an average driver during expected use, and that they don't have absurdly stupid corner cases within that expected use.
We know humans - being humans ourselves - and we know that we have the ability to process lots of complex information in a way that's very difficult for computers to replicate. Hard AI doesn't actually exist (yet). We also have 100 years of humans driving cars worldwide, so we understand well what they're good at and what they're not, and laws & safety designs take all of this into consideration.
Each computer system will be encountering new, diverse things in the real world without a good understanding of how they'll perform. There are lots of crazy hard problems here that no one has solved yet. So to suggest we just automatically trust it because humans make mistakes is foolish when the consequences are so high. If someone came out with a surgery "autopilot" tomorrow, would you suggest it start giving triple bypass surgeries right away without FDA approval because humans make errors too?
One of the features of the common human firmware is self-preservation instinct. It lets us trust that our fellow drivers, while still prone to mistakes, won't generally make obviously suicidal errors. Can one say the same about a new ML algorithm running on some board designed half a decade ago? How exactly would one know, without a thorough audit?
We've been dealing with those corner cases for thousands of years and we know them pretty well. Given that we all run on pretty much the same hardware and firmware (with minor differences), you could say humans have been thoroughly tested for a few millennia, including a hundred or so years of road testing.
So yeah, I think a bit of thorough testing of completely new hardware running completely new software isn't too much to ask for.
When you measure the safety of autonomous cars by counting the miles they travel, and not counting the miles that they intentionally avoid or defer to a human, you have succeeded in creating a metric that looks like it's useful for comparison, but is actually completely meaningless. It's like measuring typing speed while ignoring typos.
Until autonomous vehicles are subject to the full spectrum of conditions that human drivers face, incidents/mile is not a meaningful measure of comparative safety. And until then, the burden of proof should be on the creators.
Not true. Lots of accidents happen in "good" conditions because humans suck at driving.
The way you compare is to give a self-driving car to a person, compare actual usage for that person, and see if owning a self-driving car increases or reduces accidents for that person.
Being perfect only half the time is still revolutionary.
The frequency with which an autonomous driving system relinquishes control is one important metric for estimating the actual risk. Measures like this are better and faster than using the actual tally of catastrophic events, because there is more, and more frequent, information. Engineers working in high-risk fields have been doing this sort of analysis for decades, and there is no reason to make an exception for autonomous road vehicles.
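As a rough sketch of why this works - with all numbers invented purely for illustration - treat incidents as a Poisson process: the relative error of a rate estimate shrinks like 1/sqrt(n), so counting frequent disengagements pins down a rate far faster than counting rare crashes over the same miles.

  import math

  def rate_and_rel_error(events, miles):
      # Poisson point estimate of events per mile;
      # relative standard error scales as ~1/sqrt(n).
      return events / miles, 1 / math.sqrt(events)

  miles = 1_000_000        # hypothetical fleet mileage
  crashes = 2              # rare catastrophic events
  disengagements = 400     # frequent proxy events over the same miles

  for name, n in (("crashes", crashes), ("disengagements", disengagements)):
      rate, rel = rate_and_rel_error(n, miles)
      print(f"{name}: {rate:.1e}/mile, ~{rel:.0%} relative error")
  # crashes: 2.0e-06/mile, ~71% relative error
  # disengagements: 4.0e-04/mile, ~5% relative error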
I have a 100% perfect safety record over probably hundreds of thousands of miles of driving a class B truck with my knees instead of my hands, using cruise control, on straight empty highways in Nevada, Idaho, and Utah, with perfect weather and good road conditions. There's nothing revolutionary about my ability to perfectly drive with my knees...I just very carefully selected the conditions where I was willing to do it.
You may have that perfect safety record, but other people don't.
People still crash every day in good conditions. That's thousands of lives that could be saved by our imperfect, sunny-weather, highway-only self-driving car.
You underestimate how bad human drivers are even in perfect conditions.
You are completely missing the point, so I'll try explaining via a reductio ad absurdum hypothetical.
Let's say that Semi-Autonomous Cars are currently tested on about 80% of the tasks that humans currently face in the real world, and that through some feat of engineering and design we don't have to worry about the ridiculously messy transitions between Autonomous Mode and Human Mode.
And let's say that for those 80% of tasks, the Semi-Autonomous Cars have an accident rate of 0.05% per 100k miles.
And let's also say that Humans on average have an accident rate of 1% per 100k miles.
What you're telling me is that the Semi-Autonomous car has a much better safety record, so give us Semi-Autonomous RIGHT NOW OR WE'RE ALL GONNA DIE!!!
But what I'm telling you is that you don't know enough to make that decision yet, because you don't know exactly how well humans do on the 80% that Semi-Autonomous cars currently handle; you merely know the average accident rate over the full 100% of scenarios. For all you know, Human drivers could have their average 1% accident rate as a result of a 0.0% accident rate for that 80% subset and a 5% accident rate on the remaining 20% that Semi-Autonomous cars can't handle. And if that were the case, then forcing us all to use Semi-Autonomous cars would actually increase the average accident rate from 1% to 1.04%.
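To check that arithmetic, here is the hypothetical worked out (a minimal sketch using only the made-up rates above):

  share_easy, share_hard = 0.80, 0.20  # split of driving tasks

  human_easy, human_hard = 0.0, 5.0    # human rate on the easy 80% vs the hard 20%
  auto_easy = 0.05                     # Semi-Autonomous rate on the easy 80%

  human_avg = share_easy * human_easy + share_hard * human_hard  # = 1.0
  mixed_avg = share_easy * auto_easy + share_hard * human_hard   # = 1.04

  print(human_avg, mixed_avg)  # 1.0 1.04 -> the switch raises the average rate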
Until you fully understand what Semi-Autonomous Cars are capable of, AND know how well human drivers do on that restricted subset, you can't definitively say that current technology is better than humans.
I understand what you are saying, I just disagree with the facts.
"Human drivers could have their average 1% accident rate as a result of a 0.0% accident rate for that 80% subset "
No they couldn't, because they don't.
I am asserting that for this specific 80% of perfect conditions, humans are still terrible drivers. And that being better than them is EASY, because of just how terrible humans are at driving (even in "perfect conditions").
The numbers were deliberately exaggerated to make the point. The fact stands, though, that until you know what the numbers are, the best answer is not easy to come by.
> I am asserting that for this specific 80% of perfect conditions, humans are still terrible drivers. And that being better than them is EASY, because of just how terrible humans are at driving (even in "perfect conditions").
And I am asserting the opposite: that the apparent safety of autonomous vehicles is the result of highly selective conditions with near-laboratory levels of control, conditions so monumentally easy that even humans, as shitty and inattentive as they are at driving, handle them with comparable levels of safety. And I certainly think it's possible that computers will lag behind humans for another 20-50 years while we slowly develop the massive body of fast-heuristics research necessary to make NP-complete planning decisions with the speed and capability of even a below-average human.
Just two days ago, while I was driving on a roundabout, some lady entered the roundabout right in front of my car. Fortunately it was a two-lane roundabout and the left lane was empty, so I could swerve to avoid crashing into her left-hand door (I was coming straight at her).
At the traffic light I asked her if she had not seen me and she went 'Seen what? Where?', so apparently she had not seen me at all, which is pretty impressive given that I was less than 5 meters away from her when it happened.
This sort of thing happens with some regularity. All it takes for a situation like that to turn into an accident, possibly a fatal one, is for me to not be paying attention either.
So we should wait until a statistically significant percentage of people die before we tell everyone, "yep, definitely dangerous, let's add some regulation"?
I wonder if they could shut him down on the basis of car insurance requirements and the legal definition of "driver". If "driver" is defined as the "person" inputting control commands, then either you don't consider the computer a person, and then nobody is legally "driving" the car, or else the software somehow meets the definition of a "person" or driver, in which case it's an unlicensed driver operating the vehicle and insurance may not have to cover it.
The NHTSA has been a strong proponent of self-driving technology and has gone out of its way to make sure there is a path to doing it legally, without pulling the legal 'gotcha' that you described here.
The device would probably be SAE Level 2, and should be regulated as such.
Beyond the obviousness of what you have overlooked, it's also worth pointing out that not all autonomous-car usage will be substitutive. What if I just send my autonomous car to go take a picture of something? I might not have done it before, but now I will, because it only costs a few cents per mile and none of my time.
Another really cool use case will be sending your car to drive around the block for an hour, or back to your house to park, while you're out. Beats paying $10 to park for an hour.
Lots of interesting effects there, all the way up to basic urban land-use decisions.
Suddenly the price people will pay for parking is going to be closely related to the price of gas. The maximum value of a parking space in the city is basically the cost to send the car back out to some suburban parking garage during the day and have it come back in later for you. My guess is that the market-clearing price is going to be a lot lower than it is today. Lots of urban parking structures might end up getting repurposed for other uses, if you can't fill them at hundreds of dollars per space per month.
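Back-of-the-envelope, with every number made up purely for illustration:

  round_trip_miles = 20   # downtown <-> cheap suburban garage and back
  cost_per_mile = 0.20    # hypothetical fuel plus wear, in dollars
  remote_hourly = 2.00    # hourly price of the remote space

  # The most a downtown space could charge for an hour before
  # the car is better off just driving away:
  ceiling = round_trip_miles * cost_per_mile + remote_hourly
  print(f"~${ceiling:.2f} per one-hour stay")  # ~$6.00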
In the short run I could see the average self-driving car's gas usage being higher than a conventional car's, but in the longer run the same self-driving features might make fully electric cars more palatable. Long charging times don't really matter if you can let the car go and charge itself while you're at work, for example.
And if the place where it goes to charge itself happens to have a pile of solar and wind, even better! It's way easier to do a big installation in a less populated area than right in the centre of downtown.
By "we want to shut you down" I meant the NHTSA isn't presenting him with so many regulations and requirements that meeting them is impossible, effectively ending the product. They are definitely using the fine to force him to comply with their request to meet the very minimum bar of answering their questions about his product. This, as mentioned, makes total sense to me in terms of what government should do to protect the safety of its citizens.
Actually, even the $21k fine should be nothing if he got $3M in venture capital. What's questionable is whether somebody can safely build an almost-self-driving car without a good team with many years of experience in the field.
The fine is punitive. Noncompliance is often not acceptable when it comes to safety and environmental regulations. When I worked in plants, you could bet there were only two groups that would put the fear of God into operations: OSHA and the local environmental regulator. Fines could be thousands per hour (or per infraction, which could be collected hourly). Even if a regulation is unreasonable (and for damn sure it can be - argon metering was my personal most-hated), you'd better comply until you successfully sue or negotiate it down.
And it is likely necessary. Meeting regulations is far harder and more expensive than just the warm feeling of having met the reqs. If there weren't a stick held up as a credible threat, plants would just trash everything. Not because they're evil, but because plants simply can't care - they aren't people. Thus these agencies are impersonal and brutal in kind.
Not to dispute your point, which I agree with, but --- argon metering? How can that be a safety or environmental hazard? You don't want to lose it overboard because it's expensive, but it's a noble gas, so it should be inert, right? What am I missing?
True, but so can nitrogen, propane, neon, carbon dioxide, hydrogen and fluorine (although you'd need to be pretty unlucky to survive long enough breathing fluorine to asphyxiate).
I know argon is denser than air at STP, and so tends to accumulate in enclosed spaces --- but propane is even denser. (Leading cause of catastrophic accident in yachts: gas explosion. It's not like you can have a hole in the bottom to let the gas out in the event of a leak.) And any plant which deals with liquefied gases is going to prioritise ventilation anyway, which should easily deal with any gas buildup.
The parent sounded like there was something specific to do with argon which made the regulators tetchy, and I wouldn't have thought that asphyxiation risk would be enough. Am I wrong?
Propane has a very strong odor added to it (by the propane manufacturers), which means it'll be pretty damn obvious the moment you enter a propane-rich environment.
The fine is only if they fail to provide the requested information by the deadline given. The information requested is not that great and the deadline is entirely reasonable, so the fine is very much not "we want to shut you down," just "we want you to provide this information."
Well, the information requested, if you read the actual definitions, uses a lot of "any" and "all", so if you take that literally, it's not exactly an elevator-pitch-style description they're asking for.
"Describe in detail the features of the comma one"
and
"Provide a detailed description of the conditions under which you believe a vehicle equipped with comma one may operate safely ... [and] a detailed description of the basis for your response to [the previous question] including a description of any testing or analysis..."
and
"Describe in detail any steps you have taken to ensure that the installation of the comma one ... does not have unintended consequences..."
I'm not saying the request is unreasonable, but to say that "the information requested is not that great" isn't really true. To completely answer those questions would take a lot of effort. (Assuming of course that he's done such testing. His canceling the product rather than responding makes you think he realized his answers would not be credible...)
How much is "a lot"? This seems like a couple of engineer-days if it gets down into the dirty details, which isn't huge.
Ironically, if he hasn't done any such testing, the answers would be much easier to generate. "We haven't done any." Not that the NHTSA would take it well....
I could imagine taking a week to do it, and he has 11 days.
I've been involved in writing similar descriptions to provide to NASA, and when you get down to exactly how each module works and need to enumerate every branch, it ends up being a lot of work.
He would have had 11 days to produce something. If he'd wanted to play ball, he could have produced something in the way of a response, then waited for a response to that requesting clarification, then provided more information, etc. I've dealt with similar agencies and often as long as you are being cooperative, the "clock" is fairly easy to reset.
It would appear that he didn't have any interest in cooperating at all, though. Which inevitably leads me to wonder if it's not more of a face-saving maneuver; better to blame failure on the pesky regulators than admit your technology isn't up to the investor pitches you might have made.
I would have assumed that anyone serious about releasing this type of product would have made contact with the relevant government departments and worked with them along the way to ensure they meet the requirements.
Also, the documentation being asked for is something the company should be able to put together fairly easily because they ought to have asked and answered these questions along the way to building the product.