> If 10,000 people were killed on the way to making driverless cars safe, and this wild west approach ended up getting them into mass market use a few years earlier, it would save the lives of thousands of people overall.
This is a perfect example of the biggest and IMO most compelling critique of utilitarianism -- namely, that it's too susceptible to bullshitting.
People decide what they want to do, then completely invent (or, in more sophisticated cases, carefully fabricate) numbers and time-frames in order to hide their unethical opportunism behind the facade of objective moral assessment.
This is especially problematic when you have to estimate things that are pretty much impossible to estimate without completely bullshitting. For example, "number of people killed by not following certain safety standards", or "impact on time-to-completion of some particular safety standard". We're talking about a novel technology, so there's no way anyone has even remotely high confidence in a sufficiently bounded estimate of these numbers. It's all just bullshitting.
> Is this a bad thing? Is it not justified by utilitarianism?
Lots of things are susceptible to misuse. That doesn't invalidate them unless they are in fact misused.
Moreover, I don't see how you can conclude on a whim what's possible to estimate and what is not, given these things take expertise in the relevant field and more than just casual analysis.
Finally, it's not like there is a small margin of error. Can you even really comprehend 30,000 deaths every single year caused by conventional cars and drivers? This is fricking 9/11 times ten happening every year, and you're not even willing to consider that speeding up r&d might save a few lives?
> Moreover I don't see how you can conclude on a whim what's possible to estimate and what is not
Yes, exactly. As in so many cases, coming up with good values for the parameters is the most difficult part of the analysis... that was exactly my point!
> Given these things take expertise in the relevant field and more than just casual analysis.
Except in this case, there are a lot of experts, and they all seem to be taking roughly the same approach.
> Finally, it's not like there is a small margin of error
It's not clear to me, at this point, what your assumptions even are. What percentage of those deaths would be prevented by self-driving technology? How many new types of accidents would be caused by self-driving technology? By what percentage would that number decrease if the regulatory environment were more friendly? And what's your basis for these numbers?
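To make the parameter-sensitivity point concrete, here's a minimal back-of-envelope sketch in Python. Every parameter here is invented purely for illustration (which is itself the point): the sign of the answer flips depending on numbers nobody can actually pin down for a novel technology.

```python
# Back-of-envelope expected-value sketch. All parameters below are
# made up for illustration -- nobody has defensible values for them.

ANNUAL_ROAD_DEATHS = 30_000  # rough US figure cited in this thread

def net_lives_saved(fraction_prevented, years_earlier, testing_deaths):
    """Lives saved by deploying self-driving tech `years_earlier` sooner
    (assuming it prevents `fraction_prevented` of annual road deaths),
    minus deaths incurred during an aggressive testing phase."""
    return ANNUAL_ROAD_DEATHS * fraction_prevented * years_earlier - testing_deaths

# Optimistic assumptions make the gamble look obviously worth it:
print(net_lives_saved(0.9, 3, 10_000))    # 71000

# Pessimistic assumptions flip the sign entirely:
print(net_lives_saved(0.1, 0.5, 10_000))  # -8500.0
```

Same model, same structure, opposite conclusions: the entire argument lives in the unknowable inputs.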
Also, it's perhaps telling that there really hasn't been a single example of regulators blocking self-driving efforts until this one, which is rather extreme. Google, Uber, and the scattering of startups and car companies have all taken a more cautious road to deployment entirely of their own making.
> and you're not even willing to consider that speeding up r&d might save a few lives?
There's a big difference between speeding up R&D and rushing unproven tech to market.
I would absolutely consider speeding up R&D to save that number of lives. I'm 100% behind self-driving cars. I think self-driving cars will eventually reduce the number of road deaths significantly.
I don't think the case has been made yet that today's tech will reduce the number of road deaths. I don't think it's unreasonable to be skeptical of the claim "this works great, trust me".
> Is this a bad thing? Is it not justified by utilitarianism?
These are two VERY different questions.