
To get that certification, do they test you first, or do they wait for you to screw up and then take it away from you??


We know, being humans ourselves, that we have the ability to process lots of complex information in a way that's very difficult for computers to replicate. Hard AI doesn't actually exist (yet). We also have 100 years of humans driving cars worldwide, so we understand well what they're good at and what they're not, and laws & safety designs take all of this into consideration.

Each computer system will be encountering new, diverse situations in the real world without a good understanding of how it will perform. There are lots of crazy hard problems here that no one has solved yet. So to suggest we just automatically trust it because humans make mistakes is foolish when the consequences are so high. If someone came out with a surgery "autopilot" tomorrow, would you suggest it start performing triple bypass surgeries right away without FDA approval, because humans make errors too?


One of the features of the common human firmware is the self-preservation instinct. It lets us trust that our fellow drivers, while still prone to mistakes, generally won't make obviously suicidal errors. Can one say the same about a new ML algorithm running on some board designed half a decade ago? How exactly would one know, without a thorough audit?


They test you first, but if, despite passing the test, you screw up sufficiently badly later, they decertify you.

It's a belt-and-suspenders system.



