I think this kind of misses what's actually challenging with LLM code -- auditing it for correctness. LLMs are more or less fine at spitting out valid syntax; the hard part is that humans still need to be able to read and verify the output.
