
Holy shit, I feel the same. I was arguing with an LLM one day about how to do Kerberos auth on incoming HTTP requests. It kept giving me bogus advice that I could disprove with a tiny snippet of code. I would explain; it would react just like yours. After a few rounds it would give the first answer again. Awful. So infuriating.
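
For context, a minimal sketch of the kind of thing I mean by Kerberos auth on incoming requests, assuming Flask and python-gssapi with a server keytab available (e.g. via KRB5_KTNAME). The route and names are illustrative, not what the bot suggested:

    # Accept SPNEGO/Kerberos ("Negotiate") auth on an incoming HTTP request.
    # Assumes Flask and python-gssapi are installed and the acceptor's
    # keytab is reachable; the /protected route is illustrative only.
    import base64
    from flask import Flask, request, Response
    import gssapi

    app = Flask(__name__)

    @app.route("/protected")
    def protected():
        auth = request.headers.get("Authorization", "")
        if not auth.startswith("Negotiate "):
            # No token yet: challenge the client to start the handshake.
            return Response(status=401,
                            headers={"WWW-Authenticate": "Negotiate"})

        in_token = base64.b64decode(auth[len("Negotiate "):])
        ctx = gssapi.SecurityContext(usage="accept")  # default acceptor creds
        out_token = ctx.step(in_token)

        if not ctx.complete:
            # Kerberos usually finishes in one round trip; a real server
            # would keep per-connection state for multi-step mechanisms.
            challenge = "Negotiate " + base64.b64encode(out_token).decode()
            return Response(status=401,
                            headers={"WWW-Authenticate": challenge})

        # ctx.initiator_name is the authenticated Kerberos principal.
        return f"Hello, {ctx.initiator_name}"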

I had a similar issue with gnuplot. The LLM-suggested scripts frequently had syntax errors. My take: LLMs are awesome when they work; otherwise they're a time suck / net negative.
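
One mitigation is to run each suggested script through gnuplot before trusting it, so syntax errors surface immediately. A rough sketch, assuming gnuplot is installed and on PATH; the script text is just a placeholder:

    # Sanity-check an LLM-suggested gnuplot script by piping it through
    # gnuplot. When reading a piped script non-interactively, gnuplot
    # aborts on a syntax error with a nonzero exit and a message on stderr.
    import subprocess

    script = 'set terminal dumb\nplot sin(x) title "demo"\n'  # placeholder

    result = subprocess.run(
        ["gnuplot"],          # gnuplot reads commands from stdin
        input=script,
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        print("gnuplot rejected the script:")
        print(result.stderr)
    else:
        print("script parsed and ran cleanly")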

Sometimes they just get into "argument simulator mode". There's a lot of training data of people online having stupid arguments.

You can write any program you want, as long as it's Flappy Bird in reactjs.


Willing to name “an LLM”?

Was this a local model?


Good question. It was not my intent to be evasive about the LLM; I should have included it in my original post. I tried the free versions of both OpenAI ChatGPT and Google Gemini. To be clear, by "free" I mean just going to the website and starting a chat with the bot.


