
That's because it's just a language model. It's been trained to find a probable completion to a piece of text, to predict a likely next word. It's not trained for human interaction. It's not an agent. It has no motives or goals. It might seem like it does, but that's more of a side effect of language modelling.
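To make that concrete, here's a toy sketch (mine, not anything a real model actually runs): a bigram "language model" that just counts which word follows which in a corpus, then completes a prompt by repeatedly emitting the most probable next word. There's no goal or agency anywhere in it, only counting and lookup. Real models do this with neural networks over tokens, but the training objective is the same shape.

```python
from collections import Counter, defaultdict

def train(corpus):
    """Count how often each word follows each other word."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def complete(counts, prompt, length=5):
    """Greedily extend the prompt with the most probable next word."""
    words = prompt.split()
    for _ in range(length):
        followers = counts.get(words[-1])
        if not followers:
            break  # never seen this word; nothing to predict
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train(corpus)
print(complete(model, "the", length=4))
```

Everything it "says" is just the statistically likeliest continuation of its input; any apparent intent is read into it by the human.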

