
> LLMs have no ability to understand when to stop and ask for directions.

I haven't read TFA so I may be missing the point. However, I have had success getting Claude to stop and ask for directions by specifically prompting it to do so. "If you're stuck or the task seems impossible, please stop and explain the problem to me so I can help you."
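For context, here's a minimal sketch of how that kind of "escape hatch" instruction can be wired into the system prompt via the Anthropic Python SDK (the model name and task text are placeholders, not anything from the article):

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    # The key part: an explicit permission/instruction to stop and ask.
    system_prompt = (
        "You are a coding assistant. If you're stuck or the task seems "
        "impossible, stop and explain the problem to me so I can help you."
    )

    response = client.messages.create(
        model="claude-sonnet-4-5",  # placeholder; substitute whatever model you use
        max_tokens=1024,
        system=system_prompt,
        messages=[{"role": "user",
                   "content": "Refactor this module to remove the circular import."}],
    )
    print(response.content[0].text)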



OK, I think the confusion arises from the probabilistic nature of LLM responses, which blurs the line between "intelligent" and "not intelligent".

Let's take driving a car as an example, and a random decision generator as a lower bound on the intelligence of the driver.

- A professionally trained human, who is not fatigued or unhealthy or substance-impaired, rarely makes a mistake, and when they do, there are reasonable mitigating factors.

- ML models, OTOH, are very brittle and probabilistic. A model trained on blue-tinted windshields may suffer a dramatic drop in performance if run on yellow-tinted windshields.

Models are unpredictably probabilistic: they do not learn a complete world model, but rather the very specific conditions and circumstances of their training dataset.

They continue to get better, and you can induce behavior resembling true intelligence more and more often. In your case you got the model to stop and ask, but if it could do that reliably, it would not make mistakes as an agent at all. Right now these models resemble intelligence only under a very specific light; as the regimes under which they do so expand, they will approach AGI. But we're not there yet.



