
There was an interesting Substack post that went through the logic behind this type of failure[1].

The tl;dr is that phrasing the question as a Yes/No forces the answer into, well, a yes or a no. Without a pre-answer reasoning trace, the LLM has to commit based on its training data, which mostly predates 2025, so it picks no. And since generation is autoregressive, nothing it outputs afterward can change the token it already committed to.

[1] https://ramblingafter.substack.com/p/why-does-chatgpt-think-...
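A minimal sketch of the effect, assuming the OpenAI Python client; the model name and prompt wording are illustrative, not taken from the linked post:

  # pip install openai; reads OPENAI_API_KEY from the environment
  from openai import OpenAI

  client = OpenAI()

  def ask(prompt: str) -> str:
      """Send a single user message and return the model's reply text."""
      resp = client.chat.completions.create(
          model="gpt-4o-mini",  # assumption: any chat model works here
          messages=[{"role": "user", "content": prompt}],
      )
      return resp.choices[0].message.content

  # Pushes the first token toward "Yes"/"No" before any reasoning happens,
  # so the answer falls back on training-data priors about the date.
  print(ask("Is 2026 next year? Answer Yes or No."))

  # Elicits the year first; the later Yes/No can condition on that output.
  print(ask("What is the current year, and is 2026 next year?"))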



That does make sense, given that the prompt "What is the current year and is 2026 next year?" produces the correct answer.



