Hacker News

"just" advertisements when ChatGPT is designed to be sycophantic and manipulative


No, it's not designed to be that way. I literally can't get ChatGPT (5.2) to agree with me on many things; it's just not possible. I'd ask you to give me an example of it being sycophantic and encouraging delusional behaviour (as a shareable link).


https://arstechnica.com/tech-policy/2026/01/chatgpt-wrote-go...

Everyone who works on ChatGPT has blood on their hands.


Do you have any evidence that this claim is true? That it was "designed" for this? That would be a pretty difficult conspiracy for them to keep secret.


They famously had to dial down the sycophancy. ChatGPT is designed to get you addicted to it: https://openai.com/index/sycophancy-in-gpt-4o/


Do you think this is evidence that it was designed that way, as opposed to the exact opposite?

It was not designed that way; the sycophancy emerged on its own, so they had to do extra work to specifically remove it.

This is like saying that Python is designed to be slow and linking to a post about speeding up the Python interpreter as evidence.



