Only because they have also been exposed to contrary points of view. If you actively indoctrinate a human into a point of view, they are very likely to maintain it no matter how odious it is. If you train an LLM on odious input, it will produce odious output, just like humans. I really see no substantive difference.
