
From the preface to the 20th anniversary edition of Gödel, Escher, Bach:

"Meaning cannot be kept out of formal systems when sufficiently complex isomorphisms arise. Meaning comes in despite one's best efforts to keep symbols meaningless! ...When a system of "meaningless" symbols has patterns in it that accurately track, or mirror, various phenomena in the world, then that tracking or mirroring imbues the symbols with some degree of meaning -- indeed, such tracking or mirroring is no less and no more than what meaning is. Depending on how complex and subtle and reliable the tracking is, different degrees of meaningfulness arise."

In other words, when one can reliably ask a language model a question and get a sensible answer, one is forced to conclude that it does in some sense "understand" what it is saying. This, I think, is also the essential philosophical thrust of the Turing Test, which is often misunderstood as a mere benchmark.

(I notice a common objection to examples of LLMs clearly demonstrating understanding is "it saw something similar in the training set". That may be true (though unfalsifiable) in any given instance, but the number of permutations of things LLMs correctly respond to far exceeds the size of any training set. They are certainly generalizing, and interpreting their inputs on a conceptual level.)



It’s a type of understanding, but one we are not really familiar with, because it seems to understand the tasks it was trained on, yet fails in other very basic ways on simple but different tasks.



