What problems have LLMs (so models like ChatGPT, Claude, Gemini, etc., not special-purpose algorithms like MCTS tuned by humans for specific tasks, as in AlphaGo or AlphaFold) solved that thousands of humans worked on for decades and didn't solve (so, as OP said, novel)? Can you name 1-3 of them?
I'm not redefining anything; that's the definition of "novel" in science. Otherwise, this comment would be "novel" too, because I bet you won't find it anywhere on Google, but no one would call it novel.
Show me these novel problems that were solved by LLMs; name more than 3, then.
You're seriously insisting that the definition of "novel" in science only includes things that thousands of people have worked on for decades and haven't solved?
An example is the Erdős problems (see problem 124).
But LLMs have also solved Olympiad problems; see the results of IMO 2025. You can say that these are not interesting or challenging problems, but in the context of the original discussion, I don't think you can deny that they are "novel". This is what the original comment said:
> Not so much amazing as bewildering that certain results are possible in spite of a lack of thinking etc. I find it highly counterintuitive that simply referencing established knowledge would ever get the correct answer to novel problems, absent any understanding of that knowledge.
I think in this context, it's clear that IMO problems are "novel": they require applying knowledge in some way to solve something that isn't in-distribution. It is surprising that this is possible without "true understanding"; or, alternatively, LLMs do have understanding, whatever that means, which is also surprising.