A very subjective opinion. CPU performance depends on many factors. The article does not give the results of `perf` performance tests. Neither the kernel's compile-time configuration nor the runtime `sysctl` parameters are given. How is the RAM allocated to the CPU cores? How are IRQs used? Does the RAM have support for error correction? The only fact the article claims is that setting the `mitigations=off` flag in the system configuration introduces some imbalance which can make certain tasks execute slower under certain conditions.
I agree there could be more details about the specific system used.
At the same time, I'd take this as more valuable & informative than not: a reasonable indicator of what most users might expect. There are quite a large number of different tests here that give quite good representation.
A lot of your questions don't seem concerning to me. Perf is just one test, and we have plenty of other great real-world tests here. Kernel parameters are set at boot, and those arguments are shown. RAM isn't allocated to CPUs: there is a single I/O die which all RAM is connected to, and cores access it uniformly over Infinity Fabric. There's no tuning of IRQs implied, but if you have similar hardware it should reproduce without fiddling; more to the point, the IRQs should be working similarly in both cases, so the default config is more than good enough to compare the two with. I would not expect ECC to make a vast difference, and it's notable that some level of ECC is already built into all DDR5 anyhow ("ECC" is now a matter of whether the ECC extends to the CPU or is on-chip only). None of these points seem strong enough that I'd consider discarding or ignoring these findings.
This article is a pretty good starting place that paints a good general picture. It's a service & contribution & informs nicely. Users who have more specific concerns should follow up, and ideally do as good a favor as Mr Larabel did & blog their findings.
That was kind of what sprung to my mind as well. Since I'm not really familiar with the mitigations or their low-level details, I wouldn't be able to speculate on what kinds of factors might cause the performance effects to vary randomly depending on specific circumstances, though.
Kind of like how it turns out that the performance effects of various compiler optimizations can vary a lot depending on the exact memory layout of the compiled program, which might change from one compiler version to another or due to minor changes to the program code. If you want generalizable results, the proper way to benchmark might be to test with multiple random layouts [1].
I don't know if there could be similar potential side effects from vulnerability mitigations that might cause their performance effects to vary in an effectively random way. Perhaps the mitigations are different enough to not vary so much. I don't really know. But the question kind of unavoidably comes to mind.
If I had to guess: for the past few years, CPUs and software have been tested and optimized on kernels with Spectre-like mitigations enabled. That caused a lot of trend shifts, like not relying on automatic branch prediction; some code even explicitly avoids speculative execution altogether, so there's no performance gain left to reclaim with mitigations disabled. Even more, since they don't test with mitigations disabled, they fail to notice the performance issues that only show up in that configuration.
You are not kidding, I always thought Phoronix was a kind of no-nonsense website until I saw your comment and disabled uBlock Origin and the built-in Firefox blocker.
The web without an ad blocker is really something else.
Pardon my English, it's not my native language.