Most of that depends on explicit CREATE STATISTICS commands being run to work around correlations between columns. The general assumption of independence among columns/attributes is pretty universal (as the paper actually says).
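For anyone unfamiliar with the feature, a minimal sketch of what such a command looks like (table and column names here are hypothetical, not from the paper):

```sql
-- Hypothetical table where city and zip_code are strongly correlated.
-- Without extended statistics, the planner multiplies the per-column
-- selectivities as if the columns were independent.
CREATE STATISTICS city_zip_dep (dependencies)
    ON city, zip_code FROM addresses;

-- Extended statistics are only populated by the next ANALYZE.
ANALYZE addresses;
```

The key point is that this is opt-in and per-table: someone has to know which column combinations are correlated and create the statistics object by hand.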
One of the most useful areas for future improvement is making plans more robust against misestimations during execution, for example by using techniques like role-reversal during hash joins, or Hellerstein's "Eddies".
> The general assumption of independence among columns/attributes is pretty universal (as the paper actually says).
So, the paper definitely talks about how the assumption of independent column statistics is a problem on big tables with the default stats configuration.
...But the option of creating correlated, non-independent column statistics (CREATE STATISTICS, added in PostgreSQL 10) did not exist in PG until after this paper. Which was my point.
In my experience, flat-out increasing the statistics sample sizes (the per-column statistics targets) fixes 80%+ of the problems in this paper, with basically no downsides. (You can push the extra ANALYZE work to downtime when no-one cares.)
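Concretely, that tuning looks something like this (table/column names and the chosen target are illustrative; the target can also be raised globally via `default_statistics_target`):

```sql
-- Raise the statistics target for one skewed column.
-- The default target is 100 and the maximum is 10000; a larger
-- target means a bigger sample and more MCV/histogram entries,
-- at the cost of a slower ANALYZE and slightly more planning work.
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 10000;

-- Re-sample during a quiet window so the new target takes effect.
ANALYZE orders;
```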