Unless there is scope for optimization, the representations must, by necessity, be essentially the same, which reduces the question to one of parsing.
It's still not entirely clear to me how important this work is, though heavyweights like J. Tenenbaum and others continue to work on it.
The page says that many models can't be subsumed under PGMs, and that is true for things like PCFGs and other recursive constructions (martingales, stochastic processes, and so on).
However, the things PPLs are known for, like the inverse-graphics work, really are PGMs. It's entirely possible that what I'm asking is akin to questioning the significance of deep neural networks and their assorted frameworks in contrast to the chain rule; but since there is more at stake here than getting X% on ImageNet, I think it is a reasonable question to ask. Is it about the representation or the implementation?
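To make the "really are PGMs" point concrete, here is a minimal sketch (my own toy example, not from the page): the classic sprinkler network written once as a generative program, in PPL style, and once as an explicit product of factors, in PGM style. Every sampling statement in the program corresponds to exactly one node of the fixed graph, so the two are just different syntaxes for the same joint distribution.

```python
import random

def flip(p):
    """Bernoulli draw with success probability p."""
    return random.random() < p

# PPL view: a generative program. Each line samples one variable
# conditioned on the ones already sampled.
def sprinkler_program():
    rain = flip(0.2)                                 # P(rain)
    sprinkler = flip(0.01 if rain else 0.4)          # P(sprinkler | rain)
    wet = flip(0.99 if rain or sprinkler else 0.0)   # P(wet | rain, sprinkler)
    return rain, sprinkler, wet

# PGM view: the same joint, written as a product of the three factors.
def joint(rain, sprinkler, wet):
    p = 0.2 if rain else 0.8
    p *= (0.01 if rain else 0.4) if sprinkler else (0.99 if rain else 0.6)
    p *= ((0.99 if rain or sprinkler else 0.0) if wet
          else (0.01 if rain or sprinkler else 1.0))
    return p

# Sanity check: the factorized joint sums to 1 over all eight assignments.
total = sum(joint(r, s, w)
            for r in (False, True)
            for s in (False, True)
            for w in (False, True))
```

The point of the sketch is that nothing about the program form adds expressive power here; the control flow is bounded, so the graph is fixed and finite.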
The research listed on the page (http://probabilistic-programming.org/research/) appears to take the following directions:

Stochastic-processes-ish:
- Inference techniques for handling recursion.

PGM-related work:
- Parallelization.
- Optimizations for MCMC based on structure.

(Old-school) theoretical CS:
- Formalisms reminiscent of formal languages.
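The recursion item is where programs genuinely escape fixed graphs. As a hedged illustration (again my own toy example): a recursive generative process in the style of a PCFG rule S -> 'a' S | 'b'. The number of random choices made is itself random and unbounded, so there is no fixed, finite set of nodes over which a graphical model could be drawn.

```python
import random

def sample_string(p_recurse=0.5):
    """Sample from S -> 'a' S | 'b': with probability p_recurse emit
    an 'a' and recurse, otherwise terminate with 'b'."""
    if random.random() < p_recurse:
        return 'a' + sample_string(p_recurse)
    return 'b'

def string_probability(s, p_recurse=0.5):
    """Exact probability of a string 'a'*n + 'b' under the same process:
    P = p_recurse**n * (1 - p_recurse), a geometric distribution over n."""
    n = len(s) - 1  # number of leading 'a's before the terminating 'b'
    return p_recurse ** n * (1 - p_recurse)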