One of the things that will likely _characterize_ AGI is nondeterministic loops.
My bet is that if AGI is possible it will take a form that looks something like
x_(n+1) = A * x_n (1 - x_n)
where x is a billions-long vector and the parameters in A (sizeof(x)^2?) are trained and also tuned to have period 3, or near period 3, for a metastable, near-chaotic progression of x.
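A toy sketch of that update rule, with the matrix A collapsed to a single tuned scalar r (a deliberate simplification, not from the original comment): this is the classic logistic map, and inside the period-3 window (roughly r in [3.828, 3.841]) the orbit settles onto a stable 3-cycle, while slightly larger r goes chaotic.

```python
def logistic(r, x):
    # the update rule with A reduced to one scalar parameter r
    return r * x * (1 - x)

def orbit(r, x0=0.5, transient=1000, n=6):
    x = x0
    for _ in range(transient):   # discard the transient
        x = logistic(r, x)
    out = []
    for _ in range(n):           # record the settled orbit
        out.append(x)
        x = logistic(r, x)
    return out

cycle = orbit(3.83)  # r = 3.83 sits inside the period-3 window
```

After the transient, `cycle` repeats every three steps (roughly 0.16 -> 0.50 -> 0.96 -> ...), which is the metastability being gestured at: tuned just past the window, the same rule wanders chaotically.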
What's confusing to me is the dual use of the word entropy in both physical science and communication theory. The local minima are somehow stable in a world of increasing entropy. How do these local minima ever form when there's such a strong arrow of entropy?
Certainly intelligence is a reduction of entropy, but it's also certainly not stable. Just as in cellular automata (https://record.umich.edu/articles/simple-rules-can-produce-c...), loops that are stable can't evolve, but loops that are unstable have too much entropy.
So, we're likely searching for a system that's metastable within a small range of input entropy (physical) and output entropy (informational).
If you have any system that tries to gravitate to a local minimum, it is almost impossible not to produce Newton's fractal with it. Classical feed-forward network learning looks pretty much like Newton's method to me. Please take a look at https://en.m.wikipedia.org/wiki/Newton%27s_method
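A minimal sketch of where the fractal comes from (my example, not from the comment): for f(z) = z^3 - 1, Newton's method converges to one of three roots depending on the starting point, and the boundaries between the three basins of attraction are fractal.

```python
def newton_basin(z, steps=50, tol=1e-9):
    # the three cube roots of unity, i.e. the roots of z^3 - 1
    roots = [complex(1, 0),
             complex(-0.5,  3**0.5 / 2),
             complex(-0.5, -3**0.5 / 2)]
    for _ in range(steps):
        z = z - (z**3 - 1) / (3 * z**2)   # Newton step for f(z) = z^3 - 1
    for i, r in enumerate(roots):
        if abs(z - r) < tol:
            return i                      # index of the root reached
    return -1                             # did not converge

basin_a = newton_basin(complex(2, 0))     # lands on the real root
basin_b = newton_basin(complex(-1, 1))    # lands on the upper complex root
```

Coloring each pixel of the complex plane by `newton_basin` is exactly how the Newton fractal pictures are drawn; gradient descent toward a local minimum has the same flavor of basin-boundary sensitivity.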
> My bet is that if AGI is possible it will take a form that looks something like x_(n+1) = A * x_n (1 - x_n), where x is a billions-long vector and the parameters in A (sizeof(x)^2?) are trained and also tuned to have period 3, or near period 3, for a metastable, near-chaotic progression of x.

Li & Yorke, "Period Three Implies Chaos": https://www.its.caltech.edu/~matilde/LiYorke.pdf
That is, if AGI is possible at all without wetware.