Artificial neural networks work well as a model of the behavior of biological neural nets, up to a point. They are definitely a piece of the puzzle, a module for duplicating a certain type of information processing found in biological brains. When they suddenly became famous in the 80s and 90s, the mistake was to assume we could build everything, including an artificial mind, if we just had a large enough neural net. That was essentially the IBM approach: just throw enough resources at it and it will become intelligent. It turns out more is needed to build actual intelligence.
I firmly believe that neural nets and other techniques are still essential components for implementing artificial minds. We now know that processing power and storage space alone are not enough; a brain needs actual software that tells it what to do with information and how to organize itself. That's essentially how I became very skeptical of the kind of brute-force AI research being conducted today. For instance, modeling a synapse chemically down to the atomic level is nice for basic research, but it's definitely not the way to implement AI. For that, we need larger abstractions that are functionally equivalent and translate well into efficient computer code, and we need to figure out how to make these pieces of code interact with each other in a meaningful way. My wild guess is that today we're not even constrained by computing power or storage; we just lack the correct design.
My own suspicion is that real "general" AI probably will be developed by reverse engineering the human brain and working backwards to the key processes and structures that provide general intelligence.
Of course, this is assuming that there isn't something deeply spooky driving human consciousness - a possibility I used to regard as terribly silly, but some of the concepts alluded to in (of all places) Neal Stephenson's Anathem have got me wondering about such things again.
She sees the right brain hemisphere as being our "consciousness" wetware connecting us to others.
Parts of the video are esoteric, but it's fascinating to hear this first-person account from a brain researcher, especially of the morning of her stroke, when her left hemisphere was damaged by a spontaneous brain hemorrhage.