Hacker News

> Unless we identify a problem set where event-based computing with spikes is the inherently natural solution, I find it hard to imagine that spiking networks will ever outcompete ANN solutions.

i'd guess that domain would be real-time (unbuffered / unbatched) processing of raw sensory data. it seems reasonable that biological neural systems evolved for optimal processing of sensory information encoded temporally in spike trains, yet the few papers on neuromorphic computing i've seen tend to try to hammer spiking neural networks into a classic batch-based machine learning paradigm and then score them against batch-based ANNs.



This, very much so.

On the other hand, even biology often uses rate codes, which are inefficient and limited in what they can represent compared to all those timing-sensitive codes: latency codes, rank-order codes, phase codes, pattern codes, population codes, etc.
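To make the rate-vs-latency contrast concrete, here's a minimal sketch (illustrative only; the mappings and parameters are made up, not from any real model) of how the same scalar stimulus could be decoded from a spike count versus from the timing of a single first spike:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stimulus intensity in [0, 1] that a neuron should communicate.
stimulus = 0.8
window = 1.0       # observation window in seconds
max_rate = 100.0   # spikes/s at full intensity (arbitrary choice)

# --- Rate code: intensity -> firing rate, decoded by counting spikes. ---
# Needs many spikes (or many neurons) per window to average out Poisson noise.
n_spikes = rng.poisson(stimulus * max_rate * window)
rate_estimate = n_spikes / (max_rate * window)

# --- Latency code: intensity -> time of the FIRST spike. ---
# A single spike carries the value: stronger input fires earlier.
first_spike_t = (1.0 - stimulus) * window   # idealized, noise-free mapping
latency_estimate = 1.0 - first_spike_t / window

print(rate_estimate, latency_estimate)
```

The point of the toy: the latency decoder recovers the stimulus from one spike's timestamp, while the rate decoder burns a whole window of spikes on the same job, which is the inefficiency the comment above alludes to.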

And when we look at the technical domain, event-based vision cameras spew out what could pass as spikes; but even in that area spiking networks have proven too limited compared to event-based algorithms that were only vaguely bioinspired. And this technology took about 20 years from conception to making a breakthrough on the market.

So the question is whether spiking networks are indeed the future of computation. But without doubt, the concept is very interesting academically. A bit like Haskell :D


> And when we look at the technical domain, event-based vision cameras spew out what could pass as spikes

i have not seen these, i'm curious. do they try to mimic early stages of the human visual system? (i.e., a mechanical V1, with outputs that actually look like the spatial and frequency tuning often found in V1 neurons?)

edit: <3 wikipedia: https://en.wikipedia.org/wiki/Event_camera

> So the question is whether spiking networks are indeed the future of computation. But without doubt, the concept is very interesting academically. A bit like Haskell :D

or if it will be something we hand-code at all... i suspect that the future of computation will be derived by the machines themselves. if one can use GANs to generate entire novel cryptosystems (i read a while back that google was doing this), it seems only natural that they could be used to find optimal computational paradigms.

although many would argue that optimal computation is computation that is best understood by humans.


> i have not seen these, i'm curious.

The original incarnation goes by the name of Dynamic Vision Sensor (DVS), marketed by iniVation, an ETH Zürich spinoff. Prophesee is another manufacturer with their own IP. I think Sony makes event-based cameras too; perhaps others do as well (Samsung? or was it Huawei?)

They mimic the retina: each pixel emits an event (a 'spike', if you wish) whenever the change in log luminance crosses a threshold. There is no frame clock; each pixel works asynchronously. The technology is known for extremely low latency, high temporal resolution, and ultra-high dynamic range. Have a look ;)
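A rough sketch of that per-pixel behavior (my own simplified model, not any vendor's actual sensor pipeline; the function name and threshold value are made up): the pixel tracks log intensity and emits a +1 (ON) or -1 (OFF) event each time the signal moves by more than a contrast threshold from the level at the last event.

```python
import numpy as np

def dvs_pixel_events(intensity, times, threshold=0.2):
    """Toy DVS-style pixel: emit (timestamp, polarity) events whenever
    log intensity drifts more than `threshold` from the last event level."""
    log_i = np.log(intensity)
    ref = log_i[0]                  # reference level at the last event
    events = []
    for t, x in zip(times[1:], log_i[1:]):
        while x - ref > threshold:  # brightness rose enough: ON event
            ref += threshold
            events.append((t, +1))
        while ref - x > threshold:  # brightness fell enough: OFF event
            ref -= threshold
            events.append((t, -1))
    return events

# A brightness ramp yields a sparse stream of ON events; a constant
# signal would yield none at all -- there is no frame clock.
times = np.linspace(0.0, 1.0, 1000)
intensity = np.exp(times)           # log intensity rises linearly 0 -> 1
events = dvs_pixel_events(intensity, times, threshold=0.2)
print(len(events), events[:3])
```

Note that the output is data-driven: a static scene produces no events at all, which is where the low-bandwidth and low-latency claims come from.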



I wonder if you could run a DVS in reverse and use it to drive an OLED display pixel by pixel


That just triggered me ... I actually worked on event-based vision, specifically old DVS cameras, some time ago ... and I did the processing side of things in Haskell :-)


What are these different types of codes you mention? Got a textbook/lecture series to learn more? :)


Um, Dayan & Abbott's "Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems" comes to mind. It's rather neuroscience-heavy, though.

Then there is "Spikes: Exploring the Neural Code" (https://mitpress.mit.edu/books/spikes). This one focuses a lot on information theory.


I've looked at this before to some extent. The problem, it seems to me, is that (at least for training) adapting a model in real time in response to new input is just too hard to do in practice. In reality, training a model usually involves lots of tweaks, fine-tuning, and iteration that require human intervention, at which point you don't really need real-time anymore. Perhaps there is a case for using such approaches for inference, although that's not something I've thought about too much.



