For all the hype surrounding AI, virtual reality, and augmented reality as the next big trends in tech, I feel like brain-computer interfaces are what is really going to change our world in unprecedented ways. The technology is still a long, long way from having a visible impact for most people, but I am more interested/excited/scared about it than anything else going on in tech right now.
There are some very difficult challenges in brain-computer interfaces (BCI), some of which involve huge barriers presented by human biology. It's rather depressing.
Presently, there is no good way to read data from the brain to a device. You can drill a hole in your head and attach electrodes to your brain tissue, but this procedure is only recommended for treating debilitating diseases like Parkinson's. Furthermore, there is the issue of scar tissue developing over time, as a small region of your brain is being bombarded with electricity and exposed to foreign metals that trigger an immune response.
I do think a spinal tap of some sort might make more sense (rather than patching directly into brain tissue like in the Matrix), but you still get scar tissue.
Non-invasive methods don't cause scar tissue, but it's unclear whether measuring the surface activity of the brain yields sufficient information. Furthermore, the readings are highly noisy due to ambient electrical interference and muscle activity. You shouldn't move much if you want even OK readings.
The BCI dream is to have a lightweight headset that you put behind your ears like in Iron Man 3 or Big Hero 6 (and then you control robots with your mere thoughts). However, I think that kind of technology is really far away, short of some huge breakthroughs in compressed sensing and possibly superconductor physics.
Even more difficult than reading data is writing it back in. There have been very early results with getting monkeys to telepathically control each other's muscle movements, but my understanding is that it's not very reliable.
On the other hand, going in the other direction (from machine to brain) is actually surprisingly easy. It turns out that, when you take what is possibly the best general-purpose signal and pattern interpretation and recognition engine in the entirety of our observable[1] universe (aka the human brain), and combine it with any kind of consistent, interpretable feedback from the outside world, after a little while it just kinda figures it out and goes along with it [2]. From stimulation under the tongue producing sight in the blind to a vibrating belt compass giving people a subcognitive sense of north vs south, the brain is really damn good at doing this. So in that sense, it's looking incredibly promising.
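The vibrating belt compass mentioned above can be sketched in a few lines. This is a toy illustration (the motor count, indexing convention, and function name are all made up, not taken from any real device): given a compass heading and N motors spaced evenly around a belt, pick the motor closest to magnetic north so the wearer gets a constant directional cue.

```python
# Toy sketch of a "compass belt": pick which of N evenly spaced vibration
# motors should buzz so that the northmost one is always active.
# Conventions here are arbitrary, chosen for illustration only.

def motor_for_heading(heading_deg, n_motors=8):
    """Return the index of the motor pointing closest to magnetic north.

    Motor 0 sits at the wearer's front; indices increase clockwise.
    If the wearer faces heading_deg (0 = north), then north lies at
    -heading_deg relative to the wearer's front.
    """
    north_relative = (-heading_deg) % 360.0
    sector = 360.0 / n_motors
    return int((north_relative + sector / 2) // sector) % n_motors

print(motor_for_heading(0))    # facing north -> front motor (0)
print(motor_for_heading(90))   # facing east  -> motor on the left (6)
print(motor_for_heading(180))  # facing south -> back motor (4)
```

The point of the original experiments is that a signal this simple, delivered consistently, is enough for the brain to absorb it as a background sense.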
The success of innervated prosthetics[3] suggests that the peripheral nervous system is a viable point of access as well. And, what's more, these interfaces are already bidirectional.
This is absolutely wild conjecture, but I've had the thought for a while that you could conceivably induce (or externally grow and then graft) a new peripheral nervous "port" somewhere. It'd be a bit like teeing off of a plumbing line (understatement of difficulty level: over 9000), and then you'd have a dedicated cluster of new nerves you could start to "play" with, sending and receiving arbitrary signals. But there are some very difficult ethical questions involved for anyone hypothetically interested in pursuing this approach.
Ultimately, I think that's what (rightly) holds BCI back: invasive procedures are just too ethically questionable for anything other than self-modification, which is okay in the body-hacking community, but still decidedly un-mainstream.
[1] At this point in time, since we've yet to find extraterrestrial life, caveat emptor, etc etc etc
[2] Points-of-entry for googling and wiki'ing: "neural plasticity" and "sensory substitution"
[3] Terms to search for: "innervated prosthetics" and "targeted reinnervation"
My personal scifi idea: induce the growth of such a cluster of nerves in the neck, manipulate some of them to better respond to specific stimuli as an input, and try to force the brain to rewire itself to use the other nerves in the cluster as an output (not sure how to do this, but see reinforcement [1] for my current idea).
Then you make a necklace as an interface with the noninvasive BCI hardware, communicating through your skin. (considering all the other noninvasive options, a necklace seems to be the most appropriate IMHO)
I've got no idea how to make the nerves susceptible enough to provide useful bandwidth, but maybe a properly primed stem-cell injection could grow usable nerves for it.
From there you can give the brain new senses via various sensors, give us high-speed mental computer interfaces (awesome for quick calculations), and ditch the need for physical remote controls.
> and try to force the brain to rewire itself to use the other nerves in the cluster as an output
I think that's very likely to be innately possible, given both research like you linked, as well as existing animal experiments with BCI. The brain is just an incredibly impressive thing.
> There are some very difficult challenges in brain-computer interfaces (BCI), some of which involve huge barriers presented by human biology. It's rather depressing.
Completely agree. I don't think BCI is likely to take off seriously until we assert more confident control over our own biology. In particular, techniques like CRISPR may eventually open the door to temporary, local, and reversible genetic manipulation of body tissues that would allow the human body to grow and shed tiny BCI connectors with desired specs.
This is probably a few hundred years of development (and political and moral arguments) away, though.
> This is probably a few hundred years of development (and political and moral arguments) away, though.
This. I might start getting excited about brain-computer interfaces once we have a proof-of-concept non-chemical local anaesthetic. Like, wire something into your finger neurons that erases pain signals sent from the finger. That's several orders of magnitude simpler than a full-blown brain-computer interface, yet we're nowhere even close to it.
Even if you solve the scarring issue, the invasive option seems impractical. It might be used on patients who are locked in or missing senses. But what doctor would let a healthy person get brain surgery so they can use a computer slightly faster? Would the FDA ever approve something like that?
However, there are other options. People have figured out how to hijack existing senses. There was a plate you could put on your tongue that let you "see" images through its nerves. There was a project that converted video to sound waves, so you could learn to see through your ears. And there was a guy who made a vibrating suit that could let deaf people "hear" by converting sound to more noticeable vibration frequencies.
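The video-to-sound idea above can be sketched very simply. This is a toy model (the frequency range and mapping are invented for illustration, not taken from any real system): scan a grayscale image column by column, mapping row position to pitch and brightness to loudness, producing a tone list a synthesizer could play.

```python
# Toy sketch of sensory substitution: turn a grayscale image into tones.
# Rows nearer the top map to higher pitch; pixel brightness (0-255)
# maps to amplitude. Frequency range is an arbitrary choice.

def image_to_tones(image, f_low=200.0, f_high=2000.0):
    """Return, per column, a list of (frequency_hz, amplitude) pairs."""
    n_rows = len(image)
    tones = []
    for col in range(len(image[0])):
        column = []
        for row in range(n_rows):
            frac = 1.0 - row / max(n_rows - 1, 1)   # top row -> 1.0
            freq = f_low + frac * (f_high - f_low)
            amp = image[row][col] / 255.0
            if amp > 0:                              # skip silent pixels
                column.append((freq, amp))
        tones.append(column)
    return tones

# A 3x3 "image" with a single bright pixel in the top-left corner:
img = [[255, 0, 0],
       [0, 0, 0],
       [0, 0, 0]]
print(image_to_tones(img)[0])  # -> [(2000.0, 1.0)]: one high, loud tone
```

The remarkable part isn't the encoding, which is trivial, but that the brain learns to decode a stream like this into something sight-like.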
It's unclear what these extra inputs actually add over just presenting information on a computer monitor or HUD and using vision. I think brain interfaces are massively overhyped. Our brain is already super flexible and can learn to process info from existing interfaces just as well.
I recall in a sci-fi book related to the 2001 series (Clarke's 3001, I believe), there was a skull cap that everyone wore in the 3000s that, once placed on your head, was permanent, as its nano-scale tendrils wove their way into your scalp. That idea, although completely sci-fi, seems pretty awesome, as there would be more direct input if the entire surface area of your brain were a read-write interface. It's also scary in terms of abuse.
Optogenetics possibly provides a solution to the interface problem. When you can use light to read from and send signals to neurons, you don't have to worry about scar tissue at all, as far as I know.
I don't think optogenetics is feasible for human BCI. Optogenetics requires expressing channelrhodopsin proteins in the neurons of interest, which means you would need to genetically modify the human at birth. We know how to do this in mice, but not humans, since the gene expression pathway for neural proteins in humans is poorly understood (unlike in mice and zebrafish).
Even with optimal designer-baby scenarios, there's no hope for us adults; gene-editing therapies for adults are way harder, and we'll likely be long dead before we see anything close.
CRISPR techniques allow us to modify gene expression in organisms long after birth; they have been heavily used in adult animals, but I don't think they've been used in humans just yet.
CRISPR is cool, but isn't the challenge of adult genome editing bottlenecked by the delivery vehicle? Last time I checked it's still hard to deliver viral vectors past the blood-brain barrier. Does CRISPR have a unique solution to that? I also would imagine that a "BCI" optogenetic adapter would be so massive that payload size alone (never mind electric properties) would never make it past the BBB.
But it is not applicable to humans, because in order to use it with a human brain, one would have to modify genes and breed humans from embryos with modified genes.
I'm really interested in those interfaces as well.
Some posit that one of the big advantages in the way our brains evolved was the increased size of our frontal lobe. They say this ultimately led to increased processing power by giving our brains access to more data for making decisions or analyzing thoughts.
I feel like we've continued this growth in some ways. Low bandwidth access to information that doesn't reside in our brains, allowing us to further our capacity for thought and understanding.
Ray Kurzweil is an interesting fellow who's very much into this as well and sees such extension as an inevitability in the next few decades: direct interfaces between the brain and the Internet (or some other network) to allow for even higher-bandwidth access to information.
Since you said you were interested: there was an episode of Star Talk[1] a few months back where he dives into this stuff along with Dr. Tyson and a neuroscientist. Definitely recommend it as a fun primer for where we are, what we're doing, and where some people think we're going to go.
Are we talking about the same guy who sells supplements and takes hundreds of vitamins every day, despite there being no evidence of positive effect? (http://www.rayandterry.com/)
I believe that very little is known about the effects of this kind of supplementation. Setting aside dangerously high mineral intake, which could lead to illnesses like kidney stones, there doesn't seem to be strong evidence AGAINST it. I'd therefore suggest we defer judgement, as there also appears to be little evidence FOR it.
I think that one of the major hurdles to BCI is the mechanism of connection. When electrodes are connected to neurons, the neurons have a tendency to be destroyed. However, one might imagine the development of sophisticated electromagnetic equipment which could suitably focus on and modulate "receptor pathways" in the brain. This seems far off.
Eyes are convenient sensory organs to utilize for input. Getting directly to the brain seems more like trying to manipulate electron flow in a PCB without removing the external enclosure. (not to say that's impossible, just not as easy as cutting a trace and soldering in a connection)
The great thing about our brains is how malleable they are. We should be creating as low tech a mechanism as possible and then train our brains to use it.
I wanted to discuss an expansion of this idea, I am so glad you brought it up.
Less than 100 years ago we as humanity largely believed that we were fated with the brains and capacity we were born with. Today, we know this isn't true. In fact, cases like Phineas Gage make one wonder.[1]
Now to respond to your comment. There is no reason we couldn't develop methodologies for using our sensory input to train our brains in ways we've not even considered or dreamed of today. Maybe we can achieve a hyper-reality where reality speed is 5x, and after a small adjustment period our brains can learn at 5x.
Our brains are limitless. Our ability to directly interface with them via electronics seems untenable today (but that may or may not always be true).
I've had some thoughts for a BCI product. I have no clue about the feasibility.
Take an EEG headset like the Emotiv EPOC. Get lucky and hope you can still get the raw data via emokit or something like that. Construct some sort of low-level two-character alphabet representation. I'd imagine something similar to EBCDIC, because I'm a tortured mainframe soul, so C1 is A and so on. Using some fancy ML (deep learning?), train for your alphabet representation on EEG data while loudly subvocalizing "C" or something.
Once you get lucky and train a model, use your alphabet for some sort of stream-of-consciousness recording / instant note taking. Somehow tag your thought notes so you can come back later or do some fancy searches. Do some fancy NLP on your notes, or some conceptual blending, or whatever is cool. Sell the software, but keep the notes "open" somehow.
I'd go on but that's my idea. Steal it and hire me.
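The train-a-classifier step in the idea above can be sketched with a toy model. Everything here is synthetic and hypothetical: the "EEG" feature vectors are faked with a random generator, and a simple nearest-centroid classifier stands in for the "fancy ML (deep learning?)" the comment suggests. Real EEG data from an EPOC would need heavy filtering and a far more capable model.

```python
import math
import random

# Toy sketch: classify synthetic "EEG" feature vectors into a small
# alphabet with a nearest-centroid model. The data is faked so the
# classes are cleanly separable; real subvocalization data is not.

ALPHABET = ["A", "B", "C"]
random.seed(0)

def fake_trial(letter, n_features=16):
    """Generate a synthetic feature vector loosely clustered per letter."""
    base = ALPHABET.index(letter) * 2.0
    return [base + random.gauss(0, 0.3) for _ in range(n_features)]

# "Training": average 50 trials per letter into a per-class centroid.
centroids = {
    letter: [sum(col) / len(col)
             for col in zip(*[fake_trial(letter) for _ in range(50)])]
    for letter in ALPHABET
}

def classify(trial):
    """Return the letter whose centroid is nearest (Euclidean distance)."""
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(trial, c)))
    return min(ALPHABET, key=lambda letter: dist(centroids[letter]))

print(classify(fake_trial("B")))  # recovers "B" on this clean fake data
```

On real signals the hard part is exactly what this sketch waves away: getting per-letter clusters that are separable at all.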
The disease and safety aspects alone are pretty cool. Blindness and deafness will be cured, as will vertigo and a host of other diseases and malformations. Think Stephen Hawking, but actually walking around and talking, having a totally normal life again. Also, imagine having IR vision for firefighters in a house fire, or a surgeon having basically X-ray vision, or being able to see the elements in your tap water with the aid of an inbuilt spectrometer. Really cool stuff.
While the first part of this comment alone is compelling, what excites me is that the second half was already in our hands before slipping away.
The ideas of IR vision for firefighting, seeing the elements in your water, and so on are all basically ubiquitous computing, and we're kind of getting there with IoT putting sensors in everything. What's missing is a way of intuitively accessing it and building it into our thought processes in a manner less clunky than fetching a phone, unlocking it, and launching an app.
What we need is Google Glass, or an equivalent. The real place where Google Glass fell down was not strictly in its handling of privacy concerns; it was their attempt to make it exclusive and elite, like they did with the Google Mail launch, where you needed invites.
I can't help but feel that if Google had made the "explorer" program humbler and quieter, followed by an accessible general release, we'd have access to so much more today.
I have some friends who were on a team trying to make a sonar system for divers. It would take the sonar info from a receiver on your forehead and then play that out onto an array placed on your tongue or some patch of skin. Applications were in murky-water search and rescue, Navy SEAL operations, underwater construction, etc. The major issue was the failure modes. When the device flips out, or is struck, or has a battery failure, or is misproduced, what happens? For these body-interfacing electronic devices, the failure modes are typically brutal. Reliability engineering is something that can't be hacked or dismissed, and it is a major reason many AR devices can't make it to market.
I don't think Google Glass strictly needs to be considered body-interfacing to the degree that its failure presents real hazards to the person, but what it does do is allow us to trivially bring a screen with data into not just our day-to-day lives, but literally everything.
I think right now lots of people are trying to boil the ocean by making AR truly part of our bodies, when really there's a huge amount of low-hanging fruit in simply getting information to be ever-present and available.
Flicking your eyes up is a good first step. Later, overlaying the same information into our field of view. Later still, integration of that data into the body, rather than just presentation of it to the physical senses.
I am excited by this, and would gladly augment my senses today if it were available.
However, I have known deaf people who say they have no interest in "curing" their deafness, and who suggest that the development and widespread use of such a complete cure would destroy a culture.
Absolutely, although I've always been interested in the effect it could have on internal corporate communication. This is the #1 challenge/cost driver in every company today. Maybe I'm still thinking in the realm of science fiction, but being able to effortlessly communicate the precise info every individual in an organization needs, at the instant they need it, would unleash an unprecedented explosion of economic growth around the world.
I hope not! I downsized to a dumbphone just to get away from email at night. The last thing I want to know is my boss's every whim for me.
I think you are thinking of a eusocial hivemind organization. Though they are very good at getting goals done, like building these tunnels or that termite mound, they are not very adaptable outside of their environmental niche. It's the opposite of capitalism you speak of. Democracy is a thousand voices all yelling different songs to make a chorus, and it is slow and messy. However, it tends to find the 'right' choice at a higher rate than the top down model.
Well, I was thinking not only in terms of top-down communication, but also bottom-up (reporting, identification of problems on the ground floor, etc.). The vast majority of our economy is goal-driven, with a very small percentage of workers actually defining the goals, doing creative work, or doing research. Consider the impact on the world if the latest research from the lab were instantly disseminated to the entire community, feedback provided, and new knowledge spread to industry. This is the positive aspect of the technology I am thinking about (of course many, many years in the future).
This is what will make the difference between us becoming the cybernetic overlords and us getting crushed underneath them. Whatever powers we grant to the machines, it is critical that we also take those powers for ourselves.
I'm still not convinced thinking, living, conscious machines are possible anytime soon, if ever. Until we understand the hard problem of consciousness and other fundamental mechanisms of the brain, we will never build true AI.
Increasing system complexity and hoping for a "singularity" event is cargo cult science[1]. It's like building runways on your private island and hoping airplanes will spontaneously land and produce wondrous treasures to trade.
We don't need to fully understand a process in order to reproduce it. We can proceed by imitation, trial and error. As we mostly did for airplanes, for instance.
At some point we may create a machine exhibiting human-level intelligent behavior, and then the question of whether or not it has consciousness will not be answered but that will not matter much.
Kind of in 2001: "Well he acts as if he has genuine emotions. Of course he's programmed that way. That makes it easier for us to talk to him. But as to whether or not he has real feelings, it's something I don't think anyone can truthfully answer."
> We don't need to fully understand a process in order to reproduce it.
That's true, but you still need to understand it well enough.
> We can proceed by imitation, trial and error. As we mostly did for airplanes, for instance.
Yes, but again, I would argue that initial attempts to create airplanes were cargo cult science, and that's why they failed. It's something you want to avoid, yet I see it all the time when the subject of AI comes up.
"In a cargo cult, you reproduce the appearance of the machine without understanding the principles behind the machine. You build radio stations out of straw. The cargo cult approach to aeronautics—for actually building airplanes—would be to copy birds very, very closely; feathers, flapping wings, and all the rest. And people did this back in the 19th century, but with very limited success." (LeCun [1])
When Yann LeCun started playing with artificial neural networks, did he know why they worked? Aren't artificial neural networks very much inspired by nature anyway, not unlike early airplane wings?
I mean, between a cargo cult and a very reasonable attempt at mimicking nature there is a whole continuum, and being quick to apply the extreme label (that is, "cargo cult") to dismiss an approach just because you don't agree with it seems quite dishonest to me.
What if consciousness is an irreducible property of matter? What if computers are already conscious, and their experience of executing code is analogous to our experience of thinking? How could we ever tell the difference?
Agreed, human-equivalent AI is probably several hundred years off, at best. But we don't need anything that sophisticated to trigger an "overlords" scenario.
I'm picturing something like a very good legal AI, being abused by a foreign state's hackers to ensure the incarceration of political opponents, in preparation for a coup. We're not far off from that being a plausible scenario now, but if we all had the benefit of a personal legal AI, or access to a collective legal AI, then it would be a lot harder for anyone to use the legal system against an individual opponent, as a simple weapon.
That's a really good point. In a world where we use machine learning to decide how to diagnose illness, pass judgment, arrest people, and more, it could be manipulated by bad actors into enslaving us one way or another (no conscious AI needed).
That's a very strange belief, akin to believing we can't build cars until we understand exactly how horses work. AI has made huge progress in the past few years without any input from neuroscientists. Stupid evolution made human brains with just random mutations and selection. Surely human engineers can do better.
The hard problem of consciousness is especially irrelevant to AI. The hard problem is a philosophical issue that by definition can't be solved with science. We could make an exact simulation of the brain that exactly reproduces human behavior in a computer, and philosophers would still be debating it.
It requires it in the same way that cars require legs, submarines require fins, or planes require flapping wings: not at all. I don't think strong AI will be anything like humans. If nothing else, it will follow a totally different evolutionary path than the conditions that created human minds, minds designed for navigating the social hierarchy of primates.
But the second point of my comment is that the philosophical debate about "consciousness" is disconnected from any empirical matter or anything connected to reality. Some of the advocates of the "hard problem" have stated that if neuroscientists came up with an exact theory of the human brain, that perfectly explained our behavior, they would still not be convinced. It's complete quasi-religious nonsense that's irrelevant to real AI or brain research.
The hard problem actually extends to the bedrock of reality. What is the relation of matter and qualia? You don't have a clear understanding of the hard problem, nor of your own experience, if you can deny it so off-handedly as you've done here. I do agree with you, though, that AI will be completely different from human intelligence.
Systems like AlphaGo do something like thinking. I suspect progress will crack along regardless of what philosophical types think about the hard problem of consciousness.
Also if you are a materialist the brain already is a thinking, living, conscious machine.
To add to ericjang's comments elsewhere in this thread, here are my thoughts on the two most popular BCI (although I'm used to seeing BMI -- brain-machine interface) technologies:
* Optogenetics advantages: don't need to insert things into tissue. Disadvantages: requires genetic engineering that is not approved for human use. Difficulty reaching deep brain targets. To image many neurons, might need to pump lots of light into an area, causing heat and cell death. Bulky.
* Microelectrode arrays (https://en.wikipedia.org/wiki/Multielectrode_array) like the Utah or Michigan array advantages: direct electrical recordings, relatively compact. Disadvantages: low density, invasiveness.
It is worth noting that most behavioral studies are done with MEAs due to their convenience/form factor.
It is also important to note that these two technologies measure different things: an MEA records extracellular voltages, a relatively direct measurement of neural activity, whereas optical imaging relies on fluorescent indicators of ion flux, which is not exactly what most people really want to measure.
A word about invasiveness: when you insert large (>1um) things into tissue, you elicit a foreign body response, starting with inflammation and ending with scar tissue or gliosis. The issue with invasive probes like MEAs is that even if you manage to avoid major blood vessels and you control bleeding and inflammation, ultimately scar tissue forms around your probes. Scar tissue is a much worse electrical interface than plain tissue, and thus the effectiveness of your BCI/BMI implant diminishes.
Disclosure: I run a company (Paradromics, www.paradromics.com) developing next-generation microwire technology for brain-machine interfaces. Also, my strength is in software so while the gist of what I'm saying is probably mostly correct, I can't give details as well as others can.
Just to add clarification points, the term optogenetics is generally supposed to refer to any gene transfection that involves optical access, but in practice there are two separate categories:
* optogenetic stimulation: using light-activated ion channels to modulate neural activity
* optical imaging: using genetically targeted fluorescent proteins (mainly GCaMP) to observe the activity of neurons
This article is more about the second kind, and the novel aspect is that this technology couples dopamine release to calcium rises, enabling imaging of the calcium signal to be used as a proxy for the neurotransmitter's release. It's more likely to be quite useful for investigating specific scientific questions about neurotransmitter signaling than as a general-purpose readout for BMI.
Multielectrode arrays pick up spikes or action potentials from individual neurons directly, so you get high temporal resolution but from a sparse sample of cells. This sample turns out to be enough to do a lot of interesting things, such as controlling a mouse cursor or a robotic arm. This is already in clinical trials at a few sites, including (my lab at) Stanford.
Imaging approaches are really powerful, but the signals are often slower. This has to do with the kinetics of the proteins themselves and the signal that is being detected (calcium is slower than voltage transients). But you get to see the activity of lots of neurons. There are voltage-sensitive channels that you can image, although the signal-to-noise ratio isn't nearly as high yet. It's not immediately clear how this could be used for a BMI in humans, mainly because you would absolutely need optical access to the brain to image, so you'd either be opening up a window or implanting some kind of imaging sensor. The less invasive approaches you were hinting at are mostly in the first category (stimulation), where for longer wavelengths of light you wouldn't need something as invasive.
Just to clarify, optically recording neural activity from ensembles of cells ("thoughts") has been done in neuroscience for many years using voltage / glutamate / calcium indicators plus imaging. CNiFERs are a breakthrough because they provide a way to measure the brain's "hormones": molecules present in tiny amounts that modulate how cells in an area fire.
A not-so-good analogy: before, we could only measure electrical current in wires; now CNiFERs let us measure the electromagnetic field. That would help explain why nearby headphones beep when your cellphone receives a message.
I expect someone to figure out a way to exfiltrate upcoming passwords by analyzing the EM radiation of these aptly named CNiFERs. Imagine, an attacker knowing your next password before you begin to type it for the first time }:)
Or, more practically, an interrogator could just remind you of a time when you logged in to a particular account, and then try to capture a password as you relive the memory.
Lie detectors do not have a good track record of making things better. If you want to catch criminals, there are many other ways. If there were a murder near you, would you want people trying neuro-tech on you?
Imagine this being used on homosexuals in the middle east. Or imagine it being used on the modern day equivalents who most view similar to how homosexuals are viewed in the middle east. To take away the ability to pass is extremely dangerous for at risk minorities.
And everyone should fear this, because such tech could also be abused to falsely accuse you of being in some hated group. Police lie about drugs often enough to be a threat to liberty; they need fewer tools they can lie about, not more.
I think that depends on where you believe thoughts originate. If they originate within the brain, the ensemble firing together (in some sort of organized manner) is a thought. If you believe that they don't originate from within the brain, then the answer is unclear. More of a philosophy problem at that point, I think.
I think it should be called "digital hallucinations": non-reproducible (but not totally random either) visualizations, renderings of data gained from a device built according to some biased and grossly oversimplified models. It might have strong correlations with micro-changes in blood flow or biochemistry, etc., but obviously has nothing to do with thoughts.
I've never understood how these guys managed to get a billion dollars to fund this horseshit Brain Initiative in a time when NIH funding has been declining for decades. There is probably some serious graft/nepotism behind this.
This makes me think of John Scalzi's The Ghost Brigades. I wonder how far it is between mapping thoughts to being able to map someone's memories and consciousness. If such a thing is even possible at all.
In current MRI images of the brain, a million different data points are reduced to a single pixel in the image. Absent some kind of dramatic breakthrough in science, we are a long way from being able to resolve individual data points inside a person's brain.
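The back-of-envelope arithmetic behind that claim is easy to check. The numbers below are rough, assumed round figures (a typical fMRI voxel edge of 2 mm and a cortical density on the order of 100,000 neurons per cubic millimeter), not precise measurements:

```python
# Rough estimate: how many neurons get averaged into one fMRI voxel?
# Both inputs are assumed round figures for illustration.

voxel_mm = 2.0                 # typical fMRI voxel edge, in mm
neurons_per_mm3 = 100_000      # rough cortical neuron density

neurons_per_voxel = (voxel_mm ** 3) * neurons_per_mm3
print(f"{neurons_per_voxel:,.0f} neurons per {voxel_mm} mm voxel")
# -> 800,000 neurons averaged into a single "pixel"
```

So even under conservative assumptions, each voxel summarizes the activity of hundreds of thousands of neurons, which is why the "million data points per pixel" framing is about right.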