Every so often I get an email from some dude claiming that ‘The Government’ has been tracking his thoughts for years using advanced mind-reading skills that can identify telepaths such as himself. In fact, the dude seems to email everyone in my department about his plight. He now even has a website where he lays out the evidence. I’m not sure how he expects a bunch of overstressed academics to help him, and therein, I think, lies his true delusion. Of course, this set me to thinking about a more interesting but related question. The question is highly theoretical: if we had the most advanced, unlimited technology, one that allowed us to record and process activity from every neuron in our brains, would it then be possible to read our thoughts? I am sure that folks who study artificial intelligence have thought about this issue a lot more than I have, but I think that one good place to start is by looking at current technologies that allow us to read out neural activity, and at what their potential for “mind reading” really is.
One technology that has received a lot of attention is functional magnetic resonance imaging (fMRI). fMRI is a non-invasive measure of brain activity that works by detecting the so-called hemodynamic response: changes in blood flow that occur when a specific brain region becomes more active. This means that when neurons in a specific area of the brain are activated more, relative to those in neighboring areas, the active region receives more oxygenated blood, and fMRI can detect these local increases in blood flow. Thus, when a test subject is asked to think of a variety of things, say fish tacos or their favorite pet, or to experience different emotions, such as love or fear, we can tell which brain areas are preferentially activated. In theory, one should be able to generate a catalog of activation patterns for a variety of thoughts and stimuli, and therefore be able to predict what the individual is thinking. This is precisely what was done in a 2008 study, in which participants were asked to view a series of drawings of objects while fMRI data were collected. Investigators were then able to predict which object a participant was viewing solely from the fMRI signal. What was remarkable was that in some cases they were able to use fMRI data from one set of participants to identify what another set of participants was looking at. This means that, at least for the broad categories used, there was a consistent representation of an object across different participants’ brains.
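At its core, this kind of prediction is just pattern classification: you learn a typical activation pattern for each stimulus category from labeled trials, then assign a new pattern to the closest category. Here is a minimal sketch of that idea using made-up numbers. The voxel count, trial counts, noise level, and the nearest-centroid method are all illustrative assumptions, not the actual analysis used in the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 50 "voxels" and 2 stimulus categories (say, tools vs. dwellings).
n_voxels, n_trials = 50, 40
prototypes = rng.normal(size=(2, n_voxels))  # idealized activation pattern per category

# Simulate noisy fMRI patterns: each trial is a prototype plus measurement noise.
labels = rng.integers(0, 2, size=n_trials)
patterns = prototypes[labels] + 0.8 * rng.normal(size=(n_trials, n_voxels))

# "Catalog" step: learn each category's mean pattern from the training trials...
train, test = slice(0, 30), slice(30, None)
centroids = np.stack([patterns[train][labels[train] == c].mean(axis=0) for c in (0, 1)])

# ...then decode each held-out trial as the category with the closest centroid.
dists = np.linalg.norm(patterns[test][:, None, :] - centroids[None], axis=2)
predicted = dists.argmin(axis=1)
accuracy = (predicted == labels[test]).mean()
print(f"decoding accuracy: {accuracy:.2f}")
```

The cross-participant result in the study amounts to building the centroids from one group’s brains and decoding another group’s patterns with them, which only works if the categories are represented consistently across people.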
So the question is, is this really mind reading? For one, in this study participants were actually observing the stimuli while predictions were made; the investigators did not test whether this would work if participants were asked to think about the different objects without the visual stimulus present. Also, stimuli were presented for four seconds, which is about the time it takes for the hemodynamic response to occur. Our thoughts normally happen on a much faster time scale, which is impossible for fMRI to resolve. The fMRI signal also has very low spatial resolution, which means that we get an idea of the general regions that are active, but not of the specific set of neurons. Finally, fMRI shows you a differential signal: activity is compared against a baseline to highlight which areas are relatively more active when processing a specific stimulus, but this masks background activity that could be important for encoding a thought or a stimulus. Thus, while fMRI might be useful for pinpointing the role of specific brain regions in processing different bits of neural information, it does not seem like a useful technology for reading thoughts in real time.
Another cool experiment was done in patients being prepared for epilepsy surgery. In these patients, electrodes that record brain activity from individual neurons, also referred to as ‘units’, are used to localize the source of the epileptic seizures, and this is done while the patient is awake and conscious. This study looked at patients in whom electrodes were being placed in either the hippocampus or the entorhinal cortex, two brain areas known to be involved in memory recall. Patients were then shown a series of short video clips, including footage of San Francisco, The Simpsons, and the Oprah Winfrey Show. The investigators found that specific brain cells responded preferentially to the different video clips. So, for example, one neuron would start firing like crazy when the subject was looking at a clip from Seinfeld, while another responded best when the clip was of Elvis. After a break, the subjects were asked to freely recall as many of the different clips as they could. Remarkably, the same neurons that were active during specific video clips were also active during free recall of those clips, which means that the investigators were able to predict which specific video clip a subject was thinking of based on which brain cells were active. This supports, to some degree, the idea that the neural circuits that are active when you perceive something are the same ones activated when you recall it. However, in order to know what a person is thinking, you would first need to learn which neurons were activated when the person originally perceived the event, and this will obviously vary from individual to individual. So again, not really a good method for cold mind reading.
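The decoding logic here is even simpler than in the fMRI case: once you know which neuron prefers which clip, you predict the recalled clip by asking which of those neurons is firing most above its own baseline. A toy sketch of that idea, with invented clip names and firing rates purely for illustration:

```python
# Toy decoding of free recall from single-unit firing rates.
# Each "clip neuron" was identified during viewing; during recall we ask
# which neuron's rate rises most above its own resting baseline.

baseline = {"Simpsons": 2.0, "Elvis": 1.5, "Oprah": 3.0}      # spikes/s at rest (invented)
recall_rates = {"Simpsons": 2.5, "Elvis": 9.0, "Oprah": 3.2}  # spikes/s during recall (invented)

def decode_recall(rates, baseline):
    """Predict the recalled clip as the neuron with the largest rate increase."""
    return max(rates, key=lambda clip: rates[clip] - baseline[clip])

print(decode_recall(recall_rates, baseline))  # → Elvis
```

The catch the paragraph above points out lives in the two dictionaries: the mapping from neurons to clips has to be measured for each individual during viewing before any recall can be decoded.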
While these technologies might not be useful for mind reading Houdini-style, they might still be helpful for creating brain/machine interfaces to help people who are paralyzed due to spinal cord injury. A series of new technologies is currently being developed that allows people suffering from paralysis to control computers and prosthetic devices with their minds. This is done by reading brain activity from the areas of the cerebral cortex that control motor actions, using what is called a multi-electrode array: a tiny array of mini-needles that, when implanted on the surface of the brain, can read out the activity of multiple neurons at once. Once this device is implanted, the patient is asked to ‘think’ about performing various motor actions. A computer then reads out the patterns of neural activity generated during each action and learns to recognize which action the subject is thinking about. This can then be translated into concrete actions like moving a cursor on a screen or operating a robotic arm. This technology is still in its infancy, and it will be a long time before we see a Robocop-style recovery of function. Moreover, due to brain plasticity, neural circuits change over time as new memories are formed and new skills acquired; in fact, this is a problem for all of the above technologies. Specific patterns of neural activity that were present at some point during perception or action may be somewhat different when that sensory stimulus is revisited or the action is performed again. This means that the computer has to recalibrate how it interprets neural activity patterns every time it wants to read someone’s mind.
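The "learning" step in such an interface can be pictured as fitting a simple decoder during a calibration session: record unit activity while the patient imagines cursor movements, fit a linear map from firing rates to intended velocity, then apply that map online. A sketch under invented assumptions (unit counts, tuning model, and noise levels are all made up; real systems use more sophisticated decoders):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration session: 20 recorded units over 200 time bins.
n_units, n_bins = 20, 200
true_tuning = rng.normal(size=(n_units, 2))   # each unit's (vx, vy) preference
intended_vel = rng.normal(size=(n_bins, 2))   # cursor velocities the patient imagines
rates = intended_vel @ true_tuning.T + 0.5 * rng.normal(size=(n_bins, n_units))

# Calibration: fit a linear decoder (firing rates -> velocity) by least squares.
decoder, *_ = np.linalg.lstsq(rates, intended_vel, rcond=None)

# Online use: translate a new burst of activity into a cursor command.
new_rates = np.array([0.7, -0.2]) @ true_tuning.T  # activity for an intended movement
cursor_cmd = new_rates @ decoder
print(np.round(cursor_cmd, 2))
```

The plasticity problem mentioned above shows up here as drift in `true_tuning`: once the units' preferences change, the fitted `decoder` no longer matches them, and the calibration step has to be repeated with fresh data.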
Nevertheless, I still find it quite remarkable that we can even begin to approach something like mind reading using a bunch of hazy neural signals and some computing power, and I’m looking forward to seeing how this technology develops in the future.
Shinkareva SV, Mason RA, Malave VL, Wang W, Mitchell TM, Just MA. Using FMRI brain activation to identify cognitive states associated with perception of tools and dwellings. PLoS One. 2008 Jan 2;3(1):e1394. PubMed PMID: 18167553; PubMed Central PMCID: PMC2148074.
Gelbard-Sagiv H, Mukamel R, Harel M, Malach R, Fried I. Internally generated reactivation of single neurons in human hippocampus during free recall. Science. 2008 Oct 3;322(5898):96-101. Epub 2008 Sep 4. PubMed PMID: 18772395; PubMed Central PMCID: PMC2650423.
Hochberg LR, Serruya MD, Friehs GM, Mukand JA, Saleh M, Caplan AH, Branner A, Chen D, Penn RD, Donoghue JP. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature. 2006 Jul 13;442(7099):164-71. PubMed PMID: 16838014.