Just a few years ago researchers started to intercept brain electrical activity by placing a grid of sensors on the skull (or, when they wanted greater accuracy, on the brain itself), and demonstrated that they could translate those electrical signals into "commands" that a robot (perhaps in the shape of a prosthetic arm) can execute. That was, and is, an amazing feat, but it is actually based on a trick. The computer analysing the electrical activity generates a response, and that response is evaluated by the person being monitored. If the response is not satisfactory (say the robotic arm is not moving in the desired direction), the person will try to change his or her brain activity, hence its electrical signals, until the desired result is obtained. In BCI (Brain Computer Interface) there is a lot of training going on, and most of it is done by the brain, that is, by the person.
Hence we cannot say that a computer can “read” our brain/mind.
Progress has been made, and AI technologies applied to BCI have greatly reduced the person's training time, since now the computer "learns" as well. Beyond the hype, however, there was no "thought reading" by a computer observing a brain.
This might be changing soon, however, as shown by research carried out by a team of scientists at CMU (Carnegie Mellon University).
They have been able to train a computer to read "the mind" of a person by showing it a number of thoughts and their related fMRI mappings.
Specifically, they trained the computer on 239 sentences, each associated with its fMRI mapping (fMRI shows the areas of the brain that are particularly active at a specific moment). By analysing the mappings, the computer was able to identify concepts located in specific areas of the brain. When the fMRI related to a 240th sentence (unknown to the computer) was analysed, the computer was able to identify the "semantics" of that sentence, in other words what that brain was thinking. Notice that the mapping has to be done for each person, since it differs from one person to another. The areas that "light up" in my brain when I think about going to the seaside to windsurf are quite different from the ones lighting up in your brain when you think about the same thing.
The actual "workings" are quite a bit more complex: the computer can predict what areas will be "illuminated" when that 240th sentence is spoken or thought, and, the other way round, it can predict that a given pattern of brain activity correlates with a certain string of concepts.
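To make the leave-one-out idea concrete, here is a minimal sketch in Python with synthetic data. It is not the CMU team's actual pipeline: the semantic feature vectors, the voxel counts, and the ridge-regression mapping are all illustrative assumptions. The sketch trains a linear map from sentence semantics to (simulated) fMRI activation on 239 sentences, then checks whether the held-out 240th scan matches the prediction for its own sentence better than the predictions for all the others.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the real data: 240 sentences, each described by a
# 50-dimensional semantic feature vector, and a simulated 1000-voxel
# fMRI activation pattern produced by a linear mapping plus noise
# (the mapping is, of course, unknown to the decoder).
n_sentences, n_features, n_voxels = 240, 50, 1000
semantics = rng.normal(size=(n_sentences, n_features))
true_map = rng.normal(size=(n_features, n_voxels))
scans = semantics @ true_map + 0.1 * rng.normal(size=(n_sentences, n_voxels))

def rank_of_held_out(held):
    """Train on 239 sentences, rank the held-out scan against all predictions."""
    train = np.delete(np.arange(n_sentences), held)
    X, Y = semantics[train], scans[train]
    # Ridge regression: W = (X^T X + lambda I)^-1 X^T Y
    W = np.linalg.solve(X.T @ X + 1.0 * np.eye(n_features), X.T @ Y)
    predicted = semantics @ W  # predicted activation for all 240 sentences
    # Correlate the held-out scan with every predicted pattern;
    # rank 0 means the true sentence matched best.
    sims = [np.corrcoef(scans[held], p)[0, 1] for p in predicted]
    return int(np.argsort(sims)[::-1].tolist().index(held))

# Evaluate a sample of held-out sentences; a mean rank near 0 means the
# decoder reliably picks out the unseen sentence from its scan alone.
ranks = [rank_of_held_out(h) for h in range(0, n_sentences, 24)]
print("mean rank of true sentence:", np.mean(ranks))
```

The same fitted map `W` also runs in the other direction mentioned above: given a new sentence's semantic vector, `semantics_new @ W` predicts which voxels should "light up".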
This result is really impressive: it shows that our thoughts have "a home" in our brain, in terms of concepts that get related to one another. A computer can discover those locations and pinpoint the associations. Once this is done, it is able to "see" what we "think"!
There is really nothing to be concerned about at this time. The training of the computer is quite complex and requires the full agreement and support of the person, so there is no risk of a privacy breach. However, a door has been opened, a door that many thought did not exist. Yet it is there, and it leads straight to our mind.