Big Brother’s Window Into Your Brain? Mind-reading Tech Is Already Here

Mind your manners, we were told as children. But may you one day have to mind your mind?

Mind-reading technology is still considered the stuff of science fiction, exemplified by the alien-origin children in the movie Village of the Damned (1960) who could probe others’ thoughts. But China is already using “emotional surveillance technology,” wireless sensors that monitor workers’ brain activity and emotional states. What’s more, this is small potatoes compared to what university researchers have just developed: a “noninvasive” means through which thoughts can be converted into text.

The technology is still “somewhat clunky,” as American Thinker’s Eric Utter puts it. Nonetheless, he warns that the “‘semantic decoder’ could one day be miniaturized and mobilized such that one’s most private thoughts could be made apparent anywhere and endlessly.”

The website Interesting Engineering reports on the technology, writing:

Researchers at The University of Texas at Austin [UT Austin] have decoded a person’s brain activity while they’re listening to a story or imagining telling a story into a stream of text, thanks to artificial intelligence and MRI scans.

The system does not translate or decode word-by-word but rather provides a gist of the imagination.

…Led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin, the work relies to an extent on a transformer model similar to the ones that power OpenAI’s ChatGPT and Google’s Bard, the release [from UT Austin] said.

The technology also does not require the person to have surgical implants, unlike other language decoding systems.

…”For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said in a statement. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”

As to how the method works, the Blaze explains:

Test subjects lying in an MRI scanner were each subjected to 16 hours of different episodes of the New York Times’ “Modern Love” podcast, which featured the stories.

With this data, the researchers’ AI model found patterns in brain states corresponding with specific words. Relying upon its predictive capability, it could then fill in the gaps by “generating word sequences, scoring the likelihood that each candidate evoked the recorded brain responses and then selecting the best candidate.”

When the test subjects were scanned again, the decoder was able to recognize and decipher their thoughts.

While the resultant translations were far from perfect, reconstructions left little thematically to the imagination.

For instance, one test subject listening to a speaker say, “I don’t have my driver’s license yet,” had their thoughts decoded as, “she has not even started to learn to drive yet.”

In another instance, a test subject heard the words, “I didn’t know whether to scream, cry or run away. Instead, I said, ‘Leave me alone!’” and had those thoughts decoded as “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”
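The “generating, scoring, selecting” procedure described above is essentially a beam search. As a rough picture of how such a loop might work, here is a minimal toy sketch in Python. It is emphatically not the researchers’ actual code: the tiny vocabulary, the stand-in “encoding model,” and every function name below are hypothetical, invented only to illustrate the idea of scoring candidate word sequences against a recorded brain response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an encoding model: each word gets a fixed random
# vector, and a sentence's "predicted brain response" is the average of
# its word vectors. (The real system instead learns, from roughly 16
# hours of fMRI recordings per person, how brain activity responds to
# natural language.)
VOCAB = ["she", "has", "not", "started", "to", "learn", "drive", "yet"]
WORD_VECS = {w: rng.normal(size=32) for w in VOCAB}

def predict_response(text: str) -> np.ndarray:
    words = text.split()
    if not words:
        return np.zeros(32)
    return np.mean([WORD_VECS[w] for w in words], axis=0)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: how closely a candidate's predicted response
    # matches the recorded one.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def decode(recorded: np.ndarray, steps: int, beam_width: int = 3) -> str:
    # Beam search: generate candidate word sequences, score each by how
    # well its predicted response matches the recording, keep the best.
    beam = [""]
    for _ in range(steps):
        candidates = [f"{text} {word}".strip()
                      for text in beam for word in VOCAB]
        candidates.sort(key=lambda c: similarity(predict_response(c), recorded),
                        reverse=True)
        beam = candidates[:beam_width]
    return beam[0]

# Simulate the recorded response to a hidden thought, then decode it.
hidden_thought = "she has not started to learn to drive yet"
recorded = predict_response(hidden_thought)
print(decode(recorded, steps=len(hidden_thought.split())))
```

Because this toy’s “recorded” response is simulated and blind to word order, its printed guess captures the gist rather than the exact wording, which, coincidentally, mirrors the imperfect, gist-level reconstructions just described.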

The Texas researchers’ decoder was tested not only on verbal thoughts, but on visual, non-narrative thoughts as well.

Test subjects viewed four Pixar short films of four to six minutes each, which were “self-contained and almost entirely devoid of language.” Their brain responses were then recorded to ascertain whether the thought decoder could make sense of what they had seen. The model reportedly showed some promise.

As with all technology, this semantic decoder does have legitimate applications. For instance, it “might help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, to communicate intelligibly again,” UT Austin writes on its website.

The researchers’ paper also “describes how decoding worked only with cooperative participants who had participated willingly in training the decoder,” the site adds. “Results for individuals on whom the decoder had not been trained were unintelligible, and if participants on whom the decoder had been trained later put up resistance — for example, by thinking other thoughts — results were similarly unusable.”

Interestingly, this recalls the aforementioned film Village of the Damned, in which the scientist “managing” the telepathic children was able briefly to shield his deeper thoughts from their brain-probing by thinking of a “brick wall,” until they hacked past it and into his mind (video below; warning: what follows, the presentation of one of cinema’s great endings, is a plot spoiler).

Researcher Tang mentioned the technology’s perils himself. “We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” he said.

Yes, and “I’m sure the researchers at the lab in Wuhan said the same thing circa 2019,” quipped Eric Utter in response.

“You can bet that tyrannical governments from China to Canada, and from Iran to Washington, D.C., would love to avail themselves of the ‘practical applications’ this interface would afford them,” he noted. For sure.

I’ve always believed mind-reading was possible. After all, basic psychology informs us that thoughts are transmitted through the brain from neuron to neuron, across synapses, via electrochemical impulses. Thus, learn to monitor and decipher those impulses, and the result should be mind-reading: the invasion, perhaps, of privacy’s final frontier.

So will there come a time when we’ll have to walk around trying to think only happy, “approved” thoughts? Tomorrow, will we be canceled not just for what we say, but for what we think?

For certain is this: One needn’t be a mind-reader to know that many of the megalomaniacal control freaks attracted to politics would relish that kind of power.