In today’s surreal science news, researchers in Kyoto, Japan, have reportedly built a “dream-reading machine” using MRI and electroencephalography (come again?) technology, according to a study published today in Science.
A membership is required to view the entire study (you can view the abstract here), but Smithsonian took an inside look at how the findings may be the “first case” of objective data documenting the content of a human brain in dream-mode:
The research was done on three participants, each of whom took turns sleeping in an MRI scanner for a number of 3-hour blocks over the course of 10 days. The participants were also wired with an electroencephalography (EEG) machine, which tracks the overall level of electrical activity in the brain and was used to indicate what stage of sleep they were in.
The deepest, longest dreams occur during REM sleep, which typically begins after a few hours of sleeping. But quick, sporadic hallucinations also occur during stage 1 of non-REM sleep, which starts a few minutes after you drift off, and the researchers sought to track the visualizations during this stage.
As the fMRI monitored blood flow to different parts of the subjects’ brains, they drifted off to sleep; then, once the scientists noticed that a subject had entered stage 1, they woke that person up and asked them to describe what they had been seeing while dreaming. They repeated this process nearly 200 times for each of the participants.
Despite already being the most patient human beings on the planet, the test subjects were then quizzed about the common items they saw in their dreams so that researchers could search for images of the items online. When the participants were shown the searched images while awake but still in the MRI scanner, the comparison to their dreaming readouts allowed the researchers to “isolate the particular brain activity patterns truly associated with seeing a given object from unrelated patterns that simply correlated with being asleep.”
Things only got more complicated from there: the comparison data was used to create a “learning algorithm” that produced videos based on groups of images and text of the common items that each person most likely visualized while dreaming. When the participants were woken up for the 300,001st time, their latest descriptions were used to determine how often the algorithm correctly predicted the class of items they’d seen in their dream.
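The study itself is paywalled, but the category-decoding idea described above can be illustrated with a toy sketch: learn a characteristic brain-activity pattern per image category from awake-viewing scans, then classify a “dream” readout by whichever learned pattern it most resembles. Everything here is a hypothetical stand-in (the voxel counts, noise levels, and the simple nearest-centroid classifier are all assumptions, not the researchers’ actual method):

```python
import numpy as np

rng = np.random.default_rng(0)
N_VOXELS = 50  # toy stand-in for the number of fMRI voxels (assumption)
CATEGORIES = ["person", "building", "street"]

# Simulate awake-viewing scans: each category gets a characteristic
# voxel pattern, and each scan is that pattern plus measurement noise.
prototypes = {c: rng.normal(size=N_VOXELS) for c in CATEGORIES}
train = {c: np.stack([prototypes[c] + 0.5 * rng.normal(size=N_VOXELS)
                      for _ in range(20)])
         for c in CATEGORIES}

# Toy "learning algorithm": store the mean pattern per category.
centroids = {c: scans.mean(axis=0) for c, scans in train.items()}

def decode(pattern):
    """Return the category whose learned centroid is closest to the pattern."""
    return min(centroids, key=lambda c: np.linalg.norm(pattern - centroids[c]))

# A noisy "dream" readout resembling the 'street' prototype.
dream = prototypes["street"] + 0.5 * rng.normal(size=N_VOXELS)
print(decode(dream))  # → street
```

This also hints at why the real system distinguished broad categories better than similar items within a category: the more two prototypes overlap, the more noise pushes a readout toward the wrong centroid.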
The results? Accurate 60 percent of the time. While still relatively crude in its scope (the algorithm “was better at distinguishing visualizations from different categories than different images from the same category—that is, it had a better chance of telling whether a dreamer was seeing a person or a scene, but was less accurate at guessing whether a particular scene was a building or a street”), the new system may lend some desired insight into how our dreams can be analyzed on a more public scale, rather than as scribbled musings in a private dream journal.
In other words, expect an iDream release date of no later than 2016.