WASHINGTON: Scientists have developed a new artificial intelligence system that can decode the human mind and interpret what a person is seeing by analysing brain scans.
The advance could aid efforts to improve artificial intelligence (AI) and lead to new insights into brain function.
Critical to the research is a type of algorithm called a convolutional neural network, which has been instrumental in enabling computers and smartphones to recognise faces and objects.
“That type of network has made an enormous impact in the field of computer vision in recent years,” said Zhongming Liu, an assistant professor at Purdue University in the US.
“Our technique uses the neural network to understand what you are seeing,” Liu said.
Convolutional neural networks, a form of “deep-learning” algorithm, have been used to study how the brain processes static images and other visual stimuli.
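To illustrate the general idea, the sketch below shows how a pretrained convolutional network turns a single video frame into a feature vector that can later be related to brain activity. The choice of network (AlexNet), the preprocessing steps and the file name are assumptions made purely for illustration, not details reported in the study.

```python
# Minimal sketch: extract convolutional features from one video frame
# with a pretrained CNN. "frame.jpg" is a placeholder file name.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

cnn = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
cnn.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = preprocess(Image.open("frame.jpg")).unsqueeze(0)  # 1 x 3 x 224 x 224
with torch.no_grad():
    feature_maps = cnn.features(frame)          # convolutional feature maps
    feature_vector = feature_maps.flatten(1)    # one feature vector per frame
print(feature_vector.shape)
```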
“This is the first time such an approach has been used to see how the brain processes movies of natural scenes – a step toward decoding the brain while people are trying to make sense of complex and dynamic visual surroundings,” said Haiguang Wen, a doctoral student at Purdue University.
The researchers acquired 11.5 hours of functional magnetic resonance imaging (fMRI) data from each of three female subjects as they watched 972 video clips, including clips showing people or animals in action and nature scenes.
The data was used to train the system to predict the activity in the brain’s visual cortex while the subjects were watching the videos.
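One common way to realise such an encoding step is to fit a regression model from the CNN features of each frame to the measured response of each voxel in the visual cortex. The sketch below uses ridge regression and randomly generated placeholder arrays to show the shape of that mapping; the variable names, sizes and data are illustrative assumptions, not the authors' pipeline.

```python
# Hedged sketch of an encoding model: linear mapping from CNN features
# of each time point to the fMRI response of each voxel.
import numpy as np
from sklearn.linear_model import Ridge

n_timepoints, n_features, n_voxels = 1000, 4096, 5000
X_train = np.random.randn(n_timepoints, n_features)   # CNN features per time point (placeholder)
Y_train = np.random.randn(n_timepoints, n_voxels)     # measured fMRI responses (placeholder)

encoder = Ridge(alpha=1.0)
encoder.fit(X_train, Y_train)                          # learn the feature-to-voxel mapping

X_new = np.random.randn(10, n_features)                # features of unseen video frames
Y_pred = encoder.predict(X_new)                        # predicted visual-cortex activity
```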
The model was then used to decode fMRI data from the subjects and reconstruct the videos, even clips the model had never encountered before.
The model was able to accurately decode the fMRI data into specific image categories. Actual video images were then presented side-by-side with the computer’s interpretation of what the person’s brain saw based on fMRI data.
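The reverse, decoding, step can be pictured as a classifier that maps a measured fMRI pattern back to an image category. The sketch below uses a simple logistic-regression decoder on synthetic data purely to illustrate the idea; it is not the study's actual method, and the category labels and array sizes are assumptions.

```python
# Hedged sketch of decoding: classify voxel patterns into image categories.
import numpy as np
from sklearn.linear_model import LogisticRegression

n_samples, n_voxels = 500, 5000
fmri_train = np.random.randn(n_samples, n_voxels)          # voxel patterns (placeholder)
labels_train = np.random.randint(0, 15, size=n_samples)    # category indices, e.g. "face", "bird" (placeholder)

decoder = LogisticRegression(max_iter=1000)
decoder.fit(fmri_train, labels_train)

fmri_new = np.random.randn(5, n_voxels)                     # held-out scans
print(decoder.predict(fmri_new))                            # predicted image categories
```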
Source: https://www.gadgetsnow.com