Scientists develop AI that creates pictures by ‘reading people’s minds’
Scientists in Japan may have developed a way for artificial intelligence to “read” our minds and recreate the images we see.
The new AI-powered algorithm reconstructed around 1,000 images from brain scans, with 80% accuracy.
Researchers from Osaka University trained an AI to enable it to reproduce pictures of a toy bear, an aircraft, a clock, and a train from brain scans of people who looked at those images.
Yu Takagi and Shinji Nishimoto, professors at the Graduate School of Frontier Biosciences, published their results in a paper that will be presented at a computer vision conference in Vancouver in the summer.
Their paper is yet to be peer-reviewed, but they believe the results could one day help explore how animals perceive the world, aid communication with paralysed people, and record dreams.
The researchers used the popular AI program Stable Diffusion, developed by the British company Stability AI.
It is used to produce images from text prompts, a process known as text-to-image generation.
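For readers curious what text-to-image generation looks like in practice, here is a minimal sketch using the open-source diffusers library; the model checkpoint and library choice are illustrative assumptions, not necessarily how the researchers ran Stable Diffusion.

```python
# Minimal text-to-image sketch with Stable Diffusion via the diffusers library.
# The model ID and library are illustrative assumptions, not the researchers' setup.
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available Stable Diffusion checkpoint.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # move the pipeline to the GPU

# A short text prompt is all the model needs to synthesise a picture.
image = pipe("a photo of a toy bear on a white background").images[0]
image.save("toy_bear.png")
```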
The professors modified Stable Diffusion to make links between the brain scans of four people and the 10,000 images they looked at.
The algorithm pulls information from parts of the brain involved in image perception, such as the occipital and temporal lobes, according to Yu Takagi, who led the research.
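To give a rough sense of what "making links" between scans and images can mean, the sketch below fits a regularised linear model that maps voxel responses from visual areas to an image-embedding vector, which a generative model could then decode into a picture. The array shapes, variable names, and the use of ridge regression are hypothetical illustrations, not the authors' exact pipeline.

```python
# Hypothetical sketch of linking brain activity to image embeddings.
# Shapes, names, and the choice of ridge regression are illustrative assumptions,
# not the precise method described in the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_images, n_voxels, embed_dim = 10_000, 5_000, 768
# fMRI responses from visual cortex (occipital/temporal voxels) for each viewed image.
brain_activity = rng.standard_normal((n_images, n_voxels))
# An embedding of each viewed image (e.g. from a generative model's latent space).
image_embeddings = rng.standard_normal((n_images, embed_dim))

# Fit one regularised linear map from voxel space to embedding space.
decoder = Ridge(alpha=100.0)
decoder.fit(brain_activity, image_embeddings)

# For a new scan, predict an embedding that a diffusion model could turn into an image.
new_scan = rng.standard_normal((1, n_voxels))
predicted_embedding = decoder.predict(new_scan)  # shape: (1, embed_dim)
```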
The scientists acknowledged that their experiment involved only four people and that extending it to others would require retraining the program.
"We show that our method can reconstruct high-resolution images with high semantic fidelity from human brain activity," the team shared in the study published in bioRxiv. "Unlike previous studies of image reconstruction, our method does not require training or fine-tuning of complex deep-learning models."