A team of researchers recently developed mind-reading AI that uses an individual’s personal preferences to create portraits of attractive people who don’t exist.
Computer-generated beauty, it turns out, is in the AI of the beholder.
The big idea: Scientists from the University of Helsinki and the University of Copenhagen today published an article describing a system that uses a brain-computer interface to transmit brain-response data to an AI system, which interprets that data and uses it to train a generative image model.
According to a press release from the University of Helsinki:
First, the researchers gave a generative adversarial neural network (GAN) the task of creating hundreds of artificial portraits. The images were shown sequentially to 30 volunteers who were asked to look out for faces they found attractive while their brain reactions were recorded via electroencephalography (EEG).
The researchers analyzed the EEG data using machine learning techniques and connected individual EEG data to a generative neural network via a brain-computer interface (BCI).
Once a user's preferences were interpreted, the machine generated a new set of images optimized to be more attractive to the person whose data had trained it. Upon review, the researchers found that 80% of the personalized images produced by the machines passed the attractiveness test.
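The loop described above (GAN-generated portraits, EEG-derived attractiveness labels, then new images steered toward the preferred region of the generator's latent space) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the latent dimension, image counts, and the hypothetical `generator.decode` step are assumptions, and the EEG classifier is replaced by a simulated preference signal.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 512   # typical GAN latent size (assumption)
N_IMAGES = 240     # number of portraits shown to the viewer (assumption)

# Each portrait shown to the viewer corresponds to a latent vector
# that the GAN decoded into a face image.
latents = rng.standard_normal((N_IMAGES, LATENT_DIM))

# Stand-in for the EEG classifier: in the real system, a model trained on
# the viewer's EEG responses labels each portrait attractive (1) or not (0).
# Here we simulate those labels with a hidden preference direction.
hidden_pref = rng.standard_normal(LATENT_DIM)
labels = (latents @ hidden_pref > 0).astype(int)

# Steer toward the region the viewer responded to: average the latent
# vectors of the portraits labelled attractive.
preferred_mean = latents[labels == 1].mean(axis=0)

# Sample new candidate latents near the preferred mean; a real system
# would decode them with the GAN, e.g. generator.decode(new_latents)
# (hypothetical call standing in for the trained generator).
new_latents = preferred_mean + 0.5 * rng.standard_normal((10, LATENT_DIM))
```

Under this toy preference signal, the newly sampled latents score noticeably higher on the hidden preference direction than the original random batch, which is the behavior the personalized-image step relies on.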
Background: Sentiment analysis is a big thing in AI, but this system is a little different. Typically, machine learning systems designed to gauge human mood use cameras and rely on facial recognition. At best, such systems are unreliable when applied to the general public.
This system, however, relies on a direct connection to the user's brain waves, which should make it a far more reliable indicator of a positive or negative reaction. In other words, the basic idea is that you look at an image you like, and an AI then tries to create more images that trigger the same brain response.
Quick take: You could spend all day extrapolating hypothetical uses for such an AI without ever settling whether it is ethical. On the one hand, there is a treasure trove of psychological insight to be gleaned from a machine that can abstract what we like about a particular image without relying on our conscious understanding.
On the flip side, it's frightening to imagine what a company like Facebook (which is currently developing its own BCIs) or a political influence operation like Cambridge Analytica could do with an AI system that skips a person's consciousness and goes directly to the part of their brain that likes things, especially considering what bad actors can already accomplish with just a tiny amount of data.
You can read the whole paper here.
Published on March 5, 2021 – 21:11 UTC