A new whitepaper published today by chip-maker Nvidia shows how artificial intelligence can be used to generate high-resolution images of fake celebrities.
There seems to be a boom in AI-generated fake imagery, with Nvidia the latest company to flex its muscles with its very own AI-powered image generator. Although the technique isn't new, the researchers believe this is the most photorealistic attempt yet.
The video above shows the process in full, including the database of original celebrity images the system was trained on. The data science division at Nvidia used a technique known as 'generative adversarial networks' (GANs) to generate the photos.
GANs are composed of two competing networks: one that generates images based on the data it's fed, and a second that judges how realistic they look. Essentially the system teaches itself, and the quality of the images improves over time thanks to this continuous feedback loop.
By working in unison, these two networks can produce some pretty impressive fakes! The technology can also be applied to everyday landscapes and objects.
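To make the generator-versus-judge idea concrete, here is a minimal toy sketch of the adversarial loop described above. It is not Nvidia's system (which works on images at far larger scale): the "generator" here is just a line that maps noise to numbers, the "discriminator" a logistic classifier on those numbers, and all names and values (the target Gaussian around 4.0, the learning rate, the step counts) are illustrative assumptions. The two-player feedback loop is the same shape, though: the discriminator learns to tell real samples from fakes, and the generator learns to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    # Clamp to avoid overflow in exp() on extreme inputs.
    return 1.0 / (1.0 + math.exp(-max(-30.0, min(30.0, t))))

# "Real" data: samples from a Gaussian centred on 4.0 (an arbitrary choice).
def real_sample():
    return random.gauss(4.0, 0.5)

# Generator g(z) = a*z + b; discriminator D(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator starts producing samples near 0
w, c = 0.1, 0.0   # discriminator starts almost uninformative
lr = 0.01

for step in range(3000):
    z = random.gauss(0.0, 1.0)
    x_real = real_sample()
    x_fake = a * z + b

    # Discriminator step: nudge D(real) toward 1 and D(fake) toward 0
    # (manual gradient ascent on log D(real) + log(1 - D(fake))).
    s_real = sigmoid(w * x_real + c)
    s_fake = sigmoid(w * x_fake + c)
    w += lr * ((1 - s_real) * x_real - s_fake * x_fake)
    c += lr * ((1 - s_real) - s_fake)

    # Generator step: nudge D(fake) toward 1, i.e. fool the discriminator
    # (ascent on log D(g(z)) with respect to a and b).
    s_fake = sigmoid(w * x_fake + c)
    a += lr * (1 - s_fake) * w * z
    b += lr * (1 - s_fake) * w

# After training, the fakes should have drifted toward the real data's
# neighbourhood around 4.0 (toy GANs like this oscillate rather than
# converge exactly, so expect "near 4", not precisely 4).
fake_mean = sum(a * random.gauss(0.0, 1.0) + b for _ in range(1000)) / 1000
print(round(fake_mean, 1))
```

The feedback loop is visible in the two update steps: each one uses the other network's current output, which is exactly why the images in the full-scale system keep improving as training continues.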
Unfortunately there are some limitations to this method: the generated photos are relatively small by modern standards. There are also some tell-tale signs that the portraits are forgeries. For a start, the photos look very similar to the celebrities the system was trained on, and there are some minor glitches if you zoom into the picture.
So now I guess you're asking 'so what?' It may seem like a bit of fun, but the technology has real-world commercial applications, especially in creative industries like advertising. Could this be the end of models as we know them?