Duke University researchers have announced an artificial intelligence-based tool that can turn blurry, unrecognisable images of people’s faces into sharp, realistic computer-generated portraits in high definition.
According to the reports, traditional methods can scale up an image of a human face to at most eight times its original resolution. The researchers from Duke University, however, have developed an AI tool called PULSE that can create a realistic-looking image at 64 times the resolution of the input photo. The tool searches through AI-generated high-resolution face images, analysing facial features such as fine lines, eyelashes and stubble, to find ones that look similar to the input image when shrunk down to the same size.
When asked, co-author Sachit Menon of Duke University told the media, “While the researchers focused on faces as a proof of concept, the same technique could, in theory, take low-res shots of almost anything and create sharp, realistic-looking pictures, with applications ranging from medicine and microscopy to astronomy and satellite imagery.”
According to Duke University, the method for PULSE will be presented at the 2020 Conference on Computer Vision and Pattern Recognition (CVPR).
Facial features like eyes and lips are barely distinguishable in the blurry photo on the left. Enlarged more than 60 times (right), it’s a different story. Pic Courtesy: Duke Today
Explaining the method, the university stated that the researchers took a different approach: instead of taking a low-resolution image and gradually adding new detail, the new AI tool “scours AI-generated examples of high-resolution faces, searching for ones that look as much as possible like the input image when shrunk down to the same size.”
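That search criterion — keep the high-resolution candidate whose shrunk-down version best matches the low-res input — can be sketched in a few lines. The snippet below is a toy illustration under stated assumptions: the helper names are hypothetical, and it searches a small finite candidate set rather than optimising over a GAN’s latent space as PULSE actually does.

```python
import numpy as np

def downscale(img, factor):
    """Shrink a square greyscale image by an integer factor (box averaging)."""
    h, w = img.shape
    return img.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def downscaling_loss(candidate_hr, observed_lr):
    """How far the candidate's shrunk-down version is from the low-res input."""
    factor = candidate_hr.shape[0] // observed_lr.shape[0]
    return float(np.mean((downscale(candidate_hr, factor) - observed_lr) ** 2))

# Toy search: five random 64x64 "candidates", one of which actually produced
# the 16x16 observation. The matching candidate scores a loss of zero.
rng = np.random.default_rng(0)
candidates = [rng.random((64, 64)) for _ in range(5)]
observed_lr = downscale(candidates[2], 4)  # the 16x16 low-res input

best = min(candidates, key=lambda c: downscaling_loss(c, observed_lr))
```

In PULSE proper, the candidate set is not a list but the continuous output space of a pretrained face generator, and the search is done by gradient descent on this same downscaling-consistency objective.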
The university also stated that the researchers used a generative adversarial network, aka GAN, for the method, in which two neural networks are trained on the same data set of photos. “One network comes up with AI-created human faces that mimic the ones it was trained on, while the other takes this output and decides if it is convincing enough to be mistaken for the real thing,” stated the research.
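The adversarial dynamic that quote describes can be shown with a deliberately tiny numpy sketch: a one-line “generator” and a logistic “discriminator” trained against each other on 1-D data. This is an assumption-laden toy of the GAN idea, not the networks or data the researchers used.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))

# "Real" data the generator should learn to imitate.
real_mean, real_std = 4.0, 0.5
sample_real = lambda n: rng.normal(real_mean, real_std, n)

# Generator g(z) = w*z + b; discriminator D(x) = sigmoid(a*x + c).
w, b = 1.0, 0.0   # generator parameters (fakes start near 0, far from 4)
a, c = 0.1, 0.0   # discriminator parameters
lr, n = 0.05, 64

for _ in range(2000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x = sample_real(n)
    y = w * rng.normal(0, 1, n) + b            # fake samples
    d_real, d_fake = sigmoid(a * x + c), sigmoid(a * y + c)
    a -= lr * np.mean(-(1 - d_real) * x + d_fake * y)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating loss).
    z = rng.normal(0, 1, n)
    y = w * z + b
    grad_y = -(1 - sigmoid(a * y + c)) * a
    w -= lr * np.mean(grad_y * z)
    b -= lr * np.mean(grad_y)
```

After training, the generator’s output mean `b` has drifted from 0 toward the real data’s mean, because fooling the discriminator requires producing samples that look like the training data — the same pressure that makes a face-GAN produce convincing faces.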
The researchers further claimed that their AI tool could create realistic-looking images from noisy, poor-quality input that other techniques can’t. “From a single blurred image of a face, it can spit out any number of uncannily lifelike possibilities, each of which looks subtly like a different person.”
Meet the authors: Sachit Menon, Alex Damian, McCourt Hu, Nikhil Ravi and Cynthia Rudin. Photo Courtesy: Duke Today
The research also stated that even where photos are so pixelated that facial features are barely recognisable, the algorithm created by the university researchers could still read them and produce a result, according to study co-author Alex Damian.
Explaining the process, Damian stated that the artificial intelligence system can transform a 16×16-pixel image of a human face into a 1024×1024-pixel one in a short amount of time, adding more than a million pixels to reach HD resolution. The tool also fills in minute details such as pores, wrinkles and wisps of hair that are impossible to make out in the low-res photos, rendering them clearly in the computer-generated versions.
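The figures in that description are easy to verify with a little arithmetic: going from 16×16 to 1024×1024 is a 64× increase per side, which is where the “64 times the resolution” claim comes from, and the pixel count jumps from 256 to over a million.

```python
lo_side, hi_side = 16, 1024

lo_pixels = lo_side ** 2             # 256 pixels in the input
hi_pixels = hi_side ** 2             # 1,048,576 pixels in the output
scale_per_side = hi_side // lo_side  # 64 -- the "64x resolution" figure
added_pixels = hi_pixels - lo_pixels # "more than a million pixels" added
```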