Google has unveiled Phorhum, a method for photorealistic 3D human reconstruction that could greatly help online apparel shopping. Given a single RGB photograph of a dressed person, Phorhum photo-realistically reconstructs their 3D geometry and appearance. The produced 3D scan accurately matches the visible body parts and includes plausible geometry and appearance for the non-visible parts. Such 3D scans of people wearing clothing have many use cases and are currently in high demand.
In the paper ‘Photorealistic Monocular 3D Reconstruction of Humans Wearing Clothing’, which introduces Phorhum, the authors say, “The construction of our model is motivated by the breadth of transformative, immersive 3D applications that would become possible — virtual apparel try-on, immersive visualisation of photographs, personal AR and VR for improved communication, special effects, human-computer interaction or gaming, among others”.
Further, the paper concludes, “Our method works well for a wide variation of outfits and for diverse body shapes and skin tones, and reconstructions capture most of the detail present in the input image.”
Phorhum is an end-to-end trainable deep neural network method for photorealistic 3D human reconstruction that works with just a monocular RGB image. The pixel-aligned method estimates detailed 3D geometry, unshaded (albedo) surface colour, and scene illumination. Because 3D supervision alone is not sufficient for high-fidelity colour reconstruction, Phorhum introduces patch-based rendering losses that enable reliable colour reconstruction for the visible parts of the person and detailed, plausible colour estimation for the non-visible parts.
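The "pixel-aligned" idea can be sketched roughly as follows: each 3D query point is projected into the input image, and the image encoder's feature map is bilinearly sampled at that pixel location; the sampled feature then conditions the networks that predict geometry and colour. Below is a simplified, hypothetical Python sketch of that sampling step (plain lists, no deep-learning framework); the function names and the pinhole projection are illustrative assumptions, not the authors' code.

```python
def project(point, focal=1.0):
    """Illustrative pinhole projection of a 3D point (x, y, z)
    to 2D image-plane coordinates (u, v)."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def bilinear_sample(feat, u, v):
    """Bilinearly interpolate an H x W x C feature map (nested lists)
    at continuous pixel location (u, v), clamping to the image border."""
    H, W = len(feat), len(feat[0])
    C = len(feat[0][0])
    # Clamp the query to valid pixel coordinates.
    u = min(max(u, 0.0), W - 1.0)
    v = min(max(v, 0.0), H - 1.0)
    u0, v0 = int(u), int(v)
    u1, v1 = min(u0 + 1, W - 1), min(v0 + 1, H - 1)
    du, dv = u - u0, v - v0
    out = []
    for c in range(C):
        # Interpolate horizontally on the top and bottom rows,
        # then vertically between the two results.
        top = feat[v0][u0][c] * (1 - du) + feat[v0][u1][c] * du
        bot = feat[v1][u0][c] * (1 - du) + feat[v1][u1][c] * du
        out.append(top * (1 - dv) + bot * dv)
    return out

# Usage: project a 3D query point, then sample the feature map there.
feature_map = [[[0.0], [1.0]],
               [[2.0], [3.0]]]  # tiny 2x2 map with 1 channel
u, v = project((0.5, 0.5, 1.0))
conditioning_feature = bilinear_sample(feature_map, u, v)
```

In the full pipeline, the sampled feature (together with the 3D point itself) would be fed to MLPs that output the implicit surface value and albedo colour; this sketch covers only the alignment step.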