How To Create A Game Character Face Using Python & Deep Learning

MeInGame

3D face reconstruction is widely used in gaming applications, but existing game character customization methods still require manual effort from the user to obtain the desired result. Recently, researchers from NetEase Fuxi AI Lab and the University of Michigan proposed a new state-of-the-art method in this direction – MeInGame: Create a Game Character Face from a Single Portrait – which automatically reconstructs a character face from a single image. The paper, authored by Jiangke Lin, Yi Yuan and Zhengxia Zou, was accepted at the Association for the Advancement of Artificial Intelligence (AAAI) conference, 2021.

MeInGame's main features are as follows:

  1. MeInGame introduces a novel pipeline for training game-ready 3D face reconstruction algorithms.
  2. It provides cost-efficient facial texture acquisition.
  3. It provides a shape transfer algorithm that converts a 3D Morphable Face Model (3DMM) mesh into a game-ready mesh (a generic sketch of such a transfer follows below).
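
To give a rough idea of what such a shape transfer involves, the snippet below shows a generic nearest-neighbour displacement transfer on random toy data. This is an illustrative sketch only, not the exact algorithm described in the MeInGame paper.

 # Generic nearest-neighbour displacement transfer (illustration only, not the
 # exact shape-transfer algorithm from the MeInGame paper).
 import numpy as np

 def transfer_shape(mean_3dmm, fitted_3dmm, game_mesh):
     """Move each game-mesh vertex by the displacement of its nearest 3DMM vertex."""
     # index of the closest mean-3DMM vertex for every game-mesh vertex
     dists = np.linalg.norm(game_mesh[:, None, :] - mean_3dmm[None, :, :], axis=-1)
     nearest = dists.argmin(axis=1)
     displacement = fitted_3dmm - mean_3dmm  # how the fitted face deviates from the mean face
     return game_mesh + displacement[nearest]

 # toy example with random vertices
 mean_3dmm = np.random.rand(500, 3)
 fitted_3dmm = mean_3dmm + 0.01 * np.random.randn(500, 3)
 game_mesh = np.random.rand(300, 3)
 print(transfer_shape(mean_3dmm, fitted_3dmm, game_mesh).shape)  # (300, 3)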

Results show that MeInGame can produce a game character that closely resembles the input image and succeeds in removing the effects of lighting and occlusions.

Workflow of MeInGame

MeInGame takes a single image as input and reconstructs a 3D character face using a 3D Morphable Face Model (3DMM) and convolutional neural networks (CNNs). The reconstructed 3DMM face is then transferred to the game mesh. Next, a coarse texture C is created by UV-unwrapping the input image onto the game mesh. This coarse texture is used to predict the lighting coefficients and a refined texture map, which are fed to a differentiable renderer. The renderer minimizes the difference between the rendered image and the input image to obtain the desired result.
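
The rough sketch below captures this optimization loop with placeholder tensors and a toy render function; none of the shapes or names come from the actual MeInGame code, they only illustrate how the rendered image is matched to the input.

 # Conceptual sketch of the texture/lighting optimization (placeholder tensors only).
 import torch
 import torch.nn.functional as F

 image = torch.rand(3, 256, 256)             # input portrait (placeholder)
 game_vertices = torch.rand(20000, 3)        # game mesh after shape transfer (placeholder)
 coarse_texture = torch.rand(3, 256, 256)    # coarse UV texture unwrapped from the image

 # In MeInGame a network predicts the refined texture and lighting; here they are
 # simply free variables so that the loop stays runnable.
 refined_texture = coarse_texture.clone().requires_grad_(True)
 lighting = torch.zeros(27, requires_grad=True)  # e.g. spherical-harmonics coefficients

 def render(vertices, texture, light):
     # stand-in for a differentiable renderer: blend texture with a global light term
     return texture * (1.0 + light.mean())

 optimizer = torch.optim.Adam([refined_texture, lighting], lr=1e-2)
 for _ in range(10):
     rendered = render(game_vertices, refined_texture, lighting)
     loss = F.l1_loss(rendered, image)  # minimize rendered-vs-input difference
     optimizer.zero_grad()
     loss.backward()
     optimizer.step()
 print("final loss:", loss.item())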

Comparison of MeInGame with other methods

The comparison results are shown below:

Requirements

  • MeInGame works on both Windows and Linux.
  • Required dependency versions:
    • CUDA 10.0
    • PyTorch 1.4
    • TensorFlow 1.14
 pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
 pip install tensorflow==1.14
 pip install tensorflow-gpu==1.14.0 
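
After installing, a quick sanity check such as the one below (not part of the original instructions) confirms the versions and that the GPU is visible:

 # Sanity-check the installed versions and GPU visibility.
 import torch
 import tensorflow as tf
 print("PyTorch:", torch.__version__, "| built for CUDA:", torch.version.cuda)
 print("GPU available to PyTorch:", torch.cuda.is_available())
 print("TensorFlow:", tf.__version__)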
  1. Make sure the environment is linked to CUDA 9.0 (required for the TensorFlow 1.12 step below). In a Colab notebook, you can install CUDA 9.0 with these commands.
 !wget https://developer.nvidia.com/compute/cuda/9.0/Prod/local_installers/cuda-repo-ubuntu1704-9-0-local_9.0.176-1_amd64-deb -O cuda-repo-ubuntu1704-9-0-local_9.0.176-1_amd64-deb
 !dpkg -i cuda-repo-ubuntu1704-9-0-local_9.0.176-1_amd64-deb
 !apt-key add /var/cuda-repo-9-0-local/7fa2af80.pub
 !apt-get update
 !apt-get install -y cuda-9-0
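
To confirm that CUDA 9.0 landed alongside Colab's default toolkit, a small check like the following can be run (the install path is the usual one for NVIDIA's apt packages, stated here as an assumption):

 # Check that the CUDA 9.0 toolkit directory exists after the apt installation.
 import os
 print("CUDA 9.0 installed:", os.path.isdir("/usr/local/cuda-9.0"))
 print("Available toolkits:", sorted(d for d in os.listdir("/usr/local") if d.startswith("cuda")))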
  2. Make sure TensorFlow 1.12 and tensorflow-gpu 1.12 are installed for this step.
 !pip uninstall -y tensorflow
 !pip install tensorflow==1.12
 !pip install tensorflow-gpu==1.12.0 
  3. Clone the required repository.
 %%bash
 git clone https://github.com/Microsoft/Deep3DFaceReconstruction
 cd Deep3DFaceReconstruction 
  4. Unzip the Basel Face Model (BFM) and put it into the required path.
 !tar -xvzf /content/BaselFaceModel.tgz
 %cp /content/PublicMM1/01_MorphableModel.mat /content/Deep3DFaceReconstruction/BFM/ 
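
To verify that the Basel Face Model file was copied correctly, it can be loaded with SciPy (a convenience check added here; the variable names depend on the BFM release):

 # Load the copied Basel Face Model and list the variables it contains.
 from scipy.io import loadmat
 bfm = loadmat("/content/Deep3DFaceReconstruction/BFM/01_MorphableModel.mat")
 print([key for key in bfm.keys() if not key.startswith("__")])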
  5. Download the expression basis (Exp_Pca.bin, linked in the Deep3DFaceReconstruction README), unzip it, and copy the required file to the destination path.
 !unzip /content/Coarse_Dataset.zip
 %cp /content/Coarse_Dataset/Exp_Pca.bin /content/Deep3DFaceReconstruction/BFM/ 
  6. Put the rasterize_triangles_kernel.so file in the required path.

!cp /content/drive/MyDrive/rasterize_triangles_kernel.so /content/Deep3DFaceReconstruction/renderer/

  7. Lastly, download the pre-trained face reconstruction model and put the files in the network folder as shown below:
 !unzip /content/FaceReconModel.zip
 !mkdir /content/Deep3DFaceReconstruction/network 
 !cp /content/FaceReconModel.pb /content/Deep3DFaceReconstruction/network/ 
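
Before running the demo, it helps to confirm that every file copied so far sits where the later steps expect it (a convenience check, not part of the original walkthrough):

 # Confirm that all files copied in the previous steps are in place.
 import os
 expected = [
     "/content/Deep3DFaceReconstruction/BFM/01_MorphableModel.mat",
     "/content/Deep3DFaceReconstruction/BFM/Exp_Pca.bin",
     "/content/Deep3DFaceReconstruction/renderer/rasterize_triangles_kernel.so",
     "/content/Deep3DFaceReconstruction/network/FaceReconModel.pb",
 ]
 for path in expected:
     print("OK" if os.path.isfile(path) else "MISSING", path)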

Update the LD_LIBRARY_PATH environment variable with the new .so file's path, for example:

 import os
 os.environ["LD_LIBRARY_PATH"] = os.environ.get("LD_LIBRARY_PATH", "") + ":/content/Deep3DFaceReconstruction/renderer"

Similarly, add all the other required paths; a small helper for this is sketched below.
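
If several directories need to be added, a helper like the one below (a hypothetical convenience function, not part of the repository) keeps the updates in one place:

 # Hypothetical helper that appends several directories to LD_LIBRARY_PATH at once.
 import os

 def add_library_paths(*dirs):
     current = os.environ.get("LD_LIBRARY_PATH", "")
     os.environ["LD_LIBRARY_PATH"] = ":".join([current, *dirs]).strip(":")

 add_library_paths("/content/Deep3DFaceReconstruction/renderer")
 print(os.environ["LD_LIBRARY_PATH"])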

  8. Run demo.py in the Deep3DFaceReconstruction folder to generate the required files.
 %cd /content/Deep3DFaceReconstruction/
 !python demo.py 
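
To see what demo.py produced, the generated files can be listed (assuming the results are written under the repository folder, which may differ depending on the repository version):

 # List whatever demo.py generated under the repository folder (path is an assumption).
 import glob
 print(glob.glob("/content/Deep3DFaceReconstruction/output/**/*", recursive=True))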

  Then, after cloning the MeInGame repository, follow the instructions in its README.

pip install "git+https://github.com/Agent-INF/[email protected]"

Installation 

 %%bash
 pip install torch==1.4.0+cu100 torchvision==0.5.0+cu100 -f https://download.pytorch.org/whl/torch_stable.html
 pip install opencv-python fvcore h5py scipy scikit-image dlib face-alignment scikit-learn tensorflow-gpu==1.14.0 gast==0.2.2
 pip install "git+https://github.com/Agent-INF/[email protected]"
 pip uninstall -y tensorflow
 pip install tensorflow==1.14
 git clone https://github.com/FuxiCV/MeInGame
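
Once the installation finishes, a short import check (added here for convenience, not from the original article) confirms that the main dependencies resolve:

 # Quick import check for the main MeInGame dependencies.
 import cv2, dlib, h5py, scipy, skimage, sklearn, face_alignment, torch
 import tensorflow as tf
 print("OpenCV:", cv2.__version__, "| dlib:", dlib.__version__)
 print("PyTorch:", torch.__version__, "| TensorFlow:", tf.__version__)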

Now, put all the required files in their destination paths. All the instructions are given in the MeInGame repository's README.

Result: Demo of MeInGame

The output generated by MeInGame can be seen below:

Training with CelebA-HQ dataset

  1. The first step is to create the dataset with the following command.

!python create_dataset.py

  2. Next, download the CelebA-HQ dataset and put it in the required destination folder (see the dataset's official page for download instructions).
  3. Start the training with:

!python main.py -m train

EndNotes

This article discussed a new 3D face reconstruction method that automatically creates a game character face from a single image. The MeInGame repository was recently made public, so stay tuned for further updates on this technology.

Official code & docs are available at: https://github.com/FuxiCV/MeInGame
