What is Wav2Lip?

Wav2Lip is an AI model that lip-syncs a talking-face video to an arbitrary audio track. The paper is here.

How to use Wav2Lip?

1. First, download the code from the Wav2Lip GitHub repository. Open a Command Prompt in the directory where you want to install it and run:
>git clone https://github.com/Rudrabha/Wav2Lip.git
2. Download the pretrained model wav2lip_gan.pth and put it in the “Wav2Lip\checkpoints” directory.
3. Set up a conda environment following the repository’s instructions. If you run into problems during setup, you can download a prebuilt wav2lip_env at Gumroad; unzip it and place it under the “envs” directory of your anaconda3 installation. (If you prefer to build the environment yourself, see the first sketch after this list.)
4. Now prepare your input files. The first is a WAV file of what you want the character to say or sing. Name it “input_audio.wav” and put it in the “assets” directory.
5. The second is a video file in which the character’s lip movements are clearly visible. Name it “input_vid.mp4” and put it in the same “assets” directory. The two files should have the same length; the second sketch after this list shows one way to prepare them with ffmpeg.
6. If you run the project on Windows, open “inference.py” in your text editor and change line 277 to the following, so that the ffmpeg command string is executed through the shell (the stock line disables the shell on Windows, which makes the call fail):
subprocess.call(command, shell=True)
7. Open an Anaconda Prompt and activate the environment:
>conda activate wav2lip_env
8. Still in the Anaconda Prompt, change into the “Wav2Lip” directory and run:
>python inference.py --checkpoint_path checkpoints/wav2lip_gan.pth --face assets/input_vid.mp4 --audio assets/input_audio.wav --pads 0 10 0 0 --resize_factor 1
The --pads values add padding (top, bottom, left, right) around the detected face; increasing the bottom padding helps include the chin. --resize_factor downscales the input video, which can help on limited GPU memory.
9. When it finishes, the new video is saved as “results\result_voice.mp4”.
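
If you prefer to build the conda environment yourself (step 3), here is a minimal sketch following the repository’s README; it assumes Python 3.6 as the README specifies, and the environment name wav2lip_env is just the one used in this tutorial:
>conda create -n wav2lip_env python=3.6
>conda activate wav2lip_env
>cd Wav2Lip
>pip install -r requirements.txt
You also need ffmpeg installed and on your PATH, since inference.py shells out to it.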
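
For steps 4 and 5, ffmpeg can convert and trim the inputs so they match in length. A sketch, assuming hypothetical source files my_song.mp3 and my_clip.mp4 and a ten-second clip:
>ffmpeg -i my_song.mp3 -t 10 assets/input_audio.wav
>ffmpeg -i my_clip.mp4 -t 10 assets/input_vid.mp4
To double-check a file’s duration:
>ffprobe -v error -show_entries format=duration -of csv=p=0 assets/input_audio.wav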
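
To lip-sync several audio files against the same video, a small Python driver can repeat step 8 in a loop. A minimal sketch, assuming the layout above; the “assets/batch” folder is hypothetical:

import subprocess
from pathlib import Path

AUDIO_DIR = Path("assets/batch")   # hypothetical folder of WAV files
VIDEO = "assets/input_vid.mp4"
CHECKPOINT = "checkpoints/wav2lip_gan.pth"

for wav in sorted(AUDIO_DIR.glob("*.wav")):
    # Same invocation as step 8, once per audio file.
    subprocess.run([
        "python", "inference.py",
        "--checkpoint_path", CHECKPOINT,
        "--face", VIDEO,
        "--audio", str(wav),
        "--pads", "0", "10", "0", "0",
        "--resize_factor", "1",
    ], check=True)
    # inference.py writes results/result_voice.mp4 every time,
    # so rename each result before the next run overwrites it.
    Path("results/result_voice.mp4").rename(f"results/{wav.stem}.mp4")

Run it from the “Wav2Lip” directory inside the activated environment.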

