What is AnimateDiff?

AnimateDiff is an AI video generator that combines Stable Diffusion with motion modules. It supports text-to-video, camera movements, image-to-video, and sketch-to-video. Here is the link to the AnimateDiff paper.

How to install AnimateDiff from GitHub?

1. First, download the code from the AnimateDiff GitHub repository. Open a command prompt in the directory where you want to install it and type:
>git clone https://github.com/guoyww/AnimateDiff.git
2. Next, download Stable Diffusion v1.5 from Hugging Face. At the command prompt, type:
>cd AnimateDiff
>cd models\StableDiffusion
>git clone https://huggingface.co/runwayml/stable-diffusion-v1-5/
3. Go to Hugging Face or Civitai to download Stable Diffusion checkpoints, such as realisticVisionV51 or ToonYou_Beta6. Put them in “models\DreamBooth_LoRA.”
4. Go to the Hugging Face AnimateDiff page to download motion modules, such as “mm_sd_v15_v2.ckpt” and “v3_sd15_mm.ckpt,” and put them in “models\Motion_Module.”
5. Now it is time to configure a virtual environment to run the code. If you haven’t installed Anaconda3 yet, install it first.
6. Then set up a conda environment for AnimateDiff. If you run into problems during setup, you can download a ready-to-use animatediff environment at Gumroad. Unzip it and place it in the “envs” directory of your Anaconda3 installation.
7. After installing AnimateDiff, you are ready to generate videos. Here are the instructions for Text to Video, Camera Movements, and Image to Video with the AnimateDiff code.
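Before running any generation script, it can help to confirm that the model folders from steps 2–4 are in place. The check below is a minimal sketch assuming the repository layout described above (the paths are shown here with forward slashes; on a Windows command prompt they appear as `models\StableDiffusion`, etc.):

```shell
# Verify the model layout from steps 2-4 (run from inside the AnimateDiff directory).
# Directory names follow the layout described above; adjust if yours differs.
for d in models/StableDiffusion models/DreamBooth_LoRA models/Motion_Module; do
  if [ -d "$d" ]; then
    echo "found:   $d"
  else
    echo "missing: $d"
  fi
done
```

If any line prints "missing," revisit the corresponding download step before continuing.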

What are AnimateDiff’s limitations?

The current version, AnimateDiff v3, generates 16 frames, about 2 seconds of animation. The image resolution is 256 or 512 pixels. You can get good results for scenery, but character animation is not yet practically usable.
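The duration estimate above follows from simple arithmetic: frames divided by playback rate gives seconds. The 8 fps figure below is an assumption about the playback rate of the saved animation, chosen so that 16 frames works out to the roughly 2 seconds mentioned above:

```shell
frames=16   # frames produced per clip by AnimateDiff v3
fps=8       # assumed playback rate of the saved animation
echo "$((frames / fps)) seconds"   # 16 frames / 8 fps -> 2 seconds
```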