New AI Can Transform Still Images Into 3D Animations

  • New deep learning method uses convolutional neural networks to animate a character in 3D from a single still picture. 
  • It can make a character in the image walk out, sit, run or jump in 3D. 
  • The entire animation can be viewed interactively on monitors and VR/AR devices. 

In recent years, animating characters from videos has produced numerous creative effects. Animating from a single image, rather than a video or a series of photos, has also yielded fascinating results.

The Skinned Multi-Person Linear (SMPL) model and deep learning frameworks have proved extremely useful for 3D pose and shape estimation from a single image. So far, most single-image person animation techniques have focused primarily on 2D or pseudo-3D animation.
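At its core, SMPL represents a body as a template mesh deformed by shape and pose parameters and posed with linear blend skinning, where each vertex is moved by a weighted mix of its joints' transforms. Here is a toy numpy illustration of that blend-skinning step; the dimensions are deliberately tiny and illustrative (the real SMPL model has 6,890 vertices and 24 joints), and this is not the paper's code.

```python
import numpy as np

# Toy linear blend skinning, the posing mechanism SMPL builds on.
# Dimensions are illustrative; real SMPL has 6,890 vertices and 24 joints.

rng = np.random.default_rng(0)
V, J = 5, 2                                      # vertices, joints

rest_verts = rng.standard_normal((V, 3))         # rest-pose vertices
weights = rng.random((V, J))
weights /= weights.sum(axis=1, keepdims=True)    # each row sums to 1

# One 4x4 rigid transform per joint (identity here = no motion).
transforms = np.stack([np.eye(4)] * J)

# Blend each vertex's transform from its joint weights, then apply it.
homo = np.hstack([rest_verts, np.ones((V, 1))])  # homogeneous coords
blended = np.einsum("vj,jab->vab", weights, transforms)
posed = np.einsum("vab,vb->va", blended, homo)[:, :3]
```

With identity joint transforms the blended transform is also the identity (because the weights sum to 1), so the posed vertices equal the rest-pose vertices; rotating a joint's transform would smoothly bend the vertices weighted to it.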

Recently, a research team at the University of Washington and Facebook developed a new technique that can transform still paintings and images into 3D animations. This deep learning method named “Photo Wake-Up” uses a convolutional neural network (CNN) to animate a character in 3D from a single still picture.

Bringing the Foreground Character to Life

Unlike cinemagraphs, which add looping motion to small regions of a still photo, the new system takes a single image as input and provides a full 3D experience. It can make a character in the image walk out, sit, run, or jump in 3D, and the entire animation can be viewed interactively on monitors and VR/AR devices.

Now, you must be wondering how this new method works. It involves five steps:

  1. Fit a morphable body model to the image
  2. Estimate the body label map
  3. Construct a mesh step-by-step, guided by the morphable model
  4. Rig the mesh and estimate mesh skinning weights
  5. Rebuild the texture and fill in the background the character leaves behind

The CNN relies on the pre-trained SMPL model and runs on NVIDIA TITAN GPUs. It works on a wide range of frontal images, from posters and artwork to sports photos. Users can even edit the person in the photo, view the reconstructed body in 3D, and explore it on AR devices. For instance, users can place the artwork on a wall and walk around the animation while it plays.

Read: arXiv:1812.02246 | University of Washington

The team demonstrated the algorithm on images of cartoon characters, graffiti, Stephen Curry (NBA player), and Picasso paintings. So far, they have tested the neural network on more than 70 images downloaded from the Internet.

Comparison and Limitations

Researchers compared their algorithm with other state-of-the-art related methods and evaluated outcomes with human studies. The results obtained via Photo Wake-Up appeared much less distorted and more realistic in still frames than other methods. Furthermore, these outcomes provide actual 3D experiences that were not possible with conventional techniques.

Read: New Artificial Intelligence Can Turn Anyone Into A Professional Dancer

The algorithm is still far from perfect, and there is still much room for improvement. For instance, reflections and shadows haven't been modeled yet, and the system sometimes produces an incorrect 3D pose that makes the whole body shape look unrealistic. Certain occlusions also need refinement, such as crossed legs when sitting.

Written by
Varun Kumar

Varun Kumar is an experienced science and technology journalist interested in machines, AI, and space exploration. He holds an M.Tech degree in computer science from Indraprastha University. To find out what his latest project is, feel free to email him directly at [email protected]
