AI Can Generate Millions of Artistic Combinations For Photorealistic Results

  • New style transfer algorithm uses neural networks to create stunning artistic combinations. 
  • It uses a linear style transfer framework that is efficient, effective, and retains content affinity. 
  • The algorithm can alter both images and videos in real-time. 

A style transfer model takes two images as input: a content image and a style image. It synthesizes a new image with the content of the former and the look of the latter. Several such models have been built over the last few years. However, they tend to be large and require substantial computational resources to produce impressive results.

Although existing models perform well, they fail to exploit feature covariance and tend to generate artifacts in the synthesized pictures. In other words, they have a limited ability to generalize to other applications, such as video and photo-realistic stylization.
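To make the notion of feature covariance concrete, here is a minimal sketch (not the researchers' code) of the channel-wise covariance statistic that covariance-aware style transfer methods try to match. The feature tensor is assumed to come from an encoder such as VGG.

```python
import torch

def feature_covariance(feat: torch.Tensor) -> torch.Tensor:
    """Channel-wise covariance of a CNN feature map.

    feat: (C, H, W) activations from an encoder such as VGG.
    Returns a (C, C) covariance matrix, the second-order statistic
    that covariance-aware style transfer methods try to match.
    """
    c, h, w = feat.shape
    x = feat.reshape(c, h * w)
    x = x - x.mean(dim=1, keepdim=True)   # zero-center each channel
    return (x @ x.t()) / (h * w - 1)
```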

To address these limitations, researchers at the University of California, Merced and NVIDIA have created a machine-learning technique for style transfer that is both efficient and effective.

How Does It Work?

The researchers presented a theoretical analysis of the linear style transfer framework. They derived transformation matrices and connected them to the style reconstruction objective that is widely used in style transfer, showing that optimizing a linear transformation can minimize the style reconstruction loss.
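For readers who want to see the math behind this, the sketch below reconstructs the classical closed-form linear transform whose optimum the learned matrices approximate: a matrix T satisfying T cov_c T^T = cov_s. This is an illustrative reconstruction from the style reconstruction objective, not the authors' released code.

```python
import torch

def linear_style_transform(f_c: torch.Tensor, f_s: torch.Tensor) -> torch.Tensor:
    """Closed-form linear style transform: maps content-feature covariance
    onto style-feature covariance. Inputs are (C, H, W) feature maps.
    Illustrative only; the paper learns this matrix with small CNNs.
    """
    def stats(f):
        c, hw = f.size(0), f.numel() // f.size(0)
        x = f.reshape(c, hw)
        mu = x.mean(dim=1, keepdim=True)
        x = x - mu
        return x, mu, (x @ x.t()) / (hw - 1)

    def mat_pow(cov, power, eps=1e-5):
        # Symmetric matrix power via eigendecomposition: the expensive,
        # GPU-unfriendly step that the learned matrices avoid at run time.
        e, v = torch.linalg.eigh(cov + eps * torch.eye(cov.size(0)))
        return v @ torch.diag(e.clamp(min=eps).pow(power)) @ v.t()

    xc, _, cov_c = stats(f_c)
    _, mu_s, cov_s = stats(f_s)
    T = mat_pow(cov_s, 0.5) @ mat_pow(cov_c, -0.5)  # T cov_c T^T = cov_s
    return (T @ xc + mu_s).reshape(f_c.shape)      # transform, re-center
```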

Overview of the proposed technique | Courtesy of researchers

This enables two lightweight convolutional neural networks (CNNs) to replace the computationally expensive steps of earlier methods (for instance, singular value decomposition) and transform the image directly. The model can apply multiple levels of stylization in real time; in fact, it can alter both images and videos.
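A rough sketch of that idea is shown below. The layer sizes and the single-input design are assumptions made for illustration; in the paper, the transformation matrix is predicted from both content and style features, in a compressed feature space.

```python
import torch
import torch.nn as nn

class MatrixPredictor(nn.Module):
    """Hypothetical lightweight CNN that maps a feature map to a small
    transformation matrix, sketching the idea of predicting the matrix
    with a feed-forward network instead of computing an SVD at run time.
    """
    def __init__(self, channels: int = 256, hidden: int = 64):
        super().__init__()
        self.compress = nn.Sequential(
            nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
        )
        self.fc = nn.Linear(hidden * hidden, hidden * hidden)
        self.hidden = hidden

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        x = self.compress(feat)                           # (N, hidden, H, W)
        n, c, h, w = x.shape
        x = x.reshape(n, c, h * w)
        cov = torch.bmm(x, x.transpose(1, 2)) / (h * w)   # (N, hidden, hidden)
        t = self.fc(cov.flatten(1))                       # predict matrix entries
        return t.reshape(n, self.hidden, self.hidden)
```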

The new transformation matrix learning method is more efficient (it processes 140 frames per second), more flexible (it can combine different styles in a single matrix), and it preserves content affinity throughout the transformation.
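Because each style reduces to a single matrix, blending styles becomes a simple convex combination. The snippet below is hypothetical usage, with T_a and T_b standing in for matrices produced by a predictor like the one above.

```python
import torch

# Illustrative shapes: two style matrices plus centered content features.
C, HW = 64, 32 * 32
T_a, T_b = torch.randn(C, C), torch.randn(C, C)
content = torch.randn(C, HW)

alpha = 0.5                                  # blend weight between styles
T_mix = (1.0 - alpha) * T_a + alpha * T_b    # one matrix encoding two styles
stylized = T_mix @ content                   # single matrix multiply at run time
```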

Reference: arXiv:1808.04537 

Because it preserves affinity, the algorithm produces undistorted style transfer results, which is especially useful for photo-realistic style transfer. The output can be further processed with a conventional filter, such as bilateral guided upsampling, to render photo-realistic results with cleaner object boundaries.
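As a rough stand-in for that post-processing step, OpenCV's guided filter (shipped in opencv-contrib-python, and not the exact bilateral guided upsampling technique the researchers mention) can smooth a stylized result while keeping the original photo's edges. File names and filter parameters here are illustrative.

```python
import cv2

# Hypothetical post-processing: the original photo acts as the guide so
# object boundaries stay crisp in the refined, stylized result.
content = cv2.imread("content.jpg")      # original photo (guide image)
stylized = cv2.imread("stylized.jpg")    # raw style transfer output
refined = cv2.ximgproc.guidedFilter(content, stylized, 8, 1e2)
cv2.imwrite("refined.jpg", refined)
```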

To demonstrate the model's efficiency, the researchers tested the algorithm on four style transfer tasks: artistic style transfer, video style transfer, photo-realistic style transfer, and domain adaptation.

The CNN is trained on more than 80,000 pictures of scenery, people, animals, and moving objects (taken from the MS-COCO dataset and the WikiArt encyclopedia), using NVIDIA TITAN Xp GPUs and PyTorch accelerated by the cuDNN library.
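A hedged sketch of what one training step might look like is given below. The encoder, decoder, and predictor are stand-ins for the paper's networks (here the predictor is assumed to take both feature maps), and the loss form and weights are illustrative, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def train_step(encoder, decoder, predictor, content_img, style_img, opt):
    """One hypothetical training step: content batches would come from
    MS-COCO and style batches from WikiArt."""
    f_c, f_s = encoder(content_img), encoder(style_img)
    T = predictor(f_c, f_s)                       # (N, C, C) transformation
    n, c, h, w = f_c.shape
    stylized = torch.bmm(T, f_c.reshape(n, c, -1)).reshape(n, c, h, w)
    out = decoder(stylized)

    # Content loss: stay close to the content image's features.
    loss_c = F.mse_loss(encoder(out), f_c)

    # Style loss: match second-order statistics of the style features.
    def gram(f):
        x = f.reshape(f.size(0), f.size(1), -1)
        return torch.bmm(x, x.transpose(1, 2)) / x.size(2)
    loss_s = F.mse_loss(gram(encoder(out)), gram(f_s))

    loss = loss_c + 10.0 * loss_s                 # weight is illustrative
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```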

Overall, the algorithm offers an inexpensive and efficient way to generate a wide range of training data, which could help with several computer vision problems.

Read: Google AI Can Create Short Video Clips From Two Still Images

According to the researchers, the model will help users create more realistic content, and it will be especially helpful to those who aren't good at painting. It could also become prominent in real-world applications; for example, it could be used in real-time rendering and virtual-reality headsets.

Written by
Varun Kumar

I am a professional technology and business research analyst with more than a decade of experience in the field. My main areas of expertise include software technologies, business strategies, competitive analysis, and staying up-to-date with market trends.

I hold a Master's degree in computer science from GGSIPU University. If you'd like to learn more about my latest projects and insights, please don't hesitate to reach out to me via email at [email protected].
