FaceShop: A Sketch-Based Editing Framework

  • FaceShop is a new image-editing framework that lets users edit photos by ‘sketching’ a few strokes on top of them.
  • It enables inexperienced users to perform complex edits and produce realistic pictures within minutes.

Users share millions of photos and videos every day across popular social media networks. On Facebook, nearly 136,000 pictures are uploaded every minute, and 300 million per day. Furthermore, 95 million pictures and videos are shared on Instagram on a daily basis.

These statistics point to a clear need for intuitive yet powerful image-editing tools that let users apply advanced graphics operations easily. At present, there aren’t many tools that enable inexperienced users to perform complex edits, such as altering a facial expression so that it still looks real.

To fill this need, an international team of researchers has developed a sketch-based editing framework, named FaceShop, that allows users to edit their pictures by ‘sketching’ several strokes on top of them.

It also comes with a smart copy-paste feature, through which users can edit a specific portion of a picture by copying the part to be enhanced from another image and pasting it in. This eliminates the need to sketch or hand-draw anything at all.
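To make the idea concrete, here is a minimal NumPy sketch of plain masked copy-paste. It is only an illustration of the input operation: the function name, shapes, and the hard composite are assumptions for this example, whereas FaceShop hands the pasted region to its trained network to blend it realistically.

```python
import numpy as np

def naive_copy_paste(target, source, mask):
    """Paste the masked region of `source` into `target`.

    target, source: HxWx3 float images in [0, 1], same shape.
    mask: HxW float array, 1 inside the region to copy, 0 outside.
    This is a hard composite; FaceShop instead lets a neural
    network synthesize a seamless blend of the pasted region.
    """
    m = mask[..., None]                  # broadcast mask over color channels
    return m * source + (1.0 - m) * target

# Hypothetical usage: transfer an eye region from a donor photo.
target = np.random.rand(256, 256, 3)     # photo being edited
source = np.random.rand(256, 256, 3)     # donor photo
mask = np.zeros((256, 256))
mask[100:140, 80:160] = 1.0              # region to transfer
edited = naive_copy_paste(target, source, mask)
```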

What FaceShop Is Built On

Most existing applications use conventional, handcrafted methods that have numerous limitations. For example, they offer limited, predefined sets of editing operations that restrict users from performing unique edits. Some applications do provide more flexibility, but they come with a steep learning curve.

FaceShop, on the other hand, offers an intuitive interface that allows inexperienced users to perform complex editing tasks within minutes. The framework is built on machine-learning methods, which offer real-time control over the desired edits and produce realistic pictures.

Specifically, it’s based on a generative adversarial network, which pits two artificial intelligence systems against each other: one generates candidate images, while the other judges how realistic they are. The two networks learn from each other during the training phase, ultimately yielding an autonomous system capable of producing realistic pictures without requiring a human in the loop.
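As a rough illustration of this adversarial setup, here is a minimal PyTorch sketch of a generator and discriminator trained against each other. The toy MLPs, layer sizes, and the random stand-in for real images are assumptions made for brevity; this is not the architecture or loss used in the FaceShop paper.

```python
import torch
import torch.nn as nn

# Toy generator and discriminator; FaceShop's real networks are
# convolutional, so this only sketches the adversarial training loop.
G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()
real_batch = torch.rand(32, 784) * 2 - 1     # placeholder for real images

for step in range(1000):
    # 1) Train the discriminator to separate real from generated samples.
    fake = G(torch.randn(32, 64)).detach()
    loss_d = bce(D(real_batch), torch.ones(32, 1)) + \
             bce(D(fake), torch.zeros(32, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    loss_g = bce(D(G(torch.randn(32, 64))), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```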

According to the developers, the framework provides users with an optimal degree of control and works exceptionally well in practice.

Reference: arXiv:1804.08972v2

What Happens in the Backend

Overall, the researchers have made four key contributions:

  1. The first end-to-end trained framework that combines image completion and translation for high-quality local image editing.
  2. Efficient sketch-based editing with color constraints and a smart copy-paste feature in a unified framework (see the sketch after this list).
  3. Globally consistent, seamless local edits without requiring any post-processing.
  4. An interactive sketch-based interface for an iterative workflow.
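
The second contribution implies that the network is conditioned jointly on the incomplete image, the user’s strokes, and sparse color hints. Below is a speculative NumPy sketch of how such a conditional input tensor could be assembled; the channel layout and the helper name assemble_input are illustrative assumptions, not the paper’s specification.

```python
import numpy as np

def assemble_input(image, mask, sketch, color_strokes):
    """Stack the conditioning information into a single tensor.

    image: HxWx3 photo containing the region to be edited.
    mask: HxW array, 1 inside the edit region, 0 elsewhere.
    sketch: HxW binary map of the user's strokes.
    color_strokes: HxWx3 sparse color hints painted by the user.
    Returns an HxWx8 array; this exact layout is an assumption
    for illustration, not taken from the FaceShop paper.
    """
    hole = image * (1.0 - mask[..., None])   # erase the edit region
    return np.concatenate(
        [hole, mask[..., None], sketch[..., None], color_strokes],
        axis=-1,
    )
```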

The team demonstrated the system with several examples. They showed how users can improve the make-up around a woman’s eye, bringing out the color of her eyes. They also slightly modified a woman’s nose, pushed a strand of her hair away from her face, and removed dark shadows from one side of her face. In all cases, the framework produced high-quality, realistic photos.

What’s Next?

The developers plan to train their system on more diverse and higher-resolution image datasets, and to build additional tools for the framework that improve user interaction. Meanwhile, they will try to leverage neural networks to perform sketch-based edits on videos.

Written by
Varun Kumar
