
JPGNet: Joint Predictive Filtering and Generative Network for Image Inpainting

  • Qing Guo
  • Xiaoguang Li
  • Felix Juefei-Xu
  • Hongkai Yu
  • Yang Liu
  • Song Wang

Affiliations: Tianjin University; Nanyang Technological University; University of South Carolina; Alibaba Group, USA; Zhejiang Sci-Tech University

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

44 Scopus citations

Abstract

Image inpainting aims to restore the missing regions of corrupted images and make the recovered result identical to the originally complete image, which differs from the common generative task emphasizing the naturalness or realism of generated images. Nevertheless, existing works usually regard it as a pure generation problem and employ cutting-edge deep generative techniques to address it. Generative networks can fill the main missing parts with realistic content but usually distort the local structures or introduce obvious artifacts. In this paper, for the first time, we formulate image inpainting as a mix of two problems, i.e., predictive filtering and deep generation. Predictive filtering is good at preserving local structures and removing artifacts but falls short of completing large missing regions. The deep generative network can fill numerous missing pixels based on its understanding of the whole scene but hardly restores details identical to the original ones. To make use of their respective advantages, we propose the joint predictive filtering and generative network (JPGNet), which contains three branches: a predictive filtering & uncertainty network (PFUNet), a deep generative network, and an uncertainty-aware fusion network (UAFNet). The PFUNet adaptively predicts pixel-wise kernels for filtering-based inpainting according to the input image and outputs an uncertainty map. This map indicates which pixels should be processed by filtering and which by the generative network, and is further fed to the UAFNet for a smart combination of the filtering and generative results. Note that our method, as a novel framework for the image inpainting problem, can benefit any existing generation-based method. We validate our method on three public datasets, i.e., Dunhuang, Places2, and CelebA, and demonstrate that it significantly enhances three state-of-the-art generative methods (i.e., StructFlow, EdgeConnect, and RFRNet) with only a slight extra time cost.
We have released the code at https://github.com/tsingqguo/jpgnet.
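The abstract describes two core operations: pixel-wise predictive filtering, where a distinct kernel is predicted and applied at every pixel, and uncertainty-aware fusion, where an uncertainty map blends the filtered and generated results. The sketch below illustrates both operations in plain NumPy under stated assumptions; the function names (`pixelwise_filter`, `uncertainty_fusion`) and the simple convex-combination fusion are illustrative only. In JPGNet itself, the kernels, the uncertainty map, and the fusion are produced by learned networks (PFUNet and UAFNet) rather than computed this way; consult the released code at the URL above for the actual implementation.

```python
import numpy as np

def pixelwise_filter(image, kernels):
    """Apply a distinct k x k kernel at every pixel (predictive filtering).

    image:   (H, W) grayscale array.
    kernels: (H, W, k*k) array; kernels[i, j] holds the flattened kernel
             predicted for pixel (i, j). In JPGNet these come from PFUNet.
    """
    H, W = image.shape
    k = int(np.sqrt(kernels.shape[-1]))
    pad = k // 2
    padded = np.pad(image, pad, mode="edge")
    out = np.empty((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            # Local k x k neighborhood around pixel (i, j), flattened.
            patch = padded[i:i + k, j:j + k].ravel()
            out[i, j] = patch @ kernels[i, j]
    return out

def uncertainty_fusion(filtered, generated, uncertainty):
    """Blend filtering and generation results with a per-pixel weight map.

    uncertainty: (H, W) map in [0, 1]; high values favor the generative
    result (e.g., large holes), low values favor the filtered result
    (e.g., local structure). A simple convex combination stands in for
    the learned UAFNet fusion.
    """
    return uncertainty * generated + (1.0 - uncertainty) * filtered
```

As a sanity check, predicting an identity kernel (all weight on the center tap) at every pixel makes `pixelwise_filter` return the input unchanged, and an all-zero uncertainty map makes `uncertainty_fusion` return the filtered image exactly.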
Original language: English
Title of host publication: MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia
Place of publication: USA
Publisher: Association for Computing Machinery, Inc
Pages: 386-394
Number of pages: 9
ISBN (Electronic): 9781450386517
DOIs
State: Published - Oct 17 2021
Event: 29th ACM International Conference on Multimedia, MM 2021 - Virtual, Online, China
Duration: Oct 20 2021 - Oct 24 2021

Conference

Conference: 29th ACM International Conference on Multimedia, MM 2021
Country/Territory: China
City: Virtual, Online
Period: 10/20/21 - 10/24/21

Keywords

  • generative network
  • image inpainting
  • predictive filtering
