Comparison of Loss Functions for Deblurring Images with Conditional GANs

Ayush Sharma, Anubhooti Papola

Abstract

Generative Adversarial Networks (GANs) have transformed image synthesis, particularly since the introduction of Conditional GANs (cGANs), which enable a more targeted approach by conditioning the generative process on extra information. Blurry images degrade visual quality and impede subsequent image-processing tasks. To combat image blur, we introduce a single-image deblurring method based on a conditional generative adversarial network (cGAN). The cGAN serves as the core framework, taking the blurred image as conditional input and enforcing a Lipschitz constraint. The network is trained with a combination of conditional adversarial loss, content loss, and perceptual loss to restore the blurred regions and reconstruct the image. Experimental evaluations show that the proposed approach outperforms existing algorithms in blur removal, effectively reducing blurriness while preserving image sharpness.
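The combined objective described above (adversarial + content + perceptual terms) can be sketched as a weighted sum. The weights, the identity feature extractor, and the Wasserstein-style adversarial term below are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def content_loss(restored, sharp):
    # Pixel-wise L1 distance between the restored image and the sharp target.
    return np.mean(np.abs(restored - sharp))

def perceptual_loss(restored, sharp, features):
    # L2 distance in a feature space; `features` stands in for a pretrained
    # feature extractor (the paper's choice of network is not specified here).
    return np.mean((features(restored) - features(sharp)) ** 2)

def adversarial_loss(critic_scores):
    # Wasserstein-style generator term (the Lipschitz constraint mentioned in
    # the abstract suggests a WGAN-like critic): raise the critic's scores.
    return -np.mean(critic_scores)

def total_loss(restored, sharp, critic_scores, features,
               lam_adv=0.01, lam_content=1.0, lam_perc=1.0):
    # Illustrative weights; the paper's actual weighting is not given here.
    return (lam_adv * adversarial_loss(critic_scores)
            + lam_content * content_loss(restored, sharp)
            + lam_perc * perceptual_loss(restored, sharp, features))
```

With a perfect reconstruction and zero critic scores the total loss is zero, and it grows as the restored image drifts from the sharp target.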
