Improvement of image inpainting methods based on generative models

Authors

  • M.V. Semankiv, Vasyl Stefanyk Carpathian National University (Ivano-Frankivsk)
  • O.V. Tsikhun, Vasyl Stefanyk Carpathian National University (Ivano-Frankivsk)

DOI:

https://doi.org/10.33216/1998-7927-2025-294-8-5-10

Keywords:

inpainting, generative adversarial network (GAN)

Abstract

The volume of visual information around us is constantly growing, creating a need for intuitive tools to process it. Research aimed at improving algorithms for adding or removing objects in graphical content is therefore highly relevant and has significant practical potential. Systems that employ such algorithms can be applied across many domains, from cinematography and photography to the restoration of artworks and even medicine. Image inpainting technology automatically reconstructs missing parts of an image, saving users time and effort. Despite the abundance of existing solutions for reconstruction tasks, there is still a lack of systems that comprehensively leverage generative models to achieve high-quality content restoration, which underscores the importance of research focused on improving current algorithms. An assessment of the current state of the field shows considerable progress in the development of generative models; at the same time, further refinement is needed to improve universality, optimize resource usage, and reduce computational cost. This work investigates Generative Adversarial Network (GAN) methods for image inpainting and evaluates the effectiveness of existing approaches. A content generation method has been improved by introducing a custom loss function, and the resulting system surpasses the competing implementations considered here in both accuracy and execution time. Software has been developed to address practical tasks: a GAN-based system for restoring graphical content, with the improvements achieved through mathematical tools. The results obtained in the course of the research demonstrate improved restoration quality with the proposed GAN-based system, including high-quality object removal with minimal artifacts, even at low resolution and within a short processing time. It should be noted, however, that restoration accuracy may decrease for complex or highly detailed images. The practical significance of this research extends to all fields where visual data processing is crucial: it addresses the problem of low-quality object removal from images and advances algorithms for replacing and removing specific regions of images using Generative Adversarial Networks.
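
For illustration, the sketch below shows the kind of composite generator objective commonly used in GAN-based inpainting: an adversarial term combined with a mask-weighted L1 reconstruction term. It is a minimal PyTorch sketch under assumed weights, names, and tensor shapes, not the specific loss function proposed in this work; the function generator_inpainting_loss and all coefficients are illustrative.

# Illustrative only: a typical composite generator loss for GAN-based
# inpainting (adversarial term + mask-weighted L1 reconstruction term).
# This is NOT the loss function proposed in the paper; names, weights,
# and shapes are assumptions made for this example.
import torch
import torch.nn.functional as F

def generator_inpainting_loss(fake_logits, generated, target, mask,
                              adv_weight=0.1, hole_weight=6.0, valid_weight=1.0):
    """fake_logits: discriminator scores for generated images, shape (N, 1).
    generated, target: images of shape (N, C, H, W).
    mask: 1 inside the missing region, 0 elsewhere, shape (N, 1, H, W)."""
    # Non-saturating adversarial term: push D(generated) toward "real".
    adv = F.binary_cross_entropy_with_logits(fake_logits,
                                             torch.ones_like(fake_logits))
    # Per-pixel L1 error, weighted more heavily inside the hole.
    l1 = torch.abs(generated - target)
    m = mask.expand_as(l1)
    hole = (l1 * m).sum() / m.sum().clamp(min=1.0)
    valid = (l1 * (1.0 - m)).sum() / (1.0 - m).sum().clamp(min=1.0)
    return adv_weight * adv + hole_weight * hole + valid_weight * valid

if __name__ == "__main__":
    # Toy shapes only; real training would pair this loss with a
    # generator/discriminator update loop.
    n, c, h, w = 2, 3, 64, 64
    loss = generator_inpainting_loss(
        fake_logits=torch.randn(n, 1),
        generated=torch.rand(n, c, h, w),
        target=torch.rand(n, c, h, w),
        mask=(torch.rand(n, 1, h, w) > 0.5).float())
    print(loss.item())

In sketches of this kind, the hole and valid weights control how strongly the reconstruction term dominates the adversarial term; the values above are common starting points rather than settings reported in the paper.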

Published

2025-10-25