Semantic image inpainting is a challenging task. Traditional methods do not recover images well because they lack high-level context. We propose a novel method for semantic image inpainting that generates the missing content by conditioning on the available data. We use a Deep Convolutional Generative Adversarial Network (DCGAN) to train a generative model, then feed a code containing the prior information into the model to obtain the inpainted image. We successfully perform image restoration on three datasets, and the repair quality surpasses that of traditional methods.
Traditional image restoration methods mostly restore images from local or non-local information, and most existing methods are designed for single-image restoration. They therefore rely on the intact part of the input image, using image priors to restore the missing part. However, all single-image repair methods require appropriate information, constrained by similar pixels, structures, or patches within the input image; this is hard to satisfy when the input image is severely corrupted with arbitrarily shaped holes, and in such cases these methods cannot recover the lost information. We instead regard semantic image restoration as a constrained image generation problem and build on recent advances in generative models, in our case a generative adversarial network. After the deep generative model is trained, we search the latent space for the encoding closest to the corrupted image, and the generator then reconstructs the image from that encoding. Our definition of 'closest' weights a contextual loss over the known pixels and penalizes unrealistic images. Using different shapes of missing regions, we evaluate our method on three datasets: CelebA[1], SVHN[2], and Stanford Cars[3]. The results show that on several challenging semantic image restoration tasks, our method produces more realistic images than the current state of the art.
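The notion of 'closest' described above can be sketched as a loss with two parts: a contextual term measured only on the known pixels, and a prior term that penalizes images the discriminator judges unrealistic. The following is a minimal NumPy sketch, not the authors' implementation; the function names are illustrative, and the weight `lam` uses the 0.003 value mentioned later in the text.

```python
import numpy as np

def contextual_loss(generated, corrupted, mask):
    # L1 distance between generated and corrupted image,
    # measured only on the known (mask == 1) pixels
    return np.sum(np.abs(mask * (generated - corrupted)))

def total_loss(generated, corrupted, mask, d_score, lam=0.003):
    # prior term: penalize images the discriminator finds unrealistic;
    # d_score is the discriminator's probability that the image is real
    prior = -np.log(d_score + 1e-8)
    return contextual_loss(generated, corrupted, mask) + lam * prior
```

Minimizing this loss over the latent code z trades off fidelity to the observed pixels against realism of the generated image.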
First, we preprocess each image to 64×64×3. We then randomly select images from the dataset for training, using the DCGAN[4] model architecture of Radford et al. We draw a 100-dimensional random vector uniformly from [-1,1] and generate a 64×64×3 image with the generator; the image is then passed to the discriminator, whose architecture essentially mirrors that of the generator. To train the DCGAN model, we follow the standard optimization procedure using Adam[5]. We chose λ = 0.003 in all experiments. We also apply data augmentation with random horizontal jitter to the training images. In the inpainting phase, we use backpropagation to find z in the latent space, optimizing with Adam and clipping z to [-1,1] after each iteration, which we observe produces more stable results. We evaluate our approach on three datasets: the CelebFaces Attributes Dataset (CelebA), the Street View House Numbers (SVHN), and the Stanford Cars Dataset.
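The latent-space search in the inpainting phase — backpropagation into z with Adam, clipping z to [-1,1] each iteration — can be sketched in PyTorch as follows. This is a generic sketch, not the authors' code: `generator` and `loss_fn` are assumed callables (the trained DCGAN generator and the combined contextual/prior loss), and the step count and learning rate are illustrative.

```python
import torch

def inpaint_z(generator, loss_fn, steps=1000, lr=0.01, z_dim=100):
    # start from a random code drawn uniformly from [-1, 1]
    z = (torch.rand(1, z_dim) * 2 - 1).requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(generator(z))   # how far the generated image is from the target
        loss.backward()                # backpropagate into z, not the generator weights
        opt.step()
        with torch.no_grad():
            z.clamp_(-1.0, 1.0)        # clip z to [-1, 1] for stability
    return z.detach()
```

The returned code is then fed through the generator once more, and the generated pixels fill the missing region of the corrupted image.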
The figure shows the proposed framework for inpainting. (a) Given a GAN model trained on real images, we iteratively update z to find the closest mapping on the latent image manifold, based on the designed loss functions. (b) Manifold traversal when iteratively updating z using back-propagation.
Because of the large number of missing pixels, TV- and LR-based methods cannot recover enough image detail, producing very blurred and noisy results. In contrast, semantic image inpainting[9] based on the DCGAN method can produce a plausible image even when a large, important central region is missing, and the result stays close to the original image.
When we compute PSNR[10] to evaluate the degree of restoration, the DCGAN method scores higher than the TV and LR methods, and the higher peak signal-to-noise ratio reflects its better repair quality. Sometimes, however, the TV or LR model attains a higher PSNR even though the DCGAN result is visibly better, which shows that the quantitative metric does not always reflect perceptual quality: when the ground truth is not unique, PSNR cannot tell which method truly performed better. Therefore, human judgment is sometimes necessary, and subjective evaluation by viewers is a useful way to assess inpainting quality.
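PSNR is a standard metric computed from the mean squared error between the original and restored images; higher values mean the restored image is closer to the original. A minimal NumPy sketch (assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def psnr(original, restored, max_val=255.0):
    # peak signal-to-noise ratio, in dB, between two images of the same shape
    mse = np.mean((original.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10 * np.log10(max_val ** 2 / mse)
```

Note that because PSNR is a pure pixel-wise measure, two restorations with the same PSNR can look very different to a human viewer, which is exactly the limitation discussed above.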
This work is supported in part by the IBM-ILLINOIS Center for Cognitive Computing Systems Research (C3SR), a research collaboration within the IBM Cognitive Horizons Network, and by NVIDIA Corporation through the donation of a GPU.
Semantic Image Inpainting with Deep Generative Models. (2021, Oct 15). Retrieved from https://papersowl.com/examples/semantic-image-inpainting-with-deep-generative-models/