Switching furniture in room images using Generative Models

Author: Pol Albacar Fernandez

Virtual Room: https://eu.bbcollab.com/guest/2a5cce3d489a4db79b363b831ea59540

Date & time: 22/09/2021 – 8:45 h

Session name: Applications

Company name: StageInHome Labs SL

Supervisor: Javier Ruiz Hidalgo

Abstract:

This thesis explores ways of automatically placing a desired piece of furniture, seen in a single image, into room scenes using Generative Adversarial Networks (GANs). GANs have achieved great success in synthesizing high-quality images; however, how to control the synthesis process of these models and customize the output image is much less explored. It has been found that modulating the input latent space of the generator can modify factors in the generated image, but such manipulation changes the entire image. In this work, two approaches are presented, both using deep generative models to automatically and locally control the generated scenes. The first approach takes advantage of the visual features learned while solving the image-generation task. The second approach builds on a novel generative architecture called ADGAN, whose core idea is to embed room elements into a latent space as independent codes and then control these elements by mixing their codes, feeding the generator through a block called the Decompose Component Encoder. Visual results and comparisons between the two methods are presented.
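To illustrate the latent-code mixing idea described above, here is a minimal NumPy sketch. All names, dimensions, and the concatenation scheme are illustrative assumptions for exposition, not the actual ADGAN or thesis implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-element latent codes (names and sizes are illustrative):
# each room element gets its own independent code in the latent space.
code_dim = 8
elements = ["sofa", "table", "lamp"]
codes = {name: rng.standard_normal(code_dim) for name in elements}

def mix_codes(codes, swap=None):
    """Concatenate per-element codes into one joint latent vector.

    `swap` optionally replaces a single element's code with a new one,
    leaving the other elements untouched -- the core idea of controlling
    one component of the scene without altering the rest.
    """
    mixed = dict(codes)
    if swap is not None:
        name, new_code = swap
        mixed[name] = new_code
    return np.concatenate([mixed[n] for n in elements])

z_original = mix_codes(codes)
z_new_sofa = mix_codes(codes, swap=("sofa", rng.standard_normal(code_dim)))

# Only the "sofa" slice of the joint latent changes; "table" and "lamp" are identical.
assert not np.allclose(z_original[:code_dim], z_new_sofa[:code_dim])
assert np.allclose(z_original[code_dim:], z_new_sofa[code_dim:])
```

In an actual generator, the joint latent vector (or a structured set of codes) would condition the synthesis network, so swapping one element's code changes only that element in the rendered room.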

Committee:

– President: Joost van de Weijer (UAB)
– Secretary: Ramon Morros (UPC)
– Vocal: Maria Vanrell Martorell (UAB)