3D Multi Object GAN
Fully Convolutional Refined Auto-Encoding Generative Adversarial
Networks for 3D Multi Object Scenes
8/31/2017
[Architecture diagram: input x → Encoder → zenc (fully convolutional code); Normal Distribution → z; z and zenc (reshaped) → Generator → xgen / xrec → Refiner; a Discriminator scores scenes Real/Fake, and a Code Discriminator scores latent codes Real/Fake.]
Agenda
• Introduction
• Dataset
• Network Architecture
• Loss Functions
• Experiments
• Evaluations
• Suggestions of Future Work
Source Code:
https://github.com/yunishi3/3D-FCR-alphaGAN
Introduction
3D multi-object generative models are an extremely important task
for the AR/VR and graphics fields.
- Synthesize a variety of novel 3D multi-object scenes
- Recognize objects, including their shapes and layouts
Prior work has generated only single objects:
Simple 3D-GANs [1]
Simple 3D-VAE [2]
This work targets multi-object scenes.
Dataset
• SUNCG dataset
Extracted only the voxel models from the SUNCG dataset
- Voxel size: 80 x 48 x 80 (downsized from 240 x 144 x 240)
- 12 classes
[empty, ceiling, floor, wall, window, chair, bed, sofa, table, tvs, furn, objs]
- Amount: around 185,000 scenes
- Removed the trimming by camera angles
- Kept only scenes with more than 10,000 occupied voxels
- No labels used
From Princeton [3]
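A minimal sketch of this preprocessing, assuming NumPy arrays of integer class IDs; the slide does not say how the downsizing was done, so the nearest-neighbor subsampling below is an assumption:

```python
import numpy as np

def downsize(vox, factor=3):
    """Downsize 240x144x240 -> 80x48x80 by nearest-neighbor subsampling
    (an assumption; the slide only states the before/after sizes)."""
    return vox[::factor, ::factor, ::factor]

def keep_scene(vox):
    """Filter from the slide: keep scenes with over 10,000 occupied voxels
    (class 0 = empty)."""
    return (vox != 0).sum() > 10_000
```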
Challenges / Difficulties
Sparse, with a great deal of variety

Average occupancy ratio of each object class in the dataset [%]:

class     | empty  | ceiling | floor | wall  | window | chair | bed   | sofa  | table | tv    | furniture | objects
ratio [%] | 92.466 | 0.945   | 1.103 | 1.963 | 0.337  | 0.070 | 0.368 | 0.378 | 0.107 | 0.009 | 1.406     | 0.846

[Chart: average occupancy ratio per room type (dining room, bedroom, garage, living room).]
Network Architecture
Fully Convolutional Refined Auto-Encoding Generative Adversarial Networks
- Architecture similar to 3D-GAN [1]
- Encoder and discriminator approximately mirror the generator
- Latent space is a fully convolutional layer (5 x 3 x 5 x 16)
- Full convolution lets zenc represent more features
- Last activation of the generator is a softmax -> splits voxels into the 12 classes
- Code discriminator is fully connected (2 hidden layers) [4] (see the sketch after this list)
- Refiner has an architecture similar to SimGAN [5]
Novel contributions: multi-class activation for multi-object scenes, and the fully convolutional latent space.
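A minimal sketch of that fully connected code discriminator, assuming PyTorch; the hidden width of 750 is an assumption, since the slide only states that there are two hidden layers:

```python
import torch.nn as nn

class CodeDiscriminator(nn.Module):
    """Two-hidden-layer MLP that scores latent codes as real (z ~ prior)
    or fake (zenc from the encoder)."""
    def __init__(self, z_dim=5 * 3 * 5 * 16, hidden=750):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, 1),  # real/fake logit
        )

    def forward(self, z):
        return self.net(z.flatten(1))  # flatten the 5x3x5x16 code
```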
Network Architecture
Generator Network, inspired by [1] [Wu et al. 2016, MIT]
Layer sizes: z (5x3x5x16) -> reshape & FC -> 5x3x5x512 -> 10x6x10x256 -> 20x12x20x128 -> 40x24x40x64 -> 80x48x80x12
Each layer:
- 3D deconv (5x3x5 kernel, stride 2)
- Batch norm
- LReLU (discriminator), ReLU (encoder, generator)
Last activation:
- Softmax
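A minimal sketch of this generator, assuming PyTorch rather than the repository's own framework; the padding and output_padding values are my assumptions, chosen so each deconvolution exactly doubles the spatial resolution:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Fully convolutional generator following the slide's layer sizes:
    (B,16,5,3,5) -> 512 -> 256 -> 128 -> 64 -> 12 channels at 80x48x80."""
    def __init__(self, z_ch=16, n_classes=12):
        super().__init__()
        def up(c_in, c_out, last=False):
            layers = [nn.ConvTranspose3d(c_in, c_out, kernel_size=(5, 3, 5),
                                         stride=2, padding=(2, 1, 2),
                                         output_padding=1)]
            if not last:  # batch norm + ReLU on all but the final layer
                layers += [nn.BatchNorm3d(c_out), nn.ReLU(inplace=True)]
            return layers

        self.net = nn.Sequential(
            # 1x1x1 conv stands in for the slide's "reshape & FC" step
            nn.Conv3d(z_ch, 512, kernel_size=1),
            nn.BatchNorm3d(512), nn.ReLU(inplace=True),
            *up(512, 256), *up(256, 128), *up(128, 64),
            *up(64, n_classes, last=True),
        )

    def forward(self, z):
        # softmax over the channel axis -> per-voxel class probabilities
        return torch.softmax(self.net(z), dim=1)

# x = Generator()(torch.randn(2, 16, 5, 3, 5))  # -> (2, 12, 80, 48, 80)
```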
Network Architecture
Refiner Network, inspired by [5]
Each layer:
- 3D deconv (stride 1)
- ReLU
- ResNet block repeated 4 times (unshared weights)
Last activation:
- Softmax
[Figures from SimGAN [5]: the refiner makes synthetic UnityEyes images qualitatively closer to real MPIIGaze images without using any label information at training time, and its ResNet block applies two n x n convolutional layers with a skip connection.]
Refiner stack: 80x48x80x12 -> 80x48x80x32 -> (ResNet block x4: two 3x3x3 convs + ReLU each) -> 80x48x80x32 -> 80x48x80x12
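A minimal sketch of this refiner, assuming PyTorch; the channel wiring follows the sizes above, and the identity skip connection follows the SimGAN ResNet block:

```python
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    """SimGAN-style residual block: two 3x3x3 convs with a skip connection."""
    def __init__(self, ch=32):
        super().__init__()
        self.conv1 = nn.Conv3d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv3d(ch, ch, 3, padding=1)

    def forward(self, x):
        h = torch.relu(self.conv1(x))
        return torch.relu(x + self.conv2(h))  # identity skip, then ReLU

class Refiner(nn.Module):
    """12 -> 32 channels, 4 ResNet blocks (unshared weights), 32 -> 12."""
    def __init__(self, n_classes=12, ch=32, n_blocks=4):
        super().__init__()
        self.inp = nn.Conv3d(n_classes, ch, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock3d(ch) for _ in range(n_blocks)])
        self.out = nn.Conv3d(ch, n_classes, 3, padding=1)

    def forward(self, x):
        h = self.blocks(torch.relu(self.inp(x)))
        return torch.softmax(self.out(h), dim=1)  # per-voxel class probabilities
```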
Loss / Training

Reconstruction Loss (keeps reconstruction accuracy high; $w_n$ are occupancy-normalized weights recomputed for every batch):
$\mathcal{L}_{rec} = \sum_{n}^{class} w_n \left[ -\gamma\, x \log x_{rec} - (1-\gamma)(1-x) \log(1-x_{rec}) \right]$

GAN Loss (the discriminator learns to tell real scenes from fake ones; the generator learns to fool it):
$\mathcal{L}_{GAN}^{D} = -\log D(x) - \log\left(1-D(x_{rec})\right) - \log\left(1-D(x_{gen})\right)$
$\mathcal{L}_{GAN}^{G} = -\log D(x_{rec}) - \log D(x_{gen})$

Distribution GAN Loss (the code discriminator learns to tell prior samples $z$ from encodings $z_{enc}$; the encoder learns to fool it):
$\mathcal{L}_{cGAN}^{D} = -\log D_{code}(z) - \log\left(1-D_{code}(z_{enc})\right)$
$\mathcal{L}_{cGAN}^{E} = -\log D_{code}(z_{enc})$

Training losses:
Encoder: $\min_E \mathcal{L} = \mathcal{L}_{cGAN}^{E} + \lambda \mathcal{L}_{rec}$
Generator (with refiner): $\min_G \mathcal{L} = \lambda \mathcal{L}_{rec} + \mathcal{L}_{GAN}^{G}$
Discriminator: $\min_D \mathcal{L} = \mathcal{L}_{GAN}^{D}$
Code discriminator: $\min_C \mathcal{L} = \mathcal{L}_{cGAN}^{D}$

Learning rate: 0.0001
Batch size: 20 (base), 8 (refiner)
Iterations: 100,000 (75,000 base, 25,000 refiner); the refiner is trained after the first 75,000 iterations.
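A minimal sketch of the occupancy-weighted reconstruction loss, assuming PyTorch; the value of $\gamma$ and the exact weight normalization are assumptions, since the slide only states that $w$ is occupancy-normalized per batch:

```python
import torch

def recon_loss(x, x_rec, gamma=0.97, eps=1e-7):
    """Weighted binary cross-entropy over voxel classes.
    x: one-hot targets, x_rec: softmax outputs, both (B, C, D, H, W)."""
    occ = x.sum(dim=(0, 2, 3, 4)) + eps        # occupied voxels per class
    w = (1.0 / occ) / (1.0 / occ).sum()        # occupancy-normalized weights w_n
    bce = (-gamma * x * torch.log(x_rec + eps)
           - (1.0 - gamma) * (1.0 - x) * torch.log(1.0 - x_rec + eps))
    return (w * bce.sum(dim=(0, 2, 3, 4))).sum()
```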
Experiments
Generated from the random distribution
[Samples: FC-VAE vs. 3D FCR-alphaGAN, before and after refinement]
The refiner visually smooths and refines shapes, but scenes generated from the random distribution were still not realistic.
Reconstruction
[Samples: real scenes vs. reconstructions, before and after refinement]
Scenes are almost fully reconstructed, but small shapes disappear.
This architecture worked better than a plain VAE, but it is not enough: the encoder did not generalize to the prior distribution.
Result
Numerical evaluation of reconstruction by Intersection-over-Union (IoU) [6]
Reconstruction accuracy improved thanks to the fully convolutional latent space and the alphaGAN training.
[Charts: per-class IoU and overall IoU, comparing models with the same number of latent space dimensions]
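A minimal sketch of the per-class IoU computation, assuming NumPy arrays of integer class IDs; how empty classes are handled in the average is an assumption:

```python
import numpy as np

def voxel_iou(pred, target, n_classes=12):
    """Per-class Intersection-over-Union between two voxel class maps."""
    ious = []
    for c in range(n_classes):
        p, t = pred == c, target == c
        union = np.logical_or(p, t).sum()
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union if union > 0 else np.nan)
    return ious  # an overall IoU can be taken as the nanmean of this list
```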
Evaluations
Interpolation
Smooth transitions between scenes are obtained.
Evaluations
Latent Space Evaluation
2D mapping by SVD of 200 encoded samples
Color: 1D SVD embedding of the centroid coordinates of each scene
[Plots: fully convolutional latent space vs. standard VAE]
Full convolution lets the latent space capture the scene's spatial context: the fully convolutional mapping follows the 1D centroid embedding from lower right to upper left, while the standard VAE's does not.
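A minimal sketch of that SVD mapping, assuming NumPy; mean-centering before the SVD is an assumption:

```python
import numpy as np

def svd_map_2d(z_samples):
    """Project encoded latent codes onto the top two singular vectors.
    z_samples: (N, ...) array of N encoded samples, flattened per sample."""
    Z = z_samples.reshape(len(z_samples), -1)
    Z = Z - Z.mean(axis=0)                    # center (an assumption)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[:2].T                       # (N, 2) coordinates

# The same routine with Vt[:1] gives the 1D color embedding of the
# scene centroid coordinates described on the slide.
```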
Evaluations
Latent space evaluation by added noise
Effect of the individual spatial dimensions of the 5 x 3 x 5 latent space; red marks the magnitude of change caused by adding Gaussian noise to a single spatial dimension (see the sketch below).
- Dimension (2,0,4) changes objects in the right-back area.
- Dimension (4,0,1) changes objects in the left-front area.
- Dimension (1,0,0) changes objects in the left-back area.
- Dimension (4,0,4) changes objects in the right-front area.
Full convolution ties the latent space to the scene's spatial context.
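A minimal sketch of that probe, assuming a PyTorch latent tensor of shape (B, 16, 5, 3, 5) and the Generator sketched earlier; the noise scale sigma is an assumption:

```python
import torch

def perturb_cell(z, idx, sigma=1.0):
    """Add Gaussian noise to one spatial cell (d, h, w) of the latent code,
    leaving the other 5x3x5 - 1 cells untouched."""
    z = z.clone()
    d, h, w = idx
    z[:, :, d, h, w] += sigma * torch.randn_like(z[:, :, d, h, w])
    return z

# e.g. compare generator(z) with generator(perturb_cell(z, (2, 0, 4)))
# to see which region of the generated scene changes.
```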
Suggestions of Future Work
・Revise the dataset
This dataset is extremely sparse and highly varied. Floors and small objects appear in a huge variety of positions, and some small parts, such as chair legs, broke apart during downsizing. That makes the latent space too hard to predict. It would therefore be valuable to revise the dataset, for example by limiting the variety or adjusting the positions of objects.
・Redefine the latent space
In this work, I defined a single latent space holding all information, such as the shapes and positions of every object. As a result, some small objects disappeared from the generated models, and many unrealistic objects were generated. To solve this, it would be valuable to redefine the latent space, for example by separating it into per-object and layout components; that, however, requires handling a greater variety of objects and explicitly accounting for multiple objects.