
ICLR 2019 Paper List

Deep Generative Models for Highly Structured Data, ICLR 2019 Workshop, New Orleans, Louisiana, United States, May 6, 2019.

FVD: A new Metric for Video Generation.
Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations.
Variational autoencoders trained with q-deformed lower bounds.
Deep Generative Models for Generating Labeled Graphs.
Generating Diverse High-Resolution Images with VQ-VAE.
Perceptual Generative Autoencoders.
A Learned Representation for Scalable Vector Graphics.
Improved Adversarial Image Captioning.
Understanding the Relation Between Maximum-Entropy Inverse Reinforcement Learning and Behaviour Cloning.
Discrete Flows: Invertible Generative Models of Discrete Data.
Deep Random Splines for Point Process Intensity Estimation.
Visualizing and Understanding GANs.
Storyboarding of Recipes: Grounded Contextual Generation.
Understanding Posterior Collapse in Generative Latent Variable Models.
Correlated Variational Auto-Encoders.
Revisiting Auxiliary Latent Variables in Generative Models.
Adversarial Mixup Resynthesizers.
Disentangling Content and Style via Unsupervised Geometry Distillation.
DIVA: Domain Invariant Variational Autoencoder.
Fully differentiable full-atom protein backbone generation.
Interactive Image Generation Using Scene Graphs.
Structured Prediction using cGANs with Fusion Discriminator.
Generative Models for Graph-Based Protein Design.
HYPE: Human-eYe Perceptual Evaluation of Generative Models.
Bias Correction of Learned Generative Models via Likelihood-free Importance Weighting.
Point Cloud GAN.
On Scalable and Efficient Computation of Large Scale Optimal Transport.
Dual Space Learning with Variational Autoencoders.
AlignFlow: Learning from multiple domains via normalizing flows.
WiSE-ALE: Wide Sample Estimator for Aggregate Latent Embedding.
Compositional GAN (Extended Abstract): Learning Image-Conditional Binary Composition.
Disentangled State Space Models: Unsupervised Learning of Dynamics across Heterogeneous Environments.
On the relationship between Normalising Flows and Variational- and Denoising Autoencoders.
Interactive Visual Exploration of Latent Space (IVELS) for peptide auto-encoder model selection.
Adjustable Real-time Style Transfer.
Smoothing Nonlinear Variational Objectives with Sequential Monte Carlo.
A RAD approach to deep mixture models.
Learning to Defense by Learning to Attack.
Unsupervised Demixing of Structured Signals from Their Superposition Using GANs.
Learning Deep Latent-variable MRFs with Amortized Bethe Free Energy Minimization.
A Seed-Augment-Train Framework for Universal Digit Classification.
Generating Molecules via Chemical Reactions.