A hybrid convolutional variational autoencoder for text generation

Stanislau Semeniuta, Aliaksei Severyn, Erhardt Barth

Abstract

In this paper we explore the effect of architectural choices on learning a variational autoencoder (VAE) for text generation. In contrast to the previously introduced VAE model for text, where both the encoder and decoder are RNNs, we propose a novel hybrid architecture that blends fully feed-forward convolutional and deconvolutional components with a recurrent language model. Our architecture exhibits several attractive properties, such as faster run time and convergence and the ability to better handle long sequences; more importantly, it helps to avoid the issue of the VAE collapsing to a deterministic model.
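To make the architecture described in the abstract more concrete, the following is a minimal sketch (not the authors' released code), assuming PyTorch and illustrative hyper-parameters: a convolutional encoder compresses token embeddings into the parameters of the latent distribution, a deconvolutional decoder expands the sampled latent code back into per-timestep features, and a recurrent language model conditions on those features. The loss combines token reconstruction with a weighted KL term; ramping the KL weight up during training is one common way to discourage the collapse to a deterministic model mentioned above.

```python
# Illustrative sketch of a hybrid convolutional/recurrent text VAE (hypothetical
# hyper-parameters; not the authors' implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridConvVAE(nn.Module):
    def __init__(self, vocab_size=10000, emb_dim=128, hidden=256, latent=64, seq_len=60):
        super().__init__()
        self.seq_len = seq_len                      # assumed divisible by 4
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Fully feed-forward convolutional encoder over token embeddings.
        self.enc_conv = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.enc_out_len = seq_len // 4             # two stride-2 convolutions
        self.to_mu = nn.Linear(hidden * self.enc_out_len, latent)
        self.to_logvar = nn.Linear(hidden * self.enc_out_len, latent)
        # Deconvolutional decoder: expands z back into per-timestep features.
        self.from_z = nn.Linear(latent, hidden * self.enc_out_len)
        self.dec_deconv = nn.Sequential(
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose1d(hidden, hidden, kernel_size=4, stride=2, padding=1), nn.ReLU(),
        )
        # Recurrent language model fed with previous tokens plus deconv features.
        self.lstm = nn.LSTM(emb_dim + hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def encode(self, tokens):
        h = self.enc_conv(self.embed(tokens).transpose(1, 2))    # (B, hidden, L/4)
        h = h.flatten(1)
        return self.to_mu(h), self.to_logvar(h)

    def decode(self, z, tokens):
        feats = self.from_z(z).view(z.size(0), -1, self.enc_out_len)
        feats = self.dec_deconv(feats).transpose(1, 2)            # (B, L, hidden)
        # Teacher forcing: shift tokens right, condition each step on deconv features.
        prev = torch.cat([torch.zeros_like(tokens[:, :1]), tokens[:, :-1]], dim=1)
        rnn_in = torch.cat([self.embed(prev), feats], dim=-1)
        h, _ = self.lstm(rnn_in)
        return self.out(h)                                        # (B, L, vocab)

    def forward(self, tokens, kl_weight=1.0):
        mu, logvar = self.encode(tokens)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        logits = self.decode(z, tokens)
        rec = F.cross_entropy(logits.reshape(-1, logits.size(-1)), tokens.reshape(-1))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        # kl_weight is typically annealed from 0 to 1 so the latent code is not ignored.
        return rec + kl_weight * kl
```

A forward pass on a batch of token indices of shape (batch, 60) returns the training loss; gradually increasing kl_weight over the first epochs is a common annealing schedule in this setting.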

Original language: English
Title of host publication: EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings
Number of pages: 11
Publisher: Association for Computational Linguistics (ACL)
Publication date: 09.2017
Pages: 627–637
ISBN (Print): 978-194562683-8
DOIs
Publication status: Published - 09.2017
Event: 2017 Conference on Empirical Methods in Natural Language Processing - Copenhagen, Denmark
Duration: 09.09.2017 – 11.09.2017
Conference number: 150071
