Pythae + Comet

Application: Reconstructing MNIST dataset images

Başak Buluz Kömeçoğlu
Heartbeat

--

The Pythae library, which brings together many Variational Autoencoder models and enables researchers to make comparisons and conduct reproducible research, is now integrated with Comet ML!

The Comet ML experiment tracking tool lets researchers store their experiment configs, monitor training as it runs, and compare results in an easy and understandable way through a visual interface.

Now let’s see in practice how to easily monitor an experiment with Comet ML in Pythae!
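
Before getting into the background, here is a minimal sketch of what that looks like, following Pythae's CometCallback interface. The API key, project name, and the random arrays standing in for MNIST are placeholders you would replace with your own.

```python
# A minimal sketch of Comet-tracked training with Pythae.
# The api_key, project_name, and random arrays are placeholders.
import numpy as np

from pythae.models import VAE, VAEConfig
from pythae.pipelines import TrainingPipeline
from pythae.trainers import BaseTrainerConfig
from pythae.trainers.training_callbacks import CometCallback

# Stand-in for MNIST: batches of 1x28x28 images scaled to [0, 1]
train_data = np.random.rand(1000, 1, 28, 28)
eval_data = np.random.rand(100, 1, 28, 28)

training_config = BaseTrainerConfig(
    output_dir="my_model", num_epochs=10, learning_rate=1e-3
)
model_config = VAEConfig(input_dim=(1, 28, 28), latent_dim=16)
model = VAE(model_config=model_config)

# Build the Comet callback and point it at your Comet project
comet_cb = CometCallback()
comet_cb.setup(
    training_config=training_config,
    model_config=model_config,
    api_key="<your_comet_api_key>",       # placeholder
    project_name="<your_comet_project>",  # placeholder
)

# Pass the callback to the TrainingPipeline: configs and training
# metrics are then logged to Comet as the model trains
pipeline = TrainingPipeline(training_config=training_config, model=model)
pipeline(train_data=train_data, eval_data=eval_data, callbacks=[comet_cb])
```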

What is a (Variational) Autoencoder?

Realistic images, text, audio, and more produced by deep neural networks have come to the fore in recent years as the output of surprisingly capable models.

Although these generative models, which are carefully designed and require huge amounts of data, appear in the literature with many different architectures, the Generative Adversarial Network (GAN) and Variational Autoencoder (VAE) model families are at the forefront of this race!

Autoencoders are non-generative models that learn to automatically encode data, and they consist of two basic parts: an Encoder and a Decoder. Their main purpose is to compress the input data and then reproduce it with as little loss as possible. The Variational Autoencoder family, on the other hand, consists of generative models that can sample random codes from a learned distribution and generate new data from them.
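
To make the two parts concrete, here is a minimal PyTorch sketch of such an Autoencoder for flattened 28x28 images; the layer sizes are illustrative, not Pythae's defaults.

```python
# A minimal Encoder/Decoder Autoencoder sketch (illustrative sizes)
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        # Encoder: compresses the input into a low-dimensional code
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstructs the input from the code
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code)

# Training minimizes a reconstruction loss, e.g. MSE(model(x), x)
model = Autoencoder()
x = torch.rand(8, 784)  # a batch of dummy flattened images
reconstruction = model(x)
```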

Denoising with Autoencoders [Source]

The idea of Variational Autoencoder was presented in 2013 by Diederik P. Kingma and Max Welling in the article “Auto-Encoding Variational Bayes.”

A continuous space of faces generated by Tom White using VAEs [Image Source]

What distinguishes the models in this family from standard Autoencoders is that the input passed through the Encoder is encoded as a probability distribution. If this distribution is, for example, a normal distribution, the Encoder outputs will be its mean and variance. The code is obtained by sampling from this distribution and can then be decoded with the help of the Decoder. So while the Decoder structure is the same in standard and Variational Autoencoder models, the Encoder structure differs.
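
A short sketch of that Encoder-side difference, with illustrative shapes: the Encoder outputs a mean and a log-variance, and the code is drawn by sampling (the standard reparameterization trick).

```python
# VAE encoder sketch: outputs a distribution, not a single code
import torch
import torch.nn as nn

class VAEEncoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=16):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, 128), nn.ReLU())
        self.fc_mu = nn.Linear(128, latent_dim)       # mean of q(z|x)
        self.fc_log_var = nn.Linear(128, latent_dim)  # log-variance of q(z|x)

    def forward(self, x):
        h = self.hidden(x)
        mu, log_var = self.fc_mu(h), self.fc_log_var(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        eps = torch.randn_like(mu)
        z = mu + torch.exp(0.5 * log_var) * eps
        return z, mu, log_var  # z is the sampled code fed to the Decoder
```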

Autoencoder vs. Variational Autoencoder [Source]

Standard Autoencoders can be trained easily with a single loss function and without extra parameters. Variational Autoencoders, on the other hand, must balance the reconstruction loss against the latent (KL) loss, which makes training considerably harder. However, because standard Autoencoders are prone to overfitting, Variational Autoencoders are often preferred.
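
For reference, the two competing terms look roughly like this. The closed-form KL term below assumes a diagonal Gaussian posterior and a standard normal prior, and beta is an illustrative knob for weighting the latent term.

```python
# The two terms a VAE must balance during training
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, log_var, beta=1.0):
    # Reconstruction term: how faithfully the Decoder reproduces the input
    recon = F.mse_loss(x_recon, x, reduction="sum")
    # Latent term: KL divergence between N(mu, sigma^2) and the prior
    # N(0, I), in closed form for diagonal Gaussians
    kl = -0.5 * torch.sum(1.0 + log_var - mu.pow(2) - log_var.exp())
    # beta weights the latent term (beta = 1 is the standard ELBO)
    return recon + beta * kl
```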

Read the rest of this article here.

Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.

Editorially independent, Heartbeat is sponsored and published by Comet, an MLOps platform that enables data scientists & ML teams to track, compare, explain, & optimize their experiments. We pay our contributors, and we don’t sell ads.

If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletter (Deep Learning Weekly), check out the Comet blog, join us on Slack, and follow Comet on Twitter and LinkedIn for resources, events, and much more that will help you build better ML models, faster.

--

Research Assistant at the Information Technologies Institute of Gebze Technical University | PhD Candidate at Gebze Technical University