All Posts

On the convergence of Adversarial Autoencoders

We saw in a previous post how the Kullback-Leibler divergence influences a VAE’s encoder and decoder outputs. In particular, we noticed that although the KL divergence pushes the encoder outputs closer to a standard multivariate normal distribution, the result is far from perfect and some gaps remain. The Adversarial Autoencoder addresses this problem by using a Generative Adversarial Network instead of the KL divergence.
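
To make the idea concrete, here is a minimal sketch of the adversarial regularization (assuming a PyTorch setup, simple fully connected networks, and MNIST-sized inputs, none of which come from the post itself): a discriminator learns to tell encoder outputs apart from samples of the target prior, while the encoder is trained to fool it. The reconstruction term of the loss stays the same as in a plain autoencoder.

```python
import torch
import torch.nn as nn

latent_dim = 2  # assumed latent size, for illustration only

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, latent_dim))
discriminator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()

def adversarial_regularization_losses(x):
    """Discriminator and encoder losses for one batch x of flattened images."""
    z_fake = encoder(x)                # latent codes produced by the encoder
    z_real = torch.randn_like(z_fake)  # samples from the prior N(0, I)
    ones = torch.ones(x.size(0), 1)
    zeros = torch.zeros(x.size(0), 1)
    # The discriminator labels prior samples as 1 and encoder outputs as 0.
    d_loss = bce(discriminator(z_real), ones) + bce(discriminator(z_fake.detach()), zeros)
    # The encoder is trained to make its codes indistinguishable from prior samples.
    g_loss = bce(discriminator(z_fake), ones)
    return d_loss, g_loss
```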

A short discussion on Regression and Classification

At the April 24th Deep Learning for Sciences, Engineering, and Arts Meetup, the following problem was discussed: “Why, for binary classification, don’t we just pick some values to represent the two possible outcomes (e.g. 0 and 1) and use regression with a linear output and an MSE loss?”. I had the impression that the answers given there were not entirely clear to everybody. I am therefore writing this short note, hoping that the arguments presented below will help towards a better understanding.
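
To make the question concrete, here is a minimal sketch of the two setups being compared (scikit-learn, with synthetic data invented purely for illustration): least-squares regression on 0/1 labels versus logistic regression.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)  # binary labels 0/1

linear = LinearRegression().fit(X, y)       # MSE loss, unbounded linear output
logistic = LogisticRegression().fit(X, y)   # cross-entropy loss, sigmoid output

print(linear.predict(X[:5]))                # real values, possibly outside [0, 1]
print(logistic.predict_proba(X[:5])[:, 1])  # probabilities in [0, 1]
```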

On the use of the Kullback–Leibler divergence in Variational Autoencoders

The loss function used to train Variational Autoencoders (VAEs) is the sum of two terms. The first one measures the quality of the autoencoding, i.e. the error between the original sample and its reconstruction. The second term is the Kullback-Leibler divergence (abbreviated KL divergence) between the encoder’s output distribution and a standard multivariate normal distribution. We will illustrate with a few plots the influence of the KL divergence on the encoder and decoder outputs.
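
As a reminder of that structure, here is a minimal sketch of the loss (assuming a Gaussian encoder that outputs mu and log_var and a mean squared error reconstruction term; the post’s own code may differ):

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_reconstructed, mu, log_var):
    """Reconstruction error plus the closed-form KL divergence
    between N(mu, sigma^2) and the standard normal N(0, I)."""
    reconstruction = F.mse_loss(x_reconstructed, x, reduction="sum")
    kl_divergence = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return reconstruction + kl_divergence
```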

Blogging with Jupyter notebooks and Hugo

We are going to introduce a simplified workflow for publishing Jupyter notebooks on a website generated with Hugo. The Python package nb2hugo will be used to convert the notebooks to markdown pages. The process will be fully automated thanks to Netlify. Once everything is configured, you will only have to push your Jupyter notebooks to a Git repository to have them published on your website.

Playing with Pseudo-Random Number Generators (Part 3)

In Part 1 and Part 2, we showed some properties of a classic pseudo-random number generator, the linear congruential generator. In this part, we will introduce a more recent generator, splitmix64. Splitmix64 was created in 2013 as part of Java 8.
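
For reference, here is a minimal Python sketch of the generator, with the constants of the widely circulated public-domain reference implementation (the post itself may present it differently):

```python
MASK64 = (1 << 64) - 1  # keep arithmetic on 64 bits, as in the C reference code

def splitmix64(state):
    """Return (next_state, output) for one step of splitmix64."""
    state = (state + 0x9E3779B97F4A7C15) & MASK64
    z = state
    z = ((z ^ (z >> 30)) * 0xBF58476D1CE4E5B9) & MASK64
    z = ((z ^ (z >> 27)) * 0x94D049BB133111EB) & MASK64
    return state, z ^ (z >> 31)

state, value = splitmix64(0)  # example step from an arbitrary seed
```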

Playing with Pseudo-Random Number Generators (Part 2)

We introduced linear congruential generators in Part 1. In this second part, we will show some defects of such generators, focusing on the case where the modulus is a power of 2.
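
A minimal sketch of one such defect (the 64-bit parameters below are an illustrative choice, not necessarily those used in the post): with a modulus of 2**64 and an odd multiplier and increment, the lowest bit of the state simply alternates, i.e. it has a period of 2.

```python
a, c, m = 6364136223846793005, 1442695040888963407, 2**64  # odd a and c
x = 42
low_bits = []
for _ in range(10):
    x = (a * x + c) % m
    low_bits.append(x & 1)  # keep only the least significant bit
print(low_bits)  # [1, 0, 1, 0, 1, 0, 1, 0, 1, 0]
```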

Playing with Pseudo-Random Number Generators (Part 1)

Random number generators have many applications. They can be used to introduce an element of “luck” into a game (for example, drawing cards in poker), but also for other tasks such as Monte Carlo simulations (drawing multiple “random” samples). In this first part, we will introduce a very well-known pseudo-random number generator, the linear congruential generator.
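
For readers who want a quick picture of the recurrence x_{n+1} = (a * x_n + c) mod m, here is a minimal sketch (the parameters below are the classic Numerical Recipes values, chosen purely for illustration):

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Yield successive values of the linear congruential recurrence."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x

generator = lcg(seed=2018)
print([next(generator) for _ in range(5)])  # five pseudo-random 32-bit integers
```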

Programming a Decision Tree Predictor in Scala (Part 5)

We saw in Part 4 how to build a decision tree predictor. We are now going to create a predictor for a classic machine learning data set, the Iris data set.

Programming a Decision Tree Predictor in Scala (Part 4)

We saw in Part 1 the basic structure of a decision tree. In Part 2 we created a class to handle the samples and labels of a data set. And in Part 3 we saw how to compute the leaves’ values to fit a data set. In this part, we are going to combine the previous results to build a decision tree predictor.

Programming a Decision Tree Predictor in Scala (Part 3)

We saw in Part 1 the basic structure of a decision tree, and in Part 2 we created a class to handle the samples and labels of a data set. We are now going to see how to compute the prediction values of the leaves so that the tree fits a data set.