In this series of four (and possibly more) blog posts, I am going to look at classification methods, and in particular (at least initially) at neural network-style methods. This is a hot topic (again) at the moment, with recent demonstrations of “Deep Learning” techniques by companies such as Google.
The articles in this series will cover three foundational techniques for this sort of classification: logistic regression (parts I and II), feedforward neural networks, also known as multilayer perceptrons (part III), and Restricted Boltzmann Machines, which form the basis of Deep Belief Networks (part IV). In keeping with machine learning tradition, I will apply them to classification of the MNIST handwritten digits dataset (see part II).
Hopefully these posts will give some insight into the methods themselves, as well as examining their performance and how and where they can be applied. I’ll give Matlab code for each method, although I make no guarantees about its correctness or efficiency.