Tags: machine learning, notes, all
Last Edit: 2019-11-21
Notes on some machine learning related topics
Getting started in machine learning sometimes feels like drinking water from a firehose (pardon my cliché). The field has academic roots in so many different disciplines (Bayesian statistics, optimization, and information theory - oh my!) that I decided to keep a personal glossary of various machine learning concepts (mainly relating to neural networks and natural language processing) for my own benefit. These notes might not be complete or accurate - if there is an idea you would like to see written about here, feel free to shoot me an email!
Variational Inference consists of framing inference as an optimization problem. For example, when we are working with an intractable probability distribution $p$, variational inference can be significantly faster than Markov chain Monte Carlo (MCMC) estimation.
For data $x$ and latent variable $z$, we have:

$$p(z \mid x) = \frac{p(x \mid z)\,p(z)}{p(x)}$$
As a result, we approximate the conditional density of the latent variables given the observed variables by using optimization methods. For a distribution $p(z \mid x)$, we can approximate it with our own distribution $q(z)$ such that we minimize the KL-Divergence between the two distributions:

$$q^*(z) = \arg\min_{q} \; \mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big)$$
We can re-write the KL-Divergence between $q(z)$ and $p(z \mid x)$ as:

$$\mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big) = \mathbb{E}_q[\log q(z)] - \mathbb{E}_q[\log p(z \mid x)]$$
Then, we can expand the conditional term (and apply the logarithm identities) to get:

$$\mathrm{KL}\big(q(z) \,\|\, p(z \mid x)\big) = \mathbb{E}_q[\log q(z)] - \mathbb{E}_q[\log p(x, z)] + \log p(x)$$
The $\log p(x)$ term makes computing the KL-Divergence intractable, since we assumed $p(x)$ to be intractable. However, since the KL-Divergence is always at least 0, we can give a lower bound on $\log p(x)$:

$$\mathbb{E}_q[\log p(x, z)] - \mathbb{E}_q[\log q(z)] \leq \log p(x)$$
We define the L.H.S. to be the ELBO: the evidence lower bound. This is equivalent to the negative KL-Divergence plus $\log p(x)$. The nice thing about this is that $\log p(x)$ is a constant with respect to the distribution $q$, so we can minimize the KL-Divergence by maximizing the ELBO, without ever calculating $\log p(x)$.
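To make this concrete, here is a small numerical sketch (my own toy example, not from the notes) using a conjugate Gaussian model, where the prior is $p(z) = \mathcal{N}(0, 1)$ and the likelihood is $p(x \mid z) = \mathcal{N}(z, 1)$, so the evidence $p(x) = \mathcal{N}(0, 2)$ is available in closed form. A Monte Carlo estimate of the ELBO for any Gaussian $q$ should sit below $\log p(x)$, and it meets it when $q$ is the true posterior.

```python
import numpy as np

def log_normal_pdf(x, mean, var):
    """Log density of a univariate Gaussian N(mean, var)."""
    return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)

rng = np.random.default_rng(0)
x = 1.5  # a single observed data point

# Toy model: p(z) = N(0, 1), p(x|z) = N(z, 1)  =>  p(x) = N(0, 2)
log_evidence = log_normal_pdf(x, 0.0, 2.0)

def elbo(q_mean, q_var, n_samples=100_000):
    """Monte Carlo estimate of E_q[log p(x, z)] - E_q[log q(z)]."""
    z = rng.normal(q_mean, np.sqrt(q_var), size=n_samples)
    log_joint = log_normal_pdf(x, z, 1.0) + log_normal_pdf(z, 0.0, 1.0)
    log_q = log_normal_pdf(z, q_mean, q_var)
    return np.mean(log_joint - log_q)

print("log p(x)             :", log_evidence)
print("ELBO with a bad q    :", elbo(q_mean=-1.0, q_var=2.0))   # strictly below
print("ELBO with the true q :", elbo(q_mean=x / 2, q_var=0.5))  # ~= log p(x)
```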
Autoencoders are models that consist of an encoder-decoder architecture, where the encoder takes data $x$ and encodes it into a latent representation $z$, and the decoder takes a latent representation $z$ and approximates/re-generates the original data. The goal is to learn latent representations (posterior inference over $z$), as well as to learn generation from the latent space (marginal inference over $x$).
Autoencoders can be modelled using neural networks for both the encoder and decoder mechanisms. However, this can leave us with a lack of regularity in the latent space (i.e. a non-continuous latent space), which makes generation hard for the decoder. We solve this using a variational autoencoder: an autoencoder whose training we regularize, not only so that we don't overfit, but mainly so that the latent space is suitable for generation. We do this by encoding the autoencoder's input as a probability distribution over the latent space rather than as a single point.
In order to train our VAE, we must use backpropagation to compute the gradient of the ELBO. However, since the network's nodes represent a stochastic process, we instead model each stochastic neuron as a deterministic function of its parameters $\mu$ and $\sigma$ plus independent noise, which allows us to propagate errors meaningfully throughout the network. This is known as the re-parameterization trick.
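A minimal sketch of the trick (my own illustration, using PyTorch and assuming a Gaussian latent): instead of sampling $z \sim \mathcal{N}(\mu, \sigma^2)$ directly, we sample noise $\epsilon \sim \mathcal{N}(0, 1)$ and compute $z = \mu + \sigma\epsilon$, so the randomness sits outside the computation graph and gradients can flow back into $\mu$ and $\sigma$.

```python
import torch

# Encoder outputs for a batch of 4 examples with a 2-dimensional latent
mu = torch.zeros(4, 2, requires_grad=True)       # predicted means
log_var = torch.zeros(4, 2, requires_grad=True)  # predicted log-variances

# Re-parameterization: z = mu + sigma * eps, with eps ~ N(0, I)
eps = torch.randn_like(mu)               # the randomness lives outside the graph
z = mu + torch.exp(0.5 * log_var) * eps

# Any downstream loss (a dummy one here) can now backpropagate into mu/log_var
loss = (z ** 2).sum()
loss.backward()
print(mu.grad.shape, log_var.grad.shape)  # both torch.Size([4, 2])
```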
The Maximum Likelihood Estimate for an n-gram can be given by the formula:

$$P(w_n \mid w_{n-N+1}^{\,n-1}) = \frac{C(w_{n-N+1}^{\,n-1}\, w_n)}{C(w_{n-N+1}^{\,n-1})}$$
where $C(\cdot)$ is the frequency of the given sequence in the corpus. Sequence generation can be performed by sampling from this distribution.
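As a small sketch (my own toy corpus, not from the notes), here is the bigram case, where the estimate reduces to $P(w_i \mid w_{i-1}) = C(w_{i-1} w_i) \,/\, C(w_{i-1})$:

```python
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()

unigram_counts = Counter(corpus)
bigram_counts = Counter(zip(corpus, corpus[1:]))

def bigram_mle(prev, word):
    """MLE estimate P(word | prev) = C(prev word) / C(prev)."""
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(bigram_mle("the", "cat"))  # 2/3: "the" occurs 3 times, "the cat" twice
print(bigram_mle("cat", "sat"))  # 1/2
```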
Perplexity is an intrinsic evaluation method for language models that captures information about the entropy within the test set. Perplexity for a given test set can be computed as:

$$PP(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}}$$
This estimate can be approximated using the Markov Assumption (for a bi-gram model):

$$PP(W) \approx \left( \prod_{i=1}^{N} \frac{1}{P(w_i \mid w_{i-1})} \right)^{\frac{1}{N}}$$
Perplexity can be thought of as the weighted average branching factor of a language, and generally lower perplexity is better.
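Continuing the toy bigram model from above (again my own sketch), perplexity of a test sequence can be computed in log space for numerical stability; here we normalize by the number of bigram transitions:

```python
import math
from collections import Counter

train = "the cat sat on the mat the cat ate".split()
unigram_counts = Counter(train)
bigram_counts = Counter(zip(train, train[1:]))

def p_bigram(prev, word):
    return bigram_counts[(prev, word)] / unigram_counts[prev]

test = "the cat sat on the mat".split()

# PP(W) = (prod 1 / P(w_i | w_{i-1}))^(1/N), computed via log probabilities
log_prob = sum(math.log(p_bigram(prev, word)) for prev, word in zip(test, test[1:]))
n_transitions = len(test) - 1
perplexity = math.exp(-log_prob / n_transitions)
print(perplexity)  # lower is better: the model "branches" less on this test set
```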
Word Embeddings are vectors in some space $\mathbb{R}^n$ such that they encode lexical semantics. For example, the vectors of `cat` and `kitten` will have a small vector distance, whereas the vectors of `cat` and `chair` will be far apart.
We can compare embeddings using cosine similarity. For vectors $u$ and $v$:

$$\text{sim}(u, v) = \frac{u \cdot v}{\|u\|\,\|v\|}$$

where the range is $[-1, 1]$ and we can define a distance as the complement $1 - \text{sim}(u, v)$.
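A short sketch (with made-up 3-dimensional vectors purely for illustration) of the similarity and its complement distance:

```python
import numpy as np

def cosine_similarity(u, v):
    """sim(u, v) = (u . v) / (||u|| ||v||), which lies in [-1, 1]."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Made-up toy "embeddings"; real ones would come from a trained model
cat = np.array([0.9, 0.8, 0.1])
kitten = np.array([0.85, 0.75, 0.2])
chair = np.array([0.1, 0.2, 0.9])

print(cosine_similarity(cat, kitten))      # close to 1 -> semantically similar
print(cosine_similarity(cat, chair))       # much smaller -> dissimilar
print(1 - cosine_similarity(cat, kitten))  # the complement, a cosine distance
```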
Recurrent Neural Networks are like vanilla feed-forward networks, except they contain cycles, which allow the network to process sequential data. RNNs do this by maintaining a hidden state $h_t$ that is updated at time-step $t$ and is later fed back into the network, along with the network's input, at time-step $t+1$. The hidden state lets the network maintain context while processing the sequence.
Sometimes too much context can be a burden for the network, and results in the vanishing gradient problem, where errors are propagated back through so many time-steps that the gradients tend to zero. This problem is addressed by models that manage context better, namely by selectively remembering and forgetting parts of the context. Examples of these models are LSTMs (Long Short-Term Memory networks) and GRUs (Gated Recurrent Units).
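A minimal sketch (my own, in NumPy) of a single vanilla RNN cell rolled over a toy sequence, showing how the hidden state carries context from one time-step to the next:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 4, 8, 5

# Randomly initialized weights for one vanilla RNN cell
W_xh = rng.normal(scale=0.1, size=(hidden_dim, input_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

xs = rng.normal(size=(seq_len, input_dim))  # a toy input sequence
h = np.zeros(hidden_dim)                    # initial hidden state

for x_t in xs:
    # h_t = tanh(W_xh x_t + W_hh h_{t-1} + b): the new state mixes the
    # current input with the context accumulated so far
    h = np.tanh(W_xh @ x_t + W_hh @ h + b_h)

print(h.shape)  # (8,) - a fixed-size summary of the sequence seen so far
```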
Given some piece of text, certain words are more important than others, and we want our neural network to weight their relative importance accordingly.
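This is the intuition behind attention; below is a minimal sketch (my own illustration, assuming simple dot-product attention over word vectors) where a query scores each word and a softmax turns the scores into relative importances:

```python
import numpy as np

def softmax(scores):
    exp = np.exp(scores - scores.max())  # subtract max for numerical stability
    return exp / exp.sum()

rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(6, 8))  # 6 words, each an 8-dimensional vector
query = rng.normal(size=8)              # what we are currently "looking for"

scores = word_vectors @ query           # dot-product relevance of each word
weights = softmax(scores)               # relative importances, sum to 1
context = weights @ word_vectors        # weighted summary of the text

print(np.round(weights, 3), weights.sum())  # more relevant words weigh more
```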
Markov Processes are systems that are rooted in the Markov Assumption, which states that given sequential events $X_1, X_2, \ldots, X_{n+1}$, we have that:

$$P(X_{n+1} \mid X_1, X_2, \ldots, X_n) = P(X_{n+1} \mid X_n)$$
In other words, our process only depends on the previous state and is memory-less.
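A tiny sketch (my own toy weather example) of sampling such a process, where each next state is drawn using only the current state and the earlier history is never consulted. The transition matrix is column-stochastic to match the convention used below:

```python
import numpy as np

rng = np.random.default_rng(0)
states = ["sunny", "rainy"]

# transition[:, i] = distribution of the next state given current state i
# (each column sums to 1)
transition = np.array([[0.9, 0.5],
                       [0.1, 0.5]])

current = 0  # start in the "sunny" state
chain = [states[current]]
for _ in range(10):
    # Only `current` matters here - the rest of `chain` is irrelevant
    current = rng.choice(2, p=transition[:, current])
    chain.append(states[current])

print(" -> ".join(chain))
```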
A Markov Matrix $A$ is a stochastic matrix, which means that the columns of $A$ are probability vectors that model some distribution. In other words, the columns of $A$ sum to 1 and obey the axiom of probability that each entry is non-negative. The reason that these stochastic matrices are called Markov Matrices is that $A$ doesn't change with respect to time. In other words, we have that at time $k$, the probability distribution (across the states represented by the vector) is $u_k = A^k u_0$, where $u_0$ is the distribution at time $0$. This is nice because we can easily compute powers of matrices using diagonalization.
The steady state distribution of $A$ is the distribution $u_\infty$ that $u_k = A^k u_0$ tends to as $k$ tends to $\infty$. This means that $A u_\infty = u_\infty$, which implies that $u_\infty$ is an eigenvector of $A$ corresponding to an eigenvalue of 1. In fact, the largest eigenvalue that $A$ can have is 1.
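A quick sketch (my own example) that recovers the steady state of a column-stochastic matrix as the eigenvector for eigenvalue 1, and checks that repeatedly applying $A$ to an arbitrary starting distribution converges to it:

```python
import numpy as np

# Column-stochastic Markov matrix: each column sums to 1
A = np.array([[0.9, 0.5],
              [0.1, 0.5]])

eigenvalues, eigenvectors = np.linalg.eig(A)
idx = np.argmin(np.abs(eigenvalues - 1.0))  # the eigenvalue (numerically) equal to 1
steady = np.real(eigenvectors[:, idx])
steady = steady / steady.sum()              # rescale into a probability vector

u0 = np.array([0.0, 1.0])                   # start entirely in the second state
print(steady)                               # [0.8333... 0.1666...]
print(np.linalg.matrix_power(A, 50) @ u0)   # converges to the same distribution
```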
The Perron-Frobenius Theorem (stated here for positive matrices): let $A$ be a square matrix with all positive entries, and let $\lambda$ be the eigenvalue of $A$ for which $|\lambda|$ is maximized. Then, 1) $\lambda$ is a real, positive eigenvalue of $A$ with a positive eigenvector, and 2) the algebraic and geometric multiplicity of $\lambda$ is 1.
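A quick numerical check (my own example) on a random positive matrix: the largest-magnitude eigenvalue comes out real, and its eigenvector can be scaled to be entirely positive:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 4))     # all entries strictly positive

eigenvalues, eigenvectors = np.linalg.eig(A)
idx = np.argmax(np.abs(eigenvalues))       # eigenvalue of largest magnitude
dominant = eigenvalues[idx]
vector = np.real(eigenvectors[:, idx])
vector = vector / vector[0]                # fix the arbitrary sign/scale

print(np.isclose(np.imag(dominant), 0.0))  # True: the dominant eigenvalue is real
print(np.all(vector > 0))                  # True: its eigenvector is positive
```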
The Spectral Theorem: let $A$ be a real, symmetric matrix, so that $A = A^T$. Then, we have that 1) all the eigenvalues of $A$ are real and 2) there exists an orthonormal basis of eigenvectors for $A$.
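A matching check (again my own example) for a random real symmetric matrix, using `np.linalg.eigh`, which is designed for symmetric/Hermitian matrices and returns real eigenvalues with orthonormal eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4))
A = B + B.T                                # real and symmetric: A == A.T

eigenvalues, Q = np.linalg.eigh(A)         # columns of Q are the eigenvectors
print(eigenvalues)                                     # all real
print(np.allclose(Q.T @ Q, np.eye(4)))                 # True: orthonormal basis
print(np.allclose(A, Q @ np.diag(eigenvalues) @ Q.T))  # True: A = Q diag(lambda) Q^T
```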
Thanks for reading!