Improving PILCO with a Neural Network Gaussian Process
Introduction and Overview
PILCO is a model-based reinforcement learning algorithm that uses Gaussian Process inference to learn a model of a robotics control problem, which then serves as a simulated environment in which a policy is trained from scratch. The algorithm can solve non-trivial reinforcement learning problems with very little data and time. Unfortunately, Gaussian Processes do not scale well to more complex, non-smooth dynamics models.
To address this issue, a new variant of PILCO called DeepPILCO was created. DeepPILCO replaced PILCO's Gaussian Process with a Bayesian Neural Network, which allowed the algorithm to model more complex systems. Unfortunately, a neural network requires more data to train than a Gaussian Process, which takes away from the data efficiency of the algorithm.
Fortunately, a 2017 paper by Lee et al. showed that certain deep neural networks converge to a Gaussian Process in the infinite-width limit. Exact relations of this kind were derived in the form of the Neural Network Gaussian Process and the Neural Tangent Kernel. My research sought to analyze whether there would be any performance gains from replacing PILCO's Gaussian Process with a Neural Network Gaussian Process.
A Quick Overview of the PILCO Algorithm
Consider a dynamical system of the form

$$x_{t+1} = f(x_t, u_t) + w, \qquad w \sim \mathcal{N}(0, \Sigma_w),$$

where $x_t \in \mathbb{R}^D$ is a vector describing the state of the system at time $t$, $u_t \in \mathbb{R}^F$ is the control signal applied at time $t$, $f$ is an unknown function governing the transition dynamics of the physical system, and $w$ is random Gaussian noise.
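To make this concrete, here is a minimal sketch of such a system, using a toy pendulum as the unknown $f$; the dynamics, constants, and names here are illustrative assumptions, not part of PILCO itself.

```python
import numpy as np

def pendulum_step(x, u, dt=0.05, g=9.81, l=1.0, m=1.0):
    """Toy 'true' dynamics f(x, u): x = [theta, theta_dot], u = applied torque."""
    theta, theta_dot = x
    theta_ddot = (u - m * g * l * np.sin(theta)) / (m * l ** 2)
    return np.array([theta + dt * theta_dot, theta_dot + dt * theta_ddot])

def env_step(x, u, noise_std=0.01, rng=np.random.default_rng(0)):
    """One transition x_{t+1} = f(x_t, u_t) + w with additive Gaussian noise w."""
    return pendulum_step(x, u) + rng.normal(0.0, noise_std, size=2)
```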
The goal of PILCO is to find a controller $\pi(x, \theta)$, a deterministic policy that is a function of the current state $x$ and parameters $\theta$, which minimizes the sum of the expected cost over time,

$$J(\theta) = \sum_{t=0}^{T} \mathbb{E}_{x_t}\!\left[c(x_t)\right].$$
To learn the function $f$, PILCO uses Gaussian Process regression. A Gaussian Process is defined by its mean and covariance functions. PILCO uses a zero mean function and the Squared Exponential covariance function (kernel), which is defined as

$$k(\tilde{x}, \tilde{x}') = \sigma_f^2 \exp\!\left(-\tfrac{1}{2}(\tilde{x} - \tilde{x}')^{\top} \Lambda^{-1} (\tilde{x} - \tilde{x}')\right),$$

where $\Lambda$ is a diagonal matrix of squared length-scales and $\sigma_f^2$ is the signal variance.
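As an illustration, a minimal numpy sketch of this kernel on batches of inputs (the function name and shapes are my own, not PILCO's implementation):

```python
import numpy as np

def se_kernel(X1, X2, lengthscales, signal_var):
    """Squared exponential kernel k(x, x') = sf^2 * exp(-0.5 (x-x')^T Lambda^{-1} (x-x')).

    X1: (n, d), X2: (m, d), lengthscales: (d,), signal_var: scalar.
    Returns the (n, m) kernel matrix.
    """
    A = X1 / lengthscales          # scale each input dimension by its length-scale
    B = X2 / lengthscales
    sq_dists = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return signal_var * np.exp(-0.5 * sq_dists)
```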
A GP can be thought of as a probability distribution over the possible functions that can approximate the initial given data, which in PILCO is generated by applying the initial policy. PILCO's GP uses training inputs $\tilde{x}_t = (x_t, u_t)$ and training targets

$$\Delta_t = x_{t+1} - x_t$$

to predict the latent function $f$, which in turn is used to predict the difference between successive states given the previous state $x_t$ and the control signal $u_t$. This predictive distribution is assumed to be Gaussian and gives a one-step prediction of the dynamics of the environment. The values $\sigma_f^2$, $\Lambda$, and the noise variance $\sigma_n^2$ are hyperparameters learned via evidence maximization (maximizing the log marginal likelihood) in the original PILCO algorithm.
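For intuition, here is a minimal sketch of standard one-step GP prediction using the `se_kernel` helper above; it omits details of the actual PILCO implementation, such as modelling each output dimension with its own GP and hyperparameters.

```python
def gp_predict(X_train, y_train, X_test, lengthscales, signal_var, noise_var):
    """Posterior mean and marginal variance of the GP at X_test, given (X_train, y_train)."""
    K = se_kernel(X_train, X_train, lengthscales, signal_var)
    K += noise_var * np.eye(len(X_train))              # add observation noise
    K_s = se_kernel(X_train, X_test, lengthscales, signal_var)
    K_ss = se_kernel(X_test, X_test, lengthscales, signal_var)
    alpha = np.linalg.solve(K, y_train)                # K^{-1} y
    mean = K_s.T @ alpha                               # posterior mean
    v = np.linalg.solve(K, K_s)
    var = np.diag(K_ss - K_s.T @ v)                    # posterior marginal variance
    return mean, var
```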
PILCO creates a long-term cost by evaluating these one-step transitions over a horizon $t = 1, \ldots, T$. To do this, the predictive state distributions $p(x_1), \ldots, p(x_T)$ need to be calculated. Although an exact representation can be written as an integral over the GP posterior and the previous state distribution, it is not analytically tractable, so $p(x_t)$ is assumed to be Gaussian and then approximated via Gaussian Moment Matching. $p(x_t)$ is then given by

$$p(x_t) = \mathcal{N}(x_t \mid \mu_t, \Sigma_t),$$

where

$$\mu_t = \mu_{t-1} + \mu_\Delta, \qquad \Sigma_t = \Sigma_{t-1} + \Sigma_\Delta + \mathrm{Cov}[x_{t-1}, \Delta_t] + \mathrm{Cov}[\Delta_t, x_{t-1}].$$
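PILCO computes these moments analytically for the SE kernel; the exact expressions are fairly involved, so as an illustration only, here is a hedged Monte Carlo sketch that matches the first two moments of the one-step prediction. It reuses `gp_predict` from above and, for brevity, ignores the GP's predictive variance.

```python
def propagate_gaussian_mc(mu, Sigma, policy, gp_params, data, n_samples=1000,
                          rng=np.random.default_rng(0)):
    """Approximate p(x_t) = N(mu_t, Sigma_t) from p(x_{t-1}) = N(mu, Sigma) by
    sampling states, applying the policy, predicting the state difference with
    the GP, and matching the first two moments of the resulting samples."""
    X_train, dX_train = data                                      # GP training data
    xs = rng.multivariate_normal(mu, Sigma, size=n_samples)       # samples of x_{t-1}
    us = np.array([np.atleast_1d(policy(x)) for x in xs])         # u_{t-1} = pi(x_{t-1})
    gp_inputs = np.hstack([xs, us])
    # Predict each dimension of Delta_t (mean only; PILCO also propagates
    # the GP's predictive uncertainty analytically).
    deltas = np.column_stack([
        gp_predict(X_train, dX_train[:, d], gp_inputs, *gp_params)[0]
        for d in range(dX_train.shape[1])
    ])
    next_xs = xs + deltas
    return next_xs.mean(axis=0), np.cov(next_xs, rowvar=False)    # matched moments
```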
The total cost $J(\theta)$ is then given by summing the expected immediate cost $\mathbb{E}_{x_t}[c(x_t)]$ over $t = 0, \ldots, T$, for a suitable analytically tractable cost function $c(x)$; the PILCO paper uses a saturating cost that can be integrated against a Gaussian state distribution in closed form.
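For example, the saturating cost used in PILCO has roughly the form $c(x) = 1 - \exp(-\|x - x_{\text{target}}\|^2 / (2\sigma_c^2))$; a minimal sketch (the width constant is an illustrative choice):

```python
def saturating_cost(x, x_target, width=0.25):
    """Saturating immediate cost c(x) = 1 - exp(-||x - x_target||^2 / (2 * width^2))."""
    d2 = np.sum((x - x_target) ** 2)
    return 1.0 - np.exp(-d2 / (2.0 * width ** 2))
```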
By minimizing $J(\theta)$ with respect to the policy parameters $\theta$, using the analytic gradients $\partial J / \partial \theta$ that the Gaussian approximation makes available, the parameters of $\pi$ are learned until convergence to a good, but not necessarily optimal, policy $\pi^*$.
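Putting the pieces together, the policy improvement step can be sketched as a loop over the predicted rollout. This toy version reuses the helpers above, evaluates the cost at the predicted mean rather than computing $\mathbb{E}[c(x_t)]$ exactly, and assumes a simple saturated linear policy, so it should be read as an illustration of the structure rather than as PILCO's actual optimizer, which uses analytic gradients with a gradient-based method such as BFGS.

```python
def predicted_cost(theta, mu0, Sigma0, x_target, data, gp_params, horizon=30):
    """Roll the learned model forward under pi(x; theta) and sum the predicted cost."""
    policy = lambda x: np.tanh(theta @ x)                # toy saturated linear policy
    mu, Sigma, total = mu0, Sigma0, 0.0
    for _ in range(horizon):
        mu, Sigma = propagate_gaussian_mc(mu, Sigma, policy, gp_params, data)
        total += saturating_cost(mu, x_target)           # cost at the mean, not E[c(x_t)]
    return total
```

The parameters $\theta$ could then be optimized with an off-the-shelf routine such as `scipy.optimize.minimize`.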
Neural Network Gaussian Process modifications to PILCO
Theoretically speaking, in a very general sense, a simple (one-hidden-layer) Neural Network can be defined as

$$f(x) = W_2\, \phi(W_1 x),$$

where the matrices $W_1$, $W_2$ are the parameters of the Neural Network and $\phi$ is an element-wise activation function. The goal of the Neural Network is then to learn a relation between the input data $x_i$ and the outputs $y_i$ by minimizing the loss

$$\mathcal{L}(W_1, W_2) = \frac{1}{2} \sum_{i} \left\| f(x_i) - y_i \right\|^2,$$

given by the sum of squared differences between the training targets and the outputs of the Neural Network, whose minimum can be found via gradient descent,

$$W \leftarrow W - \eta \, \nabla_W \mathcal{L}.$$
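A minimal numpy sketch of this two-layer network trained by full-batch gradient descent; the ReLU activation, initialization scale, and learning rate are illustrative assumptions:

```python
def init_network(d_in, width, d_out, rng=np.random.default_rng(0)):
    """Random Gaussian initialization of W1 and W2, scaled by fan-in."""
    W1 = rng.normal(0.0, 1.0 / np.sqrt(d_in), size=(width, d_in))
    W2 = rng.normal(0.0, 1.0 / np.sqrt(width), size=(d_out, width))
    return W1, W2

def forward(W1, W2, X):
    """f(x) = W2 * phi(W1 x) with ReLU activation, applied to each row of X."""
    H = np.maximum(X @ W1.T, 0.0)           # hidden activations phi(W1 x)
    return H @ W2.T, H

def gradient_step(W1, W2, X, Y, lr=1e-2):
    """One full-batch gradient descent step on the squared-error loss."""
    pred, H = forward(W1, W2, X)
    err = pred - Y                           # dL/dpred
    gW2 = err.T @ H
    gW1 = ((err @ W2) * (H > 0)).T @ X       # backprop through the ReLU
    return W1 - lr * gW1, W2 - lr * gW2
```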
When $W_1$ is frozen, training the Neural Network under gradient descent is equivalent to kernel regression, where $\phi(W_1 x)$ acts as the feature map. In this finite, low-dimensional case, this is equivalent to linear regression on the features.
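A sketch of this frozen-first-layer view, fitting only $W_2$ by linear least squares on the fixed features $\phi(W_1 x)$ (the small ridge term is my own addition for numerical stability):

```python
def fit_last_layer(W1, X, Y, ridge=1e-6):
    """Fit W2 in closed form with W1 frozen: linear regression on phi(W1 x)."""
    H = np.maximum(X @ W1.T, 0.0)                        # fixed random features
    A = H.T @ H + ridge * np.eye(H.shape[1])
    W2 = np.linalg.solve(A, H.T @ Y).T                   # least-squares solution
    return W2
```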
Furthermore, when the weights are sampled from a zero-mean Gaussian scaled by the layer width, and the widths of the hidden layers tend towards infinity, the Neural Network converges to a Gaussian Process. When this infinite-width limit has a closed form, it is known as a Neural Network Gaussian Process (NNGP). The NNGP has a kernel that depends on the architecture of the neural network, and it is this kernel that was used as a drop-in replacement for PILCO's Squared Exponential kernel.
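As a sketch of how such a kernel can be obtained in practice, the neural-tangents library (assuming it is available) computes the exact infinite-width NNGP kernel for a chosen architecture; the architecture below is an illustrative assumption, not necessarily the one used in my experiments.

```python
import jax.numpy as jnp
from neural_tangents import stax

# Infinite-width fully connected network: the width passed to Dense is ignored
# by kernel_fn, which computes the exact infinite-width NNGP kernel.
_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(512), stax.Relu(),
    stax.Dense(1),
)

def nngp_kernel(X1, X2):
    """NNGP covariance matrix between two batches of GP inputs (x, u)."""
    return kernel_fn(jnp.asarray(X1), jnp.asarray(X2), get='nngp')
```

In principle, the resulting matrix can be substituted for `se_kernel` in the GP prediction sketch above; in practice, PILCO's moment-matching equations rely on closed-form integrals of the SE kernel, so they must be re-derived or approximated for the new kernel.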
Conclusion
The results of my project are detailed in my paper.