
Project: Reinforcement Learning in a Continuous State Space

Participants

Prof. Dr. Jochen Garcke, Irene Klompmaker

Description

Reinforcement learning is an area of machine learning in which representations of functions over a high-dimensional space are needed. The underlying problem is that of an agent that must learn behaviour through trial-and-error interactions with a dynamic environment, which is described by a state space.
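
As a purely illustrative sketch (not part of the project), the following Python snippet shows an agent estimating the utilities of its actions by trial and error in a small toy chain environment with a finite state set; the environment, reward, and all parameter values are invented for this example. The project's setting replaces the finite state set with a continuous state space.

    import numpy as np

    # Toy chain environment: states 0..N-1, actions {0: left, 1: right};
    # reaching the right end yields reward 1 and ends the episode.
    N = 10
    rng = np.random.default_rng(0)

    def step(s, a):
        s_next = min(max(s + (1 if a == 1 else -1), 0), N - 1)
        done = s_next == N - 1
        return s_next, (1.0 if done else 0.0), done

    def greedy(q_row):
        # break ties randomly so the initial all-zero table explores both directions
        best = np.flatnonzero(q_row == q_row.max())
        return int(rng.choice(best))

    # Tabular Q-learning: trial-and-error estimation of action utilities.
    Q = np.zeros((N, 2))
    alpha, gamma, eps = 0.1, 0.95, 0.1
    for episode in range(500):
        s, done = 0, False
        for _ in range(1000):                      # cap episode length
            a = int(rng.integers(2)) if rng.random() < eps else greedy(Q[s])
            s_next, r, done = step(s, a)
            # update towards reward plus discounted utility of the next state
            target = r + gamma * (0.0 if done else Q[s_next].max())
            Q[s, a] += alpha * (target - Q[s, a])
            s = s_next
            if done:
                break

    print(np.round(Q, 2))  # learned utilities; action 1 (right) should dominate

In a continuous state space such a table no longer exists, and the utilities must instead be represented by functions over the state space, which is where the discretisation question of this project enters.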

This project is concerned with the discretisation necessary in the case of a continuous state space. One uses dynamic programming methods to (numerically) estimate the utility of taking actions in states of the world; this gives rise to functions over the state space. In the closely related field of optimal control, discretisation techniques using finite-element or finite-difference methods are widely used. Since the number of dimensions can be large, one runs into the curse of dimensionality. We propose to investigate two modern numerical methods for function representation in high dimensions, sparse grids and sums of separable functions, in this context. They allow the curse of dimensionality to be broken to a certain extent. This project aims to make significant progress for function approximation in continuous state spaces of up to, say, 10 dimensions using an adaptive sparse grid approach.
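
To make the discretisation idea concrete, the sketch below runs value iteration for a toy one-dimensional continuous-state problem on a full uniform grid, with linear interpolation supplying values at off-grid successor states. The dynamics, reward, and parameters are invented for illustration; this is the generic full-grid baseline, not the project's adaptive sparse-grid method. A tensor-product grid of this kind needs on the order of (2^k + 1)^d points in d dimensions, which is exactly the growth that sparse grids and sums of separable functions are meant to temper.

    import numpy as np

    # Continuous state s in [0, 1]; three actions nudge the state; the reward
    # favours staying near a target state s* = 0.8 (all choices are illustrative).
    grid = np.linspace(0.0, 1.0, 101)        # full uniform grid in d = 1
    actions = np.array([-0.05, 0.0, 0.05])
    gamma = 0.95

    def dynamics(s, a):
        return np.clip(s + a - 0.01 * s, 0.0, 1.0)   # toy deterministic transition

    def reward(s, a):
        return -(s - 0.8) ** 2 - 0.1 * a ** 2        # quadratic cost around the target

    # Value iteration: Bellman backups at the grid points; linear interpolation
    # evaluates the current value function at the (generally off-grid) successors.
    V = np.zeros_like(grid)
    for _ in range(200):
        Q = np.empty((grid.size, actions.size))
        for j, a in enumerate(actions):
            s_next = dynamics(grid, a)
            Q[:, j] = reward(grid, a) + gamma * np.interp(s_next, grid, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = actions[np.argmax(Q, axis=1)]           # greedy action at each grid point
    print(np.round(V[::20], 3), policy[::20])

Replacing the uniform grid and interpolant above by an adaptive sparse grid (or a sum of separable functions) changes only the function representation, not the dynamic programming iteration itself; that representation is the subject of this project.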