How to define States in Reinforcement Learning?

1 Answer


The problem of state representation in Reinforcement Learning (RL) is similar to the problems of feature representation, feature selection, and feature engineering in supervised or unsupervised learning.

A common approach to modelling complex problems is discretization: splitting a complex, continuous space into a grid. You can then apply any of the classic RL techniques designed for discrete state spaces.
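As a minimal sketch of this idea, the snippet below maps a 2-D continuous state onto a fixed grid and flattens the cell coordinates into a single state ID. The dimension bounds and bin count are hypothetical, not taken from any particular task:

```python
import numpy as np

# Hypothetical bounds for a 2-D continuous state space.
bins_per_dim = 10
lows = np.array([-1.2, -0.07])   # assumed lower bound of each dimension
highs = np.array([0.6, 0.07])    # assumed upper bound of each dimension

def discretize(state):
    """Map a continuous state vector to a single discrete state index."""
    # Scale each dimension to [0, 1], then to a bin index in [0, bins_per_dim).
    ratios = (np.asarray(state) - lows) / (highs - lows)
    idx = np.clip((ratios * bins_per_dim).astype(int), 0, bins_per_dim - 1)
    # Flatten the per-dimension bin indices into one state ID.
    return int(np.ravel_multi_index(idx, (bins_per_dim,) * len(lows)))

state_id = discretize([-0.5, 0.0])
```

With 10 bins per dimension this gives 100 discrete states, so the resulting state IDs can index directly into a Q-table.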

Tabular learning algorithms are another good fit for discrete states, because they come with reasonable theoretical guarantees of convergence. If you can simplify your problem so that it has, say, fewer than a few million states, this is worth trying.
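To make the tabular approach concrete, here is a toy Q-learning sketch on a hypothetical 5-state chain MDP (states 0-4, actions move left/right, reward 1 for reaching the last state). The environment is invented purely for illustration:

```python
import random

# Toy chain MDP: states 0..4, action 0 = left, action 1 = right,
# reward 1.0 for reaching the terminal state 4. Entirely hypothetical.
n_states, n_actions = 5, 2
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(s, a):
    """Deterministic toy dynamics: the action moves along the chain."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    return s2, (1.0 if s2 == n_states - 1 else 0.0), s2 == n_states - 1

random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy action selection over the Q-table.
        a = random.randrange(n_actions) if random.random() < epsilon \
            else max(range(n_actions), key=lambda a: Q[s][a])
        s2, r, done = step(s, a)
        # Standard tabular Q-learning update.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) * (not done) - Q[s][a])
        s = s2
```

After training, the greedy policy read off the table moves right from every state, which is the optimal behaviour for this chain.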

Most interesting control problems will not fit into that number of states, even after discretization. This is due to the curse of dimensionality. For those problems, you will typically represent your state as a vector of features - e.g. for a robot learning to walk: the positions, angles, and velocities of its mechanical parts. As with supervised learning, you may need to preprocess these features for a specific learning process. For instance, they should typically all be numeric, and if you use a neural network you should also normalize them to a standard range (e.g. -1 to 1).
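That normalization step can be sketched as simple min-max scaling of the state vector. The per-feature bounds below are hypothetical placeholders (e.g. a position, an angle, a velocity):

```python
import numpy as np

# Hypothetical per-feature limits for a 3-feature state vector,
# e.g. position in [0, 10], angle in [-pi, pi], velocity in [-5, 5].
lows = np.array([0.0, -np.pi, -5.0])
highs = np.array([10.0, np.pi, 5.0])

def normalize(state):
    """Linearly map each feature from [low, high] to [-1, 1]."""
    return 2.0 * (np.asarray(state) - lows) / (highs - lows) - 1.0

x = normalize([5.0, 0.0, 2.5])  # midpoint features map to 0.0
```

If you do not know hard bounds in advance, running statistics (mean/std over observed states) are a common alternative to fixed limits.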

...