Asking for #rl opinions (#2).

What is deep reinforcement learning to you? Is it just RL with neural networks?

If so, should we call earlier work from the 80s/90s deep RL? If not, what are the distinguishing features of deep RL?

@proceduralia It's representation learning all the way down 🤔

No strong thoughts on this. I do enjoy the ease with which we can think of neural networks as basis-function generators: they process observations into something inherently useful without needing as much hand-tuning.

@tw_killian @proceduralia Agreed with representation learning. Not sure what to call pre-21st-century RL, but many of today's mainstream algorithms are old ideas (Q-learning, max-entropy, the Bellman equation, natural gradient) that scale thanks to more powerful function approximation / representations.
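To make the "old ideas + function approximation" point concrete, here is a minimal sketch of semi-gradient Q-learning on a toy chain MDP of my own invention (the environment, features, and hyperparameters are illustrative assumptions, not from the thread). With one-hot features the linear approximator reduces exactly to tabular Q-learning; swapping in richer features or a neural network is what turns the same Bellman update into "deep" RL.

```python
import random

# Toy chain MDP: states 0..4, actions 0 = left, 1 = right.
# Reaching state 4 yields reward 1 and ends the episode.
N_STATES, GOAL = 5, 4

def step(s, a):
    s2 = min(max(s + (1 if a == 1 else -1), 0), GOAL)
    done = s2 == GOAL
    return s2, (1.0 if done else 0.0), done

def features(s, a):
    # One-hot (state, action) features: with this choice the linear
    # approximator is exactly tabular Q-learning. Replacing this with
    # learned features (a neural net) is the "deep" part.
    x = [0.0] * (N_STATES * 2)
    x[s * 2 + a] = 1.0
    return x

def q(w, s, a):
    return sum(wi * xi for wi, xi in zip(w, features(s, a)))

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    w = [0.0] * (N_STATES * 2)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = max((0, 1), key=lambda a: q(w, s, a))
            s2, r, done = step(s, a)
            # Semi-gradient Q-learning: move w along (TD error) * features.
            target = r if done else r + gamma * max(q(w, s2, 0), q(w, s2, 1))
            td = target - q(w, s, a)
            w = [wi + alpha * td * xi for wi, xi in zip(w, features(s, a))]
            s = s2
    return w

w = train()
# Greedy policy should go right (action 1) from every non-terminal state.
policy = [max((0, 1), key=lambda a: q(w, s, a)) for s in range(GOAL)]
print(policy)
```

The update rule is the same 1989 Bellman-backup idea either way; only the representation of Q changes.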