what's the weirdest thing you've stumbled upon in #RL #reinforcementlearning ?

i'll start:

if using neural nets* you don't actually need any rewards to train an agent optimally on episodic cartpole.

* or positive initializations
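a minimal sketch of what i think is going on, on a toy "survive or fall" chain of my own (tabular Q-learning-style sweeps, not real cartpole or a neural net; the states, actions, and hyperparameters are all made up for illustration). zero rewards everywhere + positive Q initialization + bootstrapped zeros at termination is enough to make the greedy policy prefer survival:

```python
import numpy as np

# Toy episodic chain standing in for CartPole: in states 0..4,
# action 0 ("fail") terminates the episode immediately, action 1
# ("survive") moves to the next state; surviving past the last state
# also ends the episode. Reward is 0 everywhere.
N = 5
GAMMA, ALPHA, SWEEPS = 0.9, 0.5, 10
Q = np.ones((N, 2))  # positive initialization is the whole trick

for _ in range(SWEEPS):
    for s in reversed(range(N)):
        # "fail": terminal transition, so the bootstrap target is 0
        Q[s, 0] = (1 - ALPHA) * Q[s, 0]
        # "survive": reward 0 + gamma * max over next state's Q
        # (the next-state value is 0 when surviving past the last state)
        nxt = GAMMA * Q[s + 1].max() if s + 1 < N else 0.0
        Q[s, 1] = (1 - ALPHA) * Q[s, 1] + ALPHA * nxt

greedy = Q.argmax(axis=1)
# survive wins in every state except the last, where both actions
# end the episode anyway, so the tie there is harmless
print(greedy)
```

every Q-value decays toward the true value of 0, but the "fail" entries decay fastest because their bootstrap target is exactly 0, so the greedy argmax keeps pointing at survival all the way down.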

@psc I have experienced this for CartPole as well as for one of my other environments (a custom driving environment) on some occasions 😅 Turns out sometimes a randomly initialized policy is a decent policy.
@ashishgaurav_13 in cartpole, initial random policies don't do well, but they still *always* converge to the optimal policy even with zero rewards!
@psc Also, I think the zero-rewards phenomenon can be explained. From https://arxiv.org/abs/1601.06569 Theorem 2 (and maybe this is more well known and I'm not aware), a reward R can be replaced by aR+b, where a>0 and b are constants, and we get the same optimal policies. So if the reward is 1 at every timestep, we can shift it to 0 everywhere (taking a=1, b=-1) and still get an optimal policy. Right?
Towards Resolving Unidentifiability in Inverse Reinforcement Learning

We consider a setting for Inverse Reinforcement Learning (IRL) where the learner is extended with the ability to actively select multiple environments, observing an agent's behavior on each environment. We first demonstrate that if the learner can experiment with any transition dynamics on some fixed set of states and actions, then there exists an algorithm that reconstructs the agent's reward function to the fullest extent theoretically possible, and that requires only a small (logarithmic) number of experiments. We contrast this result to what is known about IRL in single fixed environments, namely that the true reward function is fundamentally unidentifiable. We then extend this setting to the more realistic case where the learner may not select any transition dynamic, but rather is restricted to some fixed set of environments that it may try. We connect the problem of maximizing the information derived from experiments to submodular function maximization and demonstrate that a greedy algorithm is near optimal (up to logarithmic factors). Finally, we empirically validate our algorithm on an environment inspired by behavioral psychology.

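@psc spelling out the shift argument for the infinite-horizon discounted case (a quick sketch of the standard algebra, with \( a > 0 \)):

\[ V^{\pi}_{aR+b}(s) \;=\; \mathbb{E}\Big[\sum_{t=0}^{\infty}\gamma^{t}\big(aR(s_t,a_t)+b\big)\Big] \;=\; a\,V^{\pi}_{R}(s) + \frac{b}{1-\gamma}, \]

a positive affine map of every policy's value, so the ordering over policies, and hence the set of optimal policies, is unchanged.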
@ashishgaurav_13 sorta, but technically if the pole goes beyond the threshold it's a reward of 0, which would correspond to -1 in your example.
The trick is that the agent never really "sees" that zero... The episodes are terminated and there's an implied zero. So shifting those rewards down by 1 still results in zeros everywhere... Yet the agent will still find an optimal policy!
@psc Sorry for stretching the argument here, I'm just trying to understand your perspective better 😅 I agree that the agent never sees that 0; it terminates before ever receiving it. As long as it hasn't terminated, we can say R(s,a)=1 (please correct me if I am wrong). If so, R(s,a) is just a constant, and following the Theorem I cited, if we replace it with 0 (shifting/scaling), the set of optimal policies shouldn't change. Hence, with feedback of R(s,a)=0 we can still learn an optimal policy, right?
@psc Sorry it may have seemed like I repeated my argument. My point is - is this a valid explanation for why we can learn an optimal policy with R=0?

@ashishgaurav_13 so you're partially correct, except for the termination bit. the theory in that paper is for infinite-horizon discounted MDPs (which is why they do things like
\[ (I - \gamma P_{\pi})^{-1} \]
).

so their theory doesn't apply to the "applied" setting of implemented cartpole, where we artificially terminate episodes.

this has motivated me to write up this finding more formally... stay tuned!
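@ashishgaurav_13 in the meantime, here's the failure in miniature (a toy two-state absorbing MDP of my own, not the real cartpole dynamics). exact policy evaluation via \( (I - \gamma P_{\pi})^{-1} r \) says surviving beats failing under the implemented +1-per-step reward, but after the naive shift every policy evaluates to 0, so the affine-invariance theorem has nothing left to distinguish:

```python
import numpy as np

GAMMA = 0.99

# Two-state caricature of terminated CartPole: state 0 = "pole up",
# state 1 = absorbing terminal (reward 0 forever). A deterministic
# policy either survives (stays in state 0) or fails (drops to terminal).
P_survive = np.array([[1.0, 0.0],
                      [0.0, 1.0]])
P_fail    = np.array([[0.0, 1.0],
                      [0.0, 1.0]])

def value(P_pi, r):
    """Exact policy evaluation: v = (I - gamma * P_pi)^{-1} r."""
    return np.linalg.solve(np.eye(2) - GAMMA * P_pi, r)

# Implemented CartPole: +1 per live step, implied 0 after termination.
r = np.array([1.0, 0.0])
v_survive, v_fail = value(P_survive, r), value(P_fail, r)
print(v_survive[0], v_fail[0])  # surviving is strictly better

# "Shift the rewards down by 1" only touches the live state, because the
# terminal zero is baked into the episode ending, not into R. Now every
# policy evaluates to 0, so affine invariance has nothing to say.
r0 = np.array([0.0, 0.0])
print(value(P_survive, r0)[0], value(P_fail, r0)[0])
```

the shifted problem isn't an affine transform of the original MDP at all: a true shift would also have to subtract 1 from the terminal state's implicit reward, which the episodic implementation never represents.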

@ashishgaurav_13 (forgot i'm no longer on mathstodon.xyz, so that's why my latex wasn't rendering)... @thegradient do you think this is a feature that could be added? 😄