decision-diffuser
Why choose total reward of entire trajectory as label?
I'm confused about the 'label' (rewards) setting: the horizons of the state and action sequences are fixed, but the rewards array runs from the window's start index to the end of the entire episode.
Why is that?
Hi! I think conditioning on the total trajectory reward is done so that the sampled actions not only maximize the reward over the current sampling horizon but also aim to maximize the total return-to-go of the trajectory. Note that states are sampled by the diffusion model, so the horizon is only used to form the batch passed to the model. Since the conditioning signal is the return rather than the sampled states, it does not have to be computed over that same horizon.
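To make the distinction concrete, here is a minimal sketch of how such a training example could be assembled. The function name, arguments, and the discount factor are hypothetical and just for illustration; they are not taken from the repo's actual code:

```python
import numpy as np

def make_training_example(states, actions, rewards, start, horizon, discount=0.99):
    """Build one conditioned training example from a single episode.

    states, actions, rewards: per-step arrays for one episode.
    start: index where the sampled window begins.
    horizon: fixed length of the state/action window.
    """
    # The diffusion model only sees a fixed-length window of states/actions...
    state_window = states[start:start + horizon]
    action_window = actions[start:start + horizon]

    # ...but the conditioning label is the (discounted) return from the
    # window's start all the way to the END of the episode, not just the
    # rewards inside the window.
    discounts = discount ** np.arange(len(rewards) - start)
    return_to_go = float((discounts * rewards[start:]).sum())

    # Window-only reward sum, shown here only for contrast; it is NOT
    # what the model is conditioned on.
    window_return = float(rewards[start:start + horizon].sum())

    return state_window, action_window, return_to_go, window_return
```

This is why the rewards array extends past the window: the label is meant to reflect the full remaining return of the trajectory, so the model learns to generate plans that are good beyond the sampling horizon.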
Hope this helps!