
Why choose total reward of entire trajectory as label?

Open Chenrf1121 opened this issue 1 year ago • 1 comment

I have some confusion about the 'label' (rewards) setting: the horizon of the state and action sequences is fixed, but the rewards array runs from 'start' to the end of the entire episode. Why?

Chenrf1121 avatar Aug 08 '24 05:08 Chenrf1121

Hi! I think conditioning on the total trajectory reward is done so that the sampled actions not only attempt to maximize reward over the current sampling horizon but also aim to maximize the total return-to-go of the trajectory. Note that states are sampled by the diffusion model, so the horizon is only used to form the batches passed to the model. Since the model is conditioned on returns, there is no need to compute them over that same horizon.
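To make this concrete, here is a minimal sketch of how such labels could be built: each fixed-length window of states/actions is paired with the discounted return computed from the window's start all the way to the end of the episode, rather than just over the horizon. The function name `make_training_windows` and the dictionary layout are hypothetical, for illustration only, and do not mirror the repo's actual dataset code.

```python
import numpy as np

def make_training_windows(states, actions, rewards, horizon, discount=1.0):
    """Slice fixed-horizon windows from one episode, labeling each with the
    return-to-go computed from the window's start to the END of the episode.

    states:  (T, state_dim) array for one episode
    actions: (T, action_dim) array
    rewards: (T,) per-step rewards for the whole episode
    """
    T = len(rewards)
    windows = []
    for start in range(T - horizon + 1):
        # Discounted return over rewards[start:], NOT just rewards[start:start+horizon].
        tail = rewards[start:]
        ret = float(np.sum(discount ** np.arange(len(tail)) * tail))
        windows.append({
            "states": states[start:start + horizon],
            "actions": actions[start:start + horizon],
            "return": ret,  # conditioning label: full return-to-go
        })
    return windows
```

For example, with `rewards = [1, 1, 1, 1]`, `horizon = 2`, and `discount = 1.0`, the three windows get labels 4, 3, and 2: each label still counts rewards beyond the two-step window.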

Hope this helps!

atagle123 avatar Sep 12 '24 22:09 atagle123