Ability to collect and visualize video (agent's behavior in the environment) for RL experiments
When running RL experiments, there is huge value in watching the agent's interaction with the environment via video.
One way to implement this: collect RGB frames at a certain frequency and compile them into a video or a frame slider, for a good RL debugging experience.
TODO: more details needed for this feature's implementation
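For illustration, here is a rough sketch of the frame-collection part outside of Aim, assuming Gymnasium and imageio are available (the env id, helper name, and a random policy are just placeholders, not a proposal for Aim's API):

```python
# Minimal sketch: grab RGB frames from a Gymnasium episode and compile them
# into a video file. Assumes gymnasium, imageio and imageio-ffmpeg installed.
import gymnasium as gym
import imageio


def record_episode(env_id="CartPole-v1", out_path="episode.mp4", fps=30):
    # render_mode="rgb_array" makes env.render() return an HxWx3 uint8 frame
    env = gym.make(env_id, render_mode="rgb_array")
    frames = []
    obs, info = env.reset()
    done = False
    while not done:
        action = env.action_space.sample()  # replace with the trained policy
        obs, reward, terminated, truncated, info = env.step(action)
        done = terminated or truncated
        frames.append(env.render())
    env.close()
    # compile the collected frames into a video at the requested frame rate
    imageio.mimsave(out_path, frames, fps=fps)


record_episode()
```

The remaining piece would be logging/attaching such a video (or the individual frames as a slider) to a run so it shows up in the UI.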
Would love to see this implemented
Indeed it is going to be, but hopefully not by us. This is relatively low-hanging fruit right now, but the version of Aim we are working on is going to allow users to build it themselves with very few lines of Python code. We should have an alpha soon.
Yeah, awesome. I'd be down to help implement it once the new version gets released.
Bump for this. How difficult would it be to implement video support?