avalanche-rl
Avalanche fork adding RL support
Hello, I am writing to report a problem between avalanche-rl and DQNStrategy. Unfortunately, in avalanche-rl, when a strategy uses the EWC plugin, the below error is raised...
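For context, EWC regularizes training by penalizing drift from parameters learned on earlier experiences. A minimal sketch of the penalty term it adds to the loss (the function and parameter names here are illustrative, not the plugin's actual API):

```python
import numpy as np

# Hypothetical sketch of the quadratic EWC penalty; `theta_star` holds the
# parameters after the previous experience and `fisher` their importance
# estimates. Names are assumptions, not the avalanche-rl plugin's API.
def ewc_penalty(theta, theta_star, fisher, ewc_lambda=0.4):
    """Penalty = (lambda / 2) * sum_i F_i * (theta_i - theta*_i)^2."""
    theta = np.asarray(theta, dtype=float)
    theta_star = np.asarray(theta_star, dtype=float)
    fisher = np.asarray(fisher, dtype=float)
    return 0.5 * ewc_lambda * np.sum(fisher * (theta - theta_star) ** 2)

# No drift from the old parameters means zero penalty.
print(ewc_penalty([1.0, 2.0], [1.0, 2.0], [0.5, 0.5]))  # 0.0
```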
Hello, in avalanche-rl, after 1000 experiences with 1000 steps per experience, the below error is raised. It is necessary to manage memory correctly during training. torch.cuda.OutOfMemoryError /Avalanche/avalanche-rl/avalanche_rl/training/strategies/rl_base_strategy.py", line 377,...
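One common way to keep memory bounded over long continual-RL runs is to cap the replay buffer so old transitions are evicted rather than accumulating indefinitely. A minimal sketch of that idea (the class and `capacity` parameter are assumptions for illustration, not avalanche-rl's internals):

```python
import random
from collections import deque

# Illustrative bounded replay buffer: a fixed-capacity deque drops the oldest
# transitions automatically, so memory use stays constant however many
# experiences and steps the run accumulates.
class BoundedReplayBuffer:
    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def push(self, transition):
        self.buffer.append(transition)  # evicts oldest entry when full

    def sample(self, batch_size):
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = BoundedReplayBuffer(capacity=1000)
for step in range(5000):  # 5x the capacity; the buffer never exceeds 1000
    buf.push((step, 0.0))
print(len(buf.buffer))  # 1000
```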
Hello, I am writing to report that unfortunately there is a problem when a scenario is evaluated after training. In other words, when the below function is...
I have to work on CRL for path generation of a drone. To get started, I tried to run 'simple_dqn.py' from this repository, but it shows an error related to avalanche training...
Here is a running example of a Stable Baselines 3 PPO agent. It requires a more customized strategy class, so it does not inherit from `RLBaseStrategy` in the avalanche-rl repo. Main...
I think Avalanche is ready to integrate the CRL benchmarks. @NickLucche, do you agree with moving the benchmarks into avalanche? Once they are in the main repository, they would be...
The repo doesn't look too pretty atm; we could use some Ops.
Implement PPO algorithm from scratch following the framework guidelines so that we can easily integrate it with all available features.
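The core of a from-scratch PPO is the clipped surrogate objective from the original paper. A minimal numpy sketch of just that term, as a starting point rather than the framework's actual implementation:

```python
import numpy as np

# Sketch of PPO's clipped surrogate objective: given the probability ratios
# pi_new(a|s) / pi_old(a|s) and advantage estimates, take the pessimistic
# minimum of the unclipped and clipped terms, averaged over the batch.
def ppo_clip_objective(ratios, advantages, clip_eps=0.2):
    ratios = np.asarray(ratios, dtype=float)
    advantages = np.asarray(advantages, dtype=float)
    clipped = np.clip(ratios, 1.0 - clip_eps, 1.0 + clip_eps)
    return np.mean(np.minimum(ratios * advantages, clipped * advantages))

# A ratio of 2.0 with a positive advantage is clipped to 1 + clip_eps = 1.2.
print(ppo_clip_objective([2.0], [1.0]))  # 1.2
```

Clipping removes the incentive to move the new policy far from the old one in a single update, which is what makes PPO stable enough to plug into the rest of the framework's training loop.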
The current module structure is borrowed from avalanche, but the file location for the RL utils isn't obvious. I believe we should move things around to make the layout more predictable for...