
Avalanche fork adding RL support

Results: 10 avalanche-rl issues, sorted by recently updated

Hello, I am writing to report a problem between avalanche-rl and DQNStrategy. Unfortunately, in avalanche-rl, when a strategy uses the EWC plugin, the error below will be...

bug

Hello, in avalanche-rl, after 1000 experiences with 1000 steps per experience, the error below is raised. Memory needs to be managed correctly during training (a generic sketch of the pattern is shown after this entry). torch.cuda.OutOfMemoryError /Avalanche/avalanche-rl/avalanche_rl/training/strategies/rl_base_strategy.py", line 377,...

bug
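
For context, here is a generic PyTorch pattern (not avalanche-rl's actual code at `rl_base_strategy.py` line 377) that avoids this kind of memory growth: keep per-step metrics as plain floats rather than graph-attached tensors, so each step's computation graph can be freed.

```python
# Generic sketch of memory hygiene in a long training loop; the model, batch,
# and loss below are stand-ins, not avalanche-rl's DQN components.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(8, 4).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

losses = []  # per-step metrics kept as plain floats, not tensors
for step in range(1000):
    obs = torch.randn(32, 8, device=device)   # stand-in for a rollout batch
    q_values = model(obs)
    loss = q_values.pow(2).mean()              # stand-in for the DQN loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # .item() detaches from the graph; appending `loss` itself would keep the
    # whole computation graph alive and grow GPU memory over many steps.
    losses.append(loss.item())
```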

Hello, I am writing to report a problem that unfortunately occurs when a scenario is evaluated after training. In other words, when the function below is...

bug

I am working on CRL for path generation of a drone. To learn the framework, I tried to run 'simple_dqn.py' from this repository, but it raises an error related to avalanche training...

bug

Here is a running example of a Stable Baselines 3 PPO agent. It requires a more custom strategy class, so it does not inherit from the `RLBaseStrategy` in the avalanche-rl repo. Main...
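
For reference, a minimal standalone Stable Baselines 3 PPO run is sketched below. The environment name and the old-gym step API are assumptions, and the custom strategy class that would wrap this for avalanche-rl is not shown.

```python
# Minimal standalone SB3 PPO run; assumes the classic `gym` API
# (4-tuple step return), swap in `gymnasium` for newer SB3 versions.
import gym
from stable_baselines3 import PPO

env = gym.make("CartPole-v1")
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)

# Quick rollout with the trained policy.
obs = env.reset()
for _ in range(200):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```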

I think Avalanche is ready to integrate the CRL benchmarks. @NickLucche, do you agree with moving the benchmarks into avalanche? Once they are in the main repository, they would be...

enhancement

The repo doesn't look too pretty at the moment; we could use some Ops.

documentation
enhancement

Implement the PPO algorithm from scratch, following the framework guidelines, so that we can easily integrate it with all available features (a starting-point sketch of the clipped objective is shown after this entry).

enhancement
good first issue
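
As a starting point, below is a sketch of PPO's clipped surrogate objective in plain PyTorch. The function name and dummy rollout data are illustrative; wiring it into `RLBaseStrategy` and the framework's plugins is the actual work this issue asks for.

```python
# Sketch of the PPO clipped policy loss (Schulman et al., 2017), negated for
# minimization; value loss, entropy bonus, and rollout collection are omitted.
import torch

def ppo_clip_loss(new_log_probs, old_log_probs, advantages, clip_eps=0.2):
    ratio = torch.exp(new_log_probs - old_log_probs)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    return -torch.min(unclipped, clipped).mean()

# Example with dummy data: 64 transitions from a rollout buffer.
new_lp = torch.randn(64, requires_grad=True)
old_lp = new_lp.detach() + 0.05 * torch.randn(64)
adv = torch.randn(64)
loss = ppo_clip_loss(new_lp, old_lp, adv)
loss.backward()
```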

The current module structure is borrowed from avalanche, but the file location for the RL utils isn't super obvious. I believe we should move things around to make it more predictable for...

enhancement