rl-agents
Implementations of Reinforcement Learning and Planning algorithms
Hi Edouard, first of all, thank you for your amazing contribution. I am a beginner and would like to ask a few questions about how to run your...
Hi Edouard, first of all, thank you for your amazing contribution. I am currently studying DQN networks (image input with a convolutional network) and want to implement one in highway-env. I have...
Hello KexianShen, I am a beginner and would like to ask you some questions. How can I run the rl-agents DQN agent on highway-env from a script instead of through the command line? Thank you...
Hi, I find that I cannot perform a deepcopy of the environment (`from rl_agents.agents.common.factory import safe_deepcopy_env`) when the observation type is "GrayscaleObservation". It works for the "Kinematics" observation type. It says:...
Hi, I am currently working with [this configuration](https://github.com/eleurent/rl-agents/blob/master/scripts/configs/HighwayEnv/agents/DQNAgent/ego_attention.json), where you use different kinds of layers. How can I change the structure? Is it possible directly from your JSON file or...
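For context, structural changes of this kind are usually made directly in the agent's JSON file, under its model section. The fragment below is purely illustrative: the key names and values are assumptions for the sake of the example, not copied from ego_attention.json, so check the actual file for the exact schema.

```json
{
  "model": {
    "type": "EgoAttentionNetwork",
    "embedding_layer": {"type": "MultiLayerPerceptron", "layers": [64, 64]},
    "attention_layer": {"type": "EgoAttention", "feature_size": 64, "heads": 2}
  }
}
```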
I discovered while digging through the code that a certain state value called `previous_state` of the DQN algorithm (and possibly some others) is being cached on the `act()` and `action_distribution`...
Hi, I've trained a DQN model with social attention in HighwayEnv. I've used EgoAttention with 2 heads, but I don't understand the results. In the first video you can see...
Hi Edouard, I wanted to know whether you have already tested prioritized experience replay for the memory. I noticed that there is only standard replay memory in this project. So,...
I specifically want to save the DQN network I've been working on. I went through the documentation and can't find anything. I tried using pickle for my agent after training...
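One common workaround, sketched below under assumptions: pickling the whole agent often fails because it holds references to the environment, so saving only the network's weights usually works. This assumes the agent's Q-network is a `torch.nn.Module` exposed as a `value_net` attribute; the attribute name is a guess for illustration, not confirmed from the rl-agents source.

```python
def save_network(agent, path="dqn_checkpoint.pt"):
    # Assumption: agent.value_net is a torch.nn.Module (attribute name
    # is hypothetical). Saving the state_dict avoids pickling the agent's
    # environment reference, which is what usually breaks plain pickle.
    import torch  # imported lazily so the sketch loads without torch
    torch.save(agent.value_net.state_dict(), path)


def load_network(agent, path="dqn_checkpoint.pt"):
    # Restore the saved weights into a freshly constructed agent.
    import torch
    agent.value_net.load_state_dict(torch.load(path))
    agent.value_net.eval()  # switch to inference mode
```

The same pattern applies to any other sub-network the agent might hold (e.g. a target network), as long as it is a `torch.nn.Module`.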
In this script: [highway_planning.ipynb](https://colab.research.google.com/github/eleurent/highway-env/blob/master/scripts/highway_planning.ipynb), the code only runs one episode. How could I run more episodes to update the agent, and save the model?
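A minimal sketch of a multi-episode training run, assuming the `Evaluation` runner and factory helpers that the notebook imports from rl-agents; exact parameter names may differ between versions of the library, and the config paths are placeholders, not files from this thread.

```python
def train_for(num_episodes=200):
    # Imported lazily so this sketch can be read and loaded without
    # rl-agents installed; these are the helpers the notebook uses.
    from rl_agents.trainer.evaluation import Evaluation
    from rl_agents.agents.common.factory import load_agent, load_environment

    # Placeholder config paths: point these at your own env/agent JSON files.
    env = load_environment("configs/HighwayEnv/env.json")
    agent = load_agent("configs/HighwayEnv/agents/DQNAgent/dqn.json", env)

    # Evaluation drives the episode loop; train() updates the agent each
    # episode and periodically checkpoints it to the run directory.
    evaluation = Evaluation(env, agent, num_episodes=num_episodes,
                            display_env=False)
    evaluation.train()
```

Raising `num_episodes` is what turns the one-episode demo into a longer training run; the checkpoints written by the runner are what you would reload later.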