Wenwei Qiu
Same question here. I want to use my own custom environment, built entirely by hand, but `add_new_env.py` does not answer all of my questions. It would be great if the documentation included more general-purpose examples or tutorials. Thanks!
> Hi, to switch the representation to an RNN, you need to configure the following:
>
> ```
> use_rnn: True
> rnn: "GRU"
> recurrent_layer_N: 1
> fc_hidden_sizes: [64, ]
> recurrent_hidden_size: 64
> N_recurrent_layers: 1
> dropout: 0
> ```

Hi, after modifying the config as described, I get the following error:

```
Traceback (most recent...
```
My config file was adapted from simple_spread_v3.yaml, and that yaml also errors:

`RuntimeError: For unbatched 2-D input, hx should also be 2-D but got 3-D tensor`

In fact, running the MPE test directly with simple_spread_v3.yaml also fails once the RNN-related settings are added to the config:

```
Traceback (most recent call last):
  File "/Users/hawkq/Desktop/frigatebird_multi/testrun.py", line 13, in
    runner.run()
  File "/opt/anaconda3/envs/xuance_marl/lib/python3.8/site-packages/xuance/torch/runners/runner_marl.py",...
```
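For context, PyTorch's GRU pairs the dimensionality of the input with that of the hidden state: an unbatched 2-D input `(seq_len, input_size)` must come with a 2-D `hx`, while a batched 3-D input expects a 3-D `hx`. The error above therefore suggests the observations reached the GRU unbatched while the recurrent state was still batched. A minimal sketch of that rule in plain Python (the helper `check_gru_shapes` is hypothetical, not part of xuance or torch):

```python
def check_gru_shapes(input_dim: int, hx_dim: int) -> None:
    """Mimic torch.nn.GRU's input / hidden-state dimensionality check.

    A 2-D input is treated as unbatched (seq_len, input_size) and
    requires a 2-D hx; a 3-D input is batched and requires a 3-D hx.
    """
    if input_dim == 2 and hx_dim != 2:
        raise RuntimeError(
            f"For unbatched 2-D input, hx should also be 2-D "
            f"but got {hx_dim}-D tensor"
        )
    if input_dim == 3 and hx_dim != 3:
        raise RuntimeError(
            f"For batched 3-D input, hx should also be 3-D "
            f"but got {hx_dim}-D tensor"
        )

# Reproduces the mismatch from the traceback: unbatched obs, batched state.
try:
    check_gru_shapes(input_dim=2, hx_dim=3)
except RuntimeError as e:
    print(e)
```

So a fix typically means making the batch dimension of the observations and of the recurrent state agree before they are fed to the GRU.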
> Hi, have you tested this with algorithms like VDN or MADDPG? Do they show the same problem? I need this to judge at which stage the issue arises.

Hi, since the config files provided on readthedocs are limited, I only modified the MADDPG config file and added the following:

```
agent: "MADDPG"  # the learning algorithms_marl
env_name: "fb"
env_id: "fb_v0"
env_seed: 1
continuous_action: True
learner: "MADDPG_Learner"
policy: "MADDPG_Policy"
representation: "Basic_RNN"
vectorize: "DummyVecMultiAgentEnv"
runner: "MARL"
distributed_training: False
...
```
After changing VDN to `representation: "Basic_RNN"`, it errored because the actions are continuous, but after a quick switch to discrete actions it ran fine.
> Hi, have you tested this with algorithms like VDN or MADDPG? Do they show the same problem? I need this to judge at which stage the issue arises.

After testing: VDN runs, MADDPG runs, but MAPPO errors:

```
Traceback (most recent call last):
  File "C:\Users\HawkQ\Desktop\frigatebird_multi\new_run.py", line 30, in
    Agent.train(configs.running_steps // configs.parallels)  # Train the model for numerous steps.
  File "D:\Software\Anaconda\envs\xuance_marl\lib\site-packages\xuance\torch\agents\core\on_policy_marl.py", line 287, in train...
```
> Hi, has your problem been solved?

Sorry, not yet. For now I can only train without the RNN.