I ran into a similar problem to #963 while trying to run the last part of tutorial04_visualize.ipynb.
!python ../flow/visualize/visualizer_rllib.py data/trained_ring 20
2020-08-21 01:10:07,524 WARNING worker.py:673 -- WARNING: Not updating worker name since setproctitle is not installed. Install this with pip install setproctitle (or ray[debug]) to enable monitoring of worker processes.
2020-08-21 01:10:07,538 INFO resource_spec.py:216 -- Starting Ray with 2.0 GiB memory available for workers and up to 1.01 GiB for objects. You can adjust these settings with ray.init(memory=, object_store_memory=).
2020-08-21 01:10:08,514 INFO trainer.py:371 -- Tip: set 'eager': true or the --eager flag to enable TensorFlow eager execution
2020-08-21 01:10:10,410 INFO rollout_worker.py:770 -- Built policy map: {'default_policy': <ray.rllib.policy.tf_policy_template.PPOTFPolicy object at 0x7fd753f9f240>}
2020-08-21 01:10:10,411 INFO rollout_worker.py:771 -- Built preprocessor map: {'default_policy': <ray.rllib.models.preprocessors.NoPreprocessor object at 0x7fd753f93f98>}
2020-08-21 01:10:10,411 INFO rollout_worker.py:372 -- Built filter map: {'default_policy': <ray.rllib.utils.filter.NoFilter object at 0x7fd753f93e10>}
2020-08-21 01:10:10,414 INFO multi_gpu_optimizer.py:93 -- LocalMultiGPUOptimizer devices ['/cpu:0']
Traceback (most recent call last):
  File "../flow/visualize/visualizer_rllib.py", line 386, in &lt;module&gt;
    visualizer_rllib(args)
  File "../flow/visualize/visualizer_rllib.py", line 155, in visualizer_rllib
    agent.restore(checkpoint)
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/tune/trainable.py", line 341, in restore
    self._restore(checkpoint_path)
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 559, in _restore
    self.__setstate__(extra_data)
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer_template.py", line 161, in __setstate__
    Trainer.__setstate__(self, state)
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/rllib/agents/trainer.py", line 855, in __setstate__
    self.workers.local_worker().restore(state["worker"])
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/rllib/evaluation/rollout_worker.py", line 712, in restore
    self.policy_map[pid].set_state(state)
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/rllib/policy/policy.py", line 250, in set_state
    self.set_weights(state)
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/rllib/policy/tf_policy.py", line 269, in set_weights
    return self._variables.set_weights(weights)
  File "/home/zl/.conda/envs/flow/lib/python3.7/site-packages/ray/experimental/tf_utils.py", line 186, in set_weights
    self.assignment_nodes[name] for name in new_weights.keys()
AttributeError: 'numpy.ndarray' object has no attribute 'keys'
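The last frame shows that `set_weights` in `tf_utils.py` iterates `new_weights.keys()`, i.e. it expects a dict mapping variable names to arrays; here the restored policy state arrives as a bare numpy array instead, which can happen when the Ray version that wrote the checkpoint differs from the one restoring it. A minimal sketch of the failure mode, using a hypothetical `set_weights` stand-in (not Ray's actual implementation):

```python
import numpy as np

def set_weights(new_weights):
    # Mirrors the failing line in ray/experimental/tf_utils.py:
    # it assumes new_weights is a dict of variable-name -> array.
    return [name for name in new_weights.keys()]

# A name -> array dict works as expected:
set_weights({"default_policy/fc1/kernel": np.zeros((4, 8))})

# A bare ndarray (what the restore apparently passed in) does not:
try:
    set_weights(np.zeros(3))
except AttributeError as exc:
    print(exc)  # 'numpy.ndarray' object has no attribute 'keys'
```

If a version mismatch is the cause, one thing to check is whether the Ray version used for visualization matches the one that produced the checkpoint.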
Yes, this is a real problem, and apparently the maintainers do not plan to fix it.
I'm hitting this problem now. Does anybody know how to fix it?
Not yet