
Ray: Support multiple policies

Open · florin-pop opened this issue · 2 comments

Hello,

The Ray example was super helpful in getting things up and running. However, when I tried to configure the PPOTrainer to use one policy per agent, the wrapper provided by VMAS could not be used as-is.

My configuration:

"multiagent": {
    "policies": {
        f"agent_{i}": (PPOTorchPolicy, None, None, {})
        for i in range(n_agents)
    },
    "policy_mapping_fn": lambda agent_id: f"agent_{agent_id}",
},

The error:

ValueError: Have multiple policies {}, but the env <vmas.simulator.environment.rllib.VectorEnvWrapper object at 0x71ec0c8dcbb0> is not a subclass of BaseEnv, MultiAgentEnv, ActorHandle, or ExternalMultiAgentEnv!

PS: I'm not 100% sure whether this is a feature request or misuse on my side, since I was trying to give each agent its own policy rather than share the policy model across agents.

florin-pop · Jun 11 '24

Yes, so unfortunately VMAS is not compatible with RLlib's multi-agent interface by default, because RLlib does not allow subclassing both VectorEnv and MultiAgentEnv (a genius choice, I know).

So I went with subclassing only VectorEnv.
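
If per-agent policies matter more than vectorized throughput, one workaround (not something VMAS ships) is to wrap a single, non-vectorized VMAS instance as an RLlib MultiAgentEnv yourself. Below is a rough, untested sketch assuming the older 4-tuple MultiAgentEnv API and that VMAS's reset/step return per-agent lists of batched tensors; the class name and the "balance" default are made up:

    import torch
    import vmas
    from ray.rllib.env.multi_agent_env import MultiAgentEnv


    class VmasMultiAgentEnv(MultiAgentEnv):
        """Hypothetical wrapper: a single VMAS instance (num_envs=1) exposed
        through RLlib's dict-based MultiAgentEnv interface. This gives up
        VMAS's vectorization, but it satisfies RLlib's multi-policy check."""

        def __init__(self, config=None):
            super().__init__()
            config = config or {}
            self._env = vmas.make_env(
                scenario=config.get("scenario", "balance"),
                num_envs=1,  # MultiAgentEnv itself is not vectorized
                device="cpu",
            )
            # n_agents is assumed to be exposed by the VMAS Environment.
            self._agent_ids = [f"agent_{i}" for i in range(self._env.n_agents)]

        def reset(self):
            # VMAS returns a list of per-agent tensors with a leading batch dim.
            obs = self._env.reset()
            return {a: obs[i][0].numpy() for i, a in enumerate(self._agent_ids)}

        def step(self, action_dict):
            # Re-add the batch dim of 1 that VMAS expects.
            actions = [
                torch.as_tensor(action_dict[a]).unsqueeze(0)
                for a in self._agent_ids
            ]
            obs, rews, dones, _ = self._env.step(actions)
            obs_d = {a: obs[i][0].numpy() for i, a in enumerate(self._agent_ids)}
            rew_d = {a: float(rews[i][0]) for i, a in enumerate(self._agent_ids)}
            done = bool(dones[0])
            done_d = {a: done for a in self._agent_ids}
            done_d["__all__"] = done
            return obs_d, rew_d, done_d, {a: {} for a in self._agent_ids}
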

If you want to see how we use VMAS multi-agent in RLlib, with the option to share (or not share) policies and critics, see https://github.com/proroklab/HetGPPO
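
In that setup, sharing versus not sharing comes down to the policies dict and the policy_mapping_fn. A hedged sketch of the two alternatives (obs_space, act_space, and n_agents are placeholders, not VMAS or RLlib names):

    # One policy per agent: one policies entry per agent id, identity mapping.
    policies = {
        f"agent_{i}": (None, obs_space, act_space, {}) for i in range(n_agents)
    }
    policy_mapping_fn = lambda agent_id, *args, **kwargs: agent_id

    # One shared policy: a single entry that every agent id maps onto.
    policies = {"shared": (None, obs_space, act_space, {})}
    policy_mapping_fn = lambda agent_id, *args, **kwargs: "shared"
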

matteobettini · Jun 11 '24

Here was my attempt to poke them about this: https://github.com/ray-project/ray/issues/26006, which ended in the void of the stale bot.

After this, I made my own training library, https://github.com/facebookresearch/BenchMARL ;)

matteobettini · Jun 11 '24