Tutorial for ML-Agents environment example creation.
We currently have example projects for ML-Agents, explanations of the settings, and code examples on how to set up and run an environment. However, I am missing one step: how you get from "setting up an agent" to "having the agent successfully train".
Even in what I perceive as simple games, I run into issues where I am unsure what is holding the agent back from perfecting its gameplay. It seems to learn all the concepts of the game, but then can't put them together enough to come to the right conclusions, and I am uncertain what options or information might even be missing. Learning effectively plateaus where I think it shouldn't.
For this reason, it might be helpful if some of the example projects came with a step-by-step explanation of why the approach used is the right approach for the job. For example:
- What is an example where stacked vectors are mandatory for learning to function properly? (See the first sketch after this list.)
- What is a use case for more than one hidden layer? (I know of flat data categorization, but I can't wrap my head around, and can't find resources on, how you would view game sensor data as that kind of categorization problem. See the second sketch below.)
- What are typical situations where the agent just has too little information to complete its job? (E.g.: if the agent can't perceive an obstacle, it cannot avoid it. But if an obstacle is moving erratically, how can the agent properly make that out from sensory data? See the third sketch below.)
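
To make the stacked-vectors question concrete, here is a minimal sketch of the kind of worked example I am asking for (plain NumPy, not the ML-Agents API; the scenario and numbers are invented). A single position observation of a moving ball cannot reveal its velocity, so no function of one frame can predict where the ball will be next; stacking two consecutive observations makes the prediction exactly solvable:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# A ball with a random start position and a hidden constant velocity.
x0 = rng.uniform(-1, 1, n)      # position at t-1
v = rng.uniform(-0.1, 0.1, n)   # hidden velocity
x1 = x0 + v                     # position at t (observed)
x2 = x1 + v                     # position at t+1 (prediction target)

def lstsq_rmse(features, target):
    """Best linear fit error, a stand-in for 'what a network could learn'."""
    A = np.column_stack([features, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.sqrt(np.mean((A @ coef - target) ** 2))

single = lstsq_rmse(x1.reshape(-1, 1), x2)           # one frame only
stacked = lstsq_rmse(np.column_stack([x0, x1]), x2)  # two stacked frames

print(f"RMS error, single observation : {single:.4f}")   # stuck near the velocity scale
print(f"RMS error, stacked (2 frames) : {stacked:.4f}")  # ~0, since x2 = 2*x1 - x0
```

An agent that has to dodge or intercept that ball is in the same position as the predictor here: without stacking (or an explicit velocity observation) the information it needs is simply not in its input.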
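
For the hidden-layer question, a reconstruction of what the `num_layers` and `hidden_units` trainer settings amount to might be a good anchor for the explanation. This is my own PyTorch sketch, not the actual ML-Agents code, and the activation choice is an assumption on my part:

```python
import torch
import torch.nn as nn

def make_policy_body(obs_size: int, hidden_units: int, num_layers: int) -> nn.Sequential:
    """A stack of fully connected layers, roughly what network_settings describes."""
    layers = []
    in_size = obs_size
    for _ in range(num_layers):
        layers.append(nn.Linear(in_size, hidden_units))
        layers.append(nn.SiLU())  # assumption: a Swish-like activation
        in_size = hidden_units
    return nn.Sequential(*layers)

# One hidden layer can only weigh raw sensor values directly against each
# other. A second layer can combine *features* computed by the first, e.g.
# "obstacle is close AND approaching" built from separately detected
# "close" and "approaching" features of the raw ray readings.
body = make_policy_body(obs_size=12, hidden_units=128, num_layers=2)
print(body(torch.randn(1, 12)).shape)  # torch.Size([1, 128])
```

A worked example could then show a game where the one-layer version plateaus and the two-layer version doesn't, and point at which combined feature made the difference.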
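
And for the too-little-information question, a companion sketch to the first one (same caveats: invented scenario, plain NumPy). If the obstacle takes an independent random step every frame, no amount of frame stacking recovers its next position, because that information is not contained in any past observation. The agent is not failing to learn; it is missing input:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 5000, 6  # k+1 past frames available for stacking

# Erratic obstacle: an independent random step every frame.
steps = rng.normal(0.0, 0.05, (n, k + 1))
frames = np.cumsum(steps, axis=1)                   # observed positions t-k .. t
target = frames[:, -1] + rng.normal(0.0, 0.05, n)   # position at t+1

def lstsq_rmse(features, target):
    A = np.column_stack([features, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.sqrt(np.mean((A @ coef - target) ** 2))

print(f"RMS error, 1 frame   : {lstsq_rmse(frames[:, -1:], target):.4f}")  # ~0.05
print(f"RMS error, {k + 1} frames  : {lstsq_rmse(frames, target):.4f}")    # still ~0.05
```

In an example project, the step-by-step could point out exactly this distinction: stacking fixes the first scenario but not this one, and this one calls for either an observation of whatever drives the motion or a policy that keeps a safety margin instead.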
Documenting the typical environment/sensor setups and the reasoning behind them for the examples we have would, I think, achieve this, or at least help with deducing how more complex agents or scenarios would have to be tackled.