
Bug in the 2-area Kundur case

Open · frostyduck opened this issue 4 years ago · 1 comment

I've had some free time over the last few days, and I think I've found a bug: during training, the agent cannot get past the reward plateau of -602 in your Kundur 2-area case. The problem is that during both training and testing in this environment (Kundur's scheme), short circuits are not simulated; I verified this. In other words, the agent learns purely on the normal operating conditions of the system, so the optimal policy is to never apply the dynamic brake, i.e. the action is always 0, which corresponds to the observed reward value (-602 or 603).
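A quick way to reproduce what I checked is to roll out a constant "no brake" policy and compare the episode return with the -602/-603 plateau. This is only a minimal sketch assuming the usual gym-style `reset`/`step` interface of the RLGC environments; the constructor arguments for `PowerDynSimEnvDef_v7` are omitted and should be filled in from your own setup:

```python
# Hypothetical diagnostic: if a zero-action rollout always lands near -602/-603
# and no short-circuit event ever appears in the simulation, the training
# scenarios are fault-free and "never apply the brake" is trivially optimal.

# from PowerDynSimEnvDef_v7 import PowerDynSimEnv   # constructor args omitted here
# env = PowerDynSimEnv(...)

def zero_action_returns(env, episodes=5):
    """Episode returns when the dynamic brake is never applied (action = 0)."""
    returns = []
    for _ in range(episodes):
        obs = env.reset()
        done, total = False, 0.0
        while not done:
            obs, reward, done, info = env.step(0)  # 0 = do nothing
            total += reward
        returns.append(total)
    return returns
```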

I suspect it has something to do with the PowerDynSimEnvDef modifications: initially you used PowerDynSimEnvDef_v2, while I am now working with PowerDynSimEnvDef_v7.

frostyduck · May 26 '21 01:05

Thanks for your feedback and your email. I will look into this and reply to you asap.

qhuang-pnl · Jun 02 '21 08:06