Yuanmo

Results 52 comments of Yuanmo

> Do you have a colab you can share with some sample data? Here is the sample data, which has 8 classes with a shape of 150x150 in 3 channels. I'd appreciate it much if you can...

> Could you please reproduce the error in one of the tutorial notebooks, and share the Colab? That will be most helpful to future users who have this problem. Here...

I need that too. This dataset is good, but the T-pose is very strange.

@roger-creus Hi Roger, can you check this issue?

@dominikonysz We have uploaded the update and marked you as a co-author. Thank you for the issue.

Sorry for the late reply! Actually, for these discrete actions, inference will output the logits rather than the raw actions. An example is ``` from rllte.env import make_multibinary_env from rllte.xplore.distribution import...
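Since the full snippet is truncated above, here is a minimal, library-free sketch of the idea: for a multi-binary action space, each output logit can be mapped independently to a 0/1 action (the function name `logits_to_multibinary` and the greedy threshold are illustrative assumptions, not part of the rllte API):

```python
import math

def sigmoid(x):
    # Standard logistic function: maps a logit to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def logits_to_multibinary(logits, threshold=0.5):
    # Greedy decode (hypothetical helper): each logit independently
    # becomes a binary action by thresholding its sigmoid probability.
    return [1 if sigmoid(l) >= threshold else 0 for l in logits]

print(logits_to_multibinary([2.0, -1.5, 0.3]))  # → [1, 0, 1]
```

For stochastic policies one would instead sample each dimension from a Bernoulli with probability `sigmoid(logit)` rather than thresholding.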

Since we are going to publish a formal version soon, we recommend using the latest repo code for more stable performance.

Environment: [DMControl](https://github.com/google-deepmind/dm_control) Completed: 1. Soft Actor-Critic (SAC) **27** tasks reported in [pytorch_sac](https://github.com/denisyarats/pytorch_sac). Two examples: - sac_dmc_state_humanoid_run (2 seeds, 10M steps) - sac_dmc_state_quadruped_walk (10 seeds, 2M steps) Model import example: ```...

Environment: [Envpool Atari Games](https://github.com/Farama-Foundation/Arcade-Learning-Environment) **synchronous mode** Completed: 1. Proximal Policy Optimization (PPO) **57** Atari games reported in [Agent57: Outperforming the Atari Human Benchmark](https://arxiv.org/abs/2003.13350). Two examples: - ppo_atari_Breakout-v5 (10 seeds, 10M...

Environment: [Envpool Procgen Games](https://github.com/openai/procgen) **synchronous mode** Completed: 1. Proximal Policy Optimization (PPO) - ppo_procgen_bigfish (10 seeds, 25M steps) - ppo_procgen_bossfight (10 seeds, 25M steps) - ppo_procgen_caveflyer (10 seeds, 25M steps)...