AttributeError: 'NoneType' object has no attribute 'dumps' when creating TFPyEnvironment

Open makisgrammenos opened this issue 3 years ago • 0 comments

Hey everyone, I am trying to create a custom Python environment with tf_agents and wrap it in a TFPyEnvironment to turn it into a TensorFlow environment. The problem is that I get the following error when wrapping it in TFPyEnvironment. The environment simulates a classification task on DICOM images (loaded through a Dataset class).

Any suggestions?
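
For reference, this is roughly how the environment is wrapped and closed in qlearning.py (a minimal sketch; the intermediate name train_py_env and the omitted agent/training code are just placeholders, train_env.close() is line 178 in the traceback below):

from tf_agents.environments import tf_py_environment

# ClassificationEnv is the custom PyEnvironment shown further down.
train_py_env = ClassificationEnv()
train_env = tf_py_environment.TFPyEnvironment(train_py_env)

# ... build the agent and run the training loop (omitted) ...

train_env.close()  # line 178 in the traceback below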

conda run -n tf --no-capture-output --live-stream python /home/makis/rsna/qlearning.py

2022-03-17 15:41:29.126260: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.132995: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.133657: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.134332: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-03-17 15:41:29.134709: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.135167: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.135602: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.597474: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.597968: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.598381: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:936] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2022-03-17 15:41:29.598790: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1525] Created device /job:localhost/replica:0/task:0/device:GPU:0 with 4783 MB memory:  -> device: 0, name: NVIDIA GeForce GTX 1060 6GB, pci bus id: 0000:01:00.0, compute capability: 6.1
2022-03-17 15:41:30.007430: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8202
2022-03-17 15:41:30.278478: I tensorflow/core/platform/default/subprocess.cc:304] Start cannot spawn child process: No such file or directory
Traceback (most recent call last):
  File "/home/makis/rsna/qlearning.py", line 178, in <module>
    train_env.close()
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/site-packages/tf_agents/environments/tf_py_environment.py", line 198, in close
    self._env.close()
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/site-packages/tf_agents/environments/batched_py_environment.py", line 188, in close
    self._execute(lambda env: env.close(), self._envs)
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/site-packages/tf_agents/environments/batched_py_environment.py", line 106, in _execute
    return self._pool.map(fn, iterable)
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/multiprocessing/pool.py", line 364, in map
    return self._map_async(func, iterable, mapstar, chunksize).get()
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/multiprocessing/pool.py", line 473, in _map_async
    self._check_running()
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/multiprocessing/pool.py", line 350, in _check_running
    raise ValueError("Pool not running")
ValueError: Pool not running
Exception ignored in: <function Pool.__del__ at 0x7ff3f8bb9480>
Traceback (most recent call last):
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/multiprocessing/pool.py", line 268, in __del__
  File "/home/makis/anaconda3/envs/tf/lib/python3.10/multiprocessing/queues.py", line 372, in put
AttributeError: 'NoneType' object has no attribute 'dumps'

Here is the environment code

import numpy as np

from tf_agents.environments import py_environment
from tf_agents.specs import array_spec
from tf_agents.trajectories import time_step as ts


class ClassificationEnv(py_environment.PyEnvironment):
    def __init__(self):
        super().__init__()
        self._action_spec = array_spec.BoundedArraySpec(
            shape=(), dtype=np.int32, minimum=0, maximum=1, name='action')
        self._observation_spec = array_spec.BoundedArraySpec(
            shape=(1, 256, 256, 64), dtype=np.float32, name='observation')
       
        # Dataset is my own DICOM image loader (not shown here).
        self._data = Dataset(batch_size=1)

        self._state, self._stateLabel = self._data.__getitem__(0)

        self._episode_ended = False
        self._totalActions = 0

        self._lendata = self._data.__len__()
        self._dataindx = 0

        self.correctClassifications = 0
        self.falseClassifications = 0

    def action_spec(self):
        return self._action_spec

    def observation_spec(self):
        return self._observation_spec

    def _reset(self):
        self._state, self._stateLabel = self._data.__getitem__(0)

        self._dataindx = 0

        self._episode_ended = False
        return ts.restart(np.array(self._state, dtype=np.float32))

    def _step(self, action):
        self._totalActions += 1

        if self._episode_ended:
            # The last action ended the episode. Ignore the current action and start
            # a new episode.
            return self.reset()

        # Default reward for a step that only ends the episode without classifying.
        reward = 0

        # Make sure episodes don't go on forever.
        if self._totalActions == self._lendata - 1 or self.falseClassifications > self._lendata // 2:
            self._episode_ended = True
        else:
            # Reward +1 for a correct classification, -1 for a wrong one.
            if action == self._stateLabel[0]:
                reward = 1
                self.correctClassifications += 1
            elif action != self._stateLabel[0]:
                reward = -1
                self.falseClassifications += 1
            else:
                raise ValueError('Invalid action: %s' % action)

            # Advance to the next image in the dataset.
            self._dataindx += 1
            self._state, self._stateLabel = self._data.__getitem__(self._dataindx)
            self._state = np.array(self._state, dtype=np.float32)

        if self._episode_ended or self.falseClassifications == self._lendata // 2:
            return ts.termination(np.array(self._state, dtype=np.float32), reward)
        else:
            return ts.transition(
                np.array(self._state, dtype=np.float32), reward=reward, discount=np.array(1.0, dtype=np.float32))
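
For completeness, the plain Python environment can also be sanity-checked on its own before wrapping it, roughly like this (a minimal sketch using tf_agents' validate_py_environment; the episode count is arbitrary):

from tf_agents.environments import utils

# Exercise the raw PyEnvironment against its specs for a couple of episodes
# before handing it to TFPyEnvironment.
env = ClassificationEnv()
utils.validate_py_environment(env, episodes=2)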

makisgrammenos · Mar 17 '22 14:03