
divide by zero error

Open richielo opened this issue 7 years ago • 8 comments

Hello, thank you for the work. I am hitting a divide-by-zero error in the line below when calling the sample function to sample from memory. Any idea why?

is_weight /= is_weight.max()
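For context, that line is at the end of the memory's sample function; the surrounding computation looks roughly like this (my abbreviated sketch, not the verbatim source):

```python
# priorities holds the leaf priorities of the sampled transitions
sampling_probabilities = np.array(priorities) / self.tree.total()
# importance-sampling weights: (N * P(i)) ** (-beta)
is_weight = np.power(self.tree.n_entries * sampling_probabilities, -self.beta)
is_weight /= is_weight.max()  # the error is reported on this line
```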

richielo avatar Nov 08 '18 19:11 richielo

It's caused by this, and it's actually raised by the np.power in the line above.

I've forked and partially fixed the issues, and made a couple of other changes (plus made the PER memory more configurable).

  • I avoid the division by zero by changing the beta step so it always stays just below 1: np.min([1. - self.e, ...]).
  • I also found that the SumTree sometimes pulls uninitialized samples into its batch (these show up as simply a 0), which can cause exceptions down the line if you don't guard against it. I haven't root-caused that yet, but I just discard those samples when they happen and raise a warning (a sketch of both changes follows below the warning). It happens rarely enough that discarding shouldn't have any impact on training. The warning looks like this:
/Users/{user}/repos/per/prioritized_memory.py:48: UserWarning: Pulled 1 uninitialized samples
  warnings.warn('Pulled {} uninitialized samples'.format(uninitialized_samples))
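For concreteness, a minimal sketch of both changes inside the memory's sample function. The beta anneal term elided above is assumed to be this repo's self.beta_increment_per_sampling, and self.e is reused as the small epsilon; treat this as a sketch, not the exact patch:

```python
import warnings

import numpy as np

# Anneal beta, but cap it strictly below 1 instead of at 1.
self.beta = np.min([1. - self.e, self.beta + self.beta_increment_per_sampling])

# ... after filling `batch` from the SumTree ...

# Uninitialized leaves come back as a plain integer 0; drop them and warn.
uninitialized_samples = len([d for d in batch if isinstance(d, int)])
if uninitialized_samples > 0:
    warnings.warn('Pulled {} uninitialized samples'.format(uninitialized_samples))
    batch = [d for d in batch if not isinstance(d, int)]
```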

I'm happy to PR my changes into this repo if @rlcode wants them.

stormont avatar Feb 21 '19 01:02 stormont

Hi guys, @rlcode - many thanks for your work. I also observed uninitialized samples being pulled and got the mentioned unwanted "0". I haven't figured out the problem yet, and I wonder whether the sampling process works as it is supposed to. @stormont did you figure out the root cause?

emunaran avatar Mar 20 '19 11:03 emunaran

As noted in Schaul et al. (2015), as the TD error approaches 0 we get divide-by-zero errors. They fix this via:

p_i = |δ_i| + ε

where epsilon is a small value that prevents this. I think this is missing from your algorithm? I am pretty confident that if you have been testing on CartPole you will never run into this issue; however, in discrete state spaces (like mazes) this becomes a real problem.
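For illustration, here is a sketch of how that epsilon usually enters the priority computation, reusing this repo's self.e (epsilon) and assuming a self.a attribute for the exponent alpha; treat it as a sketch rather than the exact code:

```python
def _get_priority(self, error):
    # Epsilon keeps the priority strictly positive even when the TD error
    # is exactly zero, so no transition ever gets sampling probability 0.
    return (np.abs(error) + self.e) ** self.a
```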

josiahls avatar Aug 24 '19 21:08 josiahls

Hello, I also found that uninitialized samples get sampled, returning the unwanted data "0". I tried to find the root cause but failed. Did you guys figure out the reason? @stormont @emunaran Many thanks!

yougeyxt avatar Sep 04 '19 00:09 yougeyxt

Also, according to the paper, when storing a new transition (s, a, r, s_) in the memory, its priority should be the maximum priority among the leaf nodes, right? But in the code it uses the TD error computed from s and s_, which is different from the paper. I am wondering whether this is a bug or not.
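For reference, a minimal sketch of the paper's insertion rule. It assumes the SumTree exposes its backing array as self.tree.tree with the leaves in the last capacity slots; those names are assumptions about the internals, not this repo's actual API:

```python
def add(self, sample):
    # Per the paper, a new transition gets the current maximum leaf
    # priority so it is guaranteed to be replayed at least once.
    max_priority = np.max(self.tree.tree[-self.tree.capacity:])
    if max_priority == 0:
        max_priority = 1.0  # empty tree: fall back to a default priority
    self.tree.add(max_priority, sample)
```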

yougeyxt avatar Sep 04 '19 00:09 yougeyxt

> Hello, I also found that uninitialized samples get sampled, returning the unwanted data "0". I tried to find the root cause but failed. Did you guys figure out the reason? @stormont @emunaran Many thanks!

Hi there! I faced the same issue, and what I did is resample another value from that same interval until the returned data is not an integer (since the data array is initialized with np.zeros, uninitialized slots hold a plain 0). In the prioritized memory I added the following:

```python
for i in range(n):
    a = segment * i
    b = segment * (i + 1)
    # Resample within the segment until the leaf holds real data;
    # uninitialized slots still contain the integer 0 from np.zeros.
    while True:
        s = random.uniform(a, b)
        (idx, p, data) = self.tree.get(s)
        if not isinstance(data, int):
            break
    priorities.append(p)
    batch.append(data)
    idxs.append(idx)
```

This did the trick for me. Hope it does the same for you.

Jspujol avatar Nov 07 '19 16:11 Jspujol

If anyone is still wondering why it pulls 0 from the replay memory: the sampled location in the replay memory was not filled yet, so it still contained the initial value the buffer was created with, i.e., 0. If you add a condition that training does not start until the buffer is completely filled, you never encounter this issue.
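A minimal sketch of that guard, assuming the memory exposes the number of stored transitions as memory.tree.n_entries (the names here are assumptions):

```python
# Skip training until every slot of the replay buffer holds a real
# transition, so the SumTree can never return an uninitialized leaf.
if memory.tree.n_entries >= capacity:
    batch, idxs, is_weights = memory.sample(batch_size)
    # ... run a gradient step on the sampled batch ...
```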

being-aerys avatar Dec 01 '20 06:12 being-aerys

> Hi there! I faced the same issue and what I did is resample another value from that same interval until the returned data is not an integer. [...] This did the trick for me.

Brilliant!

ZINZINBIN avatar Mar 28 '23 07:03 ZINZINBIN