SAGPool
why optimizer.zero_grad() after optimizer.step()?
In the train epoch: why is optimizer.zero_grad() called after optimizer.step()? Does the order matter? The usual pattern is optimizer.zero_grad() -> loss.backward() -> optimizer.step().
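For reference, here is a minimal sketch of the two orderings in plain PyTorch (not the SAGPool code; `model`, `loader`, and `criterion` are placeholder names for illustration). Both are equivalent as long as the gradients are zeroed at some point between one optimizer.step() and the next loss.backward(), because backward() accumulates into .grad rather than overwriting it.

```python
import torch

# Placeholder model, loss, optimizer, and data for illustration only.
model = torch.nn.Linear(8, 2)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loader = [(torch.randn(4, 8), torch.randint(0, 2, (4,))) for _ in range(3)]

# Common ordering: clear gradients at the start of each iteration.
for x, y in loader:
    optimizer.zero_grad()      # clear gradients left over from the previous step
    loss = criterion(model(x), y)
    loss.backward()            # accumulate fresh gradients
    optimizer.step()           # update parameters

# Equivalent ordering: clear gradients right after the update.
optimizer.zero_grad()          # clean start (the loop above left gradients behind)
for x, y in loader:
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()      # gradients are now clean for the next iteration
```

Either way, the only thing that changes behavior is whether stale gradients are still present when backward() runs; calling zero_grad() after step() instead of before backward() just moves the same cleanup to the end of the iteration.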