[BUG] A dataclass in "next" doesn't get copied over in `step_mdp`
Describe the bug
A dataclass stored in "next" doesn't get copied over by `step_mdp`.
To Reproduce
import dataclasses
from tensordict.tensordict import TensorDict
import torch
from torchrl.envs import step_mdp
@dataclasses.dataclass
class Test:
    a: int

dataclass = Test(1)

new_tensordict = step_mdp(
    TensorDict(
        {"next": {"state": torch.tensor([1.0]), "test": dataclass}}, batch_size=1
    )
)
print(new_tensordict.keys())
>> _StringKeys({'state': tensor([1.])})
Expected behavior
"test" in the example should be copied over.
System info
Using version 0.3.0 for both tensordict and torchrl.
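For reference, the versions were read off the package metadata, e.g.:

```python
import tensordict
import torchrl

# Both report 0.3.0 in the environment where the repro above was run.
print("tensordict:", tensordict.__version__)
print("torchrl:", torchrl.__version__)
```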
Checklist
- [x] I have checked that there is no similar issue in the repo (required)
- [x] I have read the documentation (required)
- [x] I have provided a minimal working example to reproduce the bug (required)
import dataclasses
from tensordict.tensordict import TensorDict
import torch
import copy
from torchrl.envs import step_mdp

@dataclasses.dataclass
class Test:
    a: int

dataclass = Test(1)

# Deep copy the dataclass before passing it to step_mdp
dataclass_copy = copy.deepcopy(dataclass)

new_tensordict = step_mdp(
    TensorDict(
        {"next": {"state": torch.tensor([1.0]), "test": dataclass_copy}}, batch_size=1
    )
)
print(new_tensordict.keys())
Non-tensor data is not well supported at the moment, but it's going to be a feature of the next release! Stay tuned! Let's use this thread to talk about the feature once the PR lands.
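A rough sketch of what usage could look like once the feature lands, assuming a wrapper along the lines of `NonTensorData`; the class name, its constructor, and whether `step_mdp` will carry such entries over are assumptions here rather than the confirmed API:

```python
# Hypothetical sketch: NonTensorData and its handling by step_mdp are
# assumptions about the upcoming release, not the 0.3.0 behaviour.
import dataclasses

import torch
from tensordict import NonTensorData, TensorDict
from torchrl.envs import step_mdp


@dataclasses.dataclass
class Test:
    a: int


td = TensorDict(
    {
        "next": {
            "state": torch.tensor([1.0]),
            # Wrap the arbitrary Python object so it is tracked as a leaf.
            "test": NonTensorData(data=Test(1), batch_size=[1]),
        }
    },
    batch_size=[1],
)
new_td = step_mdp(td)
# Ideally "test" now survives alongside "state":
print(new_td.keys())
```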
Absolutely, I appreciate the heads up! It's great to hear that non-tensor data will be supported in the next release. This feature will certainly enhance the flexibility and usability of the library. I look forward to exploring it further once the PR lands. Feel free to reach out anytime if you need assistance or feedback during the development process.