speed up loading of namespaces: return shallow copy in build_const_args
Motivation and description
I am trying to speed up the loading of namespaces in pynwb, which sometimes takes up to 6 seconds on initial load. While tracing through the code to find the cause of the slowness, I came across a `deepcopy` in a low-level function, `build_const_args`, that gets called many times during namespace loading. I replaced it with a shallow copy and noticed a significant improvement in load time.
IMPORTANT: I am not familiar enough with the code to know whether this change is going to break anything.
This is one of two PRs I am submitting to try to speed things up.
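For context, the change amounts to replacing a `copy.deepcopy` of the spec dictionary with a shallow copy. A minimal sketch of what a shallow copy does and does not isolate, using a plain dict rather than the actual hdmf code:

```python
import copy

spec_dict = {'name': 'data', 'dtype': {'target_type': 'TimeSeries'}}

deep = copy.deepcopy(spec_dict)   # nested dicts are cloned too
shallow = dict(spec_dict)         # top level cloned; nested dicts shared

# Mutating a nested dict in the deep copy leaves the original untouched
deep['dtype']['target_type'] = 'X'
print(spec_dict['dtype']['target_type'])  # 'TimeSeries'

# Mutating a nested dict through the shallow copy changes the original
shallow['dtype']['target_type'] = 'Y'
print(spec_dict['dtype']['target_type'])  # 'Y'
```

So the shallow copy is only safe if callers rebind top-level keys rather than mutate nested values, which is the question discussed below.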
How to test the behavior?
Run this script three times: twice before the change and once after. The first run downloads the needed data and saves the loaded file segments to a cache directory, so the second and third runs do not include download time. On my machine, loading takes around 4 seconds before the change and around 1.5 seconds after.
```python
import time
import remfile
import pynwb
import h5py


def example_slow_load_namespace():
    # https://neurosift.app/?p=/nwb&dandisetId=000409&dandisetVersion=draft&url=https://api.dandiarchive.org/api/assets/c04f6b30-82bf-40e1-9210-34f0bcd8be24/download/
    h5_url = 'https://api.dandiarchive.org/api/assets/c04f6b30-82bf-40e1-9210-34f0bcd8be24/download/'
    disk_cache = remfile.DiskCache('test_cache')
    remf = remfile.File(h5_url, disk_cache=disk_cache)
    timer = time.time()
    with h5py.File(remf, 'r') as h5f:
        with pynwb.NWBHDF5IO(file=h5f, mode='r', load_namespaces=True) as io:
            nwbfile = io.read()
            print(nwbfile)
    elapsed = time.time() - timer
    print('Elapsed time:', elapsed)


if __name__ == '__main__':
    example_slow_load_namespace()
```
Checklist
- [x] Did you update CHANGELOG.md with your changes?
- [x] Does the PR clearly describe the problem and the solution?
- [x] Have you reviewed our Contributing Guide?
- [ ] Does the PR use "Fix #XXX" notation to tell GitHub to close the relevant issue numbered XXX when the PR is merged?
@oruebel @rly
Codecov Report
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 88.88%. Comparing base (b0f068e) to head (da2c1d3). Report is 30 commits behind head on dev.
Additional details and impacted files
```
@@            Coverage Diff             @@
##              dev    #1103      +/-  ##
==========================================
- Coverage   88.88%   88.88%    -0.01%
==========================================
  Files          45       45
  Lines        9836     9835        -1
  Branches     2795     2795
==========================================
- Hits         8743     8742        -1
  Misses        776      776
  Partials      317      317
```
The change to a shallow copy should be fine. The only reason I would suspect a `deepcopy` is needed is if we wanted to modify an independent copy. We do a modification here:
```python
@classmethod
def build_const_args(cls, spec_dict):
    ''' Build constructor arguments for this Spec class from a dictionary '''
    ret = super().build_const_args(spec_dict)
    if isinstance(ret['dtype'], dict):
        ret['dtype'] = RefSpec.build_spec(ret['dtype'])
    return ret
```
(lines 276-282)
And also in namespace.py:

```python
if parent_cls.def_key() in spec_dict:
    spec_dict[spec_cls.def_key()] = spec_dict.pop(parent_cls.def_key())
if parent_cls.inc_key() in spec_dict:
    spec_dict[spec_cls.inc_key()] = spec_dict.pop(parent_cls.inc_key())
```
I would need to dive in deeper to verify that the order in which we call these methods does not conflict with using a shallow copy. I'll tackle this next week when I am back.
> The only reason I would suspect a deepcopy is needed is if we wanted to modify an independent copy.

I think this will require careful testing. We should check if/where the spec object is actually being modified and why. If the spec is being modified downstream, then I'd suspect this could lead to issues when reading multiple files: you could get undesirable side effects where the spec is modified while reading file A, and then reading file B would see the modifications made while reading A. I'm not sure whether that is actually the case or whether using a deepcopy was just done to be extra careful.
> The only reason I would suspect a deepcopy is needed is if we wanted to modify an independent copy.
>
> I think this will require careful testing. We should check if/where the spec object is actually being modified and why. If the spec is being modified downstream, then I'd suspect this could lead to issues when reading multiple files: you could get undesirable side effects where the spec is modified while reading file A, and then reading file B would see the modifications made while reading A. I'm not sure whether that is actually the case or whether using a `deepcopy` was just done to be extra careful.
Agreed.
> The only reason I would suspect a deepcopy is needed is if we wanted to modify an independent copy.
>
> I think this will require careful testing. We should check if/where the spec object is actually being modified and why. If the spec is being modified downstream, then I'd suspect this could lead to issues when reading multiple files: you could get undesirable side effects where the spec is modified while reading file A, and then reading file B would see the modifications made while reading A. I'm not sure whether that is actually the case or whether using a `deepcopy` was just done to be extra careful.
I checked this out over here: https://github.com/hdmf-dev/hdmf/pull/1152#issue-2412235347
tl;dr: the `deepcopy` doesn't protect against mutation anyway, because of when it is called and what calls it; the main thing the `deepcopy` seems to be doing is giving derived objects a new id.
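That last observation is easy to verify with the `copy` module: a deepcopy gives every nested object a new identity, while a shallow copy shares them with the original:

```python
import copy

spec = {'groups': [{'name': 'acquisition'}]}

deep = copy.deepcopy(spec)
shallow = dict(spec)

# deepcopy recursively clones nested objects, so they get new id()s
assert deep['groups'] is not spec['groups']
assert deep == spec  # equal in value, distinct in identity

# a shallow copy shares nested objects, so they keep the original id()s
assert shallow['groups'] is spec['groups']
```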