Reading a .ply file is too slow (1.5 GB)
I have some code that does what I need it to do, but when testing with a larger sample of data (about 1.5 GB) it seems to hang while reading the file into memory. I have waited indefinitely for this call to complete: `plydata = PlyData.read('C:/Users/Jim/Desktop/data.ply')`
There's an efficient loader for large elements via memory-mapping, but it requires:
- Binary mode, not ASCII
- No list properties
- Underlying file can be memory-mapped
If any of those requirements isn't satisfied, it falls back to one of the slow loaders.
By the way, here's the logic for deciding which reader it can use: https://github.com/dranjan/python-plyfile/blob/9f8e8708d3a071229cf292caae7d13264e11c88b/plyfile.py#L638-L656
The middle branch, where `_can_mmap(stream) and not self._have_list` evaluates to `True` and `text` is `False`, is the only one I would expect to load your data efficiently.
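As a rough illustration (this is a sketch, not plyfile's actual code), two of those three conditions can be predicted from the header alone: the `format` line must be binary, and no property may be a list. Whether the stream can actually be memory-mapped still depends on the open file itself, so the function below is only a heuristic:

```python
def predicts_fast_path(header: str) -> bool:
    """Heuristically predict from a PLY header whether the fast
    memory-mapped loader could apply.

    Sketch only: plyfile's real check also verifies that the
    underlying stream itself is mmap-able.
    """
    is_binary = False
    has_list = False
    for line in header.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "format":
            # e.g. "format binary_little_endian 1.0" vs. "format ascii 1.0"
            is_binary = fields[1].startswith("binary")
        elif fields[0] == "property" and fields[1] == "list":
            # e.g. "property list uchar int vertex_indices"
            has_list = True
    return is_binary and not has_list

ascii_header = (
    "ply\nformat ascii 1.0\n"
    "element vertex 3\nproperty float x\nend_header"
)
binary_header = (
    "ply\nformat binary_little_endian 1.0\n"
    "element vertex 3\nproperty float x\nend_header"
)
binary_list_header = (
    "ply\nformat binary_little_endian 1.0\n"
    "element face 2\nproperty list uchar int vertex_indices\nend_header"
)
```

Only the binary, list-free header passes this check; an ASCII file or any list property rules the fast path out before the file body is even read.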
Is there a way to use memory-mapping while the .ply file includes list properties?
Unfortunately, elements that contain list properties can't be memory-mapped.
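The reason, loosely, is that memory-mapping an element as an array relies on every record having the same byte size, so record `i` starts at the fixed offset `i * record_size`. A list property stores a count followed by a variable number of values, so record sizes vary and there is no closed-form offset. A small sketch with Python's `struct` module (a hypothetical layout, not plyfile internals):

```python
import struct

# Fixed-size records: three float32 coordinates per vertex (12 bytes each),
# as in "property float x / y / z".
vertex_fmt = "<3f"
record_size = struct.calcsize(vertex_fmt)  # 12 bytes per vertex

buf = struct.pack("<3f", 0.0, 1.0, 2.0) + struct.pack("<3f", 3.0, 4.0, 5.0)

# Record i can be located directly by arithmetic -- this is what makes
# treating a memory-mapped region as a numpy-style array viable.
i = 1
x, y, z = struct.unpack_from(vertex_fmt, buf, i * record_size)

# Variable-size records: "property list uchar int vertex_indices" packs a
# one-byte count followed by that many int32 values. A triangle takes
# 1 + 3*4 = 13 bytes, a quad 1 + 4*4 = 17, so there is no fixed offset
# for record i and the file must be scanned sequentially.
faces = struct.pack("<B3i", 3, 0, 1, 2) + struct.pack("<B4i", 4, 0, 1, 2, 3)
```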
I see. Thanks.