Proper incremental processing
The demo loads and renders a gcode file incrementally, but the `processGCode` method it uses performs a complete render for every chunk of gcode processed. This makes total render time grow quadratically. It's noticeable with large models and with volumetric rendering; in the latter case it feels as if the demo grinds to a halt.
The solution would be to render only the gcode that was just processed and keep the previously rendered parts.
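A minimal sketch of that idea, assuming a hypothetical chunk-processing API; the mesh and scene here are plain-object stand-ins for their three.js counterparts, so the sketch stays self-contained:

```javascript
// Sketch: render only the newly parsed chunk and keep earlier results,
// instead of re-rendering everything on every call.
function createIncrementalRenderer() {
  const scene = []; // stand-in for a THREE.Scene holding earlier meshes

  return {
    processChunk(gcodeChunk) {
      // Hypothetical parse step: keep only G-commands from this chunk.
      const commands = gcodeChunk.split('\n').filter((l) => l.startsWith('G'));
      const mesh = { commandCount: commands.length }; // stand-in for a real mesh
      scene.push(mesh); // previously rendered parts are kept untouched
      return scene.length;
    },
    get meshCount() {
      return scene.length;
    },
  };
}
```

Each call touches only the new chunk, so total work stays linear in the amount of gcode instead of quadratic.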
UPDATE: this is partially fixed by #136 which renders a model in an animation loop upon load.
What's still to be done is handling the case where gcode is actually parsed in chunks. The problem is that the eventual number of layers is unknown up front. That clashes with the gradient used for line rendering: either you cannot compute the color gradient, or you have to re-render the whole file.
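To make the gradient problem concrete: a per-layer gradient color is typically an interpolation over the normalized layer index, which requires the total layer count up front. This is a sketch of that dependency, not the library's actual implementation:

```javascript
// Sketch of why the gradient needs the total layer count up front:
// the color is a lerp over layerIndex / (totalLayers - 1).
function gradientColor(layerIndex, totalLayers, from = [255, 0, 0], to = [0, 0, 255]) {
  const t = totalLayers > 1 ? layerIndex / (totalLayers - 1) : 0;
  return from.map((c, i) => Math.round(c + (to[i] - c) * t));
}
// If totalLayers grows while parsing, every previously computed color
// becomes stale, which forces a re-render of the earlier layers.
```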
TODO:
- [ ] create new demo for incremental parsing & processing of gcode
- [ ] only render the new gcode
- [ ] disable the gradient (line rendering)
- [ ] optional: find a way to re-render the model once every
- [ ] optional: pass the expected # of layers to the `processGCode` function
I see some issues with keeping objects on the scene: as the number of meshes increases, performance drops significantly, even for small models like the Benchy. `BatchedMesh` is what optimizes the whole thing.
The challenge with `BatchedMesh` is that we can't add geometries after the fact, since its memory is preallocated. I solved that by batching in chunks, which keeps the number of objects on the scene relatively low. But we don't have the option to render line by line at the moment. If that's something we actually want, I have some ideas.
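The batch-in-chunks idea can be sketched without three.js. In the real renderer each flush would preallocate a `THREE.BatchedMesh` sized for the buffered geometries and add them to it; here a plain array stands in for the batch so the sketch is self-contained:

```javascript
// Sketch of batching in chunks: buffer incoming geometries and flush
// them into one batch per chunkSize, so the scene holds few objects
// even when the gcode produces thousands of geometries.
function createChunkBatcher(chunkSize = 64) {
  const batches = []; // stand-ins for BatchedMesh instances on the scene
  let pending = [];

  return {
    add(geometry) {
      pending.push(geometry);
      if (pending.length >= chunkSize) this.flush();
    },
    flush() {
      if (pending.length === 0) return;
      // Real code: allocate a BatchedMesh sized for `pending` here.
      batches.push(pending); // one scene object per chunk
      pending = [];
    },
    get sceneObjectCount() {
      return batches.length;
    },
  };
}
```

Because the `BatchedMesh` for a chunk is only allocated at flush time, its preallocated size is always known, and the scene object count grows by one per chunk instead of one per geometry.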
If we're able to parse in a background worker in a way that makes partial results available to the main thread, we'd be able to batch the current progress at each animation frame. The whole parsing and interpreting could be non-blocking. That way we don't have to know anything up front about the size of what's parsed; we'd just render what's available as fast as the CPU can interpret it.
Now, using background workers in JavaScript is a skill I have yet to develop, and I have no idea if that's even possible.
Did some digging and I think I have a viable approach. I'll start on a POC after we have a first alpha version released.
It would combine requestAnimationFrame and promises.
Basically, all the requestAnimationFrame callback would do is batch the available meshes and add them to the scene. As soon as parsing starts, promises would be leveraged to split the heavy calculations into small units. When a promise resolves, its result is kept available for the batching method to render on the next frame.
If we can have the geometries generated in order, the progressive rendering will look slick and run as fast as the machine allows. We'll probably even be able to chain successive calls to `processGCode` before rendering has finished.
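A rough sketch of that requestAnimationFrame + promise scheme, under the assumptions above. Parsing is split into small units that each run in their own microtask, and the per-frame callback only batches whatever has finished; all names here are hypothetical:

```javascript
// Sketch of the rAF + promise scheme: heavy work runs in small
// promise-scheduled units, results queue up in `ready`, and the
// per-frame callback drains the queue into the scene as one batch.
function createProgressiveRenderer() {
  const ready = []; // finished results waiting to be batched
  const scene = []; // one entry per frame's batch

  // Process parse units one by one, yielding between units so the
  // main thread is never blocked for long.
  async function process(units, interpret) {
    for (const unit of units) {
      await Promise.resolve(); // yield; rendering can interleave here
      ready.push(interpret(unit));
    }
  }

  // What the requestAnimationFrame callback would do each frame:
  // batch everything that's ready and add it to the scene.
  function frame() {
    if (ready.length > 0) scene.push(ready.splice(0));
  }

  return { process, frame, scene };
}
```

Because `process` pushes results in the order it receives the units, the frame callback always batches geometries in order, which is exactly the property the gradient-free progressive rendering needs.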
I think this issue is mostly addressed by the handling of streams in https://github.com/xyz-tools/gcode-preview/commit/aca7a5974b259ff211d9b472f613e13a5bc5a33a
The other thing was the gradient that was once used with line rendering. This isn't a focus right now, and if it needs improvement it can be done in a shader without much impact on the overall render architecture.
Closing for now