
High Memory Usage During Prediction and Correction for Large Layouts

Open bahremsd opened this issue 1 year ago • 2 comments

Thank you for providing such a valuable library. However, after loading the GDS of a device with a footprint of 400 microns by 6 microns and running prediction or correction, I hit a memory problem: usage increases significantly, exceeding 100 GB. Could you consider optimizing the process for better performance and lower memory consumption, particularly for larger layouts?

bahremsd avatar Feb 18 '25 06:02 bahremsd

Thank you for the feedback!

This is definitely something that needs to be improved (and we are working on it). These models work at a very fine resolution (1 nm/px), so large structures turn into massive arrays. You can load smaller parts of the device, but this isn't practical in real use cases (especially for devices with many features).
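To put rough numbers on this (my own back-of-envelope estimate, not figures from the maintainers): rasterizing the 400 µm × 6 µm footprint from the original report at 1 nm/px gives billions of pixels, so even a single working array is large, and a pipeline that holds several copies (input, output, intermediates) can plausibly reach the memory usage described above.

```python
# Back-of-envelope memory estimate for rasterizing a 400 µm x 6 µm
# device at the 1 nm/px model resolution mentioned above.
width_px = 400_000   # 400 µm at 1 nm per pixel
height_px = 6_000    # 6 µm at 1 nm per pixel
pixels = width_px * height_px

bytes_per_float32 = 4
gb_per_array = pixels * bytes_per_float32 / 1e9

print(pixels)        # 2,400,000,000 pixels
print(gb_per_array)  # 9.6 GB for a single float32 array
```

A prediction that keeps even a handful of such arrays alive at once is already in the tens of gigabytes before any model overhead.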

Anyway, we have a way of processing large devices in a memory-efficient way—just need to find a nice way of adding it in. Will keep you updated.
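For readers hitting this in the meantime, one generic way to bound peak memory is to tile the layout: process one padded tile at a time and stitch only the tile interiors back together. This is a sketch of the general technique, not PreFab's actual implementation; `process_tile` is a hypothetical stand-in for whatever per-array model call you use.

```python
import numpy as np

def process_in_tiles(layout, tile, halo, process_tile):
    """Apply `process_tile` over `layout` one tile at a time.

    `halo` pixels of context are included around each tile so the
    model sees its neighborhood, but only the tile interior is kept.
    Peak memory is roughly one padded tile, not the whole layout.
    """
    h, w = layout.shape
    out = np.empty_like(layout)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            # slice a tile padded by `halo` px of context, clipped at edges
            y0, y1 = max(y - halo, 0), min(y + tile + halo, h)
            x0, x1 = max(x - halo, 0), min(x + tile + halo, w)
            result = process_tile(layout[y0:y1, x0:x1])
            # keep only the interior of the result (drop the halo)
            ih = min(y + tile, h) - y
            iw = min(x + tile, w) - x
            out[y:y + ih, x:x + iw] = result[y - y0:y - y0 + ih,
                                             x - x0:x - x0 + iw]
    return out
```

For a pointwise `process_tile` the stitched output matches a whole-array pass exactly; for a model with a finite receptive field, choosing `halo` at least that large keeps the seams consistent.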

Dusandinho avatar Feb 19 '25 20:02 Dusandinho

Following up on this. The new release includes a way to run predictions on GDS cells through predict.predict_gdstk. All of the conversions happen on the server, which should take the memory load off the local machine. It's also set up in a way that significantly reduces prediction time for large devices/layouts.

We're expecting to move the regular array-based predictions over to this new approach, so let me know if you come across any issues when using it.

Other improvements on this to come.

Dusandinho avatar Mar 19 '25 14:03 Dusandinho

Thank you, Dusan. I appreciate the effort you put into making the process more memory-efficient and practical. I’ll test it further and share any feedback, but this is already a big step forward.

bahremsd avatar Aug 19 '25 11:08 bahremsd