Release memory after every mesh optimization iteration without ti.reset()

Open · Szy-Young opened this issue on Oct 16 '24 · 1 comment

Hi! I am trying to build a mesh optimization pipeline by combining Taichi with PyTorch. My general pipeline looks like the pseudo-code below:

```python
import torch
import taichi as ti

# To-be-optimized geometry parameters
params = torch.randn(..., requires_grad=True)
optimizer = torch.optim.Adam([params], lr=lr)

for it in range(n_iter):
    verts0, faces0 = param_to_mesh(params)   # PyTorch-based operations

    # Re-initialize Taichi every iteration so the fields can be re-created
    ti.init(arch=ti.cuda)

    verts0_ti, faces0_ti = cast_tensor_to_field(verts0, faces0)    # Cast PyTorch tensors to Taichi fields
    verts_ti, faces_ti = process_mesh(verts0_ti, faces0_ti)    # Taichi-based operations

    verts = verts_ti.to_torch()
    verts.requires_grad_(True)

    # Backward propagation
    loss = compute_loss(verts)
    ...

    # Destroy all fields and release their memory (slow)
    ti.reset()
```

As you can see, I create some intermediate Taichi fields in every iteration, and they keep occupying memory (I know a simple `del` does not release it), which eventually leads to an OOM error. Since the mesh topology may change during optimization (the shapes of verts/faces may change), I cannot create these fields once before the whole optimization either. So I have to call ti.init() and ti.reset() in every iteration to release the memory, which makes the code run noticeably slower.
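
One direction that might avoid the full init/reset cycle is Taichi's `ti.FieldsBuilder` API, which finalizes fields into an `SNodeTree` that can later be destroyed explicitly while the runtime stays alive. A minimal sketch under that assumption (the `n_verts`/`n_faces` sizes, the field layouts, and the helper name are placeholders for whatever the current topology requires):

```python
import taichi as ti

ti.init(arch=ti.cuda)  # initialize once, outside the optimization loop

def alloc_mesh_fields(n_verts: int, n_faces: int):
    """Allocate per-iteration mesh fields into a destroyable SNodeTree."""
    fb = ti.FieldsBuilder()
    verts = ti.Vector.field(3, dtype=ti.f32)  # declared without a shape ...
    faces = ti.Vector.field(3, dtype=ti.i32)
    fb.dense(ti.i, n_verts).place(verts)      # ... then placed via the builder
    fb.dense(ti.i, n_faces).place(faces)
    tree = fb.finalize()  # materializes the fields and returns an SNodeTree
    return verts, faces, tree

for it in range(n_iter):  # n_iter as in the pseudo-code above
    verts_ti, faces_ti, tree = alloc_mesh_fields(n_verts, n_faces)
    # ... copy data in, run Taichi kernels, copy results out ...
    tree.destroy()  # frees the memory backing verts_ti / faces_ti
```

If `destroy()` behaves as documented, this would release the field memory at the end of each iteration without tearing down and re-creating the whole Taichi runtime.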

I wonder whether there is another way to release all the memory occupied by Taichi fields without calling ti.reset(). Note that I don't need to delete specific fields selectively: all of the Taichi fields are free to be removed at the end of every iteration.
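
Another possibility, assuming the fields here are only intermediates, is to skip Taichi fields entirely and pass the PyTorch tensors straight into kernels as `ti.types.ndarray()` arguments, so the memory stays under PyTorch's caching allocator. A rough sketch (the kernel body and names are illustrative only, not the actual `process_mesh` logic):

```python
import torch
import taichi as ti

ti.init(arch=ti.cuda)

@ti.kernel
def offset_verts(verts: ti.types.ndarray(), dx: ti.f32):
    # Operates in place on the PyTorch tensor's own CUDA memory.
    for i in range(verts.shape[0]):
        for j in ti.static(range(3)):
            verts[i, j] += dx

verts = torch.zeros(1024, 3, device="cuda")  # placeholder mesh vertices
offset_verts(verts, 0.5)  # no Taichi fields allocated, nothing to reset
```

This would replace the cast_tensor_to_field step in the pseudo-code above and sidestep the per-iteration field allocation altogether, since tensor shapes can change freely between calls.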

Any suggestions are greatly appreciated! Thanks.

Szy-Young — Oct 16 '24

Same problem here; wondering if you have found a solution by now.

IVBA20000 — Dec 25 '24