
Speed up im2col_encoding for GPUs

Open dimk1 opened this issue 1 year ago • 2 comments

Currently, in encrypted inference, we encrypt images one by one by calling the ts.im2col_encoding() function, then run model(context, x_enc, windows_nb) on a single sample x_enc. I think this is the main performance bottleneck. GPUs give the biggest speedup when we run inference on batches of data via model(batch_x), where batch_x is a 3D or 4D tensor (#num_of_samples, width, height). But in encrypted inference #num_of_samples = 1, so GPU utilization is very low. I looked for something like "CKKSTensor - Batching" in TenSEAL, but I could not find any. Have you considered this feature as an improvement? It would speed things up a lot.
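For context, here is a minimal plaintext sketch of the im2col layout that ts.im2col_encoding() encrypts (this is my own illustration, not TenSEAL's implementation): the image is unfolded into one flattened window per convolution position, so the whole encoding is inherently tied to a single image.

```python
def im2col(image, kh, kw, stride):
    """Unfold a 2D image into flattened kh x kw windows (one per
    convolution position). TenSEAL encrypts the image in this layout
    so that a convolution becomes an encrypted matmul; windows_nb is
    the second value the real ts.im2col_encoding() returns."""
    h, w = len(image), len(image[0])
    windows = [
        [image[i + di][j + dj] for di in range(kh) for dj in range(kw)]
        for i in range(0, h - kh + 1, stride)
        for j in range(0, w - kw + 1, stride)
    ]
    windows_nb = len(windows)
    return windows, windows_nb

# Example: a 4x4 image with a 2x2 kernel and stride 2 yields 4 windows.
img = [[r * 4 + c for c in range(4)] for r in range(4)]
wins, nb = im2col(img, 2, 2, 2)
print(nb)        # 4 windows
print(wins[0])   # [0, 1, 4, 5] -- the top-left 2x2 patch
```

Since each ciphertext holds exactly one image in this layout, batching across samples would need a different packing scheme, which is what the question above is asking about.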

dimk1 avatar Dec 06 '24 20:12 dimk1

Nothing is running on GPU. Ciphertext computation (even with a batch of 1) could benefit from running on GPU, but that is not the case here: everything is running on CPU. So this is clearly out of reach.

youben11 avatar Dec 06 '24 20:12 youben11

Ok, sorry for the misunderstanding; I'm new to FHE and TenSEAL. So there is no way to speed up this operation by running it on batches > 1?

dimk1 avatar Dec 06 '24 20:12 dimk1