
Using shm in bls

badskeet opened this issue 3 years ago · 2 comments

I want to use shared memory in BLS but didn't find any similar examples, only for clients. Are there such examples?

badskeet avatar Aug 31 '22 17:08 badskeet

Hi @badskeet Can you explain a little more about your use case please? The shared memory feature is used for communication between the client and server. BLS shouldn't need to use shared memory since the process has the entire request. cc: @Tabrizian

nv-kmcgill53 avatar Sep 02 '22 17:09 nv-kmcgill53

Hi. I want to store a vector that is very expensive to calculate and is computed for a specific shape. In my case, on the first request for a new shape, the vector would be calculated, but on subsequent requests with the same shape it would not be recalculated. This would significantly improve performance in my case.

badskeet avatar Sep 20 '22 13:09 badskeet

Hi @badskeet, the shared memory implementation is completely transparent to the Python model (i.e. you won't interact directly with shared memory). You can create a tensor and store it as one of the attributes of your Python model. If the Python backend sees that the tensor is already in shared memory and has not been deallocated, it will not copy it to shared memory again, which should hopefully speed up your model.
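The caching pattern described above can be sketched as follows. This is a minimal, hedged illustration using plain Python and NumPy rather than the actual Triton Python backend API: in a real deployment the cache would live on your `TritonPythonModel` instance and the tensors would be `pb_utils.Tensor` objects; the class and function names here are illustrative assumptions, not Triton APIs.

```python
import numpy as np

class ShapeCachedModel:
    """Illustrative sketch: cache a heavy, shape-dependent vector as a
    model attribute so it is computed only once per distinct shape.
    In a real Triton Python model this logic would live inside the
    TritonPythonModel's execute() method."""

    def __init__(self):
        # Maps input shape -> precomputed vector.
        self._cache = {}

    def _heavy_vector(self, shape):
        # Stand-in for the expensive, shape-specific computation.
        return np.ones(shape) * sum(shape)

    def execute(self, input_array):
        shape = input_array.shape
        if shape not in self._cache:
            # Computed only on the first request with a new shape;
            # later requests with the same shape reuse the cached result.
            self._cache[shape] = self._heavy_vector(shape)
        return input_array + self._cache[shape]

model = ShapeCachedModel()
a = model.execute(np.zeros((2, 3)))  # first (2, 3) request: computes the vector
b = model.execute(np.zeros((2, 3)))  # same shape: reuses the cached vector
```

Because the cached array is kept alive as a model attribute across requests, the backend's shared-memory bookkeeping (as described above) can avoid re-copying it on each response.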

Tabrizian avatar Oct 31 '22 20:10 Tabrizian

Closing issue due to lack of activity. Please re-open the issue if you would like to follow up on it.

jbkyang-nvi avatar Nov 22 '22 02:11 jbkyang-nvi