Using shm in bls
I want to use shared memory in BLS but couldn't find any similar examples, only ones for clients. Are there such examples?
Hi @badskeet Can you explain a little more about your use case please? The shared memory feature is used for communication between the client and server. BLS shouldn't need to use shared memory since the process has the entire request. cc: @Tabrizian
Hi. I want to store a vector that is very expensive to calculate and is computed for a specific shape. In my case, the vector would be calculated on the first request for a new shape, but on subsequent requests with the same shape it would not need to be recalculated. This would significantly improve performance in my case.
Hi @badskeet, the shared memory implementation is completely transparent to the Python model (i.e. you won't interact with shared memory directly). You can create a tensor and store it as one of the attributes of your Python model. If the Python backend sees that the tensor is already in shared memory and has not been deallocated, it will not copy it to shared memory again, which should hopefully speed up your model.
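A minimal sketch of the caching pattern described above, using plain NumPy. The Triton-specific request handling (`triton_python_backend_utils`, `TritonPythonModel.execute()`) is omitted so the snippet is self-contained; the class and method names here are hypothetical, as is the stand-in computation:

```python
import numpy as np

class ShapeCachedModel:
    """Caches a heavy-to-compute vector per input shape.

    In a real Triton Python model, the cache would be an attribute of
    your TritonPythonModel and the lookup would happen in execute().
    """

    def __init__(self):
        # Maps shape tuple -> previously computed vector.
        self._cache = {}

    def _expensive_vector(self, shape):
        # Hypothetical stand-in for the costly, shape-dependent computation.
        return np.ones(shape, dtype=np.float32).cumsum()

    def get_vector(self, shape):
        shape = tuple(shape)
        if shape not in self._cache:
            # First request with this shape: compute once and cache.
            self._cache[shape] = self._expensive_vector(shape)
        # Subsequent requests with the same shape reuse the cached array.
        return self._cache[shape]
```

Because the same array object is returned on repeated requests, the Python backend can also avoid re-copying it to shared memory, as described above.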
Closing due to lack of activity. Please re-open if you would like to follow up on this issue.