Make sharp().toBuffer() return Buffers with transferrable backing stores
Feature request
What are you trying to achieve?
I'm working in a multi-threaded Node.js application where sharp().toBuffer() output is passed to worker_threads via postMessage. To avoid copying overhead, I'd like to transfer the backing ArrayBuffer (i.e., use zero-copy transfers).
However, the current Buffer returned by toBuffer() may point to a shared or larger memory pool (via node::Buffer::New(...) with internal backing), making the .buffer potentially non-transferrable or containing unrelated memory slices.
Specifically, you can test this by calling require('worker_threads').isMarkedAsUntransferable(buffer), which today returns true.
When you searched for similar feature requests, what did you find that might be related?
Didn't find anything relevant to transferrable ArrayBuffer as a backing store.
What would you expect the API to look like?
Ideally, the existing toBuffer() call would already return a Buffer backed by a tightly sliced ArrayBuffer that is safe to transfer.
If not feasible as the default, a new option like:
sharp(...).toBuffer({ transferrable: true });
Images returned by sharp as Buffer objects do not come from the shared pool, so they should already be transferable.
https://github.com/nodejs/node/issues/55593 suggests this might be a recent Node.js change/feature/bug.
Can you point me to the place where those buffers are created in sharp? I also looked at this comment, which may indicate pooling behavior: https://github.com/nodejs/node/issues/55593#issuecomment-2500316828
libvips (or one of its dependencies) allocates the memory, sharp uses NewOrCopy to wrap it as a Buffer instance.
https://github.com/lovell/sharp/blob/031c808aa5f8763e1d95ea0b7c1e462714501834/src/pipeline.cc#L1355
For Node.js this is a zero-copy operation. For recent versions of Electron that enable its memory cage feature, this involves a copy, which I guess might potentially involve an allocation from the pool.
Thanks for sharing the code sample. While it may be zero-copy for Node.js, the resulting buffer is still marked as untransferable.
To address this without introducing a copy, we could allocate the memory ourselves and wrap it with a shared_ptr:
// Allocate the output ourselves so the Buffer wraps a standalone,
// tightly sized block rather than pooled memory.
std::shared_ptr<char> dataPtr(new char[baton->bufferOutLength], [](char* data) {
  delete[] data;
});
baton->bufferOut = dataPtr.get();
// Grab the raw pointer before handing ownership to the finalizer hint:
// argument evaluation order is unspecified, so dataPtr.get() and
// std::move(dataPtr) must not appear in the same argument list.
char* bufferOut = dataPtr.get();
Napi::Buffer<char> data = Napi::Buffer<char>::New(
  env,
  bufferOut,
  baton->bufferOutLength,
  [](Napi::Env, char*, std::shared_ptr<char>* hint) {
    delete hint;  // drops the last reference; the deleter frees the array
  },
  new std::shared_ptr<char>(std::move(dataPtr))
);
(note: I haven't used Napi myself, but a similar approach works for Nan)
This avoids pooled memory, ensures transferability, and retains zero-copy. It would be great to have this as the default behavior.
Not sure how this behaves under Electron, so I’m open to thoughts there. Also, I believe the enhancement label should stay—this is a real inefficiency for multi-threaded applications.
This still feels like something that needs to be corrected at the Node.js and/or Node-API level. If these are marking externally-allocated and therefore transferable Buffers as untransferable then I think that's a better place to deal with the cause, rather than attempt to deal with the effects everywhere else.
I think the Node.js behavior is both correct and expected. Digging into node_buffer.cc, the behavior here relates directly to how memory ownership is tracked for external ArrayBuffer backing stores.
In particular, whether a buffer is transferable or not depends on whether Node can prove that its memory is externally owned and properly finalizable.
In CallbackInfo::CreateTrackedArrayBuffer() (see here), Node constructs the BackingStore using:
std::unique_ptr<BackingStore> bs =
ArrayBuffer::NewBackingStore(data, length,
[](void*, size_t, void* arg) {
static_cast<CallbackInfo*>(arg)->OnBackingStoreFree();
}, self);
What’s key here is that Node avoids marking such ArrayBuffers as untransferable only if:
1. The backing store is created with a proper finalizer.
2. Node has full control over its deallocation via the CallbackInfo object.
If instead a Buffer is created with pooled memory or shared allocator slabs (like NewOrCopy() often does in sharp), Node can’t guarantee memory safety across threads — and (I believe) V8 errs on the side of caution, marking .buffer as untransferable.
From what I can tell, Node.js marks all externally-allocated Buffer instances as untransferable:
https://github.com/nodejs/node/blob/004ecc12eb43c1253274728209c0790f06820437/src/node_buffer.cc#L460-L464
The call chain from sharp to here is Napi::Buffer::NewOrCopy -> Napi::Buffer::New -> napi_create_external_buffer -> node::Buffer::New. A finalizer callback is provided all the way upstream.
Perhaps a new "this data is transferable" option exposed at the napi_create_external_buffer level (and then exposed downstream) would help solve this?
Hi all,
I've also encountered this issue and created a minimal repository to reproduce the problem of transferring sharp's Buffer (specifically, its underlying ArrayBuffer) between threads.
Repo: https://github.com/yourusername/piscina-and-sharp
In my tests:
- On Node.js v20.x, the ArrayBuffer from sharp().toBuffer() can be transferred successfully.
- On Node.js v22.x, attempting the same transfer results in a DataCloneError: Cannot transfer object of unsupported type.
The repository includes examples using both plain worker_threads and Piscina to demonstrate this. Hope this helps with diagnosing the issue!