Documentation request: Example on how to save as buffer
I would very much like to see an example of how to properly save an archive as a buffer.
Say I have 4 strings:

```js
const fileNameA = "file-a.txt";
const fileContentA = "foo";
const fileNameB = "file-b.txt";
const fileContentB = "bar";
```

How can I use Archiver to save these files as a tgz buffer in memory?
I am also interested in this. I would like to generate a zip and download it as a "blob", for example.
You can pipe the archive to a Writable stream and manually collect the chunks. Example:

```js
import archiver from 'archiver';
import fs from 'fs';
import { Writable } from 'stream';

/**
 * Collects the archive's output and resolves to a single Buffer.
 * @param {archiver.Archiver} archive
 * @returns {Promise<Buffer>}
 */
async function toBuffer(archive) {
  // create a writable and save the chunks it receives
  /** @type {Uint8Array[]} */
  const chunks = [];
  const writable = new Writable();
  writable._write = (chunk, encoding, callback) => {
    // save to an array to concatenate later
    chunks.push(chunk);
    callback();
  };
  // pipe to the writable
  archive.pipe(writable);
  await archive.finalize();
  // once done, concatenate the chunks
  return Buffer.concat(chunks);
}

// gzip: true produces a gzipped tar, matching the .tgz extension below
const archive = archiver('tar', { gzip: true });
archive.file('./file-a.txt', { name: 'file-a.txt' });
archive.file('./file-b.txt', { name: 'file-b.txt' });

toBuffer(archive).then(buffer => {
  console.log('buffer length:', buffer.length);
  // additionally, save the buffer to disk to verify its contents
  fs.writeFileSync('./my-zip.tgz', buffer);
});
```
The idea is to implement `Writable._write` (either by extending `Writable` or assigning a function), save the chunks in an array, and finally concatenate them once the archive is done.
Wondering if this is a viable addition to the library?