IfcStreamer cache fails in Safari: File System API createWritable is not supported
Describe the bug 📝
Hello,
I just upgraded to the latest 2.3.0 version. Thanks a lot; I'm still exploring it, and the File System API cache does the job almost perfectly.
I have tested it and it works in Chrome, Firefox, and Opera. However, it fails immediately when opening any streaming model in Safari and Safari on iOS.
According to MDN (https://developer.mozilla.org/en-US/docs/Web/API/File_System_API), these browsers currently lack support for the .createWritable method.
So we get this error:
Unhandled Promise Rejection: TypeError: (await (await this.getDir(this.baseDirectory)).getFileHandle(e, { create: !0 })).createWritable is not a function.
I can detect this condition and turn off caching myself (or the library could do it), or we could have an (automatic/manual) option to fall back to the old IndexedDB method until Safari supports .createWritable. Or is there any other way to write these files with the File System API without .createWritable?
Is that possible? Thanks.
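For reference, a minimal sketch of the suggested fallback, assuming `streamer` is the IfcStreamer instance and `useCache` is the flag mentioned in the reproduction steps below; it simply feature-detects .createWritable before enabling the cache:

```js
// Only enable the File System API cache when createWritable is available
// (it is currently missing in Safari and Safari on iOS).
const supportsCreateWritable =
  typeof FileSystemFileHandle !== "undefined" &&
  "createWritable" in FileSystemFileHandle.prototype;

streamer.useCache = supportsCreateWritable;
```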
Reproduction ▶️
No response
Steps to reproduce 🔢
Load any streaming model with the streamer.useCache = true option in Safari. (It works without the cache.)
System Info 💻
Binaries:
Node: 20.14.0 - ~/.nvm/versions/node/v20.14.0/bin/node
Yarn: 1.22.22 - ~/.nvm/versions/node/v20.14.0/bin/yarn
npm: 10.7.0 - ~/.nvm/versions/node/v20.14.0/bin/npm
Browsers:
Chrome: 129.0.6668.90
Safari: 18.0
npmPackages:
@thatopen/components: ^2.3.0 => 2.3.0
Used Package Manager 📦
npm
Error Trace/Logs 📃
Unhandled Promise Rejection: TypeError: (await (await this.getDir(this.baseDirectory)).getFileHandle(e, { create: !0 })).createWritable is not a function.
Validations ✅
- [X] Read the docs.
- [X] Check that there isn't already an issue that reports the same bug to avoid creating a duplicate.
- [X] Make sure this is a repository issue and not a framework-specific issue. For example, if it's a THREE.js related bug, it should likely be reported to mrdoob/threejs instead.
- [X] Check that this is a concrete bug. For Q&A join our Community.
- [X] The provided reproduction is a minimal reproducible example of the bug.
Hello again,
As a workaround, I have overridden the .add method of StreamerFileDb by injecting it from outside. The good news is that it still uses the same File System API; however, from my reading, the only way to write a file without .createWritable() is .createSyncAccessHandle(), which is widely supported but requires a dedicated web worker.
I know there are no web workers in the current codebase, AFAIK, so I'm not sure how to put together a good merge request.
My current DRAFT patch is very simple (below) and works in every browser that has the File System API. I effectively changed just one line to move the write into a web worker.
@HoyosJuan @agviegas please share your comments.
It is overridden on the created instance:
```js
const streamer = this.#components.get(OBCF.IfcStreamer);
const cacheWebWorker = new Worker("./cache-dedicated-web-worker.js");

streamer.fileDB.add = async (filename, data) => {
  return await new Promise((resolve, reject) => {
    const internalFileName = streamer.fileDB.encodeName(filename);
    cacheWebWorker.postMessage([
      streamer.fileDB.baseDirectory,
      internalFileName,
      data,
    ]);
    cacheWebWorker.onmessage = function (msg) {
      if (msg.data === true) {
        streamer.fileDB.updateLastAccessTime(internalFileName);
        resolve();
      } else {
        reject(msg.data);
      }
    };
  });
};
```
and the web worker is:
```js
onmessage = async function (msg) {
  try {
    const baseDir = msg.data[0];
    const fileName = msg.data[1];
    const data = msg.data[2];
    const dir = await (
      await navigator.storage.getDirectory()
    ).getDirectoryHandle(baseDir, { create: true });
    const handle = await dir.getFileHandle(fileName, { create: true });
    // const writable = await handle.createWritable();
    const writable = await handle.createSyncAccessHandle({
      mode: "readwrite",
    });
    await writable.write(data);
    await writable.close();
    postMessage(true);
  } catch (e) {
    postMessage(e);
  }
};
```
Hi @fethigurcan, thank you very much for this! Did you notice any change in performance? If not, I think we can add this to the library :)
I did not measure it, but it seems the same, or maybe feels like a small performance improvement since the dedicated web worker runs on a separate thread.
I have a question about this: do we really need to await this .add function? Could .add cache in the background while the other loading processes continue in parallel?
@fethigurcan maybe not, we can try skipping that and see if it works.
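Skipping the await could look roughly like this (a sketch only, reusing the add(filename, data) signature from the patch above; fileName and buffer stand in for whatever the streamer would pass):

```js
// Fire-and-forget caching: don't block streaming on the cache write.
// A failed write only logs a warning instead of stopping the load.
streamer.fileDB.add(fileName, buffer).catch((err) => {
  console.warn("Cache write failed, continuing without cache:", err);
});
```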
By the way, I'm aware of a bottleneck in streaming that awaits the files one by one instead of fetching them in parallel. I plan to fix it in this milestone, so together with this change, the overall loading and streaming speed should improve notably.
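Illustratively, the parallel version of that fetch loop could look something like this (a sketch, not the library's code; the tile URLs are placeholders):

```js
// Request all tile files at once instead of awaiting them one by one.
async function fetchTilesInParallel(urls) {
  const responses = await Promise.all(urls.map((url) => fetch(url)));
  return Promise.all(responses.map((res) => res.arrayBuffer()));
}
```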
Hey @fethigurcan I've tried your solution in the @thatopen/components alpha6 release, but it doesn't seem to work properly. Any ideas on what might be wrong?
Hello @agviegas, for now I have only fetched and read the alpha6 code without executing it.
Generally the code looks OK, but I am not sure whether you can define a DEDICATED web worker without a separate .js file. Did you try it with a separate file? (I will also check it in my environment during the day.)
I have tested it by quickly copy-pasting it into my code. I copied the Blob block directly, and the inline Blob script works fine as a dedicated web worker. So the problem is not there, nor in the web worker code. Still investigating. Do you have any errors?
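For reference, the inline-worker pattern being discussed boils down to this generic sketch (independent of the library):

```js
// A dedicated worker created from an inline script via a Blob URL,
// i.e. without a separate .js file.
const workerCode = `onmessage = (e) => postMessage(e.data);`;
const workerBlob = new Blob([workerCode], { type: "application/javascript" });
const inlineWorker = new Worker(URL.createObjectURL(workerBlob));
```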
I have noticed one difference: my code uses `return await new Promise...`, but there is no await in the alpha6 code. However, it seems OK without the await. I will test directly with the alpha6 library during the day.
I have done some tests and experiments.
The first thing I detected is that these two methods conflict with the worker creation (in my environment I set this property right after creation, so it terminates the worker):
```js
set memoryCleanTime(value) {
  this._memoryCleanTime = value;
  this.dispose();
  this.setupMemoryCleanup();
}

dispose() {
  if (this._intervalID !== null) {
    window.clearInterval(this._intervalID);
  }
  this._worker.terminate();
}
```
When I comment out the termination call above, it works for the first few caching operations (for example, it handles 3, 6, or 10 caching calls) and then gets stuck somewhere without any errors.
But when we create the Worker every time in the .add method, like this, it works without any problem (possibly with a performance drawback, though it is the safest method for testing):
```js
async add(name, buffer) {
  return await new Promise((resolve, reject) => {
    const internalFileName = this.encodeName(name);
    this._worker = new Worker(URL.createObjectURL(this._blob));
    this._worker.onmessage = (msg) => {
      if (msg.data === true) {
        console.log("onmessage:" + internalFileName);
        this.updateLastAccessTime(internalFileName);
        resolve();
      } else {
        console.log("onmessage error:" + msg.data);
        reject(msg.data);
      }
    };
    this._worker.postMessage([this.baseDirectory, internalFileName, buffer]);
  });
}
```
So it seems that some conflict or lifecycle issue happens when we try to reuse the worker.
However, as you remember, my patch above does not create the Worker every time; it creates it once, in the same context (a clue?), before patching the method. I have tested it and have been actively using it in every browser, including mobile ones, with very big models (2 to 5.5 GB, multiple models at once) for over two weeks without a single problem. So I am a bit confused today :) I'll check again later, but I wanted to share some of my experiments for now.
Hey @fethigurcan thank you so much for sharing all your discoveries. We use workers without files in other parts of the library and they work fine 🤔
In theory, `return await new Promise` should behave exactly the same as `return new Promise`, so I don't think the error lies there.
I think the problem is precisely in the logic for terminating / resetting the worker. I will investigate this further and let you know of any news I discover.
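(For completeness: the two forms only differ when the return sits inside a try/catch, which is not the case in the patch. A small illustration:)

```js
// Only the awaited form lets a surrounding try/catch intercept the rejection.
async function withoutAwait() {
  try {
    return Promise.reject(new Error("boom")); // rejection escapes the try/catch
  } catch {
    return "caught";
  }
}

async function withAwait() {
  try {
    return await Promise.reject(new Error("boom")); // rejection is caught below
  } catch {
    return "caught";
  }
}

// withoutAwait() rejects with the error; withAwait() resolves to "caught".
```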
Yes, you are right; it must be about the lifecycle of the worker. (The other assumptions proved false.)
But I noticed another thing yesterday. I had been testing with my (patched) code, which depends on 2.3.0, and everything was OK. To test alpha6, I created a branch and shared my findings with you. The last thing I noticed is that when I activate my original patch with alpha6, it also gets stuck somewhere. So it is possible that the stuck issue is not directly related, and more probably my environment has some incompatibilities with the latest alpha releases, because I have some other simple URL injections in place to work with multiple streaming models. (I saw that you added some new things to handle that, but I have not tested them yet.) So, if you fix the dispose() and memoryCleanTime() methods, I believe it will work as expected :)
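One possible shape for that fix, sketched here only as a suggestion (not the library's actual code; `_worker` and `_blob` are assumed to be the fields used in the snippets above), is to recreate the worker whenever the setter disposes it:

```js
// Recreate the worker after disposing it, so changing memoryCleanTime
// never leaves the instance with a terminated worker.
set memoryCleanTime(value) {
  this._memoryCleanTime = value;
  this.dispose(); // terminates the current worker and clears the interval
  this._worker = new Worker(URL.createObjectURL(this._blob));
  this.setupMemoryCleanup();
}
```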
Hello @agviegas ,
This seems OK. In this patch, I patched the prototype directly first, because I detected some internal calls to this method as well.
But the important change is sharing the same worker.onmessage handler across different calls (with correlation IDs). I believe the older patch sometimes got confused by parallel calls. It works perfectly in my tests.
```js
// Patch for StreamerFileDb
const cache_webworker_code = `
  addEventListener("message", async (event) => {
    const correlationId = event.data[0];
    try {
      const baseDir = event.data[1];
      const fileName = event.data[2];
      const data = event.data[3];
      const dirRef = await navigator.storage.getDirectory();
      const dir = await dirRef.getDirectoryHandle(baseDir, { create: true });
      const handle = await dir.getFileHandle(fileName, { create: true });
      const writable = await handle.createSyncAccessHandle({ mode: "readwrite" });
      await writable.write(data);
      await writable.close();
      postMessage([correlationId, true]);
    } catch (e) {
      // Report failures back to the main thread so the waiting promise rejects.
      postMessage([correlationId, false, e]);
    }
  });
`;

const cache_webworker_blob = new Blob([cache_webworker_code], {
  type: "application/javascript",
});
const cacheWebWorker = new Worker(URL.createObjectURL(cache_webworker_blob));

// One shared onmessage handler resolves or rejects the pending promise
// that matches the correlation id sent with each write request.
const cacheWebWorkerCorrelation = {};
cacheWebWorker.onmessage = (msg) => {
  const correlationPromise = cacheWebWorkerCorrelation[msg.data[0]];
  if (correlationPromise) {
    delete cacheWebWorkerCorrelation[msg.data[0]];
    if (msg.data[1] === true) {
      correlationPromise.resolve();
    } else {
      correlationPromise.reject(msg.data[2]);
    }
  } else {
    console.error(`Ignored unknown cache resolver ${msg.data[0]}`);
  }
};

FRAG.StreamerFileDb.prototype.add = async function (filename, data) {
  const internalFileName = this.encodeName(filename);
  const correlationId = crypto.randomUUID();
  const waiter = new Promise((resolve, reject) => {
    cacheWebWorkerCorrelation[correlationId] = { resolve, reject };
  });
  cacheWebWorker.postMessage([
    correlationId,
    this.baseDirectory,
    internalFileName,
    data,
  ]);
  await waiter;
  this.updateLastAccessTime(internalFileName);
};
// End of Patch for StreamerFileDb
```
Hey @fethigurcan we are currently working on a system that should greatly improve all these problems. We'll keep you updated
Hey @agviegas
Just faced this error in Flutter using the InAppWebView to display some models. Is there any update on this?
And @fethigurcan are you still using your patch? How did it work out until now?
Thanks!
Yes, I am still using it with the latest release. I may have refactored and/or fixed some minor things, so if you want to use it, let me know; I will check and send the latest patch code.
Thanks! Yes, I would appreciate it very much if you would share the current status of your patch.
Currently we are not streaming the tiles from a remote system but directly from the host. I would like to test whether the cache could still be used here and what influence it has on performance.
This is fixed with fragments 2.0. We might try reimplementing a simpler streaming mode for fragments in the future, but for now, without streaming, you should be able to open gigabytes of BIM data quickly with a much simpler setup.