Memory leak when adding a bookmark link in post editor
Issue Summary
I'm seeing a memory leak within Ghost when adding a Bookmark Link to my site. So far it only seems to happen with certain URLs, such as:
- https://www.amazon.com.au/Amazon-Basics-Hardside-Expandable-Suitcase/dp/B074MDKZM7?th=1
- https://10play.com.au/live/ten
We can see the memory spike, and it takes just two links to max out the 512MB limit on my Ghost install, at which point it OOMs.
This could be related to a forum post.
Steps to Reproduce
- Create a new post
- Add a "Bookmark Link" for any request
- Continue to add Bookmark Links
- Observe the crash or unreleased memory in monitoring (a scripted version of these steps is sketched below)
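For anyone trying to reproduce this outside the browser, below is a rough Node script that fires the same oembed request the editor makes when a Bookmark Link card is added. This is a hypothetical helper, not part of Ghost: the port, cookie handling, and loop count are assumptions, and you need a valid admin session cookie for the request to succeed.

```js
// repro.js — hypothetical repro helper, not part of Ghost.
// Assumes a local install on port 2368 and an admin session cookie
// exported from the browser into GHOST_ADMIN_COOKIE.
const GHOST_URL = 'http://localhost:2368';
const COOKIE = process.env.GHOST_ADMIN_COOKIE; // e.g. 'ghost-admin-api-session=...'

// One of the URLs from the summary above
const target = 'https://www.amazon.com.au/Amazon-Basics-Hardside-Expandable-Suitcase/dp/B074MDKZM7?th=1';

// Same request the editor issues when a Bookmark Link card is added
async function addBookmark(url) {
    const res = await fetch(
        `${GHOST_URL}/ghost/api/admin/oembed/?url=${encodeURIComponent(url)}&type=bookmark`,
        {headers: {cookie: COOKIE}}
    );
    console.log(new Date().toISOString(), res.status);
}

(async () => {
    // Two links were enough to exhaust 512MB here; loop a few times
    // and watch the container's memory in your monitoring.
    for (let i = 0; i < 3; i++) {
        await addBookmark(target);
    }
})();
```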
Ghost Version
5.94.0
Node.js Version
18.20.4
How did you install Ghost?
Docker on Linux (it also seems to be an issue on a non-Docker install)
Database type
MySQL 8
Browser & OS version
Chrome / MacOS
Relevant log / error output
```
[2024-09-09 06:38:45] INFO "GET /ghost/api/admin/oembed/?url=https%3A%2F%2Fwww.amazon.com.au%2FAmazon-Basics-Hardside-Expandable-Suitcase%2Fdp%2FB074MDKZM7&type=bookmark" 200 16047ms
NotFoundError: Image not found

<--- Last few GCs --->

[11:0xbb098b0] 85788090 ms: Scavenge 249.9 (257.8) -> 249.9 (258.1) MB, 10.5 / 0.0 ms (average mu = 0.966, current mu = 0.544) allocation failure;
[11:0xbb098b0] 85788545 ms: Mark-sweep 249.9 (258.1) -> 249.0 (258.1) MB, 454.8 / 0.0 ms (average mu = 0.917, current mu = 0.182) allocation failure; scavenge might not succeed
[11:0xbb098b0] 85789161 ms: Mark-sweep 250.0 (258.1) -> 249.8 (259.1) MB, 610.8 / 0.0 ms (average mu = 0.806, current mu = 0.010) allocation failure; scavenge might not succeed

<--- JS stacktrace --->

FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb82c6c node::Abort() [node]
 2: 0xa9bf08 [node]
 3: 0xd44220 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xd443f0 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xf22ad4 [node]
 6: 0xf34a6c v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 7: 0xf10bf0 v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 8: 0xf11bc8 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 9: 0xef4858 v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
10: 0x129af5c v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
11: 0x167f96c [node]
```
Code of Conduct
- [X] I agree to be friendly and polite to people in this repository
@JamesMarino do you see similar memory spikes when uploading an image into the editor?
@kevinansfield I uploaded a 5MB image to the editor, repeated this about 6 times, and got the memory increase shown below.
Garbage collection seems to be working and memory is freed without an OOM crash.
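For reference, this is roughly how I've been sampling memory while testing. It's a hypothetical helper, not part of Ghost; since `process.memoryUsage()` only reports on its own process, you'd paste it into a debug hook inside the Ghost process or adapt the idea to external monitoring.

```js
// heap-watch.js — log heap/RSS every 5s so you can see whether GC ever
// returns the process to its baseline after an upload or bookmark fetch.
const mb = (n) => `${(n / 1024 / 1024).toFixed(1)}MB`;

setInterval(() => {
    const {rss, heapUsed, heapTotal, external} = process.memoryUsage();
    console.log(`rss=${mb(rss)} heapUsed=${mb(heapUsed)} heapTotal=${mb(heapTotal)} external=${mb(external)}`);
}, 5000);
```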
I was digging into profiling the oembedService.fetchOembedDataFromUrl method and noticed that an external library handles the link scraping and meta tag parsing here.
It seems to be building some sort of structure with 22,000 objects using 117MB of memory, which I assume is not being freed for some reason.
Could the issue lie in some inefficient upstream code in this library? I also noticed the version pinned in package.json is a few versions behind.
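For anyone wanting to repeat the profiling, below is roughly the approach: take a heap snapshot before and after a single bookmark fetch, force a GC, then diff the two snapshots in Chrome DevTools. This is a sketch only; how you obtain the service instance and the exact call signature are assumptions.

```js
// profile-oembed.js — sketch only; run with `node --expose-gc` so the
// forced GC below actually happens before the second snapshot.
const v8 = require('v8');

async function profileBookmarkFetch(oembedService, url) {
    const before = v8.writeHeapSnapshot(); // writes Heap.<timestamp>.heapsnapshot
    await oembedService.fetchOembedDataFromUrl(url, 'bookmark'); // approximate call shape
    if (global.gc) global.gc(); // give V8 a chance to free anything collectable
    const after = v8.writeHeapSnapshot();
    console.log('Diff these in Chrome DevTools > Memory:', before, after);
}
```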
> It seems to build some sort of structure with 22,000 objects using 117MB of memory which I assume is not being freed for some reason.
Sounds more like inefficient code (causing an OOM) as opposed to a memory leak... I suggest editing the issue title to change "Memory leak" to "OOM".
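If it helps narrow that down: the GC trace above tops out around 259MB, which suggests it's the V8 heap limit rather than the 512MB container limit being hit. A quick, hypothetical one-liner (not Ghost code) to confirm what limit the process is actually running with:

```js
// Prints V8's configured heap limit; with no --max-old-space-size flag,
// Node picks a default based on available memory.
const v8 = require('v8');
console.log(`heap limit: ${(v8.getHeapStatistics().heap_size_limit / 1048576).toFixed(0)}MB`);
```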
Our bot has automatically marked this issue as stale because there has not been any activity here in some time.
The issue will be closed soon if there are no further updates, however we ask that you do not post comments to keep the issue open if you are not actively working on a PR.
We keep the issue list minimal so we can keep focus on the most pressing issues. Closed issues can always be reopened if a new contributor is found. Thank you for understanding 🙂