Memory leak / Large images kill classifier process OOM
Describe the bug
After some time, recognize throws an error and stops working (see additional context).
To Reproduce Steps to reproduce the behavior:
- Start a manual classification run.
- Wait.
- Check for errors.
Recognize (please complete the following information):
- JS-only mode: No
- Enabled modes: All
Server (please complete the following information):
- Nextcloud: 22
- OS: Ubuntu 20.04
- RAM: 24 GB
- Processor Architecture x86_64
- Database: PostgreSQL 13
Additional context
Error message from bash:
Classifying photos of user 4479F707-255F-465B-9468-9D92669CDB71
Failed to classify images
An exception occurred while executing a query: SQLSTATE[HY000]: General error: 7 server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
Nextcloud log:
[core] Error: Movie preview generation failed Output: ["ffmpeg version 4.2.4-1ubuntu0.1 Copyright (c) 2000-2020 the FFmpeg developers"," built with gcc 9 (Ubuntu 9.3.0-10ubuntu2)"," configuration: --prefix=/usr --extra-version=1ubuntu0.1 --toolchain=hardened --libdir=/usr/lib/x86_64-linux-gnu --incdir=/usr/include/x86_64-linux-gnu --arch=amd64 --enable-gpl --disable-stripping --enable-avresample --disable-filter=resample --enable-avisynth --enable-gnutls --enable-ladspa --enable-libaom --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libcdio --enable-libcodec2 --enable-libflite --enable-libfontconfig --enable-libfreetype --enable-libfribidi --enable-libgme --enable-libgsm --enable-libjack --enable-libmp3lame --enable-libmysofa --enable-libopenjpeg --enable-libopenmpt --enable-libopus --enable-libpulse --enable-librsvg --enable-librubberband --enable-libshine --enable-libsnappy --enable-libsoxr --enable-libspeex --enable-libssh --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx265 --enable-libxml2 --enable-libxvid --enable-libzmq --enable-libzvbi --enable-lv2 --enable-omx --enable-openal --enable-opencl --enable-opengl --enable-sdl2 --enable-libdc1394 --enable-libdrm --enable-libiec61883 --enable-nvenc --enable-chromaprint --enable-frei0r --enable-libx264 --enable-shared"," libavutil 56. 31.100 / 56. 31.100"," libavcodec 58. 54.100 / 58. 54.100"," libavformat 58. 29.100 / 58. 29.100"," libavdevice 58. 8.100 / 58. 8.100"," libavfilter 7. 57.100 / 7. 57.100"," libavresample 4. 0. 0 / 4. 0. 0"," libswscale 5. 5.100 / 5. 5.100"," libswresample 3. 5.100 / 3. 5.100"," libpostproc 55. 5.100 / 55. 5.100","[mov,mp4,m4a,3gp,3g2,mj2 @ 0x563a3443a700] moov atom not found","/tmp/oc_tmp_c1YWtl: Invalid data found when processing input"]
GET /core/preview?fileId=13377535&x=256&y=256&a=true&v=616ef18336c79
from 192.168.10.2 by 3A60C52D-9415-4F28-A2B7-71A8CBD7A9E3 at 2021-11-02T22:34:12+01:00
And directly after such an error:
[index] Error: An exception occurred while executing a query: SQLSTATE[HY000]: General error: 7 server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
GET /core/preview?fileId=13377748&x=256&y=256&a=true&v=616f1e70cccac
from 192.168.10.2 by 3A60C52D-9415-4F28-A2B7-71A8CBD7A9E3 at 2021-11-02T22:34:26+01:00
PostgreSQL log (literally millions of entries; the log is 1.4 GB):
2021-11-02 15:34:44.619 UTC [3225304] nextcloud@nextcloud STATEMENT: INSERT INTO "oc_systemtag_object_mapping" ("objectid", "objecttype", "systemtagid") VALUES($1, $2, $3)
2021-11-02 15:34:44.620 UTC [3225701] nextcloud@nextcloud ERROR: duplicate key value violates unique constraint "oc_systemtag_object_mapping_pkey"
2021-11-02 15:34:44.620 UTC [3225701] nextcloud@nextcloud DETAIL: Key (objecttype, objectid, systemtagid)=(files, 13400109, 316) already exists.
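With that many entries, a quick way to see whether the duplicate-key errors cluster around particular tags is to aggregate the DETAIL lines. A sketch (the helper name is ours, and the log path in the usage comment is an assumption):

```shell
# Count duplicate-key conflicts per systemtagid from PostgreSQL log text on stdin.
# (Helper name is ours; it expects DETAIL lines like the ones quoted above.)
count_tag_conflicts() {
  grep -oE 'Key \(objecttype, objectid, systemtagid\)=\(files, [0-9]+, [0-9]+\)' \
    | awk -F', ' '{ gsub(/\)/, "", $5); count[$5]++ }
                  END { for (t in count) print count[t], "conflicts for systemtagid", t }' \
    | sort -rn
}

# Usage (the log path is an assumption; adjust to your PostgreSQL setup):
#   count_tag_conflicts < /var/log/postgresql/postgresql-13-main.log
```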
From within the PostgreSQL logs I cannot find anything that indicates the connection being closed.
I am not sure, but from my point of view the tagging makes the Preview Generator app do something that crashes it?!
And recognize writes everything to the database multiple times.
I have disabled preview generation for the "Movie" category - let's see how it does.
So now it simply fails with:
root@wwwubuntu:/var/www/cloud# sudo -u www-data php ./occ recognize:classify -vvv
Failed to classify images
The Nextcloud log contains no entries that indicate why it fails.
This may be due to your CPU architecture. You can try running the classifier script directly to check:
$ apps/recognize/bin/node apps/recognize/src/classifier_imagenet.js path/to/a/photo.jpg
I do not think this is a CPU architecture issue, as I "show" my VM what kind of CPU the host uses, and it also has the mentioned AVX flag:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 8
Socket(s): 1
NUMA node(s): 1
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7272 12-Core Processor
Stepping: 0
CPU MHz: 2894.363
BogoMIPS: 5788.72
Virtualization: AMD-V
L1d cache: 512 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 16 MiB
NUMA node0 CPU(s): 0-7
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Full AMD retpoline, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid pni pclmulqdq ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw perfctr_core ssbd ibrs ibpb stibp vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves clzero xsaveerptr wbnoinvd arat npt nrip_save umip rdpid arch_capabilities
Also a test run worked fine:
sudo -u www-data apps/recognize/bin/node apps/recognize/src/classifier_imagenet.js /mnt/cloud/nextcloud/3A60C52D-9415-4F28-A2B7-71A8CBD7A9E3/files/IMG_20211102_222424.jpg
2021-11-03 10:54:07.618633: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2021-11-03 10:54:09.609194: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 36000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.665774: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.696105: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.719646: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
2021-11-03 10:54:09.742251: W tensorflow/core/framework/cpu_allocator_impl.cc:80] Allocation of 144000000 exceeds 10% of free system memory.
{
  className: 'caldron',
  probability: 0.557533860206604,
  rule: { label: 'cooking', threshold: 0.1 }
}
{
  className: 'cup',
  probability: 0.08837056905031204,
  rule: { label: 'cup', threshold: 0.1 }
}
{
  className: 'coffee mug',
  probability: 0.043676361441612244,
  rule: { label: 'coffee', threshold: 0.2, categories: [ 'beverage' ] }
}
{
  className: 'tiger cat',
  probability: 0.01587754860520363,
  rule: {
    label: 'cat',
    priority: 5,
    threshold: 0.1,
    categories: [ 'animal' ]
  }
}
{
  className: 'egyptian cat',
  probability: 0.009242596104741096,
  rule: {
    label: 'cat',
    priority: 5,
    threshold: 0.1,
    categories: [ 'animal' ]
  }
}
{
  className: 'pitcher',
  probability: 0.008557061664760113,
  rule: { label: 'beverage', threshold: 0.1 }
}
{
  className: 'teapot',
  probability: 0.006404006388038397,
  rule: { threshold: 0.1, categories: [ 'beverage' ] }
}
["cooking"]
Nice! That's good. Then we can try to increase the log verbosity to debug, to see what's happening when the app does the same thing.
Do you mean the Nextcloud log itself, or just the app? I already run it with -vvv.
The Nextcloud log, yeah; the command doesn't print any logs at the moment (which, I'm aware, isn't ideal).
The problem with setting the general log level to debug is that I get billions of "deprecated" messages and the log rotates a lot:
{"reqId":"YYJe_HfHlXE-HTCFlBOEDAAAVBE","level":0,"time":"2021-11-03T11:05:48+01:00","remoteAddr":"192.168.10.2","user":"3A60C52D-9415-4F28-A2B7-71A8CBD7A9E3","app":"video_converter","method":"GET","url":"/apps/logreader/poll?lastReqId=YYJe-3fHlXE-HTCFlBOECwAAVxE","message":"/appinfo/app.php is deprecated, use \\OCP\\AppFramework\\Bootstrap\\IBootstrap on the application class instead.","userAgent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:94.0) Gecko/20100101 Firefox/94.0","version":"22.2.0.2"}
So I left it running at debug level and it crashed again, again with an SQL exception.
Classifying photos of user 4479F707-255F-465B-9468-9D92669CDB71
Failed to classify images
An exception occurred while executing a query: SQLSTATE[HY000]: General error: 7 server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The debug log of Nextcloud does not show anything useful.
Do you still get errors in Postgres? That may be the reason the connection is terminated.
Next debugging step: bypass my DB load balancer and connect directly to the master DB.
Do you still get errors in Postgres? That may be the reason the connection is terminated.
I am not sure whether this is a Postgres error or a HAProxy one.
How does recognize communicate with Postgres? Does it do a kind of batch job - recognizing everything before inserting anything into the DB - or does it write synchronously as it goes?
There are two points of communication with the DB. During a classification run, each file is tagged separately, but there's a sort of batch job that adds the "unrecognized" tag to all files that were processed but yielded no tags.
Hmmm, it might be possible - since my users have tons of pictures, it might run into a database timeout? I will keep an eye on the direct Postgres connection to verify whether it has the same issue or not.
On HAProxy, the regular timeout is set to 30 minutes.
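For reference, the HAProxy settings that govern idle TCP connections to a database backend are the connect/server timeouts. An illustrative fragment only - not the actual config; backend name and address are made up:

```
# Illustrative HAProxy fragment; "timeout server" is what closes idle
# backend connections, and 30m matches the timeout mentioned above.
backend postgres
    mode tcp
    timeout connect 5s
    timeout server  30m
    server master 192.0.2.10:5432 check
```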
I spoke too soon; it appears I already refactored the latter function to tag files individually: https://github.com/marcelklehr/recognize/blob/master/lib/Service/TagManager.php#L62
Is this also part of the current version of your app from the Nextcloud store?
Ah. Good point :D let's see
Yep: https://github.com/marcelklehr/recognize/blob/v1.6.10/lib/Service/TagManager.php#L62
But why did I see so many "tag already exists" database errors?
If you still see those, that's definitely something we should investigate. Possibly that's a Nextcloud bug.
Yup, it still appears in the DB log:
2021-11-03 11:38:59.521 UTC [3509011] nextcloud@nextcloud STATEMENT: INSERT INTO "oc_systemtag_object_mapping" ("objectid", "objecttype", "systemtagid") VALUES($1, $2, $3)
2021-11-03 11:38:59.522 UTC [3509011] nextcloud@nextcloud ERROR: duplicate key value violates unique constraint "oc_systemtag_object_mapping_pkey"
2021-11-03 11:38:59.522 UTC [3509011] nextcloud@nextcloud DETAIL: Key (objecttype, objectid, systemtagid)=(files, 9710849, 1) already exists.
Could this come from my testing during spring/summer when I tried to have my GPU working with recognize?
Could this come from my testing during spring/summer when I tried to have my GPU working with recognize?
I doubt it.
Hmmm... not sure what is happening or why, then.
Additionally: since I switched to direct DB connections, it just keeps running and running - no timeout so far.
So maybe it does some asynchronous stuff in the background?
So, no timeout, but the "generic" issue again:
Classifying photos of user 4479F707-255F-465B-9468-9D92669CDB71
Failed to classify images
Classifier process error
Maybe you're running out of memory?
Nope - my VM has 24 GB of dedicated RAM. Yesterday, when recognize was able to run, my VM consumed around 14 to 15 GB of RAM - so it should have enough headroom.
I have also checked the PHP-FPM and Apache logs - nothing indicates that I am running out of memory.
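Note that PHP-FPM and Apache logs wouldn't show it if the kernel's OOM killer terminated the Node classifier process; that shows up in the kernel log instead. A hedged sketch to scan for it (helper name is ours):

```shell
# Scan kernel log text on stdin for OOM-killer activity.
# (Helper name is ours.)
find_oom_events() {
  grep -iE 'out of memory|oom_reaper|killed process' || echo 'no OOM events found'
}

# Usage (either source works on Ubuntu 20.04):
#   journalctl -k --since '1 hour ago' | find_oom_events
#   dmesg -T | find_oom_events
```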
Is there any way to reset recognize? Even manually, like dropping tables in the DB and removing files from the server?!
Maybe it would make sense to have a clean starting point.
In the settings there's a button that resets the tags.