Out of memory: Killed process 1680404 (opencloud) total-vm:17239740kB...
Describe the bug
I removed about 100 GB of data and emptied the trash bin using the web interface. After restarting the opencloud container, it started consuming all available RAM until it crashed, and it keeps repeating that cycle. This has been going on for two days now.
Steps to reproduce
- delete 100 GB of files
- restart container
- check RAM usage and syslog
Expected behavior
no crash :-)
Actual behavior
crash
Setup
OpenCloud 3.2.0, OpenCloud Web UI 3.2.0
docker-compose setup running in podman
**console output**
2025-07-24T01:56:42.596890+02:00 pi kernel: [65393.075584] Out of memory: Killed process 1680404 (opencloud) total-vm:17239740kB, anon-rss:11584020kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:24268kB oom_score_adj:0
2025-07-24T02:01:16.665086+02:00 pi kernel: [65667.196863] Out of memory: Killed process 1682955 (opencloud) total-vm:17025884kB, anon-rss:12785864kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:26440kB oom_score_adj:0
2025-07-24T02:06:49.696520+02:00 pi kernel: [66000.187638] Out of memory: Killed process 1686575 (opencloud) total-vm:17176784kB, anon-rss:12798440kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:26580kB oom_score_adj:0
2025-07-24T02:12:10.145965+02:00 pi kernel: [66320.669511] Out of memory: Killed process 1690485 (opencloud) total-vm:17882900kB, anon-rss:12787128kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:27624kB oom_score_adj:0
**docker logs**
An endless number of log lines are written, like:
2025/07/21 22:16:11 /var/lib/opencloud/storage/users/users/71497aeb-1e3b-499a-97eb-74110d009780/.oc-nodes/locks/,CREATE,c520f166-d3b2-4ab1-811c-e313ec9ec599.mlock
2025/07/21 22:16:11 /var/lib/opencloud/storage/users/users/71497aeb-1e3b-499a-97eb-74110d009780/.oc-nodes/locks/,"CLOSE_WRITE,CLOSE",c520f166-d3b2-4ab1-811c-e313ec9ec599.mlock
2025/07/21 22:16:11 /var/lib/opencloud/storage/users/users/71497aeb-1e3b-499a-97eb-74110d009780/.oc-nodes/locks/,DELETE,c520f166-d3b2-4ab1-811c-e313ec9ec599.mlock
$... grep -c DELETE
22217
$... grep -c CREATE
21985
$... grep -c CLOSE_WRITE
21966
**docker-compose.yml**
# Collabora ########################################
services:
collabora:
cap_add:
- MKNOD
command:
- coolconfig generate-proof-key && /start-collabora-online.sh
entrypoint:
- /bin/bash
- -c
environment:
DONT_GEN_SSL_CERT: yes
aliasgroup1: https://wopi.domain.tld:443
extra_params: |
--o:ssl.enable=false \
--o:ssl.ssl_verification=false \
--o:ssl.termination=true \
--o:welcome.enable=false \
--o:net.frame_ancestors=cloud.domain.tld
--o:welcome.enable=false
password: SECRET_PASSWORD
username: admin
healthcheck:
test:
- CMD
- curl
- -f
- http://localhost:9980/hosting/discovery
image: collabora/code:24.04.13.2.1
container_name: opencloud-wopi.app
restart: unless-stopped
ports:
- 9980:9980
network_mode: host
# labels:
# traefik.enable: "true"
# traefik.http.routers.collabora.entrypoints: https
# traefik.http.routers.collabora.rule: Host(`api.domain.tld`)
# traefik.http.routers.collabora.service: collabora
# traefik.http.routers.collabora.tls.certresolver: http
# traefik.http.services.collabora.loadbalancer.server.port: "9980"
# logging:
# driver: local
# Collaboration ##################################################
collaboration:
command:
- -c
- opencloud collaboration server
depends_on:
collabora:
condition: service_healthy
opencloud:
condition: service_started
entrypoint:
- /bin/sh
environment:
COLLABORATION_APP_ADDR: https://api.domain.tld
COLLABORATION_APP_ICON: https://api.domain.tld/favicon.ico
COLLABORATION_APP_INSECURE: "true"
COLLABORATION_APP_NAME: CollaboraOnline
COLLABORATION_APP_PRODUCT: Collabora
COLLABORATION_CS3API_DATAGATEWAY_INSECURE: "true"
COLLABORATION_GRPC_ADDR: 0.0.0.0:9301
COLLABORATION_HTTP_ADDR: 0.0.0.0:9300
COLLABORATION_LOG_LEVEL: error
COLLABORATION_WOPI_SRC: https://wopi.domain.tld
MICRO_REGISTRY: nats-js-kv
MICRO_REGISTRY_ADDRESS: 127.0.0.1:9233
OC_URL: https://cloud.domain.tld
image: opencloudeu/opencloud-rolling:latest
container_name: opencloud-api.app
ports:
- 9301:9301
- 9300:9300
network_mode: host
labels:
traefik.enable: "true"
traefik.http.routers.collaboration.entrypoints: https
traefik.http.routers.collaboration.rule: Host(`wopi.domain.tld`)
traefik.http.routers.collaboration.service: collaboration
traefik.http.routers.collaboration.tls.certresolver: http
traefik.http.services.collaboration.loadbalancer.server.port: "9300"
# logging:
# driver: local
restart: always
volumes:
- /docker/opencloud/config:/etc/opencloud:rw
# draw.io ##############################################
drawio-init:
command:
- -c
- cp -R /usr/share/nginx/html/draw-io/ /apps
entrypoint:
- /bin/sh
image: opencloudeu/web-extensions:draw-io-1.0.0
container_name: opencloud-drawio.app
user: root
volumes:
- /docker/opencloud/apps:/apps:rw
# externalsites.init #####################################
externalsites-init:
command:
- -c
- cp -R /usr/share/nginx/html/external-sites/ /apps
entrypoint:
- /bin/sh
image: opencloudeu/web-extensions:external-sites-1.0.0
container_name: opencloud-external_sites.app
user: root
volumes:
- /docker/opencloud/apps:/apps:rw
# jsonviewer.init ##############################################
jsonviewer-init:
command:
- -c
- cp -R /usr/share/nginx/html/json-viewer/ /apps
entrypoint:
- /bin/sh
image: opencloudeu/web-extensions:json-viewer-1.0.0
container_name: opencloud-json_viewer.app
user: root
volumes:
- /docker/opencloud/apps:/apps:rw
# opencloud ###############################################
opencloud:
command:
- -c
- opencloud init || true; opencloud server
ports:
- 9200:9200
- 9233:9233 #changed
network_mode: host
depends_on:
drawio-init:
condition: service_completed_successfully
externalsites-init:
condition: service_completed_successfully
jsonviewer-init:
condition: service_completed_successfully
progressbars-init:
condition: service_completed_successfully
unzip-init:
condition: service_completed_successfully
entrypoint:
- /bin/sh
environment:
COLLABORA_DOMAIN: api.domain.tld
COMPANION_DOMAIN: import.domain.tld
# FRONTEND_APP_HANDLER_SECURE_VIEW_APP_ADDR: eu.opencloud.api.collaboration.CollaboraOnline
GATEWAY_GRPC_ADDR: 0.0.0.0:9142
# GRAPH_AVAILABLE_ROLES: b1e2218d-eef8-4d4c-b82d-0f1a1b48f3b5,a8d5fe5e-96e3-418d-825b-534dbdf22b99,fb6c3e19-e378-47e5-b277-9732f9de6e21,58c63c02-1d89-4572-916a-870abc5a1b7d,2d00ce52-1fc2-4dbc-8b95-a73b73395f5a,1c996275-f1c9-4e71-abdf-a42f6495e960,312c0871-5ef7-4b3a-85b6-0e4074c64049,aa97fe03-7980-45ac-9e50-b325749fd7e6
IDM_ADMIN_PASSWORD: SECRET_PASSWORD
IDM_CREATE_DEMO_USERS: "false"
MICRO_REGISTRY_ADDRESS: 127.0.0.1:9233 #changed
NATS_NATS_HOST: 0.0.0.0
NATS_NATS_PORT: "9233"
NOTIFICATIONS_SMTP_AUTHENTICATION: auto
NOTIFICATIONS_SMTP_ENCRYPTION: starttls
NOTIFICATIONS_SMTP_HOST: smtp.domain.tls
NOTIFICATIONS_SMTP_INSECURE: "false"
NOTIFICATIONS_SMTP_PASSWORD: SECRET_PASSWORD
NOTIFICATIONS_SMTP_PORT: "587"
NOTIFICATIONS_SMTP_SENDER: OpenCloud notifications <[email protected]>
NOTIFICATIONS_SMTP_USERNAME: [email protected]
OC_ADD_RUN_SERVICES: notifications
OC_INSECURE: "false" #changed
OC_LOG_COLOR: "true"
OC_LOG_LEVEL: error
OC_LOG_PRETTY: "true"
STORAGE_USERS_POSIX_WATCH_FS: "true"
OC_PASSWORD_POLICY_BANNED_PASSWORDS_LIST: banned-password-list.txt
OC_URL: https://cloud.domain.tld
ONLYOFFICE_DOMAIN: onlyoffice.domain.tld
PROXY_CSP_CONFIG_FILE_LOCATION: /etc/opencloud/csp.yaml
PROXY_ENABLE_BASIC_AUTH: "false"
PROXY_TLS: "false"
SEARCH_EXTRACTOR_TYPE: tika
SEARCH_EXTRACTOR_TIKA_TIKA_URL: https://search.domain.tld
FRONTEND_FULL_TEXT_SEARCH_ENABLED: "true"
# Keycloak START
KEYCLOAK_DOMAIN: sso.domain.tld
PROXY_AUTOPROVISION_ACCOUNTS: "true"
PROXY_ROLE_ASSIGNMENT_DRIVER: oidc
OC_OIDC_ISSUER: https://sso.domain.tld/realms/openCloud
PROXY_OIDC_REWRITE_WELLKNOWN: "true"
PROXY_USER_OIDC_CLAIM: preferred_username
GRAPH_ASSIGN_DEFAULT_USER_ROLE: "false"
GRAPH_USERNAME_MATCH: none
OC_EXCLUDE_RUN_SERVICES: idp
WEB_OPTION_ACCOUNT_EDIT_LINK_HREF: https://sso.domain.tld/realms/openCloud/account
# Keycloak END
# Link password config START
# Disable the password policy.
# OC_PASSWORD_POLICY_DISABLED
# Define the minimum password length.
# OC_PASSWORD_POLICY_MIN_CHARACTERS
# Define the minimum number of uppercase letters.
# OC_PASSWORD_POLICY_MIN_LOWERCASE_CHARACTERS
# Define the minimum number of lowercase letters.
# OC_PASSWORD_POLICY_MIN_UPPERCASE_CHARACTERS
# Define the minimum number of digits.
# OC_PASSWORD_POLICY_MIN_DIGITS
# Define the minimum number of special characters.
# OC_PASSWORD_POLICY_MIN_SPECIAL_CHARACTERS
# Path to the "banned passwords list" file.
# OC_PASSWORD_POLICY_BANNED_PASSWORDS_LIST
# Disable automatic password protection for public links
OC_SHARING_PUBLIC_SHARE_MUST_HAVE_PASSWORD: "false"
# Link password config END
image: opencloudeu/opencloud-rolling:latest
container_name: opencloud.app
# logging:
# driver: local
restart: unless-stopped
volumes:
- /docker/opencloud/config:/etc/opencloud:rw
- /docker/opencloud/app-registry.yaml:/etc/opencloud/app-registry.yaml:rw
- /docker/opencloud/apps.yaml:/etc/opencloud/apps.yaml:rw
- /docker/opencloud/config/banned-password-list.txt:/etc/opencloud/banned-password-list.txt:rw
- /docker/opencloud/csp.yaml:/etc/opencloud/csp.yaml:rw
- /docker/opencloud/data:/var/lib/opencloud:rw
- /docker/opencloud/apps:/var/lib/opencloud/web/assets/apps:rw
- /docker/opencloud/proxy.yaml:/etc/opencloud/proxy.yaml
# progress.init ####################################
progressbars-init:
command:
- -c
- cp -R /usr/share/nginx/html/progress-bars/ /apps
entrypoint:
- /bin/sh
image: opencloudeu/web-extensions:progress-bars-1.0.0
container_name: opencloud-progress_bars.app
user: root
volumes:
- /docker/opencloud/apps:/apps:rw
# traefik:
# command:
# - --log.level=ERROR
# - [email protected]
# - --certificatesResolvers.http.acme.storage=/certs/acme.json
# - --certificatesResolvers.http.acme.httpChallenge.entryPoint=http
# - --certificatesResolvers.http.acme.caserver=https://acme-v02.api.letsencrypt.org/directory
# - --api.dashboard=true
# - --entryPoints.http.address=:80
# - --entryPoints.http.http.redirections.entryPoint.to=https
# - --entryPoints.http.http.redirections.entryPoint.scheme=https
# - --entryPoints.https.address=:443
# - --entryPoints.https.transport.respondingTimeouts.readTimeout=12h
# - --entryPoints.https.transport.respondingTimeouts.writeTimeout=12h
# - --entryPoints.https.transport.respondingTimeouts.idleTimeout=3m
# - --providers.docker.endpoint=unix:///var/run/docker.sock
# - --providers.docker.exposedByDefault=false
# - --accessLog=true
# - --accessLog.format=json
# - --accessLog.fields.headers.names.X-Request-Id=keep
# image: traefik:v3.3.1
# labels:
# traefik.enable: "false"
# traefik.http.middlewares.traefik-auth.basicauth.users: #admin:$$$$apr1$$$$4vqie50r$$$$YQAmQdtmz5n9rEALhxJ4l.
# traefik.http.routers.traefik.entrypoints: https
# traefik.http.routers.traefik.middlewares: traefik-auth
# traefik.http.routers.traefik.rule: Host(`traefik.opencloud.test`)
# traefik.http.routers.traefik.service: api@internal
# traefik.http.routers.traefik.tls.certresolver: http
# logging:
# driver: local
# networks:
# opencloud-net:
# aliases:
# - api.domain.tld
# - cloud.domain.tld
# - wopi.domain.tld
# ports:
# - published: 80
# target: 80
# - published: 443
# target: 443
# restart: always
# volumes:
# - certs:/certs:rw
# - /var/run/docker.sock:/var/run/docker.sock:ro
# unzip #################################
unzip-init:
command:
- -c
- cp -R /usr/share/nginx/html/unzip/ /apps
entrypoint:
- /bin/sh
image: opencloudeu/web-extensions:unzip-1.0.2
container_name: opencloud-unzip.app
user: root
volumes:
- /docker/opencloud/apps:/apps:rw
# tika #################################
tika-init:
image: tika:latest-full
container_name: opencloud-tika.app
# release notes: https://tika.apache.org
restart: unless-stopped
ports:
- 9998:9998
companion:
image: transloadit/companion:5.5.2
container_name: opencloud-companion.app
restart: unless-stopped
ports:
- 3020:3020
environment:
NODE_TLS_REJECT_UNAUTHORIZED: 0
COMPANION_CLIENT_ORIGINS: "true"
COMPANION_ALLOW_LOCAL_URLS: "true"
COMPANION_DATADIR: /tmp/companion/
COMPANION_DOMAIN: 127.0.0.1:9200
COMPANION_PROTOCOL: http
COMPANION_PATH: /companion
UPLOAD_URLS: import.domain.tld
COMPANION_ONEDRIVE_KEY: ""
COMPANION_ONEDRIVE_SECRET: ""
COMPANION_TUS_DEFERRED_UPLOAD_LENGTH: "false"
volumes:
- /docker/opencloud/companion:/tmp/companion/
# radicale #################################
radicale:
image: opencloudeu/radicale:latest
container_name: opencloud-radicale.app
restart: unless-stopped
ports:
- 5232:5232
volumes:
- /docker/opencloud/radicale/config:/etc/radicale/config
- /docker/opencloud/radicale/data/:/var/lib/radicale
networks: {}
Additional context
Every time I start the main opencloud container, it continuously runs into an out-of-memory (OOM) condition within 5 minutes.
I've used a script to visualize the RAM usage here. It automatically restarts the container if memory usage gets too high.
This is how it looks right now:
Monitoring RAM usage. Threshold: 80%. Output every 10s. (CTRL+C to stop)
[β] Started container opencloud.app
[β] RAM usage: 4.32/15 GB (28% / +28% / 0 sec.)
[β] RAM usage: 5.98/15 GB (39% / +9% / 10 sec.)
[β] RAM usage: 7.7/15 GB (46% / +2% / 20 sec.)
[β] RAM usage: 6.82/15 GB (44% / -2% / 30 sec.)
[β] RAM usage: 7.3/15 GB (46% / +2% / 40 sec.)
[β] RAM usage: 6.94/15 GB (45% / 0% / 50 sec.)
[β] RAM usage: 6.94/15 GB (45% / 0% / 60 sec.)
[β] RAM usage: 6.94/15 GB (45% / 0% / 70 sec.)
[β] RAM usage: 8.1/15 GB (52% / +4% / 80 sec.)
[β] RAM usage: 9.12/15 GB (59% / +2% / 90 sec.)
[β] RAM usage: 9.2/15 GB (59% / 0% / 100 sec.)
[β] RAM usage: 8.99/15 GB (58% / -1% / 110 sec.)
[β] RAM usage: 9.70/15 GB (63% / 0% / 120 sec.)
[β] RAM usage: 9.73/15 GB (63% / 0% / 130 sec.)
[β] RAM usage: 9.37/15 GB (61% / -5% / 140 sec.)
[β] RAM usage: 9.16/15 GB (60% / 0% / 150 sec.)
[β] RAM usage: 9.7/15 GB (59% / -1% / 160 sec.)
[β] RAM usage: 9.8/15 GB (59% / 0% / 170 sec.)
[β] RAM usage: 9.7/15 GB (59% / 0% / 180 sec.)
[β] RAM usage: 9.19/15 GB (60% / +1% / 190 sec.)
[β] RAM usage: 9.14/15 GB (59% / -1% / 200 sec.)
[β] RAM usage: 9.20/15 GB (60% / 0% / 210 sec.)
[β] RAM usage: 9.20/15 GB (60% / 0% / 220 sec.)
[β] RAM usage: 9.41/15 GB (61% / 0% / 230 sec.)
[β] RAM usage: 9.60/15 GB (62% / 0% / 240 sec.)
[β] RAM usage: 11.12/15 GB (72% / +2% / 251 sec.)
> [β] RAM usage: 12.42/15 GB (81% / +8% / 261 sec.)
> [β] RAM usage 81% exceeds threshold 80% β restarting container...
> [β] Container stopped due to high memory usage β restarting loop...
[β] Started container opencloud.app
[β] RAM usage: 4.29/15 GB (28% / +28% / 0 sec.)
[β] RAM usage: 4.68/15 GB (30% / 0% / 10 sec.)
[β] RAM usage: 4.67/15 GB (30% / 0% / 20 sec.)
[β] RAM usage: 4.68/15 GB (30% / 0% / 30 sec.)
[β] RAM usage: 4.79/15 GB (31% / 0% / 40 sec.)
[β] RAM usage: 6.73/15 GB (44% / +8% / 50 sec.)
[β] RAM usage: 10.27/15 GB (67% / +19% / 60 sec.)
> [β] RAM usage: 12.68/15 GB (83% / +7% / 70 sec.)
> [β] RAM usage 83% exceeds threshold 80% β restarting container...
> [β] Container stopped due to high memory usage β restarting loop...
[β] Started container opencloud.app
[β] RAM usage: 4.30/15 GB (28% / +28% / 0 sec.)
[β] RAM usage: 4.67/15 GB (30% / 0% / 10 sec.)
[β] RAM usage: 4.67/15 GB (30% / 0% / 20 sec.)
[β] RAM usage: 4.80/15 GB (31% / +1% / 30 sec.)
[β] RAM usage: 6.84/15 GB (44% / +1% / 40 sec.)
[β] RAM usage: 9.2/15 GB (59% / +7% / 50 sec.)
[β] RAM usage: 7.44/15 GB (48% / -12% / 60 sec.)
[β] RAM usage: 6.92/15 GB (45% / 0% / 70 sec.)
[β] RAM usage: 6.81/15 GB (44% / -1% / 80 sec.)
[β] RAM usage: 6.87/15 GB (45% / 0% / 90 sec.)
[β] RAM usage: 7.60/15 GB (49% / +3% / 101 sec.)
[β] RAM usage: 8.63/15 GB (56% / +2% / 111 sec.)
[β] RAM usage: 8.90/15 GB (58% / -2% / 121 sec.)
[β] RAM usage: 8.75/15 GB (57% / -1% / 131 sec.)
[β] RAM usage: 8.74/15 GB (57% / 0% / 141 sec.)
[β] RAM usage: 8.91/15 GB (58% / -1% / 151 sec.)
[β] RAM usage: 9.4/15 GB (59% / 0% / 161 sec.)
[β] RAM usage: 9.98/15 GB (65% / +1% / 171 sec.)
[β] RAM usage: 11.64/15 GB (76% / +2% / 181 sec.)
> [β] RAM usage 83% exceeds threshold 80% β restarting container...
> [β] Container stopped due to high memory usage β restarting loop...
[β] Started container opencloud.app
I'm currently unsure how to properly recover OpenCloud other than limiting its RAM usage with
mem_limit: 6g
memswap_limit: 6g
But this leaves me with mixed feelings.
@cscholz can you try to spin up opencloud without the search service (OC_EXCLUDE_RUN_SERVICES=search) and check if you still get that level of memory usage? The current suspicion is that the bleve search index is the culprit here.
@cscholz I would also be interested to see how the system behaves when you set a GOMEMLIMIT, e.g. GOMEMLIMIT=6GiB or so.
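For reference, a minimal sketch of how both suggestions could look in the `opencloud` service of the compose file above (the values are examples; since `idp` is already excluded there, the exclude list would presumably need to become a comma-separated `idp,search`):

```yaml
services:
  opencloud:
    environment:
      # exclude the search service in addition to the already excluded idp
      OC_EXCLUDE_RUN_SERVICES: idp,search
      # soft memory target for the Go runtime, as suggested above
      GOMEMLIMIT: 6GiB
```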
OC_EXCLUDE_RUN_SERVICES=search did not solve the problem.
GOMEMLIMIT=6GiB didn't either.
I also tried disabling
#SEARCH_EXTRACTOR_TYPE: tika
#SEARCH_EXTRACTOR_TIKA_TIKA_URL: https://search.domaint.dl
#FRONTEND_FULL_TEXT_SEARCH_ENABLED: "true"
but the container is still crashing.
I think I have the same issue. No more crashes after increasing RAM from 4 GB to 8 GB, but lots of errors in the logs (see attached).
I am running opencloud 3.5.0 in a virtual machine without docker.
Here is what I did before capturing the logs:
- deleted ~400 files from Windows and synced with the desktop client
- crash
- added 4 GB of RAM
- sync: some of the deleted files were downloaded again from the server
- deleted ~300 files from the client without crashing the server
- forced a sync: some of the deleted files were downloaded again from the server
- stopped the desktop client
- stopped the opencloud server and renamed the log file
Here is what I did to produce the attached log:
- start opencloud server
- wait
- start the client
- some of the deleted files were downloaded from the server
- deleted ~95 files from the client without crashing the server
- sync
- some of the deleted files were downloaded from the server
- capture the logs
@dragonchaser @flimmy Is that related to your findings?
AFAIK the Nats leak was only present in the Nightly.
AFAIK not even there, just in my faulty PR
As the issue reporter I can confirm that the issue has not occurred again in recent versions. I assume it is fixed. From my side it can be closed.
I am still experiencing this problem in version 3.7.0. When idle, RAM usage is around 330-500 MB, but every few seconds it shoots up to 7 GB for 3 seconds. Sometimes the container crashes immediately, sometimes it doesn't. CPU usage, which is usually around 1-2%, also shoots up to 200%. However, this only lasts a few seconds.
Update: Disabling the search in OC_EXCLUDE_RUN_SERVICES "solved" the situation. Does this help to narrow down the error?
@fschade @dragonchaser What can we do about the search service memory consumption?
Crashes are bad. Therefore p2
The question is whether this is still valid after @aduffeck's fixes to the search service. Can somebody reproduce that with 3.7.0?
See my comment @dragonchaser (https://github.com/opencloud-eu/opencloud/issues/1269#issuecomment-3490689233)
@dennisoderwald sorry, I overlooked that, will take a look
I have tried running it with 24 GiB of RAM and I can not reproduce it on that machine. I am assuming that there is some hard limit that is hit with your data in the cloud. Closing this here.
If you want to do further experiments you could try to limit the amount of memory that is assigned to tika, e.g. by passing the argument -JXmx4g to the tika process, and see if the issue persists.
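For reference, one way that could look for the `tika-init` service defined in the compose file above, assuming the image's entrypoint forwards extra arguments to tika-server (please verify against your tika image's documentation):

```yaml
services:
  tika-init:
    image: tika:latest-full
    # example: cap the forked tika parser JVM at 4 GiB, as suggested above
    command:
      - -JXmx4g
```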
This happened to me multiple times when removing 100 files on my VM (no docker). I had 4 GB, which was not enough even though the VM was fully dedicated to OpenCloud with no desktop (headless Alpine).
While going to 8 GB stopped the crash, I don't believe this is satisfying. When not removing files but only downloading or uploading thousands of them, it consumes very little memory.
I am testing OpenCloud and comparing it to ownCloud oCIS with an exact copy of the file tree; oCIS has other issues, but not this one (deleting is really slow there).
Please run some further tests with removing files.
Did you delete the files or did you just move them to the trash-bin?
Deleted them using the Windows desktop client.
In the fix that was mentioned by @dragonchaser we introduced batching to reduce the overall number of requests made (before the fix, each change was one request to the search backend)!
The downside is that we have to keep track of each affected resource in memory! In combination with tika, the total amount of data kept in memory can grow pretty fast (it can contain the whole extracted content of a resource)!
In your case, I assume you're deleting the resources and not emptying the bin (deleting a resource just updates the storage location and marks the file as deleted).
this means:
- searching the resource (and its child resources in case of a container)
- updating the resource location
- updating the resource deletion state
- putting the resource back in the index (and its child resources in case of a container)
The state is kept in memory until the batch limit is reached; right after that, we send everything at once to the search backend. The default batch size is 50, which means roughly 50 times the content size (plus some minor fields like id, name, path, ...).
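To make that concrete with purely illustrative numbers: if the extracted content of an affected resource averages around 100 MB, a full batch of 50 could hold roughly 5 GB in memory before it is flushed, which is in the same order of magnitude as the spikes reported above.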
can you do me a favor and try to decrease the batch size down to a smaller number?
SEARCH_BATCH_SIZE=N
But please keep in mind: decreasing the batch size increases the number of requests made!
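A sketch of how that could look in the compose setup from the original report (the value 10 is only an example to start with):

```yaml
services:
  opencloud:
    environment:
      # smaller batches: less content held in memory at once, but more requests
      SEARCH_BATCH_SIZE: "10"
```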
In certain scenarios the search service can be pretty resource hungry (tika, bleve, many updates, ...) depending on the content that is indexed. Please try to find a sweet spot, even if it means increasing the host memory.
Please let us know if that solves your problem.