time_limit and soft_time_limit feature are not working for celery task with SQS
Discussed in https://github.com/celery/celery/discussions/8381
Originally posted by Knight1997 July 19, 2023 I simply want to timeout the worker if the task is taking too long.
My setup: Running SQS locally using localStack and running 1 worker in solo pool. celery version=5.2.7
I am doing this:
upload_task.apply_async(queue=queue, args=("123",), time_limit=5, soft_time_limit=2)  # args must be a tuple; (123) is just the int 123
from time import sleep

from asgiref.sync import async_to_sync
from celery.exceptions import SoftTimeLimitExceeded

@celery_app.task(name="upload_task")
@async_to_sync
async def upload_task(id: str) -> None:
    try:
        sleep(10)
        logger.info("upload task finished")
    except SoftTimeLimitExceeded:
        logger.exception(f"upload task: {id} failed due to worker timeout")
Ideally, the worker should not print the string upload task finished, but it does, and the log says: Task upload_task[2f8b2aed-ad95-44e1-976e-6f078dd0d90] succeeded in 10.0109819220379s: None
I expected the task to be timed out with a timeout exception, but it was not.
I'm facing the same problem. Any updates?
As far as I can see, time limits are only supported in the prefork and gevent pools; they are not supported in solo mode. This actually makes sense: in solo mode there is no extra child process for the worker's main process to kill.
https://docs.celeryq.dev/en/stable/userguide/workers.html#time-limits
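If time limits are a hard requirement, one option (a sketch, assuming a typical Celery 5 deployment; myapp is a placeholder module name) is to run the worker with the prefork pool, where both limits are enforced via the worker's documented CLI flags:

```shell
# Run with the prefork pool so time limits are enforced:
# --soft-time-limit raises SoftTimeLimitExceeded inside the task,
# --time-limit hard-kills the child process after that many seconds.
celery -A myapp worker --pool=prefork --concurrency=4 \
    --soft-time-limit=2 --time-limit=5
```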
This is an issue for all brokers, not just SQS. It has previously come up but was dismissed as "not a bug", since the entire worker would need to be killed, not just the task.
Personally, I wouldn't mind the entire worker being killed as usually keeping it alive is handled by some other service (systemd, kubernetes, ECS, etc).
We need task time limits so that they play nicely with visibility timeouts when using acks_late for reliability, and we're forced to use the solo worker as that is the only thing that works for us due to the bug seen in #4113.
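For anyone stuck on the solo pool, one possible workaround is to enforce the soft limit inside the task yourself. This is not a Celery feature; it is a minimal sketch assuming a POSIX platform and that the task body runs in the worker's main thread (true for the solo pool), using a hypothetical run_with_timeout helper built on signal.alarm:

```python
import signal
from time import sleep

class SoftTimeLimitExceeded(Exception):
    """Stand-in for celery.exceptions.SoftTimeLimitExceeded."""

def _on_alarm(signum, frame):
    # SIGALRM handler: raising here interrupts whatever the
    # main thread is doing, including a blocking sleep().
    raise SoftTimeLimitExceeded()

def run_with_timeout(fn, seconds):
    # Install the handler and arm the alarm; signal handlers can
    # only be set from the process's main thread, which is exactly
    # where the solo pool runs task bodies.
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return fn()
    finally:
        signal.alarm(0)                         # disarm the pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore previous handler

def slow_upload():
    sleep(10)
    return "finished"

try:
    run_with_timeout(slow_upload, 2)
except SoftTimeLimitExceeded:
    print("timed out")  # reached after ~2 seconds instead of 10
```

This only gives a soft limit (a hard limit still needs a second process to do the killing), and signal.alarm granularity is whole seconds, but it lets a solo worker give up on a task before the SQS visibility timeout expires.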