[Performance] Historical data cache job timing out
From BugSnag: Delayed::WorkerTimeout HistoricalDataCacheJob@default
See https://app.bugsnag.com/ruby-for-good/human-essentials/errors/64e437ef88578c00083d0b99?filters[error.status]=open&filters[event.since]=30d
This came up before, and we worked around it by increasing the timeout. This time I think we should optimize the job directly: the timeout is already 1200 seconds (20 minutes), and the growth rate seems too high to keep raising it.
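For context, Delayed::Job's per-job ceiling is controlled by `Delayed::Worker.max_run_time`, usually set in an initializer. A sketch of what the current setting likely looks like (the exact file path in human-essentials is an assumption):

```ruby
# config/initializers/delayed_job_config.rb (hypothetical location)
# Raising this further only hides the underlying growth problem.
Delayed::Worker.max_run_time = 20.minutes
```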
Also noting that this is firing more often than expected -- I thought it was a once-a-day job, but it has timed out 10 times in the last day.
This issue is marked as stale due to no activity within 30 days. If no further activity is detected within 7 days, it will be unassigned.
An update for anyone late to the game on this one -- it was getting kicked off multiple times a day, but now it isn't. It is still timing out multiple times a day, but that's because a job is enqueued for each organization and some of those are timing out.
The folks who have taken a quick look say there are some obvious optimizations to be made, chiefly making the database calls do more of the work.
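The general shape of that optimization -- one grouped pass instead of one pass (or query) per organization -- can be sketched in plain Ruby. The data and field names here are hypothetical, not the app's actual schema:

```ruby
# Hypothetical per-organization donation records.
donations = [
  { org_id: 1, quantity: 5 },
  { org_id: 1, quantity: 3 },
  { org_id: 2, quantity: 7 },
]

# Slow shape: one pass over the data per organization
# (analogous to issuing a separate SQL query per org).
slow_totals = donations.map { |d| d[:org_id] }.uniq.to_h do |org|
  [org, donations.select { |d| d[:org_id] == org }.sum { |d| d[:quantity] }]
end

# Fast shape: a single grouped pass
# (analogous to one SQL query with GROUP BY).
fast_totals = donations.group_by { |d| d[:org_id] }
                       .transform_values { |ds| ds.sum { |d| d[:quantity] } }

fast_totals  # => {1=>8, 2=>7}
```

In ActiveRecord terms, that usually means replacing a Ruby loop over records with something like a `group(...).sum(...)` calculation so the database does the aggregation.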
This has been largely addressed -- we haven't had a timeout in 2 weeks. I don't know if there is still more work to be done here, @awwaiid, but if not, let's close it.
Agree! Closing.