`Sentry::Scope#set_transaction_name` leaks memory
Issue Description
The method name Sentry::Scope#set_transaction_name suggests that it is a simple setter, but it actually appends the specified name to an array. As far as I can tell, this array is never popped or truncated, so if #set_transaction_name is called repeatedly, e.g. in an event loop, the array grows without bound.
I know that Sentry.with_scope is often a better solution inside a loop, but it is not always feasible, e.g. when exceptions are rescued outside the loop.
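For context, Sentry.with_scope avoids the accumulation because it duplicates the current scope, yields the copy, and discards it when the block returns. The following is a self-contained sketch of that push/pop semantic using a stand-in hub class, not the real SDK:

```ruby
# Stand-in that mimics Sentry.with_scope's push/pop semantics.
# (MiniHub is hypothetical; the real SDK uses Sentry::Hub and Sentry::Scope.)
class MiniHub
  def initialize
    @stack = [{}]
  end

  def current_scope
    @stack.last
  end

  # Duplicate the current scope, yield the copy, then discard it,
  # so anything set inside the block does not leak out.
  def with_scope
    @stack.push(current_scope.dup)
    yield current_scope
  ensure
    @stack.pop
  end
end

hub = MiniHub.new
3.times do |i|
  hub.with_scope { |scope| scope[:transaction_name] = "loop-#{i}" }
end
puts hub.current_scope.key?(:transaction_name) # => false
```

This is why the issue only bites when the scope is long-lived, as with configure_scope in a loop.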
Reproduction Steps
loop do
  Sentry.configure_scope do |scope|
    scope.set_transaction_name('foo')
    p scope.transaction_names.size
  end
end
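The mechanics can also be reproduced without the SDK. Below is a minimal model of the pre-fix behavior, where each call appends to an array that nothing ever truncates; LeakyScope is a stand-in, not the real Sentry::Scope:

```ruby
# Minimal stand-in for the pre-fix Sentry::Scope behavior: each call
# appends to an array that is never popped or truncated.
class LeakyScope
  attr_reader :transaction_names

  def initialize
    @transaction_names = []
  end

  def set_transaction_name(name)
    @transaction_names << name
  end
end

scope = LeakyScope.new
1_000.times { scope.set_transaction_name('foo') }
puts scope.transaction_names.size # => 1000, and it grows without bound
```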
Expected Behavior
The transaction names should not accumulate.
Actual Behavior
An ever-increasing number is printed, showing that the Sentry::Scope#transaction_names array keeps growing. Eventually a NoMemoryError is raised.
Ruby Version
3.2.3
SDK Version
5.15.0
Integration and Its Version
No response
Sentry Config
No response
Thanks for the report; this whole transaction name handling is a bit of a mess at the moment. In general there's no reason for it to be an array, since we only care about the last one anyway.
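A sketch of the kind of change this implies, replacing the array with a single attribute that is overwritten on each call (the actual patch shipped in 5.18.0 may differ in detail):

```ruby
# Hypothetical post-fix shape: a single attribute instead of an array.
class FixedScope
  attr_reader :transaction_name

  # Overwrites instead of appending, so repeated calls in a loop
  # keep memory usage constant.
  def set_transaction_name(name)
    @transaction_name = name
  end
end

scope = FixedScope.new
10_000.times { |i| scope.set_transaction_name("txn-#{i}") }
puts scope.transaction_name # => "txn-9999"
```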
@sl0thentr0py This issue seems to impact different parts of a Rails system. For example, ActionCable would be affected via https://github.com/getsentry/sentry-ruby/blob/master/sentry-rails/lib/sentry/rails/action_cable.rb#L19
It also seems that the method now pushes to another array as well, which doubles the amount of memory persisted: https://github.com/getsentry/sentry-ruby/blob/master/sentry-ruby/lib/sentry/scope.rb#L235
What are the next steps?
I'm just gonna fix this, no need for it to be an array
@sl0thentr0py Cool
Is there a timeframe for cutting a new release with this fix included?
@henrikbjorn released https://github.com/getsentry/sentry-ruby/releases/tag/5.18.0
@sl0thentr0py Do you have any testing of this or other memory measurement systems in place?
It has been a long journey for us to stumble upon this in our application.
It also seems to have gradually gotten worse over time, with the addition of new tracing features, breadcrumbs, etc.
We don't really have a real-world Ruby/Rails app to test our SDK on; for Python/Django we have our own Sentry monolith, so we catch these things faster there.
To some degree, we rely on our community to report critical problems. In the very long run, we will try to set up non-trivial smoke test and benchmark pipelines, but it's hard to dedicate too many engineering resources to this: ultimately we're building libraries, and accounting for every possible production app out there is a non-trivial problem.
But thanks for the feedback, we'll try to be more stable.
I'm also curious because this transaction name stuff has been around forever, did you notice this after an update or have you always had this problem?
We noticed it after migrating from the Faye websocket framework to the default Rails ActionCable framework.
Since then our servers have slowly crawled up in memory and then been recycled by our ops monitoring.
Recently we saw the same in a new project with a very simple ActionCable setup as well. Disabling Sentry removed all of the memory issues (I have sent graphs to your support team).
I have a feeling we haven't seen the same on our servers running web traffic via Puma, as Puma has probably restarted its workers after x requests, and therefore it hasn't become a problem there.