[Crashlytics] SIGABRT
Device Brand: Google, Model: Pixel 6, Orientation: Unknown, RAM free: 203.5 MB, Disk free: 25 GB, Operating System Version: Android 14, Rooted: No, Crash Date: Jul 9, 2024, 9:23:31 AM, App version: 9.0.5-4 (1009000504)
SIGABRT libc.so
Crashed: Thread: SIGABRT 0x0000000000000000
#00 pc 0x5d8e4 libc.so (BuildId: 1d36f8ae6e0af6158793abea7d4f4f2b)
#01 pc 0x5d8b4 libc.so (BuildId: 1d36f8ae6e0af6158793abea7d4f4f2b)
#02 pc 0x76fba0 libart.so (BuildId: 5b1e3dce5abfbdc410d71d256d308227)
#03 pc 0x357d0 libbase.so (BuildId: 6f67f69ff36b970d0b831cfdab3b578d)
#04 pc 0x7014 liblog.so (BuildId: a7f00b6aec4360038e6e4af7a13c65b7)
#05 pc 0x595ae8 libhwui.so (BuildId: f4aa6f43716882cf3fe2d4330aa1a675)
#06 pc 0x595518 libhwui.so (BuildId: f4aa6f43716882cf3fe2d4330aa1a675)
#07 pc 0x218e38 libhwui.so (BuildId: f4aa6f43716882cf3fe2d4330aa1a675)
#08 pc 0x4db11c libhwui.so (BuildId: f4aa6f43716882cf3fe2d4330aa1a675)
#09 pc 0x32ceac libhwui.so (BuildId: f4aa6f43716882cf3fe2d4330aa1a675)
#10 pc 0x4c3140 libhwui.so (BuildId: f4aa6f43716882cf3fe2d4330aa1a675)
#11 pc 0x115d4 libutils.so (BuildId: c07f08c7e5a964a8f8c6bc5c820fb795)
#12 pc 0x6efbc libc.so (BuildId: 1d36f8ae6e0af6158793abea7d4f4f2b)
#13 pc 0x60d60 libc.so (BuildId: 1d36f8ae6e0af6158793abea7d4f4f2b)
Triggered auto assignment to @kevinksullivan (Bug), see https://stackoverflow.com/c/expensify/questions/14418 for more details. Please add this bug to a GH project, as outlined in the SO.
@adhorodyski
Hey hey, Adam from Callstack here - I'd like to work on this!
@kevinksullivan Uh oh! This issue is overdue by 2 days. Don't forget to update your issues!
Coming with an update on this one:
Overview
I took some time to discuss this with other engineers who are seeing this kind of crash (or exactly this one) in other RN apps, and unfortunately it's pretty hard to narrow down without full historical analytics that could point us to the exact version in which the crashes started.
In our case we only started collecting crash data partway through, which just tells us the problem was already there before we integrated Firebase Crashlytics.
You can see there is no 'peak' in the data, which would normally be super helpful for narrowing the problem down to e.g. a version bump on one of the dependencies.
Takeaways
- It's a very low-level, memory-leak-driven crash that doesn't point to any application-level action that might have caused it. Devices experiencing it all have very little free RAM at the time of the crash, so the app can no longer allocate memory and the process ends up being aborted (see the sketch below this list).
- It's very likely a problem in one of our project's dependencies, on the native side (e.g. trying to access a memory address that has already been freed).
- IMO it's not a one-time event, but something that builds up over time, like animations.
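To illustrate the allocation-failure mechanism, here is a minimal, hypothetical C++ sketch (not Expensify or libhwui code, and not necessarily the exact path in the trace above): when a native allocation fails and nothing handles it, the process ends in abort(), which is what SIGABRT reports.

```cpp
// Minimal sketch, not Expensify/libhwui code: one way a native allocation
// failure surfaces as SIGABRT. If operator new cannot allocate, it throws
// std::bad_alloc; left uncaught, std::terminate() runs and calls abort(),
// and the process receives SIGABRT.
#include <cstddef>
#include <vector>

int main() {
    std::vector<char*> leak;
    for (;;) {
        // Simulate a slow leak: allocate 64 MB chunks and never free them.
        char* block = new char[64 * 1024 * 1024];
        // Touch each page so the memory is actually committed.
        for (std::size_t i = 0; i < 64 * 1024 * 1024; i += 4096) {
            block[i] = 1;
        }
        leak.push_back(block);
    }
    // Never reached: once `new` fails, the uncaught std::bad_alloc ends the
    // process via std::terminate() -> abort() -> SIGABRT. (Depending on the
    // OS's overcommit settings, the kernel's OOM killer may kill it first.)
}
```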
@muttmuure it's hard for me to come up with clear next steps for this; what I'm thinking of is:
- onboarding more people to gather more data points
- ideally, we'd know what workflow/account combination produces a repro, which would give us a clearer path forward.
With reproduction steps in place we can try disabling different parts of the app in an ad hoc build to narrow it down to e.g. a given library that produces this bug (a rough sketch of that idea is below). Also, having more data points would let us confirm in the mid term that it has e.g. vanished thanks to a version bump on some packages.
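For the ad hoc bisection idea, here's a minimal, hypothetical C++ sketch (all names invented; this is not our actual build config) of gating a suspect native module behind a compile-time flag so each ad hoc build can include or exclude it:

```cpp
// Hypothetical sketch of the ad hoc bisection idea: gate a suspect native
// subsystem behind a compile-time flag so each ad hoc build can rule one
// library in or out. All names below are invented for illustration.
#include <cstdio>

static void initSuspectAnimations() {
    // Stand-in for initialising the library currently under suspicion.
    std::puts("suspect animations module enabled");
}

int main() {
#ifdef ENABLE_SUSPECT_ANIMATIONS  // e.g. compiled with -DENABLE_SUSPECT_ANIMATIONS
    initSuspectAnimations();
#else
    std::puts("suspect animations module disabled in this ad hoc build");
#endif
    return 0;
}
```

If a build with a given module flagged off stops producing the crash over enough sessions, that points at the disabled library; repeating this per suspect narrows the search.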
@kevinksullivan 6 days overdue. This is scarier than being forced to listen to Vogon poetry!
@kevinksullivan Now this issue is 8 days overdue. Are you sure this should be a Daily? Feel free to change it!
@kevinksullivan 12 days overdue. Walking. Toward. The. Light...
@kevinksullivan this issue was created 2 weeks ago. Are we close to a solution? Let's make sure we're treating this as a top priority. Don't hesitate to create a thread in #expensify-open-source to align faster in real time. Thanks!
This issue has not been updated in over 14 days. @kevinksullivan eroding to Weekly issue.
@adhorodyski should I close this based on the investigation in the linked issue above?
I'd say yes!