missing field `self_times_ns` at line 1 column 563
Self-Hosted Version
25.10.0
CPU Architecture
x86_64
Docker Version
Docker version 28.5.1, build e180ab8
Docker Compose Version
Docker Compose version v2.40.1
Machine Specification
- [x] My system meets the minimum system requirements of Sentry
Steps to Reproduce
Upgrade from 25.8.0 to 25.10.0.
Expected Result
Actual Result
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-10-22T18:19:10.636934Z","level":"ERROR","fields":{"message":"Validation error","error":"Invalid message"},"target":"rust_snuba::strategies::processor","span":{"name":"_validate_schema"},"spans":[{"name":"_validate_schema"}]}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-10-22T18:19:10.637050Z","level":"ERROR","fields":{"message":"Failed processing message","error":"missing field `self_times_ns` at line 1 column 563"},"target":"rust_snuba::strategies::processor"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-10-22T18:19:11.636930Z","level":"ERROR","fields":{"message":"Invalid message, but no DLQ policy configured","e":"InvalidMessage { partition: Partition { topic: Topic(\"profiles-call-tree\"), index: 0 }, offset: 1543574 }"},"target":"sentry_arroyo::processing"}
Event ID
No response
Same here, I'm seeing this log event about once per second:
{"timestamp":"2025-10-22T20:31:37.037208Z","level":"ERROR","fields":{"message":"Failed processing message","error":"missing field `self_times_ns` at line 1 column 255"},"target":"rust_snuba::strategies::processor"}
{"timestamp":"2025-10-22T20:31:35.803723Z","level":"ERROR","fields":{"message":"Failed processing message","error":"missing field `self_times_ns` at line 1 column 273"},"target":"rust_snuba::strategies::processor"}
{"timestamp":"2025-10-22T20:31:34.700309Z","level":"ERROR","fields":{"message":"Failed processing message","error":"missing field `self_times_ns` at line 1 column 292"},"target":"rust_snuba::strategies::processor"}
same
I'm asking about this internally; we probably have a misconfiguration somewhere.
Okay, I heard back on the internal Slack.
TL;DR: Please enable these feature flags in your sentry/sentry.conf.py:

- projects:continuous-profiling-vroomrs-processing
- projects:transaction-profiling-vroomrs-processing
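For example, in sentry/sentry.conf.py this could look like the following (a sketch assuming the standard SENTRY_FEATURES dict exposed by Sentry's configuration; adapt it to however your config file sets flags):

# Sketch only: SENTRY_FEATURES is the feature-flag dict available in
# sentry.conf.py; adjust to your own config layout if it differs.
SENTRY_FEATURES["projects:continuous-profiling-vroomrs-processing"] = True
SENTRY_FEATURES["projects:transaction-profiling-vroomrs-processing"] = True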
Then re-run ./install.sh
Long answer copied from Slack:
On Sept. 18th, this PR removed the self_times array from the functions we send over (in vroom), which is why we're getting that error.
As a quick fix, you could enable the projects:continuous-profiling-vroomrs-processing and projects:transaction-profiling-vroomrs-processing feature flags.
This will enable processing of the profiles directly in Sentry instead of passing them through vroom, and it's what's already running in prod.
In future releases you won't even need to enable them, since at this point I think we can simply remove those feature flags.
Hi @aldy505, I tried a fresh install of both 25.10 and master with the feature flags enabled, and I still don't see profiles. I get a 404 on any profile ID.
Logs from the components look good:
docker compose logs ingest-profiles
ingest-profiles-1 | Updating certificates in /etc/ssl/certs...
ingest-profiles-1 | 0 added, 0 removed; done.
ingest-profiles-1 | Running hooks in /etc/ca-certificates/update.d...
ingest-profiles-1 | done.
ingest-profiles-1 | 19:57:30 [INFO] arroyo.processing.processor: New partitions assigned: {Partition(topic=Topic(name='profiles'), index=0): 1052}
ingest-profiles-1 | 19:57:30 [INFO] arroyo.processing.processor: Member id: 'rdkafka-f0719b63-1cb6-4d77-aaec-9f6f3a7ed965'
ingest-profiles-1 | 19:57:30 [INFO] arroyo.backends.kafka.consumer: Paused partitions after assignment: set()
docker compose logs snuba-profiling-profiles-consumer
snuba-profiling-profiles-consumer-1 | 2025-11-17 19:54:00,949 Initializing Snuba...
snuba-profiling-profiles-consumer-1 | 2025-11-17 19:56:28,183 Snuba initialization took 147.2424064790248s
snuba-profiling-profiles-consumer-1 | 2025-11-17 19:56:48,716 Initializing Snuba...
snuba-profiling-profiles-consumer-1 | 2025-11-17 19:59:12,351 Snuba initialization took 143.6812797599705s
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:12.539112Z","level":"INFO","fields":{"message":"Storage: profiles, ClickHouse Table Name: profiles_local, Message Processor: MessageProcessorConfig { python_class_name: \"ProfilesMessageProcessor\", python_module: \"snuba.datasets.processors.profiles_processor\" }, ClickHouse host: clickhouse, ClickHouse port: 9000, ClickHouse HTTP port: 8123, ClickHouse database: default"},"target":"rust_snuba::consumer"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:12.539243Z","level":"INFO","fields":{"message":"Starting consumer for \"profiles\"","storage":"profiles"},"target":"rust_snuba::consumer"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:14.489573Z","level":"INFO","fields":{"message":"Kafka consumer member id: \"rdkafka-22391d6b-1d5e-4039-b69a-d37e6b175f18\""},"target":"sentry_arroyo::backends::kafka"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:14.489913Z","level":"INFO","fields":{"message":"New partitions assigned: {Partition { topic: Topic(\"processed-profiles\"), index: 0 }: 626}"},"target":"sentry_arroyo::processing"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:17.648035Z","level":"INFO","fields":{"message":"Partitions to revoke: [Partition { topic: Topic(\"processed-profiles\"), index: 0 }]"},"target":"sentry_arroyo::processing"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:17.648083Z","level":"INFO","fields":{"message":"Joining strategy"},"target":"sentry_arroyo::processing"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:17.649363Z","level":"INFO","fields":{"message":"Partition revocation complete."},"target":"sentry_arroyo::processing"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:18.254294Z","level":"INFO","fields":{"message":"Kafka consumer member id: \"rdkafka-22391d6b-1d5e-4039-b69a-d37e6b175f18\""},"target":"sentry_arroyo::backends::kafka"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T19:59:18.254505Z","level":"INFO","fields":{"message":"New partitions assigned: {Partition { topic: Topic(\"processed-profiles\"), index: 0 }: 626}"},"target":"sentry_arroyo::processing"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T20:02:31.531956Z","level":"INFO","fields":{"message":"Inserted 1 rows"},"target":"rust_snuba::strategies::clickhouse::writer_v2"}
snuba-profiling-profiles-consumer-1 | {"timestamp":"2025-11-17T20:04:30.973466Z","level":"INFO","fields":{"message":"Inserted 2 rows"},"target":"rust_snuba::strategies::clickhouse::writer_v2"}
docker compose logs snuba-profiling-functions-consumer
snuba-profiling-functions-consumer-1 | 2025-11-17 19:53:59,896 Initializing Snuba...
snuba-profiling-functions-consumer-1 | 2025-11-17 19:56:26,343 Snuba initialization took 146.4510903400369s
snuba-profiling-functions-consumer-1 | 2025-11-17 19:56:45,936 Initializing Snuba...
snuba-profiling-functions-consumer-1 | 2025-11-17 19:59:06,325 Snuba initialization took 140.39047172799474s
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:06.542266Z","level":"INFO","fields":{"message":"Storage: functions_raw, ClickHouse Table Name: functions_raw_local, Message Processor: MessageProcessorConfig { python_class_name: \"FunctionsMessageProcessor\", python_module: \"snuba.datasets.processors.functions_processor\" }, ClickHouse host: clickhouse, ClickHouse port: 9000, ClickHouse HTTP port: 8123, ClickHouse database: default"},"target":"rust_snuba::consumer"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:06.542359Z","level":"INFO","fields":{"message":"Starting consumer for \"functions_raw\"","storage":"functions_raw"},"target":"rust_snuba::consumer"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:10.596600Z","level":"INFO","fields":{"message":"Kafka consumer member id: \"rdkafka-45494d85-e832-4344-b604-13b20e2ae10d\""},"target":"sentry_arroyo::backends::kafka"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:10.596902Z","level":"INFO","fields":{"message":"New partitions assigned: {Partition { topic: Topic(\"profiles-call-tree\"), index: 0 }: 519}"},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:13.680771Z","level":"INFO","fields":{"message":"Partitions to revoke: [Partition { topic: Topic(\"profiles-call-tree\"), index: 0 }]"},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:13.680818Z","level":"INFO","fields":{"message":"Joining strategy"},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:13.719517Z","level":"INFO","fields":{"message":"Partition revocation complete."},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:14.485834Z","level":"INFO","fields":{"message":"Kafka consumer member id: \"rdkafka-45494d85-e832-4344-b604-13b20e2ae10d\""},"target":"sentry_arroyo::backends::kafka"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:14.485932Z","level":"INFO","fields":{"message":"New partitions assigned: {Partition { topic: Topic(\"profiles-call-tree\"), index: 0 }: 519}"},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:16.829158Z","level":"INFO","fields":{"message":"Partitions to revoke: [Partition { topic: Topic(\"profiles-call-tree\"), index: 0 }]"},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:16.829224Z","level":"INFO","fields":{"message":"Joining strategy"},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:16.832627Z","level":"INFO","fields":{"message":"Partition revocation complete."},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:18.124614Z","level":"INFO","fields":{"message":"Kafka consumer member id: \"rdkafka-45494d85-e832-4344-b604-13b20e2ae10d\""},"target":"sentry_arroyo::backends::kafka"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T19:59:18.124808Z","level":"INFO","fields":{"message":"New partitions assigned: {Partition { topic: Topic(\"profiles-call-tree\"), index: 0 }: 519}"},"target":"sentry_arroyo::processing"}
snuba-profiling-functions-consumer-1 | {"timestamp":"2025-11-17T20:02:30.563897Z","level":"INFO","fields":{"message":"Inserted 1 rows"},"target":"rust_snuba::strategies::clickhouse::writer_v2"}
d0758280dd3d :) SHOW TABLES LIKE '%profiles%';
SHOW TABLES LIKE '%profiles%'
Query id: 52aa0c83-6999-49b0-8165-86f6d17eddd2
┌─name───────────┐
1. │ profiles_local │
└────────────────┘
1 row in set. Elapsed: 0.003 sec.
d0758280dd3d :) SELECT count() FROM profiles_local;
SELECT count()
FROM profiles_local
Query id: 6eb27e25-ddb2-45c1-979e-5a2ec91e1a22
┌─count()─┐
1. │ 967 │
└─────────┘
1 row in set. Elapsed: 0.003 sec.
I use the Python SDK with this init config:

import sentry_sdk
from sentry_sdk.integrations.logging import LoggingIntegration
from sentry_sdk.integrations.sqlalchemy import SqlalchemyIntegration

sentry_sdk.init(
    dsn="https://sentry.my/123",
    environment="staging",
    traces_sample_rate=1.0,
    profile_session_sample_rate=1.0,
    profile_lifecycle="trace",
    debug=True,
    enable_logs=True,
    integrations=[
        SqlalchemyIntegration(),
        LoggingIntegration(),
    ],
    send_default_pii=True,
)
I also tried using continuous profiling, but it didn't help.
Is this related to these feature flags, or is it another issue?
So we'll make a patch release to fix this. 25.11.1 will be released soon to cover the fix from #4060.
Hi folks! 25.11.1 is out now, please do upgrade :)
I upgraded from 25.10.0 to 25.11.1.
In install/bootstrap-s3-nodestore.sh we have this command:
bucket_list=$($s3cmd --access_key=sentry --secret_key=sentry --no-ssl --region=us-east-1 --host=localhost:8333 --host-bucket='localhost:8333/%(bucket)' ls)
In my case the result is:
2025-10-20 16:42 s3://nodestore
2025-12-02 09:01 `s3://profiles`
So the next time the install.sh script is run, you reach this line:
if [[ $(echo "$bucket_list" | head -1 | awk '{print $3}') != 's3://nodestore' ]]; then
then the result is:
s3://profiles
so it enters the if branch and tries to create the nodestore bucket again:
$s3cmd --access_key=sentry --secret_key=sentry --no-ssl --region=us-east-1 --host=localhost:8333 --host-bucket='localhost:8333/%(bucket)' mb s3://nodestore
which of course fails, since s3://nodestore already exists.
Am I doing something wrong?
> Am I doing something wrong?
@klemen-df I'm gonna use the ultimate Claude answer: You are absolutely right!
I need to fix this soon.
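For reference, something along these lines would make the check independent of bucket ordering (just a sketch, not the shipped script):

# Sketch: check whether the nodestore bucket appears anywhere in the listing,
# instead of assuming it is on the first line of the `s3cmd ls` output.
if ! echo "$bucket_list" | grep -q 's3://nodestore'; then
  $s3cmd --access_key=sentry --secret_key=sentry --no-ssl --region=us-east-1 --host=localhost:8333 --host-bucket='localhost:8333/%(bucket)' mb s3://nodestore
fi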