"Duplicate TimeSeries encountered" errors from Stackdriver
After upgrading from 0.7.0 to the latest master, I started seeing occasional errors like this from `CreateTimeSeries`:
```
rpc error: code = InvalidArgument desc = One or more TimeSeries could not be written: Field timeSeries[1] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.: timeSeries[1]
```
Here's a sample request matching the error message above (slightly reformatted):
name:"projects/xxxxx"
time_series:<metric:<type:"custom.googleapis.com/opencensus/ts_bridge/metric_import_latencies" labels:<key:"metric_name" value:"sli_sample_ratio10m" > labels:<key:"opencensus_task" value:"go-2@localhost" > > resource:<type:"global" > points:<interval:<end_time:<seconds:1539597540 nanos:971130331 > start_time:<seconds:1539597540 nanos:971119624 > > value:<distribution_value:<count:1 mean:602 bucket_options:<explicit_buckets:<bounds:100 bounds:250 bounds:500 bounds:1000 bounds:2000 bounds:3000 bounds:4000 bounds:5000 bounds:7500 bounds:10000 bounds:15000 bounds:20000 bounds:40000 bounds:60000 bounds:90000 bounds:120000 bounds:300000 bounds:600000 > > bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:1 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 > > > >
time_series:<metric:<type:"custom.googleapis.com/opencensus/ts_bridge/metric_import_latencies" labels:<key:"metric_name" value:"sli_sample_ratio10m" > labels:<key:"opencensus_task" value:"go-2@localhost" > > resource:<type:"global" > points:<interval:<end_time:<seconds:1539597541 nanos:703625748 > start_time:<seconds:1539597540 nanos:971119624 > > value:<distribution_value:<count:1 mean:602 bucket_options:<explicit_buckets:<bounds:100 bounds:250 bounds:500 bounds:1000 bounds:2000 bounds:3000 bounds:4000 bounds:5000 bounds:7500 bounds:10000 bounds:15000 bounds:20000 bounds:40000 bounds:60000 bounds:90000 bounds:120000 bounds:300000 bounds:600000 > > bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:1 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 > > > >
time_series:<metric:<type:"custom.googleapis.com/opencensus/ts_bridge/metric_import_latencies" labels:<key:"metric_name" value:"total_bytes_rcvd" > labels:<key:"opencensus_task" value:"go-2@localhost" > > resource:<type:"global" > points:<interval:<end_time:<seconds:1539597541 nanos:703625748 > start_time:<seconds:1539597540 nanos:971119624 > > value:<distribution_value:<count:1 mean:718 bucket_options:<explicit_buckets:<bounds:100 bounds:250 bounds:500 bounds:1000 bounds:2000 bounds:3000 bounds:4000 bounds:5000 bounds:7500 bounds:10000 bounds:15000 bounds:20000 bounds:40000 bounds:60000 bounds:90000 bounds:120000 bounds:300000 bounds:600000 > > bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:1 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 > > > >
time_series:<metric:<type:"custom.googleapis.com/opencensus/ts_bridge/metric_import_latencies" labels:<key:"metric_name" value:"total_bytes_sent" > labels:<key:"opencensus_task" value:"go-2@localhost" > > resource:<type:"global" > points:<interval:<end_time:<seconds:1539597541 nanos:703625748 > start_time:<seconds:1539597540 nanos:971119624 > > value:<distribution_value:<count:1 mean:735 bucket_options:<explicit_buckets:<bounds:100 bounds:250 bounds:500 bounds:1000 bounds:2000 bounds:3000 bounds:4000 bounds:5000 bounds:7500 bounds:10000 bounds:15000 bounds:20000 bounds:40000 bounds:60000 bounds:90000 bounds:120000 bounds:300000 bounds:600000 > > bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:1 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 > > > >
time_series:<metric:<type:"custom.googleapis.com/opencensus/ts_bridge/import_latencies" labels:<key:"opencensus_task" value:"go-2@localhost" > > resource:<type:"global" > points:<interval:<end_time:<seconds:1539597541 nanos:703638673 > start_time:<seconds:1539597540 nanos:971119624 > > value:<distribution_value:<count:1 mean:1337 bucket_options:<explicit_buckets:<bounds:100 bounds:250 bounds:500 bounds:1000 bounds:2000 bounds:3000 bounds:4000 bounds:5000 bounds:7500 bounds:10000 bounds:15000 bounds:20000 bounds:40000 bounds:60000 bounds:90000 bounds:120000 bounds:300000 bounds:600000 > > bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:1 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 bucket_counts:0 > > > >
time_series:<metric:<type:"custom.googleapis.com/opencensus/ts_bridge/oldest_metric_age" labels:<key:"opencensus_task" value:"go-2@localhost" > > resource:<type:"global" > points:<interval:<end_time:<seconds:1539597541 nanos:703641788 > > value:<int64_value:479964 > > >
As you can see, `timeSeries[0]` and `timeSeries[1]` are identical (same metric type, labels, and resource) except for the point's `end_time`, which differs by less than a second.
Stackdriver accepts at most one point per time series per `CreateTimeSeries` request. I think the exporter will need to either send only the latest point for each time series (discarding earlier ones) or split such duplicates across several separate `CreateTimeSeries` calls; a sketch of the latter approach is below.
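For illustration, here's a minimal sketch of the splitting option. This is not the exporter's actual code: the package and function names (`tsdedup`, `seriesKey`, `splitBatches`) are hypothetical, and it assumes the v3 monitoring protos from `google.golang.org/genproto/googleapis/monitoring/v3`. The idea is to treat metric type + metric labels + resource type + resource labels as the series identity (which is exactly what the two duplicate entries above share) and never place two entries with the same identity in one batch:

```go
// Package tsdedup is a hypothetical home for this sketch.
package tsdedup

import (
	"fmt"
	"sort"
	"strings"

	monitoringpb "google.golang.org/genproto/googleapis/monitoring/v3"
)

// sortedLabels renders a label map deterministically (Go map iteration
// order is randomized, so keys must be sorted first).
func sortedLabels(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	var b strings.Builder
	for _, k := range keys {
		fmt.Fprintf(&b, "|%s=%s", k, labels[k])
	}
	return b.String()
}

// seriesKey derives the identity of a TimeSeries: metric type, metric
// labels, resource type, and resource labels. Two entries with equal keys
// (like timeSeries[0] and timeSeries[1] above) must not share a request.
func seriesKey(ts *monitoringpb.TimeSeries) string {
	return ts.Metric.Type + sortedLabels(ts.Metric.Labels) +
		"|" + ts.Resource.Type + sortedLabels(ts.Resource.Labels)
}

// splitBatches partitions series into batches so that each batch holds at
// most one point per time series; each batch can then be sent in its own
// CreateTimeSeries request.
func splitBatches(series []*monitoringpb.TimeSeries) [][]*monitoringpb.TimeSeries {
	var batches [][]*monitoringpb.TimeSeries
	var seen []map[string]bool // per-batch set of series keys
	for _, ts := range series {
		placed := false
		for i := range batches {
			if !seen[i][seriesKey(ts)] {
				batches[i] = append(batches[i], ts)
				seen[i][seriesKey(ts)] = true
				placed = true
				break
			}
		}
		if !placed {
			// Every existing batch already has this series; start a new one.
			batches = append(batches, []*monitoringpb.TimeSeries{ts})
			seen = append(seen, map[string]bool{seriesKey(ts): true})
		}
	}
	return batches
}
```

The alternative (keeping only the point with the latest `end_time` for each key and dropping the rest) is simpler and saves API quota, but silently discards data; splitting into multiple requests preserves all points at the cost of extra calls.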
I don't believe I saw these errors on 0.7.0. I haven't examined the diff between 0.7.0 and master closely, but looking at the list of commits, 2f26a5d1900c27d75297423079bda98fedb6712b seems the most suspicious.