bug(datadog_metrics sink): lack of counter datapoint aggregation leads to loss of updates
Problem
Currently, the datadog_metrics sink does not do any windowing/aggregation of metrics itself. In some cases, this means it can generate a request that updates the same series (name + tags + metric type) multiple times within that request. Further, datapoints in a request to the Datadog metrics intake API carry a timestamp with second granularity... so it is possible to generate a request that updates the same series multiple times with identical timestamps on the datapoints, even when the values differ.
However, in the case of counters, the Datadog metrics intake processes these updates in a last-write-wins fashion. To wit, a single request with 10 updates for the same series, each with datapoints sharing the same timestamp, leads to that series, when graphed, showing only the value from the last update for that series in the request rather than the sum of all 10.
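To make the effect concrete, here is a small illustration (purely made-up values and a simplified tuple representation, not Vector's actual types) of two counter updates for the same series sharing one second-granularity timestamp within a single request:

```rust
fn main() {
    // Illustrative values only: two counter updates for the same series
    // ("requests_total", host:a) with identical second-granularity timestamps.
    let updates: Vec<(&str, &str, i64, f64)> = vec![
        ("requests_total", "host:a", 1_700_000_000, 3.0),
        ("requests_total", "host:a", 1_700_000_000, 5.0),
    ];

    // What the intake effectively keeps for counters today: last write wins.
    let graphed = updates.last().map(|&(_, _, _, v)| v);
    assert_eq!(graphed, Some(5.0));

    // What the series should actually show: the sum of both updates.
    let expected: f64 = updates.iter().map(|&(_, _, _, v)| v).sum();
    assert_eq!(expected, 8.0);
}
```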
Required Changes
Conceptually, we need to perform some level of aggregation within the sink itself to ensure that we sum counter updates that land on identical datapoints (same series and timestamp). We do not need to perform any sort of roll-up aggregation (e.g. all metrics in the last 10 seconds), and we also do not need to change the logic for gauges or distributions/histograms: their behavior is already correct.
It's not currently clear to me what the best way to approach this aggregation is, but we should be able to do it fairly performantly since it's scoped only to counters and boils down to a map lookup plus adding two values together, versus the more involved merging we would have to do for histograms, etc.
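As a rough illustration of that idea, here is a minimal sketch. The types and names here (`SeriesKey`, `CounterPoint`, `aggregate_counters`) are hypothetical stand-ins, not Vector's actual metric types or sink API; the real implementation would key on the sink's own notion of series identity plus the second-granularity timestamp it writes into the payload.

```rust
use std::collections::HashMap;

/// Hypothetical, simplified series identity: name + tags + the
/// second-granularity timestamp used in the request payload.
#[derive(Clone, PartialEq, Eq, Hash)]
struct SeriesKey {
    name: String,
    tags: Vec<String>, // assumed pre-sorted so identical tag sets hash equally
    timestamp_secs: i64,
}

/// A single counter datapoint destined for one request.
struct CounterPoint {
    key: SeriesKey,
    value: f64,
}

/// Collapse duplicate counter updates before building a request: datapoints
/// with the same (series, timestamp) are summed instead of being emitted as
/// separate entries that the intake would treat as last-write-wins.
fn aggregate_counters(points: Vec<CounterPoint>) -> Vec<CounterPoint> {
    let mut summed: HashMap<SeriesKey, f64> = HashMap::new();
    for point in points {
        *summed.entry(point.key).or_insert(0.0) += point.value;
    }
    summed
        .into_iter()
        .map(|(key, value)| CounterPoint { key, value })
        .collect()
}
```

The appeal of this shape is that the hot path is just a hash lookup and a float addition per counter datapoint, with no cross-batch state to manage, since the duplication we care about only exists within a single request.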