
Inconsistent anomaly detection?

ajthomas1949 opened this issue 3 years ago • 0 comments

  • Orion version: Colab Demo slightly modified
  • Python version: 3.8
  • Operating System: Windows/Chrome

Description

I modified the Colab demo to use my own data set (without any known anomalies). The data set consists of 6336 floating-point temperature values, ranging from a minimum of 63.15625 to a maximum of 77.61875 and evenly spaced in time (I used incrementing integer values instead of actual timestamps).
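For reference, a minimal sketch of how a file like this could be generated (the values below are random placeholders rather than my actual readings, and I am assuming the timestamp/value column layout that load_signal expects in the demo):

import numpy as np
import pandas as pd

# 6336 placeholder temperature readings in the observed range,
# with incrementing integers standing in for real timestamps.
values = np.random.uniform(63.15625, 77.61875, size=6336)
df = pd.DataFrame({
    'timestamp': np.arange(len(values)),  # 0, 1, 2, ... evenly spaced
    'value': values,
})
df.to_csv('SpaceTempAverage.csv', index=False)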

I ran the code 3 times, restarting the runtime before each run. The anomalies found in runs 1-3 were:

Run 1:

   start   end  severity
0   2022  2151  0.055077
1   4016  4150  0.152319
2   4201  4334  0.362518

Run 2:

   start   end  severity
0     14   162  0.168957
1    303   418  0.015307
2   4016  4152  0.211795
3   4201  4332  0.357512

Run 3:

   start   end  severity
0     25   158  0.061093
1   4016  4152  0.192928
2   4201  4334  0.372024

The anomalies starting at timestamps 4016 and 4201 seem to be fairly consistent across the runs, but the anomalies starting at run 1/2022, run 2/14, run 2/303 and run 3/25 seem divergent. Any idea what might have caused these discrepancies? Is it just the randomness of training? Could it be a scaling issue around the temperature values, perhaps?
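One way to rule the training randomness in or out would be to pin the random seeds before fitting; a minimal sketch, assuming the TensorFlow 1.x / Keras stack that the Colab runtime uses (the seed values are arbitrary, and GPU kernels may still introduce some nondeterminism):

import random
import numpy as np
import tensorflow as tf

# Fix the seeds of every RNG the pipeline might touch.
random.seed(0)
np.random.seed(0)
tf.compat.v1.set_random_seed(0)  # tf.random.set_seed(0) on TensorFlow 2.x

If the detections still differ with the seeds pinned, that would point away from simple RNG variation.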

What I did

# load signal
from orion.data import load_signal

signal = 'SpaceTempAverage.csv'
df = load_signal(signal)

from orion import Orion

hyperparameters = {
    "mlprimitives.custom.timeseries_preprocessing.time_segments_aggregate#1": {
        "interval": 1
    },
    "orion.primitives.tadgan.TadGAN#1": {
        'epochs': 5
    }
}

orion = Orion(
    pipeline='tadgan',
    hyperparameters=hyperparameters
)

orion.fit(df)
anomalies = orion.detect(df)
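To compare runs without restarting the runtime, the same calls could be wrapped in a loop, something like this (just a sketch reusing the df and hyperparameters from above):

# Re-train and re-detect a few times to see how stable the output is.
results = []
for run in range(3):
    orion = Orion(pipeline='tadgan', hyperparameters=hyperparameters)
    orion.fit(df)
    results.append(orion.detect(df))

for run, anomalies in enumerate(results, start=1):
    print(f'Run {run}:')
    print(anomalies)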

On each run I saw some warnings:

Using TensorFlow backend.
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
Instructions for updating:
If using Keras pass *_constraint arguments to layers.
(6336, 2)
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/tensorflow_core/python/ops/math_grad.py:1424: where (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
/usr/local/lib/python3.7/dist-packages/keras/engine/training.py:297: UserWarning: Discrepancy between trainable weights and collected trainable weights, did you set model.trainable without calling model.compile after ?
  'Discrepancy between trainable weights and collected trainable'
WARNING:tensorflow:From /usr/local/lib/python3.7/dist-packages/keras/backend/tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

The UserWarning about the discrepancy between trainable weights and collected trainable weights was then repeated several more times during training.

For run 3, the epoch data were:

Epoch: 1/5, [Dx loss: [ 0.9434773 -1.8255786 1.7167794 0.10522768]] [Dz loss: [6.0178466 0.246037 4.6921635 0.10796461]] [G loss: [-4.6759696 -1.7039264 -3.9663026 0.09942596]]
Epoch: 2/5, [Dx loss: [-0.87321526 -1.5089027 0.2742535 0.03614341]] [Dz loss: [-36.158577 0.15399204 -38.5907 0.22781241]] [G loss: [51.347267 -0.21352597 50.66463 0.08961608]]
Epoch: 3/5, [Dx loss: [ -1.0263617 -15.784202 14.400498 0.03573452]] [Dz loss: [11.738983 0.07289093 10.341151 0.1324941 ]] [G loss: [ -7.3660216 -14.623326 6.456777 0.08005257]]
Epoch: 4/5, [Dx loss: [ -0.7876256 -10.403595 9.273801 0.03421703]] [Dz loss: [-103.63609 1.3674035 -107.528656 0.2525214]] [G loss: [ 1.22568886e+02 -9.08051968e+00 1.30903702e+02 7.45710135e-02]]
Epoch: 5/5, [Dx loss: [ -0.6160056 -12.731344 11.831088 0.02842519]] [Dz loss: [ -8.821623 1.5473021 -12.490306 0.21213844]] [G loss: [ 8.658516 -11.908795 19.834763 0.07325505]]
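For what it's worth, plotting the combined losses (assuming the first number in each printed list is the total, as Keras usually reports for multi-output models) shows they are still swinging widely after 5 epochs; a quick matplotlib sketch with the values copied from the log above:

import matplotlib.pyplot as plt

# Combined loss per epoch for run 3, transcribed from the log above.
epochs = [1, 2, 3, 4, 5]
dx_loss = [0.9434773, -0.87321526, -1.0263617, -0.7876256, -0.6160056]
dz_loss = [6.0178466, -36.158577, 11.738983, -103.63609, -8.821623]
g_loss = [-4.6759696, 51.347267, -7.3660216, 122.568886, 8.658516]

plt.plot(epochs, dx_loss, label='Dx loss')
plt.plot(epochs, dz_loss, label='Dz loss')
plt.plot(epochs, g_loss, label='G loss')
plt.xlabel('Epoch')
plt.ylabel('Combined loss')
plt.legend()
plt.show()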

ajthomas1949 • Jun 30 '22 12:06