Oliver Cobb
We need to revisit the way randomisation is performed in the tests:
- In some of the CI tests we want random operations to be deterministic, and so we need...
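A minimal sketch of how deterministic behaviour could be enforced in such tests, assuming only the stdlib and NumPy generators need seeding (the helper name is hypothetical, not the library's API):

```python
import random
import numpy as np

def set_seed(seed: int) -> None:
    """Hypothetical helper: fix the global RNGs so CI tests are reproducible."""
    random.seed(seed)
    np.random.seed(seed)

set_seed(0)
a = np.random.rand(3)
set_seed(0)
b = np.random.rand(3)  # identical to `a` because the seed was reset
```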
For the model-uncertainty detectors, returning the difference in average uncertainty on the reference set vs the test set can give a good indication of whether the drift is likely to...
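A toy numeric sketch of that signal (the uncertainty scores below are made up for illustration):

```python
import numpy as np

# Hypothetical per-instance uncertainty scores (e.g. predictive entropy)
uncert_ref = np.array([0.10, 0.12, 0.08])
uncert_test = np.array([0.35, 0.40, 0.30])

# A positive difference suggests the model is markedly less certain on the
# test window, i.e. drift into regions where performance may degrade
diff = uncert_test.mean() - uncert_ref.mean()
```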
Currently, when performing Monte Carlo dropout to compute a notion of uncertainty, all of the model's layers are put in training mode (by passing `training=True` to the call), whereas we would...
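In Keras the standard way to keep dropout stochastic at inference time, without putting other layers (e.g. batch norm) in training mode, is to pass `training=True` only at the dropout layer's call site. A sketch with an illustrative model:

```python
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(4,))
h = tf.keras.layers.Dense(8)(inputs)
# Force only this dropout layer to remain active at inference time;
# the rest of the model behaves as in normal inference
h = tf.keras.layers.Dropout(0.5)(h, training=True)
outputs = tf.keras.layers.Dense(2)(h)
model = tf.keras.Model(inputs, outputs)

out = model(np.ones((3, 4), dtype=np.float32))
```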
The wine-quality dataset is particularly simple and convenient for demoing drift detectors. We should provide loading functionality in the library to facilitate demo notebooks and general user exploration.
At the moment we select lambda, a regularisation parameter, from the fairly ad hoc list `[1/(4**i) for i in range(10)]`. A more principled selection method would be nice.
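For reference, the current grid is geometric with ratio 1/4; the same values can be generated with `np.logspace`, which makes the endpoints and density easy to tune (purely illustrative, not a proposal for the principled scheme):

```python
import numpy as np

# The current ad-hoc grid: 4**0, 4**-1, ..., 4**-9
adhoc = [1 / (4**i) for i in range(10)]

# The same geometric grid via logspace; num and endpoints are easy to adjust
logspaced = np.logspace(0, -9 * np.log10(4), num=10)
```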
The functions aren't change-detection-specific, so they should be moved into `pytorch/utils/` and `tensorflow/utils/` instead, with proper docstrings and tests.
At the moment the pytorch version applies _up to_ the indexed layer, whereas the tensorflow version applies the indexed layer as well.
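The discrepancy is essentially an exclusive vs inclusive slice at the indexed layer. A toy illustration of the two conventions (the model here is hypothetical):

```python
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
idx = 1  # index of the ReLU layer

# "Up to" the indexed layer, exclusive (the pytorch behaviour described above)
exclusive = model[:idx]       # just the first Linear
# Applying the indexed layer as well (the tensorflow behaviour described above)
inclusive = model[:idx + 1]   # Linear followed by ReLU
```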
There are various ways that the functions `alibi_detect.utils.pytorch.distance.batch_compute_kernel_matrix` and `alibi_detect.utils.tensorflow.distance.batch_compute_kernel_matrix` can be made more efficient. Providing options for computing the parts needed for linear-time estimators also makes sense for...
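As background, linear-time MMD-style estimators only need kernel evaluations on disjoint pairs of instances rather than the full n x n matrix, so exposing a "paired" computation avoids the quadratic cost. A sketch with a plain NumPy RBF kernel (the function below is illustrative, not the library's API):

```python
import numpy as np

def rbf(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    # Row-wise RBF kernel evaluations k(x_i, y_i)
    return np.exp(-np.sum((x - y) ** 2, axis=-1) / (2 * sigma**2))

rng = np.random.default_rng(0)
x, y = rng.normal(size=(6, 2)), rng.normal(size=(6, 2))

paired = rbf(x, y)  # O(n): one evaluation per pair, enough for linear-time estimators
full = np.exp(-((x[:, None] - y[None]) ** 2).sum(-1) / 2)  # O(n^2) full matrix
```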
Currently we call kernels as `kernel(x, y)`; however, models defined using Keras's functional API require multiple inputs to be passed as a list. In these cases we instead need to...
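A minimal sketch of the constraint, using a toy functional-API "kernel" (a plain inner product) defined for illustration:

```python
import numpy as np
import tensorflow as tf

# Hypothetical kernel built with the functional API: two inputs, one output
x_in = tf.keras.Input(shape=(2,))
y_in = tf.keras.Input(shape=(2,))
out = tf.keras.layers.Dot(axes=1)([x_in, y_in])  # inner product as a toy kernel
kernel = tf.keras.Model(inputs=[x_in, y_in], outputs=out)

x = np.ones((3, 2), dtype=np.float32)
y = np.ones((3, 2), dtype=np.float32)

# Functional models with multiple inputs must be called with a list,
# i.e. kernel([x, y]) rather than kernel(x, y)
k = kernel([x, y])
```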
So far this is just a first draft of the overview page, shared early to gather feedback before working towards something more polished. Methods pages for individual detectors and updated docstrings will then...