Takuya Kitazawa
That is, implement one-hot encoding for categorical attributes.
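As a concrete illustration, one-hot encoding maps each categorical value to a 0/1 indicator vector over the known categories. A minimal sketch in Python; the `one_hot` helper and the example categories are made up for illustration, not taken from the codebase:

```python
# Hypothetical helper: map a categorical value to an indicator vector.
def one_hot(value, categories):
    """Return a 0/1 vector with a single 1 at the position of `value`."""
    return [1.0 if c == value else 0.0 for c in categories]

categories = ["red", "green", "blue"]
print(one_hot("green", categories))  # -> [0.0, 1.0, 0.0]
```

Each categorical attribute thus expands into as many binary columns as it has distinct values, which is the input shape an FM expects.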
Idea: If `k` is set to zero, run model selection automatically.
Current default values:

- r = 0.02
- T1 = 10
- T2 = 5
- k = (use the result of model selection)
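The `k = 0` convention above could be wired up roughly as follows: try a small candidate grid and keep the `k` with the lowest validation loss. This is a sketch in Python for illustration; `train_and_validate` and the candidate grid are hypothetical stand-ins for the actual training routine:

```python
# Hypothetical model selection: when k == 0, choose the number of FM
# factors by minimizing validation loss over a small candidate grid.
def select_k(train_and_validate, candidates=(2, 4, 6, 8, 16)):
    """Return the candidate k with the smallest validation loss."""
    return min(candidates, key=train_and_validate)

def resolve_k(k, train_and_validate):
    # k == 0 means "run model selection automatically"
    return select_k(train_and_validate) if k == 0 else k
```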
```jl
using Pkg
Pkg.add("IJulia")
```

```
$ jupyter notebook
```
https://dl.acm.org/doi/10.1145/2168752.2168771

> LIBFM also contains methods for optimizing an FM model with respect to ranking [Liu and Yang 2008] based on pairwise classification [Rendle et al. 2009]

https://github.com/srendle/libfm/blob/30b9c799c41d043f31565cbf827bf41d0dc3e2ab/src/fm_core/fm_sgd.h#L53
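The pairwise-classification objective referenced above (BPR, Rendle et al. 2009) amounts to maximizing the probability that a preferred item outscores a non-preferred one, i.e. minimizing `-log(sigma(score_pos - score_neg))` per pair. A hedged sketch of that per-pair loss, with plain numbers standing in for FM scores:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# BPR-style pairwise loss: negative log-likelihood of ranking the
# positive item above the negative one. In libFM the scores would come
# from the FM model; here they are plain floats for illustration.
def bpr_loss(score_pos, score_neg):
    return -math.log(sigmoid(score_pos - score_neg))
```

The gradient of this loss with respect to the model parameters is what the SGD ranking routine linked above would step along.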
Tool: https://github.com/JuliaCI/BenchmarkTools.jl
### Synthetic

r=0.02, k=6, T1=10, T2=5

### Twitter

r=0.03, k=6, T1=10, T2=5
### Datadog

Aggregated data points of a sample metric:

- r=0.03, k=6, T1=10, T2=5 for **LogLoss**
- r=0.09 for **Hellinger**
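For reference, the Hellinger metric mentioned above measures the distance between two discrete probability distributions, bounded in [0, 1]. A small illustrative sketch (not the repository's actual scoring code):

```python
import math

# Hellinger distance between two probability vectors p and q:
# sqrt(sum((sqrt(p_i) - sqrt(q_i))^2)) / sqrt(2), ranging from 0
# (identical distributions) to 1 (disjoint support).
def hellinger(p, q):
    s = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    return math.sqrt(s) / math.sqrt(2)
```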
One important requirement is that an algorithm has to work well on high-dimensional data points such as:

```
[metric 1, metric 2, metric 3, ...., metric N]
```

This enables us...
Implemented a Singular Spectrum Transformation (SST) based change-point detector: [sst.py](https://github.com/takuti/datadog-anomaly-detector/blob/87f313138bef7add43ce3cac512375d692b151db/core/sst/sst.py)

- http://ide-research.net/papers/2005_SDM_Ide.pdf

Some experimental results:

### Synthetic data

### Twitter data

^ larger `r`

^ smaller `r` `w`...
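The core of SST can be sketched compactly: build trajectory (Hankel) matrices from a past window and a present window, and score the change as the distance between their dominant left-singular subspaces. This is an illustrative reimplementation following the Ide & Tsuda paper, not the code in sst.py; the window size `w` and subspace dimension `m` are example parameters:

```python
import numpy as np

def hankel(x, w):
    """Stack length-w sliding windows of x as columns of a trajectory matrix."""
    return np.column_stack([x[i:i + w] for i in range(len(x) - w + 1)])

def sst_score(past, present, w=10, m=2):
    """Change score in [0, 1]: distance between dominant singular subspaces."""
    U, _, _ = np.linalg.svd(hankel(past, w), full_matrices=False)
    Q, _, _ = np.linalg.svd(hankel(present, w), full_matrices=False)
    # Largest singular value of U_m^T Q_m is the cosine of the smallest
    # principal angle between the two subspaces; 1 - cos is the change score.
    s = np.linalg.svd(U[:, :m].T @ Q[:, :m], compute_uv=False)
    return 1.0 - s[0]
```

A stationary series scores near zero (past and present subspaces coincide), while a regime change pushes the score up, which is what makes the statistic robust to noise compared with thresholding raw values.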