SampledModel set_model_params overrides dt_model and conf parameters
When setting up models from an existing transition matrix, using the same dt_model in a regular ML MSM and in a sampled MSM yields differently scaled timescales.
bmsm = pyemma.msm.models.SampledMSM(P=tmat, dt_model='0.1 ns')
bmsm.set_model_params(samples=[pyemma.msm.markov_model(p) for p in samples])
msm = pyemma.msm.markov_model(tmat, dt_model='0.1 ns')
yields timescales that differ by a factor of ten: np.allclose(msm.timescales(), bmsm.timescales() / 10) is True.
import pyemma
import numpy as np
tmat = np.array([[0.1, 0.9], [0.9, 0.1]])  # toy 2-state transition matrix
bmsm = pyemma.msm.models.SampledMSM(P=tmat, dt_model='0.1 ns')
msm = pyemma.msm.markov_model(tmat, dt_model='0.1 ns')
print(bmsm.dt_model)
print(msm.dt_model)
np.testing.assert_allclose(msm.timescales(), bmsm.timescales())
passes and prints 0.1 ns both times, so I cannot reproduce this issue.
Sorry, my fault. I'm actually calling
bmsm.set_model_params(samples=[pyemma.msm.markov_model(p) for p in samples])
after instantiating it. This resets dt_model to its default of '1 step'.
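A full runnable sketch of what I do, with samples standing in for my sampled transition matrices:
import pyemma
import numpy as np
tmat = np.array([[0.1, 0.9], [0.9, 0.1]])
samples = [tmat.copy()]  # stand-in for the sampled transition matrices
bmsm = pyemma.msm.models.SampledMSM(P=tmat, dt_model='0.1 ns')
print(bmsm.dt_model)  # 0.1 ns
bmsm.set_model_params(samples=[pyemma.msm.markov_model(p) for p in samples])
print(bmsm.dt_model)  # 1 step -- silently reset to the default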
You need to set dt_model in the samples as well. I agree that there should be some consistency checking for the samples.
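For example (a sketch, with samples again standing in for the sampled transition matrices), passing dt_model explicitly in the same call should keep the parent model and the samples consistent:
bmsm.set_model_params(samples=[pyemma.msm.markov_model(p, dt_model='0.1 ns') for p in samples],
                      dt_model='0.1 ns')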
This is actually an issue in SampledMSM, where the default value of dt_model is not None, so it indeed gets overwritten by update_model_params. The same applies to the confidence interval, which has a default of 0.95 instead of None. So if the user sets the confidence interval to 0.9 and then calls set_model_params with another parameter like samples, omitting conf, it is reset to 0.95 again.
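In other words, the situation is roughly this (a sketch of the pattern, not the exact pyemma signature):
def set_model_params(self, samples=None, conf=0.95, dt_model='1 step'):
    # conf and dt_model default to concrete values rather than None, so
    # update_model_params treats them as user-supplied and overwrites
    # whatever was set at construction time
    self.update_model_params(samples=samples, conf=conf, dt_model=dt_model)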
This can only happen when a SampledModel is created manually, which I guess only advanced users do.
I agree, but wouldn't it just be fixed by setting the defaults in this method to None?
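Something like this sketch, assuming update_model_params skips None values as described above:
def set_model_params(self, samples=None, conf=None, dt_model=None):
    # with None defaults, update_model_params leaves an already-set
    # attribute untouched unless the caller passes an explicit value
    self.update_model_params(samples=samples, conf=conf, dt_model=dt_model)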