
ValueError: assignment destination is read-only

Hrovatin opened this issue 4 years ago • 4 comments

I tried to model a single gene across different cell types with diffxpy, but I get the error below. Today I switched to the latest dev version, but the same error also occurs on v0.7.4.

training location model: True
training scale model: True
iter   0: ll=282395.523814
caught 1 linalg singular matrix errors
iter   1: ll=282395.523814, converged: 0.00% (loc: 100.00%, scale update: False), in 0.13sec
Fitting dispersion models: 0.00% in 0.00sec

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-108-a70ab00da08b> in <module>
      2     # Does not work in current version of diffxpy, so overwrite the function
      3     #result=de.fit.model(
----> 4     model_result=diffxpy.fit.model(
      5         data=adata_iir,
      6         dmat_loc=dmat_loc,

~/miniconda3/envs/rpy2_3/lib/python3.8/site-packages/diffxpy/diffxpy/fit/fit.py in model(data, formula_loc, formula_scale, as_numeric, init_a, init_b, gene_names, sample_description, dmat_loc, dmat_scale, constraints_loc, constraints_scale, noise_model, size_factors, batch_size, training_strategy, quick_scale, dtype, **kwargs)
    209     )
    210 
--> 211     model = _fit(
    212         noise_model=noise_model,
    213         data=data,

~/miniconda3/envs/rpy2_3/lib/python3.8/site-packages/diffxpy/diffxpy/testing/tests.py in _fit(noise_model, data, design_loc, design_scale, design_loc_names, design_scale_names, constraints_loc, constraints_scale, init_model, init_a, init_b, gene_names, size_factors, batch_size, backend, training_strategy, quick_scale, train_args, close_session, dtype)
    242         pass
    243 
--> 244     estim.train_sequence(
    245         training_strategy=training_strategy,
    246         **train_args

~/miniconda3/envs/rpy2_3/lib/python3.8/site-packages/batchglm/models/base/estimator.py in train_sequence(self, training_strategy, **kwargs)
    122                         (x, str(d[x]), str(kwargs[x]))
    123                     )
--> 124             self.train(**d, **kwargs)
    125             logger.debug("Training sequence #%d complete", idx + 1)
    126 

~/miniconda3/envs/rpy2_3/lib/python3.8/site-packages/batchglm/train/numpy/base_glm/estimator.py in train(self, max_steps, method_b, update_b_freq, ftol_b, lr_b, max_iter_b, nproc, **kwargs)
    104                 idx_update = np.where(np.logical_not(fully_converged))[0]
    105                 if self._train_scale:
--> 106                     b_step = self.b_step(
    107                         idx_update=idx_update,
    108                         method=method_b,

~/miniconda3/envs/rpy2_3/lib/python3.8/site-packages/batchglm/train/numpy/base_glm/estimator.py in b_step(self, idx_update, method, ftol, lr, max_iter, nproc)
    344             )
    345         else:
--> 346             return self._b_step_loop(
    347                 idx_update=idx_update,
    348                 method=method,

~/miniconda3/envs/rpy2_3/lib/python3.8/site-packages/batchglm/train/numpy/base_glm/estimator.py in _b_step_loop(self, idx_update, method, max_iter, ftol, nproc)
    514                         ))
    515 
--> 516                     delta_theta[0, j] = scipy.optimize.brent(
    517                         func=cost_b_var,
    518                         args=(data, eta_loc, xh_scale),

ValueError: assignment destination is read-only
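For background (not specific to diffxpy internals), this is NumPy's standard error when code assigns into an array whose `writeable` flag is off, for example a view produced by `np.broadcast_to`. A minimal, self-contained reproduction of the same error class, with the usual `.copy()` fix:

```python
import numpy as np

# np.broadcast_to returns a read-only view: its writeable flag is False.
view = np.broadcast_to(np.zeros(3), (2, 3))

msg = ""
try:
    view[0, 0] = 1.0  # raises the same ValueError as in the traceback
except ValueError as exc:
    msg = str(exc)
print(msg)  # assignment destination is read-only

# A copy is writable, which is the usual fix for this class of error.
writable = view.copy()
writable[0, 0] = 1.0
```

Whether batchglm ends up with such a read-only array for 1-feature inputs (e.g. via broadcasting or a degenerate slice) is an assumption here; the snippet only illustrates where the message itself comes from.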

Hrovatin — Apr 07 '21 20:04

Duplicate of #186

Hrovatin — Apr 09 '21 06:04

This is solved for me by replacing my 1-feature (gene) adata_iir with a 2-feature adata_temp.

print('Original adata shape:',adata_iir.shape)
adata_temp=anndata.concat([adata_iir,adata_iir],axis=1)
adata_temp.var_names_make_unique()
print('Modified adata shape:',adata_temp.shape)

Original adata shape: (24162, 1)
Modified adata shape: (24162, 2)

Hrovatin — May 03 '21 11:05

But I still have the problem that coef_sd == 2.222759e-162, so I cannot do anything with this result.

Hrovatin — May 03 '21 11:05

This seems to be related to the matrix dimensions, as @Hrovatin noted. With the simulated data as in the tutorial, the test breaks below 129 features and works smoothly from 129 on:

sim = Simulator(num_observations=200, num_features=129)

I could not understand why this happens or how to generalize it to the actual data!

kvshams — May 26 '21 15:05