
Understand the trade-off between interpretability and performance

Open · signalbash opened this issue · 2 comments

Deep learning methods are notoriously difficult to interpret. If the data are provided without explicitly engineered features, how will you detect biases in the model's predictions?

What sort of problem are you trying to solve with DL? Is the high performance of a DL approach worth the difficulty of explaining how the model arrives at its predictions, or does the model's value lie in understanding the biological problem at hand?

signalbash · Nov 11 '18

I think something can be learned about the problem by determining which models perform well, right? For instance, the fact that FactorNet benefits from information such as DNase-seq data says something about TF binding. Obviously, there is a limit to how much can be learned this way, and one should always guard against over-interpretation. However, I would argue that high-performing models can help us generate insightful hypotheses (which we should then verify) about the biological phenomena being modeled.
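
As a toy illustration of what I mean, an input-ablation experiment can test whether an auxiliary signal (standing in for something like a DNase-seq track) actually helps prediction. This is a minimal sketch on synthetic data, not FactorNet's actual code; the feature names and the simple logistic model are made up for the example:

```python
# Toy input-ablation experiment: does adding an auxiliary signal
# (a stand-in for something like DNase-seq) improve predictions?
# All data here are synthetic; this is NOT FactorNet's code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
seq_features = rng.normal(size=(n, 20))   # stand-in for sequence-derived features
accessibility = rng.normal(size=(n, 1))   # stand-in for a chromatin accessibility track

# In this toy setup, "binding" depends on both sequence and accessibility.
logits = seq_features[:, 0] + 2.0 * accessibility[:, 0]
y = (logits + rng.normal(scale=1.0, size=n)) > 0

X_full = np.hstack([seq_features, accessibility])
X_seq_tr, X_seq_te, X_full_tr, X_full_te, y_tr, y_te = train_test_split(
    seq_features, X_full, y, test_size=0.3, random_state=0
)

for name, X_tr, X_te in [("sequence only", X_seq_tr, X_seq_te),
                         ("sequence + accessibility", X_full_tr, X_full_te)]:
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")
```

If the model with the accessibility track consistently outperforms the sequence-only one on held-out data, that gap is the kind of observation that suggests (but does not prove) the auxiliary signal is informative for the biology being modeled.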

evancofer · Dec 08 '18

@evancofer can you please link the FactorNet repository or an article reference? Thanks!

tbrittoborges · Dec 10 '18