Erin Pennington
I did something similar, although I just added a new preset with my limited models. Your change looks like it should work though. These are the changes I made (this...
This happened to me on an older AlphaFold version, so making sure you're on the latest version might help.
Although this doesn't address the potential bottleneck issue mentioned above, I personally set these environment variables to allow a single AlphaFold prediction to pool the memory across multiple available...
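For anyone landing here, this is the kind of setting meant above (the unified-memory variables from the AlphaFold README); the specific fraction value is an assumption you should tune for your own GPUs:

```shell
# Let JAX/TensorFlow allocations spill into unified (host + device) memory
export TF_FORCE_UNIFIED_MEMORY=1
# Allow the XLA client to claim more than one GPU's worth of memory;
# the exact value (4.0 here) is an assumption -- tune it for your setup
export XLA_PYTHON_CLIENT_MEM_FRACTION=4.0
```

With these set, a single prediction that overflows one card can spill into host memory and the other GPUs instead of crashing with an OOM.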
@davidyanglee I actually haven't observed the issue you described yet myself, but only because lately I haven't been running any large proteins in parallel with relaxation enabled, so I haven't...
It turns out that I have the same problem too, even with `TF_FORCE_UNIFIED_MEMORY=1` and an increased `XLA_PYTHON_CLIENT_MEM_FRACTION` value. It seems like the higher `XLA_PYTHON_CLIENT_MEM_FRACTION` lets AlphaFold use more of a...
I tried to understand what's happening with the multi-GPU memory issue, since I have a large complex I'm trying to predict which is taking much longer than expected. I'm...
I want to note that I installed OpenFold on another system, which did produce the expected prediction! The working system has 4 x NVIDIA Quadro RTX 5000. I'm going...
To narrow down the issue, I ran OpenFold predictions on both the bad-prediction and good-prediction systems using the `.a3m` file from MMseqs2 as the precomputed alignment. Here is what I...
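For reference, the comparison runs looked roughly like this; paths are placeholders and the flag names assume OpenFold's `run_pretrained_openfold.py` interface, so check them against your checkout:

```shell
# Stage the MMseqs2 .a3m where OpenFold expects precomputed alignments:
# one subdirectory per target, named after the FASTA entry (placeholder names)
mkdir -p alignments/my_target
cp my_target.a3m alignments/my_target/

# Run inference with the precomputed alignment on both systems
# (flag names assumed from run_pretrained_openfold.py -- verify locally)
python run_pretrained_openfold.py \
    fasta_dir/ template_mmcif_dir/ \
    --use_precomputed_alignments alignments/ \
    --model_device cuda:0 \
    --output_dir out/
```

Pinning the alignment input like this rules out MSA generation as the source of the difference, so any divergence between the two systems comes from the model/inference side.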
Since this is happening on the release tag `v1.0.0`, `use_flash` isn't present there yet. Where in the config should I set `use_memory_efficient_kernel`, since it's not assigned there already?
Thanks! I changed that line from `not use_lma` to `False` and ran it again, but it still produced a bad prediction.