felix
Hi, I'm using MMseqs2 for all-vs-all alignments. As indicated in the user guide PDF, I'm using the bash script to perform a fake prefiltering step before aligning. That seems to...
Hi, are the fine-tuning datasets that you used (after your preprocessing steps) available somewhere? Can't seem to find them in the repo.
Hey, I just tried making figures via the convenient draw.io import provided on the site. It seems to cut off the icons at the bottom: [screenshot]. Preview for comparison: ...
Hi, I'm interested in finding matches for a local substructure. To prevent aligning to other domains and partial overlaps, I tried removing everything from the query structure that I'm not...
**Describe the bug** I'm training a model and trying to save it using `save_checkpoint` after the first epoch. Training (with stage 0, bf16) goes smoothly, but I get an NCCL...
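A minimal sketch of that setup, not the original repro: the model, data, and all config values beyond ZeRO stage 0 + bf16 are stand-ins, and the script assumes it is started with the `deepspeed` launcher.

```python
# Hedged repro sketch: ZeRO stage 0 + bf16, save after the first "epoch".
import torch
import deepspeed

model = torch.nn.Linear(512, 512)
engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config={
        "train_micro_batch_size_per_gpu": 8,
        "bf16": {"enabled": True},            # bf16, as in the report
        "zero_optimization": {"stage": 0},    # ZeRO stage 0, as in the report
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    },
)

for _ in range(10):  # stand-in for one epoch
    x = torch.randn(8, 512, device=engine.device, dtype=torch.bfloat16)
    loss = engine(x).float().pow(2).mean()
    engine.backward(loss)
    engine.step()

# save_checkpoint is a collective call: every rank must reach it, otherwise
# NCCL hangs or errors -- reportedly the point where the failure shows up.
engine.save_checkpoint("checkpoints", tag="epoch_0")
```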
# 📚 Which caches are updated efficiently in `get_fantasy_model`? Hi, I'm interested in adding fantasy observations to a GP and computing the posterior covariance. The docs are pretty clear that...
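For context, a hedged sketch of the pattern the question concerns: a generic RBF `ExactGP` (untrained hyperparameters), one posterior call to populate the prediction caches, then `get_fantasy_model`.

```python
# Sketch: fantasize on an ExactGP and query the updated posterior covariance.
import torch
import gpytorch

class GPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(x), self.covar_module(x)
        )

train_x, train_y = torch.randn(20, 1), torch.randn(20)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = GPModel(train_x, train_y, likelihood).eval()

with torch.no_grad():
    model(torch.randn(5, 1))  # populate the prediction caches first

# Rank-one cache updates rather than a full refit; which caches this path
# actually updates efficiently is the question above.
fantasy_model = model.get_fantasy_model(torch.randn(3, 1), torch.randn(3))
with torch.no_grad():
    posterior = fantasy_model(torch.randn(5, 1))
    print(posterior.covariance_matrix.shape)
```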
# 🐛 Bug Hi, I'm working on a problem that requires me to temporarily add data to a GP's training data in the forward pass. It seems to work when...
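A sketch of the temporary-data pattern being described, reusing the `GPModel` setup from the previous snippet. The helper name is hypothetical; `set_train_data(..., strict=False)` is the swap mechanism assumed here, and stale prediction caches are the likely trap.

```python
# Hypothetical helper: augment train data, predict, then restore.
import torch

def predict_with_extra_data(model, extra_x, extra_y, test_x):
    old_x, old_y = model.train_inputs[0], model.train_targets
    model.set_train_data(
        torch.cat([old_x, extra_x]), torch.cat([old_y, extra_y]), strict=False
    )
    # ExactGP caches its prediction strategy in eval mode; toggling
    # train()/eval() drops the cache so the next call conditions on the
    # augmented data instead of the old one.
    model.train()
    model.eval()
    try:
        with torch.no_grad():
            return model(test_x)
    finally:
        model.set_train_data(old_x, old_y, strict=False)
        model.train()  # clear caches again after restoring
        model.eval()
```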
### 🐛 Describe the bug Hi, I have a dataset in TFRecords format and am trying to move to TorchData's API for loading tfrecords files. This is the minimal example:...
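For reference, a minimal TorchData pipeline for `.tfrecord` files using the `TFRecordLoader` datapipe (functional form `load_from_tfrecord`); the directory and glob are placeholders.

```python
# Sketch: list .tfrecord files, open them as binary streams, parse records.
from torchdata.datapipes.iter import FileLister, FileOpener

datapipe = FileLister("data/", masks="*.tfrecord")
datapipe = FileOpener(datapipe, mode="b")   # TFRecords are binary
datapipe = datapipe.load_from_tfrecord()    # yields dicts of features

for example in datapipe:
    print(example.keys())
    break
```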
Hi, I'm trying to understand the implementation of Mistral's attention in `MistralAttention`. https://github.com/huggingface/transformers/blob/main/src/transformers/models/mistral/modeling_mistral.py#L195 It is my understanding that it should always be using local window attention. In `MistralFlashAttention2` this is...
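A quick, hedged way to see where the window size lives: it is a plain config field, which `MistralFlashAttention2` receives explicitly; whether the eager `MistralAttention` path enforces it is exactly the question above.

```python
# Check the configured sliding window for the base 7B model.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("mistralai/Mistral-7B-v0.1")
print(cfg.sliding_window)  # 4096 in the base 7B config

# The attention implementation can be pinned at load time (recent
# transformers versions):
# model = AutoModelForCausalLM.from_pretrained(
#     "mistralai/Mistral-7B-v0.1", attn_implementation="eager"
# )
```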
Hi, I'm trying to use homologene to rename the rows of a gene expression table. As indicated in the docs, queries for genes that cannot be mapped are not returned...
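A hedged pandas sketch of why the silent drops bite when renaming rows, and one way to keep alignment. The mapping table below is a stand-in, not homologene's actual output schema; the column names are assumptions.

```python
import pandas as pd

# Toy expression table: genes x samples
expr = pd.DataFrame(
    {"s1": [1.0, 2.0, 3.0], "s2": [4.0, 5.0, 6.0]},
    index=["TP53", "BRCA1", "FAKE1"],
)

# Stand-in for the mapping result: FAKE1 could not be mapped, so it is
# simply absent from the output rather than returned as NA.
mapping = pd.DataFrame({"human": ["TP53", "BRCA1"], "mouse": ["Trp53", "Brca1"]})

# A left join keeps every expression row, so unmapped genes surface as NaN
# instead of silently shifting the row order during the rename.
expr = expr.join(mapping.set_index("human")["mouse"], how="left")
expr = expr.dropna(subset=["mouse"]).set_index("mouse")
print(expr)
```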