Daniel van Strien

68 issues by Daniel van Strien

**Is your feature request related to a problem? Please describe.** ## The problem The datasets hub currently has `8,239` datasets. These datasets span a wide range of modalities and...

enhancement
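
For context, a minimal sketch (not from the issue itself) of how a count like `8,239` can be obtained with `huggingface_hub`; `list_datasets` is the library's listing call, the printed message is purely illustrative:

```python
# Illustrative only: count the public datasets currently listed on the Hub.
from huggingface_hub import list_datasets

all_datasets = list(list_datasets())  # iterates over every public dataset repo
print(f"The Hub currently lists {len(all_datasets)} datasets.")
```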

Unless I missed something, at the moment this plugin doesn't work with Jupyter notebooks opened in Visual Studio Code. Since Jupyter notebooks often contain a lot of prose, it would...

## What does this PR do? This PR aims to make the `__repr__`s in `hf_api.py` easier to view on a narrow screen, i.e. to favour a longer `__repr__` for a class...

**Is your feature request related to a problem? Please describe.** Currently, the `__repr__`s for classes in the `hf_api` module are nice and compact but can require a lot of side-scrolling...
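
For illustration, a minimal sketch of the idea rather than the actual `hf_api.py` code: a `__repr__` that prints one attribute per line, so the output grows vertically instead of requiring horizontal scrolling. The `ModelInfo` fields below are placeholders.

```python
# Sketch only, assuming a simple info-style class; the real hf_api classes differ.
class ModelInfo:
    def __init__(self, modelId: str, downloads: int, likes: int):
        self.modelId = modelId
        self.downloads = downloads
        self.likes = likes

    def __repr__(self) -> str:
        # One attribute per line: longer vertically, but no side-scrolling.
        body = "\n".join(f"\t{k}: {v}" for k, v in self.__dict__.items())
        return f"{type(self).__name__}: {{\n{body}\n}}"
```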

Very nice project! I'm the Machine Learning Librarian at Hugging Face. We're seeing quite a few merged models produced via MergeKit being uploaded to the Hugging Face Hub. I wanted...

## Feature description Thanks for creating such an excellent tool! Since there is an increasing desire for a tight loop between annotation and model training, it could be useful...

### System Info

```shell
Package version: optimum 1.11.0
System: Google Colab (Linux-5.15.109+-x86_64-with-glibc2.35)
Python version: 3.10.12

transformers-cli env info:
- `transformers` version: 4.31.0
- Platform: Linux-5.15.109+-x86_64-with-glibc2.35
- Python version: 3.10.12
- ...
```

bug
bettertransformer

## Hugging Face Collections Hacktoberfest challenge! Hugging Face [Collections](https://huggingface.co/docs/hub/collections) are a handy tool for curating the Models, Datasets, Spaces and Papers on the Hub. We want to see what cool...

good first issue
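
For anyone picking this up, a hedged sketch (not part of the challenge text) of creating a collection programmatically with `huggingface_hub`; the title, description and item id below are placeholders:

```python
# Assumes huggingface_hub >= 0.17 and a valid Hugging Face token in your environment.
from huggingface_hub import create_collection, add_collection_item

collection = create_collection(
    title="My Hacktoberfest picks",            # placeholder title
    description="Models and datasets I like",  # placeholder description
)
add_collection_item(
    collection_slug=collection.slug,
    item_id="bert-base-uncased",               # any model id on the Hub
    item_type="model",
)
```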

Big fan of the Kraken project! I think it would be great to integrate Kraken with the Hugging Face Hub (https://huggingface.co/models). This would allow: - Kraken to use the Hugging...
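
To give a rough idea of what such an integration looks like, a sketch using `huggingface_hub` with placeholder repository and file names (Kraken's actual model format and repositories are not assumed here):

```python
# Sketch only: pull a model file from the Hub and push a locally trained one.
from huggingface_hub import hf_hub_download, upload_file

# Download a model file from a (hypothetical) Hub repository.
local_path = hf_hub_download(repo_id="someuser/kraken-model", filename="model.mlmodel")

# Upload a locally trained model to your own (hypothetical) repository.
upload_file(
    path_or_fileobj="trained.mlmodel",
    path_in_repo="model.mlmodel",
    repo_id="your-username/kraken-model",
)
```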

It's possible to use the [ORPOTrainer](https://huggingface.co/docs/trl/orpo_trainer) from TRL with very little modification to the current DPO notebook. Since ORPO reduces the resources required for training chat models even further (no...

on roadmap
feature request
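
A hedged sketch of the proposed swap, not the notebook itself: replacing the DPO trainer with TRL's `ORPOTrainer`. The model id, dataset id and hyperparameters below are placeholders, and newer TRL releases may expect `processing_class=` instead of `tokenizer=`.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "your-org/chat-base-model"  # placeholder model id
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# A preference dataset with "prompt", "chosen" and "rejected" columns (placeholder id).
train_dataset = load_dataset("your-org/preference-dataset", split="train")

training_args = ORPOConfig(output_dir="orpo-model", beta=0.1, per_device_train_batch_size=2)

# Unlike DPO, ORPO needs no separate reference model, which is part of the resource saving.
trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```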