RohitMidha23

Results: 10 issues by RohitMidha23

On running the training code I get the following error:

```
File "..\..\keras_maskrcnn\preprocessing\generator.py", line 139, in random_transform_group_entry
    transform = adjust_transform_for_image(next(self.transform_generator), image, self.transform_parameters.relative_translation)
TypeError: 'Compose' object is not an iterator...
```
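The error itself only says that a single albumentations `Compose` object was passed where the generator expects something it can call `next()` on. A minimal sketch of that distinction is below; `as_transform_generator` is a hypothetical helper, and note it fixes only the `TypeError` — keras-maskrcnn still expects the yielded items to be transform matrices of the kind keras-retinanet's `random_transform_generator` produces, not a `Compose`.

```python
import itertools

def as_transform_generator(transform):
    # Hypothetical helper: next() only works on iterators, so wrap a
    # single transform object in an infinite iterator that yields it
    # on every call. This resolves the TypeError shown above, but the
    # yielded object must still be in the format that the downstream
    # adjust_transform_for_image call expects.
    return itertools.repeat(transform)

compose_like = object()           # stand-in for an albumentations Compose
gen = as_transform_generator(compose_like)
assert next(gen) is compose_like  # next() now succeeds
```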

Added Entity Memory feature

I get the following error when I run both a Hugging Face model and a faster-whisper model on the same GPU:

```bash
self.model = ctranslate2.models.Whisper(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: CUDA failed with...
```

On translating a fine-tuned model from Hugging Face Whisper to `ctranslate2` and running it with faster-whisper, I get gibberish output. I've tried it with various different versions, but the output...
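For reference, a conversion sketch is below; the checkpoint and output paths are placeholders, and `--quantization` is optional. A mismatch between the tokenizer/processor files of the fine-tuned checkpoint and those the converted model loads is one possibility worth ruling out, so copying them explicitly with `--copy_files` is a reasonable first check.

```shell
# Sketch of converting a local fine-tuned checkpoint with the
# CTranslate2 converter; ./whisper-finetuned is a placeholder path.
ct2-transformers-converter \
    --model ./whisper-finetuned \
    --output_dir ./whisper-finetuned-ct2 \
    --copy_files tokenizer.json preprocessor_config.json \
    --quantization float16
```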

As you mentioned, we should fine-tune when the WER is > 20% and the dataset size is < 1000 hours. This is my case as well, where I have a fine-tuned model...

Is it possible to provide the code used to make the claim of ~400 ms latency? Just so all of us can also use and benefit from the same...

I have a particular use case where I am trying to send data from my JS front end to my Python backend via WebSockets, exposed through ngrok. Python server: ```python async...
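A minimal sketch of the server side of such a setup, under these assumptions: the third-party `websockets` package, JSON messages, and a hypothetical `process_message` helper; ngrok would then tunnel the local port (e.g. `ngrok http 8765`).

```python
import asyncio
import json

async def process_message(raw: str) -> str:
    # Hypothetical handler: parse the JSON sent by the JS front end
    # and echo it back with a status field.
    data = json.loads(raw)
    return json.dumps({"status": "ok", "echo": data})

def main():
    # Third-party dependency (pip install websockets); imported here
    # so process_message stays usable without it.
    import websockets

    async def handler(ws):
        # websockets >= 11 passes only the connection object; older
        # versions also pass a second `path` argument.
        async for raw in ws:
            await ws.send(await process_message(raw))

    async def serve():
        async with websockets.serve(handler, "localhost", 8765):
            await asyncio.Future()  # run until cancelled

    asyncio.run(serve())

if __name__ == "__main__":
    main()
```

The JS front end would then connect to the ngrok URL with `new WebSocket("wss://...")` and exchange JSON strings.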

bug

### System Info
- `transformers` version: 4.40.0.dev0
- Platform: macOS-14.1-arm64-arm-64bit
- Python version: 3.10.0
- Huggingface_hub version: 0.22.0
- Safetensors version: 0.4.2
- Accelerate version: 0.28.0
- Accelerate config: not...

Audio

For #254
- Added `quantized_model.yaml` and `peft_model.yaml`, which showcase the usage of Quantization and PEFT models.
- Added a short note on Authentication with `HF_TOKEN`.

## Issue encountered

Seems like a lot of features are supported but none of the examples show how best to use them.

## Solution/Feature

Even just one sample config file...

documentation
feature request
prio