Johnson Thomas
I am facing a similar problem. Previously the bounding box model was working well; now it is returning all zeros.
### Hacky fix
Added the following lines to cropper's `__init__.py`: `import json` and `from contextlib import suppress`. Changed lines 145 to 162 as follows: `if component_value: tot = len(component_value) - 1 rect = ...`
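Since the snippet above is truncated, here is a rough, hypothetical sketch of the kind of guard it describes (the `extract_rect` helper and the rect key names are illustrative assumptions, not the actual patch to the cropper's `__init__.py`):

```python
import json
from contextlib import suppress

def extract_rect(component_value):
    """Hypothetical helper illustrating the guard described above; the real
    patch lives inside the cropper package and its exact lines are truncated
    in the comment."""
    rect = None
    if component_value:
        tot = len(component_value) - 1
        with suppress(KeyError, TypeError, ValueError):
            last = component_value[tot]
            # The component may hand the crop box back as a JSON string or a dict.
            if isinstance(last, str):
                last = json.loads(last)
            # Key names are an assumption about the crop-box format.
            rect = {k: int(last[k]) for k in ("left", "top", "width", "height")}
    return rect
```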
NOT_IMPLEMENTED : Could not find an implementation for ConvInteger(10) node with name 'Conv_0_quant'
Same issue here. Do we have a timeline for the patch?
I second this. Support for Swift and the ANE (Apple Neural Engine) would be helpful for iOS and Mac developers.
I have trained an adapter for Qwen2 and pushed it to the Hugging Face Hub using AutoTrain. Now I don't have access to the virtual machine that created the adapter...
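In case it is useful to others, a minimal sketch of loading such an adapter straight from the Hub with PEFT, assuming it was pushed as a standard PEFT/LoRA adapter (the repo IDs below are placeholders):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Placeholder repo IDs; substitute the actual base model and adapter repos.
base_id = "Qwen/Qwen2-1.5B-Instruct"
adapter_id = "your-username/your-qwen2-adapter"

# Load the base model, then attach the adapter weights downloaded from the Hub.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter into the base weights for standalone use.
merged = model.merge_and_unload()
```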
I am having the same issue. Hope they fix this.
Fixed the rotary embedding issue. Added mixed precision - so this can run on a GPU with 24 GB. Edited repo at https://github.com/johnyquest7/KBLaM_mixed_precision. Needs more testing - please feel free to contribute...
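For anyone curious, a minimal sketch of the kind of mixed-precision training loop this refers to, using torch.cuda.amp on a CUDA GPU (the model, optimizer, and data below are placeholders, not the actual KBLaM training code):

```python
import torch

# Placeholders standing in for the actual model, data, and optimizer.
model = torch.nn.Linear(512, 512).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loader = [(torch.randn(8, 512), torch.randn(8, 512)) for _ in range(10)]

scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 underflow

for inputs, targets in loader:
    inputs, targets = inputs.cuda(), targets.cuda()
    optimizer.zero_grad(set_to_none=True)
    # Run the forward pass in reduced precision; this is what keeps the
    # memory footprint small enough for a ~24 GB card.
    with torch.cuda.amp.autocast():
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```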
@mfrederico I am not using Azure. Everything was running locally: local embedding creation, training, and inference on an RTX 3090. Had to add mixed precision training to make it work on the 3090...
@lawrenceadams all-MiniLM-L6-v2 for embeddings and Llama 3 1B for training
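For reference, a minimal sketch of producing embeddings with that sentence-transformers model (the sentences are placeholder examples):

```python
from sentence_transformers import SentenceTransformer

# Same embedding model mentioned above; sentences are placeholder examples.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
sentences = ["The first example sentence.", "A second, unrelated sentence."]
embeddings = model.encode(sentences)  # all-MiniLM-L6-v2 returns 384-dim vectors
print(embeddings.shape)
```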
@lawrenceadams try using a batch size of 2. Otherwise use Google Colab: zip the repo, upload it to Colab, then run it. Try a batch size of 4 or 2 in Colab.
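A minimal sketch of what dropping the batch size looks like in a PyTorch DataLoader, with a placeholder dataset standing in for the repo's actual data pipeline:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Placeholder dataset; the real training data comes from the repo.
dataset = TensorDataset(torch.randn(64, 512), torch.randn(64, 512))

# A batch size of 2 (or 4 on Colab) trades speed for a smaller memory footprint.
loader = DataLoader(dataset, batch_size=2, shuffle=True)

for inputs, targets in loader:
    pass  # training step goes here
```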