Luca Gibelli

7 comments of Luca Gibelli

I'm testing on a 4090 (24 GB of VRAM) on a system with 20 GB of DDR4. The reason lowering `--pages_per_groups` doesn't help is that each worker submits all pages...

The only real advantage of my approach is a smoother queue depth. With your approach the queue depth would oscillate: 50→550→50→550. I will try your approach tomorrow and report back if...
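The difference between the two refill strategies can be sketched with a toy simulation (this is not olmocr code; the page count, batch size, threshold, and drain rate are made-up numbers chosen to mirror the flags mentioned below, e.g. `--pages_per_batch` and `--min_queue_threshold`):

```python
# Toy model: compare queue depth when a worker dumps all its pages at once
# vs. topping the queue up in small batches only when it runs low.
from collections import deque

def simulate(pages: int, batch_size: int, threshold: int, drain_per_tick: int) -> int:
    """Return the maximum queue depth observed while processing `pages` items."""
    queue = deque()
    remaining = pages
    max_depth = 0
    while remaining or queue:
        # Producer: refill only when the queue falls below the threshold.
        if remaining and len(queue) < threshold:
            take = min(batch_size, remaining)
            queue.extend(range(take))
            remaining -= take
        max_depth = max(max_depth, len(queue))
        # Consumer: drain a few items per tick.
        for _ in range(min(drain_per_tick, len(queue))):
            queue.popleft()
    return max_depth

# One-shot submission: depth spikes to the full page count.
spiky = simulate(pages=500, batch_size=500, threshold=10**9, drain_per_tick=4)
# Small batches gated by a minimum-queue threshold: depth stays near the threshold.
smooth = simulate(pages=500, batch_size=5, threshold=30, drain_per_tick=4)
print(spiky, smooth)
```

The smooth variant never exceeds roughly `threshold + batch_size`, which is the "smoother queue depth" behavior described above.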

when using my PR #316, I run:

```shell
python -m olmocr.pipeline ./localworkspace --markdown --workers 2 \
  --gpu-memory-utilization 0.85 --max_model_len 8192 --pages_per_batch 5 \
  --min_queue_threshold 30 --semaphore_release_interval 3 --pdfs /path/to/*.pdf
```

without my...

> Thank you so much! I have another question: if I want to run olmOCR with vLLM as an API so that I can send files/images to it and get...
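One way the question above is usually approached (a hedged sketch, not taken from this thread): vLLM can expose an OpenAI-compatible HTTP server (e.g. `vllm serve <model> --port 8000`), and a rendered page image can then be sent as a base64 `image_url` in a chat-completions request. The model name, prompt, and endpoint below are placeholders; olmocr's own pipeline builds a more specific prompt.

```python
# Sketch: build an OpenAI-style chat-completions payload carrying one page
# image, suitable for POSTing to a local vLLM server. Model name and prompt
# are assumptions, not olmocr's actual values.
import base64
import json

def build_page_request(image_bytes: bytes, model: str, prompt: str) -> dict:
    """Return a chat-completions request body with one inline PNG image."""
    b64 = base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }

payload = build_page_request(
    b"\x89PNG...",                        # stand-in for real page bytes
    "allenai/olmOCR-7B-0225-preview",     # placeholder model name
    "Transcribe this page to markdown.",
)
# POST this to the server, e.g.:
#   requests.post("http://localhost:8000/v1/chat/completions", json=payload)
print(json.dumps(payload)[:60])
```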

> On May 20th Azure published this: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/file-search?tabs=python
>
> Looks like it's ready?

Nope, that's File Search; this issue is about File Inputs, a different thing.