Richard.Kang
Ok, it looks like I can pass the path or filename of the config as a runtime argument, as shown in https://speakerdeck.com/pmq20/node-dot-js-compiler-compiling-your-node-dot-js-application-into-a-single-executable?slide=36
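As a rough sketch of the same pattern (reading a config path passed as a runtime argument and loading it at startup), here is a minimal Python version; the `--config` flag and `load_config` helper are illustrative, not from the slides:

```python
import argparse
import json
import os
import tempfile

def load_config(path):
    """Load a JSON config file from the path given at runtime."""
    with open(path) as f:
        return json.load(f)

def parse_args(argv):
    """Parse the runtime arguments; only a config path here."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--config", required=True, help="path to config file")
    return parser.parse_args(argv)

# Demo: a temporary config file stands in for the real one.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump({"port": 3000}, f)
    tmp_path = f.name

args = parse_args(["--config", tmp_path])
config = load_config(args.config)
print(config["port"])  # -> 3000
os.remove(tmp_path)
```

The point is that the binary itself stays fixed; only the argument changes between environments.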
Hi @Tuumix, if I understand correctly, your training network size was 768x768 and you cropped your training images to that resolution. During detection, your input image resolution is 4000x3000 and...
Hi team, do we have any updates on this? Our current workaround is to use AL2, but we would like to leverage Bottlerocket's smaller footprint. Thanks!
Same here; I still hit the error even with all the latest packages:
```
% pip show h11 httpx litellm botocore
Name: h11
Version: 0.14.0
Summary: A pure-Python, bring-your-own-I/O implementation of...
```
Hey @krrishdholakia @melanie531, `sagemaker_chat/` works! Let me do more testing with other engines. Input:
```
response = litellm.completion(
    model="sagemaker_chat/jumpstart-model",
    messages=[{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are...
```
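For reference, the call shape I'm testing looks like the sketch below. The `sagemaker_chat/` prefix selects litellm's chat-style SageMaker handler, and `jumpstart-model` is a placeholder endpoint name; the actual `litellm.completion` call is commented out because it needs live AWS credentials:

```python
# Request parameters for a litellm SageMaker chat completion.
# "jumpstart-model" is a placeholder for the real endpoint name.
params = {
    "model": "sagemaker_chat/jumpstart-model",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# Real use (requires AWS credentials and a deployed endpoint):
# import litellm
# response = litellm.completion(**params)

# The "provider/endpoint" split is what routes the request.
provider, endpoint = params["model"].split("/", 1)
print(provider, endpoint)  # -> sagemaker_chat jumpstart-model
```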
On the other hand, how should I specify `sagemaker_chat` in proxy mode?
```
model_list:
  - model_name: jumpstart-model
    litellm_params:
      model: sagemaker_chat/jumpstart-model
      aws_profile_name: ml-sandbox
general_settings: # OPTIONAL Best Practices
  disable_spend_logs: False #...
```
Hit the same issue. Log below, if that helps with debugging:
```
INFO: 10.0.0.130:64796 - "POST /api/offer HTTP/1.1" 200 OK
2025-05-19 12:56:31.604 | INFO | bot:run_bot:26 - Starting Nova...
```
I got it working with `--network=host` to expose the UDP ports. Do we know which specific UDP ports need exposing, so we could map a range instead, e.g. `-p 3000-3100:3000-3100/udp`?
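If the server can be pinned to a known UDP port range, the `-p` mapping becomes straightforward. A minimal sketch of binding a UDP socket within an assumed 3000-3100 range (the range itself is illustrative, not the project's documented config):

```python
import socket

def bind_udp_in_range(lo=3000, hi=3100):
    """Bind a UDP socket to the first free port in [lo, hi]."""
    for port in range(lo, hi + 1):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            s.bind(("0.0.0.0", port))
            return s, port
        except OSError:
            # Port already in use; try the next one.
            s.close()
    raise RuntimeError(f"no free UDP port in {lo}-{hi}")

sock, port = bind_udp_in_range()
print(f"bound UDP port {port}")
sock.close()
```

With the server constrained like this, `-p 3000-3100:3000-3100/udp` would cover every port it can pick, avoiding `--network=host`.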