
Bump huggingface-hub from 0.30.2 to 0.31.4 in /.github/actions/download-models

dependabot[bot] opened this issue 8 months ago • 0 comments

Bumps huggingface-hub from 0.30.2 to 0.31.4.
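For reference, the version pin change this PR makes would look like the following (a sketch — the exact file and pin style under /.github/actions/download-models are assumptions):

```diff
-huggingface-hub==0.30.2
+huggingface-hub==0.31.4
```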

Release notes

Sourced from huggingface-hub's releases.

[v0.31.4]: strict dataclasses, support DTensor saving & some bug fixes

This release includes new features and bug fixes.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.31.2...v0.31.4

[v0.31.2] Hot-fix: make hf-xet optional again and bump the min version of the package

Patch release to make hf-xet optional. More context in #3079 and #3078.

Full Changelog: https://github.com/huggingface/huggingface_hub/compare/v0.31.1...v0.31.2
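Since hf-xet is optional again as of v0.31.2, code that wants Xet-backed transfers can guard the import and fall back gracefully when the extra isn't installed (e.g. via `pip install "huggingface-hub[hf_xet]"` — the extra name is an assumption here); a minimal sketch:

```python
# hf-xet is an optional dependency of huggingface-hub >= 0.31.2.
# Guard the import so environments without it keep working.
try:
    import hf_xet  # noqa: F401
    HAS_XET = True
except ImportError:
    HAS_XET = False

print("xet backend available:", HAS_XET)
```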

[v0.31.0] LoRAs with Inference Providers, auto mode for provider selection, embeddings models and more

🧑‍🎨 Introducing LoRAs with fal.ai and Replicate providers

We're introducing blazingly fast LoRA inference powered by fal.ai and Replicate through Hugging Face Inference Providers! You can use any compatible LoRA available on the Hugging Face Hub and get generations at lightning fast speed ⚡

```python
from huggingface_hub import InferenceClient

client = InferenceClient(provider="fal-ai")  # or provider="replicate"

# output is a PIL.Image object
image = client.text_to_image(
    "a boy and a girl looking out of a window with a cat perched on the window sill. "
    "There is a bicycle parked in front of them and a plant with flowers to the right "
    "side of the image. The wall behind them is visible in the background.",
    model="openfree/flux-chatgpt-ghibli-lora",
)
```

⚙️ auto mode for provider selection

You can now automatically select a provider for a model using auto mode — it will pick the first available provider based on your preferred order set in https://hf.co/settings/inference-providers.

```python
from huggingface_hub import InferenceClient

# will select the first provider available for the model, sorted by your order
client = InferenceClient(provider="auto")

completion = client.chat.completions.create(
    model="Qwen/Qwen3-235B-A22B",
    messages=[{
```

... (truncated)

Commits

Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

dependabot[bot] · May 19 '25 18:05