Cage

7 comments by Cage

Similar question here: how does one do batch prediction using the JumpStart models?

Okay, something here could be relevant: https://github.com/aws/amazon-sagemaker-examples/blob/5c294c25541b51c53054ff4b4fd2629d8ece64d4/introduction_to_amazon_algorithms/jumpstart-foundation-models/text2text-generation-Batch-Transform.ipynb#L366

Trying the same, but the model file doesn't seem to be a real tar: ClientError: An error occurred (ValidationException) when calling the CreateTransformJob operation: Model file at "s3://jumpstart-cache-prod-us-west-2/meta-infer/infer-meta-textgeneration-llama-2-7b-f.tar.gz" is not...
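For reference, the batch-transform flow the linked notebook follows can be sketched roughly like this with the SageMaker Python SDK. This is an untested sketch: the model ID, bucket, prefix, and instance type are placeholders, `batch_input_uri` is a hypothetical helper, and gated artifacts (like the Llama 2 one above) may still fail validation.

```python
def batch_input_uri(bucket: str, prefix: str) -> str:
    # Hypothetical helper: build the S3 URI pointing at the JSONL batch input.
    return f"s3://{bucket}/{prefix}"


if __name__ == "__main__":
    # Requires the sagemaker SDK and AWS credentials with SageMaker permissions.
    from sagemaker.jumpstart.model import JumpStartModel

    # Placeholder model ID; gated models may not support this path.
    model = JumpStartModel(model_id="huggingface-text2text-flan-t5-xl")

    # Create a batch-transform worker instead of a real-time endpoint.
    transformer = model.transformer(
        instance_count=1,
        instance_type="ml.g5.2xlarge",  # placeholder instance type
    )
    transformer.transform(
        data=batch_input_uri("my-bucket", "batch-inputs/prompts.jsonl"),
        content_type="application/jsonlines",
    )
    transformer.wait()
```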

Getting much better performance using a background remover like MelBand Roformer Kim | Big Beta v5e via audio-separator; see my fork: https://github.com/Cage89/Whisper-WebUI/blob/master/modules/uvr/music_separator.py I did have to manually install torch, though, to...
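For anyone wanting to try the same outside the fork, standalone usage of the audio-separator package looks roughly like this. An untested sketch: the checkpoint filename, output naming, and `stem_paths` helper are assumptions, and the exact MelBand Roformer filename differs per release.

```python
def stem_paths(output_dir: str, base: str, stems=("Vocals", "Instrumental")) -> list:
    # Hypothetical helper: predict output paths, one file per separated stem.
    return [f"{output_dir}/{base}_({stem}).wav" for stem in stems]


if __name__ == "__main__":
    # Requires the audio-separator package (and a manually installed torch,
    # as noted above).
    from audio_separator.separator import Separator

    separator = Separator(output_dir="separated")
    # Load a MelBand Roformer checkpoint; the filename here is a placeholder.
    separator.load_model(model_filename="model_mel_band_roformer_ep_3005_sdr_11.4360.ckpt")
    output_files = separator.separate("input.wav")
    print(output_files)
```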

Starting to see this as well with v2 in some cases (unless the UI doesn't really switch models due to some caching).

Actually no, that was due to yt-dlp somehow downloading an AI-generated English audio track instead of the original Japanese one; see https://www.youtube.com/watch?v=gh6ECMyH7-g
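To avoid the auto-dubbed track, yt-dlp's format selector can filter on the audio track's `language` field. A sketch, assuming the video actually exposes multiple tagged audio tracks (whether the field is populated depends on the extractor); `original_audio_selector` is a hypothetical helper.

```python
def original_audio_selector(lang: str) -> str:
    # Prefer an audio track whose language tag starts with `lang`,
    # falling back to the best available audio if no tagged track exists.
    return f"bestaudio[language^={lang}]/bestaudio"


if __name__ == "__main__":
    # Requires yt-dlp; untested sketch.
    import yt_dlp

    opts = {"format": original_audio_selector("ja")}
    with yt_dlp.YoutubeDL(opts) as ydl:
        ydl.download(["https://www.youtube.com/watch?v=gh6ECMyH7-g"])
```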