Same error:

```python
import requests
requests.get('https://www.huggingface.co')
```

```
SSLError: HTTPSConnectionPool(host='www.huggingface.co', port=443): Max retries exceeded with url: / (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:997)')))
```
the same here 
This helped me:

```
!conda env update -n base -f environment.yaml
```

The new environment is generated and activated.
Downgrading `requests` to 2.27.1 together with the following will solve the problem:

```python
import os
os.environ['CURL_CA_BUNDLE'] = ''
```
My example use case is summarization of chat data (a large dataset). I use BAML for output formatting. Prediction time using Ollama with Mistral Instruct takes 10 min / 100...
Hello, hope you are doing well. You can find below an example of how I use vLLM for batch predictions for summary generation:

```python
from huggingface_hub import login
login(token="hf_")  # change...
```
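The snippet above is truncated, so here is a minimal sketch of offline batch generation with vLLM for chat summarization. It is an illustration only: the model checkpoint (`mistralai/Mistral-7B-Instruct-v0.2`), the `chats` list, and the prompt template are all assumptions, not part of the original post.

```python
from vllm import LLM, SamplingParams

# Hypothetical input: a list of chat transcripts to summarize.
chats = ["<chat transcript 1>", "<chat transcript 2>"]

# Build one summarization prompt per chat (the prompt template is an assumption).
prompts = [f"Summarize the following chat:\n\n{chat}\n\nSummary:" for chat in chats]

# Load the model once; vLLM batches all prompts internally.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
sampling_params = SamplingParams(temperature=0.2, max_tokens=256)

# Generate summaries for the whole prompt list in a single call.
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text.strip())
```

Because vLLM processes the whole prompt list with continuous batching, this tends to be much faster than looping over single requests to an Ollama server.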