nicho2
I want to use Ollama with the nomic-embed-text model for embeddings from AnythingLLM.
OK, I understand what to do now. I was misled by the command given on the Ollama site: ollama run nomic-embed-text
It's not necessary to run it; just pull the model and:
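The pull step looks like this (a sketch; the curl sanity check assumes Ollama is serving on its default port 11434):

```shell
# Pull the embedding model instead of running it interactively;
# embedding models are not chat models, so `ollama run` is not needed.
ollama pull nomic-embed-text

# Optional sanity check: request an embedding from the local Ollama API.
curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```

After the model is pulled, AnythingLLM can be pointed at the local Ollama instance as its embedding provider.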
The same change applies in zeno_visualize.py:

    model_args = re.sub(
        "/|=|:",
        "__",
        json.load(
            open(Path(args.data_path, model, "results.json"), encoding="utf-8")
        )["config"]["model_args"],
    )
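The substitution can be checked in isolation. Here a literal string stands in for the value read from results.json (the example value is illustrative, not taken from a real file):

```python
import re

def sanitize_model_args(model_args: str) -> str:
    # Replace "/", "=", and ":" with "__" so the string is safe to use
    # as a filename component, matching the re.sub call above.
    return re.sub("/|=|:", "__", model_args)

# Example in the shape stored under ["config"]["model_args"].
print(sanitize_model_args("pretrained=org/model:main"))  # pretrained__org__model__main
```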
In test_audio2coeff.py, change ## to something else (__ in my case):

    savemat(
        os.path.join(coeff_save_dir, '%s__%s.mat' % (batch['pic_name'], batch['audio_name'])),
        {'coeff_3dmm': coeffs_pred_numpy},
    )
    return os.path.join(coeff_save_dir, '%s__%s.mat' % (batch['pic_name'], batch['audio_name']))
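A quick check of the new naming scheme. The batch keys are from the script; the values here are made up for illustration:

```python
import os

def coeff_mat_path(coeff_save_dir: str, pic_name: str, audio_name: str) -> str:
    # Build the output path with "__" as the separator, mirroring the
    # savemat/return lines above; "__" avoids characters like "#" that
    # can be problematic in file names on some setups.
    return os.path.join(coeff_save_dir, '%s__%s.mat' % (pic_name, audio_name))

print(coeff_mat_path('results', 'face', 'speech'))
```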
I get this too:

    config = {
        "configurable": {
            # the user_id grants access rights to the spaces
            "user_id": "3",
            # checkpoints are accessed by thread_id
            "thread_id": thread_id,
        },
    ...
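As a minimal sketch, the same config dict can be built as plain Python and then passed to a compiled LangGraph graph's invoke call (the graph object and input message are placeholders, not from the thread):

```python
import uuid

# Per-request config: user_id drives access rights to the spaces,
# thread_id selects which checkpoint history the run attaches to.
thread_id = str(uuid.uuid4())  # any stable per-conversation id works
config = {
    "configurable": {
        # the user_id grants access rights to the spaces
        "user_id": "3",
        # checkpoints are accessed by thread_id
        "thread_id": thread_id,
    },
}

# With a compiled graph this would be passed as:
#   graph.invoke({"messages": [...]}, config=config)
print(config["configurable"]["user_id"])  # 3
```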
This is all true. It's important that the LLM cites references, such as the paragraph number and title. I had to use Python code to do this, but the tradeoff...
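One way to get paragraph-level citations (a sketch of the idea; the chunk format and the tag_chunk helper are my own invention, not code from this thread) is to prefix each chunk with its paragraph number and title before it reaches the LLM, so the model can quote them back:

```python
def tag_chunk(number: str, title: str, text: str) -> str:
    # Prepend a citation header the LLM can echo verbatim, e.g. "[§ 4.2 Scope]".
    return f"[§ {number} {title}]\n{text}"

# Hypothetical chunks with their paragraph numbers and titles.
chunks = [
    ("4.2", "Scope", "This clause applies to pressure vessels..."),
    ("4.3", "Exclusions", "The following equipment is excluded..."),
]
context = "\n\n".join(tag_chunk(n, t, body) for n, t, body in chunks)
print(context.splitlines()[0])  # [§ 4.2 Scope]
```

The tradeoff is extra preprocessing per chunk, but the model then has the exact reference text available in its context.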
perhaps caused by bad indentation in samples/apps/autogen-studio/autogenstudio/workflowmanager.py
Hello @jubinsoni, @crazywoola: when I send the request with "keywords", I get the chunk; when I send it with "keyword", I get no chunk.
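The difference can be illustrated with two request bodies (the surrounding fields and values are hypothetical; only the field name is the point):

```python
import json

# Payload that returns the chunk: the field is the plural "keywords".
payload_ok = {"keywords": ["pressure", "vessel"]}

# Payload that returns no chunk: the singular "keyword" is not recognized.
payload_bad = {"keyword": ["pressure", "vessel"]}

print(json.dumps(payload_ok))
```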
OK, I can try to do that.