I tried llama3-8b:

1. terminal 1: `python3 -m fastchat.serve.controller`
2. terminal 2: `python3 -m fastchat.serve.model_worker --model-path /data/llama3/Meta-Llama-3-8B-hf`
3. terminal 3: `python3 -m fastchat.serve.openai_api_server --host 127.0.0.1 --port 30008`
4. fs_agent.yaml:

```
default:
  module: "src.client.agents.FastChatAgent"
...
```
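For reference, here is a guessed completion of that fs_agent.yaml. Only the `module` line comes from the snippet above; every key under `parameters` is an assumption on my part, so verify the names against the FastChatAgent class in src/client/agents before using it:

```
# Hypothetical completion of fs_agent.yaml -- only `module` is from the post
# above; all parameter names/values below are assumptions. Check the
# FastChatAgent constructor in src/client/agents for the real schema.
llama3-8b:
  module: "src.client.agents.FastChatAgent"
  parameters:
    model_name: "Meta-Llama-3-8B-hf"              # assumed: must match the name the worker registered
    controller_address: "http://127.0.0.1:21001"  # assumed default FastChat controller port
    temperature: 0
    max_new_tokens: 512
```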
Should configs/assignments/default.yaml be changed to this?

```
import: definition.yaml

concurrency:
  task:
    dbbench-std: 5
    os-std: 5
  agent:
    llama3-8b: 5  # todo(xiao): this can be changed to other models

assignments:  # List[Assignment] | Assignment
  - agent:  # "task": List[str] |...
```
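For what it's worth, one plausible way to fill in the truncated assignments block, following the `List[Assignment] | Assignment` hint in the file's own comments; the exact field shapes are my guess, so check them against definition.yaml:

```
# Guessed completion of the assignments block. The shapes follow the inline
# comments above ("List[Assignment] | Assignment", '"task": List[str] | ...');
# verify against configs/assignments/definition.yaml before relying on it.
assignments:
  - agent:
      - llama3-8b
    task:
      - dbbench-std
      - os-std
```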
The error I get in terminal 5 (`python -m src.start_task -a`) is:

```
mysql.connector.errors.DatabaseError: 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:13563' (111)
Traceback (most recent call last):
  File "/home/xxxx/anaconda3/envs/agent-bench/lib/python3.9/site-packages/mysql/connector/connection_cext.py", line 291, in _open_connection
...
```
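Error 2003 with errno 111 is a refused TCP connection, so before digging into AgentBench itself it is worth checking whether anything is listening on that port at all. A minimal probe, with the host and port copied from the traceback above:

```
import socket

# Quick connectivity probe for the MySQL port the dbbench task is trying to
# reach. Host/port are copied from the error message above.
HOST, PORT = "127.0.0.1", 13563

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.settimeout(2)
    try:
        s.connect((HOST, PORT))
        print(f"Something is listening on {HOST}:{PORT}")
    except OSError as exc:
        print(f"Nothing reachable on {HOST}:{PORT}: {exc}")
        print("Check that the dbbench MySQL container was actually started by src.start_task.")
```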
The error in terminal 6 (`python -m src.assigner`) is:

```
python -m src.assigner
/home/xxxx/anaconda3/envs/agent-bench/lib/python3.9/site-packages/transformers/utils/generic.py:311: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
/home/xxxx/anaconda3/envs/agent-bench/lib/python3.9/site-packages/transformers/utils/generic.py:311: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Warning:...
```
If I want to use a local model like llama3, how do I use it to run the benchmark?
> Maybe you are looking for https://huggingface.co/deepseek-ai/DeepSeek-V2-Lite/blob/main/modeling_deepseek.py

I think it would be great if the deepseek team could maintain a repo like llama's. In the llama repo, it has a model...
> @Gy-Lu @ver217 @binmakeswell can you update the PP doc?

Could you add the sequence parallelism doc? Thank you very much.
> > > @Gy-Lu @ver217 @binmakeswell can you update the PP doc?
> > >
> > > Could you add the sequence parallelism doc? Thank you very much.
>
> ...
> #3056

Thanks, let me try it.
@marklysze hi, here are my steps:

1. terminal 1: `ollama run llama3.1:70b`
2. terminal 2: my code

```
# THIS TESTS: TWO AGENTS WITH TERMINATION
altmodel_llm_config = {
    "config_list": [...
```
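In case it helps, a minimal sketch of how I would expect that truncated `config_list` to continue, assuming Ollama's OpenAI-compatible endpoint on its default port 11434; the exact keys accepted may depend on your AutoGen version:

```
# Hypothetical completion of the truncated config above. Assumes Ollama's
# OpenAI-compatible API at its default local address.
altmodel_llm_config = {
    "config_list": [
        {
            "model": "llama3.1:70b",                   # must match the tag from `ollama run`
            "base_url": "http://localhost:11434/v1",   # Ollama's OpenAI-compatible endpoint
            "api_key": "ollama",                       # placeholder; Ollama does not check the key
        }
    ],
}
```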