It would be nice to have an "Experiments" directory for reproducible research, via MLflow or a similar tool.
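As a minimal sketch of what such an experiments directory could capture, here is a stdlib-only stand-in for MLflow-style run tracking (the directory layout and field names are hypothetical, not MLflow's actual format):

```python
import json
import time
from pathlib import Path

def log_run(base_dir: str, params: dict, metrics: dict) -> Path:
    """Write one reproducible-run record under <base_dir>/<timestamp>/."""
    run_dir = Path(base_dir) / time.strftime("%Y%m%d-%H%M%S")
    run_dir.mkdir(parents=True, exist_ok=True)
    # Persist the inputs and outputs of the run so it can be reproduced later.
    (run_dir / "params.json").write_text(json.dumps(params, indent=2))
    (run_dir / "metrics.json").write_text(json.dumps(metrics, indent=2))
    return run_dir

run = log_run("experiments",
              params={"model": "SWE-Llama-13b", "temperature": 0},
              metrics={"resolved_rate": 0.0})
```

A real MLflow integration would replace the JSON writes with `mlflow.log_params` / `mlflow.log_metrics`, but the on-disk idea is the same.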

[HuggingFace Endpoints](https://huggingface.co/inference-endpoints/dedicated) seems to be an easier way to run SWE-Llama models in the cloud. Any tips for implementing this feature?

Would it be possible to run inference on a CPU? It seems that `run_llama.py` requires a GPU.

```
python run_llama.py \
    --dataset_name_or_path princeton-nlp/SWE-bench_oracle \
    --model_name_or_path princeton-nlp/SWE-Llama-13b \
    --output_dir ./outputs \
    --temperature 0
```

**What problem or use case are you trying to solve?** OpenDevin requires an Agent implementation and LangGraph seems to be a good candidate. **Describe the UX of the solution you'd...

agent framework
severity:low

I think `princeton-nlp/SWE-bench_Lite_oracle` is more suitable for an MVP. Source: https://www.swebench.com/lite.html

### Describe the feature Currently [deepseekcoder](https://deepseekcoder.github.io/) is a promising open-source model to experiment with SWE-agent. It could be integrated via an OpenAI-compatible API, and [deepseek.com](https://deepseek.com) offers 10M free tokens, making...

➕ feature
prio: low

Add `-e` when viewing logs in order to show the end of the ollama logs

### Describe the feature Adding LLM token counts to the generated inference output is important for cost calculation when comparing different models. ### Potential Solutions https://github.com/AgentOps-AI/tokencost can be used to calculate prompt...

inference
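As a rough illustration of the cost arithmetic involved, here is a stdlib-only sketch; the per-token prices below are made-up placeholders, and in practice a library such as tokencost would supply the real rate tables:

```python
# Hypothetical per-1M-token USD prices; real rates would come from a
# maintained source such as tokencost, not from this table.
PRICES = {
    "model-a": {"prompt": 1.0, "completion": 2.0},
    "model-b": {"prompt": 0.5, "completion": 1.5},
}

def inference_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """USD cost of one inference, given the token counts attached to it."""
    p = PRICES[model]
    return (prompt_tokens * p["prompt"]
            + completion_tokens * p["completion"]) / 1_000_000

cost = inference_cost("model-a", prompt_tokens=1_000, completion_tokens=500)
# 1_000 * 1.0 + 500 * 2.0 = 2_000 micro-dollars -> 0.002 USD
```

Attaching `prompt_tokens` and `completion_tokens` to each generated inference is the prerequisite; the cost itself is then a simple lookup-and-multiply.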

Adds `export_path` for `visualize_speaker_transitions_dict` ## Why are these changes needed? ## Related issue number ## Checks - [x] I've included any doc changes needed for https://microsoft.github.io/autogen/. See https://microsoft.github.io/autogen/docs/Contribute#documentation to build...

group chat

I am getting `WARNING:root:Warning: model not found. Using cl100k_base encoding.` when using non-OpenAI models such as `claude-2`. I checked the code and it seems to be a tiktoken issue....
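For context, the behavior in question is a model-name-to-encoding lookup with a default fallback. The sketch below mimics that lookup with a plain dict rather than tiktoken's actual internal table (the table entries here are assumptions for illustration):

```python
import warnings

# Hypothetical model -> encoding table. Models absent from the table
# (e.g. claude-2) fall back to cl100k_base, producing the warning seen above.
MODEL_ENCODINGS = {
    "gpt-4": "cl100k_base",
    "gpt-3.5-turbo": "cl100k_base",
}

def encoding_for_model(model: str) -> str:
    """Return the tokenizer encoding name for a model, warning on fallback."""
    try:
        return MODEL_ENCODINGS[model]
    except KeyError:
        warnings.warn("Warning: model not found. Using cl100k_base encoding.")
        return "cl100k_base"
```

Since `claude-2` is not an OpenAI model, any tiktoken-based counter will hit this fallback; the token counts it reports for Anthropic models are therefore approximations, not exact.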