tungsten106
You can find that [langchain_experimental](https://api.python.langchain.com/en/latest/tools/langchain_experimental.tools.python.tool.PythonAstREPLTool.html) has a similar function; the function from the older version was likely moved there. Install the package: ``` pip install langchain_experimental ``` then change `langchain.tools.python.tool` to `langchain_experimental.tools.python.tool` and it should work. (Tested with langchain 0.1.12 and langchain_experimental 0.0.54.)
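The import change above can be sketched with a fallback, so the same code works whether or not the tool has been moved to `langchain_experimental` (a minimal sketch, assuming one of the two packages is installed; if neither is, the name is left as `None`):

```python
# Try the new location first, then fall back to the old one.
try:
    from langchain_experimental.tools.python.tool import PythonAstREPLTool
except ImportError:
    try:
        from langchain.tools.python.tool import PythonAstREPLTool
    except ImportError:
        PythonAstREPLTool = None  # neither package is installed
```

With the tool class resolved, it can be instantiated as usual, e.g. `PythonAstREPLTool()`.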
> @tungsten106 I'd love to review this, but the diffs seem to have issues (entire file is shown as deleted, with all the lines also shown as added). I'm having...
It is possible to use fitz/PyMuPDF to extract the images from each page (just not at the **exact position** as in docx files), save each one to a location, and label it as...
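The per-page extraction described above could look roughly like this (a sketch, not the PR's actual implementation; the function name and the page/index file-naming scheme are my own, and PyMuPDF is imported lazily so the snippet loads even where it is not installed):

```python
import os

def extract_pdf_images(pdf_path, out_dir):
    """Save every embedded image from each page of a PDF into out_dir,
    naming files by page and image index so they can be labelled later."""
    import fitz  # PyMuPDF; imported lazily so this sketch loads without it
    doc = fitz.open(pdf_path)
    os.makedirs(out_dir, exist_ok=True)
    saved = []
    for page_index in range(len(doc)):
        for img_index, img in enumerate(doc[page_index].get_images(full=True)):
            xref = img[0]  # cross-reference number of the image object
            info = doc.extract_image(xref)  # dict with raw bytes and extension
            name = f"page{page_index + 1}_img{img_index + 1}.{info['ext']}"
            path = os.path.join(out_dir, name)
            with open(path, "wb") as f:
                f.write(info["image"])
            saved.append(path)
    return saved
```

Note this recovers only page order, not the image's position within the page layout.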
> > I think it's better to let the user choose the engine rather than replacing it > > I agree. There are pros and cons to each. The main...
> @tungsten106 please read the following Contributor License Agreement(CLA). If you agree with the CLA, please reply with the following information. > > ``` > @microsoft-github-policy-service agree [company="{your company}"] >...
> @tungsten106 Thank you for your contribution! It looks great so far. Just one more thing—when running the tests, files are generated. Could you add the following to the `.gitignore`...
> > I have updated that. > > For test speed, have you tried to use pytest-xdist to run test_markitdown.py in parallel? > > Thank you. You can run tests...
> maybe add cli option I have added a CLI option `--engine` to choose between the converters' engines; you can test it with the following command: ```bash python -m markitdown...
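On the Python side, an `--engine` option like the one above is typically wired up with argparse (a hedged sketch only — the engine names and defaults here are hypothetical, not markitdown's actual flags):

```python
import argparse

parser = argparse.ArgumentParser(prog="markitdown")
parser.add_argument("filename", help="input file to convert")
parser.add_argument(
    "--engine",
    choices=["pdfminer", "pymupdf"],  # hypothetical engine names
    default="pdfminer",
    help="which PDF conversion engine to use",
)

# Simulate: python -m markitdown sample.pdf --engine pymupdf
args = parser.parse_args(["sample.pdf", "--engine", "pymupdf"])
```

Using `choices` makes argparse reject unknown engines with a usage error, which keeps the dispatch code simple.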
You can refer to https://github.com/OpenSQZ/MiniCPM-V-CookBook/blob/main/quantization/gguf/minicpm-v4_5_gguf_quantize_zh.md to convert the model to GGUF format and then run it on the CPU with ollama, but it is very slow.
Same issue found in UI TARS Desktop. Version: [UI-TARS-0.2.4](https://github.com/bytedance/UI-TARS-desktop/releases/download/v0.2.4/UI-TARS-0.2.4-Setup.exe). Using the model "doubao-1-5-ui-tars-250428" from **volcengine**; the total token consumption is within the limit. Error logs: ``` INVOKE_RETRY_ERROR: Too many model invoke...