waveBoom
Note: a follow-up on using @raw: https://github.com/Tencent/APIJSON/releases/tag/4.4.0
> You can currently use @raw; see the general documentation and the RAW_MAP configuration in APIJSONBoot's DemoSQLConfig. @combine will be enhanced later to support multi-level nested AND/OR/NOT conditions.

Thanks tommy. With this approach, given ``RAW_MAP.put("commentWhereItem1","(`Comment`.`userId` = 38710 AND `Comment`.`momentId` = 470)");``, is there a way to pass 38710 and 470 as dynamic parameters?
> @28-HuaSheng Complex condition combinations are now supported: [2cc13da](https://github.com/Tencent/APIJSON/commit/2cc13dab41f658f729eb34684bd6c8f64d8fd0c2)
>
> [http://localhost:8080/get/User[]](http://localhost:8080/get/User%5B%5D)
>
> ```json
> {
>   "User": {
>     "date>": "2017-02-01 11:21:50",
>     "name*~": "a",
>     "contactIdList": 82001,
>     "tag$": "%p%i%",
>     "@combine":...
> ```
Hi, thanks for the answer. Where is the `Log.DEBUG = false` you mentioned supposed to be configured? The documentation doesn't seem to cover it. Also, looking at the java-demo code, the `apijson.Log` class has `public static boolean DEBUG = true;` hard-coded, with no configuration hook provided either?
I took a look; it seems that just wasn't updated. That field was removed in version 3.5.1.
```
openai.APITimeoutError: Request timed out.
ERROR:server.py:handle_mentions:An error occurred: Traceback (most recent call last):
  File "/Users/bobo/practise/myGPTReader/venv/lib/python3.9/site-packages/llama_index/core/async_utils.py", line 29, in asyncio_run
    loop = asyncio.get_event_loop()
  File "/usr/local/Cellar/python@3.9/3.9.7_1/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/events.py", line 642, in get_event_loop
    raise RuntimeError('There is...
```
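The `RuntimeError` at the end of the truncated traceback is typically asyncio's "There is no current event loop" error: in Python 3.9, `asyncio.get_event_loop()` raises it when called from a worker thread (such as a bot's message handler) that has no loop registered. A minimal sketch of the usual workaround, creating and registering a loop for the current thread (the helper and coroutine names here are hypothetical, not from myGPTReader):

```python
import asyncio
import threading

async def answer():
    # stand-in for the async query that llama_index runs internally
    return "ok"

def run_coro_in_thread(coro):
    # asyncio.get_event_loop() raises RuntimeError in a non-main thread
    # that has no event loop; creating one explicitly avoids the crash
    # shown in the traceback above.
    loop = asyncio.new_event_loop()
    try:
        asyncio.set_event_loop(loop)
        return loop.run_until_complete(coro)
    finally:
        loop.close()
        asyncio.set_event_loop(None)

results = []
t = threading.Thread(target=lambda: results.append(run_coro_in_thread(answer())))
t.start()
t.join()
print(results)  # ['ok']
```

The same effect can be had by calling `asyncio.run(coro)` inside the thread, which creates and tears down a fresh loop itself.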
Before, my summary query engine was:

```python
summary_query_engine = summary_index.as_query_engine(
    response_mode="tree_summarize",
    use_async=True,
)
```

When I change the summary_query_engine to:

```python
httpclient = DefaultHttpxClient()
llm_tree_summary = llmOpenAi(temperature=0, model=model_name, http_client=httpclient)
service_context_tree_summary...
```
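Separately from swapping in a custom HTTP client, a common mitigation for transient `openai.APITimeoutError` failures is retrying with exponential backoff. A minimal stdlib sketch (the helper name and the use of the built-in `TimeoutError` as a stand-in exception are illustrative assumptions, not part of the original code):

```python
import time

def retry(fn, attempts=3, base_delay=1.0, exceptions=(TimeoutError,)):
    """Call fn(), retrying on the given exceptions with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise  # out of attempts: re-raise the last error
            time.sleep(base_delay * 2 ** attempt)

# Simulate a call that times out twice, then succeeds on the third try.
calls = {"n": 0}
def flaky_query():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated request timeout")
    return "ok"

print(retry(flaky_query, base_delay=0.01))  # ok
```

In a real handler, `flaky_query` would wrap the query-engine call, and `exceptions` would include `openai.APITimeoutError`.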
The previous code:

```python
from llama_index.llms.openai import OpenAI as llmOpenAi

llm = llmOpenAi(temperature=0, model=model_name)
service_context = ServiceContext.from_defaults(llm=llm)
index = VectorStoreIndex.from_documents(documents, service_context=service_context)
summary_index = SummaryIndex.from_documents(documents, service_context=service_context)
keyword_index = SimpleKeywordTableIndex.from_documents(documents, service_context=service_context)
vector_query_engine = vector_index.as_query_engine(text_qa_template=prompt, similarity_top_k=5)...
```