Yi Lu
It looks like the BLOB size limit we get is 2 MB. If we split the data per monitor, and then further split out incident, last-12-hours, and long-term history rows, each monitor uses three rows. The table definition is as follows; `type` can be: 0. meta_data 1. incident 2. 12h_data 3. history_data. How we partition is also constrained by the maximum number of queries (50 QPW), which probably needs more thought; let's see whether batch operations can optimize that. Also, combining this with https://developers.cloudflare.com/d1/sql-api/query-json/ and operating on JSON directly to get objects back might reduce the CPU time spent on conversion.
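As a minimal sketch of that layout, assuming a single table keyed by monitor and row type (the table name `monitor_data`, the column names, and the `createSchema` helper are placeholders for illustration, not a settled schema):

```typescript
import type { D1Database } from "@cloudflare/workers-types";

// One row per (monitor, type) pair, so each BLOB stays under the ~2 MB limit.
// type: 0 = meta_data, 1 = incident, 2 = 12h_data, 3 = history_data
export async function createSchema(db: D1Database): Promise<void> {
  await db.exec(
    "CREATE TABLE IF NOT EXISTS monitor_data (" +
      "monitor_id TEXT NOT NULL, " +
      "type INTEGER NOT NULL, " +
      "data BLOB NOT NULL, " +
      "PRIMARY KEY (monitor_id, type))"
  );
}
```

Reading one monitor back is then a single `SELECT ... WHERE monitor_id = ?`, and the per-type rows map directly onto the four `type` values above.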
> The query-json feature D1 provides does look like a decent option, but it would mean rewriting a lot of code, and the tight coupling makes it harder to migrate to another storage backend.

You could put a DAO layer in front to adapt it; since you want to support multiple platforms anyway, these optimizations are still worth considering. The main problem is that a SQL statement is limited to 100 KB at most, so unless you have some other way of uploading, you simply can't write the data in. My current idea is this: the bulk of it is the data arrays anyway, so reads just pull everything out in one go, while inserts, updates, and deletes get cached and are then applied as a batch (forgive me, I haven't written much TS; this is only a sketch):

```typescript
import type { D1Database, D1PreparedStatement } from "@cloudflare/workers-types"

export enum ArrayDAOType { meta = 0, incident = 1, recent = 2, history = 3 }
// ...
```
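Filling in the rest of that idea, here is roughly what the buffered read/write path could look like against the hypothetical `monitor_data(monitor_id, type, data)` table sketched earlier; the `ArrayDAO` class, its method names, and the upsert SQL are all illustrative, not taken from any actual implementation:

```typescript
import type { D1Database, D1PreparedStatement } from "@cloudflare/workers-types";

// Shape of one stored row; `data` holds the serialized array blob.
// (Whether D1 hands BLOB columns back as an ArrayBuffer or a byte array
// should be double-checked against the current D1 type-conversion docs.)
export interface MonitorRow {
  type: number;
  data: ArrayBuffer | number[];
}

export class ArrayDAO {
  // Writes are buffered here and sent together in a single batch.
  private pending: D1PreparedStatement[] = [];

  constructor(private db: D1Database) {}

  // Read path: pull every row for a monitor in one query.
  async readAll(monitorId: string): Promise<MonitorRow[]> {
    const { results } = await this.db
      .prepare("SELECT type, data FROM monitor_data WHERE monitor_id = ?1")
      .bind(monitorId)
      .all<MonitorRow>();
    return results;
  }

  // Write path: queue an upsert instead of issuing it immediately.
  queueUpsert(monitorId: string, type: number, data: ArrayBuffer): void {
    this.pending.push(
      this.db
        .prepare(
          "INSERT INTO monitor_data (monitor_id, type, data) VALUES (?1, ?2, ?3) " +
            "ON CONFLICT (monitor_id, type) DO UPDATE SET data = excluded.data"
        )
        .bind(monitorId, type, data)
    );
  }

  queueDelete(monitorId: string, type: number): void {
    this.pending.push(
      this.db
        .prepare("DELETE FROM monitor_data WHERE monitor_id = ?1 AND type = ?2")
        .bind(monitorId, type)
    );
  }

  // Flush all buffered statements as one batch; D1 executes a batch in a
  // single round trip, which keeps the per-invocation query count down
  // compared to issuing every statement on its own.
  async flush(): Promise<void> {
    if (this.pending.length === 0) return;
    const stmts = this.pending;
    this.pending = [];
    await this.db.batch(stmts);
  }
}
```

With this shape, a request handler can read once at the start, mutate the in-memory objects, queue the resulting writes, and call `flush()` once before returning, so the write cost is a single `batch()` call regardless of how many rows changed.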
It has been too long for this to get merged, so I'd recommend building it yourself if possible.
Great to hear that! I compiled it and gave it a try, but something is just wrong... I set the TLB mode to virtual and registered the callback, like below: ```python def hook_tlb_fill(uc :...
No, it only happens when the TLB mode is VIRTUAL and the hook is set.
I just tested to see whether it needs to be cleared (I learned about the function from the wiki). I couldn't find any documentation on what the function should return....
OK. But the problem seems to happen before the callback. It should call the callback at least once, but I didn't see any output from it. I'd give...
I've found the problem. Please see the comment on the commit [here](https://github.com/unicorn-engine/unicorn/pull/2037/commits/81938f780ceee224f471537ae1dba89536dc97f8#r1810834858). Also, I used the wrong callback prototype, so it didn't get called. :-)
But there is still something strange. When I just set paddr = vaddr and always return true in the TLB callback, it succeeds and executes the first time, but the...
Here is a minimal example:

```python
from unicorn import *
from unicorn.riscv_const import *

# The program in C; no read from memory after optimization
"""
int a[4096];
int b[4096];...
```