손희준

8 comments of 손희준

Looks like `ghc-options: dynamic` in cabal.project is not honored either.

Tested with the `-Weverything` flag and got the same results. None of the flags in `~/.cabal/config`, `project-name.cabal`, or `cabal.project` worked with a custom build-type project. I'm using cabal-install 3.4.0.0 rc7 with GHC 8.10.4.
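For reference, a minimal reproduction sketch (the package name `project-name` and the flag are stand-ins; `-v2` makes cabal print the underlying GHC invocations so you can check whether the flag ever reaches them):

```
cat >> cabal.project <<'EOF'
package project-name
  ghc-options: -Weverything
EOF
cabal build -v2 2>&1 | grep -c -- '-Weverything'  # 0 hits: the flag never reaches GHC
```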

Try "setting" HLS by pressing 'S' after selecting HLS 2.2.0.0(or 2.3.0.0, which is the latest version)

Adding the verbose option (`ghcid --test=Main.main Main.hs -v`) allows `hDuplicate` to work properly (it prints the content of test.txt). However, even with verbose enabled, about 20% of triggers fail with the same message (`*** Exception: : hDuplicate:...
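In other words, the failure reproduces roughly like this (assuming a `Main.main` test action that re-reads test.txt through a duplicated handle):

```
ghcid --test=Main.main Main.hs      # fails every time: *** Exception: hDuplicate: ...
ghcid --test=Main.main Main.hs -v   # usually prints test.txt, still fails on ~20% of reloads
```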

Those not familiar with the frontend can patch the backend to ignore parameters from each request.

```
sed -i 's/data, "top_k",/data, "xxxx",/' tools/server/server.cpp
sed -i 's/data, "top_p",/data, "xxxx",/' tools/server/server.cpp
sed -i 's/data,...
```

Key differences: https://platform.openai.com/docs/guides/migrate-to-responses

Things that need to be done:

1. Require explicit `store: false`. The OpenAI Responses API defaults `store` to `true`. AFAIK llama.cpp does not handle states, so documentation and assertions...
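As an illustration, a client hitting a llama.cpp-style server would have to opt out explicitly (the port and model name are placeholders; whether the server rejects the default or merely documents it is the open question here):

```
curl http://localhost:8080/v1/responses \
  -H "Content-Type: application/json" \
  -d '{"model": "llama", "input": "Hello", "store": false}'
```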

I roughly implemented text completions: https://github.com/openingnow/llama.cpp/commit/df53bfe2f173ae5c41ae0545c47ed93f75fc50c2

I need to decide whether the code for `/v1/chat/completions` and `/v1/responses` should be separated before proceeding further, i.e., keeping `to_json_oaicompat_response` separate from `to_json_oaicompat_chat`. Since...
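The two request bodies differ enough in shape that the question is not cosmetic, e.g. (body fields per the OpenAI spec, `model` omitted for brevity; localhost:8080 is a placeholder):

```
# /v1/chat/completions takes a list of role-tagged messages
curl http://localhost:8080/v1/chat/completions \
  -d '{"messages": [{"role": "user", "content": "Hi"}]}'
# /v1/responses takes a flat "input" plus Responses-only fields like "store"
curl http://localhost:8080/v1/responses \
  -d '{"input": "Hi", "store": false}'
```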

[#16599](https://github.com/ggml-org/llama.cpp/pull/16599) introduced this bug.