qwq topic

Repositories tagged with the qwq topic:

grps_trtllm

161 stars · 11 forks · 161 watchers

A higher-performance OpenAI-compatible LLM service than vLLM serve: a pure C++ implementation built on GRPS+TensorRT-LLM+Tokenizers.cpp, supporting chat, function calling, AI agents, d...
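Since grps_trtllm exposes an OpenAI-compatible API, it can be queried with the standard OpenAI Python SDK. A minimal sketch follows; the base URL, API key, and model name are placeholders I have assumed, not values taken from the repository.

```python
# Minimal sketch: call an OpenAI-compatible chat endpoint such as the one
# grps_trtllm serves. The base_url, api_key, and model name are assumptions;
# substitute whatever your deployment actually uses.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:9997/v1",  # assumed address of the local server
    api_key="not-needed",                 # local servers typically ignore the key
)

response = client.chat.completions.create(
    model="qwq-32b",  # placeholder model name
    messages=[{"role": "user", "content": "Explain speculative decoding in one sentence."}],
)
print(response.choices[0].message.content)
```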

Sidekick

3.2k stars · 142 forks · 3.2k watchers

A native macOS app that allows users to chat with a local LLM that can respond with information from files, folders and websites on your Mac without installing any other software. Powered by llama.cpp...

Search-o1

1.1k stars · 97 forks · 1.1k watchers

🔍 Search-o1: Agentic Search-Enhanced Large Reasoning Models [EMNLP 2025]

WebThinker

1.4k stars · 135 forks · 1.4k watchers

[NeurIPS 2025] 🌐 WebThinker: Empowering Large Reasoning Models with Deep Research Capability

OllamaR

185 stars · 168 forks · 185 watchers

Ollama load-balancing server | A high-performance, easy-to-configure open-source load balancer that optimizes Ollama workloads. It helps improve application availability and response times while ensuring efficient use of system resources.
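Because the balancer fronts ordinary Ollama instances, clients can keep using the standard Ollama HTTP API and simply point requests at the balancer's address instead of an individual node. A minimal sketch, assuming the balancer listens on localhost:11434 and a model tagged "qwq" is pulled on the backends (both are assumptions, not project defaults):

```python
# Minimal sketch: send a normal Ollama /api/generate request through a
# load-balancer endpoint. The host, port, and model tag are assumptions.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # assumed balancer address
    json={"model": "qwq", "prompt": "Why is the sky blue?", "stream": False},
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```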

unthinking_vulnerability

32 stars · 0 forks · 32 watchers

To Think or Not to Think: Exploring the Unthinking Vulnerability in Large Reasoning Models

hogwild_llm

136 stars · 8 forks · 136 watchers

Official PyTorch implementation for Hogwild! Inference: Parallel LLM Generation with a Concurrent Attention Cache