feat: Model Context Protocol
Make LiquidBounce support the Model Context Protocol (MCP), currently one of the hottest topics in the large language model community (Claude is one such LLM). You just configure the MCP server in your LLM software, then launch the game and enable this module. This lets the LLM access your gameplay data and even interact with your game. A configuration tutorial is in the mcpdoc folder — check it out there. (This module currently has several unknown bugs that still need fixing. Additionally, some libraries used by the MCP library duplicate the functionality of our existing libraries with different implementations, so integrating the MCP library unmodified would grow the packaged LiquidBounce from around 44-45 MB to approximately 72 MB, based on testing.)
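As a rough illustration only (the actual setup lives in the mcpdoc folder, and the exact shape depends on your client), MCP clients are typically pointed at a server through a small JSON config; the server name, URL, and port below are placeholders, not the module's real values:

```json
{
    "mcpServers": {
        "liquidbounce": {
            "url": "http://localhost:8080/sse"
        }
    }
}
```

Clients that spawn stdio-based servers use `"command"`/`"args"` keys instead of `"url"`; consult your client's documentation for which transport it supports.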
Why do you need this feature in the game? What are its goals?
Thanks for the explanation. I understand everything. From your point of view, the functionality is really useful, although limited.
What about exposing an extra eval function in the script API for Claude?
Edit: I tend to implement this through existing infrastructure: the script API plus the debug option (DAP) and the "Claude Debugs For You" VS Code extension, because there are already MCP servers that can connect to an active VS Code debugging session and evaluate expressions.
Edit: I personally prefer the idea of an LLM in the loop for hacked-client development (a faster code/validation loop) over an LLM in the loop for hacking itself. (Though hacking could use some models made specifically for hacking that aren't LLMs.)
A simple demo using the current vanilla LiquidBounce v0.30.1 with:
- a script loaded in debug options
- the VS Code extension Claude Debugs For You
- the VS Code extension Roo Code (with the MCP server configured properly)
https://github.com/user-attachments/assets/bb6dc589-5464-4b13-b60a-fd888489beef
Just load this script with `.script debug repldummy.js DAP true false 4242`, and attach from VS Code:
```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Attach",
            "type": "node",
            "request": "attach",
            "debugServer": 4242,
            "restart": true,
            "sourceMaps": true
        }
    ]
}
```
Place a breakpoint at the line with the comment, then run the script and type `.t repl-dummy`; VS Code should pause at that line while the game keeps running. (Note: everything in this script executes on an unsafe thread because this is just a demo, so you might want to do it on the render thread with extra utilities.)
```js
"use strict";
Object.defineProperty(exports, "__esModule", { value: true });
const script = registerScript.apply({
    name: "repl-dummy",
    version: "1.0.0",
    authors: ["commandblock2"]
});
script.registerModule({
    name: "repl-dummy",
    description: "Please do .script debug repldummy(assuming that's the file name), and place your breakpoint in the Hi print statement",
    category: "Client",
}, (mod) => {
    mod.on("enable", () => {
        // @ts-expect-error
        UnsafeThread.run(() => {
            Client.displayChatMessage(`Hi, ${mc.player}`); // Place your breakpoint here
        });
    });
});
//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoicmVwbGR1bW15LmpzIiwic291cmNlUm9vdCI6IiIsInNvdXJjZXMiOlsiLi4vc3JjL3JlcGxkdW1teS50cyJdLCJuYW1lcyI6W10sIm1hcHBpbmdzIjoiOztBQUVBLE1BQU0sTUFBTSxHQUFHLGNBQWMsQ0FBQyxLQUFLLENBQUM7SUFDaEMsSUFBSSxFQUFFLFlBQVk7SUFDbEIsT0FBTyxFQUFFLE9BQU87SUFDaEIsT0FBTyxFQUFFLENBQUMsZUFBZSxDQUFDO0NBQzdCLENBQUMsQ0FBQztBQUVILE1BQU0sQ0FBQyxjQUFjLENBQUM7SUFDbEIsSUFBSSxFQUFFLFlBQVk7SUFDbEIsV0FBVyxFQUFFLHVIQUF1SDtJQUNwSSxRQUFRLEVBQUUsUUFBUTtDQUVyQixFQUFFLENBQUMsR0FBRyxFQUFFLEVBQUU7SUFDUCxHQUFHLENBQUMsRUFBRSxDQUFDLFFBQVEsRUFBRSxHQUFHLEVBQUU7UUFDbEIsbUJBQW1CO1FBQ25CLFlBQVksQ0FBQyxHQUFHLENBQUMsR0FBRyxFQUFFO1lBQ2xCLE1BQU0sQ0FBQyxrQkFBa0IsQ0FBQyxPQUFPLEVBQUUsQ0FBQyxNQUFNLEVBQUUsQ0FBQyxDQUFBLENBQUMsNkJBQTZCO1FBQy9FLENBQUMsQ0FDQSxDQUFBO0lBQ0wsQ0FBQyxDQUFDLENBQUE7QUFDTixDQUFDLENBQUMsQ0FBQSIsInNvdXJjZXNDb250ZW50IjpbImltcG9ydCB7IE1hdHJpeDJkIH0gZnJvbSBcImp2bS10eXBlcy9vcmcvam9tbC9NYXRyaXgyZFwiO1xuXG5jb25zdCBzY3JpcHQgPSByZWdpc3RlclNjcmlwdC5hcHBseSh7XG4gICAgbmFtZTogXCJyZXBsLWR1bW15XCIsXG4gICAgdmVyc2lvbjogXCIxLjAuMFwiLFxuICAgIGF1dGhvcnM6IFtcImNvbW1hbmRibG9jazJcIl1cbn0pO1xuXG5zY3JpcHQucmVnaXN0ZXJNb2R1bGUoe1xuICAgIG5hbWU6IFwicmVwbC1kdW1teVwiLFxuICAgIGRlc2NyaXB0aW9uOiBcIlBsZWFzZSBkbyAuc2NyaXB0IGRlYnVnIHJlcGxkdW1teShhc3N1bWluZyB0aGF0J3MgdGhlIGZpbGUgbmFtZSksIGFuZCBwbGFjZSB5b3VyIGJyZWFrcG9pbnQgaW4gdGhlIEhpIHByaW50IHN0YXRlbWVudFwiLFxuICAgIGNhdGVnb3J5OiBcIkNsaWVudFwiLFxuXG59LCAobW9kKSA9PiB7XG4gICAgbW9kLm9uKFwiZW5hYmxlXCIsICgpID0+IHtcbiAgICAgICAgLy8gQHRzLWV4cGVjdC1lcnJvclxuICAgICAgICBVbnNhZmVUaHJlYWQucnVuKCgpID0+IHtcbiAgICAgICAgICAgIENsaWVudC5kaXNwbGF5Q2hhdE1lc3NhZ2UoYEhpLCAke21jLnBsYXllcn1gKSAvLyBQbGFjZSB5b3VyIGJyZWFrcG9pbnQgaGVyZVxuICAgICAgICB9XG4gICAgICAgIClcbiAgICB9KVxufSkiXX0=
```
Attached is the prompt history: roo_task_jun-2-2025_12-14-15-am.md
Implementing this via scripts is not a simple task, and I estimate that less than 1% of users would actually use scripts (at least until the marketplace written by izuna gets merged). Moreover, BaritoneAPI isn't easily accessible within JavaScript.
My original goal in developing this feature was to allow players to interface with BaritoneAPI through an LLM, enabling the LLM to control player actions. This way, we could accomplish many moderately complex tasks without relying on Alto Clef.
However, you're right—if our LLM can't be used directly in-game, the practicality of this feature would diminish. Unfortunately, I don't have a perfect solution for this yet (though perhaps we could achieve it via an LLM interface, similar to AutoChatGame).
Next, I'll try to connect the feature to BaritoneAPI and then test how well it works.
I'd say calling Baritone through the script API would be much easier, because we basically have a REPL. But presenting the client itself as an MCP server would be harder if we were going to use the script API. As for using LLMs to write Baritone commands, I'd say it's almost a valid task; the only challenge is that current LLMs don't do very well at understanding a 3D world like the terrain around you.
I don’t deny the feasibility of implementing MCP through scripts, but as I mentioned, the user base for scripts is too small, which would result in this feature being little-known.
Additionally, scripts have another issue: they perform poorly in complex environments. If the goal is merely to achieve the existing functionalities, scripts are entirely viable—they might even perform better than directly embedding the feature in a mod. However, the appeal of MCP goes far beyond that. We aim to create an MCP capable of handling most non-emergency LLM-controlled scenarios in the game, which involves invoking complex MCP toolchains—something far beyond what lightweight scripts can achieve.
As you just pointed out regarding LLMs' weakness with 3D understanding, this would require some fixed analysis logic. If we need to implement such logic, I believe Kotlin would be more efficient than JS.
From my understanding, your idea is to use scripts to independently implement an MCP server. Please correct me if I’m wrong.
The existing MCP library uses a dynamic addTool() method for its Server. Perhaps we could leverage this to provide an interface for scripts? idk
I don't intend to implement an MCP server with a script. I would like to provide the script API as a tool (in a REPL sense), so that the LLM can use more than predefined tools, because the script API can do almost everything. The LLM should be able to execute/evaluate whatever expression it wants, and maybe create its own tools on the fly for frequently used expressions/snippets or complex logic.
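As a rough sketch of that idea in plain Node.js (the registry, `makeToolRegistry`, and the tool names are made up for illustration; a real version would sit behind LiquidBounce's MCP server rather than a local Map):

```javascript
// Sketch: expose one generic "eval" tool, plus a way for the LLM to
// promote a frequently used snippet into a named tool on the fly.
// The Map here is a stand-in for a real MCP server's tool registry.
function makeToolRegistry() {
    const tools = new Map();

    // The single predefined tool: evaluate an arbitrary JS expression.
    tools.set("eval", (args) =>
        Function(`"use strict"; return (${args.expression});`)());

    return {
        call(name, args) {
            const tool = tools.get(name);
            if (!tool) throw new Error(`unknown tool: ${name}`);
            return tool(args);
        },
        // Let the LLM define a new tool from a snippet it found useful.
        define(name, expression) {
            tools.set(name, (args) =>
                Function("args", `"use strict"; return (${expression});`)(args));
        },
        list() {
            return [...tools.keys()];
        },
    };
}

const registry = makeToolRegistry();
console.log(registry.call("eval", { expression: "6 * 7" })); // 42
registry.define("square", "args.x * args.x");
console.log(registry.call("square", { x: 9 }));              // 81
```

The point of the sketch is the shape, not the sandboxing: a production version would need to decide which thread the expression runs on and what the LLM is allowed to touch.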
Finally, I cannot wait to automate cheating in Minecraft!
In that case, it seems our ideas don't conflict, which makes everything easier to handle. Next, I'll first implement the core features of MCP (like the auto-pathfinding bot and automatic module switching), then consider the ScriptAPI aspects. However, since I don't frequently use scripts and don't use VS Code either (admittedly, I don't really understand REPLs either, despite having researched them), I'm unclear about some key parts of the ScriptAPI and would appreciate your input.
(By the way, the current MCP Baritone goto feature can't be invoked by small-scale models because their response formats don't meet the requirements; I'm currently working on fixing this issue.)
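One common workaround for that class of problem (a sketch, not what this PR actually does; the goto argument shape is an assumption): instead of requiring a bare JSON reply, leniently extract the first JSON object from the model's text and validate the fields before invoking the tool.

```javascript
// Sketch: tolerate small models wrapping their tool call in prose.
// Extract the outermost {...} span, parse it, and validate the fields
// a hypothetical "goto" tool would need (integer x/y/z coordinates).
function extractGotoCall(reply) {
    const start = reply.indexOf("{");
    const end = reply.lastIndexOf("}");
    if (start === -1 || end <= start) return null;
    let parsed;
    try {
        parsed = JSON.parse(reply.slice(start, end + 1));
    } catch {
        return null;
    }
    for (const key of ["x", "y", "z"]) {
        if (!Number.isInteger(parsed[key])) return null;
    }
    return parsed;
}

const sloppy = 'Sure! Here is the call: {"x": 100, "y": 64, "z": -20} - hope that helps';
console.log(extractGotoCall(sloppy));         // { x: 100, y: 64, z: -20 }
console.log(extractGotoCall("no json here")); // null
```

Returning `null` instead of throwing lets the caller re-prompt the model with a format reminder, which is usually enough for small models.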
zeroday already did that with baritone lol. it just goes to the nearest player and attacks them.
We don't have enough utilities in LiquidBounce to finish this PR, and since a PR can't be too big, I'll continue it as soon as I complete all the utilities it needs and they are merged. I'm also waiting for #6241 to be merged.
I really doubt that that PR will ever be merged. It adds 5 MB to the binary while giving no obvious advantage.
It has no obvious advantage, but Ktor is easier to develop with than Netty (according to MukjepScarlet).
I highly recommend turning this into a script. I'm closing this pull request because we won't be merging it for obvious reasons.
The MCP dependency has been updated by me (since 0.7.0); it now contains very few extra libs.