Emily
The colour is given for `RequiredArgumentNode`s and is defined in the client, so there is nothing you can do about it. The colours will go like this, changing to the next one...
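As context only, here is a minimal Brigadier sketch, assuming the discussion is about the required-argument nodes that the vanilla client colour-codes; the `setpos` command and its argument names are purely hypothetical. Each `argument(...)` builder produces a required argument node, and the client applies its own fixed colour cycle to each successive one.

```java
import com.mojang.brigadier.CommandDispatcher;
import com.mojang.brigadier.arguments.IntegerArgumentType;
import com.mojang.brigadier.builder.LiteralArgumentBuilder;
import com.mojang.brigadier.builder.RequiredArgumentBuilder;

public class ArgumentColourSketch {
    public static void main(String[] args) {
        CommandDispatcher<Object> dispatcher = new CommandDispatcher<>();

        // Each RequiredArgumentBuilder below produces a required argument node.
        // The client picks the colour for each of these nodes itself, cycling
        // through its hard-coded palette; the server has no say in it.
        dispatcher.register(
            LiteralArgumentBuilder.<Object>literal("setpos")
                .then(RequiredArgumentBuilder.<Object, Integer>argument("x", IntegerArgumentType.integer())
                    .then(RequiredArgumentBuilder.<Object, Integer>argument("y", IntegerArgumentType.integer())
                        .then(RequiredArgumentBuilder.<Object, Integer>argument("z", IntegerArgumentType.integer())
                            .executes(ctx -> 1)))));
    }
}
```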
My current idea for this is just a "simple" `/lp dump` that dumps a whole lot of info about both LuckPerms and the server into bytebin, not any kind of...
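Purely to illustrate the upload side (not what the dump itself would contain), here is a sketch using `java.net.http`, assuming bytebin's `POST /post` endpoint on the public `bytebin.lucko.me` instance; the payload is a placeholder.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DumpUploadSketch {
    public static void main(String[] args) throws Exception {
        // placeholder payload; a real /lp dump would gather LuckPerms + server info
        String payload = "{\"luckpermsVersion\":\"x.y.z\",\"platform\":\"...\"}";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://bytebin.lucko.me/post")) // assumed public bytebin instance
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        // bytebin responds with a JSON body containing the key of the stored paste
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```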
> Add possibility to modify/remove `meta` through bulkupdate?
> Example: `/lp bulkupdate users delete "meta == 14_12"`

That is possible with the current setup; meta nodes are structured like so...
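As a sketch of that structure using the LuckPerms API (the key/value pair here is hypothetical), a meta node is stored as an ordinary node whose key follows the `meta.<key>.<value>` pattern, which is what a bulkupdate constraint matches against:

```java
import net.luckperms.api.node.types.MetaNode;

public class MetaNodeSketch {
    public static void main(String[] args) {
        // a meta node with key "14" and value "12" is just a node whose
        // underlying permission string is "meta.14.12"
        MetaNode node = MetaNode.builder("14", "12").build();
        System.out.println(node.getKey()); // prints "meta.14.12"
    }
}
```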
Even if the int variant wasn't there, you would need to either cast to a short or put the number into a short variable to use the method (more specifically, narrowing...
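A minimal sketch of the issue, with a hypothetical `consume(short)` method standing in for the API being discussed: Java only performs implicit narrowing of constant ints in assignment context, never in a method call, so an int literal cannot be passed directly to a short parameter.

```java
public class NarrowingSketch {
    // hypothetical API method that only accepts a short
    static void consume(short value) {
        System.out.println(value);
    }

    public static void main(String[] args) {
        // consume(5);       // does not compile: the literal is an int, and method
        //                   // invocation does not allow implicit narrowing
        consume((short) 5);  // works: explicit cast
        short five = 5;      // works: constant assignment allows implicit narrowing
        consume(five);       // works: the argument is already a short
    }
}
```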
Casting is actively annoying, and when there is a method that does not require casting, it's very easy to miss and forget. Others might think differently, but those are my...
I personally think a good alternative might be removing (deprecating) `Component.text()`, `Component.translatable()` etc. and putting in place `ComponentBuilder.text()`, `ComponentBuilder.translatable()` and so on. Something as simple as just changing the interface...
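Purely as a sketch of the idea (none of this exists in adventure today, and `ComponentFactories` is a made-up name to avoid clashing with the existing `ComponentBuilder` interface), the proposed entry points could look roughly like this, delegating to the current factories for illustration:

```java
import net.kyori.adventure.text.Component;
import net.kyori.adventure.text.TextComponent;
import net.kyori.adventure.text.TranslatableComponent;

// hypothetical replacement entry points for the existing Component.* factories
interface ComponentFactories {
    static TextComponent.Builder text() {
        return Component.text(); // delegates to the existing factory
    }

    static TranslatableComponent.Builder translatable() {
        return Component.translatable();
    }
}
```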
This issue is for planning and discussing Spark's future integration into Paper, not HotSpot's ability to run in Docker or OpenJ9's performance metrics.
> Spark killed the Java machine on startup. It is still the case in October 2023.

I should mention that I am able to start and run a server running OpenJ9...
What makes you suspect there is a memory leak? Does the JVM crash running out of memory? What you are showing with those screenshots is not a memory leak; LuckPerms...
That sounds like a GC configuration problem. If you could provide the heap dump and the JVM flags used, it can be investigated further, but without any more info there...
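For reference, a sketch of the kind of flags that would capture that information on a HotSpot JVM (the heap size, paths and jar name are placeholders):

```
java -Xmx4G \
     -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/path/to/dumps \
     -Xlog:gc*:file=gc.log:time,uptime \
     -jar server.jar
```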