llama_cpp_dart

Field 'context' has not been initialized.

Open ConnorDykes opened this issue 1 year ago • 14 comments

Here is my function. Can anyone give me any pointers on why the context is coming back not initialized?

   // Load the model file from the app assets and write it to a usable file path
   final ByteData data = await rootBundle.load('assets/ai_model.gguf');
   final String modelPath = await _writeToFile(data, 'ai_model.gguf');

   final loadCommand = LlamaLoad(
       path: modelPath,
       modelParams: ModelParams(),
       contextParams: ContextParams(),
       samplingParams: SamplerParams(),
       format: ChatMLFormat());

   final llamaParent = LlamaParent(loadCommand);
   await llamaParent.init();

   llamaParent.stream.listen((response) => print(response));
   llamaParent.sendPrompt("Why is the sky blue");
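
For reference, the snippet above depends on a _writeToFile helper that is not shown in the post. Below is a minimal sketch of what such a helper could look like, assuming the path_provider package is available (the names and implementation are illustrative, not the original poster's code): it copies the asset bytes into a temporary file so the native library can open the model from a real filesystem path.

   import 'dart:io';
   import 'dart:typed_data';
   import 'package:path_provider/path_provider.dart';

   /// Copies the raw bytes of a bundled asset into the app's temporary
   /// directory and returns the resulting file path.
   Future<String> _writeToFile(ByteData data, String fileName) async {
     final Directory dir = await getTemporaryDirectory();
     final File file = File('${dir.path}/$fileName');
     await file.writeAsBytes(
       data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes),
       flush: true,
     );
     return file.path;
   }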

ConnorDykes avatar Jan 01 '25 21:01 ConnorDykes

Does anyone have an idea how to resolve this problem? We are stuck at initializing the model. To add more insight:

I tried building llama.cpp directly on my Android device using Termux, and it is able to execute my .gguf model on the device, so there should not be any compatibility problem with running the model through llama.cpp. What worries me is that I may be passing the wrong parameters to load_model_from_file, because after the call my model pointer is zero, which in C++ means null. I debugged it by placing a few printf statements along the way in the C++ library, but to no avail.

If anyone is able to run this package's example, please help out. Thank you.

ndnam198 avatar Jan 09 '25 13:01 ndnam198

In my case the issue is that the internal DynamicLibrary.open() call fails; in the error handling a dispose() call is then made, which produces that error message.
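
If you want to rule that part out in isolation, here is a small standalone diagnostic sketch (not part of the package) that attempts the open call directly; dart:ffi's DynamicLibrary.open throws an ArgumentError when the library file is missing or has unresolved native dependencies, which is far easier to interpret than the later "context" error:

   import 'dart:ffi';

   void main() {
     // Adjust the file name/path to your platform and build output.
     const path = 'llama.dll';
     try {
       final lib = DynamicLibrary.open(path);
       print('Opened $path: $lib');
     } on ArgumentError catch (e) {
       print('Failed to open $path: $e');
     }
   }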

rekire avatar Jan 16 '25 19:01 rekire

I checked your sample code; in my case I had to add the line Llama.libraryPath = "llama.dll"; to make it work. I took the binaries from the latest release on GitHub (I chose the Vulkan release).

With the llama-cli command line tool I was able to verify that the model works as intended and that I have no missing dependencies.

In my case my app crashes silently in llama.dart, in the method _initializeLlama(...), when calling lib.llama_load_model_from_file(...).
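
For readers hitting the same thing: a minimal sketch of where the libraryPath assignment fits, called before constructing LlamaParent or calling init(). The package import path and the non-Windows file names are assumptions (only llama.dll is confirmed in this thread), so adjust them to the binaries you actually ship:

   import 'dart:io';
   import 'package:llama_cpp_dart/llama_cpp_dart.dart';

   /// Points the bindings at a prebuilt llama.cpp shared library.
   /// Only "llama.dll" is confirmed in this thread; the other file
   /// names are common defaults and may differ in your setup.
   void configureLlamaLibrary() {
     if (Platform.isWindows) {
       Llama.libraryPath = "llama.dll";
     } else if (Platform.isMacOS || Platform.isIOS) {
       Llama.libraryPath = "libllama.dylib";
     } else {
       Llama.libraryPath = "libllama.so";
     }
   }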

rekire avatar Jan 18 '25 15:01 rekire

I got it working (also with your code). You need a very specific build: https://github.com/ggerganov/llama.cpp/releases/tag/b4138; the changelog mentions the commit hash 42ae10bbcd7b56f29a302c86796542a6dadf46c9. I hope that helps you too.

rekire avatar Jan 18 '25 17:01 rekire

Any updates? @rekire, your suggestion indeed helped: with that particular build I could successfully run test.dart in the example/ directory (in the terminal). However, the rest of the code still fails: chat.dart throws the same error, as does the code I'm trying to use within a Flutter widget. I tested this on macOS and iOS (device and simulator), but the issue persists, and it looks like it comes from the library rather than from the model or the build.

@rekire I've noticed you forked the repos, so I hope you can find out what causes the issue.

maksymmatviievskyi avatar Jan 22 '25 15:01 maksymmatviievskyi

Depending on the sample code you might need to add Llama.libraryPath = "llama.dll";. I think I am in a similar situation to yours: I still have some issues with the code.

I'm playing a bit with CI automation in my fork. I aim to keep it automatically in sync with the llama.cpp repo via a cron job. My guess was that some of the issues could be fixed in the llama.cpp repo; otherwise, it is quite possible that the Dart code has some issues. I don't have much time and I am maintaining another Flutter plugin, so don't expect much. I am still not sure whether I can actually build an app with AI included.

rekire avatar Jan 22 '25 16:01 rekire

I use .dylib because I expect to run the application on ARM (primarily iOS devices, but hopefully Android as well), so I provide a library path for that instead. I'll let you know if I manage to run inference locally.

maksymmatviievskyi avatar Jan 23 '25 10:01 maksymmatviievskyi

That's great, so you are familiar with iOS. Is there a reason not to publish the .dylib within the package? I am aware that all libs need to be signed, but is that a real problem if you get them precompiled?

rekire avatar Jan 23 '25 10:01 rekire

Well, I believe that's a question for the maintainers, but having read through the issue history, I understand their explanation as follows:

  1. llama.cpp develops rapidly, so they would have to track its versions and update the bundled library correspondingly often, while having limited availability themselves.
  2. It is a generally unsafe practice, since anyone could compile malicious executables into the published binaries.

Hence, apparently, it is better to pull from the original source and build it yourself.

maksymmatviievskyi avatar Jan 23 '25 16:01 maksymmatviievskyi

I would try to automate that part with CI so that it won't be an issue. I personally think it is a huge waste of resources for every developer to install all the dependencies just to create the same binary over and over again. IMHO nobody does a full code review anyway, so I am personally fine with getting it precompiled, as long as I could do it myself. For Windows and Linux I would take the binaries from the official releases.

I still need to find out whether that can be done with GitHub Actions at all. macOS runners tend to be quite expensive.

rekire avatar Jan 23 '25 19:01 rekire

I managed to get the CI to update the ffigen code: https://github.com/rekire/llama_cpp_dart_fork/commit/849fe26995aa7944011998ffe6710955a2cf6e93

rekire avatar Jan 25 '25 07:01 rekire

See #54 to track my progress

rekire avatar Jan 29 '25 20:01 rekire

Thanks, https://github.com/ggerganov/llama.cpp/releases/tag/b4138 worked for me as well, but what is the latest version that works with this package?

EnderRobber101 avatar Apr 27 '25 20:04 EnderRobber101

@rekire @EnderRobber101 @netdur @maksymmatviievskyi

I am getting the same error. Can you guys help me with the solution?

Thank you in advance!!

jaykukadiya99 avatar Apr 30 '25 09:04 jaykukadiya99