Jesse Jojo Johnson
Note: I don't think this has anything to do with the duration. I first ran it with the duration and had the same problem. As I said before, I ran...
Make sure your XML has `app:actionViewClass` rather than the default `android:actionViewClass`.
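For anyone hitting this, here's a minimal sketch of the menu resource (the id, title, and action-view class are placeholders; the key part is the `res-auto` namespace declaration that `app:` attributes require):

```xml
<!-- Hypothetical menu resource for illustration, e.g. res/menu/menu_main.xml. -->
<menu xmlns:android="http://schemas.android.com/apk/res/android"
      xmlns:app="http://schemas.android.com/apk/res-auto">
    <item
        android:id="@+id/action_search"
        android:title="@string/search"
        app:showAsAction="ifRoom"
        app:actionViewClass="androidx.appcompat.widget.SearchView" />
</menu>
```

On the older support library the class would be `android.support.v7.widget.SearchView`; either way, AppCompat reads the `app:` namespace rather than the `android:` one.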
I have a similar error; here's my stack trace:

    10-23 10:16:57.378 9019-9772/? E/ExoPlayerImplInternal: Source error.
    com.google.android.exoplayer2.upstream.HttpDataSource$HttpDataSourceException: Unable to connect to http://127.0.0.1:38026/https%3A%2F%2Fcdn.iwillnotrevealmycompanynamehere.com%2Fmedia%2Fchurch%2Fmultimedia%2Fd80609188b444eb5aa91e28f6e9cfc2a.mp4
        at com.google.android.exoplayer2.upstream.DefaultHttpDataSource.open(:203)
        at com.google.android.exoplayer2.upstream.DefaultDataSource.open(:123)
        at com.google.android.exoplayer2.source.ExtractorMediaPeriod$ExtractingLoadable.load(:631)
        at com.google.android.exoplayer2.upstream.Loader$LoadTask.run(:295)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1162)
        ...
Does anyone have an alternative with ESM import compatibility? It's 2023 and I can't seem to find any. :( @LeaVerou, what did you switch to?
This was quick! 😅 They've included a note in the README indicating that compatibility with llama.cpp is actively desired. :) EDIT: related HN thread: https://news.ycombinator.com/item?id=35629127
@dansinboy are you using the default server binary that comes with llama.cpp or a binding?
Same issue as @ThatCoffeeGuy. My model is the old style, so it doesn't work with binaries built after the breaking change. Edit: This script worked on the 7B Alpaca model...
In interactive/chat mode, sometimes `User:` does not appear and I need to manually type in my nickname
I've experienced something similar with both older versions and the latest version of llama.cpp. In interactive mode, the conversation sometimes hangs and only continues when you hit ENTER. See screenshot below. Circled...
> @jessejohnson @hengjiUSTC Very likely this is what we call the "context swap" - it occurs when the context is full and it takes a few seconds for the generation...
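That matches what I see. For anyone else landing here, a toy sketch of the idea as I understand it (simplified and illustrative, not the actual llama.cpp code; `n_ctx` and `n_keep` mirror the real parameter names):

```cpp
#include <cstdio>
#include <vector>

// Toy illustration of the "context swap": when the token window is full,
// keep the first n_keep tokens (the pinned prompt), drop the oldest half
// of the rest, and continue with what remains. The dropped half has to be
// re-evaluated on the next pass, which is the multi-second pause users see.
int main() {
    const int n_ctx  = 8;  // toy context size
    const int n_keep = 2;  // tokens pinned from the prompt
    std::vector<int> ctx = {1, 2, 3, 4, 5, 6, 7, 8};  // full window

    if ((int) ctx.size() >= n_ctx) {
        const int n_left = (int) ctx.size() - n_keep;
        // pinned prefix plus the most recent half of the remainder
        std::vector<int> swapped(ctx.begin(), ctx.begin() + n_keep);
        swapped.insert(swapped.end(), ctx.end() - n_left / 2, ctx.end());
        ctx = swapped;
    }

    for (int t : ctx) std::printf("%d ", t);
    std::printf("\n");  // prints: 1 2 6 7 8
}
```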
> What are the model / eval parameters you're using?

All defaults. I'm running the 7B 4-bit quantized LLaMA model. I also have a 7B 4-bit quantized Alpaca model, both...
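Concretely, "all defaults" means an invocation along these lines (the model path is just where my file happens to live; `-i` enables interactive mode and `-r` is the reverse prompt that hands control back when the model emits `User:`):

```sh
./main -m ./models/7B/ggml-model-q4_0.bin -i -r "User:"
```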