Brian Cullinan
Anybody try running this project through Cursor/Windsurf?
Can confirm that the script led me to the right answers using the .zip installation on a Mac M1.
Now that's the kind of thing I would do.
Please always identify your OS and hardware in these reports. Chips are coming out with very different capabilities, e.g. CUDA versus Apple arm64.
Not using llama_local, but still seeing this error. Tried gpt-4o, meta-llama, and hermes, all with the same result: it repeats endlessly, expecting a different response from the model, as if the output isn't being formatted...
Option+Shift+E on Mac. OMG, "Select Tall" is a typo.