llama3:
Can we pull llama3 using this genai-stack? I wasn't sure whether the Docker files are configured to pull llama3. I have pulled and used llama2 before.
I used llama3 with my Ollama installation. In my case, Ollama is installed separately on macOS and started with ollama serve before running docker compose up, as the instructions suggest. (Of course, I had already pulled llama3 in my Ollama installation.)
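For reference, the workflow described above can be sketched as the following commands. This is a sketch assuming a native (non-Docker) Ollama install on the host and the stock genai-stack compose file; adjust to your setup.

```shell
# Start the local Ollama server on the host (macOS install, outside Docker)
ollama serve &

# Pull the model ahead of time so the stack finds it locally
ollama pull llama3

# Bring up the genai-stack containers, which connect to the host's Ollama
docker compose up
```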
Are you running on Linux? I haven't tested on that.
Yes, I am running on Linux (Ubuntu 18.04 LTS). I used the same docker compose configuration and only replaced llama2 with llama3. It pulled all the necessary containers, but I could see a lot of bugs in the chat bot as well as the PDF bot.
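If your copy of genai-stack reads the model name from the environment rather than hard-coding it, the swap may be a one-line change instead of editing the compose file itself. A sketch, assuming the stack uses an LLM entry in its .env file (the variable names here are taken from the genai-stack README; verify them against your checkout):

```shell
# .env consumed by docker compose (names assumed; check your repo's env.example)
LLM=llama3                              # was: llama2
EMBEDDING_MODEL=sentence_transformer    # embedding model, left unchanged
```

After changing the value, recreate the containers with docker compose up so the services pick up the new model name.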
I have successfully run llama3:8b. When I open the first chat bot at localhost:8501, the prompt UI stays hidden behind the RAG enable/disable box. But if I paste text while clicking slightly into the prompt box, it accepts the entry and returns a response. All the other bots work normally. I also tried llama3:70b, but it took far longer than the normal pull time and then exited.