alph4b3th

Results: 19 comments of alph4b3th

Yes please! The model is very slow on an AMD EPYC CPU. ![image](https://user-images.githubusercontent.com/66482679/228144096-7d684427-ed46-471b-b2aa-51f6c670e2ee.png)

You're right, I noticed strange behavior while using the template. Swap (usually 12 GB) is at about 90% usage on top of the RAM itself (4 GB), totaling 16 GB. I think I need...
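To confirm whether the model really is spilling into swap like this, one can read the kernel's memory counters directly. A minimal sketch, assuming a Linux system with `/proc/meminfo` (the parsing helper and field names are standard, but the script itself is illustrative, not from the original thread):

```python
# Hedged sketch: report RAM and swap usage from /proc/meminfo (Linux only).
# High swap usage while running the model suggests it doesn't fit in RAM.
def meminfo_kb(text):
    """Parse /proc/meminfo-style text into a {field: kilobytes} dict."""
    out = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            out[name] = int(parts[0])
    return out

if __name__ == "__main__":
    with open("/proc/meminfo") as f:
        info = meminfo_kb(f.read())
    ram_used = info["MemTotal"] - info["MemAvailable"]
    swap_used = info["SwapTotal"] - info["SwapFree"]
    print(f"RAM used:  {ram_used / 1024:.0f} MiB of {info['MemTotal'] / 1024:.0f} MiB")
    print(f"Swap used: {swap_used / 1024:.0f} MiB of {info['SwapTotal'] / 1024:.0f} MiB")
```

If swap usage stays near zero while the model runs, the slowness is coming from somewhere other than memory pressure.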

Hi, I have a VPS (setup attached) and I'm seeing slowness when running the 7B and 13B models. The model, in addition to taking about 3-8 minutes to load,...

I saw someone in another forum saying Docker was causing slowness. Are you running with or without Docker?

I'll wait for your results. I'm also installing another application on top of llama.cpp to test performance outside of Docker. I intend to report back today as well.

I installed it outside of Docker and did not get results different from those already mentioned above. What could it be? Because I've seen some people running Alpaca 7B and it...

@voarsh2 could you share your evidence? I tested it and got apparently the opposite result (serge was faster).

> It's not just with Intel Xeon. I switched the server to a VPS with AMD EPYC (6 cores, 16 GB RAM) and it sped up very little. It made it possible to make...

Hey, it's really slow! AMD EPYC, 16 GB RAM, and a lot of delay: more than a minute just to load the model into RAM (from SSD).
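A one-minute load from SSD could simply be disk-bound: a 4-bit 7B model is roughly 4 GB, so at 100 MB/s the read alone takes ~40 s. A quick way to check is to time a full sequential read of the model file. A hedged sketch (the model path is a placeholder, not a path from the original thread):

```python
# Hedged sketch: measure sequential read throughput of a file to see
# whether slow model loading is limited by disk speed.
import time

def read_throughput(path, chunk=1 << 20):
    """Read `path` sequentially in 1 MiB chunks; return (bytes_read, MB/s)."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk)
            if not buf:
                break
            total += len(buf)
    elapsed = time.perf_counter() - start
    return total, total / elapsed / 1e6

if __name__ == "__main__":
    # Placeholder path: substitute the actual model file on your VPS.
    size, mbps = read_throughput("models/7B/ggml-model-q4_0.bin")
    print(f"read {size / 1e9:.2f} GB at {mbps:.0f} MB/s")
```

Note that the page cache will inflate the number on a second run; if the throughput is high but loading is still slow, the bottleneck is elsewhere (e.g. CPU-side setup rather than I/O).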

wtf? I have a powerful server! How heavy is this? ![image](https://user-images.githubusercontent.com/66482679/228143475-9863f5b1-82b6-47cd-9e3e-16987aba1257.png)