Prabhudayal Vaishnav

Results 17 comments of Prabhudayal Vaishnav

But is there any alternative to it? I have a Llama LLM running on my local instance, and I'd like it to work with the AI-Scientist.
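
A minimal sketch of how a locally hosted Llama model could be wired in, assuming your local server exposes an OpenAI-compatible endpoint (e.g. Ollama or a llama.cpp server); the `base_url`, port, and model tag below are assumptions, not AI-Scientist defaults:

```python
# Hedged sketch: point an OpenAI-compatible client at a locally hosted Llama model.
# The base_url, port, and model name are assumptions -- adjust to your local setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local OpenAI-compatible endpoint (Ollama's default port)
    api_key="not-needed-locally",          # local servers usually ignore the key
)

response = client.chat.completions.create(
    model="llama3.1",  # whatever model tag your local server exposes
    messages=[{"role": "user", "content": "Summarize the experiment results."}],
)
print(response.choices[0].message.content)
```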

I'm a bit disappointed with the model, as it doesn't generate the PDF. So far I've reached this point: ![image](https://github.com/user-attachments/assets/fbd12557-7f97-452d-bf4b-9491304b733f)

Is there any tweak to create the PDF from the `aider-chat.txt`? Meanwhile, here's the **review** of the paper; I just passed the **aider-chat.txt** instead of the **report.pdf** 🥲 ![368824890-24b53d48-0718-47e6-ae13-cd149869a509](https://github.com/user-attachments/assets/d5fe428f-9ddd-42a9-b42a-42bf3aa7cf6d)...
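
Note that `aider-chat.txt` is just the chat log, so the PDF would normally come from the generated LaTeX source. A minimal sketch of compiling that source manually, assuming the write-up stage left a `.tex` file in the run's output folder; the directory and file name below are hypothetical, so check your own results directory:

```python
# Hedged sketch: manually compile the generated LaTeX to a PDF.
# Requires a TeX distribution (pdflatex) on PATH; the paths are assumptions.
import subprocess

tex_dir = "results/<your_experiment>/latex"  # hypothetical location of the generated LaTeX
tex_file = "template.tex"                    # hypothetical file name produced by the write-up stage

# Run pdflatex twice so references and figures resolve.
for _ in range(2):
    subprocess.run(
        ["pdflatex", "-interaction=nonstopmode", tex_file],
        cwd=tex_dir,
        check=False,
    )
```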

Can't we just distribute the cached baseline results, to avoid the heavy task of compiling? 🤔

@conglu1997 I've been using a cloud instance to run the AI-Scientist, and it works fine. I just want to know whether all of this works in Colab Pro with an A100? As many...

Please be more verbose.

@7wik3vedi16 you can try changing `dtype = float16` in `experiment.py`, but that would be **much slower** compared with `bfloat16`. I've tried it in Google Colab and the same...
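
A minimal sketch of the dtype switch, assuming a nanoGPT-style `experiment.py`; the exact variable names in the real file may differ:

```python
# Hedged sketch: choose float16 vs bfloat16 depending on GPU support.
# bfloat16 needs Ampere (e.g. A100) or newer; older Colab GPUs (e.g. T4) only have float16.
import torch

dtype = (
    "bfloat16"
    if torch.cuda.is_available() and torch.cuda.is_bf16_supported()
    else "float16"
)
ptdtype = {"float32": torch.float32, "bfloat16": torch.bfloat16, "float16": torch.float16}[dtype]

# float16 additionally needs a GradScaler to avoid underflow, which is one reason
# pure float16 runs tend to be slower and less stable than bfloat16.
scaler = torch.cuda.amp.GradScaler(enabled=(dtype == "float16"))
ctx = torch.amp.autocast(device_type="cuda", dtype=ptdtype)
```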

Try the Llama 3.1 API from Groq; it's free: [Added support for Groq](https://github.com/SakanaAI/AI-Scientist/pull/11)
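
A minimal sketch of calling Groq through its OpenAI-compatible API; the base URL and model name below reflect Groq's docs at the time and are assumptions that may have changed, so check their current model list:

```python
# Hedged sketch: call Groq's hosted Llama 3.1 via the OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.groq.com/openai/v1",  # Groq's OpenAI-compatible base URL (assumption)
    api_key="YOUR_GROQ_API_KEY",
)

resp = client.chat.completions.create(
    model="llama-3.1-70b-versatile",  # check Groq's model list for the current name
    messages=[{"role": "user", "content": "Hello"}],
)
print(resp.choices[0].message.content)
```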

There is a rate limit on the Groq API; does anyone know how to work around it and keep using the free tier?
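
You can't raise the free-tier limits themselves, but a simple retry with exponential backoff keeps a long run from crashing on rate-limit errors. A minimal sketch, assuming the OpenAI-style client from the previous snippet (the exception name is from the `openai` package; adapt it if you use the `groq` SDK):

```python
# Hedged sketch: retry a chat completion with exponential backoff on rate limits.
import time
import openai

def chat_with_backoff(client, max_retries=6, **kwargs):
    """Call client.chat.completions.create, backing off on RateLimitError."""
    delay = 2.0
    for attempt in range(max_retries):
        try:
            return client.chat.completions.create(**kwargs)
        except openai.RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(delay)
            delay *= 2  # back off: 2s, 4s, 8s, ...

# Example usage (model name is an assumption):
# resp = chat_with_backoff(client, model="llama-3.1-70b-versatile",
#                          messages=[{"role": "user", "content": "Hello"}])
```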

The nanoGPT part would iterate up to 100,000 steps, and I'd guess it repeats that 2-3 times, but the nanogpt_lite one would be faster.