Marco Neves

17 comments by Marco Neves

@shroominic so you're saying I'll be able to load up my local `llama-2` model like below and have it execute code in this CodeBox env? ## example...
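Roughly what I have in mind is something like the sketch below. This is untested: the `llm=` parameter and the `generate_response` call are assumptions about how codeinterpreterapi accepts a custom LangChain model, and the model path is just a placeholder.

```python
# Hypothetical sketch: wiring a local llama-2 GGUF file (via LangChain's LlamaCpp
# wrapper) into a CodeInterpreterSession. The llm= kwarg and method names are
# assumptions, not confirmed codeinterpreter-api usage.
import asyncio

from langchain.llms import LlamaCpp
from codeinterpreterapi import CodeInterpreterSession

llm = LlamaCpp(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder local model file
    n_ctx=4096,
    temperature=0.1,
)

async def main() -> None:
    # the session spins up a CodeBox sandbox and routes code execution through it
    async with CodeInterpreterSession(llm=llm) as session:
        response = await session.generate_response(
            "Plot a sine wave and save it as sine.png"
        )
        print(response.content)

asyncio.run(main())
```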

I used what I'm familiar with, which is the llama.cpp server with [ShareGPT4V-7B](https://huggingface.co/Lin-Chen/ShareGPT4V-7B). But it should work with any other backend/model. @SkalskiP you just need to define your custom payload with...
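For illustration, a rough sketch of such a payload against a llama.cpp server running locally (address assumed to be `http://localhost:8080`); the `image_data` / `[img-10]` fields follow the server's llava-style multimodal API, and the prompt template should be adapted to whatever model you serve:

```python
# Untested sketch: send an image plus prompt to a local llama.cpp server.
import base64
import requests

with open("frame.jpg", "rb") as f:  # placeholder image file
    image_b64 = base64.b64encode(f.read()).decode()

payload = {
    # [img-10] ties the prompt to the image_data entry with id 10
    "prompt": "USER: [img-10] Describe what is happening in this image.\nASSISTANT:",
    "image_data": [{"data": image_b64, "id": 10}],
    "n_predict": 256,
    "temperature": 0.2,
    "stop": ["USER:"],
}

resp = requests.post("http://localhost:8080/completion", json=payload, timeout=120)
print(resp.json()["content"])
```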

@fakerybakery have you looked into [mlx](https://github.com/ml-explore/mlx)? It's a new framework from Apple. [They have a separate repo for examples.](https://github.com/ml-explore/mlx-examples)

They've designed its API to closely follow PyTorch's, though I'm not sure exactly what this means in terms of interop. Still worth some attention!
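As a small taste of how PyTorch-like it feels, here's a toy module (layer sizes are arbitrary, just for illustration):

```python
# Tiny mlx example: a module with linear layers, called much like a torch.nn.Module.
import mlx.core as mx
import mlx.nn as nn

class MLP(nn.Module):
    def __init__(self, in_dims: int, hidden: int, out_dims: int):
        super().__init__()
        self.l1 = nn.Linear(in_dims, hidden)
        self.l2 = nn.Linear(hidden, out_dims)

    def __call__(self, x):
        return self.l2(nn.relu(self.l1(x)))

model = MLP(8, 32, 2)
x = mx.random.normal((4, 8))
y = model(x)
mx.eval(y)  # mlx is lazy; eval forces the computation
print(y.shape)
```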

> Ok, persuaded. Just tried the model and yep, speed and quality are really good. I'll integrate this as next engine soon. How much work do you think it would...

Graph.backgroundColor('#000000');

Hey, thanks for reaching out! Glad you're making something of this :) So all baby_code.py does is wrap server.cpp and bridge it with my own endpoints to execute...
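To make the shape of that clearer, here is an illustrative sketch (not the actual baby_code.py): a small Flask app that forwards prompts to a running llama.cpp server and exposes an extra endpoint that executes the Python code the model returns. Endpoint names and the server address are assumptions.

```python
# Untested sketch of the wrapper idea: proxy prompts to llama.cpp's server,
# then run the returned Python code block in a subprocess.
import re
import subprocess

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)
LLAMA_SERVER = "http://localhost:8080/completion"  # assumed server.cpp address

@app.route("/generate", methods=["POST"])
def generate():
    prompt = request.json["prompt"]
    resp = requests.post(LLAMA_SERVER, json={"prompt": prompt, "n_predict": 512})
    return jsonify({"completion": resp.json()["content"]})

@app.route("/run", methods=["POST"])
def run():
    # pull the first ```python fenced block out of the model output and run it
    match = re.search(r"```python\n(.*?)```", request.json["completion"], re.S)
    if not match:
        return jsonify({"error": "no code block found"}), 400
    result = subprocess.run(
        ["python", "-c", match.group(1)], capture_output=True, text=True, timeout=30
    )
    return jsonify({"stdout": result.stdout, "stderr": result.stderr})

if __name__ == "__main__":
    app.run(port=5000)
```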

I would recommend looking into a more full-fledged version of what I was going for: https://github.com/KillianLucas/open-interpreter. It would be way easier to integrate with Home Assistant.

haha the confetti definitely keeps me entertained too lol glad you're getting a kick out of it!

@molander anything in particular you would like to see implemented? I kind of put it off to the side because I don't have any fun ideas to go off of...