11 issues by recursionbane

This library looks promising! This is a vote for a [Microsoft Teams push connector](https://msdn.microsoft.com/en-us/microsoft-teams/connectors).

enhancement
help wanted
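
For context on the connector request above: pushing to Teams usually amounts to an HTTP POST against a channel's incoming-webhook URL. A minimal Python sketch, assuming the `requests` library and a placeholder webhook URL (the `{"text": ...}` payload is the simplest form Teams incoming webhooks accept):

```python
import requests

# Placeholder: copy the real URL from the Teams channel's "Incoming Webhook" connector settings.
TEAMS_WEBHOOK_URL = "https://example.webhook.office.com/webhookb2/..."

def notify_teams(message: str) -> None:
    """Post a plain-text notification to a Teams channel via its incoming webhook."""
    resp = requests.post(TEAMS_WEBHOOK_URL, json={"text": message}, timeout=10)
    resp.raise_for_status()

notify_teams("Job finished: all checks passed.")
```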

**Actual behavior:** `md-contact-chips` do not appear to support the `md-on-add`/`md-on-remove`/`md-on-select` bindings. **Expected behavior:** `md-contact-chips` should inherit this functionality from `md-chips`, which do...

type: enhancement
P4: minor

What would it take to run this on an Apple M1 or M2 chip with 16+GB of unified CPU/GPU memory?

- [x] I have searched to see if a similar issue already exists.

**Is your feature request related to a problem? Please describe.** The `ChatInterface` makes setting up a chatbot...

enhancement
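
For reference, the `ChatInterface` mentioned above wraps a single chat function in a ready-made UI; a minimal sketch of that baseline usage (not the requested feature itself), assuming a recent Gradio release and a trivial echo handler:

```python
import gradio as gr

def respond(message, history):
    # Trivial stand-in chat handler: echo the user's message back.
    return f"You said: {message}"

# ChatInterface wires the handler to a complete chat UI with history handling.
gr.ChatInterface(fn=respond).launch()
```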

**Version:** Command-line (Python). **Operating System:** Linux (other). **Your question:** For questions where GPT-Pilot asks us to try running the app and let it know if something...

question

Thank you for this tool! I'd like to suggest an improvement: the current approach of fixed timers between deletion cycles is somewhat clunky, and does not take machine performance and...
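
One way to read the suggestion above is to derive the pause from how long each deletion batch actually takes, so faster machines wait less. A rough sketch, not the tool's actual code (`delete_batch` is a stand-in for whatever does the real work):

```python
import time

def delete_batch(batch):
    """Stand-in for the tool's real per-batch deletion routine."""
    for _item in batch:
        pass  # delete the item here

def run_deletion_cycles(batches, target_load=0.25):
    """Pace cycles by measured batch duration instead of a fixed timer.

    target_load is the rough fraction of wall-clock time deletion work should
    consume, so slower machines automatically get proportionally longer pauses.
    """
    for batch in batches:
        start = time.monotonic()
        delete_batch(batch)
        elapsed = time.monotonic() - start
        # Pause so this cycle used roughly `target_load` of the total interval.
        time.sleep(elapsed * (1.0 - target_load) / target_load)

run_deletion_cycles([[1, 2, 3], [4, 5]])
```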

**Issue:** Hi, what would be needed to add Perl5 support (repo maps, not just linting) to Aider? Is it a matter of updating the tree-sitter grammar for Perl? ...

question
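
On the tree-sitter question above: loading a grammar is only the easy half (Aider's repo maps also need tag queries on top). A minimal sketch, assuming the older `py-tree-sitter` build API and a local checkout of the `tree-sitter-perl` grammar; the paths are placeholders:

```python
from tree_sitter import Language, Parser

# Build a shared library from a local grammar checkout (paths are placeholders).
Language.build_library("build/languages.so", ["vendor/tree-sitter-perl"])
PERL = Language("build/languages.so", "perl")

parser = Parser()
parser.set_language(PERL)

tree = parser.parse(b'sub greet { my ($name) = @_; print "hello, $name\\n"; }')
print(tree.root_node.sexp())  # dump the parse tree to check the grammar loads
```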

Hi, this is a great project, thank you for creating it. I see that you have a nice, simple example for transcribing a prerecorded audio file. Looking through the sample...

documentation

To optimize response times and reduce API costs for Puter (especially if we [increase context limits](https://github.com/HeyPuter/puter/issues/773)), could we implement a server-side caching mechanism for AI API inference calls? The cache...

enhancement
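
A rough illustration of the caching idea above (in Python for brevity rather than Puter's Node.js backend; `call_model` is a hypothetical stand-in for the upstream provider call): key the cache on a hash of the model plus normalized messages and parameters, and serve repeats from the store:

```python
import hashlib
import json

def call_model(model, messages, **params):
    """Stand-in for the real AI provider call."""
    return f"(response from {model})"

_cache: dict = {}

def cached_completion(model, messages, **params):
    """Return a stored completion when an identical (model, messages, params) request repeats."""
    key = hashlib.sha256(
        json.dumps({"model": model, "messages": messages, "params": params},
                   sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = call_model(model, messages, **params)
    return _cache[key]

print(cached_completion("gpt-4o-mini", [{"role": "user", "content": "hi"}]))
```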

Hi, Puter currently [hardcodes](https://github.com/HeyPuter/puter/blob/main/src/backend/src/modules/puterai/OpenAICompletionService.js#L195) the max input tokens for any chat completion request to 4k, but the default model (gpt-4o-mini) supports [128k](https://platform.openai.com/docs/models/gpt-4o-mini) input tokens. Can we increase the max token...

enhancement
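
An illustrative way to frame the requested fix (Python for brevity, not Puter's actual Node.js code): replace the hardcoded 4k with a per-model lookup that falls back conservatively for unknown models. The gpt-4o-mini limit comes from the issue itself; any other entries are assumptions to check against the provider's documentation:

```python
# Per-model input-token limits; gpt-4o-mini's 128k is from the issue, other entries are assumptions.
MAX_INPUT_TOKENS = {
    "gpt-4o-mini": 128_000,
    "gpt-4o": 128_000,
}
DEFAULT_MAX_INPUT_TOKENS = 4_096  # the old hardcoded cap, kept as a conservative fallback

def max_input_tokens(model: str) -> int:
    """Return the input-token budget for a model, defaulting to the legacy 4k cap."""
    return MAX_INPUT_TOKENS.get(model, DEFAULT_MAX_INPUT_TOKENS)

print(max_input_tokens("gpt-4o-mini"))  # 128000
```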