Matt Cowger

17 issues by Matt Cowger

**Is your feature request related to a problem? Please describe.** Each supported IaaS offers underlying storage classes. We provide a default storage class for GCE, but should provide one for all the IaaS types. **Describe...


**What this PR does / why we need it**: This PR adds a default storage class for each of the currently supported IaaS types. Today, only a GCE storage class is...
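As a rough sketch of what "one default storage class per IaaS" implies, the mapping below lists the legacy in-tree Kubernetes provisioners each manifest would reference. The exact set of IaaS types covered by the PR, and the TypeScript shape itself, are assumptions for illustration; the PR itself would ship YAML manifests.

```typescript
// Illustrative only: each supported IaaS paired with the legacy in-tree
// Kubernetes provisioner its default StorageClass would name.
const defaultProvisioners: Record<string, string> = {
  gce: "kubernetes.io/gce-pd",             // GCE persistent disks (already provided)
  aws: "kubernetes.io/aws-ebs",            // AWS EBS volumes
  azure: "kubernetes.io/azure-disk",       // Azure managed disks
  openstack: "kubernetes.io/cinder",       // OpenStack Cinder volumes
  vsphere: "kubernetes.io/vsphere-volume", // vSphere VMDKs
};
```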


New projects (as set up by following the docs) need the Cloud Compute API enabled, but it isn't enabled by default. It has to be enabled by accessing https://console.developers.google.com/apis/api/compute.googleapis.com/overview?project=\ The docs...
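For reference, the same API can also be enabled from the command line, assuming the gcloud CLI is installed; the project ID below is a placeholder:

```
gcloud services enable compute.googleapis.com --project=MY_PROJECT_ID
```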


Currently there are 4 options for shifting. Is there any reason there can't be more (assuming I write the code)? I'd like to develop a setup with a 12-position 'mode'...

## Context

Added console.error logging to capture errors in Gemini, Mistral, OpenAI, and OpenAI-compatible embedders during validation and embedding creation. This improves error visibility and debugging capabilities.

fix(scanner): improve batch...
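A minimal sketch of the kind of logging described, assuming a generic embedder interface; `createEmbeddingsWithLogging` and the embedder shape are illustrative names, not the extension's actual API:

```typescript
// Illustrative only: wrap embedding creation so provider failures surface
// in the console instead of being swallowed silently.
async function createEmbeddingsWithLogging(
  embedder: { createEmbeddings(texts: string[]): Promise<number[][]> },
  texts: string[],
): Promise<number[][]> {
  try {
    return await embedder.createEmbeddings(texts);
  } catch (error) {
    // Surface provider errors (Gemini, Mistral, OpenAI, etc.) for debugging.
    console.error("Embedding creation failed:", error);
    throw error;
  }
}
```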

- Add parseOllamaParametersToJSON function to properly parse model parameters
- Enhance context window detection by checking model_info keys ending with '.context_length'
- Prioritize environment variable OLLAMA_CONTEXT_LENGTH over model defaults
- ...
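A hedged sketch of the detection order these bullets describe. The function name and the exact model_info shape are assumptions based on the list above, not the actual implementation; Ollama's /api/show response does nest the limit under architecture-specific keys such as "llama.context_length".

```typescript
// Illustrative sketch of the context-window detection order described above.
// Priority: OLLAMA_CONTEXT_LENGTH env var, then model_info "*.context_length" keys.
function detectContextWindow(modelInfo: Record<string, unknown>): number | undefined {
  const fromEnv = process.env.OLLAMA_CONTEXT_LENGTH;
  if (fromEnv && !Number.isNaN(Number(fromEnv))) {
    return Number(fromEnv);
  }
  // Match on the key suffix, since the prefix varies per model architecture.
  for (const [key, value] of Object.entries(modelInfo)) {
    if (key.endsWith(".context_length") && typeof value === "number") {
      return value;
    }
  }
  return undefined;
}
```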

### Plugin Type

VSCode Extension

### App Version

4.126.1

### Description

When using the `write_to_file` tool with JSON output, the UI and backend hang, with:

```
TypeError: d.startsWith is not...
```

### Plugin Type

VSCode Extension

### App Version

4.125.0/.1

### Description

See video, but settings are not being saved consistently for providers.

https://github.com/user-attachments/assets/633a14ff-95f6-4058-a8d1-659d39e141ce

### Reproduction steps

See video.

### Provider...

Add logic to detect when write_to_file arguments contain valid JSON content instead of strings, and properly serialize it with pretty formatting. Fixes: write_to_file tool fails with pure JSON output. Fixes #4130
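A minimal sketch of the described fix, assuming the tool normally receives its content as a string; the helper name is hypothetical:

```typescript
// Illustrative: if the incoming content is already-parsed JSON (an object
// or array rather than a string), pretty-print it before writing, so that
// string methods like startsWith are never called on a non-string value.
function normalizeWriteToFileContent(content: unknown): string {
  if (typeof content === "string") {
    return content;
  }
  // Pure-JSON arguments arrive as objects/arrays; serialize with
  // 2-space indentation for readable output.
  return JSON.stringify(content, null, 2);
}
```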

## Context

Add support for configuring max tokens in the LiteLLM provider with:

- New `litellmMaxTokens` option in provider config and schema
- Default max tokens value of 8192 with fallback...
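A hedged sketch of how such an option might be resolved, using the 8192 default the bullets state; the settings shape and every name other than `litellmMaxTokens` are assumptions for illustration:

```typescript
// Illustrative: resolve the max-tokens setting for the LiteLLM provider,
// falling back to a default of 8192 when the option is unset or invalid.
interface LiteLLMProviderSettings {
  litellmMaxTokens?: number; // option name from the PR description
}

const DEFAULT_LITELLM_MAX_TOKENS = 8192;

function resolveMaxTokens(settings: LiteLLMProviderSettings): number {
  const value = settings.litellmMaxTokens;
  return typeof value === "number" && value > 0
    ? value
    : DEFAULT_LITELLM_MAX_TOKENS;
}
```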