Add support for gpt-oss models
Hello! I've been testing gpt-oss:20b for commit message generation and it works exceptionally well when I run it manually. However, when I select gpt-oss:20b through the extension's "Custom" model option, it doesn't work properly.
I think the issue is related to how gpt-oss interprets the prompts. The model expects its own chat format (the harmony response format) and produces chain-of-thought reasoning before its final answer, so it may not be compatible with the prompt structure the extension currently sends.
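In case it helps, here is a rough, untested sketch of how the extension could talk to a local gpt-oss model through Ollama's `/api/chat` endpoint. Sending structured chat messages (rather than a raw prompt string) should let Ollama apply the model's own chat template, and the extension would only read the final message content, ignoring any reasoning output. The default endpoint URL, the optional `thinking` field, and the prompt wording are assumptions on my part:

```typescript
// Sketch only: assumes Ollama's default local endpoint and /api/chat response
// shape; a separate "thinking" field for reasoning models is my assumption
// and may differ between Ollama versions.
interface OllamaChatResponse {
  message: {
    role: string;
    content: string;   // final answer (the commit message we want)
    thinking?: string; // chain-of-thought, if exposed separately
  };
  done: boolean;
}

async function generateCommitMessage(diff: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "gpt-oss:20b",
      // Structured chat messages let Ollama apply gpt-oss's own chat
      // template (harmony format) instead of a raw prompt.
      messages: [
        {
          role: "system",
          content: "Write a concise conventional commit message for the given diff.",
        },
        { role: "user", content: diff },
      ],
      stream: false,
    }),
  });
  const data = (await res.json()) as OllamaChatResponse;
  // Ignore any reasoning output; only the final content is the commit message.
  return data.message.content.trim();
}
```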
Would it be possible to add native support for gpt-oss models? I've tested it manually with Ollama and it generates really good commit messages, probably because of its reasoning capabilities. It comes in two sizes:
- gpt-oss:20b (runs on 16GB+ RAM)
- gpt-oss:120b (needs 80GB+ memory)
Let me know if you need any testing or more details about the error!