
LLM Intent Classifier: OpenAI-powered intent detection for Cognigy

dshire opened this issue 5 months ago • 0 comments

Summary

Adds a Cognigy extension that classifies user intent from text using the OpenAI Chat Completions API. The result can be stored in input or context, and can optionally overwrite input.intent and input.intentScore. Includes a complete README covering setup, configuration, build, and troubleshooting.

Key Features

  • Single node: “LLM Intent Classifier” (classifyIntent)
  • OpenAI connection (OpenAI API Key) with secure key storage
  • Configurable model (default gpt-4.1-mini)
  • Custom intents JSON (name, optional description and examples)
  • Structured JSON output ({ intent, confidence })
  • Configurable storage location/key; optional overwrite of input.intent and input.intentScore
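The intents catalog and result shapes can be sketched as follows. These interfaces and the sample catalog are illustrative assumptions inferred from this description; the extension's actual field handling may differ slightly.

```typescript
// Hypothetical type shapes, inferred from the PR description.
interface IntentDefinition {
  name: string;          // required
  description?: string;  // optional, improves classification quality
  examples?: string[];   // optional, improves classification quality
}

interface ClassificationResult {
  intent: string;        // matched intent name, e.g. "Fallback"
  confidence: number;    // score between 0 and 1
}

// Example catalog, as it would be pasted into the node's "Intents" field:
const intents: IntentDefinition[] = [
  {
    name: "OrderStatus",
    description: "User asks about the status of an order",
    examples: ["Where is my package?", "Track my order"],
  },
  { name: "CancelOrder", examples: ["I want to cancel my order"] },
  { name: "Fallback", description: "No clear match to any other intent" },
];
```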

What’s Included

  • Node implementation: src/nodes/classifyIntent.ts
  • Connection schema: src/connections/classifyIntentOpenAiKey.ts
  • Module wiring: src/module.ts → build/module.js
  • Packaging/build scripts in package.json
  • Updated README.md with install, usage, build, and troubleshooting

How It Works

  • Sends the provided text and intents catalog to OpenAI Chat Completions with a strict system prompt and response_format: json_object.
  • Parses response into { intent: string, confidence: number }.
  • Stores result under input[<inputKey>] or context[<contextKey>].
  • If enabled, sets input.intent and input.intentScore.
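The request/parse cycle above can be sketched as below. Function names, the exact system-prompt wording, and the parse guard are assumptions for illustration, not the extension's actual internals.

```typescript
interface ChatMessage {
  role: "system" | "user";
  content: string;
}

// Builds a Chat Completions payload with a strict JSON system prompt
// and response_format: json_object (wording here is hypothetical).
function buildRequestBody(model: string, text: string, intentsJson: string) {
  const system =
    "Classify the user's last message into exactly one of the intents below. " +
    'Reply only with JSON of the form {"intent": string, "confidence": number}.\n' +
    "Intents:\n" + intentsJson;
  const messages: ChatMessage[] = [
    { role: "system", content: system },
    { role: "user", content: text },
  ];
  return { model, response_format: { type: "json_object" }, messages };
}

// Parses the model's reply; returns null for {} or malformed output.
function parseClassification(
  raw: string,
): { intent: string; confidence: number } | null {
  try {
    const parsed = JSON.parse(raw);
    if (
      typeof parsed.intent === "string" &&
      typeof parsed.confidence === "number"
    ) {
      return { intent: parsed.intent, confidence: parsed.confidence };
    }
    return null;
  } catch {
    return null;
  }
}
```

Returning null instead of throwing on a `{}` reply keeps a "no match" distinct from a transport or authentication error.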

Configuration

  • Model: OpenAI model ID (e.g., gpt-4.1-mini)
  • Connection: OpenAI API Key (type openAiApiKey)
  • Input: Text to classify (e.g., [[snippet...]] for input.text)
  • Intents: JSON array (must include name; description/examples optional)
  • Storage: Location (input/context) and key names
  • Overwrite Intent: Toggle

Install/Build

  • Prebuilt: Upload intent-classifier_v3.tar.gz (or the packaged archive)
  • From source: npm ci → npm run build (creates intent-classifier.tar.gz) → upload in Cognigy
  • Scripts: transpile, lint, build, zip

Testing Instructions

  1. Install the extension and create a connection (OpenAI API Key) with a valid key.
  2. Add the “LLM Intent Classifier” node to a Flow.
  3. Use input.text as Input; keep default Intents JSON or provide your own.
  4. Validate both storage modes:
    • Store Location = input → check input.classifyIntentResult
    • Store Location = context → check context.classifyIntentResult
  5. Toggle “Overwrite Intent” and confirm:
    • input.intent equals returned intent
    • input.intentScore equals returned confidence
  6. Try a message that should hit “Fallback”; confirm confidence and output.
  7. Error paths:
    • Invalid model → OpenAI error surfaced
    • Missing/invalid API key → authentication error surfaced
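The storage and overwrite behavior being verified in steps 4 and 5 can be modeled as a small pure function. The key names (classifyIntentResult, intent, intentScore) come from this description; the write logic itself is an assumption, not the extension's actual code.

```typescript
type Store = Record<string, unknown>;

// Writes the classification result to the chosen store and, if enabled,
// mirrors it into input.intent / input.intentScore.
function storeResult(
  target: Store, // the input or context object, per Store Location
  key: string,   // e.g. "classifyIntentResult"
  result: { intent: string; confidence: number },
  overwriteIntent: boolean,
  input: Store,
): void {
  target[key] = result;
  if (overwriteIntent) {
    input.intent = result.intent;
    input.intentScore = result.confidence;
  }
}
```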

Risk/Impact

  • External dependency on OpenAI availability and model access
  • A misconfigured model or API key surfaces as a runtime error
  • Classification quality depends on intent definitions (description/examples)

Security & Privacy

  • API key stored in Cognigy connection, not in code
  • Sends provided text and intents to OpenAI; ensure compliance with your data policies

Known Limitations

  • Only the last user message is considered for classification (by design in the system prompt)
  • Returns {} when no clear intent match is found
  • Requires models that support response_format: json_object
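Because {} is returned when no intent matches, downstream Flow steps should guard before reading the result. A minimal sketch, assuming the default classifyIntentResult key; the guard itself is not part of the extension:

```typescript
// Resolves a stored classification to a safe value, falling back to
// "Fallback" with confidence 0 when the result is {} or missing.
function resolveIntent(
  stored: unknown,
): { intent: string; confidence: number } {
  const r = stored as { intent?: unknown; confidence?: unknown } | null;
  if (r && typeof r.intent === "string" && typeof r.confidence === "number") {
    return { intent: r.intent, confidence: r.confidence };
  }
  return { intent: "Fallback", confidence: 0 }; // no clear match
}
```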

Files Changed

  • src/nodes/classifyIntent.ts (new node)
  • src/connections/classifyIntentOpenAiKey.ts (connection schema)
  • src/module.ts / build/module.js (wiring)
  • README.md (complete documentation)
  • package.json / package-lock.json (dependencies, scripts)

Checklist

  • [ ] Extension installs and loads without errors
  • [ ] Connection created and validated
  • [ ] Node classifies example inputs as expected
  • [ ] Storage and overwrite behaviors verified
  • [ ] README matches current behavior and options
  • [ ] No sensitive data committed (keys, tokens)

dshire · Sep 10 '25 08:09