[DOCS] SubagentStop hook reference missing agent_id and agent_transcript_path added in 2.0.42

Open sylvansys opened this issue 2 months ago • 3 comments

Documentation Type

Incorrect/outdated documentation

Documentation Location

https://code.claude.com/docs/en/hooks#stop/subagentstop-decision-control

Section/Topic

Stop/SubagentStop Decision Control

Current Documentation

Stop and SubagentStop Input

stop_hook_active is true when Claude Code is already continuing as a result of a stop hook. Check this value or process the transcript to prevent Claude Code from running indefinitely.

{
  "session_id": "abc123",
  "transcript_path": "~/.claude/projects/.../00893aaf-19fa-41d2-8238-13269b9b3ca0.jsonl",
  "permission_mode": "default",
  "hook_event_name": "Stop",
  "stop_hook_active": true
}

What's Wrong or Missing?

"agent_id": <hexadecimal code string>,
 "agent_transcript_path": <.jsonl transcript path>

Suggested Improvement

An advanced example showing parsing of the transcript would be superb!!
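
Something along these lines, for instance - a minimal sketch, assuming the transcript is JSONL with one event object per line (as the .jsonl extension suggests), the payload arrives on stdin, and the {"decision": "block", ...} output shape from the docs above applies to command hooks. The TODO check is a toy heuristic, not an official schema:

#!/usr/bin/env python3
# Toy SubagentStop command hook: read the payload from stdin,
# guard against loops, then scan the subagent transcript.
import json, os, sys

payload = json.load(sys.stdin)

# Avoid infinite loops: bail out if we are already continuing
# because a stop hook fired.
if payload.get("stop_hook_active"):
    sys.exit(0)

path = os.path.expanduser(
    payload.get("agent_transcript_path") or payload["transcript_path"]
)

# JSONL: one event object per line.
events = [json.loads(line) for line in open(path) if line.strip()]

# Toy check: block the stop if the final event still mentions TODOs.
if events and "TODO" in json.dumps(events[-1]):
    print(json.dumps({
        "decision": "block",
        "reason": "The transcript still contains TODO items."
    }))
sys.exit(0)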

Impact

Medium - Makes feature difficult to understand

Additional Context

Also may need updating:

  • https://code.claude.com/docs/en/hooks#configuration-2
  • https://code.claude.com/docs/en/hooks#example:-subagentstop-with-custom-logic

sylvansys · Nov 16 '25 05:11

might be relevant:

https://github.com/anthropics/claude-code/issues/11786#issuecomment-3543716217

The issue is that the docs are completely outdated. The prompt-based hooks feature was shipped fast and evolved even faster. I've tried to report it to Anthropic in numerous ways, to no avail; I'd even offer to totally rewrite the Hooks Reference for them!

{
  "decision": "approve" | "block",
  "reason": "Explanation for the decision",
  "continue": false,  // Optional: stops Claude entirely
  "stopReason": "Message shown to user",  // Optional: custom stop message
  "systemMessage": "Warning or context"  // Optional: shown to user
}

This is not valid, and neither are the documented examples for prompt-based hooks. The output has been simplified to {'ok': boolean, 'reason': string}.

You have to use natural language and avoid mentioning JSON; there is an internal prompt that runs for evaluating hooks and prompts:

description: Prompt given to Claude when evaluating whether to pass or fail a prompt hook.
---
You are evaluating a hook in Claude Code.

CRITICAL: You MUST return ONLY valid JSON with no other text, explanation, or commentary before or after the JSON. Do not include any markdown code blocks, thinking, or additional text.

Your response must be a single JSON object matching one of the following schemas:
1. If the condition is met, return: {"ok": true}
2. If the condition is not met, return: {"ok": false, "reason": "Reason for why it is not met"}

Return the JSON object directly with no preamble or explanation.

$ARGUMENTS is evolving fast too and is probably not useful to add, since it's already prepended to the prompt. You can try saying "read the transcript" and experiment with clearer instructions.

There is also a model param you can use in settings.json now:

"hooks": [
          {
            "type": "prompt",
            "model": "sonnet",
            "prompt": "Evaluate if Claude should stop - has it completed everything it was asked to do? Read the transcript of the conversation and use any other context available to you."
          }
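
For completeness, the fragment above presumably nests under an event name the same way command hooks do - this structure is inferred from the command-hook configuration docs and is not confirmed for prompt hooks:

{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "prompt",
            "model": "sonnet",
            "prompt": "Evaluate if Claude should stop - has it completed everything it was asked to do?"
          }
        ]
      }
    ]
  }
}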

It does not work with haiku. And regardless of the model you use, there is some weird reliability issue where Claude prepends a { to otherwise valid JSON, producing invalid JSON.
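
That is, output like the following, which no JSON parser will accept (an illustration of the reported failure, not captured output):

{{"ok": true}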

The docs say in one breath that prompt-based hooks only work with Stop and SubagentStop, and in the next breath claim they work with any hook event and are useful for Stop, SubagentStop, UserPromptSubmit, and PreToolUse.

I'm not alone in being unsure where the prompt-based hooks feature is going. Anthropic devs, it'd make much more sense, in terms of caching and speed, to jerry-rig a subagent that inherits the same model and has the (hidden) forkContext: false set. Providing a transcript for evaluation seems like a waste, and using any model other than the inherited one throws all the cached context out the window. Structured outputs are nice (for the hook-response evaluation agent), but full context is far more important for the prompt evaluation agent, right?

It's an incredibly powerful feature, but it's not quite there yet... I look forward to it working. Anthropic, I implore you to have an engineer look at things unassisted by Claude (hands behind your back!), because vibe coding based on a design document isn't working, and neither are Mintlify-authored docs. Dedicate some organic brainpower and you could have a super-awesome feature in everyone's hands!

kierr · Nov 19 '25 18:11

Definitely is relevant, thank you @kierr! Wish they would update the docs more frequently. Do you know of any unofficial docs that are better for prompt-based hooks?

sylvansys · Nov 20 '25 03:11

This issue has been inactive for 30 days. If the issue is still occurring, please comment to let us know. Otherwise, this issue will be automatically closed in 30 days for housekeeping purposes.

github-actions[bot] · Dec 20 '25 10:12