
[BUG] StopHook with decision:block doesn't actually block

Open · ljw1004 opened this issue 4 months ago · 1 comment

Preflight Checklist

  • [x] I have searched existing issues and this hasn't been reported yet
  • [x] This is a single bug report (please file separate reports for different bugs)
  • [x] I am using the latest version of Claude Code

What's Wrong?

The docs https://docs.claude.com/en/docs/claude-code/hooks#stop%2Fsubagentstop-decision-control say "Stop and SubagentStop hooks can control whether Claude must continue. "block" prevents Claude from stopping. You must populate reason for Claude to know how to proceed."

But it doesn't prevent Claude from stopping. Instead, it gives rise to a confusing tree of concurrent asynchronous requests. I'm doing (slow) LLM work in my StopHook, so I experimented with a self-contained slow StopHook to see how slow StopHooks behave:

import time
import json

time.sleep(20)
print(json.dumps({"decision": "block", "reason": "Please give a concise answer in under 8 words"}))

Here's a blow-by-blow account of what happened:

  1. I type a message "What model are you?"
  2. Claude sends the request "What model are you?", receives the answer "Sonnet" and delivers it to the user, and kicks off StopHook in the background. It's now a race!
  3. I type a second message "Why is the sky blue?"
  4. Claude sends the request "What model are you? Sonnet. Why is the sky blue?", receives the answer "Rayleigh scattering" and delivers it to the user, and kicks off a second StopHook in the background. It's now a three-way race.
  5. The StopHook from step 2 finishes with decision:block. This causes a request to be sent "What model are you? Sonnet. StopReason2?". The response is received, delivered to the user, and kicks off another StopHook. It's again a three-way race.
  6. The StopHook from step 4 finishes with decision:block. This causes a request to be sent "What model are you? Sonnet. Why is the sky blue? Rayleigh scattering. StopReason4?"
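The race above can be sketched as a toy asyncio model. To be clear, this is my simulation of the behavior I observed, not Claude Code's actual implementation; the small delays stand in for the 20-second hook, and I've only simulated one block round per stream:

```python
import asyncio

# Toy model of the OBSERVED behavior (not Claude Code's real internals):
# each user prompt starts its own async stream of
# request -> response -> StopHook -> follow-up request,
# and the streams race each other freely.

LOG = []

async def stop_hook(delay, stream_id):
    await asyncio.sleep(delay)  # stands in for the 20 s sleep in the repro
    return {"decision": "block", "reason": f"StopReason{stream_id}"}

async def stream(prompt, stream_id, hook_delay):
    LOG.append(f"request[{stream_id}]: {prompt}")
    hook = await stop_hook(hook_delay, stream_id)
    if hook["decision"] == "block":
        # A blocked stop re-sends this stream's own transcript plus the
        # reason, which would spawn yet another StopHook (omitted here).
        LOG.append(f"request[{stream_id}]: {prompt} + {hook['reason']}")

async def main():
    # Two prompts typed in quick succession -> two concurrent streams.
    await asyncio.gather(
        stream("What model are you?", 2, hook_delay=0.02),
        stream("Why is the sky blue?", 4, hook_delay=0.03),
    )

asyncio.run(main())
print("\n".join(LOG))
```

Even this stripped-down model shows the interleaving: both initial requests go out, then each stream's blocked-stop follow-up arrives on its own schedule.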

Here's my attempt at explaining it...

  1. Every user prompt gives rise to an async stream of message -> response -> StopHookBlockReason -> response -> StopHookBlockReason -> response -> ...
  2. Other user prompts might be typed in after that first response. They will give rise to their own async streams too.
  3. When Claude sends a request to the Sonnet model, I can only guess what transcript is included in the request. My guess: for a step in a given async stream, the transcript is (1) everything that happened up to the point this async stream started, plus (2) everything that has happened within this async stream.
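My hypothesized transcript rule can be stated as a one-liner. Again, this is a guess reconstructed from claude-trace logs, not confirmed behavior:

```python
# Hypothesized rule (unconfirmed): a request in a given async stream
# contains the global history frozen at the moment the stream started,
# plus the stream's own messages -- and nothing from sibling streams.

def transcript_for(history_at_stream_start, stream_messages):
    return history_at_stream_start + stream_messages

# Step 6 above: the second stream started right after "Sonnet", so its
# request omits StopReason2, which belongs to the sibling stream.
shared = ["What model are you?", "Sonnet"]
stream4 = ["Why is the sky blue?", "Rayleigh scattering", "StopReason4?"]
print(transcript_for(shared, stream4))
# -> ['What model are you?', 'Sonnet', 'Why is the sky blue?',
#     'Rayleigh scattering', 'StopReason4?']
```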

This is really confusing! Have I explained it right? Or is there a better way of explaining the behavior?

cc. @coygeek, @TikkanadityaSiddartha since you have looked deeply into hooks.

cc. @decider, @dicksontsai since you mentioned hooks in #3656 - although the discussion there doesn't apply, since that was a suggestion about whether a (fast) StopHook should return continue:false vs decision:block, but this current issue is a (slow) StopHook where we're not yet even in a position to return anything.

What Should Happen?

What should happen? Either

  1. StopHook should be run synchronously. That would allow there to be a single linear transcript.
  2. Otherwise, the async tree behavior should be documented and explained, and the StopHook documentation adjusted to explain that it does something quite different from what it currently says!
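For option 1, here's a sketch of what a synchronous StopHook could look like, assuming a simplified agent loop (this is my strawman, not Claude Code's real internals): the hook runs to completion before the next user prompt is read, so the transcript stays linear.

```python
# Sketch of a SYNCHRONOUS StopHook loop (my proposal, not real internals):
# the hook blocks the loop, so there is exactly one linear transcript.

def agent_loop(get_user_prompt, send_request, run_stop_hook):
    transcript = []
    while (prompt := get_user_prompt()) is not None:
        transcript.append(prompt)
        transcript.append(send_request(transcript))
        # Block here, however slow the hook is, before reading more input.
        while (hook := run_stop_hook(transcript)).get("decision") == "block":
            transcript.append(hook["reason"])
            transcript.append(send_request(transcript))
    return transcript

# Demo with canned stand-ins for the user, the model, and the hook:
prompts = iter(["What model are you?", None])
responses = iter(["Sonnet", "Concise: Sonnet"])
hook_results = iter([{"decision": "block", "reason": "Be concise"}, {}])

print(agent_loop(lambda: next(prompts),
                 lambda t: next(responses),
                 lambda t: next(hook_results)))
# prints ['What model are you?', 'Sonnet', 'Be concise', 'Concise: Sonnet']
```

The key property is that a second user prompt simply cannot interleave: it isn't read until the current prompt's entire response/StopHook chain has finished.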

Error Messages/Logs

No logs.

Steps to Reproduce

  1. Create a Python file ~/slowhook.py with this content and mark it as executable (the shebang is needed since the hook invokes the file directly):
#!/usr/bin/env python3
import time
import json
from datetime import datetime

time.sleep(20)
d = datetime.now().strftime("%H%M%S")
print(json.dumps({"decision":"block", "reason":f"Please give a concise answer in under 8 words [{d}]"}))
  2. Hook it up under ~/.claude/settings.json as a stop hook:
"Stop": [{"hooks": [{"type": "command", "command": "~/slowhook.py"}]}]
  3. Start up Claude and ask a question: "What model are you?"
  4. As soon as it prints a response, ask another question: "Why is the sky blue?"
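For anyone reproducing this: the "Stop" fragment in the step above lives under the top-level "hooks" key, so a full ~/.claude/settings.json would look roughly like this (assuming no other settings in the file):

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "~/slowhook.py"
          }
        ]
      }
    ]
  }
}
```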

Observe the responses trickle in over time. I ran this under https://github.com/badlogic/lemmy/tree/main/apps/claude-trace so I could see precisely which messages were being sent to the LLM.

Claude Model

Sonnet (default)

Is this a regression?

I don't know

Last Working Version

No response

Claude Code Version

2.0.1

Platform

Anthropic API

Operating System

macOS

Terminal/Shell

VS Code integrated terminal

Additional Information

No response

ljw1004 · Oct 01 '25 07:10

This issue has been inactive for 30 days. If the issue is still occurring, please comment to let us know. Otherwise, this issue will be automatically closed in 30 days for housekeeping purposes.

github-actions[bot] · Dec 06 '25 10:12