experimental_instructions_file - not working in the slightest.
What version of Codex is running?
v0.72.0
What subscription do you have?
Pro
Which model were you using?
gpt-5.2-max
What platform is your computer?
Ubuntu 22.04
What issue are you seeing?
No matter how small a change I make to experimental_instructions_file (e.g. experimental_instructions_file.md), I get a {"detail":"Instructions are not valid"} error. I tried adding just a single word - it still throws.
What steps can reproduce the bug?
Create an experimental_instructions_file and link it in your config; copy-paste the exact Codex system prompt; try it --> all works; change one word --> error.
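For reference, the reproduction setup looks roughly like this (a sketch of `~/.codex/config.toml`; the key name is taken from this thread, and the file path is just an example):

```toml
# ~/.codex/config.toml (example path, not authoritative)
# Points Codex at a file whose contents replace the built-in system prompt.
experimental_instructions_file = "/home/me/experimental_instructions_file.md"
```

With the file containing a byte-for-byte copy of the stock prompt the session starts fine; editing a single word triggers the {"detail":"Instructions are not valid"} response.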
What is the expected behavior?
It accepts my custom instructions as long as I don't tell it to hack the Pentagon.
Additional information
No response
This is an unsupported and unmaintained experimental feature that we should remove. I'll keep the bug open as a reminder to do so.
Perhaps you should keep it and make it work (in a way that its not allowing violations of your guardrails), as it would allow the user to take a little more influence regarding tuning the models behavior as wanted... in the current "vanilla" prompt there is a lot of stuff that is unnecessary under a lot of conditions, while there is stuff missing that may be very helpful, e.g. making Codex aware he is working in a 6-agent cooperative force and must always align /w the other "guys"..
You can use the AGENTS.md file to add or adjust prompt details.
I know, but I have the feeling that AGENTS.md doesn't get injected as "hard" or as often as it should. I don't know whether it is only injected at the beginning of a conversation and kept in context, or whether some other mechanism is involved; however, at least from my subjective perspective, it looks as if the base instructions have a more reliable influence on the agent's actual behavior. If I am wrong, I apologize :)
It goes beyond just being insufficient; it essentially discards AGENTS.md entirely.
I looked into how AGENTS.md was handled a while back (I think it was version 0.42, around September) and noticed the scheme where AGENTS.md is injected during the 'list' and 'read' phases at the start of a conversation.
After the release of GPT-5.2, I even noticed that adherence to open-spec began to degrade. I also observed that while GPT-5, GPT-5.1, and non-Codex models maintained high instruction-following capabilities within the Bash environment and my Ubuntu system, the Codex and Codex-Max series models would intermittently ignore my instructions. GPT-5.2 seems to have inherited this lower level of instruction following.
I also discovered that if I try to modify the internal prompts of the codex-cli, the model appears extremely resistant to following my diffs. It feels as though the model itself has been distilled on the Codex raw_context and only recognizes the default system-level prompts of Codex.
Furthermore, the instructions in AGENTS.md seem to be significant regarding implementation style, but offer little customization for TODOs.
Since this MD file already serves as a system-level meta-file for the model, shouldn't its priority for defining TODOs be higher than that of external spec-driven frameworks (such as open-spec and spec-kit)? If even its own head prompt can't control the model's behavior (and I'm worried that feeding it my own poor-quality prompts will make it expand its scope unwarrantedly, reducing accuracy), can't we give AGENTS.md more weight? Or perhaps emulate a spec-driven workflow to further refine the workspace scope, or the per-conversation scope, on top of AGENTS.md?
I can't possibly refine AGENTS.md for every segment of history; writing each AGENTS.md like that would drive me insane!
I get a {"detail":"Instructions are not valid"} error.
I also ran into this issue when trying to run codex built w/ different model prompts. My assumption is that experimental_instructions_file isn't actually the issue here, it's that OAI doesn't allow custom base_instructions for codex plan users. This comment is instructive:
This is expected, you can use custom prompts with API key auth.
Originally posted by @pakrym-oai in #3376
We should document this condition in docs/config.md.
That said, @thom-heinrich, try the developer_instructions config. It gets injected into the conversation before AGENTS.md and carries the "developer" role (the description in docs/example-config.md is wrong). developer_instructions takes precedence over instructions in AGENTS.md and all other user messages. The chain of command is useful context here.
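For anyone wanting to try this, a minimal sketch of what that could look like in config.toml, assuming developer_instructions takes an inline string with the semantics described above:

```toml
# ~/.codex/config.toml (sketch; key semantics as described in this thread)
# Injected before AGENTS.md with the "developer" role, so it outranks
# AGENTS.md and ordinary user messages in the chain of command.
developer_instructions = """
You are one of six cooperating agents. Always stay aligned with the
other agents' plans before making changes.
"""
```

This would cover the 6-agent alignment use case mentioned earlier in the thread without touching the base system prompt.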
This is an unsupported and unmaintained experimental feature that we should remove. I'll keep the bug open as a reminder to do so.
Hi @etraut-openai, I believe this feature works fine with custom models; it only breaks for Codex plan users. Would you mind explaining the justification for removing this feature?🤔