
Unable to submit request because it must include at least one parts field

Open ankargren opened this issue 1 month ago • 10 comments

Describe the bug: A root agent is unable to transfer to its subagents.

To Reproduce: Steps to reproduce the behavior:

  1. Install @google/adk v0.2.0
  2. Add the following agent definitions (for example, in src/agent.ts):
import { LlmAgent } from '@google/adk';

export const jokeAgent = new LlmAgent({
  name: 'tell_a_joke_agent',
  model: 'gemini-2.0-flash',
  description: 'Tell a joke',
  instruction: `You are a helpful assistant that tells a joke.`,
});

export const philosophicalAgent = new LlmAgent({
  name: 'philosophical_agent',
  model: 'gemini-2.0-flash',
  description: 'Say a philosophical statement',
  instruction: `You are a helpful assistant that says a philosophical statement.`,
});

export const rootAgent = new LlmAgent({
  name: 'root_agent',
  model: 'gemini-2.0-flash',
  description: 'Root agent',
  instruction: `You are a helpful assistant that is the root agent.`,
  subAgents: [jokeAgent, philosophicalAgent],
});

  3. Start the local web UI and trigger one of the subagents, e.g. "tell me a joke"
  4. See the error appear: Unable to submit request because it must include at least one parts field, which describes the prompt input. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini

Expected behavior: The root agent should be able to transfer to a subagent.

Screenshots

[screenshot]

Desktop (please complete the following information):

  • OS: macOS (Darwin 25.1.0, arm64)
  • TS version/environment: 5.9.3
  • ADK version (see maven dependency): 0.2.0

ankargren avatar Dec 17 '25 07:12 ankargren

Hello, thanks for submitting the issue.

I tried to replicate this, but it is not reproducible on my side: [screenshot]

Can you please double-check, or provide some additional context? Thanks!

kalenkevich avatar Dec 18 '25 06:12 kalenkevich

I have the same issue. The first prompt always seems to work, but then something breaks and I get the same error response every time after that.

"@google/adk": "^0.2.1", "@google/adk-devtools": "^0.2.1"

google_adk % npx @google/adk-devtools run gitlab_workflow/index.ts 
[dotenv] injecting env (3) from .env -- tip: ⚙️  load multiple .env files with { path: ['.env.local', '.env'] }
Running agent coordinator_agent, type exit to exit.
[user]: what can you do?
[ADK INFO]: event:  {"invocationId":"e-4eb3b16f-1b8b-4b27-b55f-92940f9dab69","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"what can you do?"}]},"id":"li9wSLls","longRunningToolIds":[],"timestamp":1766052954063}
[ADK INFO]: Sending out request, model: gemini-3-flash-preview, backend: VERTEX_AI, stream: false
[coordinator_agent]: I am the **Project Setup Coordinator**, and I can automate the end-to-end provisioning of new cloud projects on **Google Cloud Platform (GCP)** or **SAP Business Technology Platform (BTP)**.

Here is what I can do for you:

1.  **GitLab Project Creation**: I automatically create a new GitLab repository for your project using approved corporate templates.
2.  **Infrastructure as Code (IaC)**: I coordinate the update of the central Landing Zone repositories. This includes creating a feature branch, adding your project definition to the Terraform configuration, and opening a Merge Request.
3.  **Lifecycle Management**: I monitor the Merge Request for feedback and ensure the infrastructure setup proceeds correctly.
4.  **Environment Configuration**: Once the project is ready, I inject the required CI/CD variables (like Project IDs, Regions, or Credentials) into your new GitLab project so you can start developing immediately.

**To get started, please provide me with:**
*   **Project Name**
*   **Target Cloud** (Google or BTP)
*   **Jira Ticket ID**
[user]: Please create a new project called "Agent Test" on gcp. Jira Ticket Number is SOR-23455.
[ADK INFO]: event:  {"invocationId":"e-b1fca1d7-06f6-4059-8682-e06ba090d943","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"Please create a new project called \"Agent Test\" on gcp. Jira Ticket Number is XXX"}]},"id":"kX6Ig3xM","longRunningToolIds":[],"timestamp":1766052987543}
[ADK INFO]: event:  {"invocationId":"e-4eb3b16f-1b8b-4b27-b55f-92940f9dab69","author":"coordinator_agent","id":"U99OjX9p","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"longRunningToolIds":[],"timestamp":1766052954068,"content":{"role":"model","parts":[{"text":"I am the **Project Setup Coordinator**, and I can automate the end-to-end provisioning of new cloud projects on **Google Cloud Platform (GCP)** or **SAP Business Technology Platform (BTP)**.\n\nHere is what I can do for you:\n\n1.  **GitLab Project Creation**: I automatically create a new GitLab repository for your project using approved corporate templates.\n2.  **Infrastructure as Code (IaC)**: I coordinate the update of the central Landing Zone repositories. This includes creating a feature branch, adding your project definition to the Terraform configuration, and opening a Merge Request.\n3.  **Lifecycle Management**: I monitor the Merge Request for feedback and ensure the infrastructure setup proceeds correctly.\n4.  **Environment Configuration**: Once the project is ready, I inject the required CI/CD variables (like Project IDs, Regions, or Credentials) into your new GitLab project so you can start developing immediately.\n\n**To get started, please provide me with:**\n*   **Project Name**\n*   **Target Cloud** (Google or BTP)\n*   **Jira Ticket ID**","thoughtSignature":"Cq4GAY89a19/AyjU+H8CbXfP279wCYKou8vs91oAeaUCKQzK763ui9NF9vHoy0wuFyFv2yX7rbKmWEZbBoQyFJSheLlydDOB9xPiwDYTX5YAjrPPFxM9Cw8W4xqbImhAhRp6OrAhKXQYMwjKD/+cNORu3x5e4a8A4AnINsyY3d/aAMeWjgeREW1C0DBrAS55USeKjeNj0PncguF48rwaQZGFJ68g7o4QQlbavqEUWDmqWCWFIqu+9ZZ+zJQy12pqgdU7p/PaC124H/ua9DDYxaYfUFDCP8HrWvVd16hk13durCFJ6c+OTa0UGi+BpQtthXTYVzE2mous95Hi/vN+9w5ncckA5lp9TN1m5dZsTQDj2988GjF2sXTwnA8PvS7PcTNpHUzo1KEuEDmfLnDr7Rnvf9hRJj2Bq7AZT1Z1McF0WvNOoa/45rrwgmTyW/FmqjQ6+gLuAEmiZRVXpNQoLlMIg9UHT7zbVkYPw2DMeEhFJwbnt8JY1B0lZ+J2if2xGfVVu//pPtYN4tcFFwrEemqszXxoe58h0NPRuLX/nMZ185UvTw3g83R6cMcEPchZezCVdPs5abP59QZntacdFUsY4r4TsHzyFR5hdggnzawDYVyZFy9TIcvhLxNY34Jfvnn3winIxTf0+xTkQCjgzoi6XdFrlGCQ+GJeK/M7TFJ1DxgOiAuAUEIfaULElJoOvH4mXvV4GP08/Wry3xmLOD+CBwsHOw8VwcopIsxoDbDMZjVzpi/SM+zYrYQeeOXIfQ0qUzUSdgb8l4CevkKgDbOJdJMKy78jWLyzDsu3vw0T9r2zrvd0DWYnmcBCmaOH5p7oUJtBhO07CiZV5oTKV1Fw8XGQKz+NvmCJn2fKqJS4zlhR2V2tn+NbnXSRNjLHoZOqVbDjI93PAbh/lbLxBumgWOf8KWbaEzJUANYt1NLg5SbzNABv0ZzjbkgiTTrkVN1l+zTIhuTv0yt3wc5HH7Xt7rvQjYK82SrsY61iwkQh2nV2w3WXdFTnvUbwCdfAB9T2oLMxaO1uUVveHuAktAFwiHNMuzus4ZQWB5NN2paUEOzqLkTLhbh2CWD/4ezHJg=="}]},"usageMetadata":{"promptTokenCount":733,"candidatesTokenCount":233,"totalTokenCount":1153,"trafficType":"ON_DEMAND","promptTokensDetails":[{"modality":"TEXT","tokenCount":733}],"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":233}],"thoughtsTokenCount":187},"finishReason":"STOP"}
[ADK INFO]: Sending out request, model: gemini-3-flash-preview, backend: VERTEX_AI, stream: false
[ADK INFO]: event:  {"invocationId":"e-00408b19-6e74-48fe-ab47-6eb49e9f447e","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"Create a new project named 'Agent Test' in the namespace 'nla/blub' using the template 'template'. Please return the project ID and Web URL."}]},"id":"XC63xyXn","longRunningToolIds":[],"timestamp":1766052990638}
[ADK INFO]: Sending out request, model: gemini-3-flash-preview, backend: VERTEX_AI, stream: false
[ADK INFO]: event:  {"invocationId":"e-ce438170-8973-46b5-9c95-9e48e14c7c85","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"Please create a new GitLab project. \nProject Name: Agent Test\nNamespace: nla/blub\nTemplate: template\nReturn the project ID and the Web URL."}]},"id":"HU0C4Edt","longRunningToolIds":[],"timestamp":1766052993377}
[ADK INFO]: Sending out request, model: gemini-3-flash-preview, backend: VERTEX_AI, stream: false
[user]: what is the status?
[ADK INFO]: event:  {"invocationId":"e-c7973a12-e436-47ed-806e-e791238e3dac","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"what is the status?"}]},"id":"GrMpWB3K","longRunningToolIds":[],"timestamp":1766053709090}
[ADK INFO]: event:  {"invocationId":"e-b1fca1d7-06f6-4059-8682-e06ba090d943","author":"coordinator_agent","id":"ohpc9Dq6","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"longRunningToolIds":[],"timestamp":1766052993379,"errorCode":"400","errorMessage":"Unable to submit request because it must include at least one parts field, which describes the prompt input. Learn more: https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/gemini"}
[ADK INFO]: Sending out request, model: gemini-3-flash-preview, backend: VERTEX_AI, stream: false
[user]: 

cpf-hse avatar Dec 18 '25 10:12 cpf-hse

@kalenkevich thanks for the answer! That's strange though. Here is my complete setup:

google-adk-demo/
├── package.json
├── tsconfig.json
└── src/
    └── agent.ts

package.json

{
  "name": "google-adk-demo",
  "version": "1.0.0",
  "description": "",
  "type": "module",
  "main": "index.js",
  "scripts": {
    "start": "npx ts-node --esm src/index.ts",
    "adk": "npx adk web"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "dependencies": {
    "@google/adk": "^0.2.1"
  },
  "devDependencies": {
    "@google/adk-devtools": "^0.2.1",
    "@types/node": "^25.0.3",
    "ts-node": "^10.9.2",
    "typescript": "^5.9.3"
  }
}

tsconfig.json

{
  "compilerOptions": {
    "target": "ES2022",
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "esModuleInterop": true,
    "strict": true,
    "skipLibCheck": true,
    "outDir": "./dist"
  },
  "include": ["src/**/*"]
}

src/agent.ts

import { LlmAgent } from '@google/adk';

export const jokeAgent = new LlmAgent({
  name: 'tell_a_joke_agent',
  model: 'gemini-2.0-flash',
  description: 'Tell a joke',
  instruction: `You are a helpful assistant that tells a joke.`,
});

export const philosophicalAgent = new LlmAgent({
  name: 'philosophical_agent',
  model: 'gemini-2.0-flash',
  description: 'Say a philosophical statement',
  instruction: `You are a helpful assistant that says a philosophical statement.`,
});

export const rootAgent = new LlmAgent({
  name: 'root_agent',
  model: 'gemini-2.0-flash',
  description: 'Root agent',
  instruction: `You are a helpful assistant that is the root agent.`,
  subAgents: [jokeAgent, philosophicalAgent],
});

Then I run it with pnpm exec adk web src. Is there any other context information that would be helpful to share?

ankargren avatar Dec 18 '25 18:12 ankargren

If it's of any help, here are some versions I'm on:

  | Component            | Version                      |
  |----------------------|------------------------------|
  | OS                   | macOS (Darwin 25.1.0, arm64) |
  | Node.js              | v22.12.0                     |
  | pnpm                 | 10.2.0                       |
  | @google/adk          | 0.2.1                        |
  | @google/adk-devtools | 0.2.1                        |
  | TypeScript           | 5.9.3                        |
  | Google Cloud SDK     | 539.0.0                      |

and using Vertex AI via Application Default Credentials (which works since I can get the root agent to respond).
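
(For anyone else trying to reproduce this against Vertex AI, here is a minimal sketch of the setup described above, assuming Application Default Credentials are configured with gcloud; the project ID and location values are placeholders.)

# authenticate once so Application Default Credentials are available to the ADK
gcloud auth application-default login

# .env (placeholder values)
GOOGLE_GENAI_USE_VERTEXAI=TRUE
GOOGLE_CLOUD_PROJECT=your-project-id
GOOGLE_CLOUD_LOCATION=global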

ankargren avatar Dec 18 '25 19:12 ankargren

I copied your agent definitions into an app and am able to reproduce this locally. It does happen after the first message. We will take a look at why the subagent transfer is breaking. Thank you for reporting; we'll let you know.

ScottMansfield avatar Dec 18 '25 22:12 ScottMansfield

So far I've found that gemini-3-flash-preview works fine, and so does gemini-2.5-flash.

I'm not sure yet what the root cause is for 2.0, but if you can use 2.5 or 3 for now you will be able to continue developing. Newer models are significantly better, so I would encourage you to use the latest one if possible. I will keep this open for now, but since the mitigation is easy it will be lower priority.
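
(A minimal sketch of that workaround, with the model name pulled into a single constant so the switch is a one-line change; the philosophical agent is omitted for brevity, and the MODEL constant is only an illustration, not part of the ADK API.)

import { LlmAgent } from '@google/adk';

// Keep the model name in one place so the whole agent tree can be switched
// away from gemini-2.0-flash (e.g. to gemini-2.5-flash or gemini-3-flash-preview) at once.
const MODEL = 'gemini-2.5-flash';

export const jokeAgent = new LlmAgent({
  name: 'tell_a_joke_agent',
  model: MODEL,
  description: 'Tell a joke',
  instruction: `You are a helpful assistant that tells a joke.`,
});

export const rootAgent = new LlmAgent({
  name: 'root_agent',
  model: MODEL,
  description: 'Root agent',
  instruction: `You are a helpful assistant that is the root agent.`,
  subAgents: [jokeAgent],
});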

ScottMansfield avatar Dec 19 '25 05:12 ScottMansfield

Thanks @ScottMansfield! I tried 2.5 (I don't have access to the 3 preview yet, per company policy), but I'm still getting the same error, unfortunately.

[screenshot]
[ADK INFO]: event:  {"invocationId":"e-cc7efc0b-fb92-4bf7-95f3-b25629d6b57c","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"what can you do"}]},"id":"h7tZFiK7","longRunningToolIds":[],"timestamp":1766128312240}
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: VERTEX_AI, stream: false
[ADK INFO]: event:  {"invocationId":"e-4b0ff6c4-22bf-42ac-b723-b134f0918fce","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"tell me a joke"}]},"id":"Oiljvuri","longRunningToolIds":[],"timestamp":1766128318115}
[ADK INFO]: event:  {"invocationId":"e-cc7efc0b-fb92-4bf7-95f3-b25629d6b57c","author":"root_agent","id":"XLHYled8","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"longRunningToolIds":[],"timestamp":1766128312243,"content":{"role":"model","parts":[{"text":"I am the root agent. I can answer general questions and help you navigate to other specialized agents. I can transfer you to an agent that can tell jokes or one that can make philosophical statements. How can I help you today?"}]},"usageMetadata":{"promptTokenCount":205,"candidatesTokenCount":46,"totalTokenCount":419,"trafficType":"ON_DEMAND","promptTokensDetails":[{"modality":"TEXT","tokenCount":205}],"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":46}],"thoughtsTokenCount":168},"finishReason":"STOP"}
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: VERTEX_AI, stream: false
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: VERTEX_AI, stream: false

Do you have any other ideas about what the problem might be? I wonder what else on my end could be causing this issue.

ankargren avatar Dec 19 '25 07:12 ankargren

I don't want to just say "works on my machine", so here is what I'm working from in case you can spot something I can't yet see. I tried the web UI just now with 2.5 Flash and it worked fine.

[ADK INFO]: event:  {"invocationId":"e-7d308d96-c81c-462d-bb5b-f10694907da4","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"what can you do?"}]},"id":"rnLG6NLr","longRunningToolIds":[],"timestamp":1766128735383}
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: GEMINI_API, stream: false
[ADK INFO]: event:  {"invocationId":"e-083ee11c-a834-4ad0-9b71-e31702bbccea","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"tell me a joke"}]},"id":"wKcevqDM","longRunningToolIds":[],"timestamp":1766128740015}
[ADK INFO]: event:  {"invocationId":"e-7d308d96-c81c-462d-bb5b-f10694907da4","author":"root_agent","id":"16yERar8","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"longRunningToolIds":[],"timestamp":1766128735386,"content":{"parts":[{"text":"I am a helpful assistant. I can also tell jokes or make philosophical statements if you'd like.\n"}],"role":"model"},"usageMetadata":{"promptTokenCount":232,"candidatesTokenCount":21,"totalTokenCount":253,"promptTokensDetails":[{"modality":"TEXT","tokenCount":232}]},"finishReason":"STOP"}
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: GEMINI_API, stream: false
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: GEMINI_API, stream: false

One difference I noticed is that I am using GEMINI_API as the backend while you are using VERTEX_AI. I don't think this should cause a problem, but it is a difference.
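
(Roughly, the backend selection comes down to environment variables; this is a sketch assuming the usual @google/genai convention, and the API key variable name in particular is an assumption that may differ by SDK version.)

# Vertex AI backend (as in the reports above)
GOOGLE_GENAI_USE_VERTEXAI=TRUE

# Gemini API backend (what I am using); variable name assumed
GOOGLE_API_KEY=your-api-key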

I followed the quickstart to get a clean env and put in your agent definitions in place of the ones in the article.

Here's what I am using:

agent.ts:

import { LlmAgent } from '@google/adk';

export const jokeAgent = new LlmAgent({
  name: 'tell_a_joke_agent',
  // model: 'gemini-2.0-flash',
  model: 'gemini-2.5-flash',
  // model: 'gemini-3-flash-preview',
  description: 'Tell a joke',
  instruction: `You are a helpful assistant that tells a joke.`,
});

export const philosophicalAgent = new LlmAgent({
  name: 'philosophical_agent',
  // model: 'gemini-2.0-flash',
  model: 'gemini-2.5-flash',
  // model: 'gemini-3-flash-preview',
  description: 'Say a philosophical statement',
  instruction: `You are a helpful assistant that says a philosophical statement.`,
});

export const rootAgent = new LlmAgent({
  name: 'root_agent',
  // model: 'gemini-2.0-flash',
  model: 'gemini-2.5-flash',
  // model: 'gemini-3-flash-preview',
  description: 'Root agent',
  instruction: `You are a helpful assistant that is the root agent.`,
  subAgents: [jokeAgent, philosophicalAgent],
});

tsconfig.json:

{
  // Visit https://aka.ms/tsconfig to read more about this file
  "compilerOptions": {
    // File Layout
    // "rootDir": "./src",
    // "outDir": "./dist",

    // Environment Settings
    // See also https://aka.ms/tsconfig/module
    "module": "nodenext",
    "target": "esnext",
    "types": [],
    // For nodejs:
    // "lib": ["esnext"],
    // "types": ["node"],
    // and npm install -D @types/node

    // Other Outputs
    "sourceMap": true,
    "declaration": true,
    "declarationMap": true,

    // Stricter Typechecking Options
    "noUncheckedIndexedAccess": true,
    "exactOptionalPropertyTypes": true,

    // Style Options
    // "noImplicitReturns": true,
    // "noImplicitOverride": true,
    // "noUnusedLocals": true,
    // "noUnusedParameters": true,
    // "noFallthroughCasesInSwitch": true,
    // "noPropertyAccessFromIndexSignature": true,

    // Recommended Options
    "strict": true,
    "jsx": "react-jsx",
    "verbatimModuleSyntax": false,
    "isolatedModules": true,
    "noUncheckedSideEffectImports": true,
    "moduleDetection": "force",
    "skipLibCheck": true,
  }
}

package.json:

{
  "name": "my-agent-22",
  "version": "1.0.0",
  "description": "",
  "main": "agent.ts",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "type": "commonjs",
  "devDependencies": {
    "typescript": "^5.9.3"
  },
  "dependencies": {
    "@google/adk": "^0.2.1",
    "@google/adk-devtools": "^0.2.1"
  }
}

ScottMansfield avatar Dec 19 '25 07:12 ScottMansfield

I have re-created your setup @ScottMansfield and I run into the same issue as @ankargren.

Basic info:

node: v22.18.0
typescript: 5.9.3
OS: macOS 26.1 (25B78)
@google/adk: 0.2.1
@google/adk-devtools: 0.2.1

I increased the log level in my agent.ts to get more logging.

import { LlmAgent, setLogLevel, LogLevel } from "@google/adk";

setLogLevel(LogLevel.DEBUG);

export const jokeAgent = new LlmAgent({
	name: "tell_a_joke_agent",
	// model: 'gemini-2.0-flash',
	model: "gemini-2.5-flash",
	// model: 'gemini-3-flash-preview',
	description: "Tell a joke",
	instruction: `You are a helpful assistant that tells a joke.`,
});

export const philosophicalAgent = new LlmAgent({
	name: "philosophical_agent",
	// model: 'gemini-2.0-flash',
	model: "gemini-2.5-flash",
	// model: 'gemini-3-flash-preview',
	description: "Say a philosophical statement",
	instruction: `You are a helpful assistant that says a philosophical statement.`,
});

export const rootAgent = new LlmAgent({
	name: "root_agent",
	// model: 'gemini-2.0-flash',
	model: "gemini-2.5-flash",
	// model: 'gemini-3-flash-preview',
	description: "Root agent",
	instruction: `You are a helpful assistant that is the root agent.`,
	subAgents: [jokeAgent, philosophicalAgent],
});

I also use Google Cloud and Vertex AI. My .env:

GOOGLE_GENAI_USE_VERTEXAI=TRUE
GOOGLE_CLOUD_PROJECT=my-project
GOOGLE_CLOUD_LOCATION=global

logs:

+-----------------------------------------------------------------------------+
| ADK Web Server started                                                      |
|                                                                             |
| For local testing, access at http://localhost:8000.                        |
+-----------------------------------------------------------------------------+
[ADK INFO]: event:  {"invocationId":"e-3ad8740a-0073-412e-af3e-e3456c4beaeb","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"What can you do?"}]},"id":"kkr017JC","longRunningToolIds":[],"timestamp":1766147755261}
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: VERTEX_AI, stream: false
[ADK DEBUG]: Skipping output save for agent root_agent: outputKey is not set
[ADK INFO]: event:  {"invocationId":"e-81b3170f-79eb-4a85-a651-3655720d9a9e","author":"user","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"content":{"role":"user","parts":[{"text":"tell me a joke"}]},"id":"XhJOqmqU","longRunningToolIds":[],"timestamp":1766147764049}
[ADK INFO]: event:  {"invocationId":"e-3ad8740a-0073-412e-af3e-e3456c4beaeb","author":"root_agent","id":"gJK07WSg","actions":{"stateDelta":{},"artifactDelta":{},"requestedAuthConfigs":{},"requestedToolConfirmations":{}},"longRunningToolIds":[],"timestamp":1766147755265,"content":{"role":"model","parts":[{"text":"I can tell jokes or philosophical statements. What would you like me to do?"}]},"usageMetadata":{"promptTokenCount":206,"candidatesTokenCount":16,"totalTokenCount":222,"trafficType":"ON_DEMAND","promptTokensDetails":[{"modality":"TEXT","tokenCount":206}],"candidatesTokensDetails":[{"modality":"TEXT","tokenCount":16}]},"finishReason":"STOP"}
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: VERTEX_AI, stream: false
[ADK DEBUG]: Skipping output save for agent root_agent: outputKey is not set
[ADK DEBUG]: execute_tool transfer_to_agent
[ADK DEBUG]: callToolAsync transfer_to_agent
[ADK DEBUG]: traceToolCall {
  tool: 'transfer_to_agent',
  args: { agentName: 'tell_a_joke_agent' },
  functionResponseEvent: 'gdcv9ID8'
}
[ADK DEBUG]: Skipping output save for agent root_agent: outputKey is not set
[ADK DEBUG]: Skipping output save for agent root_agent: outputKey is not set
[ADK DEBUG]: Skipping output save for agent root_agent: outputKey is not set
[ADK INFO]: Sending out request, model: gemini-2.5-flash, backend: VERTEX_AI, stream: false
[ADK DEBUG]: Skipping output save for agent tell_a_joke_agent: outputKey is not set
[ADK DEBUG]: Skipping output save for agent root_agent: event authored by tell_a_joke_agent
[screenshot]

cpf-hse avatar Dec 19 '25 12:12 cpf-hse

Since I am currently having issues with a larger agent orchestration, I wanted to check whether the problem lies in Gemini itself or in the ADK.

I recreated the same example using the Python ADK, and everything works! Since I am not very fluent in Python (and even Gemini CLI had trouble recreating my TypeScript agent with the Python ADK), I hope someone can identify and fix the issue. 👍

from google.adk.agents.llm_agent import Agent

joke_agent = Agent(
    model='gemini-3-flash-preview',
    name='joke_agent',
    description='A funny assistant that tells jokes.',
    instruction='Tell a funny joke in response to user prompts',
)

philosophy_agent = Agent(
    model='gemini-3-flash-preview',
    name='philosophy_agent',
    description='Say a philosophical statement.',
    instruction='You are a helpful assistant that says a philosophical statement.',
)

root_agent = Agent(
    model='gemini-3-flash-preview',
    name='root_agent',
    description='Root agent',
    instruction='You are a helpful assistant that is the root agent.',
    sub_agents=[joke_agent, philosophy_agent],
)
[screenshots]

cpf-hse avatar Dec 19 '25 13:12 cpf-hse