
[Bug]: `TypeError: 'async_generator' object is not iterable` when processing crawl requests with multiple URLs

Open zinodynn opened this issue 9 months ago • 0 comments

crawl4ai version

0.6.0-r2

Expected Behavior

When I send a crawl request for a task containing multiple URLs, the response should include a result for each URL.

Current Behavior

Instead, I encounter an error like this:

Traceback (most recent call last):
  File "/workspace/deploy/docker/api.py", line 445, in handle_crawl_request
    "results": [result.model_dump() for result in results],
                                                  ^^^^^^^
TypeError: 'async_generator' object is not iterable

The error occurs when I send a request with multiple URLs for a task, like this:

curl -X POST http://localhost:11235/crawl \
  -H "Content-Type: application/json" \
  -d '{
    "urls": ["https://example.com", "https://www.google.com"],
    "crawler_config": {
      "type": "CrawlerRunConfig",
      "params": {
        "scraping_strategy": {"type": "WebScrapingStrategy", "params": {}},
        "exclude_social_media_domains": ["facebook.com", "twitter.com", "x.com", "linkedin.com", "instagram.com", "pinterest.com", "tiktok.com", "snapchat.com", "reddit.com"],
        "stream": true
      }
    }
  }'

Is this reproducible?

Yes

Inputs Causing the Bug


Steps to Reproduce


Code snippets

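A Python equivalent of the curl request above, sketched with httpx (the endpoint and payload simply mirror the curl command; httpx itself is not required by crawl4ai):

import httpx

payload = {
    "urls": ["https://example.com", "https://www.google.com"],
    "crawler_config": {
        "type": "CrawlerRunConfig",
        "params": {
            "scraping_strategy": {"type": "WebScrapingStrategy", "params": {}},
            "exclude_social_media_domains": [
                "facebook.com", "twitter.com", "x.com", "linkedin.com",
                "instagram.com", "pinterest.com", "tiktok.com",
                "snapchat.com", "reddit.com",
            ],
            "stream": True,
        },
    },
}

# Same request as the curl command above; the server-side handler raises the TypeError.
response = httpx.post("http://localhost:11235/crawl", json=payload, timeout=120)
print(response.status_code, response.text)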

OS

Linux

Python version

3.12.10

Browser

No response

Browser version

No response

Error logs & Screenshots (if applicable)

No response

zinodynn · May 02 '25 17:05