
[Bug]: Proxy Not Working with Oxylabs and Bright Data on v0.6.3 (net::ERR_NO_SUPPORTED_PROXIES)

Open · avrum opened this issue 8 months ago · 4 comments

crawl4ai version

0.6.3


Description

I'm experiencing persistent issues using both Oxylabs and Bright Data proxy services with Crawl4AI (v0.6.3).
I've reviewed and tried the workarounds suggested in #993 and by @rubinsh, but have not managed to get proxies working reliably.


What works

  • Using the same proxy credentials with plain Python requests or curl works fine.
  • For example:
    proxy_config = ProxyConfig(
        server=f"https://{host}:{port}",
        username=proxy_username,
        password=proxy_password
    )
    # The same host, port, and credentials work with plain requests/curl
    
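For comparison, the working non-browser path looks roughly like the sketch below. All host, port, and credential values are placeholders, since the real ones are not shown in the report:

```python
# Minimal sketch of the proxy setup that works outside the browser.
# All values below are placeholders, not the reporter's real credentials.
host, port = "proxy.example.com", 7777
proxy_username, proxy_password = "user", "pass"

# requests/curl accept credentials embedded directly in the proxy URL:
proxy_url = f"http://{proxy_username}:{proxy_password}@{host}:{port}"
proxies = {"http": proxy_url, "https": proxy_url}

# e.g. requests.get("https://httpbin.org/ip", proxies=proxies, timeout=30)
print(proxy_url)
```

Note that this style embeds the credentials in the URL itself, which is exactly the part browsers handle differently.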

What doesn't work

  • When using this proxy config with Crawl4AI’s BrowserConfig, the browser always fails to load the page.

  • I consistently get the following error:

    net::ERR_NO_SUPPORTED_PROXIES
    
  • Example traceback:

    [ERROR]... × https://www.google.com | Error: Unexpected error in _crawl_web at line 744
    in _crawl_web (.venv\lib\site-packages\crawl4ai/async_crawler_strategy.py):
    Error: Failed on navigating ACS-GOTO:
    Page.goto: net::ERR_NO_SUPPORTED_PROXIES at https://www.google.com/
    
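net::ERR_NO_SUPPORTED_PROXIES comes from Chromium itself and typically indicates the browser was handed a proxy specification in a form it does not support; unlike requests, Chromium does not accept credentials embedded in the proxy server URL. As a hedged illustration (this is not Crawl4AI's actual internals), a hypothetical helper that converts a requests-style proxy URL into the separate fields a browser-level config expects:

```python
# Hypothetical helper (not Crawl4AI internals): split a requests-style
# proxy URL into the separate server/username/password fields that
# browser-level proxy configs expect, since Chromium does not accept
# credentials embedded in the proxy server URL.
from urllib.parse import urlsplit

def split_proxy_url(url: str) -> dict:
    parts = urlsplit(url)
    return {
        "server": f"{parts.scheme}://{parts.hostname}:{parts.port}",
        "username": parts.username,
        "password": parts.password,
    }

print(split_proxy_url("http://user:pass@proxy.example.com:7777"))
```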

Steps to Reproduce

  1. Set up a proxy with Oxylabs or Bright Data (residential proxy).
  2. Pass proxy config to Crawl4AI as follows:
    proxy_config = ProxyConfig(
        server=f"https://{host}:{port}",
        username=proxy_username,
        password=proxy_password
    )
    browser_config = BrowserConfig(
        headless=True,
        proxy_config=proxy_config
    )
    
  3. Try to crawl any page (e.g., google.com).

Expected Behavior

The browser should successfully connect through the proxy and load the target page, as it does in a regular Python request.

Actual Behavior

The crawler fails with net::ERR_NO_SUPPORTED_PROXIES (see above).


Environment

  • Crawl4AI version: 0.6.3
  • Python: 3.10.11
  • OS: Windows 11

Additional Notes

  • Tried with both Oxylabs and Bright Data residential proxies, multiple credentials, ports, and servers.
  • Previous discussion: #993, @avrum

Is this reproducible?

Yes, 100% reproducible with both proxy providers.


avrum · Jun 02 '25

I'm having the same error here :/

thyago-coelho-rentcars · Jul 01 '25

You can fix it using:

    from crawl4ai import ProxyConfig
    from crawl4ai.async_configs import BrowserConfig

    proxy_config = ProxyConfig(
        server="URL",
        username="USERNAME",
        password="PASS"
    )
    browser_cfg = BrowserConfig(
        headless=True,
        proxy_config=proxy_config,
        verbose=True
    )

shubhamwebspider · Jul 15 '25

@avrum Did you try testing this in crawl4ai v0.7.4?

SohamKukreti · Sep 08 '25

@avrum, as Soham explained here, the recommended approach now is to use the ProxyConfig object instead:

proxy_config = ProxyConfig(
    server=f"http://{proxy_address}:{proxy_port}",
    username=username,
    password=password,
)

and then pass it to CrawlerRunConfig.
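The substantive difference from the original report is the proxy scheme: the failing config used an https:// server URL, while the suggestion above uses http://. A minimal sketch of that difference, using plain dicts whose field names mirror ProxyConfig (endpoint and credentials are placeholders):

```python
# The reporter's failing shape vs. the suggested shape; only the proxy
# scheme differs. All values are placeholders.
failing = {"server": "https://proxy.example.com:7777",
           "username": "user", "password": "pass"}
working = {"server": "http://proxy.example.com:7777",
           "username": "user", "password": "pass"}

# Chromium-based browsers reject some https:// proxy setups with
# net::ERR_NO_SUPPORTED_PROXIES, while the http:// form is accepted.
print(failing["server"], "->", working["server"])
```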

ntohidi · Nov 18 '25