Proxy Issues in ConnectorProviders
Since the built-in client uses `HttpURLConnection` and can only use a proxy that is configured globally via system properties, I've looked into the other connector providers on offer and stumbled onto some issues.
See the SSCCE for this issue: https://github.com/leonard84/jersey-issues-3655
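For reference, a minimal sketch of the two proxy configuration styles (Jersey 2.x; the Netty connector and the local proxy address are just examples):

```java
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

import org.glassfish.jersey.client.ClientConfig;
import org.glassfish.jersey.client.ClientProperties;
import org.glassfish.jersey.netty.connector.NettyConnectorProvider;

public class ProxyConfigExample {
    public static void main(String[] args) {
        // Default connector (HttpURLConnection): the proxy can only be set
        // JVM-wide via system properties.
        System.setProperty("http.proxyHost", "localhost");
        System.setProperty("http.proxyPort", "3128");
        Client defaultClient = ClientBuilder.newClient();

        // Alternative connectors: the proxy can be set per client via
        // ClientProperties.PROXY_URI.
        ClientConfig config = new ClientConfig()
                .connectorProvider(new NettyConnectorProvider())
                .property(ClientProperties.PROXY_URI, "http://localhost:3128");
        Client nettyClient = ClientBuilder.newClient(config);
    }
}
```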
- The JDK connector seems to expect SSL even if a plain HTTP request is made, or something else goes wrong
- Netty has the same problem
- Netty and JDK use `CONNECT` even for HTTP, which might be the cause
- `NettyConnector` calls `io.netty.handler.proxy.HttpProxyHandler#HttpProxyHandler(java.net.SocketAddress, java.lang.String, java.lang.String)` even if only `ClientProperties.PROXY_URI` is set, causing an NPE; it should check for null and call `io.netty.handler.proxy.HttpProxyHandler#HttpProxyHandler(java.net.SocketAddress)` instead (see the sketch after the stack trace below)
- The connectors handle a 403 from the proxy differently:
  - Apache returns a response with status 403
  - JDK fails with `java.io.IOException: "Connecting to proxy failed with status 403."`
  - Grizzly and Jetty work for HTTP and return a response with status 403
  - Jetty fails for HTTPS with `javax.ws.rs.ProcessingException: java.util.concurrent.ExecutionException: org.eclipse.jetty.client.HttpResponseException: Unexpected HttpResponse[HTTP/1.1 403 Forbidden]@3e3692a6 for HttpRequest[CONNECT localhost:1337 HTTP/1.1]@fdb79f3`
  - `GrizzlyConnector` fails when the proxy returns 403 to a `CONNECT` for HTTPS and blocks indefinitely:
```
java.lang.NullPointerException: null
at org.glassfish.grizzly.http.HttpClientFilter$ClientHttpResponseImpl.getProcessingState(HttpClientFilter.java:714) ~[grizzly-http-2.4.0.jar:2.4.0]
at com.ning.http.client.providers.grizzly.HttpTransactionContext.currentTransaction(HttpTransactionContext.java:134) ~[grizzly-http-client-1.13.jar:na]
at com.ning.http.client.providers.grizzly.AhcEventFilter.onHttpHeaderError(AhcEventFilter.java:261) ~[grizzly-http-client-1.13.jar:na]
at org.glassfish.grizzly.http.HttpCodecFilter.handleRead(HttpCodecFilter.java:627) ~[grizzly-http-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.http.HttpClientFilter.handleRead(HttpClientFilter.java:175) ~[grizzly-http-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.filterchain.ExecutorResolver$9.execute(ExecutorResolver.java:119) ~[grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeFilter(DefaultFilterChain.java:284) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.executeChainPart(DefaultFilterChain.java:201) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.execute(DefaultFilterChain.java:133) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.filterchain.DefaultFilterChain.process(DefaultFilterChain.java:112) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.ProcessorExecutor.execute(ProcessorExecutor.java:77) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.nio.transport.TCPNIOTransport.fireIOEvent(TCPNIOTransport.java:539) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.strategies.AbstractIOStrategy.fireIOEvent(AbstractIOStrategy.java:112) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.run0(WorkerThreadIOStrategy.java:117) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy.access$100(WorkerThreadIOStrategy.java:56) [grizzly-framework-2.4.0.jar:2.4.0]
at org.glassfish.grizzly.strategies.WorkerThreadIOStrategy$WorkerThreadRunnable.run(WorkerThreadIOStrategy.java:137) [grizzly-framework-2.4.0.jar:2.4.0]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
```
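Here is a minimal sketch of the missing null check in the Netty case; the helper method and parameter names are hypothetical, standing in for whatever `NettyConnector` derives from the client properties:

```java
import java.net.SocketAddress;

import io.netty.channel.ChannelPipeline;
import io.netty.handler.proxy.HttpProxyHandler;

final class ProxyHandlerSetup {

    // Hypothetical helper: NettyConnector would call something like this while
    // assembling the channel pipeline for a proxied request.
    static void addProxyHandler(ChannelPipeline pipeline, SocketAddress proxyAddress,
                                String userName, String password) {
        if (userName == null) {
            // Only ClientProperties.PROXY_URI is set: use the credential-less
            // constructor instead of passing nulls and triggering the NPE.
            pipeline.addFirst(new HttpProxyHandler(proxyAddress));
        } else {
            pipeline.addFirst(new HttpProxyHandler(proxyAddress, userName, password));
        }
    }
}
```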
@leonard84 I can confirm I am seeing similarly severe failures with the Netty connector; as far as I can tell, its proxy support is essentially non-functional and almost totally untested.
Using an HTTPS proxy with either an HTTP or an HTTPS resource doesn't work at all with the Netty connector.
Using an HTTP proxy with an HTTPS resource erroneously sends a raw TLS 1.2 handshake message to the HTTP proxy socket, which causes the connection to fail on the proxy: an HTTP proxy expects a plain-text CONNECT request to establish the tunnel before any TLS bytes arrive. The error never comes back up the stack because it happens deep inside some async code, which jams forever and never makes forward progress.
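For contrast, a minimal sketch of the tunnel setup an HTTP proxy expects for HTTPS traffic; the proxy address and target host are made up for illustration:

```java
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class ConnectTunnelSketch {
    public static void main(String[] args) throws Exception {
        // Assumed local HTTP proxy on port 3128; example.com:443 is the target.
        try (Socket proxy = new Socket("localhost", 3128)) {
            OutputStream out = proxy.getOutputStream();
            // A plain-text CONNECT request must go out first; only after the
            // proxy answers with a 2xx status may the TLS handshake be sent
            // over the same socket.
            out.write(("CONNECT example.com:443 HTTP/1.1\r\n"
                    + "Host: example.com:443\r\n"
                    + "\r\n").getBytes(StandardCharsets.US_ASCII));
            out.flush();
        }
    }
}
```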
You can tell the packet is a raw TLS 1.2 handshake because it begins with the bytes 0x16 0x03 0x03: content type 22 (handshake) followed by record version 3.3 (TLS 1.2).
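A small, dependency-free sketch of that check, as one might apply it to the captured bytes:

```java
public class TlsRecordCheck {

    // A TLS record starts with the content type (0x16 = handshake) followed
    // by the two-byte record version (0x03 0x03 = TLS 1.2).
    static boolean looksLikeTls12Handshake(byte[] packet) {
        return packet.length >= 3
                && packet[0] == (byte) 0x16  // content type: handshake
                && packet[1] == (byte) 0x03  // version major: 3
                && packet[2] == (byte) 0x03; // version minor: 3 -> TLS 1.2
    }

    public static void main(String[] args) {
        byte[] captured = {(byte) 0x16, 0x03, 0x03, 0x00, 0x2f};
        System.out.println(looksLikeTls12Handshake(captured)); // prints: true
    }
}
```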
A packet capture with an example is attached:
The issue does not seem to happen with the Apache backend. So, given that nobody has replied to this issue in six months, it seems some documentation improvement, or the deprecation of the broken or unsupported proxy support in these connectors, is needed here.