Bad Actor Identification
In our current setup, we rely on Intrusion Detection Systems (IDS) and firewalls to identify and temporarily block malicious IP addresses, mitigating ongoing attacks. These systems are crucial for protecting our networks and are extremely effective against a wide range of bots and script kiddies.
However, if attackers hide behind IP addresses shared with legitimate users, our traditional approach of blocking those IPs becomes ineffective. How will these security measures adapt so they can still protect against threats without inadvertently affecting legitimate users who share the same IP for anonymity purposes?
Disclaimer: I am not affiliated with Google. IP Protection does not apply to first-party requests: https://github.com/GoogleChrome/ip-protection/issues/31#issuecomment-1819513175 Though, frankly, even script kiddies can download a VPN or rent a residential proxy, meaning you cannot block the attacking IP. IP-based blocklisting will always be prone to false negatives and false positives. Thanks
At this time we have no plans to proxy first party requests. We also understand the importance of anti fraud use cases and will share more information at a later date.
@brgoldstein

> At this time we have no plans to proxy first party requests. We also understand the importance of anti fraud use cases and will share more information at a later date.
Can a bad actor exploit this mechanism to access free proxies? For example, could they trick Chrome into treating every request as a third party? I can see how free scan/DoS tools could route their traffic through Google’s proxy servers, which would put a strain on site owners by making it harder to block attackers, as they’d have easy and standardized proxy routes to targeted sites.
@iam-py-test

> Although, even script kiddies can download a VPN or use residential proxies, making it difficult to block the attacking IP.
You’re right, but based on our analysis, most attackers don’t go that route. Simply blocking their IPs temporarily often ends their efforts, and they usually give up. From a security standpoint, even mitigating those script kiddies helps reduce the risk of accidental vulnerability discovery, which is a significant benefit.
> At this time we have no plans to proxy first party requests.
I'm not sure that this is sufficient to solve the core issue here (but I could be wrong, of course).
For example, consider a comments iframe embed, https://my-blog-comments.com, embedded within https://my-blog.com/post. I'm guessing that under the current model, the iframe would be loaded via a proxy IP. Then subsequent requests within that iframe - e.g. to https://my-blog-comments.com/api/foo - would be made via that same proxy IP?
There are many web applications (mine included) which have abuse-detection signals based on different IPs making requests for different resources on the same page - including resources at different subdomains/origins (e.g. CDN vs main server origin).
Fundamentally, it has been assumed for decades now that a user's IP will only change every so often, and so requests for resources will tend to be made via the same IP for any given user. If the user has requested https://my-blog-comments.com several hundred times, but hasn't ever requested https://my-blog.com/post, then this is suspicious, and so the user may eventually get blocked. But with this proposal, innocent users would get blocked because they are making requests to different resources with different IPs, since those resources are at different origins.
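The cross-origin signal described above can be sketched as follows. This is only an illustration of the heuristic from this comment, with hypothetical names and an arbitrary threshold, not anyone's actual abuse-detection code:

```python
from collections import defaultdict


class CrossOriginSignal:
    """Sketch of the heuristic above: a client IP that has hit an embedded
    origin many times but has never loaded the embedding page is suspicious."""

    def __init__(self, embed_origin, parent_origin, threshold=200):
        self.embed_origin = embed_origin    # e.g. the comments iframe origin
        self.parent_origin = parent_origin  # e.g. the blog page origin
        self.threshold = threshold          # illustrative cutoff, not a real product value
        # ip -> origin -> request count
        self.counts = defaultdict(lambda: defaultdict(int))

    def record(self, ip, origin):
        self.counts[ip][origin] += 1

    def is_suspicious(self, ip):
        c = self.counts[ip]
        return (c[self.embed_origin] >= self.threshold
                and c[self.parent_origin] == 0)
```

Under IP Protection, the two origins may be reached via different proxy IPs for the same real user, so `is_suspicious` starts firing on innocent traffic: exactly the false-positive failure mode this comment warns about.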
(BTW, I really like this proposal with my "internet user" hat on. I just hope that it doesn't get pushed through without solving core issues like the ones mentioned in this thread.)
Building on the current context of relying on Intrusion Detection Systems (IDS) and firewalls to block malicious IP addresses and prevent attacks, Google’s IP Protection proposal could severely complicate this approach. Currently, IDS and firewalls are pivotal in identifying and temporarily blocking IP addresses associated with malicious activity, protecting networks from a wide range of attackers, from bots to unauthorized scripts. However, if IP addresses are anonymized and shared across users through Google’s proxy, the precision of these security tools may be compromised, as it will become challenging to distinguish between legitimate users and potential attackers sharing the same IP.
Google’s proposal to anonymize IPs by rerouting traffic through a proxy risks reducing the effectiveness of traditional security protocols that rely on IP-based detection. This arrangement not only weakens the defensive capabilities of IDS and firewalls but could also create a dependency on Google as an intermediary to “filter” threats, particularly if websites are required to register with Google to access configuration options, analytics, and “enhanced” security measures. By forcing website owners to rely on Google for insights into traffic behavior and threat patterns, this model could further entrench Google as a gatekeeper in internet security.
Additionally, this setup raises critical questions about the rationale of “protecting users” while potentially sidestepping existing regulatory frameworks such as GDPR. GDPR mandates that websites maintain control over user data and implement protective measures locally, thus upholding user privacy and autonomy. By positioning itself as the privacy proxy, Google risks diluting GDPR protections by centralizing the control of user IPs, which could undermine current systems and increase reliance on a single provider. This arrangement threatens the integrity of user protections on the open internet, replacing decentralized, accountable security with a monopolized gateway that ultimately serves Google’s interests.
In sum, while IP anonymization might appear to enhance privacy, it introduces risks by circumventing proven, multi-party protective mechanisms in favor of a singular, centralized approach that compromises traditional network defenses and regulatory compliance.
The main thing that matters is whether IP Protection is enabled by default, which is something Google can toggle any time they want. It's fine if users knowingly go out of their way to enable it, but not if people run their traffic through two hops without having any idea about it, while Google grants itself the right to do so.