How will RSS3 address the issue of distributing CSAM (and similar) content?
As per the RSS3 light paper and the reply in #13, I believe it's correct to say that anyone can upload anything to the RSS3 network.
Therefore, how will RSS3 handle CSAM content (or similar material that violates others' rights)?
As censorship can only be done on a per-platform or per-application basis, ...
This sounds like one could write an application that distributes CSAM content without any concern about being censored or blocked.
We know that Apple is planning to deploy a client-side CSAM scanner, which is pretty creepy and violates their privacy promise. However, such content also violates the rights of the subjects shown in it.
As the RSS3 network will have no censorship, my questions are:
- If any content violates one's fundamental rights, can they claim their rights and demand that the corresponding content be taken down?
- If the RSS3 network becomes one of the world's largest CSAM content distribution platforms, will RSS3 do anything to address this problem?
Please note that in question 1, by "any content" I literally mean anything, including but not limited to one's personal information, address(es), photos of their ID, and their daily routines.
TLDR: This cannot be done.
Facts, to the best of my knowledge:
- RSS3 acts more like a transparent layer and metadata storage between the IPFS network and its users.
- Although RSS3-Hub is currently (17/09/2021) running in centralised mode (using Amazon S3), a decentralised version shouldn't take long to release.
Pros.
- Excellent freedom and privacy protection.
- No censorship at all.
- Easy to build applications on it.
Cons.
- Content that could be seen as harassment or similar toward others, once published, is impossible to retract. It stays on the IPFS network forever, leaving victims no way to get rid of it.
There are some other minor things to be improved, but those are just a matter of time. The biggest issue is harassing, harmful, and hateful content. Yet it can never be fully addressed without violating the fundamentals of IPFS.
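The permanence follows from content addressing: an item's network address is derived from its bytes, so anyone holding a copy can keep serving it at the same address. A minimal sketch of the idea, using a plain SHA-256 hex digest as a simplified stand-in for a real IPFS CID (actual CIDs additionally use multihash and multibase encoding):

```python
import hashlib

def content_address(data: bytes) -> str:
    # Simplified stand-in for an IPFS CID: the address is derived
    # solely from the content itself, not from who published it
    # or where it is stored.
    return hashlib.sha256(data).hexdigest()

post = b"some harassing content"
addr = content_address(post)

# Anyone holding an identical copy reproduces the identical address,
# so "deleting" the content only removes one node's copy; any other
# node can continue serving it at the same address.
assert content_address(b"some harassing content") == addr
```

This is why a takedown can never be global: at best, individual nodes unpin their own copies.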
Unrestricted freedom is bound to cause the exploitation of the weak by the strong, so this is a serious issue that should be addressed.
But in a decentralized network, we cannot remove or censor content; all we can do is tag infringing content, give apps the option not to show it, and stop it from spreading.
How to tag infringing content is open to debate, whether through manual DAO governance or AI tagging.
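Whatever produces the tags, the enforcement would happen client-side. A hypothetical sketch of app-side filtering (the names `flagged_cids`, `visible_items`, and the sample CIDs are illustrative, not part of RSS3):

```python
# Hypothetical denylist of content addresses, populated by whatever
# tagging mechanism is adopted (DAO vote, AI classifier, ...).
flagged_cids = {
    "QmBadContent1",
    "QmBadContent2",
}

def visible_items(feed_items, denylist=flagged_cids):
    """Return only feed items whose content address is not flagged.

    The flagged content still exists on the network; the app simply
    declines to display or relay it.
    """
    return [item for item in feed_items if item["cid"] not in denylist]

feed = [
    {"cid": "QmGoodContent", "text": "hello"},
    {"cid": "QmBadContent1", "text": "..."},
]
print(visible_items(feed))  # only the non-flagged item remains
```

Filtering at display time like this leaves the choice to each application, which is consistent with censorship being possible only on a per-platform or per-application basis.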