So how does nostr propose to solve the problem where there is, in fact, quite a lot of content that you want to filter out, whether because it makes for a better experience for the people using this protocol to talk to each other, or because there are some pretty solid laws about things that various governments require people to filter out?
https://abovethelaw.com/2022/11/hey-elon-let-me-help-you-spe... is a pretty decent rundown of a mix of these things; it is specifically pointed at Elon Musk's decision to buy Twitter and make it a haven for "free speech" but it is a glimpse at what is in the future for anyone setting up a "free speech" platform.
My experience as someone who has been running a Mastodon server since 2017 is that while "we are all for FREE SPEECH, we only block what the government ABSOLUTELY requires us to block!" sounds noble, in practice nodes of the Fediverse that say this become havens for people who are only there to be assholes to other people, and any sane admin will sigh and block the whole server, because it's just going to be a continual source of rude nasty bullshit.
That's a great article, and balancing free speech with censorship is a difficult problem. But it becomes a constant headache only on centralized platforms, where no amount of resources can realistically monitor and filter all content. This scales in complexity as the platform grows, which is the goal of any centralized service. And if the business model depends on advertising, it becomes even messier and crucial to its existence.
P2P services OTOH work on a decentralized and pull model. Users share and only subscribe to the content they're interested in. Censorship is distributed, and it's a problem for people who don't wish to see specific content. It's the way the internet works, and the existing approach of removing sensitive content applies to P2P services as well. Since there are no advertisers to appease, it's not an existential problem.
The question was, to wit,
>How do you keep CP off it?
Your answer was:
>I don't, just don't look at the CP.
The problem really in question here is:
You've just created a new distribution method for this type of thing and punted the consequences for someone else to deal with.
(Which is totally cool imo, but newsflash: expect to be the subject of a hit piece some time in the foreseeable future. It shouldn't take long, either for actual criminals to set up shop on it or for LE to do it to "snare unsophisticated actors").
Welcome to the Internet, where we can't have/make nice things anymore.
How do we deal with objectionable content already on the internet? It gets taken down by law enforcement of some country that finds it objectionable, and we handle the case in courts.
A new protocol that makes this content more accessible isn't an issue with the protocol, but with society and how we decide to deal with it. If CP is found to be served by nginx over HTTP, is that a problem with nginx or HTTP?
If anything, centralized services only make the problem more difficult to address, since they're expected to serve the demands of governments, companies and law enforcement agencies worldwide, while somehow being the arbiter of free speech. Those are impossible goals to reach, and go against the original design of the internet.
This discussion is as old as P2P protocols. I'm sure that if Nostr became a popular way to share copyrighted content, governments would try to fight it, just as they've done before. But at the same time, censorship is not something a protocol should care about, and just like BitTorrent thrives today, with enough interest, Nostr would also find a way to persist.
In many European jurisdictions, you are criminally liable for anything that makes it onto your systems, regardless of how it got there.
So, the OP has just given people a whole new way of making themselves liable for illegal content, just by running the P2P software in question.
In most cases, the police aren't interested in hauling a service provider into jail, they want to haul in the person who put that content there. But this does make a nice little lever that the police can use against any service provider.
Nostr lets you specify who and what you subscribe to, so unless you subscribe to everything and everyone this isn't really a problem. You can also subscribe to followers' followers and expand that way.
Unlike what OP says Nostr relays are not dumb, they can have their own policies and to me they look like a better version of Mastodon servers. They can have identities, "themes" and policies as they wish. On Nostr it's totally fine for one relay to only allow certain kinds of content and block everything else. Users can just connect to multiple relays if they want to read/write about different things.
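To make the "you only see what you subscribe to" point concrete, a client expresses its interests as a NIP-01 subscription filter sent to each relay. Here is a rough Python sketch of building such a REQ message; the subscription id and the placeholder pubkeys are made up for illustration:

```python
import json

def req_message(sub_id, authors=None, kinds=None, limit=None):
    """Build a NIP-01 REQ message: ["REQ", <sub_id>, <filter>].
    A relay only streams events matching the filter, so a client that
    lists explicit author pubkeys never sees the rest of the firehose."""
    f = {}
    if authors:
        f["authors"] = authors  # hex pubkeys you follow
    if kinds:
        f["kinds"] = kinds      # e.g. kind 1 = short text note
    if limit:
        f["limit"] = limit
    return json.dumps(["REQ", sub_id, f])

# Subscribe only to text notes (kind 1) from two followed keys
# ("aa..."/"bb..." are placeholder pubkeys):
msg = req_message("feed", authors=["aa" * 32, "bb" * 32], kinds=[1], limit=50)
```

In other words, filtering is opt-in at the edge: nothing reaches you unless your client asked a relay for it.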
What bugs me about it is their naivety about solved technical problems. For instance, they answer the question of “why hasn’t this been done?” with:
> I don't know, but I imagine it has to do with the fact that people making social networks are either companies wanting to make money or P2P activists who want to make a thing completely without servers.
Except it has been done. In fact, that’s literally what KaZaA was with its “superpeers”. And what they realized was that by making a semi-decentralized system, they just introduced the weaknesses of both systems (slow downloads via peer-latency and network limits + easy censorship by killing relays/nodes). In addition, this is exactly how IRC works, despite the fact that it’s mostly used with a few nodes these days.
I’m not against semi-decentralized systems. They’re great and help deal with some scalability problems; but they don’t solve for the number one issue most people moving to decentralized are seeking (anonymity, privacy and free speech), so it’s not fair to compare it to platforms/protocols that do offer those features.
They may be young and unaware of "solved technical problems," but they are full of energy, and every generation has to relearn the same things.
But as for peer latency issues or easy censorship by killing nodes, I don't see it. Nostr has fan-out, but not as much as RSS does and I don't expect superrelays.
I also don't follow you on the issue of anonymity or privacy. The guy who started it, fiatjaf, is anonymous. We don't know who he is. And you can be too. Just make up a key, create and sign an event, and push it into whatever relay takes your fancy... through Tor if that's your thing.
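For a sense of how little identity is involved: per NIP-01, an event's id is just the sha256 of a canonical JSON serialization, and a "user" is nothing more than a keypair. A minimal sketch (the pubkey below is a placeholder, and the actual Schnorr signing step is omitted since it needs a secp256k1 library):

```python
import hashlib
import json
import time

def event_id(pubkey_hex, created_at, kind, tags, content):
    """NIP-01: the event id is the sha256 of the canonical JSON array
    [0, pubkey, created_at, kind, tags, content] with no extra whitespace."""
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    ).encode("utf-8")
    return hashlib.sha256(serialized).hexdigest()

# A throwaway identity is just a keypair; "ab" * 32 stands in for a real pubkey.
eid = event_id("ab" * 32, int(time.time()), 1, [], "hello nostr")
# The event is then Schnorr-signed (secp256k1) over this id -- that step
# needs a library such as coincurve and is left out of this sketch.
```

There is no registration, no email, no phone number: any relay that accepts the signed event will serve it.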
I don't know about KaZaA but I remember it was very popular for a time, so it might have done some things right? What was its fate? Was it censored to death?
And I disagree very much that IRC is "semi-decentralized". IRC is completely centralized, it is just chat rooms on a server. You have to register on each server and each server has full control over its rooms and users.
I'm sure all the script kiddies who loved to take over channels in netsplits are gonna be disappointed that they never actually did that now.
More seriously, this is the second time I've seen someone on here characterize IRC in this (very wrong) way in the last day. Where is this coming from?
IRC networks are made up of servers that relay (hence Internet Relay Chat) with each other. You connect to one server and you can communicate both with people local to that server and people on other servers that are part of the same network (including ones that server is not directly connected to). Channels prefixed with # are shared across all servers in the network, while channels starting with & are local to that server (though rarely used).
I think you may be confused because you decided to rely on this loose concept of "semi-decentralization". IRC providers may use multiple servers, but that doesn't mean decentralization. They are closed networks, not very different from any sufficiently big internet business that runs multiple servers behind a load-balancer.
Part of it is that for users IRC definitely presents as centralised. You don't usually connect to a specific server, but rather a network that does some load balancing in the background. Like you typically connect to irc.efnet.org and not one of the sixty servers specifically.
Scale is hard, but for small-to-medium sized clusters, you can use your own node, or a friend's. Now you have your own copy of the data, and control over latency. The key insight with nostr is the multi-master architecture.
Relays can censor as much as they want, since they're only stewarding a small part of the network, likely replicated elsewhere. As to building in ways to curate/moderate content in a distributed way, there are lots of ideas out there as to how that might happen. Here's an issue I opened just the other day: https://github.com/nostr-protocol/nips/issues/75
Yes this is the crux of any social media application. I don't know if there will ever be a perfect solution.
I like that nostr abstracts this problem away from the relays. Relays only focus on storing data and handling subscriptions. They can choose to censor and/or curate content if need be, but it's not their concern.
It's up to the client to come up with a solution, and that client can be a platform or a protocol of its own.
edit: it also feels really great to work on that problem from the application layer. I can come up with a solution that isn't confined to the parent protocol.
I'd like such a protocol to be designed so servers, relays, etc. are unable to censor content at the protocol level (if someone creates it, it is available) and the filtering is done on the client side.
Commonly filtered things (account block lists, post flag lists, filter rules, etc.) could be shared via the same system — indeed there could even be competing versions and everyone could follow their preferred filter source.
Users would also likely run statistical and machine learning based spam and content filters locally (perhaps on a personal relay/server of some sort, or an account on a shared one) configured to their preferences.
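The client-side moderation described above can be sketched in a few lines. This is a hypothetical illustration, not any existing Nostr client's code; the `Note` type, author names, and filter lists are all made up. The point is that a mute list or keyword list is just data, so it could be published and subscribed to like any other feed:

```python
from dataclasses import dataclass

@dataclass
class Note:
    author: str
    content: str

def apply_filters(notes, muted_authors, blocked_words):
    """Purely client-side moderation: drop notes from muted authors or
    containing blocked words. The lists themselves could be shared and
    followed like any other content, so users pick their filter sources."""
    visible = []
    for n in notes:
        if n.author in muted_authors:
            continue
        if any(w in n.content.lower() for w in blocked_words):
            continue
        visible.append(n)
    return visible

feed = [Note("alice", "gm"), Note("spammer", "buy now"), Note("bob", "BUY NOW!!")]
visible = apply_filters(feed, muted_authors={"spammer"}, blocked_words={"buy now"})
# -> only alice's note remains
```

Nothing here touches the relay: the same unfiltered data reaches every client, and each user's chosen lists decide what actually gets rendered.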
I would expect the infrastructure running such a network to be in the same position as Signal, who do not know the content of messages and can't censor them, leaving individual clients to figure out blocking etc. (albeit the client side options as well as ways to share configurations etc. would need to be much more advanced for a social network or similar than for a messaging app).
Just mute them. Or just follow who you follow with no suggestions of other people. Or relays can have a censorship policy based on the law or community standards or anything else they want... and the people will use whatever relays work for them (typically multiple relays to follow multiple crowds). Some people want censorship, some don't, the protocol is totally agnostic on this point.
> or because there are some pretty solid laws about things that various governments require people to filter out?
There are solid laws protecting copyright everywhere, yet it is still trivially easy to find copyrighted content available for free online. Laws don't mean anything unless they are or can be enforced.
Letting other countries run intelligence ops on you isn’t actually beneficial, even if you enjoy it at the time.
(No, the president’s son didn’t actually leave a laptop at the repair shop of a blind man in a different state and then not come back for it. And while some emails were verified through DKIM, the one you specifically mentioned wasn’t.)
What stake do you have in either of the debates that you're referencing? When was the last time a politician's family member or a trans person did anything good or bad for you, personally?
If you didn't allow social media to get you invested in these manufactured pseudo-political pseudo-events, you'd be free to realize that Trump and Biden are both obvious idiots and crooks; that people's gender identities, biochemical makeup, and surgical history are obviously completely irrelevant to complete strangers; and that many other things that you waste your time constructing your tribal identity around are obviously complete bullshit that someone artificially planted into the discourse for their own benefit. That someone couldn't care less about your best interests as a person, anon291.
The problem isn't that Twitter and Facebook censor one side of those debates and promote the other. The problem is that people are using them in the first place! That way, social media is the thing that forces those debates to exist in the first place - by making you feel like you have a stake in events that are completely remote from your life. Whose agenda is being promoted is tangential to the fact that these unelected, unaccountable corporate bodies have entrenched themselves in a position where they can basically replace our individual world-models with this sort of outrageous nonsense. That's what's fucking democracy up, and in the most elegant way, too: your right to make free choices between alternatives is preserved, but only inconsequential choices are ever presented.
On the Internet, nobody knows you're a dog; on corporate social media, nobody cares that you aren't a dog.
> that people's gender identities, biochemical makeup, and surgical history are obviously completely irrelevant to complete strangers
At least in the UK, this became a matter of renewed public interest for two reasons: firstly, a proposed reform of the law to remove all gatekeeping from the process of changing one's 'legal sex', and secondly, an appalling case of several imprisoned women being sexually assaulted by a man (Karen White) who had been incarcerated alongside them due to having a 'legal sex' of female.
It was left-wing feminist groups, who organised largely offline to begin with, that reignited this debate. This wasn't some artefact of social media raging, it was a grassroots effort to halt and reverse a change in the law due to its clearly negative effects on women.
This is also a debate that has been ongoing for decades, long before social media websites even existed. Janice Raymond wrote what turned out to be a quite prophetic book on this topic in the 1970s, for instance. Renée Richards was stirring controversy in women's tennis at around the same time. Much of what you'll hear on this topic these days has already been covered by radical feminists for many years prior.
You make valid points - but I don't think the author of the post that I was replying to would be able to appreciate the nuance.
By bringing up the Karen White example, aren't you basically saying it would somehow be less appalling if it wasn't a trans person perpetrating the assault? Because I thought this sort of thing was abhorrent regardless of the salient details of a particular case?
Social media only makes it easier to focus on "which cage should we use for transgender people", and so much harder to ask ourselves "why are we putting people in cages". Or, as per the other example, "did Hunter Biden really lose his incriminating laptop?" vs "why are we letting ourselves be governed by people with familial ties to criminals?"
Social media and the polarizing meaningless debates that it enables serve the purpose of precluding people from focusing on the latter kind of question. (Which is already hard enough as it is, because it involves actual thinking.) If the public conversation is retreading ground that was already covered in the 1970s like you say, doesn't that mean that our society is regressing? Shouldn't be worrying first and foremost about that, since that's where we'd find the root cause of all the more specific issues?
How many people feel good for having the correct in-group opinions, while their contribution to e.g. the trans rights debate only goes as far as canceling JK Rowling, or, conversely, going to a Jordan Peterson talk? How many people have even heard of the actual examples you mention, as compared to the number of people who only know "uhh, so there's a debate on the Internet about some abstract hot button issue, and I'm required to pick a side in order to participate in society"?
A couple years from now the topics may be completely different, they'll just find another scapegoat or another thorny bioethical edge case, but the medium of debate will still be largely the same ol' Internet, and the AIs will only have become more effective at sowing discord.
Wow, nice "people are saying" play. Every single time the laptop story comes up it's in a butthurt way that is hilariously selective in its indignation.
Misinformation/disinformation is a thing and is not considered healthy for discourse. It also seems to be willingly embraced when it serves one's biases. Pity.
The laptop story would not have survived for two plus years were it not for the fact that Twitter and facebook both took unprecedented steps to censor it. Like literally... if you want something relegated to ignominy, don't do what facebook and twitter did.
Plus, at the time of the censorship, facebook and twitter claimed that the misinformation was that the laptop did not belong to Mr Biden. At this point, every news outlet of repute has admitted the laptop belongs to Hunter.
> Misinformation/disinformation is a thing and is not considered healthy for discourse.
The stupid laptop being a Russian forgery was misinformation/disinformation that affected an election. The NYT and WaPo have faced up to it by now, but middle-aged extreme partisans will post "cope" like they're millennials from 2015.