The Fediverse is not safe

I've never been a fan of corporate-run social media, because I don't feel the overwhelming need to tell some random marketers everything about me and become a product to be packaged and sold to anyone who has the coin.

I was, therefore, predisposed to like the idea of federated services: they're free from corpo profiteering, they're a neat tech toy, and I grew up on FidoNet, Usenet, and IRC as my primary means of socializing online, all three of which are federated and often run by small entities.

It really felt like a return to a medium I enjoyed and found value in.

Unfortunately, after spending time running services and getting into the weeds, I honestly think the warm fuzzies were nothing but misplaced nostalgia, and there are serious technical, social, user safety, and circle-of-trust flaws in any ActivityPub-ish federation method.

Why should you give a shit about my opinions? I spent 8 years doing fraud, abuse, and legal compliance work for an IaaS cloud provider, and got a hell of an education on everything from copyright and illegal content to abuse mitigation, digital forensics, and security monitoring. I'm not claiming to be an expert in any of these, but I have enough experience that I think I can make reasonable commentary that isn't just me pulling nonsense out of my ass.

First is simple admin trust: you're putting your eggs in a basket controlled by someone you do not know, whose motivations you cannot ascertain, and who you cannot audit to validate that they're who they say they are and doing what they say they're doing.

This is not to cast a negative light on every server admin, because I know that the vast majority are upright, honest people doing it because they think it's worthwhile. The problem is that there's no way - currently - to verify anything, and there's no guarantee that the admin will continue running their service in a way that you're comfortable with long term: look at the drama around mastodon.social federating with Meta, as a current example.

You also don't know how they're handling security, data storage, or legal compliance.  Are they making backups? Are the backups encrypted? Are they sitting on someone's local computer waiting to be seized by the FBI? Are they going to notify you if they ever have a data breach? Are they even going to know they were breached?

There are an awful lot of pieces to orchestrate to safely run a public service that, frankly, most hobbyist admins simply are not aware of or don't know how to safely navigate and handle.

And this is not intended to put the entire blame on someone who came across some software they thought was neat and then set it up somewhere: there's a woeful lack of useful education out there for admins to operate safely.

That looks to be changing, thanks to more organizations and professionals offering advice and basic primers on a lot of the issues an admin has to be aware of, but every legal jurisdiction handles things differently, and it's very hard to write a comprehensive just-do-this-and-you're-fine document that applies to everyone.

And I'm sure someone is currently furiously writing an email telling me that the right way to do this is to host your own, to which I say: bullshit. You cannot expect anyone who doesn't spend all day fucking around with Linux to run anything, and honestly, you shouldn't want that either - the vast majority of people wouldn't be able to maintain and secure their Mastodon instance after they set it up, even if all they had to do for the initial setup was paste instructions into a command line.

In fact, a shocking number of "admins" and "developers" cannot even keep their shitty WordPress sites updated, and that's literally just pushing a button. Any expectation that a random person could maintain something as complex as a Mastodon or Firefish or Lemmy installation is entirely nonsensical and not based on any version of reality I've ever seen.

Continuing down the user safety path, the federated nature of these services is a risk simply due to how they operate: they're simple content forwarders, and everyone subscribed to a specific instance/user/group/etc. will get all content that's posted to that specific entity.

Combine that with the inexplicable love every Fediverse service's developers have for caching every piece of media locally, and you have an immediately exploitable, automated attack surface: all you need to do is post illegal content to one member of the network, and it'll automatically and happily make tens or hundreds of thousands of copies of that content.
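
To make that concrete, here's a rough sketch of what that ingest path generally looks like. This is invented Python, not any actual project's code, and every name and path in it is made up - but the shape is the point: an activity arrives, and every attachment gets fetched and written to disk with no human anywhere in the loop.

```python
# Hypothetical sketch of the cache-everything ingest path most
# Fediverse servers implement; function names and paths are invented.
import hashlib
import requests

MEDIA_DIR = "/var/lib/fediverse/media"  # assumed storage location

def ingest_activity(activity: dict) -> None:
    """Mirror every attachment on an incoming federated post to local disk."""
    for attachment in activity.get("attachment", []):
        url = attachment.get("url")
        if not url:
            continue
        blob = requests.get(url, timeout=10).content
        name = hashlib.sha256(blob).hexdigest()
        with open(f"{MEDIA_DIR}/{name}", "wb") as f:
            # No review step, no scanning: whatever was posted upstream,
            # this server is now hosting its own copy of it.
            f.write(blob)
```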

This particular technical pitfall was used recently to spread CSAM across several communities on the Lemmy platform. It's hardly a new or unique attack vector, either: trolls discovered long ago that CSAM is one of the very few things that will draw an immediate and aggressive response from server admins, and it often gets them exactly the result they want. Wherever it's posted, admins tend to immediately delete everything, and many decide that continuing to run their site is too much of a risk and vanish.

And yes, I'm aware that in most jurisdictions there's a shield for service providers that protects them from criminal charges in situations like this, but that's not global and the Fediverse is. Worse, there are often requirements around what you must do in order to get the protection, and most admins - who may not even know what CSAM is - are unlikely to do the correct thing to protect themselves, because they have no idea what that correct thing is.

For example, in the US, you have a legal requirement to report content you become aware of to NCMEC, and then preserve the content for law enforcement use for a 'reasonable' time period, which, after discussion with NCMEC, means 30-90 days.

A lot of admins in the Lemmy case simply deleted the images, and that by itself is not actually enough to protect themselves (if they're in the US, anyway), because once you're aware of the content - which, if you're going to delete it, you certainly are - you're required to follow the reporting procedure.

Doing things like this is especially risky for self-hosters (homelab types) of federated services, because the FBI will absolutely happily show up at 6AM on a Saturday to discuss what, exactly, you're doing with your computers. And, once you're at that point, you're likely looking at a substantial expense: lawyers, seized hardware, and a lot of your time dealing with everything.

Moving on to a slightly less depressing topic, user safety is also still a complete mess, because with federation you either whitelist specific servers you trust, or you simply do not have user safety at all. If you federate with the large, open-signup instances, which have the majority of the user base and thus the content, you're at the mercy of whatever malicious actors feel like doing.
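
To put that choice in concrete terms, here's a toy sketch - invented domain names, not any real server's config handling - of the only two policy modes you actually get to pick between:

```python
# Toy illustration of the two federation policies; domains are invented.
ALLOWLIST = {"friends.example", "buddies.example"}
DENYLIST = {"known-bad.example"}

def accept_activity_from(domain: str, mode: str = "denylist") -> bool:
    if mode == "allowlist":
        # Safe-ish, but you've opted out of most of the network.
        return domain in ALLOWLIST
    # The Fediverse default: every server you've never heard of is
    # trusted until it does something bad enough to get blocked.
    return domain not in DENYLIST
```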

One of the larger Lemmy instances, Beehaw, recently posted in their support community that they're considering moving away from Lemmy because of this exact issue: they can't enforce community guidelines and keep the toxic elements out, because there's no reasonable way to police the entire network, and for a hobby project, you cannot afford enough moderators to stay ahead of the aggressively malicious types.

You end up in a position where you absolutely must block a huge swath of the Fediverse to keep known-bad instances from impacting you, but then you also have to police everything coming from instances that have far more users and resources than you do. It's a losing proposition for most people who want to build a small community online while remaining federated with the larger instances.

Next up: the technical issues. A lot of this software is written by a couple of people who may or may not add the features that are desired or needed.

Going back to Lemmy, there are numerous requests for user safety features to improve content moderation, user moderation, and the ability to safely handle trolls doing malicious things, and the response from the developers was, effectively, that they don't care and aren't going to do it or even consider putting it on the roadmap.

This kind of response puts admins in an uncomfortable position where they either continue risking their own and their users' safety, figure out if there's a way to add the changes they need to a fork, or decide it's best to just forget the whole idea and delete everything.

Last, there's the privacy aspect. Privacy has always been a major talking point for these federated services, and the claims are absolute nonsense. ActivityPub-based software is an incredibly public venue, more so than even a siloed corpo social networking site.

Fediverse servers are based on a protocol that is explicitly designed to send a copy of everything to anyone who asks nicely for it. You can restrict where the requests for content come from, but at the end of the day, the protocol essentially requires that you accept federation from random, anonymous sources.
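
And "asks nicely" isn't an exaggeration. Here's roughly all it takes to pull a public post off a typical instance - the URL is a made-up example, and some servers do require signed fetches, but most don't by default:

```python
import requests

# Hypothetical post URL; any public status on a typical instance works.
url = "https://mastodon.example/users/alice/statuses/123456789"

# Asking for the ActivityPub representation is just an Accept header.
resp = requests.get(url, headers={"Accept": "application/activity+json"})
post = resp.json()
print(post.get("content"))  # full post body, served to an anonymous caller
```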

And yes, you can go the allowlist route, but at that point, you're not really a member of the Fediverse, so why aren't you just running something else?

There should be zero expectation that anything you post is ever private, because you - that would be the user you, not the admin you - have no idea where it may be going, what happens to your data once it gets there, or whether deleting anything will actually make it vanish from every federated instance.

If that wasn't bad enough, even things like direct messages are not secure and readable only by the intended recipient: they're not encrypted, and the admins of both your instance and the receiving instance can read them.
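
A "direct message" on most ActivityPub platforms is just an ordinary post with a narrower address list - something roughly like this (expressed as a Python dict for illustration, with invented domains and actors):

```python
# Roughly what a Mastodon-style DM looks like as an ActivityPub object.
dm = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Note",
    "attributedTo": "https://instance-a.example/users/alice",
    "to": ["https://instance-b.example/users/bob"],  # the only thing that makes it "direct"
    "content": "stored and transmitted as plaintext",
}
# There is no encryption step anywhere in this: both instances' admins
# can read the content straight out of their databases.
```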

In the interest of fairness, there are real user privacy advantages to using a service that's not trying to profit off you, and your Mastodon sysop (probably) isn't running a spy network that would make the CIA blush.

I think, however, that it's important to separate 'not being packaged and sold' from 'my content is private' and not conflate the two: they're very much not the same thing, and one being true does not make the other true.

So, to sum up my opinion on Fediverse stuff: don't. Or do, if you want, but go into it with open eyes and understand where the cracks are.