Privacy and Consent for Fediverse Developers: A Guide

Addressing the Network's thorniest issue.

In light of the recent controversy concerning Maven’s ingestion of over a million posts and thousands of profiles, the conversation has shifted towards the nature of open networks, and whether openness might be at odds with expectations of privacy and user consent. Like clockwork, new developers come into an existing space with only technical expectations; cultural norms are an afterthought. People clash and argue about it in the latest discourse, and developers often leave.

It doesn’t have to be that way. I believe that we can have both openness and privacy, both developer freedom and user consent, and both decentralization and defaults that protect end users. The problem boils down to communication, establishing best practices, and relying on mechanisms that empower users. We have to move past relying on informal communal knowledge to set expectations for newcomers, and write it down as a reference.

To that end, I have worked with Jon Pincus of The Privacy Nexus to write this list of principles; these points are derived from his original article on the matter. My focus is to take his existing language and ideas and reframe them from a developer’s perspective.

Regardless of how you look at it, the Fediverse has a sizeable cohort of users that cares deeply about user consent, privacy controls, and agency in defining their own experience within the space.

Unfortunately, the majority of them still rely on platforms such as Mastodon, whose offerings for those things have been historically shaky. There’s a lot of promising work happening at the implementation level, specifically when it comes to the introduction of Reply Controls and opt-in consent mechanisms, but it’s all grassroots for now.

A Bonfire design concept for Circles and Boundaries, a framework for stating who can do what with a thing. Source: Bonfire Project

For further education, I recommend looking into Hubzilla, Streams, and Bonfire, as those platforms are all doing interesting work in this area.

First, what does it mean for a person to consent to something? Does this mean that a user has to opt in to every single feature that a platform offers? Do they have to explicitly state what can or cannot be done with their profile, media, data, or content?

It may seem like an oversimplification, but Tea Consent is a great first principle to build on: if someone enthusiastically wants what’s being offered, they will opt in to partaking. If they don’t, they won’t, and shouldn’t be forced into an experience they want nothing to do with. A lack of affirmation or denial does not equal consent.

A more detailed definition is illustrated by what the GDPR has to say:

Consent of the data subject means any freely given, specific, informed and unambiguous indication of the data subject’s wishes by which he or she, by a statement or by a clear affirmative action, signifies agreement to the processing of personal data relating to him or her.

Let’s break down these concepts:

  • Freely Given – a person chose something of their own volition when presented with an option
  • Specific – the provision calls out a narrow use case for what’s acceptable
  • Informed – the user had knowledge of how choosing the option affects them
  • Unambiguous – what’s being communicated is in simple, easy-to-understand language, and the conditions are clear and concise
  • Affirmative – the person signaled that they definitely wanted this

With most online systems these days, we see two different kinds of consent: opting in, and opting out.

Opt-In

When a person chooses to opt-in, they are electing to participate by their own volition. This can include choosing to have your public posts show up in search results, seeing Stories in Pixelfed, or allowing people to send you private messages.

One of the best mechanisms I’ve seen for user opt-in comes from mobile app permissions dialogs, where every feature can be turned on or off at any time. Users are presented with reasonable depictions of what things the app can access, and can ultimately decide whether to opt-in, or do nothing. It’s simple, straightforward, and takes very little effort on the user’s part.

Opt-Out

Opting out is a different story. While it’s better to have opt-out than no mechanism at all, it’s often a headache and a nuisance for people. Case in point: email solicitation. When you sign up for a service, watch a code repo online, or follow someone on Bandcamp, how often do they email you?

This is my personal hellscape. Yes, this is real.

I have thousands of emails from political organizations, campaigns, newsletters, developer platforms, you name it. To free myself of these, I have to opt-out potentially thousands of times, clicking links and confirmations over and over again. It’s a legitimately bad experience, especially because I have to deal with all of them to effectively use my inbox.

At least there’s an Unsubscribe button.

Going beyond email for a moment: consumer design in user applications is truly baffling. More often than not, users are automatically opted into things no human would want, like advertisements that live in your notifications. The deceptive patterns are so bad that it becomes difficult to tell where a person can even opt out in the first place. Check out what Samsung does with the Galaxy phone.

Screenshot courtesy of Sam Mobile

What’s even worse is that, in addition to shoving advertisements into notifications, there’s some clever misdirection that discourages users from opting out. The average person sees these ads and thinks, “This has to be the Galaxy Store! I’ll go and turn off these notifications.”

Nope.

Users can’t opt out of anything here: the switch is greyed out, and tapping it does nothing. The real settings live in a totally different part of the OS, called Samsung Push Service, and you’d never know to look for it unless you already knew it existed. Most users likely just give up at this point.

I’m still mad about this.

My point is, Opt-Out can quickly go down a rabbit hole of awful experiences. Users hate being treated this way, especially when the mechanism is in an obscure place, or hidden behind layers and layers of menus. Come on, be better than that.

Isn’t federation Opt-Out by default?

This is a point that I and others in the community noticed during the backlash about bridging different federated networks. In a nutshell: the Fediverse was built a certain way because of a long line of design decisions stemming from GNU Social. These decisions culminated in a network that is both public and opt-out by default. Although there have been efforts to shift this dynamic over time, it’s hard to change course after 15 years of technical debt.

Here’s a rule of thumb: when it comes to designing a new feature that interacts with people’s public data, you need to sufficiently address three key considerations:

  • Can this feature be used to harm people? This includes doxxing, harassment, and block evasion. If so, it probably should be opt-in by default, so that the user can decide for themselves.
  • Are expectations spelled out ahead of time? Does the feature explain to the user how it affects them, and what is being shared?
  • Does opting in or out require additional work on the user’s part? If so, how much work does the user have to do?

Also fundamentally important to understand: don’t make users jump through hoops to turn something off. You’re not going to be earning any goodwill by making something hard to opt out of.
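The three considerations above can be reduced to a simple default rule. Here’s a minimal sketch in Python; the `Feature` type and its fields are illustrative, not any real API:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """An illustrative feature descriptor for the checklist above."""
    name: str
    can_harm: bool            # could it enable doxxing, harassment, or block evasion?
    explained_to_user: bool   # are the effects spelled out ahead of time?

def default_enabled(feature: Feature) -> bool:
    # A feature that could harm people, or whose effects aren't explained,
    # should start disabled and wait for an explicit opt-in.
    return not feature.can_harm and feature.explained_to_user
```

The point of the sketch is the default: when either question has a worrying answer, the switch starts in the off position and only the user can flip it.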

1. Understand That “Public” Doesn’t Mean Fair Game

This one is a bit tricky because of how the Web evolved, and how social networks try to act as a public square. There’s an assumption that something posted under a Public scope ought to be freely available to anyone, for any use, for any reason. Many tech startups tend to just shrug and say “Hey, if it’s out there, it’s fair game.”

Unfortunately, this flies in the face of user expectations, and the clash over consent can make people very angry. Sometimes, nuance can get lost because people fail to understand what it is a developer is doing: Content Nation dealt with an awful situation where people thought it was a scraper for a commercial service, when it was really just federating with the rest of the network.

2. Solicit Feedback Early

One of the best things a new entrant to a space can do is announce their presence, talk about what they’re planning to do, and solicit feedback from the community. This can be a great way to set expectations, and can help developers understand where some rough edges might exist that people feel uneasy about.

To be clear, I’m not saying “don’t build the thing”. You may have a fantastic idea that could make life better for everybody on the network! Just consider that, in the case of Maven, they did the thing first without telling anybody, and then had to backpedal when people reacted negatively. The developer response had an air of incredulity and surprise, as though they couldn’t fathom why people were mad about it.

3. Know the Existing Consent Mechanisms

Consent mechanisms in the Fediverse are an emergent topic, and the landscape is evolving. Platforms such as Friendica, Hubzilla, and Streams have long offered granular scopes to decide who can or can’t see or interact with a piece of content, and Bonfire goes a step further by offering a framework for custom permission settings.

Mastodon is currently the platform with the widest adoption. It’s not even close. Unfortunately, its offerings are still comparatively limited. The main mechanisms leveraged by the Mastodon community are as follows:

  • The Indexable Attribute – used by Mastodon, Pixelfed, Piefed, and other platforms. Described in FEP-5feb, this extension lets Actors state whether their public posts should be listed in search results.
  • The Discoverable Flag – This option declares whether an Actor consents to being featured in discovery algorithms, appearing on various public timelines to strangers, and having their profile recommended as someone to follow.
  • Hashtags / Profile Fields – This is one of the oldest mechanisms that Actors leverage to opt out of various things. It’s hacky, but illustrates what users want to actively block. Common tags include #noindex, #nosearch, #nobot, and #nobridge, but others might exist.
  • Locked Accounts – Accounts can be set to require manual approval of follow requests, and can be used in such a way that the person only posts privately. As a result, interactions can only be performed by mutuals who have already been vetted.
  • Timeline Filtering – While this isn’t an external indicator of consent per se, it does give Actors some agency in deciding what they see on their timelines every day.
  • Blocking – Similarly to filtering, Actors have a powerful opt-out mechanism to dealing with trolls, abusers, and serial harassers: they can just block those people and move on.
  • Secure Mode – although this is more of an “instance-wide” setting, it’s possible for a community server to be super locked-down, to the point that connections and discovery mechanisms are inherently limited.

That about sums up what tools are currently available to end users on Mastodon.
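The first three mechanisms above can be honored together with very little code. Here’s a minimal sketch of the consuming side, assuming the actor document has already been fetched as JSON; the field names follow FEP-5feb and Mastodon’s conventions, but the function itself is illustrative:

```python
# Informal hashtag opt-outs commonly placed in profile bios.
OPT_OUT_TAGS = {"#noindex", "#nosearch", "#nobot"}

def may_index(actor: dict) -> bool:
    """Return True only when the actor has affirmatively opted in to search."""
    # FEP-5feb's indexable flag: a missing flag is not consent.
    if actor.get("indexable") is not True:
        return False
    # An explicit discoverable=False overrides everything else.
    if actor.get("discoverable") is False:
        return False
    # Honor the informal hashtag opt-outs in the profile bio.
    summary = (actor.get("summary") or "").lower()
    return not any(tag in summary for tag in OPT_OUT_TAGS)
```

Note the shape of the logic: the absence of a signal is treated as “no”, and any explicit opt-out wins over an earlier opt-in.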

Additional Mechanisms

There are also a couple of novel approaches that people have made over the years that are worth talking about:

  • Verification – some services like TootFinder and Fediverse People Directory have users add a text string to their profile, then fill out a form to confirm ownership.
  • Human Curation – Fedi Directory asks people to DM the admin directly for inclusion, and Trunk even provides a form to help fill out a DM with key details.
  • Automated DM – Bridgy Fed is planning on sending a DM to people when they’re first followed from Bluesky, giving them the option to reply to opt-in.
  • Service Accounts – An increasing practice for federated platforms and services is to create a dedicated Actor representing it. This is often a bot that responds to a limited set of commands, usually related to verification and confirmation.
  • OAuth Login – OAuth is often more thought of as an easy way to log into apps, but it’s absolutely a valid approach for expressing consent for access. ActivityPods is a great example here, where individual capabilities can be toggled on or off during initial sign-in.
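The verification pattern above is simple enough to sketch: the service issues a token, the user pastes it into their bio or a profile field, and the service checks for it before listing them. This is a hypothetical illustration of the general approach, not any listed service’s actual implementation:

```python
import secrets

def issue_token() -> str:
    # A short random string the user pastes into their profile.
    return "fediverify-" + secrets.token_hex(4)

def is_verified(actor: dict, token: str) -> bool:
    """True if the token appears in the actor's bio or profile fields."""
    fields = [a.get("value", "") for a in actor.get("attachment", [])]
    haystack = " ".join([actor.get("summary", ""), *fields])
    return token in haystack
```

Because only the account owner can edit their profile, finding the token there is a reasonable proxy for consent.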

Developer Resources

It’s also a great idea to get in touch with other developers in the space, who have been here a while. There are more than a few places to look at, so here’s a short list of useful resources: 

  • FediDevs offers a support network for builders looking to compare notes on how things are done.
  • SocialHub offers a conversation space for ideas.
  • Fediverse Enhancement Proposals develops protocol extensions that anyone can implement.
  • SocialCG offers case studies on how conventions and fixtures in the network get used.

4. Own Up to Mismatched Expectations

Look, it’s okay to make mistakes. Sometimes, a technical detail doesn’t fit into what community members want or expect. Not everybody wants to have their posts pulled into a service’s list, have their posts mirrored to an unmoderated network, or have their profile show up on an index somewhere.

As a developer, it’s important to understand why people might be unhappy with a decision that’s been made. Doubling down isn’t going to convince anybody of your point of view or perspective.

Having an honest conversation and soliciting feedback on how to improve can lead to fruitful outcomes. Bridgy Fed is a great example of a developer listening: the dev came up with a novel opt-in approach for bridging networks together, where a bot representing the bridge DMs users who aren’t already on the integration, asking whether they want to use it to talk.
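That flow amounts to a small consent state machine: ask once, wait for an affirmative reply, and treat silence as a “no”. A sketch of the idea follows; the states and messages are illustrative, not Bridgy Fed’s actual code:

```python
from enum import Enum

class BridgeConsent(Enum):
    UNKNOWN = "unknown"      # never asked
    ASKED = "asked"          # DM sent, awaiting a reply
    OPTED_IN = "opted_in"
    OPTED_OUT = "opted_out"

def on_first_follow(state: BridgeConsent, send_dm) -> BridgeConsent:
    # First contact from the other network: ask instead of bridging.
    if state is BridgeConsent.UNKNOWN:
        send_dm("Someone on the other network wants to follow you. "
                "Reply 'yes' to enable bridging, or 'no' to stay unbridged.")
        return BridgeConsent.ASKED
    return state

def on_reply(state: BridgeConsent, reply: str) -> BridgeConsent:
    if state is BridgeConsent.ASKED:
        return (BridgeConsent.OPTED_IN if reply.strip().lower() == "yes"
                else BridgeConsent.OPTED_OUT)
    return state

def may_bridge(state: BridgeConsent) -> bool:
    # Only an affirmative answer enables bridging; silence stays unbridged.
    return state is BridgeConsent.OPTED_IN
```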

5. Consider Other Groups

One of the biggest mistakes that developers make when building for this space is that they assume everyone else on the network is just like them, and that their own experience will be the default for everybody.

The truth is, not everybody’s experience is the same, and some people have had to deal with harassment, doxxing, abuse, rape, and death threats. Opening up everybody to everything can be a real mess, so it’s important for developers to consider the following:

  • Does my service respect post scopes, such as Public, Limited, and Private Mention?
  • Are there ways that my service could be used to troll, doxx, spam, or harass people?
  • Does my service expose user identities to unmoderated spaces?
  • Does my service protect against some of the “Worst of the Worst” parts of the network, such as CSAM, troll farms, or hate speech dens?
  • Is my service something that demonstrates positive values to Fediverse users, or does it just demonstrate positive values to me?

There are a lot of angles to consider when it comes to building for public safety. If you’re not sure about some of the potential abuse vectors, consider looking into some of the resources IFTAS offers, and maybe hire a consultant to review these details.
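The first question on that list, respecting post scopes, maps directly onto ActivityPub addressing. The convention popularized by Mastodon reads the `to` and `cc` fields against the special `as:Public` collection; here’s a sketch (the scope labels are illustrative):

```python
# The special ActivityStreams "Public" collection.
PUBLIC = "https://www.w3.org/ns/activitystreams#Public"

def post_scope(post: dict, followers_url: str) -> str:
    """Classify a post's visibility from its ActivityPub addressing.

    Follows the common Mastodon convention: as:Public in `to` means
    Public, as:Public in `cc` means Unlisted, the author's followers
    collection means Followers-only, and anything else is a direct
    (Private Mention) post.
    """
    to = set(post.get("to", []))
    cc = set(post.get("cc", []))
    if PUBLIC in to:
        return "public"
    if PUBLIC in cc:
        return "unlisted"
    if followers_url in to | cc:
        return "followers"
    return "direct"
```

A service that surfaces content onward should check this first: only “public” posts are even candidates for indexes, directories, or bridges.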

6. Consider the Competitive Advantages

One purported value of the Fediverse is that it aims to be better than the networks we came from. Giving users agency and the ability to decide for themselves is a powerful and compelling way to set ourselves apart from the status quo.

If there’s anything to take away from what you’ve read here, think about this: when presented with an informed choice, users have been found to be far more receptive to taking it. Mastodon giving users the ability to opt-in to being discovered through search is a fantastic case study. Not everybody chooses it, but far more people are happy to participate as a result.

We have an opportunity to build the next generation of social communication in a way that respects user expectations, informs them of effects, and gives them the ability to easily change it at any time. This is a chance to win hearts and minds by building a superior app experience, one that doesn’t endlessly deceive and frustrate people.

Sean Tilley

Sean Tilley has been a part of the federated social web for over 15 years, starting with his experiences with Identi.ca back in 2008. Sean was involved with the Diaspora project as a Community Manager from 2011 to 2013, and helped the project move to a self-governed model. Since then, Sean has continued to study, discuss, and document the evolution of the space and the new platforms that have risen within it.
