@dougwade Delta Chat would still be a better option. Besides being YEARS ahead of any such new solution, with Delta Chat you don't depend on a particular server. Here on the fediverse, if your instance goes down you suddenly lose your profile; with Delta Chat you can have several instances at the same time, so if one goes down the others keep working (this is new and still under development). A fedi solution would also need that level of resilience to be comparable.

@dougwade @arcanechat @rysiek I don't want to break your heart but E2EE messaging will never happen on fedi no matter what anyone says, even Soatok himself. (furself?)

1. Fedi is accessed by users from multiple clients. So now you have a key synchronization problem that Matrix hasn't been able to get working correctly over the course of... 10 plus years?

2. every fedi app that exists which people use will have to be updated to support it, and it will NOT be trivial. People are not going to give up their preferred apps just for E2EE messaging.

3. every web interface will have to be updated to support it properly. so now we're doing all this crypto in the browser. Your private key will have to live in browser local storage. Not great.

4. This of course implies that every fedi server will have to be updated to support it: Mastodon, Pleroma, Akkoma, all the Misskey forks, Lemmy, Pixelfed, Gotosocial.... and this is going to go smoothly without giant security issues happening due to poor implementations right? RIGHT????? :newlol:

It's not going to happen.

What could happen is that this becomes a Mastodon-specific feature that only works with Mastodon and the official Mastodon app. Or perhaps a dedicated E2EE messaging mobile app will be created that only works with Mastodon. But I doubt it.

The biggest problem with this idea is that the entire ecosystem will be so broken/fractured that people will instead choose something else that doesn't have this problem. Whichever is easiest to onboard and doesn't leave you guessing "will they be able to receive my messages?" will win. It will be a dedicated E2EE messaging service such as DeltaChat.

The people who keep talking about E2EE coming to fedi are only doing so for clout. They're either dishonest or just stupid and have no idea what it takes to build such an app that will be accessible to the masses.

And even if such a thing did exist, it is too easily blocked anyway. Not like it would have helped people in Iran or anything.

@feld @dougwade @arcanechat @rysiek 2. and 4. are just ‘we can’t improve just the part of fedi we’re on’. if that were true, Pleroma would be merely a worse Mastodon clone. there are apps creating new kinds of experiences using ActivityPub, and they’re planning to implement the MLS over AP spec. just telling users they can’t use E2EE messaging with some of their friends is much less confusing than the experience mainstream IM users are used to, like whatever recently happened to Facebook Messenger when they switched to E2EE by default


@mkljczk @arcanechat @feld @dougwade @rysiek At least with Pleroma, the chat part has been partly solved. Using classic AP DMs (to: ["https://example.com/users/recipient"]) for E2EE isn't doable without lots of added complexity, because new mentions can add recipients, at least in how most implementations handle it.
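A minimal sketch of the addressing problem described above (the helper and actor names are hypothetical, not Pleroma's actual code): the audience of a classic AP DM lives entirely in the `to` array, so any mechanism that derives recipients from mentions can silently widen it.

```python
def make_dm(actor, recipients, content):
    """Build a classic ActivityPub DM: the audience is just the `to`
    field, with no `cc` and no Public collection. Illustrative only."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Note",
        "attributedTo": actor,
        "to": list(recipients),  # the only thing scoping who receives it
        "content": content,
    }

dm = make_dm("https://example.com/users/alice",
             ["https://example.com/users/recipient"],
             "hello")

# The problem for E2EE: many implementations derive the audience from
# mentions, so a reply that adds a new @-mention silently widens `to`,
# and the new recipient has no key material for the conversation.
reply_to = dm["to"] + ["https://example.com/users/newcomer"]
```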

E2EE over AP suffers from the Linux case of reinventing the same wheel over and over. Instead of Ciscoware and XML, we have AP/JSON(-LD) and endless extensions, and neither works great. Instead of repurposing a protocol that was never meant for private communications (even if that was never explicitly said) as a simple transport layer, people should have tried to fix XMPP.

Here's Pleroma Chats as they should have been from the start, and I'm not joking that much: https://docs.ejabberd.im/developer/extending-ejabberd/elixir/#embed-ejabberd-in-an-elixir-app

@feld @arcanechat @dougwade @rysiek @mkljczk It does, but it is also the closest to an open and extensible ideal messaging platform that currently exists. And it mostly works on the server side. What doesn't work at all is the client side, where 3 different OMEMO versions co-exist, none of which are compatible with each other, and clients seemingly choose which one to implement at random.

In some tangential way, it suffers from the same issue as AP always did. Way too extensible to its detriment.

I have not looked at the Delta Chat internals yet, but so far, after trying to package the relay (I should probably continue that endeavor when I find some inspiration/time), I'm not a fan. If the core is a pile of unportable madness that vendors openssl of all things (thanks, Rust), it has little hope of surviving long-term. Unless a different implementation (e.g. the Golang one) gets more traction than the current reference one.
@phnt @arcanechat @dougwade @rysiek @mkljczk

> I have not looked at the Delta Chat internals yet, but so far after trying to package the relay

you absolutely can package the relay and I did it for FreeBSD but I don't see a point because half of it is based on very specific configuration of multiple services and that's not something that "packaging" alone can solve.

Now if you're annoyed about there being so many different services involved, you can look at other work being done in this area. There's a custom version of the Maddy mail server written in Go being worked on (and actively used in a certain country right now) so you can deploy servers with a single binary: https://github.com/themadorg/madmail

> If the core is a pile of unportable madness that vendors openssl of all (thanks Rust), it has little hope of surviving long-term.

The core as I package it on FreeBSD does not vendor openssl, and the reliance on any openssl at all can likely be removed in the not so distant future
@feld @arcanechat @dougwade @rysiek @mkljczk My annoyance with the packaging was more with the configuration being stuck in the Debian-specific install tool (at least as of ~2 months ago). I've heard there have been improvements on that front since. The number of services involved was expected since it's email. If you want a normal Dovecot/Postfix setup, you need all of that anyway.

Packaging the core wasn't that bad, after packaging a bunch of python dependencies, because I decided to try my chances with packaging it on RHEL8. I think I have it successfully packaged, but I never tried testing it.
@feld @phnt @arcanechat @dougwade @rysiek @mkljczk the most ideal way to use json-ld is to use expanded form, put it directly in a triple store, and make that searchable, but I found it hard to do this on a FLOSS stack. the triple store space isn't in great shape: you either get expensive proprietary systems or you get poorly documented systems that fall over when you try to CRUD fediverse data at realtime speeds. or sometimes they just fall over because they don't work at all; I had multiple projects where release builds just didn't work, and I had to reach out to the developers to learn all the "tricks" to get a working system, work around terrible bugs, etc.

there are other unrelated problems, like you need to have predictable data structure to index and you need indices to make your system work so in practice you have to constrain what you accept.

currently I am dealing with the fact that activitypub json-ld documents can have multiple types. in practice I think no system supports this; they just reject documents with an array instead of a string. I extended an activitypub server to support Verifiable Credentials 2.0, and if you want to support Open Badges, it is a hard requirement that the type is ["VerifiableCredential", "OpenBadge"]. So I ended up compromising: internally our server uses heuristics to pick one primary type and keeps a supplementary type array for later use. Internally it only works for non-Activity objects that are the object of an Activity. Hard limitation of the system. Couldn't support full flexibility. Made a compromise. The compromise is still ugly and added annoying complexity to the code. And even if you made a commitment to supporting multiple types, how do you even do that? you can't support it arbitrarily. you can only hardcode how you deal with specific combinations of types.
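A rough sketch of the primary-type compromise described above (the preference list and helper are invented for illustration, not the actual server code): pick one "primary" type from a JSON-LD type array via a preference order, and keep the rest as supplementary types for later use.

```python
# Assumed preference ordering, purely illustrative.
PREFERRED = ["VerifiableCredential", "Note", "Article"]

def split_types(type_value):
    """Normalize a JSON-LD type (string or array) into one primary type
    plus a supplementary list. Hypothetical heuristic, not real code."""
    types = [type_value] if isinstance(type_value, str) else list(type_value)
    # Pick the first preferred type present; fall back to the first listed.
    primary = next((t for t in PREFERRED if t in types), types[0])
    supplementary = [t for t in types if t != primary]
    return primary, supplementary

# Open Badges requires type == ["VerifiableCredential", "OpenBadge"]:
primary, extra = split_types(["VerifiableCredential", "OpenBadge"])
```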
@feld @phnt @arcanechat @dougwade @rysiek @mkljczk

I have run into situations where it seems literally impossible to make two things that use json-ld interoperate by making the document function for both, which is kind of an explicit promise of json-ld. but it works a lot of the time; life isn't perfect, and if you think about it a bit you realize the promise could never be 100% fulfilled.

json-ld works as a substrate for representing a graph of arbitrary-content triple "documents". it is your responsibility when you make a real-world system to constrain what you accept.

the problem as I see it is that it has no constraints on real-world profiles of usage. its ok at the activitypub level because it is another substrate but if you build something on top of activitypub you should have a spec defining narrowly and rigorously what is valid. so for example if you're building a microblog network you define a microblog interop spec and you also don't pretend it will mesh with for example a subreddit spec or forum spec. you might even make practical constraints like "json-ld allows multiple types but this spec mandates one"
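As a sketch of what such a narrow profile check could look like (the field set and rules here are invented for illustration, not from any real spec):

```python
# Required fields this hypothetical microblog interop spec mandates.
REQUIRED = {"id", "type", "attributedTo", "content"}

def valid_microblog_post(doc):
    """Enforce a narrow profile: "json-ld allows multiple types but
    this spec mandates one" -- a string, and specifically "Note"."""
    if not isinstance(doc.get("type"), str):  # reject type arrays outright
        return False
    if doc["type"] != "Note":
        return False
    return REQUIRED <= doc.keys()  # all mandated fields must be present

ok = valid_microblog_post({
    "id": "https://example.com/notes/1",
    "type": "Note",
    "attributedTo": "https://example.com/users/alice",
    "content": "hi",
})
```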
@feld @arcanechat @dougwade @mkljczk @phnt @rysiek

> json-ld works as a substrate for representing a graph of arbitrary-content triple "documents". it is your responsibility when you make a real-world system to constrain what you accept.

to clarify this, you could have a system that still lets you have a document representing something that has all kinds of arbitrary data in it. but maybe your system just tracks the graph and a few predictable properties, but lets someone looking up that document by id see any kind of data in there you want, which some other consuming system can handle. json-ld is great for that. you still need to say "it at least has this; it must never have this"
@sun @arcanechat @feld @dougwade @rysiek @mkljczk I couldn't have said it better and I'm not well versed in document parsing and JSON-LD, but I'll add this.

The only reason I mentioned the LD part at first is that it's an entirely optional part of the spec that not many projects use. Contrary to what some say, using LD is not mandatory at all. But for something like E2EE you might do things that are more LD-friendly. And if you want schemas (the whole purpose of LD), XML is realistically better at it, even though support for XML parsers hasn't been great for the last few years.

Which creates an interesting issue. You can remap types in JSON-LD, so you can create a document that has two different meanings to a JSON consumer and a JSON-LD consumer.
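A simplified illustration of that remapping (this is naive single-term expansion, not a real JSON-LD processor, and the vocab IRI is made up): a plain-JSON consumer reads the literal `"type"` string, while a context-aware consumer resolves it through `@context` to a different IRI.

```python
doc = {
    "@context": {
        # Remap the short name "Note" to a non-ActivityStreams IRI.
        "Note": "https://example.com/vocab#Article",
    },
    "type": "Note",
    "content": "looks like a Note to JSON, something else to JSON-LD",
}

# Plain JSON consumer: takes the string at face value.
json_view = doc["type"]

# Context-aware consumer (naive term expansion for illustration):
# looks the term up in @context before interpreting it.
ld_view = doc["@context"].get(doc["type"], doc["type"])
```

So the two consumers genuinely disagree about what the document is, which is exactly the divergence described above.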

And the way LD is currently treated isn't by using it properly in a triple store, the usual way you handle documents like this. For the most part, it is handled as pure JSON that is compacted/expanded by a JSON parser with extra logic on top. Which of course makes JSON handling a notable performance issue in federation for at least one project, and a constant source of issues for those projects.