# Lavina
Multiprotocol chat server based on open protocols.
## Goals
### Support for multiple open protocols
Federated protocols such as XMPP and Matrix aim to solve the problem of isolation between service providers. They allow people using one service provider to easily communicate with people using different providers, which is not possible with providers based on non-federated protocols, such as Telegram and WhatsApp.
Non-federated protocols also make it harder to bring a new provider to the market: at first it will have few users, who will be unable to communicate with their friends on other providers – an instance of the network effect.
Using a federated protocol does not solve the problem entirely, though; it merely moves the network effect up a level. Previously, XMPP was the only open federated protocol. Matrix was introduced in 2014 as a modern alternative. This fragmented the ecosystem, since users of Matrix cannot straightforwardly communicate with users of XMPP, creating a different sort of network effect.
Lavina is an attempt to solve that problem. It is not a new protocol, but a way to make existing protocols interoperable with each other. Users of a Lavina service should be able to connect to their service with a client for any supported protocol, and their service should federate with other services over any supported protocol.
Products should compete with each other on the basis of technical merit, not network effects.
### Scaling down, up and wide
Federated services are run by communities and companies of various sizes and scales – from the tiniest single-person servers, to large communities with thousands of active users, to global corporations with billions of concurrent connections.
- Scale down – we should support running the software on low-end hardware such as a Raspberry Pi or a single-core VPS. This is important for cost savings on smaller instances.
- Scale up – we should utilize the available resources of more powerful machines fairly and efficiently. This includes multiple CPUs, increased RAM and storage, and network bandwidth.
- Scale wide – the most complex property to implement: we should support running the service as a distributed cluster of instances that efficiently schedule the load and balance connections among themselves.
A clustered setup may additionally provide data replicas for improved reliability and read scaling. It should also take data locality into account so as not to introduce extra network hops, which could result in multiple cross-AZ/DC data transfers within the scope of a single request.