Nikita Vilunov
26720a2a08
This separates the core into two layers – the model objects and the `LavinaCore` service. The service is responsible for implementing the application logic and exposing it to projections as the core's public API, while the model objects will be independent of each other and responsible only for managing and owning in-memory data.

The model objects include:
1. `Storage` – the open connection to the SQLite DB.
2. `PlayerRegistry` – creates, stores refs to, and stops player actors.
3. `RoomRegistry` – manages active rooms.
4. `DialogRegistry` – manages active dialogs.
5. `Broadcasting` – manages subscriptions of players to rooms on remote cluster nodes.
6. `LavinaClient` – manages HTTP connections to remote cluster nodes.
7. `ClusterMetadata` – read-only configuration of the cluster metadata, i.e. the allocation of entities to nodes.

As a result:
1. Model objects will be fully independent of each other; e.g. it is no longer necessary to provide a `Storage` to every registry, or to provide `PlayerRegistry` and `DialogRegistry` to each other.
2. Model objects will no longer be `Arc`-wrapped; instead the whole `Services` object will be `Arc`ed and provided to projections (see the sketch below).
3. The public API of `lavina-core` will be properly delimited by the APIs of `LavinaCore`, `PlayerConnection` and so on.
4. `LavinaCore` and `PlayerConnection` will also contain the APIs of all features, unlike previously with `RoomRegistry` and `DialogRegistry`. This is unfortunate, but it could be improved in the future.

Reviewed-on: lavina/lavina#66
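A minimal Rust sketch of the resulting layering, assuming hypothetical field and method names (the actual `lavina-core` API may differ):

```rust
use std::sync::Arc;

// Model objects own their in-memory data and know nothing about each other.
struct Storage {/* open SQLite connection */}
struct PlayerRegistry {/* refs to running player actors */}
struct RoomRegistry {/* active rooms */}

// All model objects live in one Services struct; only this struct is
// wrapped in an Arc and shared with projections.
struct Services {
    storage: Storage,
    players: PlayerRegistry,
    rooms: RoomRegistry,
    // dialogs, broadcasting, client, cluster metadata, ...
}

// LavinaCore is the core's public API exposed to projections; it implements
// application logic on top of the model objects.
#[derive(Clone)]
pub struct LavinaCore {
    services: Arc<Services>,
}

impl LavinaCore {
    // Hypothetical feature API: connect a player and obtain a per-connection
    // handle that exposes room and dialog operations.
    pub async fn connect_player(&self, _player_id: &str) -> PlayerConnection {
        // ... application logic using self.services ...
        PlayerConnection { services: self.services.clone() }
    }
}

pub struct PlayerConnection {
    services: Arc<Services>,
}
```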
readme.md
Lavina
Multiprotocol chat server based on open protocols.
Goals
Support for multiple open protocols
Federated protocols such as XMPP and Matrix aim to solve the problem of isolation between service providers. They allow people using one service provider to easily communicate with people using different providers, which is not possible with providers based on non-federated protocols, such as Telegram and WhatsApp.
Non-federated protocols also make it harder to bring a new provider to the market: it will have few users at first, and they will not be able to communicate with their friends who use a different provider – a phenomenon commonly known as the network effect.
Using a federated protocol does not solve the problem entirely, though, as it moves the network effect to a higher level. Previously, XMPP was the only open federated protocol. Matrix was introduced in 2014 as a modern alternative, which fragmented the ecosystem: users of Matrix cannot straightforwardly communicate with users of XMPP, creating a different sort of network effect.
Lavina is an attempt to solve that problem. It is not a new protocol, but a way to make existing protocols interoperable with each other. Users of a Lavina service should be able to connect to it using a client for any supported protocol, and the service should federate with other services over any supported protocol.
Products should compete with each other on the basis of technical merit, not network effects.
Scaling down, up and wide
Federated services are run by communities and companies of various sizes and scales – from the tiniest single-person servers, to large communities with thousands of active users, to global corporations with billions of concurrent connections.
- Scale down – we should support running the software on low-end hardware such as a Raspberry Pi or a single-core VPS. This is important for cost savings on smaller instances.
- Scale up – we should utilize the available resources of more powerful machines in a fair and efficient manner. This includes multiple CPU cores, larger RAM and storage, and higher network bandwidth.
- Scale wide – the most complex property to implement: we should support running the service as a distributed cluster of instances which efficiently schedule the load and balance connections across themselves.
A clustered setup may additionally provide data replicas for added reliability and improved read scaling. It should also take data locality into account so as not to introduce extra network hops, which could result in multiple cross-AZ/DC data transfers within the scope of a single request.