Sink is a data access layer that additionally handles synchronization with external sources and indexing of data for efficient queries.
The client-facing Store API hides all Sink internals from applications and emulates a unified store that provides data through a standardized interface. This allows applications to transparently use various data sources with various source formats.
A resource is a plugin that provides access to an additional source. It consists of a store, a synchronizer process that executes synchronization and change replay to the source and maintains the store, and a facade plugin for the client API.
Storage / Indexes
Each resource maintains a store that can either store the full dataset for offline access or only metadata for quick lookups. Resources can define how data is stored. The store consists of revisions with every revision containing one entity.
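The revision-based layout of the store can be sketched as follows. This is an illustrative Python sketch with hypothetical names (the actual implementation is C++); it only shows the idea that every write produces a new revision containing one entity:

```python
class RevisionStore:
    """Append-only store: every write creates a new revision holding one entity."""

    def __init__(self):
        # Each entry is (revision_number, entity_id, entity_state); revisions are 1-based.
        self.revisions = []

    def append(self, entity_id, entity_state):
        """Persist a new state of an entity as the next revision."""
        revision = len(self.revisions) + 1
        self.revisions.append((revision, entity_id, entity_state))
        return revision

    def latest_revision(self):
        return len(self.revisions)

    def entity_at(self, revision):
        """Return the (revision, entity_id, state) tuple stored at a revision."""
        return self.revisions[revision - 1]
```

Because revisions are strictly ordered, consumers can later catch up by iterating from the last revision they processed.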
The store additionally contains various secondary indexes for efficient lookups.
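A secondary index maps a property value back to the entities carrying that value, so a query does not have to scan the whole store. A minimal sketch, assuming a simple value-to-ids mapping (names are hypothetical, not the actual C++ API):

```python
from collections import defaultdict

class SecondaryIndex:
    """Maps a property value to the set of entity ids having that value."""

    def __init__(self):
        self.index = defaultdict(set)

    def add(self, value, entity_id):
        self.index[value].add(entity_id)

    def remove(self, value, entity_id):
        self.index[value].discard(entity_id)

    def lookup(self, value):
        """Return all entity ids indexed under the given value."""
        return self.index.get(value, set())
```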
The domain types exposed in the public interface provide standardized access to the store. The domain types and their properties directly define the granularity of data retrieval and thus also what queries can be executed.
The buffers used by the resources in the store may differ from resource to resource, and don't necessarily have a 1:1 mapping to the domain types. This allows resources to store data in a way that is convenient/efficient for synchronization, although it may require a bit more effort when accessing the data. The individual buffer types are specified by the resource and internal to it. Default buffer types exist for all domain types.
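The separation between resource-internal buffers and public domain types can be sketched like this. All names here are hypothetical illustrations in Python (the real buffers are resource-defined binary formats); the point is only that an adaptor translates the internal layout into the standardized domain-type properties:

```python
class MailBuffer:
    """Resource-internal representation, laid out for efficient synchronization."""

    def __init__(self, raw_headers, flags):
        self.raw_headers = raw_headers  # e.g. headers as received from the source
        self.flags = flags              # source-specific flag set

def to_domain_mail(buffer):
    """Adapt the internal buffer to the standardized domain-type properties."""
    return {
        "subject": buffer.raw_headers.get("Subject", ""),
        "unread": "seen" not in buffer.flags,
    }
```

The extra adaptor step is the "bit more effort" mentioned above: reads go through a translation, but the resource is free to keep data in whatever shape its source delivers.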
Commands are used to modify the store. The resource processes commands that are generated by clients and the synchronizer.
The resource emits notifications to inform clients of new revisions and other changes.
The change replay is based on the revisions in the store. Clients, as well as the write-back mechanism that replays changes to the source, are informed when a new revision is available. Each client can then process all new revisions, starting from the last revision it has seen, and thereby update its state to the latest revision.
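The catch-up step can be sketched as follows. This is a self-contained Python illustration with hypothetical names (the actual implementation is C++): each consumer remembers the last revision it processed and, on notification, replays everything newer.

```python
class Store:
    """Minimal revision store: each revision holds one entity change."""

    def __init__(self):
        self.revisions = []  # (entity_id, entity_state) per revision, 1-based

    def append(self, entity_id, state):
        self.revisions.append((entity_id, state))
        return len(self.revisions)

    def latest_revision(self):
        return len(self.revisions)

class Client:
    """Tracks the last revision it has seen and catches up on notification."""

    def __init__(self, store):
        self.store = store
        self.last_seen = 0
        self.state = {}

    def on_new_revision(self):
        # Replay every revision newer than the last one we processed.
        for rev in range(self.last_seen + 1, self.store.latest_revision() + 1):
            entity_id, state = self.store.revisions[rev - 1]
            self.state[entity_id] = state
        self.last_seen = self.store.latest_revision()
```

The same mechanism serves the write-back path: it is just another consumer that replays local revisions to the source instead of into client state.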
The synchronizer executes a periodic synchronization that results in change commands to synchronize the store with the source. The change-replay mechanism is used to write back changes to the source that happened locally.
The resources have an internal persistent command queue that is populated by the synchronizer and by clients, and that is processed continuously.
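The queueing behavior can be sketched as below. This is a simplified in-memory Python illustration (the real queue is persisted so commands survive restarts, and the names are hypothetical):

```python
from collections import deque

class CommandQueue:
    """In-order queue of commands from clients and the synchronizer.

    In the real implementation the queue is persistent; here a plain
    deque stands in for the persisted storage.
    """

    def __init__(self):
        self.queue = deque()

    def enqueue(self, command):
        self.queue.append(command)

    def process_all(self, handler):
        """Drain the queue, handing each command to the processing handler."""
        processed = 0
        while self.queue:
            handler(self.queue.popleft())
            processed += 1
        return processed
```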
Each resource has an internal pipeline of preprocessors that can be used for tasks such as indexing or filtering, and through which every command goes before it enters the store. The pipeline guarantees that the preprocessor steps are executed on any command before the entity is persisted.
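The pipeline guarantee can be sketched as follows, again as a hedged Python illustration with hypothetical names: every command passes through each preprocessor in order, and only what survives the full chain is persisted.

```python
class Pipeline:
    """Runs every command through all preprocessors before persisting."""

    def __init__(self, preprocessors, persist):
        self.preprocessors = preprocessors  # ordered list of callables
        self.persist = persist              # final persistence step

    def push(self, entity):
        for preprocess in self.preprocessors:
            entity = preprocess(entity)
            if entity is None:  # a preprocessor may filter the command out
                return None
        self.persist(entity)
        return entity
```

Indexing fits naturally here as a preprocessor that records the entity in secondary indexes before it reaches the store, and filtering as a preprocessor that drops unwanted commands.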