## Page: https://signaldb.js.org/getting-started/

Welcome to the Getting Started Guide for SignalDB, a local-first database with signal-based reactivity and real-time synchronization. This guide will help you understand the basics of SignalDB and get you up and running quickly. SignalDB is designed for blazing fast query performance and data persistence, while remaining framework-agnostic.

## Installation

Installing SignalDB is simple. It can be installed using npm. Open your terminal and enter the following command:

```bash
$ npm install @signaldb/core
```

## Creating Collections

Creating collections is straightforward.

```js
import { Collection } from '@signaldb/core'

const posts = new Collection()
```

Normally, however, you'll want to persist your data. Persistence in SignalDB is achieved by using persistence adapters. Choose one that fits your needs and pass it to the collection constructor. Here is an example using `@signaldb/localstorage`:

```js
import { Collection } from '@signaldb/core'
import createLocalStorageAdapter from '@signaldb/localstorage'

const posts = new Collection({
  persistence: createLocalStorageAdapter('posts'),
})
```

That's all you have to do. There are also some optional configuration options, which you can find in the collections reference.

## Adding data

After you've created your first collection, you can start adding documents to it.

```js
// ...
const postId = posts.insert({ title: 'Foo', text: 'Lorem ipsum …' })
```

You've created your first document in SignalDB! Check out the data manipulation page to learn how to update and remove documents.

## Querying

Getting your documents back is also very easy.

```js
// ...
const cursor = posts.find({})
console.log(cursor.fetch()) // returns an array with all documents in the collection
```

You've finished the Getting Started Guide! The next steps are getting reactivity to work. Check out the core concepts about reactivity to learn how to do this.

## Next steps

Now you know the basics about SignalDB.
It's time to learn how to integrate it with the framework you're using. Take a look at our guides:

* Angular
* React
* Solid
* Svelte
* Vue

After that you might want to learn more about the core concepts of SignalDB, how you can query your data, or how to connect it to your backend.

---

## Page: https://signaldb.js.org/core-concepts/

The following are some key concepts that are important to understanding how to use SignalDB effectively.

## Collections

In SignalDB, all data is stored in memory, making query performance exceptionally fast. Users can create collections of documents, where each document is a record in the database. Queries can be run against these collections to retrieve data according to specific criteria. This architecture also plays an important role in achieving optimistic UI strategies.

### Schemaless

SignalDB is schema-less, which means you don't need to define a schema for your data before you start using it. This allows you to store any data you want without worrying about defining a schema first. More information on how to define collections and perform queries can be found in the dedicated sections:

* Collections
* Queries

### Optimistic UI

Optimistic UI is an approach where user interfaces are updated optimistically in response to user actions, before the actual server response is received. This approach provides a smoother and more responsive user experience because the UI doesn't have to wait for the server to confirm the success of the action. SignalDB's schema-less nature and reactive querying via reactivity adapters enable the creation of robust optimistic UI implementations. When a user triggers an action that changes the data, such as submitting a form, SignalDB's reactivity system can immediately update the UI to reflect the intended changes. This is possible because the reactivity adapters automatically propagate changes to the UI components that rely on the affected data.
## Signals and Reactivity

As the name suggests, SignalDB's reactivity is based on signals, a concept from functional reactive programming. The concept is quite old, but it has become popular again since the hype around SolidJS in early 2023. Since many signal libraries are currently emerging, SignalDB is designed to be library-agnostic. Reactivity adapters in SignalDB enable reactive querying of the documents in a collection. They provide a simple interface that allows you to integrate a signal library. By using reactivity adapters, you can ensure that whenever the data in your collections changes, any reactive queries tied to that data are automatically updated, keeping the state of your application consistent with your data. To learn more about signals, read The Evolution of Signals in JavaScript by Ryan Carniato (author of SolidJS). Typically, you'll simply use a predefined reactivity adapter for the signal library you're using. Check out the available adapters in the Reactivity section of the documentation.

## Memory Adapters

SignalDB's memory adapters play a critical role in controlling how and where data is stored in memory. These adapters provide an abstraction over the underlying memory storage mechanism, giving users the flexibility to define custom methods for handling data storage operations. Simply put, a memory adapter is a piece of code that dictates how your data is stored in memory. When you perform a write or read operation, the adapter is responsible for translating those high-level operations into low-level memory operations. Normally, you don't need to worry about memory adapters because SignalDB comes with a default one. Since a memory adapter implements a subset of the `Array` interface, the most basic memory adapter is an empty array (`[]`). You can also create a memory adapter on your own. See the createMemoryAdapter reference for more information.
## Data Persistence

SignalDB only stores data in memory, and it will be lost when the memory is flushed (e.g. on a page reload). Normally you don't want to lose data; you want to persist it. This is where persistence adapters come in. Persistence adapters in SignalDB play a critical role in ensuring that your data remains intact across multiple user sessions or application reloads. These adapters facilitate data persistence by providing a standard interface for storing and retrieving data, thereby abstracting away the specifics of the underlying storage mechanism. A persistence adapter provides the necessary code to interact with a specific storage medium, such as localStorage, IndexedDB, or even a remote server. The role of the adapter is to translate the high-level operations that you perform on your data (such as saving or loading a document) into low-level operations that the storage medium can understand. The main benefit of using persistence adapters is flexibility. Because they provide an abstraction layer over the storage system, you can switch between different storage systems with minimal impact on the rest of your code. See also the persistence adapters documentation page.

---

## Page: https://signaldb.js.org/queries/

Like most databases, SignalDB lets you query your data. It uses an approach similar to MongoDB, where you can apply selectors to filter your data and use options to control things like sorting, projection, skipping, and limiting the results. When you run a query with `.find()`, the query doesn't execute right away. Instead, it returns a cursor, which you can use to call methods and get the actual data, just like in MongoDB. This makes it easier to only process what you need. For example, if you just need the `.count()` of a query, you don't have to load all the data. A unique feature of SignalDB is that all queries are reactive by default.
This means that if you run a query and use a function on the returned cursor within the `effect` or `autorun` function of your reactivity library, the query will automatically rerun whenever the data changes.

## Queries

You can query your data by calling the `.find()` or `.findOne()` method of your collection. `.findOne()` returns the first matching document, while `.find()` returns a cursor.

### Selectors

SignalDB uses the `mingo` library under the hood. Its selectors are very similar to MongoDB's. Check out their documentation to learn what a selector should look like: https://github.com/kofrasa/mingo

### Options

The second parameter you can pass to the `.find()` method is the options object. With the options you can control things like sorting, projection, skipping, or limiting data.

### Sorting

To sort the documents returned by a cursor, provide a `sort` object in the options. The object should contain the keys you want to sort by and a direction (`1 = ascending`, `-1 = descending`).

```js
collection.find({}, {
  sort: { createdAt: -1 },
})
```

### Projection

You can also control which fields should be returned in the query. To do this, specify the `fields` object in the `options` of the `.find()` method.

TIP: With the `fields` option you can also control when your query will rerun. If you only query for a field that is not changing, the query will not rerun. Also see Field-Level Reactivity.

```js
collection.find({}, {
  fields: { title: 1 },
})
```

### `skip` and `limit`

To skip or limit the result of a query, use the `skip` or `limit` options. Both options are optional.

```js
collection.find({}, {
  skip: 10,
  limit: 10,
})
```

## Field-Level Reactivity

SignalDB introduces a powerful enhancement to its reactivity system called **Field-Level Reactivity**, which ensures that reactive functions (such as `effect` or `autorun`) only rerun when the specific fields accessed in your code are changed.
Previously, the reactive system would rerun the query if any field in any item of the result set was modified, regardless of whether those fields were actually used in the code. This led to unnecessary reactivity and potential performance bottlenecks, especially with large datasets.

### Key Features

* **Field-Level Reactivity**: Reactive reruns now occur only when the fields actually accessed by your code are modified, rather than triggering for all changes in the dataset.
* **Item-Level Reactivity**: If a query returns multiple items but you only access fields from specific items, changes in unaccessed items will not trigger a rerun.
* **Automatic Field Tracking**: Instead of manually specifying which fields to track using the `fields` option, SignalDB now automatically tracks fields as you access them. This reduces the chance of developer oversight and simplifies code maintenance.

### Opt-In to Field-Level Tracking

To enable field-level reactivity, there are three ways to configure field tracking: globally, per collection, or through the options parameter of the `.find()` method.

#### 1. Global Configuration

To enable field tracking globally for all collections in your application, use the static method `Collection.setFieldTracking`. This ensures that field tracking is active by default across all collections unless overridden.

```js
Collection.setFieldTracking(true) // Enables field tracking globally
```

#### 2. Per Collection Configuration

To configure field tracking for a specific collection, use the `setFieldTracking` method on that collection.

```js
someCollection.setFieldTracking(true) // Enables field tracking for this collection only
```

#### 3. Enable Field Tracking in `.find()` Options

You can enable field tracking on a per-query basis by passing the `fieldTracking: true` option to the `.find()` method. When this option is set, reactivity is scoped to the fields you access.
```js
effect(() => {
  const items = someCollection.find({}, { fieldTracking: true }).fetch()
  // Access the fields you care about here
  console.log(items[0].name) // Will rerun only if the 'name' field of the 0th item changes
})
```

This behavior optimizes your app's performance by reducing the number of unnecessary reruns. Instead of rerunning every time any field in any document changes, it only reruns when the relevant fields you're interacting with are modified.

### Benefits of Automatic Field Tracking

1. Improved Performance: By reducing the scope of reactive reruns to only relevant data, SignalDB minimizes computational overhead and maximizes efficiency, particularly in scenarios where queries return large datasets or where irrelevant fields change frequently.
2. Simplified Code: Developers no longer need to manually specify fields to track. With automatic field tracking, the system handles this for you, allowing you to focus on business logic rather than managing reactivity manually.
3. Reduced Developer Error: Manually tracking fields can be error-prone, especially as queries evolve. Automatic field-level reactivity ensures that your queries remain optimal even as your code changes, making it easier to maintain over time.

---

## Page: https://signaldb.js.org/data-manipulation/

## Inserting data

To insert data into a collection, use the `.insert()` method.

```js
const id = collection.insert({ title: 'Hello World' })
```

## Updating data

To update data in a collection, use the `.updateOne()` or `.updateMany()` method. SignalDB uses the `mingo` library under the hood. It allows modifiers that are very similar to MongoDB modifiers.
Check out their documentation to learn what a modifier should look like: https://github.com/kofrasa/mingo#updating-documents

```js
collection.updateOne({ id: 'xyz' }, {
  $set: { title: 'Hello SignalDB' },
})
collection.updateMany({ title: 'Hello World' }, {
  $set: { title: 'Hello SignalDB' },
})
```

## Replacing items

To replace an item in a collection, use the `.replaceOne()` method.

```js
collection.replaceOne({ id: 'xyz' }, { title: 'Hello SignalDB' })
```

## Deleting data

To delete data from a collection, use the `.removeOne()` or `.removeMany()` method.

```js
collection.removeOne({ id: 'xyz' })
collection.removeMany({ title: 'Hello World' })
```

---

## Page: https://signaldb.js.org/data-persistence/

Persistence adapters in SignalDB provide the mechanism for storing and retrieving data, ensuring that your data is kept safe across sessions and reloads of your application. These adapters interact with the underlying storage medium, such as localStorage, IndexedDB, or even a remote server, and handle the specifics of those storage systems while providing a consistent interface for data operations in your application. Persistence adapters are responsible for transforming the high-level operations you perform on your data (such as saving a document or loading a collection) into the low-level operations that the specific storage system can understand and perform. The main benefit of using persistence adapters is the abstraction they provide. They allow SignalDB to remain agnostic to the underlying storage system. This means that you can switch between different systems without changing the rest of your code. The following persistence adapters are currently available:

* IndexedDB
* localStorage
* OPFS
* FileSystem

Building your own persistence adapter for your specific use case is also possible and pretty straightforward. See `createPersistenceAdapter` for more information.
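The abstraction benefit can be seen in a short configuration sketch: switching storage systems means swapping one import and one constructor argument. The factory name `createIndexedDBAdapter` for the `@signaldb/indexeddb` package is an assumption by analogy with `@signaldb/localstorage`; check the adapter's own documentation for the exact export.

```javascript
import { Collection } from '@signaldb/core'
// Assumed export name, analogous to `@signaldb/localstorage` —
// verify against the IndexedDB adapter's docs.
import createIndexedDBAdapter from '@signaldb/indexeddb'

const posts = new Collection({
  persistence: createIndexedDBAdapter('posts'),
})

// Everything else stays the same: inserts and queries are identical
// regardless of which persistence adapter backs the collection.
posts.insert({ title: 'Persisted post' })
```

Because only the `persistence` option changes, migrating from localStorage to IndexedDB (or to a custom adapter built with `createPersistenceAdapter`) leaves the rest of your data-access code untouched.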
---

## Page: https://signaldb.js.org/reactivity/

Today, users demand near-instant feedback from their applications, expecting smooth and seamless interactions. Traditional asynchronous operations can sometimes slow down this experience. That's the reason why SignalDB uses reactivity and makes it easy to achieve an Optimistic UI.

## Understanding Optimistic UI

Optimistic UI is a design pattern where the application UI is updated immediately based on the expected result of an action, without waiting for a server's confirmation. It's all about improving the perceived performance of the application and delivering a more responsive user experience. Key benefits of Optimistic UI are an improved user experience (UX) and generally improved responsiveness of an application. Immediate feedback enhances user confidence in the application. The application feels faster, since it doesn't wait for server confirmation, which means there's no noticeable lag for the user. SignalDB's reactive architecture, combined with its memory storage, forms the backbone for implementing Optimistic UI. The database allows instant updates to the UI based on anticipated changes, which are then synchronized with actual data once it's processed.

### Signals and Reactivity

SignalDB harnesses signals, derived from functional reactive programming, to manage reactivity. The resurgence of signals, particularly after the popularity of SolidJS in 2023, places SignalDB in a prime position.

1. **Integration with Signal Libraries**: SignalDB's design remains neutral to any particular signal library, offering integration through reactivity adapters. This compatibility ensures an up-to-date UI in tandem with data changes.
2. **Reactivity Adapters**: With these adapters, SignalDB can instantly query documents within a collection reactively. They seamlessly integrate with signal libraries, ensuring auto-updates to reactive queries when data changes.
By providing a smooth, responsive user experience, SignalDB ensures that user interactions remain at the forefront of modern web design and functionality.

### Data Persistence and Optimistic UI

While SignalDB stores data in memory, ensuring the persistence of this data across sessions or reloads is vital. With persistence adapters, this challenge is met head-on. They provide the mechanism to store data, whether it's in localStorage, IndexedDB, or a remote server. When coupled with Optimistic UI, persistence adapters ensure that even if there's a momentary lapse in data storage, the user's experience remains unaffected. Also check out the core concepts about reactivity.

## Reactivity Libraries

We provide prebuilt reactivity adapters for existing reactivity libraries. If an adapter is missing, feel free to request it by opening an issue on GitHub, or write one on your own. See createReactivityAdapter for more information. For some libraries, it wasn't possible to implement an `onDispose` method in the adapter. That means you have to clean up the cursor manually after the reactive context is closed. There are examples on the specific adapter documentation pages. Make sure that you implement it properly, since not doing so can lead to memory leaks. Scope checking is only supported by a few libraries. Scope checking means that SignalDB can check whether a cursor was created from a reactive scope (`find`/`findOne` called in an `effect` function) before applying the event handlers used to provide the reactivity; without it, the handlers are applied unconditionally. To avoid memory leaks, use an adapter with scope checking or pass `{ reactive: false }` in your options (e.g. `<collection>.find({ … }, { reactive: false })`).
| Library | Reactivity adapter | Automatic Cleanup | Scope check |
| --- | --- | --- | --- |
| `@preact/signals-core` | ✅ | ✅ | ✅ |
| `@reactively/core` | ✅ | ✅ | ✅ |
| `@webreflection/signal` | ✅ | - | - |
| `alien-signals` | ✅ | ✅ | ✅ |
| `Angular Signals` | ✅ | ✅ | ✅ |
| `Maverick-js Signals` | ✅ | ✅ | ✅ |
| `Meteor Tracker` | ✅ | ✅ | ✅ |
| `MobX` | ✅ | ✅ | ✅ |
| `oby` | ✅ | ✅ | ✅ |
| `Qwik` | ✅ | - | - |
| `S.js` | ✅ | ✅ | ✅ |
| `signal-polyfill` | ✅ | - | - |
| `signia` | ✅ | - | - |
| `sinuous` | ✅ | ✅ | ✅ |
| `Solid Signals` | ✅ | ✅ | ✅ |
| `sprae` (see #858) | ✅ | ✅ | ✅ |
| `Svelte Runes` | ✅ | ✅ | ✅ |
| `ulive` | ✅ | - | - |
| `usignal` | ✅ | ✅ | ✅ |
| `Vue.js refs` | ✅ | ✅ | ✅ |

---

## Page: https://signaldb.js.org/sync/

## Introduction to Sync in SignalDB

SignalDB is designed to handle synchronization efficiently and flexibly, making it adaptable to various backend systems. At its core, synchronization in SignalDB revolves around ensuring that data across multiple collections remains consistent and up-to-date, whether you are working with a local-first approach or integrating with remote servers.

### High-Level Overview

In SignalDB, synchronization is managed by the `SyncManager` class, which is central to the framework's ability to maintain data consistency. The `SyncManager` is responsible for coordinating the synchronization process for multiple collections, pulling and pushing data as needed. This centralization provides several key benefits:

* **Flexibility**: SignalDB's sync mechanism is designed to work with any backend system, from REST APIs to GraphQL and beyond. This flexibility means you can integrate SignalDB with virtually any data source without worrying about compatibility issues.
* **Efficiency**: Instead of handling synchronization for each collection separately, the `SyncManager` allows you to manage sync operations for all collections from a single instance.
This streamlined approach simplifies the development process, reducing the need for repetitive code and minimizing potential synchronization errors.

### Role of the SyncManager

The `SyncManager` class plays a pivotal role in SignalDB by:

* **Managing Multiple Collections**: A single `SyncManager` instance can oversee the sync operations for multiple collections simultaneously. This centralized management ensures that changes in one collection can be synchronized with others effectively and efficiently.
* **Improving Developer Experience**: By handling synchronization through one class instance, SignalDB enhances the developer experience. You no longer need to call sync functions for each collection individually. Instead, you can manage all sync operations through the `SyncManager`, which takes care of coordinating and executing these tasks behind the scenes.
* **Conflict Resolution**: SignalDB provides built-in conflict resolution mechanisms to handle situations where data conflicts occur during synchronization. Conflict resolution ensures that the most recent changes are preserved, while maintaining data consistency across all collections.
* **Queueing Sync Operations**: The `SyncManager` queues sync operations to ensure that they are executed in the correct order. This is particularly important when dealing with interdependent collections or when sync operations have dependencies on each other.
* **Debouncing Pushes**: To optimize network usage and minimize unnecessary data transfers, the `SyncManager` debounces push operations. This means that multiple push operations for the same collection are merged into a single operation, reducing the number of network requests and improving performance.

This approach not only simplifies your codebase but also helps maintain consistency and reliability across your application's data.

## Local-First Synchronization vs. On-Demand Fetching

When it comes to data synchronization, SignalDB offers two primary strategies: local-first synchronization and on-demand fetching. Understanding the differences between these approaches can help you choose the best method for your application's needs.

### Local-First Synchronization

In a local-first synchronization approach, data changes are managed and stored locally on the client device. The local data is periodically synchronized with the server, either automatically or manually. This method provides several key benefits:

* **Performance**: Local-first syncing ensures that your application remains responsive, as data operations are performed locally before syncing with the server. This reduces the need for constant server interactions, leading to faster data retrieval and updates.
* **Offline Support**: With local-first synchronization, users can continue interacting with your application even when they are offline. Changes are queued locally and synchronized once connectivity is restored, ensuring a seamless experience regardless of network conditions.
* **Optimistic UI**: This approach allows for a smoother user experience by employing optimistic UI techniques. Users see immediate feedback on their actions (such as form submissions or data updates) while the sync process happens in the background. This eliminates the need for loading spinners or delays in user interactions.

### On-Demand Fetching

On-demand fetching involves retrieving data directly from the server whenever it is needed. This method can be more suitable in certain scenarios:

* **Up-to-Date Information**: For applications where having the most current data is essential, on-demand fetching guarantees that the latest information is always retrieved from the server. However, it can lead to more network requests and potentially slower performance.
* **Resource Efficiency**: If the data changes infrequently or if the application primarily relies on the most up-to-date information, on-demand fetching can be more resource-efficient. It reduces the need for local storage and minimizes the complexity of handling local changes.

### Comparison

To help you better understand the differences, here's a comparison table highlighting the key aspects of each approach:

| Aspect | Local-First Synchronization | On-Demand Fetching |
| --- | --- | --- |
| **Performance** | High - operates locally, independent of server load and latency | Can be slower due to server request times |
| **Offline Support** | Strong - allows for offline operations and synchronization later | Limited - relies on continuous connectivity |
| **User Experience** | Smooth - uses optimistic UI for immediate feedback | May include delays or loading spinners during data fetch |
| **Real-Time Accuracy** | May lag behind the server data if not synced frequently | Always retrieves the latest data from the server |
| **Resource Usage** | Higher - requires local storage and management | Lower - no local storage needed |

In summary, local-first synchronization is ideal for enhancing performance and offline capabilities, while on-demand fetching is suited for applications that prioritize real-time data accuracy. SignalDB's flexibility allows you to choose the approach that best fits your use case.

## Syncing with Any Backend

SignalDB is designed to offer versatile synchronization capabilities, making it compatible with a wide range of backend systems. Whether you're using REST APIs, GraphQL endpoints, or even custom protocols, SignalDB's modular architecture ensures smooth integration and synchronization.

### Backend-Agnostic Sync Mechanisms

SignalDB's synchronization mechanism is inherently backend-agnostic. This means it can connect and sync with virtually any type of backend system without requiring major modifications.
The framework abstracts the specifics of the server interactions, allowing developers to focus on integrating their chosen backend without being tied to a particular technology.

### Abstracting Server Interaction

Central to this flexibility are the `pull` and `push` functions within the `SyncManager`. These functions act as intermediaries between your application and the backend, abstracting the details of data retrieval and submission. This design ensures that:

* **Pull Function**: Retrieves data from the server. You can define how data is fetched, whether it's through a REST API call, a GraphQL query, or another method. This flexibility allows you to adapt to various server architectures with minimal effort.
* **Push Function**: Sends local changes to the server. Similar to the pull function, you can specify how changes are transmitted, ensuring compatibility with your backend's requirements. This includes sending data through HTTP methods, websockets, or custom protocols.

In addition to these two functions, you can also register a function that is called when the client receives live updates from the server. With this concept, you can easily implement real-time updates in your application without having to call the `pull` function every time.

### Examples of Integration

Here's how SignalDB can be integrated with different backend systems:

* **REST APIs**: SignalDB can interact with RESTful endpoints for both pulling and pushing data. For example, a `pull` function might fetch data from `/api/reference/core/collection/todos`, while the `push` function sends updates to the same endpoint.
* **GraphQL**: For applications using GraphQL, SignalDB can perform queries and mutations. The `pull` function might execute a GraphQL query to retrieve collection data, and the `push` function could execute a mutation to submit changes.
* **Custom Protocols**: If you have a custom backend or protocol, you can implement the `pull` and `push` functions to accommodate these specifics. This ensures that SignalDB remains adaptable to unique or proprietary systems.
* **Live Updates**: SignalDB can also work with real-time updates through technologies like WebSockets or server-sent events. By integrating live update mechanisms, SignalDB can maintain real-time data synchronization across all clients.

In summary, SignalDB's design ensures that you can synchronize data seamlessly with any backend system. Its modular approach, with abstracted `pull` and `push` functions, provides the flexibility to integrate with various technologies while maintaining efficient and reliable synchronization.

## Sync Flow & Conflict Resolution

SignalDB's synchronization process is designed to efficiently handle the flow of data between the client and server, ensuring that local changes are correctly synchronized while managing conflicts in a straightforward manner. To do that, SignalDB tracks all changes made locally to the data. This includes insertions, updates, and removals. These changes are logged and queued for synchronization with the server. To handle conflicts, SignalDB uses a "replay" mechanism during synchronization. This involves:

* **Replaying Changes**: Local changes are applied to the latest data fetched from the server. This ensures that any conflicts are resolved based on the most recent data.
* **Handling Conflicts**: The "last change operation wins" strategy is employed, where changes are replayed on the latest data, and conflicts are resolved by applying the most recent operations.

### Sync Flow

SignalDB's sync process consists of several key steps:

* **Client Pulls Data**: The client retrieves the latest data from the server. This is the initial step where the client gets up-to-date information from the backend.
In a real-time scenario, this could also be triggered by a live update from the server that notifies the client about changes.
* **Replay Changes**: Any local changes (such as inserts, updates, or removals) since the last sync are applied to the new data to determine which actual changes need to be pushed to the server. This step ensures that local modifications are applied to the updated data set, maintaining consistency and avoiding overwrites of remote changes.
* **Push Changes**: After applying local changes to the latest data, the client pushes these changes to the server. This ensures that the server receives and records the client's updates.
* **Pull Again for Verification**: Finally, the client performs another pull to verify that all changes have been correctly applied and synchronized with the server.

The following pseudo code shows the sync flow in detail. It's based on the actual implementation of the sync function, but it's simplified to illustrate the main concepts.

```js
async function sync(changes, lastSnapshot, newData) {
  let dataFromServer = await pullDataFromServer();
  if (changes) {
    let localSnapshot = applyChangesToSnapshot(changes, lastSnapshot);
    if (hasDifference(localSnapshot, lastSnapshot)) {
      let newSnapshot = applyChangesToSnapshot(changes, newData);
      if (hasDifference(newData, newSnapshot)) {
        pushChangesToServer(newSnapshot);
        dataFromServer = pullUpdatedData();
      }
    }
  }
  updateDataOnCollection(dataFromServer);
}
```

### Conflict Resolution

SignalDB uses a "last change operation wins" strategy to handle conflicts during synchronization. Here's how it works:

* **Replay Mechanism**: When the client pulls the latest data, it replays all logged changes on top of this data. This ensures that any modifications made locally are applied to the most recent version of the data from the server.
* **Last Change Wins**: In cases where conflicts arise (e.g., when the same item has been modified both locally and on the server), the latest change operation takes precedence. The client's local changes are replayed on the most recent server data, ensuring that the latest state is reflected.

This strategy is suitable for many applications because it simplifies conflict resolution by ensuring that the most recent changes are applied. It also helps maintain a consistent state across clients by synchronizing modifications in a clear and predictable manner.

This chart illustrates the conflict resolution process:

## Implementing Synchronization

This page describes how remote synchronization can be implemented on the frontend side.

### Creating a `SyncManager`

The `SyncManager` is the main class that handles synchronization. To get started with implementing synchronization in your app, you need to create a `SyncManager` instance. The `SyncManager` constructor takes an options object as its first and only parameter. This object contains the methods for your `pull` and `push` logic, and also a method to create a `persistenceAdapter` that will be used internally to store snapshots, changes, and sync operations. This is needed in case you want to cache this data offline. Additionally, a `reactivityAdapter` can be passed to the options object. This adapter is used to make some of the functions provided by the `SyncManager` reactive (e.g. `isSyncing()`). There is also a `registerRemoteChange` option that can be used to register a handler that notifies the `SyncManager` about remote changes.
```ts
import { SyncManager } from '@signaldb/sync'

const syncManager = new SyncManager({
  reactivityAdapter: someReactivityAdapter,
  persistenceAdapter: name => createLocalPersistenceAdapter(name),
  pull: async () => {
    // your pull logic
  },
  push: async () => {
    // your push logic
  },
  registerRemoteChange: (collectionOptions, onChange) => {
    // …
  },
})
```

### Adding Collections

Before we go into the details of the `pull` and `push` methods, we need to understand how we add collections to our `syncManager`. The `addCollection` method takes two parameters. The first one is the collection itself and the second one is an options object. This object must contain at least a `name` property that will be used to identify the collection in the `syncManager`. You can also pass other information to the options object. These properties will be passed to your `push` & `pull` methods and can be used to access additional information about the collection that is needed for the synchronization (e.g. an API endpoint URL). This concept also allows you to do things like passing `canRead`/`canWrite` methods to the options that are later used to check if the user has the necessary permissions to `pull`/`push`.

```ts
import { Collection } from '@signaldb/core'

const someCollection = new Collection()
syncManager.addCollection(someCollection, {
  name: 'someCollection',
  apiPath: '/api/someCollection',
})
```

### Implementing the `pull` method

After we've added our collection to the `syncManager`, we can start implementing the `pull` method. The `pull` method is responsible for fetching the latest data from the server and applying it to the collection. The `pull` method is called whenever the `syncAll` or the `sync(name)` method is called. During sync, the `pull` method will be called for each collection that was added to the `syncManager`.
It receives the collection options passed to the `addCollection` method as the first parameter, and an object with additional information, like the `lastFinishedSyncStart` and `lastFinishedSyncEnd` timestamps, as the second parameter. The `pull` method must return a promise that resolves to an object with either an `items` property containing all items that should be applied to the collection or a `changes` property containing all changes `{ added: T[], modified: T[], removed: T[] }`.

```ts
const syncManager = new SyncManager({
  // …
  pull: async ({ apiPath }, { lastFinishedSyncStart }) => {
    const data = await fetch(`${apiPath}?since=${lastFinishedSyncStart}`).then(res => res.json())
    return { items: data }
  },
  // …
})
```

### Implementing the `push` method

The `push` method is responsible for sending the changes to the server. It is called during sync for each collection that was added to the `syncManager` if changes are present. It receives the collection options passed to the `addCollection` method as the first parameter, and an object including the changes that should be sent to the server as the second parameter. The `push` method returns a promise without a resolved value. If an error occurs during the `push`, the sync for the collection will be aborted and the error will be thrown.

**There are some errors that you need to handle yourself. These are normally validation errors (e.g. `4xx` status codes) where the sync shouldn't fail, but the local data should be overwritten with the latest server data.** If you throw these errors in your `push` method, the `syncManager` will keep the changes passed to the `push` method and will try to `push` them again on the next sync. This can lead to a loop where the changes are never pushed successfully to the server. To prevent this, handle those errors in the `push` method and just return afterwards.
```ts
const syncManager = new SyncManager({
  // …
  push: async ({ apiPath }, { changes }) => {
    await Promise.all(changes.added.map(async (item) => {
      const response = await fetch(apiPath, { method: 'POST', body: JSON.stringify(item) })
      if (response.status >= 400 && response.status <= 499) return await response.text()
    }))
    await Promise.all(changes.modified.map(async (item) => {
      const response = await fetch(apiPath, { method: 'PUT', body: JSON.stringify(item) })
      if (response.status >= 400 && response.status <= 499) return await response.text()
    }))
    await Promise.all(changes.removed.map(async (item) => {
      const response = await fetch(apiPath, { method: 'DELETE', body: JSON.stringify(item) })
      if (response.status >= 400 && response.status <= 499) return await response.text()
    }))
  },
  // …
})
```

### Handle Remote Changes

To handle remote changes for a specific collection, you receive the event handler to call on remote changes through the `registerRemoteChange` option. It gets the `collectionOptions` as the first parameter and an `onChange` handler as the second parameter. The `onChange` handler can be called after changes were received from the server for the collection that matches the provided `collectionOptions`. The `onChange` handler optionally takes the changes as the first parameter. If the changes are not provided, the `pull` method will be called for the collection.

```ts
const syncManager = new SyncManager({
  // …
  registerRemoteChange: (collectionOptions, onChange) => {
    someRemoteEventSource.addEventListener('change', (collection) => {
      if (collectionOptions.name === collection) onChange()
    })
  },
  // …
})
```

### Example Implementations

#### Simple RESTful API

Below is an example implementation of a simple REST API.
```js
import { Collection, EventEmitter } from '@signaldb/core'
import { SyncManager } from '@signaldb/sync'

const Authors = new Collection()
const Posts = new Collection()
const Comments = new Collection()

const errorEmitter = new EventEmitter()
errorEmitter.on('error', (message) => {
  // display validation errors to the user
})

const apiBaseUrl = 'https://example.com/api'
const syncManager = new SyncManager({
  pull: async ({ apiPath }) => {
    const data = await fetch(`${apiBaseUrl}${apiPath}`).then(res => res.json())
    return { items: data }
  },
  push: async ({ apiPath }, { changes }) => {
    await Promise.all(changes.added.map(async (item) => {
      const response = await fetch(`${apiBaseUrl}${apiPath}`, { method: 'POST', body: JSON.stringify(item) })
      const responseText = await response.text()
      if (response.status >= 400 && response.status <= 499) {
        errorEmitter.emit('error', responseText)
        return
      }
    }))
    await Promise.all(changes.modified.map(async (item) => {
      const response = await fetch(`${apiBaseUrl}${apiPath}`, { method: 'PUT', body: JSON.stringify(item) })
      const responseText = await response.text()
      if (response.status >= 400 && response.status <= 499) {
        errorEmitter.emit('error', responseText)
        return
      }
    }))
    await Promise.all(changes.removed.map(async (item) => {
      const response = await fetch(`${apiBaseUrl}${apiPath}`, { method: 'DELETE', body: JSON.stringify(item) })
      const responseText = await response.text()
      if (response.status >= 400 && response.status <= 499) {
        errorEmitter.emit('error', responseText)
        return
      }
    }))
  },
})

syncManager.addCollection(Posts, {
  name: 'posts',
  apiPath: '/posts',
})
syncManager.addCollection(Authors, {
  name: 'authors',
  apiPath: '/authors',
})
syncManager.addCollection(Comments, {
  name: 'comments',
  apiPath: '/comments',
})
```

#### More Examples

If you think that an example is definitely missing here, feel free to create a pull request. Also don't hesitate to create a discussion if you have any questions or need help with your implementation.
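As a companion to the examples above, the changeset shape `{ added, modified, removed }` that `pull` can return and `push` receives can be illustrated with a small standalone helper. This is a framework-free sketch of the replay idea; the `applyChangeset` name is illustrative and not part of the SignalDB API.

```javascript
// Replay a changeset ({ added, modified, removed }) on top of an
// array of items: drop removed items, swap in modified ones, and
// append added ones. Items are matched by their `id` field here.
function applyChangeset(items, changes) {
  return items
    .filter(item => !changes.removed.some(removed => removed.id === item.id))
    .map(item => changes.modified.find(modified => modified.id === item.id) ?? item)
    .concat(changes.added)
}

const serverItems = [
  { id: 1, title: 'First' },
  { id: 2, title: 'Second' },
]
const localChanges = {
  added: [{ id: 3, title: 'Third' }],
  modified: [{ id: 2, title: 'Second (edited)' }],
  removed: [{ id: 1, title: 'First' }],
}

console.log(applyChangeset(serverItems, localChanges))
// [{ id: 2, title: 'Second (edited)' }, { id: 3, title: 'Third' }]
```

The same shape works in both directions: replaying local changes on freshly pulled server data, or describing what a `push` needs to send.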
---

## Page: https://signaldb.js.org/orm/

SignalDB provides functionality to add methods to collections and item instances to enable ORM-like behavior. With this functionality, you can also reactively resolve relationships between items in different collections.

## Adding Methods to Collections

To add new methods to a specific collection instance, we have to create a new class that inherits from the collection class and use it as a kind of singleton. With this pattern, it's also possible to directly define collection options like the name or the persistence adapter. In the example below, we create a new class `PostsCollection` that inherits from the `Collection` class and adds a new method `getPublishedPosts` to the class. This method returns all published posts from the collection.

```js
import { Collection } from '@signaldb/core'

class PostsCollection extends Collection {
  constructor() {
    super({
      name: 'posts',
      reactivity: /* specify reactivity options */,
      persistence: /* specify persistence adapter */,
    })
  }

  // method to get all published posts
  getPublishedPosts() {
    return this.find({ published: true })
  }
}

const Posts = new PostsCollection()
const publishedPosts = Posts.getPublishedPosts().fetch()
```

You can use this pattern to add methods to your collection that predefine queries you use often in your application, like in the example above. You can also override existing methods like `removeOne` or `updateOne` to add custom behavior or custom checks to your collection, for example to check if a user has the permission to delete or update a post.

## Adding Instance Methods to Items

To add new instance methods to a specific item instance, we have to create a new class for item instances and transform items to an instance of this class using the `transform` option of the collection.
```js
import { Collection } from '@signaldb/core'

class Post {
  constructor(data) {
    Object.assign(this, data)
  }

  hasComments() {
    return this.comments.length > 0
  }
}

const Posts = new Collection({
  transform: item => new Post(item),
})
```

In the example above, we create a new class `Post` that adds a new instance method `hasComments` to the class. This method returns `true` if the post has comments and `false` if not.

## Resolving Relationships

With the ORM functionality, you can also resolve relationships between items in different collections. You can even chain them together later on in your code to build complex queries that span multiple collections and also reactively rerun on changes.

```js
import { Collection } from '@signaldb/core'
import { effect } from '@maverick-js/signals' // or the effect function of any other signals library

class Post {
  constructor(data) {
    Object.assign(this, data)
  }

  getAuthor() {
    return Users.findOne(this.authorId)
  }

  getComments() {
    return Comments.find({ postId: this._id })
  }
}

class Comment {
  constructor(data) {
    Object.assign(this, data)
  }

  getAuthor() {
    return Users.findOne(this.authorId)
  }
}

class User {
  constructor(data) {
    Object.assign(this, data)
  }

  getPosts() {
    return Posts.find({ authorId: this._id })
  }
}

const Posts = new Collection({ name: 'posts', transform: item => new Post(item) })
const Users = new Collection({ name: 'users', transform: item => new User(item) })
const Comments = new Collection({ name: 'comments', transform: item => new Comment(item) })

effect(() => {
  const lastPost = Posts.findOne({}, {
    sort: { createdAt: -1 },
  })

  // get the author of the last comment of the last post
  const authorOfLastComment = lastPost.getComments().fetch()[0].getAuthor()

  // get comment count of all posts of the author
  let commentCount = 0
  authorOfLastComment.getPosts().forEach((post) => {
    commentCount += post.getComments().count()
  })
})
```

In the example above, we create three classes `Post`, `Comment`, and `User` that add new instance methods to the classes. These methods resolve relationships between items in different collections.
With this functionality, you can move complex queries to the item classes and run them in a more declarative way in your application code.

## TypeScript Support

When extending the collection class with custom methods, TypeScript works seamlessly without additional setup. However, adding instance methods to item instances requires using a helper class to maintain type safety for the instance class. This is because we need to include all item properties in the class interface.

```ts
declare interface BaseEntity<T extends {}> extends T {}
class BaseEntity<T extends {}> {
  constructor(data: T) {
    Object.assign(this, data)
  }
}
```

With this helper class, you only need to inherit from `BaseEntity` and provide the item type as a generic parameter to the class.

```ts
interface PostType {
  id: string,
  title: string,
  content: string,
  authorId: string,
  createdAt: number,
}

class Post extends BaseEntity<PostType> {
  getAuthor() {
    return Users.findOne(this.authorId)
  }
}
```

---

## Page: https://signaldb.js.org/schema-validation/

Although SignalDB is schema-less by design, it provides a mechanism to validate items against a defined schema before they are saved to the database. This is achieved by emitting a `validate` event with the item as its argument. If no handler is registered for this event, the item is automatically considered valid and is saved. However, by registering your own handler for the `validate` event, you can enforce custom validation rules and prevent invalid items from being stored by throwing an error.

**Key Points:**

* **Validation Trigger:** Before an item is saved, the `validate` event is emitted.
* **Custom Validation:** You can register a handler to check the item against any schema or rule set.
* **Default Behavior:** Without a handler, all items are treated as valid.
* **Error Handling:** Throwing an error in your handler stops the item from being saved.
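Conceptually, this validate-before-save flow can be sketched framework-free. The following is a simplified model of the behavior described above, not SignalDB's actual implementation:

```javascript
// Simplified model of the validate event: every registered handler
// runs synchronously before an item is stored, and a throwing
// handler aborts the save.
class MiniCollection {
  constructor() {
    this.items = []
    this.validators = []
  }

  on(event, handler) {
    if (event === 'validate') this.validators.push(handler)
  }

  insert(item) {
    for (const validate of this.validators) validate(item) // may throw
    this.items.push(item)
  }
}

const posts = new MiniCollection()
posts.on('validate', (post) => {
  if (!post.title) throw new Error('Title is required')
})

posts.insert({ title: 'Hello, World!' }) // stored
try {
  posts.insert({ author: 'Joe' }) // aborted before it is stored
} catch (error) {
  console.log(error.message) // 'Title is required'
}
```

Because validation happens before the write, an invalid item never reaches the stored data, which is exactly the guarantee the `validate` event gives you.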
## Basic Usage

Below is an example of how to register a simple validation for a collection:

```js
import { Collection } from '@signaldb/core'

const Posts = new Collection()

// Register a validation handler that ensures each post has a 'title'
Posts.on('validate', (post) => {
  if (!post.title) {
    throw new Error('Title is required')
  }
})

// This insertion works because 'title' is provided
Posts.insert({ title: 'Hello, World!' })

// This insertion will throw an error due to the missing 'title'
Posts.insert({ author: 'Joe' })
```

## Advanced Example with Zod

For more robust validation, you can integrate a library like Zod to define and enforce schemas. A dedicated `SchemaCollection` class acts as a wrapper around SignalDB's `Collection`, automatically validating items against a provided Zod schema. This approach ensures both runtime validation and compile-time type safety by inferring types directly from the schema.

```ts
import { Collection } from '@signaldb/core'
import type { BaseItem, CollectionOptions } from '@signaldb/core'
import type { ZodSchema, infer as ZodInfer } from 'zod'

interface SchemaCollectionOptions<
  T extends ZodSchema<BaseItem<I>>,
  I,
  U = ZodInfer<T>,
> extends CollectionOptions<ZodInfer<T>, I, U> {
  schema: T,
}

class SchemaCollection<
  T extends ZodSchema<BaseItem<I>>,
  I = any,
  U = ZodInfer<T>,
> extends Collection<ZodInfer<T>, I, U> {
  private schema: T

  constructor(options: SchemaCollectionOptions<T, I, U>) {
    super(options)
    this.schema = options.schema

    // Automatically validate each item against the Zod schema before saving
    this.on('validate', (item) => {
      this.schema.parse(item)
    })
  }
}
```

You can now create a collection with schema validation using `SchemaCollection`:

```ts
import { z } from 'zod'

const Posts = new SchemaCollection({
  schema: z.object({
    title: z.string(),
    content: z.string(),
  }),
})

// This insertion is valid because it meets the schema requirements
Posts.insert({
  title: 'Hello, World!',
  content: 'This is a post content.',
})

// This insertion will throw an error because the 'content' field is missing
Posts.insert({ title: 'Hello, World!' })
```

## Additional Considerations

* **Error Management:** Ensure that your application catches and handles validation errors appropriately to provide meaningful feedback to the user.
* **Extensibility:** While the advanced example uses Zod, you can integrate other validation libraries in a similar manner by modifying the event handler.
* **Type-Safety:** Leveraging schema validation with a tool like Zod not only validates runtime data but also infers types, reducing redundancy in your type definitions.

By using SignalDB's built-in validation mechanism, you can maintain data integrity even in a flexible, schema-less environment, while still enjoying the benefits of custom validation rules and type safety.

---

## Page: https://signaldb.js.org/guides/react/

In this guide, you will learn how to use SignalDB together with React. We will cover the basic setup and how to use SignalDB in your React project.

## Prerequisites

Let's assume you already have basic knowledge of React and a React project set up. If you don't, you can follow the official React documentation to get started. A basic understanding of signal-based reactivity is also helpful. If you are not familiar with it, you can read about it on the Core Concepts page to get an overview.

## Installation

First of all, you need to install SignalDB. You can do this by running the following command in your terminal:

```bash
npm install @signaldb/core
```

We also need to install a signals library that provides the reactivity for SignalDB. We're going to use Maverick Signals.
The following command installs Maverick Signals and the corresponding reactivity adapter for SignalDB:

```bash
npm install @maverick-js/signals
npm install @signaldb/maverickjs
```

Additionally, we need to install the `@signaldb/react` package that provides the React bindings for SignalDB:

```bash
npm install @signaldb/react
```

**You can also install all packages at once by running the following command:**

```bash
npm install @signaldb/core @maverick-js/signals @signaldb/maverickjs @signaldb/react
```

## Basic Setup

To use SignalDB in your React project, you need to set up your collections and the reactivity adapter. Do this in a file that you can import in your components.

```js
// Posts.js
import { Collection } from '@signaldb/core'
import maverickReactivityAdapter from '@signaldb/maverickjs'

const Posts = new Collection({
  reactivity: maverickReactivityAdapter,
})
export default Posts
```

In another file, you have to set up a React hook that provides the reactivity to your components. We have a helper function for that in the `@signaldb/react` package, so it is just a one-liner for you:

```js
// useReactivity.js
import { createUseReactivityHook } from '@signaldb/react'
import { effect } from '@maverick-js/signals'

const useReactivity = createUseReactivityHook(effect)
export default useReactivity
```

The `useReactivity` function is a React hook that you can use in your components to make them reactive. It receives a function as the first argument that runs your reactive context. The function reruns whenever the data used inside changes. The return value of the function is the data that you want to use in your component. The `useReactivity` function also takes a dependency list as an optional second argument, similar to the dependency list of the `useEffect` hook.

## Using SignalDB in your Components

Now you can use SignalDB in your components.
Here is an example of a simple component that displays a list of posts:

```jsx
import React from 'react'
import useReactivity from './useReactivity'
import Posts from './Posts'

const PostList = () => {
  const posts = useReactivity(() => Posts.find({}).fetch())
  return (
    <ul>
      {posts.map(post => (
        <li key={post._id}>{post.title}</li>
      ))}
    </ul>
  )
}
```

In this example, we use the `useReactivity` hook to make the component reactive. The `useReactivity` hook runs the function that fetches the posts from the collection whenever the data changes. The `fetch` method returns an array of all documents in the collection.

## Conclusion

That's it! You have successfully set up SignalDB in your React project and created a reactive component that displays a list of posts. You can now use SignalDB in your React project to manage your data and make your components reactive.

## Next Steps

Now that you've learned how to use SignalDB in React, you may want to explore how you can synchronize the data with your backend. Take a look at the Synchronization Overview to get started.

---

## Page: https://signaldb.js.org/reference/core/collection/

```ts
import { Collection } from '@signaldb/core'
```

The Collection class is designed to manage and manipulate collections of data in memory, with options for reactivity, transformations and persistence adapters. Collections are schemaless, meaning that you don't need to define a schema for your data before you start using it. This allows you to store any data you want without worrying about defining a schema first. However, it's recommended that you define a TypeScript interface for the documents in the collection, so that you can benefit from type safety when working with the data.

## Static Methods

### `setFieldTracking(enable: boolean)`

Enables or disables field tracking for all collections. See Field-Level Reactivity for more information.
### `batch(callback: () => void)`

If you need to execute many operations at once in multiple collections, you can use the global `Collection.batch()` method. This method will execute all operations inside the callback without rebuilding the index on every change.

### `getCollections()`

Returns an array of all collections that have been created.

### `onCreation(callback: (collection: Collection) => void)`

Registers a callback that will be called whenever a new collection is created. The callback will receive the newly created collection as an argument.

### `onDispose(callback: (collection: Collection) => void)`

Registers a callback that will be called whenever a collection is disposed. The callback will receive the disposed collection as an argument.

### `enableDebugMode()`

Enables debug mode for all collections. This will enable measurements for query timings and other debug information.

## Constructor

```js
const collection = new Collection<T, I, U>(options?: CollectionOptions<T, I, U>)
```

Constructs a new Collection object.

Parameters

* options (Optional): An object specifying various options for the collection. Options include:
  * name: An optional name for the collection to make it easier to identify. This name will also be used in the developer tools.
  * memory: A MemoryAdapter for storing items in memory.
  * reactivity: A ReactivityAdapter for enabling reactivity.
  * persistence: A PersistenceAdapter for enabling persistent storage.
  * transform: A transformation function to be applied to items. The document that should be transformed is passed as the only parameter. The function should return the transformed document (e.g. `(doc: T) => U`)
  * indices: An array of IndexProvider objects for creating indices on the collection.

## Methods

### `isReady()`

Resolves when the persistence adapter has finished initializing and the collection is ready to be used.
This is useful when you need to wait for the collection to be ready before executing any operations directly after creating it.

Example:

```ts
const collection = new Collection({
  persistence: /* ... */
})
await collection.isReady()
collection.insert({ name: 'Item 1' })
// ...
```

### `find(selector?: Selector<T>, options?: Options)`

Returns a new cursor object for the items in the collection that match a given selector and options. Also check out the queries section.

Parameters

* `selector` (Optional): A selector to filter items in the collection.
* `options` (Optional): Options for the cursor.

### `findOne(selector?: Selector<T>, options?: Options)`

Behaves the same as `.find()` but doesn't return a cursor. Instead it will directly return the first found document.

### `insert(item: Omit<T, 'id'> & Partial<Pick<T, 'id'>>)`

Inserts an item into the collection and returns the ID of the newly inserted item. Also check out the data manipulation section.

Parameters

* `item`: The item to be inserted into the collection.

### `insertMany(items: Array<Omit<T, 'id'> & Partial<Pick<T, 'id'>>>)`

Inserts multiple items into the collection and returns the IDs of the newly inserted items.

Parameters

* `items`: The items to be inserted into the collection.

### `updateMany(selector: Selector<T>, modifier: Modifier<T>, options?: { upsert?: boolean })`

Updates multiple items in the collection that match a given selector with the specified modifier. Also check out the data manipulation section.

Parameters

* `selector`: A selector to filter items in the collection.
* `modifier`: An object describing how to modify the matching items.
* `options`: An object with additional options. Currently only `upsert` is supported, which will insert a document based on the modifier, if the selector doesn't match any documents.

### `updateOne(selector: Selector<T>, modifier: Modifier<T>, options?: { upsert?: boolean })`

Behaves the same as `.updateMany()` but only updates the first found document.
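To illustrate what a modifier describes, here is a simplified, standalone sketch of applying a MongoDB-style `$set` modifier to a document. This is a conceptual model for illustration only (top-level fields, `$set` only), not SignalDB's internal implementation:

```javascript
// Apply a $set modifier to a document without mutating the original.
// Real modifier handling supports more operators and nested paths;
// this sketch covers only top-level $set for clarity.
function applySet(doc, modifier) {
  const result = { ...doc }
  for (const [field, value] of Object.entries(modifier.$set ?? {})) {
    result[field] = value
  }
  return result
}

const post = { id: 1, title: 'Old title', views: 10 }
console.log(applySet(post, { $set: { title: 'New title' } }))
// { id: 1, title: 'New title', views: 10 }
```

The important property, which `updateOne`/`updateMany` share, is that the modifier describes a transformation of the matched document rather than a full replacement; for replacing whole documents, see `replaceOne` below.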
### `replaceOne(selector: Selector<T>, replacement: Omit<T, 'id'> & Partial<Pick<T, 'id'>>, options?: { upsert?: boolean })`

Replaces a single item in the collection that matches a given selector with the specified replacement. Also check out the data manipulation section.

Parameters

* `selector`: A selector to filter items in the collection.
* `replacement`: The new item that should replace the existing one.
* `options`: An object with additional options. Currently only `upsert` is supported, which will insert a document based on the replacement, if the selector doesn't match any documents.

### `removeMany(selector: Selector<T>)`

Removes multiple items from the collection that match a given selector.

Parameters

* `selector`: A selector to filter items in the collection.

### `removeOne(selector: Selector<T>)`

Behaves the same as `.removeMany()` but only removes the first found document.

### `batch(callback: () => void)`

If you need to execute many operations at once, things can get slow as the index would be rebuilt on every change to the collection. To prevent this, you can use the `.batch()` method. This method will execute all operations inside the callback without rebuilding the index on every change. If you need to batch updates of multiple collections, you can use the global `Collection.batch()` method.

```js
collection.batch(() => {
  collection.insert({ name: 'Item 1' })
  collection.insert({ name: 'Item 2' })
  // …
})
```

### `dispose()`

Disposes the collection and all its resources. This will unregister the persistence adapter and clean up all internal data structures.

### `setFieldTracking(enabled: boolean)`

Enables or disables field tracking for the collection. See Field-Level Reactivity for more information.

## Events

The Collection class is equipped with a set of events that provide insights into the state and changes within the collection.
These events, emitted by the class, can be crucial for implementing reactive behaviors and persistence management. Here is an overview of the events:

* `added`: Triggered when a new item is added to the collection. The event handler receives the added item as an argument.
* `changed`: Fired when an existing item in the collection undergoes modification. The event handler is passed the modified item.
* `removed`: Signaled when an item is removed or deleted from the collection. The event handler receives the removed item.
* `validate`: Emitted when an item should be validated. The event handler receives the item as an argument. Validate the item inside of the event handler and throw an error if the item is invalid. This will prevent the item from being inserted or updated.

In addition to that, the collection will fire events for each executed method. For example, if you call `.updateOne()`, the collection will fire an `updateOne` event. The event handler will receive the selector and the modifier as arguments.

* `find`: Emitted when the `find` method is called. The event handler receives the selector, options and the cursor as arguments.
* `findOne`: Triggered when the `findOne` method is called. The event handler receives the selector, options and the returned item as arguments.
* `insert`: Fired when the `insert` method is called. The event handler receives the inserted item as an argument.
* `updateMany`: Emitted when the `updateMany` method is called. The event handler receives the selector and the modifier as arguments.
* `updateOne`: Triggered when the `updateOne` method is called. The event handler receives the selector and the modifier as arguments.
* `replaceOne`: Emitted when the `replaceOne` method is called. The event handler receives the selector and the replacement as arguments.
* `removeMany`: Emitted when the `removeMany` method is called. The event handler receives the selector as an argument.
* `removeOne`: Triggered when the `removeOne` method is called. The event handler receives the selector as an argument.

In addition to these basic events, there are events related to persistence operations. These events are only emitted when a persistence adapter is used.

* `persistence.init`: Marks the initialization of the persistence adapter.
* `persistence.error`: Indicates an error during persistence operations. The event handler receives an Error object describing the error.
* `persistence.transmitted`: Triggered after successfully transmitting data to the persistence adapter.
* `persistence.received`: Signifies the reception of data from the persistence adapter.

These events empower developers to build dynamic and responsive applications by reacting to changes in the collection, facilitating synchronization with external data sources, and handling persistence-related events.

---

## Page: https://signaldb.js.org/reference/core/creatememoryadapter/

```js
import { createMemoryAdapter } from '@signaldb/core'

const memoryAdapter = createMemoryAdapter(/* ... */)
```

You can create a MemoryAdapter to use it with your collection by using the `createMemoryAdapter` helper function. You must pass the following methods with the same signature as in the `Array` class:

* `push(item: T): void`
* `pop(): T | undefined`
* `splice(start: number, deleteCount?: number, ...items: T[]): T[]`
* `map<U>(callbackfn: (value: T, index: number, array: T[]) => U): U[]`
* `find(predicate: (value: T, index: number, obj: T[]) => boolean): T | undefined`
* `filter(predicate: (value: T, index: number, array: T[]) => unknown): T[]`
* `findIndex(predicate: (value: T, index: number, obj: T[]) => boolean): number`

---

## Page: https://signaldb.js.org/reference/core/cursor/

Cursors are a concept that appears in many database systems and are used to iterate over and access data in a controlled manner. A cursor in SignalDB is a pointer to a specific set of documents.
It provides an interface to interact with items while offering capabilities like reactivity, transformation, observation of changes, and more. You don't have to create a cursor by yourself. SignalDB handles that for you and returns the cursor from a `.find()` call. The following methods are available in the cursor class:

## ⚡️ `forEach(callback: (item: TransformedItem) => void)` _(reactive)_

Iterates over each item in the cursor, applying the given callback function.

* Parameters:
  * `callback`: A function that gets executed for each item.

Reactive ⚡️ This method is reactive, so it will rerun automatically when a document is added, removed, or when any of its fields change. You can control when it reruns by using the `fields` option in the `.find()` method to specify which fields to track. Reactivity will only be triggered by changes in the fields you choose.

## ⚡️ `map<T>(callback: (item: TransformedItem) => T)` _(reactive)_

Maps each item in the cursor to a new array using the provided callback function.

* Parameters:
  * `callback`: A function that transforms each item.
* Returns
  * An array of transformed items

Reactive ⚡️ This method is reactive, so it will rerun automatically when a document is added, removed, or when any of its fields change. You can control when it reruns by using the `fields` option in the `.find()` method to specify which fields to track. Reactivity will only be triggered by changes in the fields you choose.

## ⚡️ `fetch()` _(reactive)_

Fetches all the items in the cursor and returns them.

* Returns
  * An array of items

Reactive ⚡️ This method is reactive, so it will rerun automatically when a document is added, removed, or when any of its fields change. You can control when it reruns by using the `fields` option in the `.find()` method to specify which fields to track. Reactivity will only be triggered by changes in the fields you choose.

## ⚡️ `count()` _(reactive)_

Counts the number of items in the cursor.
* Returns:
  * The count of items.

Reactive ⚡️ This method is reactive, so it will rerun automatically when a document is added to or removed from the query result.

## `observeChanges(callbacks: ObserveCallbacks<U>, skipInitial = false)`

This method allows observation of changes in the cursor items. It uses callbacks to notify about different events like additions, removals, changes, and more.

* Parameters:
  * `callbacks`: An object of callback functions for different observation events.
    * `added(item: T)`: gets called when a new item was added to the cursor.
    * `addedBefore(item: T, before: T)`: gets called when a new item was added to the cursor and also indicates the position of the new item.
    * `changed(item: T)`: gets called when an item in the cursor was changed.
    * `movedBefore(item: T, before: T)`: gets called when an item moved its position in the cursor.
    * `removed(item: T)`: gets called when an item was removed from the cursor.
  * `skipInitial`: A boolean to decide whether to skip the initial observation event.
* Returns:
  * A function that, when called, stops observing the changes.

## `requery()`

Re-queries the cursor to fetch items and check observers for any changes.

## `cleanup()`

The cleanup method is used to invoke all the cleanup callbacks. This helps in managing resources and ensuring efficient garbage collection. You have to call this method if you're using a reactivity adapter that doesn't support automatic cleanup.

---

## Page: https://signaldb.js.org/reference/core/createpersistenceadapter/

```ts
import { createPersistenceAdapter } from '@signaldb/core'
```

While SignalDB comes with a few built-in persistence adapters, there may be scenarios where you need to create a custom one to cater to specific requirements.
You can create a custom persistence adapter by calling `createPersistenceAdapter` and supplying a `PersistenceAdapter`-compatible object as follows:

```ts
interface Changeset<T> {
  added: T[],
  modified: T[],
  removed: T[],
}

// contains either items or changes (but not both)
type LoadResponse<T> =
  | { items: T[], changes?: never }
  | { items?: never, changes: Changeset<T> }

interface PersistenceAdapter<T> {
  register(onChange: (data?: LoadResponse<T>) => void | Promise<void>): Promise<void>,
  load(): Promise<LoadResponse<T>>,
  save(items: T[], changes: Changeset<T>): Promise<void>,
  unregister?(): Promise<void>,
}
```

* **register** is called when initializing the collection. The `onChange` function should be called when data in the adapter was updated externally, so the collection can update its internal memory. You can optionally pass a `LoadResponse<T>` object, as returned from the `load` function, directly to `onChange` to make the implementation of your adapter more straightforward.
* **load** is called to load data from the adapter and should return a `LoadResponse<T>`, which includes either an `items` property containing all of the items, or a `changes` property containing only the changes. The collection will update its internal memory by either replacing all of its items or applying the changeset to make differential changes, respectively.
* **save** is called when data was updated and should save the data. Both `items` and `changes` are provided, so you can choose which one you'd like to use.
* **unregister?** _(optional)_ is called when the `dispose` method of the collection is called. It allows you to clean things up.
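Before looking at a real adapter, the contract can be illustrated with a minimal in-memory sketch. Note that this is a standalone illustration: it implements the same shape as a plain object without going through `createPersistenceAdapter`, and `simulateExternalChange` is a hypothetical helper added for demonstration, not part of SignalDB.

```javascript
// Minimal sketch of the PersistenceAdapter contract, backed by a plain
// in-memory array so it runs without external storage. In real code you
// would pass an object like this to createPersistenceAdapter.
function createInMemoryAdapter() {
  let stored = []
  let onChange = null
  return {
    async register(handleChange) {
      // Remember the callback so external updates can notify the collection.
      onChange = handleChange
    },
    async load() {
      // Return the full item list; returning a changeset would also be valid.
      return { items: [...stored] }
    },
    async save(items) {
      // Persist the full snapshot; the second `changes` argument could be
      // used instead for differential writes.
      stored = [...items]
    },
    // Hypothetical helper (not part of the contract) that simulates an
    // external update to the underlying storage.
    async simulateExternalChange(items) {
      stored = [...items]
      if (onChange) await onChange({ items: [...stored] })
    },
  }
}

async function demo() {
  const adapter = createInMemoryAdapter()
  await adapter.register(data => console.log('externally changed:', data.items.length))
  await adapter.save([{ id: 1, title: 'Foo' }])
  console.log((await adapter.load()).items) // [{ id: 1, title: 'Foo' }]
  await adapter.simulateExternalChange([{ id: 1 }, { id: 2 }]) // logs: externally changed: 2
}
demo()
```

The `register`/`load`/`save` split mirrors what the file system adapter below does with a real backing store.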
Here is a short example of how the file system persistence adapter is implemented:

```ts
import fs from 'fs'
import { createPersistenceAdapter } from '@signaldb/core'

export default function createFilesystemAdapter(filename: string) {
  return createPersistenceAdapter({
    async register(onChange) {
      const exists = await fs.promises.access(filename).then(() => true).catch(() => false)
      if (!exists) await fs.promises.writeFile(filename, '[]')
      fs.watch(filename, { encoding: 'utf8' }, () => {
        void onChange()
      })
    },
    async load() {
      const exists = await fs.promises.access(filename).then(() => true).catch(() => false)
      if (!exists) return { items: [] }
      const contents = await fs.promises.readFile(filename, 'utf8')
      const items = JSON.parse(contents)
      return { items }
    },
    async save(items) {
      await fs.promises.writeFile(filename, JSON.stringify(items))
    },
  })
}
```

---

## Page: https://signaldb.js.org/reference/core/createreactivityadapter/

## createReactivityAdapter

```ts
import { createReactivityAdapter } from '@signaldb/core'
```

A reactivity adapter is a simple object that provides a way for a collection to track dependencies and notify them when changes occur, thereby providing reactivity to your collection. The following code snippets demonstrate the implementation of a reactivity adapter using the `@maverick-js/signals` library. A `ReactivityAdapter` object can have the following methods.

### `create() -> Dependency`

The `create` function creates a new reactive dependency. A `Dependency` object must have at least these two methods:

* `depend()`: This method is called when the collection data is read, marking the place in the code as dependent on the collection data. Subsequent changes to the collection data will cause this place to be re-evaluated.
* `notify()`: This method is called when the collection data changes, notifying all dependent parts of the code that they need to re-evaluate.
You can also include more methods or other data in the dependency, which you can access from the `onDispose` method.

```js
create: () => {
  const dep = signal(0)
  return {
    depend: () => {
      dep()
    },
    notify: () => {
      dep.set(peek(() => dep() + 1))
    },
  }
}
```

### `isInScope(dependency: Dependency): boolean` (optional)

The `isInScope` function is used to check whether SignalDB is in a reactive scope. If SignalDB is not in a reactive context, reactivity will be automatically disabled to avoid memory leaks. That means that if you are not in a reactive scope (`find`/`findOne` called outside an `effect` function), you have to turn off reactivity manually by adding the `{ reactive: false }` option to the `find`/`findOne` method (e.g. `<collection>.find({ … }, { reactive: false })`). If you're not doing this, SignalDB sets up reactivity unnecessarily and is not able to clean it up automatically later on.

```js
isInScope() {
  return !!getScope()
}
```

### `onDispose(callback: () -> void, dependency: Dependency)` (optional)

This method is used to register a callback to be executed when the reactive computation is disposed. The dependency created in the `create` method will be passed as the second parameter. This can be useful if a framework requires data from the creation on disposal. The `onDispose` function is optional, but it's highly recommended to implement it whenever possible. Without `onDispose`, SignalDB will not be able to clean up resources automatically when they are no longer needed. That means you have to call the `cursor.cleanup()` method manually at the end of the computation. Normally there is some way to clean things up after the computation runs, but it's possible that you cannot implement it in this `onDispose` method (like in the Angular adapter, for example).

```js
onDispose: (callback, dependency) => {
  onDispose(callback)
}
```

The above methods are what you need to implement to provide a basic reactivity system for your collection.
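To see what `depend()` and `notify()` accomplish behind the scenes, here is a tiny hand-rolled stand-in for a signal library. It is purely illustrative: `createDependency`, `effect`, and `activeComputation` are names invented for this sketch, not SignalDB or `@maverick-js/signals` APIs.

```javascript
// Tiny hand-rolled reactive system illustrating the depend()/notify() pair.
// A real adapter delegates this bookkeeping to a signal library.
let activeComputation = null

function createDependency() {
  const subscribers = new Set()
  return {
    depend() {
      // Called on read: remember the currently running computation.
      if (activeComputation) subscribers.add(activeComputation)
    },
    notify() {
      // Called on write: rerun every computation that read this data.
      for (const run of [...subscribers]) run()
    },
  }
}

function effect(fn) {
  // Run fn once while tracking which dependencies it reads.
  activeComputation = fn
  fn()
  activeComputation = null
}

// Usage: a read inside the effect registers the dependency;
// a later notify() reruns the effect.
const dep = createDependency()
let value = 1
const seen = []
effect(() => {
  dep.depend()
  seen.push(value)
})
value = 2
dep.notify()
console.log(seen) // [1, 2]
```

This is exactly the pattern the `@maverick-js/signals` snippets above express with `signal(0)`: reading the signal subscribes the computation, and incrementing it notifies the subscribers.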
Here's a complete example of a reactivity adapter:

```js
import { signal, peek, getScope, onDispose } from '@maverick-js/signals'
import { createReactivityAdapter } from '@signaldb/core'

const reactivity = createReactivityAdapter({
  create: () => {
    const dep = signal(0)
    return {
      depend: () => {
        dep()
      },
      notify: () => {
        dep.set(peek(() => dep() + 1))
      },
    }
  },
  isInScope: () => !!getScope(),
  onDispose: (callback) => {
    onDispose(callback)
  },
})

export default reactivity
```

Once the `reactivity` object is created, you can use it as the `reactivity` option when creating a new Collection. This will provide the collection with reactivity capabilities.

---

## Page: https://signaldb.js.org/reference/core/autofetchcollection/

```ts
import { AutoFetchCollection } from '@signaldb/core'
```

The `AutoFetchCollection` class automatically fetches data from an async source when the collection is accessed. This is useful if you want to fetch specific data on demand rather than pulling the whole dataset at app start. The concept of the `AutoFetchCollection` is that it calls the `fetchQueryItems` method every time a query is executed on the collection. This way, you can fetch only the data that is needed for the query. The first time the query is executed, it will return an empty dataset (if the data is not already fetched). After the data is fetched, the query will reactively update and return the loaded data. While the data is being fetched, you can observe the loading state with the `isLoading` function on the collection to show a loading indicator. The `isLoading` function will be updated reactively.

The usage of the `AutoFetchCollection` is also really simple:

```js
const Todos = new AutoFetchCollection({
  fetchQueryItems: async (selector) => {
    // The fetchQueryItems method is for fetching data from the remote service.
    // The selector parameter is the query that is executed on the collection.
    // Use this to fetch only the data that is needed for the query.
    // Also make sure that the returned data matches the query to avoid inconsistencies.
    // You can return the data directly:
    // return { items: [...] }
    // Or you can return only the changes:
    // return { changes: { added: [...], modified: [...], removed: [...] } }
  },

  // Optional: specifies the delay in milliseconds after which the data will be
  // purged from the collection once the query is not used anymore.
  // The default is 10 seconds.
  purgeDelay: 1000 * 10,

  push: async (changes, items) => {
    // The push method is called when the local data has changed.
    // As the first parameter you get the changes in the format
    // { added: [...], modified: [...], removed: [...] }
    // As the second parameter you also get all items in the collection, if you need them.
    // No return value is expected from the push method.
  },

  // You can also optionally specify a persistence adapter.
  // If a persistence adapter is used, its data is loaded first and will be updated
  // after the server data is fetched.
  // When the data is updated, it will be saved to the persistence adapter and pushed
  // to the server simultaneously.
  persistence: createLocalStorageAdapter('todos'),

  // Optionally you can also specify a mergeItems function to merge items
  // if they're returned by multiple fetchQueryItems calls.
  mergeItems: (itemA, itemB) => ({ ...itemA, ...itemB }),
})

// You can also observe the loading state of the collection.
const loading = Todos.isLoading()

// The isLoading method takes an optional selector parameter to observe
// the loading state of a specific query.
const postsFromMaxLoading = Todos.isLoading({ author: 'Max' })

// It's also possible to register and unregister queries manually.
Todos.registerQuery({ author: 'Max' })
Todos.unregisterQuery({ author: 'Max' })
```

---

## Page: https://signaldb.js.org/reference/core/createindex/

```ts
import { createIndex } from '@signaldb/core'
```

The `createIndex()` function can be used to create a single-field index on a collection.
It takes a field name as a parameter and returns an `IndexProvider` object which can be passed directly to the `indices` option of the Collection constructor.

```ts
import { createIndex, Collection } from '@signaldb/core'

interface User {
  id: string
  name: string
  age: number
}

const users = new Collection<User>({
  indices: [
    createIndex('name'),
    createIndex('age'),
  ],
})
```

---

## Page: https://signaldb.js.org/reference/core/createindexprovider/

```ts
import { createIndexProvider } from '@signaldb/core'
```

An `IndexProvider` is an object that specifies how to create an index on a collection. It can be created with the `createIndexProvider()` function. Take a look at the `createIndex` function for an example.

```ts
const indexProvider = createIndexProvider({
  query(selector: FlatSelector<T>) {
    // Receives a flat selector (without $and, $or or $nor) as the first parameter.
    // Returns an object with the following properties:
    // {
    //   matched: true,        // Whether the index was hit by the selector
    //   keys: [0, 1, 2, 3],   // An array of all matched item positions in the memory
    //                         // adapter (only provided if matched = true)
    //   fields: ['name', 'age'], // An array of all fields that were used in the index.
    //                            // These fields will be removed from the selector before
    //                            // it is executed on the memory adapter for optimization.
    // }
  },
  rebuild(items: T[]) {
    // Rebuild the index and save the array indices
  },
})
```

---

## Page: https://signaldb.js.org/reference/core/combinepersistenceadapters/

```ts
import { combinePersistenceAdapters } from '@signaldb/core'
```

If a SignalDB collection needs more than one persistence adapter, you can use `combinePersistenceAdapters` to combine multiple persistence adapters into one. The `combinePersistenceAdapters` function takes a primary and a secondary adapter. The primary adapter is typically the one that is the primary location for the data. The secondary adapter is usually one that has faster read and write times.
The function returns a new persistence adapter that combines the functionality of the two adapters.

```ts
const adapter = combinePersistenceAdapters(primaryAdapter, secondaryAdapter)
```

---

## Page: https://signaldb.js.org/reference/react/

## createUseReactivityHook (`default`)

```ts
import createUseReactivityHook from '@signaldb/react'
import { effect } from '…'

const useReactivity = createUseReactivityHook(effect)
```

This function creates a custom hook that provides reactivity to your components. It takes a function as its single argument, which specifies the effect function of a reactive library. The effect function must have the following signature:

```ts
(reactiveFunction: () => void) => () => void
```

The provided function is called with a reactive function that should be executed when the reactivity changes. The returned function is the cleanup function that removes the effect.

Also check out our guide on how to use SignalDB with React.
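To make the expected signature concrete, here is a minimal hand-rolled function that satisfies it. This is only a shape illustration, not a real reactive library: `simpleEffect` runs the reactive function once and returns a cleanup, whereas a real effect primitive (such as the one from `@maverick-js/signals`) would also rerun it whenever its dependencies change.

```javascript
// Minimal stand-in matching the required signature
// (reactiveFunction: () => void) => () => void.
// A real adapter would pass a signal library's effect primitive instead.
function simpleEffect(reactiveFunction) {
  let active = true
  const rerun = () => {
    if (active) reactiveFunction()
  }
  rerun() // initial run, as reactive libraries do
  // The returned function is the cleanup that removes the effect.
  return () => {
    active = false
  }
}

// Usage: the hook factory expects exactly this call shape.
let runs = 0
const stop = simpleEffect(() => { runs += 1 })
console.log(runs) // 1
stop() // after cleanup, the effect is inert
```

Any library whose effect primitive returns (or can be wrapped to return) a dispose function in this way can be plugged into `createUseReactivityHook`.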