## Page: https://docs.turso.tech/features/platform-api

Manage databases and teams with the Turso Platform API.

The Turso Platform API is a RESTful API that allows you to manage databases and users. It is the same API that is used by the Turso Platform Web UI and CLI. The API is built for platforms that want to integrate with Turso to provide their users a serverless SQLite database. You can create databases, database branches, recover databases from a point in time, as well as manage teams, API tokens, and more with the Turso Platform API.

## API Resources

---

## Page: https://docs.turso.tech/features/ai-and-embeddings

Turso and libSQL enable vector search capability without an extension.

## How it works

* Create a table with one or more vector columns (e.g. `FLOAT32`)
* Provide vector values in binary format or convert text representation to binary using the appropriate conversion function (e.g. `vector32(...)`)
* Calculate vector similarity between vectors in the table or from the query itself using dedicated vector functions (e.g. `vector_distance_cos`)
* Create a special vector index to speed up nearest neighbors queries (use the `libsql_vector_idx(column)` expression in the `CREATE INDEX` statement to create a vector index)
* Query the index with the special `vector_top_k(idx_name, q_vector, k)` table-valued function

## Vectors

### Types

LibSQL uses the native SQLite BLOB storage class for vector columns. To align with SQLite affinity rules, all type names have two alternatives: one that is easy to type and another with a `_BLOB` suffix that is consistent with affinity rules. The table below lists the six vector types currently supported by LibSQL. Types are listed from more precise and storage-heavy to more compact but less precise alternatives (the number of dimensions D in a vector is used to estimate storage requirements for a single vector).
| Type name | Alternative name | Storage (bytes) | Description |
| --- | --- | --- | --- |
| `FLOAT64` | `F64_BLOB` | 8D + 1 | Implementation of IEEE 754 double precision format for 64-bit floating point numbers |
| `FLOAT32` | `F32_BLOB` | 4D | Implementation of IEEE 754 single precision format for 32-bit floating point numbers |
| `FLOAT16` | `F16_BLOB` | 2D + 1 | Implementation of IEEE 754-2008 half precision format for 16-bit floating point numbers |
| `FLOATB16` | `FB16_BLOB` | 2D + 1 | Implementation of bfloat16 format for 16-bit floating point numbers |
| `FLOAT8` | `F8_BLOB` | D + 14 | LibSQL-specific implementation which compresses each vector component to a single `u8` byte `b` and reconstructs the value from it using the simple transformation `shift + alpha * b` |
| `FLOAT1BIT` | `F1BIT_BLOB` | ⌈D/8⌉ + 3 | LibSQL-specific implementation which compresses each vector component down to 1 bit and packs multiple components into a single machine word, achieving a very compact representation |

### Functions

To work with vectors, LibSQL provides several functions that operate in the vector domain. Each function understands vectors in binary format aligned with the six types described above or in text format as a single JSON array of numbers. Currently, LibSQL supports the following functions:

| Function name | Description |
| --- | --- |
| `vector64`, `vector32`, `vector16`, `vectorb16`, `vector8`, `vector1bit` | Conversion functions which accept a valid vector and convert it to the corresponding target type |
| `vector` | Alias for the `vector32` conversion function |
| `vector_extract` | Extraction function which accepts a valid vector and returns its text representation |
| `vector_distance_cos` | Cosine distance (1 - cosine similarity) function which operates over vectors of the **same type** with the **same dimensionality** |
| `vector_distance_l2` | Euclidean distance function which operates over vectors of the **same type** with the **same dimensionality** |

### Vectors usage

### Understanding Distance Results

The `vector_distance_cos` function calculates the cosine distance, which is defined as:

* Cosine Distance = 1 - Cosine Similarity

The cosine distance ranges from 0 to 2, where:

* A distance close to 0 indicates that the vectors are nearly identical or exactly matching.
* A distance close to 1 indicates that the vectors are orthogonal (perpendicular).
* A distance close to 2 indicates that the vectors are pointing in opposite directions.

### Vector Limitations

* Euclidean distance is **not supported** for 1-bit `FLOAT1BIT` vectors
* LibSQL can only operate on vectors with no more than 65536 dimensions

## Indexing

Nearest neighbors (NN) queries are popular for various AI-powered applications (RAG uses NN queries to extract relevant information, and recommendation engines can suggest items based on embedding similarity). LibSQL implements the DiskANN algorithm to speed up approximate nearest neighbors queries for tables with vector columns.

### Vector Index

LibSQL introduces a custom index type that helps speed up nearest neighbors queries against a fixed distance function (cosine similarity by default).
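To make the workflow above concrete, here is a minimal sketch using the TypeScript libSQL client. The `movies` table and `movies_idx` index names match the examples used later on this page; the `embedding` column, its 4 dimensions, and the sample vectors are purely illustrative.

```typescript
import { createClient } from "@libsql/client";

const client = createClient({
  url: process.env.TURSO_DATABASE_URL!,
  authToken: process.env.TURSO_AUTH_TOKEN!,
});

// A table with a 4-dimensional FLOAT32 vector column (F32_BLOB alias).
await client.execute(
  "CREATE TABLE IF NOT EXISTS movies (title TEXT, year INT, embedding F32_BLOB(4))"
);

// vector32() converts the JSON text representation into the binary FLOAT32 format.
await client.execute(
  "INSERT INTO movies (title, year, embedding) VALUES ('Napoleon', 2023, vector32('[0.8, 0.1, 0.6, 0.2]'))"
);

// The vector index wraps the column in the libsql_vector_idx() marker expression.
await client.execute(
  "CREATE INDEX movies_idx ON movies (libsql_vector_idx(embedding))"
);

// Exact search: compute the cosine distance for every row and order by it.
const exact = await client.execute(
  "SELECT title, vector_distance_cos(embedding, vector32('[0.7, 0.2, 0.5, 0.3]')) AS distance " +
  "FROM movies ORDER BY distance ASC LIMIT 3"
);

// Approximate nearest neighbors: query the index with vector_top_k() and join
// the returned row ids back to the base table.
const approx = await client.execute(
  "SELECT movies.title FROM vector_top_k('movies_idx', vector32('[0.7, 0.2, 0.5, 0.3]'), 3) " +
  "JOIN movies ON movies.rowid = id"
);

console.log(exact.rows, approx.rows);
```

The exact query scans the whole table and computes a distance per row, while the `vector_top_k` query goes through the DiskANN index and returns approximate nearest neighbors.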
From a syntax perspective, the vector index differs from ordinary application-defined B-Tree indices in that it must wrap the vector column in a `libsql_vector_idx` marker function, as shown in the sketch above. The vector index is fully integrated into the LibSQL core, so it inherits all operations and most features from ordinary indices:

* An index created for a table with existing data will be automatically populated with this data
* All updates to the base table will be **automatically** reflected in the index
* You can rebuild the index from scratch using the `REINDEX movies_idx` command
* You can drop the index with the `DROP INDEX movies_idx` command
* You can create a partial vector index with a custom filtering rule

### Query

At the moment, the vector index must be queried **explicitly** with the special `vector_top_k(idx_name, q_vector, k)` table-valued function. The function accepts the index name, a query vector, and the number of neighbors to return. It searches for the `k` approximate nearest neighbors and returns the `ROWID` of these rows, or the `PRIMARY KEY` if the base table does not have a `ROWID`. For the table-valued function to work, the query vector **must** have the same vector type and dimensionality as the indexed column.

### Settings

The LibSQL vector index can optionally accept settings, which must be specified as variadic parameters of the `libsql_vector_idx` function as strings in the format `key=value`. At the moment, LibSQL supports the following settings:

| Setting key | Value type | Description |
| --- | --- | --- |
| `metric` | `cosine` or `l2` | Which distance function to use for building the index. Default: `cosine` |
| `max_neighbors` | positive integer | How many neighbors to store for every node in the DiskANN graph. The lower the setting, the less storage the index will use, in exchange for search precision. Default: 3√D, where D is the dimensionality of the vector column |
| `compress_neighbors` | `float1bit`, `float8`, `float16`, `floatb16`, or `float32` | Which vector type must be used to store neighbors for every node in the DiskANN graph. The more compact the vector type used for neighbors, the less storage the index will use, in exchange for search precision. Default: **no compression** (neighbors have the same type as the base table) |
| `alpha` | positive float ≥ 1 | "Density" parameter of the general sparse neighborhood graph built during the DiskANN algorithm. The lower the parameter, the more sparse the DiskANN graph, which can speed up queries in exchange for lower search precision. Default: `1.2` |
| `search_l` | positive integer | Limits the number of neighbors visited during a vector search. The lower the setting, the faster the search query, in exchange for search precision. Default: `200` |
| `insert_l` | positive integer | Limits the number of neighbors visited during a vector insert. The lower the setting, the faster the insert query, in exchange for the navigability properties of the DiskANN graph. Default: `70` |

### Index usage

### Index limitations

* The vector index works only for tables **with** a `ROWID` or with a singular `PRIMARY KEY`. A composite `PRIMARY KEY` without a `ROWID` is not supported.

---

## Page: https://docs.turso.tech/features/branching

A branch is a separate database instance that is created from an existing database. You can also create a branch from a point-in-time snapshot of a database. Branches are useful for development and testing, because they allow you to make changes to the database without affecting the original database.

## How it works
1. You create a new database from an existing database using the CLI or API.
2. You connect to the new database using the group API token.
3. Make changes to the new schema using a migration tool (optional).
4. Apply the changes to the original database using a migration tool when merging, e.g. using a GitHub Action (optional).
5. Delete the database when you no longer need it.

## Usage

You can create a new database from an existing database using the CLI or API:

Refer to the following references for more details about all arguments:

## Things to know

* Database branches are completely separate from the original database. This means that you need to handle merging any schema changes or data manually using a migration tool.
* You will need to create a new token (or use a group token) to connect to the new database.
* You will need to manually delete the database branch when you no longer need it.
* Branches count towards your plan's database quota.

## CI/CD

Automating branching is useful for creating a new database for each pull request. This allows you to test changes without affecting the original database. Here's an example of what that might look like using the Platform API:

.github/workflows/create-database-branch.yml

---

## Page: https://docs.turso.tech/features/point-in-time-recovery

Turso supports point-in-time recovery (PITR) for databases. PITR allows you to restore a database to a specific point in time. This is useful for recovering from user errors, such as dropping a table by mistake.

## How it works

1. You create a new database from the existing database using the CLI or API.
2. You update your application to use the new database connection string.
3. You delete the old database when you no longer need it.

## Usage

Refer to the following references for more details about all arguments:

## Things to know

* Restoring from a PITR creates a new database. You will need to update your application to use the new database connection string.
* You cannot restore from a PITR to a database that already exists.
* You will need to create a new token (or use a group token) to connect to the new database.
* You will need to manually delete the old database when you no longer need it.
* Restores count towards your plan's database quota.

---

## Page: https://docs.turso.tech/features/scale-to-zero

This feature is now deprecated for all new users. Existing free users will be moved from Fly to AWS, and receive no cold starts by default (read the announcement).

For free Starter Plan users, Turso dynamically scales databases down to zero after an hour of no activity. This behaviour is how we can continue to provide hundreds of databases on the free plan. When a request is made, the databases automatically scale back up to one. There may be a delay of up to `500ms` for databases that have been inactive.

Database groups with extended inactivity (**10 days**) will require a manual "unarchive" operation using the CLI or API.

---

## Page: https://docs.turso.tech/features/organizations

## New to Turso?

Turso is a SQLite-compatible database built on libSQL, the Open Contribution fork of SQLite.

## Get Started

Create your first database

## Embedded Replicas

Get zero latency reads on-device

## AI & Embeddings

Vector is just another datatype

## Backups and Recovery

Restore your database to any point in time

## Start building

Learn how to manage, distribute and integrate your databases with the CLI, API and SDKs.

## Turso CLI

Manage groups, databases, and API tokens with the Turso CLI.
## Turso Platform API

Manage groups, databases, and API tokens with the Turso API.

## Client SDKs

Connect and integrate Turso into your application with one of our libSQL drivers.

## Tutorials

Learn how to work with Turso and your favorite language or framework.

## Join the community

Join the Turso community to ask questions, discuss best practices, and share tips.

## Discord

## GitHub

## X (Twitter)

---

## Page: https://docs.turso.tech/features/embedded-replicas/introduction

Turso's embedded replicas are a game-changer for SQLite, making it more flexible and suitable for various environments. This feature shines especially for those using VMs or VPS, as it lets you replicate a Turso database right within your applications without needing to rely on Turso's edge network.

For mobile applications, where stable connectivity is a challenge, embedded replicas are invaluable as they allow uninterrupted access to the local database.

Embedded replicas provide a smooth switch between local and remote database operations, allowing the same database to adapt to various scenarios effortlessly. They also ensure speedy data access by syncing local copies with the remote database, enabling microsecond-level read operations, a significant advantage for scenarios demanding quick data retrieval.

## How it works

1. You configure a local file to be your main database.
   * The `url` parameter in the client configuration.
2. You configure a remote database to sync with.
   * The `syncUrl` parameter in the client configuration.
3. You read from a database:
   * Reads are always served from the local replica configured at `url`.
4. You write to a database:
   * Writes are always sent to the remote primary database configured at `syncUrl`.
   * Any write transactions with reads are also sent to the remote primary database.
   * Once the write is successful, the local database is updated with the changes automatically (read your own writes; this can be disabled).

### Periodic sync

You can automatically sync data to your embedded replica using the periodic sync interval property. Simply pass the `syncInterval` parameter when instantiating the client:

### Read your writes

Embedded Replicas also guarantee read-your-writes semantics. What that means in practice is that after a write returns successfully, the replica that initiated the write will always be able to see the new data right away, even if it never calls `sync()`. Other replicas will see the new data when they call `sync()`, or at the next sync period, if Periodic Sync is used.

### Encryption at rest

Embedded Replicas support encryption at rest with one of the libSQL client SDKs. Simply pass the `encryptionKey` parameter when instantiating the client:

## Usage

To use embedded replicas, you need to create a client with a `syncUrl` parameter. This parameter specifies the URL of the remote Turso database that the client will sync with:

You can sync changes from the remote database to the local replica manually:

## Things to know

* Do not open the local database while the embedded replica is syncing. This can lead to data corruption.
* In certain contexts, such as serverless environments without a filesystem, you can't use embedded replicas.
* There are a couple of scenarios where you may sync more frames than you might expect.
* A write that causes the internal btree to split at any node would cause many new frames to be written to the replication log.
* A server restart that left the on-disk WAL in a dirty state would regenerate the replication log and sync additional frames.
* Removing/invalidating the local files on disk could cause the embedded replica to re-sync from scratch.
* One frame equals 4kB of data (one on-disk page frame), so if you write a 1 byte row, it will always show up as a 4kB write since that is the unit in which libSQL writes.

## Deployment Guides

---

## Page: https://docs.turso.tech/features/embedded-replicas/with-akamai

## Prerequisites

Before you start, make sure you:

* Install the Turso CLI
* Sign up or login to Turso
* Have an Akamai account - create one

1. **Retrieve database credentials**

You will need an existing database to continue. If you don't have one, create one.

Get the database URL: `turso db show --url <database-name>`

Get the database authentication token: `turso db tokens create <database-name>`

Assign credentials to the environment variables inside `.env`: `TURSO_DATABASE_URL=` and `TURSO_AUTH_TOKEN=`. You will want to store these as environment variables.

2. **Fork one of the following embedded replica projects from GitHub**

3. **Set up a Linode server**

Configure and create a new Linode. Then, set up SSH authentication to securely access the Linode server from your terminal.

Access the newly created Linode server and prepare its environment for Rust or JavaScript development, depending on the project you forked earlier. Install and set up Git too.

4. **Transfer project to Linode server**

SSH into your server, clone the project from GitHub, and follow its README instructions to set it up.

5. **Deploy**

Build and run the project, and set up load balancing for it. pm2 is a good candidate, with built-in load balancing, log monitoring, and bug/exception alerts. You can go with your favorite options for where to buy domains, reverse proxy setup, and SSL certificates. Caddy is another good option here.

---

## Page: https://docs.turso.tech/features/multi-db-schemas

Turso allows you to create a single schema and share it across multiple databases. This is useful for creating a multi-tenant application where each tenant has their own database.

## How it works

1. You create a database that is used as the parent schema database.
2. You create one or more databases that are used as the child databases.
3. You apply schema changes to the parent database; child databases are automatically updated with the new schema.

## Usage

You can create and manage parent or child databases using the Turso CLI and Platform API.

### Turso CLI

Make sure you have the Turso CLI installed, and logged in.

### Platform API

Make sure you have an API Token, and know your Organization name.

## Things to know

* Schema databases cannot be shared across groups or globally via an organization/account.
* You can (though it is not recommended) `INSERT` rows into the parent database that can be queried by the child database(s).
* Be aware of any constraints that may conflict with the child database(s).
* You can't delete a parent database if it has one or more child databases.
* When a migration is applied to the schema database:
  * It's first run as a dry-run on the schema and all other associated databases.
  * If successful, a migration job is created.
  * Tasks are created for each database that references this schema.
  * The migration is then applied to each referencing database.
  * Finally, if all tasks succeed, the migration is applied to the schema database itself.
* You can't create or delete a database if there are any migrations running.
* **During a migration, all databases are locked to write operations.**
* Make sure any application querying the child databases handles any databases not yet updated with the schema.
* You cannot apply schema changes to a child database directly. You must use the parent (schema) database.
* You can check the status of a migration using the `/jobs` endpoint (learn more).

---

## Page: https://docs.turso.tech/features/data-edge

In the realm of data management, each millisecond of latency is critical. That's why Turso offers over 30 locations for data storage and replication, ensuring minimal delay in data access. For those seeking the ultimate in speed, Turso enables the embedding of databases directly within your application on the same node. This configuration eliminates inter-regional request hopping, effectively bringing latency down to zero.

## How it works

1. You create a database in a primary location
2. You add additional locations where data should be replicated
3. You query a single URL that automatically routes to the nearest edge

## Add replica location

You can add locations to your database group using the Turso CLI or Platform API:

## Remove replica location

You can remove locations from your database group using the Turso CLI or Platform API:

---

## Page: https://docs.turso.tech/features/embedded-replicas/with-fly

## Prerequisites

Before you start, make sure you:

* Install the Turso CLI
* Sign up or login to Turso
* Install the Fly.io CLI

1. **Locate your application**

You should have an application ready using your Turso database that you want to deploy to Fly.

2. **Launch with Fly**

Using the Fly CLI, launch it: `fly launch`

Your application will automatically deploy to Fly, but we're not ready yet.

3. **Create a shared volume**

Now create a volume that will be used to store the embedded replica(s): `fly volumes create libsql_data`

4. **Mount and configure volumes**

The files `fly.toml` and `Dockerfile` were created when you launched previously. Update `fly.toml` to mount the new volume:

    [[mounts]]
    source = "libsql_data"
    destination = "/app/data"

Then inside `Dockerfile`, make sure you install and update `ca-certificates`:

    RUN apt-get update -qq && \
        apt-get install -y ca-certificates && \
        update-ca-certificates

Make sure to also add the following line after any `COPY` commands to copy the certificates:

    COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

5. **Configure the libSQL client**

You will want to change the `url` to point to a local file, and set the `syncUrl` to be your Turso database URL:

    import { createClient } from "@libsql/client";

    const client = createClient({
      url: "file:./app/data/local.db",
      syncUrl: process.env.TURSO_DATABASE_URL,
      authToken: process.env.TURSO_AUTH_TOKEN,
      syncInterval: 60,
    });

6. **Deploy your updated app**

`fly deploy`

---

## Page: https://docs.turso.tech/features/embedded-replicas/with-koyeb

## Prerequisites

Before you start, make sure you:

* Install the Turso CLI
* Sign up or login to Turso
* Have a Koyeb account - create one

1. **Retrieve database credentials**

You will need an existing database to continue. If you don't have one, create one.

Get the database URL: `turso db show --url <database-name>`

Get the database authentication token: `turso db tokens create <database-name>`

Assign credentials to the environment variables inside `.env`: `TURSO_DATABASE_URL=` and `TURSO_AUTH_TOKEN=`. You will want to store these as environment variables.

2. **Fork one of the following embedded replica projects from GitHub**

Or, you can:

3. **Add a new Koyeb app**
   1. Create a new app in the Koyeb control panel.
   2. Select GitHub as the deployment option.
   3. Import the GitHub project to Koyeb.

4. **Fill in the environment variables on Koyeb's deploy page**

5. **Deploy**

Click the **Deploy** button at the bottom to deploy your web service.

---

## Page: https://docs.turso.tech/features/embedded-replicas/with-railway

1. **Retrieve database credentials**

You will need an existing database to continue. If you don't have one, create one.

Get the database URL: `turso db show --url <database-name>`

Get the database authentication token: `turso db tokens create <database-name>`

Assign credentials to the environment variables inside `.env`: `TURSO_DATABASE_URL=` and `TURSO_AUTH_TOKEN=`. You will want to store these as environment variables.

2. **Get application code**

Fork and clone the following embedded replica project from GitHub locally:

3. **Create a new Railway project**

Run the following command to create a new Railway project. Provide the project's name when prompted: `railway init`

4. **Add a service to the Railway project**

5. **Link application to service**

Run the following command to list and select the service to link to your application: `railway service`

6. **Add database credentials**

Open the service on your Railway dashboard and add your Turso database credentials:

    TURSO_DATABASE_URL=libsql://[db-name]-[github-username].turso.io
    TURSO_AUTH_TOKEN=...
    LOCAL_DB=file:local-db-name.db

7. **Deploy**

Run the following command to deploy your application: `railway up`

If you are on a free plan, you'll need to connect your Railway account to GitHub to have access to code deployments.

---

## Page: https://docs.turso.tech/features/embedded-replicas/with-render

1. **Retrieve database credentials**

You will need an existing database to continue. If you don't have one, create one.

Get the database URL: `turso db show --url <database-name>`

Get the database authentication token: `turso db tokens create <database-name>`

Assign credentials to the environment variables inside `.env`: `TURSO_DATABASE_URL=` and `TURSO_AUTH_TOKEN=`. You will want to store these as environment variables.

2. **Get application code**

3. **Create a web service**

Create a new Render **Web Service** by clicking on the "New Web Service" button on the Web Services card inside your Render dashboard.

4. **Connect to Git repository**

   1. Select "build and deploy from a Git repository" and proceed to the next page.
   2. Click on "Connect" for your target project repository.

5. **Set project's environment variables**

On the web service configuration page, under "Advanced", add **a secret file** and fill it in with your database secret credentials:

6. **Deploy project**

Scroll to the bottom of the web service configuration page and click on "Create Web Service".
---

## Page: https://docs.turso.tech/features/attach-database

The `ATTACH` statement enables you to link multiple databases within a single transaction, which is ideal for:

* Organizing data in a modular way
* Streamlining data access and enhancing scalability
* Aggregating data

## How it works

1. You enable the `ATTACH` feature on the databases you want to connect to.
2. You retrieve the **Database ID** for the database you want to `ATTACH`.
3. You connect to the database:
   * **CLI**: use the `--attach` flag to automatically create a token with the correct permissions.
   * **SDK**: create a token with the `attach` permission for the database you want to attach.
4. You invoke `ATTACH` to connect to the other databases within the database shell or SDK (see the sketch below).
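As a rough illustration of these steps with the TypeScript libSQL client (the main database URL, the attached database's ID, the `other` alias, and the `some_table` table are placeholders), attaching and querying a second database might look like this:

```typescript
import { createClient } from "@libsql/client";

// The auth token must carry the `attach` permission for the database being
// attached (see step 3 above). URL and token values are placeholders.
const client = createClient({
  url: "libsql://my-main-db-myorg.turso.io",
  authToken: process.env.TURSO_AUTH_TOKEN!,
});

// ATTACH can only be used within a transaction, and attached databases are read only.
const txn = await client.transaction("read");
try {
  // Use the Database ID of the database being attached (see step 2 above), not its name.
  await txn.execute('ATTACH "<database-id>" AS other');

  // Query the attached database through its alias.
  const rs = await txn.execute("SELECT * FROM other.some_table LIMIT 5");
  console.log(rs.rows);

  await txn.commit();
} finally {
  txn.close();
}
```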
## Usage

You can use the `ATTACH` statement to connect to other databases within a transaction using the CLI or libSQL SDK. Once attached, you can query the attached databases as if they were part of the current database using the assigned alias.

### Turso CLI

Make sure you have the Turso CLI installed, and logged in.

### libSQL SDKs

You can use one of the libSQL client SDKs with TypeScript, Rust, Go, Python, or over HTTP.

## Things to know

* You can only attach databases that have the `attach` feature enabled.
* You can only attach databases that belong to a group, and only databases in the same group.
* There is a maximum of 10 databases that can be attached to a single transaction.
* The attached databases are read only.
* The `ATTACH` statement can be used only within transactions.
* `ATTACH` doesn't support Embedded Replicas.