## Page: https://unstorage.unjs.io/guide

We usually choose one or more storage backends based on our use case, such as the filesystem, a database, or LocalStorage for browsers. It soon becomes troublesome to support and combine multiple options, or to switch between them. For JavaScript library authors, this usually means that they have to decide how many platforms they are going to support and implement storage for each of them.

Install the `unstorage` npm package:

npm i unstorage

my-storage.js

```js
import { createStorage } from "unstorage";

const storage = createStorage(/* opts */);

await storage.getItem("foo:bar"); // or storage.getItem('/foo/bar')
```

**Options:**

* `driver`: Default driver, using memory if not provided

### `hasItem(key, opts?)`

Checks if storage contains a key. Resolves to either `true` or `false`.

```js
await storage.hasItem("foo:bar");
```

You can also use the `has` alias:

```js
await storage.has("foo:bar");
```

### `getItem(key, opts?)`

Gets the value of a key in storage. Resolves to either a JavaScript primitive value or `null`.

```js
await storage.getItem("foo:bar");
```

You can also use the `get` alias:

```js
await storage.get("foo:bar");
```

### `getItems(items, opts)` (Experimental)

Gets the values of multiple keys in storage in parallel. Each item in the array can be either a string or an object in `{ key, options? }` format. The returned value is a Promise resolving to an array of objects in `{ key, value }` format.

### `getItemRaw(key, opts?)`

**Note:** This is an experimental feature. Please check unjs/unstorage#142 for more information.

Gets the value of a key in storage in raw format.

```js
// Value can be a Buffer, Array or driver's raw format
const value = await storage.getItemRaw("foo:bar.bin");
```

### `setItem(key, value, opts?)`

Adds/updates a value in the storage. If the value is not a string, it will be stringified. If the value is `undefined`, it is the same as calling `removeItem(key)`.
```js
await storage.setItem("foo:bar", "baz");
```

You can also use the `set` alias:

```js
await storage.set("foo:bar", "baz");
```

### `setItems(items, opts)` (Experimental)

Adds/updates items in parallel in the storage. Each item in the `items` array should be in `{ key, value, options? }` format. The returned value is a Promise resolving to an array of objects in `{ key, value }` format.

### `setItemRaw(key, value, opts?)`

**Note:** This is an experimental feature. Please check unjs/unstorage#142 for more information.

Adds/updates a value in the storage in raw format. If the value is `undefined`, it is the same as calling `removeItem(key)`.

```js
await storage.setItemRaw("data/test.bin", new Uint8Array([1, 2, 3]));
```

### `removeItem(key, opts = { removeMeta = false })`

Removes a value (and its meta) from storage.

```js
await storage.removeItem("foo:bar", { removeMeta: true });
// same as await storage.removeItem("foo:bar", true);
```

You can also use the `del` or `remove` aliases:

```js
await storage.remove("foo:bar");
await storage.del("foo:bar");
```

### `getMeta(key, opts = { nativeOnly? })`

Gets the metadata object for a specific key. This data is fetched from two sources:

* Driver native meta (like file creation time)
* Custom meta set by `storage.setMeta` (overrides driver native meta)

```js
await storage.getMeta("foo:bar");
// For fs driver returns an object like { mtime, atime, size }
```

### `setMeta(key, value, opts?)`

Sets custom meta for a specific key by adding a `$` suffix.

```js
await storage.setMeta("foo:bar", { flag: 1 });
// Same as storage.setItem('foo:bar$', { flag: 1 })
```

### `removeMeta(key, opts?)`

Removes meta for a specific key by adding a `$` suffix.

```js
await storage.removeMeta("foo:bar");
// Same as storage.removeItem('foo:bar$')
```

### `getKeys(base?, opts?)`

Gets all keys. Returns an array of strings. Meta keys (ending with `$`) are filtered out. If a base is provided, only keys starting with the base are returned, and only mounts starting with the base are queried. Keys still have the full path.
```js
await storage.getKeys();
```

You can also use the `keys` alias:

```js
await storage.keys();
```

### `clear(base?, opts?)`

Removes all stored key/values. If a base is provided, only mounts matching the base are cleared.

```js
await storage.clear();
```

### `dispose()`

Disposes all mounted storages to ensure there are no open handles left. Call it before exiting the process.

**Note:** Dispose also clears in-memory data.

```js
await storage.dispose();
```

### `mount(mountpoint, driver)`

By default, everything is stored in memory. We can mount additional storage space in a Unix-like fashion. When operating on a `key` that starts with a mountpoint, the mounted driver is called instead of the default storage.

In addition to `base`, you can set `readOnly` and `noClear` to disable write and clear operations.

```js
import { createStorage } from "unstorage";
import fsDriver from "unstorage/drivers/fs";

// Create a storage container with default memory storage
const storage = createStorage({});

storage.mount("/output", fsDriver({ base: "./output" }));

// Writes to ./output/test file
await storage.setItem("/output/test", "works");

// Adds value to in-memory storage
await storage.setItem("/foo", "bar");
```

### `unmount(mountpoint, dispose = true)`

Unregisters a mountpoint. Has no effect if the mountpoint is not found or is the root.

```js
await storage.unmount("/output");
```

### `watch(callback)`

Starts watching on all mountpoints. If a driver does not support watching, events are only emitted when `storage.*` methods are called.

```js
const unwatch = await storage.watch((event, key) => {});
// to stop this watcher
await unwatch();
```

### `unwatch()`

Stops all watchers on all mountpoints.

```js
await storage.unwatch();
```

### `getMount(key)`

Gets the mount point (driver and base) for a specific key in storage.

```js
storage.mount("cache" /* ... */);
storage.mount("cache:routes" /* ... */);

storage.getMount("cache:routes:foo:bar");
// => { base: "cache:routes:", driver: "..." }
```

### `getMounts(base?, { parents: boolean }?)`

Gets the mount points on a specific base.

```js
storage.mount("cache" /* ... */);
storage.mount("cache:sub" /* ... */);

storage.getMounts("cache:sub");
// => [{ base: "cache:sub", driver }]

storage.getMounts("cache:");
// => [{ base: "cache:sub", driver }, { base: "cache:", driver }]

storage.getMounts("");
storage.getMounts("cache:sub", { parents: true });
// => [{ base: "cache:sub", driver }, { base: "cache:", driver }, { base: "", driver }]
```

**Type `getItem` return value:**

```ts
await storage.getItem<string>("k"); // => <string>
await storage.getItemRaw<Buffer>("k"); // => <Buffer>
```

**Type check `setItem` parameters:**

```ts
storage.setItem<string>("k", "val"); // check ok
storage.setItemRaw<string>("k", "val"); // check ok

storage.setItem<string>("k", 123); // TS error
storage.setItemRaw<string>("k", 123); // TS error
```

**Typed storage instance:**

```ts
const storage = createStorage<string>();

await storage.getItem("k"); // => <string>

storage.setItem("k", "val"); // check ok
storage.setItem("k", 123); // TS error
```

Forward references use inheritance instead of overriding types.

```ts
const storage = createStorage<string>();
storage.setItem<number>("k", 123); // TS error: <number> is not compatible with <string>
```

**Typing a subset using `prefixStorage`:**

```ts
const storage = createStorage();

const htmlStorage = prefixStorage<string>(storage, "assets:html");
await htmlStorage.getItem("foo.html"); // => <string>

type Post = {
  title: string;
  content: string;
};

const postStorage = prefixStorage<Post>(storage, "assets:posts");
await postStorage.getItem("foo.json"); // => <Post>
```

In strict mode, the return type also includes `null` to help you handle the case when `getItem` misses.
```ts
// with "strict": true in tsconfig
await storage.getItem<string>("k"); // => <string | null>
```

**Specifying namespace:**

```ts
type StorageDefinition = {
  items: {
    foo: string;
    baz: number;
  };
};

const storage = createStorage<StorageDefinition>();

await storage.has("foo"); // TS will prompt you that there are two optional keys: "foo" or "baz"
await storage.getItem("baz"); // => number
await storage.setItem("foo", 12); // TS error: <number> is not compatible with <string>
await storage.setItem("foo", "val"); // check ok
await storage.remove("foo");
```

---

## Page: https://unstorage.unjs.io/

## A simple, small, and fast key-value storage library for JavaScript.

**Runtime Agnostic**: Your code will work on any JavaScript runtime including Node.js, Bun, Deno and Workers.

**Built-in drivers**: Unstorage ships with 20+ built-in drivers for different platforms: Memory (default), FS, Redis, MongoDB, Cloudflare, GitHub, etc.

**Snapshots**: Expand your server and add capabilities. Your codebase will scale with your project.

**Multi Storages**: Unix-style driver mounting to combine storages on different mounts.

**JSON friendly**: Unstorage automatically serializes and deserializes JSON values.

**Binary Support**: Store binary and raw data like images, videos, audio files, etc.

---

## Page: https://unstorage.unjs.io/drivers

* **Azure**: Store data in Azure's available storage services.
* **Browser**: Store data in browser storages (localStorage, sessionStorage, IndexedDB).
* **Capacitor Preferences**: Store data via the Capacitor Preferences API on mobile devices, or local storage on the web.
* **Cloudflare**: Store data in Cloudflare KV or R2 storage.
* **Filesystem (Node.js)**: Store data in the filesystem using the Node.js API.
* **GitHub**: Map files from a remote GitHub repository (read-only).
* **HTTP**: Use a remote HTTP/HTTPS endpoint as data storage.
* **LRU Cache**: Keep cached data in memory using an LRU cache.
* **Memory**: Keep data in memory.
* **MongoDB**: Store data in a MongoDB database.
* **Netlify Blobs**: Store data in Netlify Blobs.
* **Overlay**: Create a multi-layer overlay driver.
* **PlanetScale**: Store data in a PlanetScale database.
* **Redis**: Store data in Redis.
* **SQL Database**: Store data in an SQL database.
* **Vercel KV**: Store data in Vercel KV.

---

## Page: https://unstorage.unjs.io/guide/utils

## Namespace

Create a namespaced instance of the main storage. All operations are virtually prefixed, which is useful for creating shortcuts and limiting access.

`prefixStorage(storage, prefix)`

```js
import { createStorage, prefixStorage } from "unstorage";

const storage = createStorage();
const assetsStorage = prefixStorage(storage, "assets");

// Same as storage.setItem('assets:x', 'hello!')
await assetsStorage.setItem("x", "hello!");
```

## Snapshots

* `snapshot(storage, base?)`

Takes a snapshot of all keys under the specified base and stores them in a plain JavaScript object (string: string). The base is removed from the keys.

```js
import { snapshot } from "unstorage";

const data = await snapshot(storage, "/etc");
```

* `restoreSnapshot(storage, data, base?)`

Restores a snapshot created by `snapshot()`.

```js
await restoreSnapshot(storage, { "foo:bar": "baz" }, "/etc2");
```

---

## Page: https://unstorage.unjs.io/guide/http-server

The request URL is mapped to a key, and the method/body is mapped to a function. See below for supported HTTP methods.

Programmatic usage, creating an HTTP server that exposes methods to communicate with the `storage` instance:

server.js

```js
import { listen } from "listhen";
import { createStorage } from "unstorage";
import { createStorageServer } from "unstorage/server";

const storage = createStorage();

const storageServer = createStorageServer(storage, {
  authorize(req) {
    // req: { key, type, event }
    if (req.type === "read" && req.key.startsWith("private:")) {
      throw new Error("Unauthorized Read");
    }
  },
});

// Alternatively we can use `storageServer.handle` as a middleware
await listen(storageServer.handle);
```

The `storageServer` is an h3 instance. Also check out listhen for an elegant HTTP listener.
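As a rough illustration of the URL-to-key mapping described above, the sketch below converts a request path into a storage key. This is a hypothetical simplification, not unstorage's actual server implementation (the real mapping lives in `unstorage/server`):

```js
// Hypothetical sketch: convert a request path into a storage key by
// trimming slashes and replacing path separators with ":".
function pathToKey(pathname) {
  return pathname.replace(/^\/+|\/+$/g, "").split("/").join(":");
}

// A read on GET /private/token would map to the key "private:token",
// which the authorize() hook in the example above would reject.
pathToKey("/private/token"); // "private:token"
pathToKey("/foo/bar/"); // "foo:bar"
```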
**🛡️ Security Note:** Always implement `authorize` to protect the server when it is exposed to a production environment.

```js
import { createStorage } from "unstorage";
import httpDriver from "unstorage/drivers/http";

const client = createStorage({
  driver: httpDriver({
    base: "SERVER_ENDPOINT",
  }),
});

const keys = await client.getKeys();
```

When passing `accept: application/octet-stream` for GET and SET operations, the server switches to binary mode via `getItemRaw` and `setItemRaw`.

---

## Page: https://unstorage.unjs.io/guide/custom-driver

```js
import { createStorage, defineDriver } from "unstorage";

const myStorageDriver = defineDriver((options) => {
  return {
    name: "my-custom-driver",
    options,
    async hasItem(key, _opts) {},
    async getItem(key, _opts) {},
    async setItem(key, value, _opts) {},
    async removeItem(key, _opts) {},
    async getKeys(base, _opts) {},
    async clear(base, _opts) {},
    async dispose() {},
    async watch(callback) {},
  };
});

const storage = createStorage({
  driver: myStorageDriver(),
});
```

---

## Page: https://unstorage.unjs.io/drivers/memory

Keeps data in memory using a Map (the default storage).

## Usage

**Driver name:** `memory`

By default, it is mounted at the top level, so it's unlikely that you will need to mount it again.

```js
import { createStorage } from "unstorage";
import memoryDriver from "unstorage/drivers/memory";

const storage = createStorage({
  driver: memoryDriver(),
});
```

---

## Page: https://unstorage.unjs.io/drivers/azure

Store data in the key-value store of Azure App Configuration.

### Usage

**Driver name:** `azure-app-configuration`

Learn more about Azure App Configuration.

This driver uses the configuration store as a key-value store. It uses the `key` as the name and the `value` as content. You can also use labels to differentiate between environments (dev, prod, etc.) and prefixes to differentiate between applications (app01, app02, etc.).
To use it, you will need to install `@azure/app-configuration` and `@azure/identity` in your project:

npm i @azure/app-configuration @azure/identity

Usage:

```js
import { createStorage } from "unstorage";
import azureAppConfiguration from "unstorage/drivers/azure-app-configuration";

const storage = createStorage({
  driver: azureAppConfiguration({
    appConfigName: "unstoragetest",
    label: "dev",
    prefix: "app01",
  }),
});
```

**Authentication:**

The driver supports the following authentication methods:

* **`DefaultAzureCredential`**: This is the recommended way to authenticate. It will use managed identity or environment variables to authenticate the request. It will also work in a local environment by trying to use Azure CLI or Azure PowerShell to authenticate.
  ⚠️ Make sure that your Managed Identity or personal account has the `App Configuration Data Owner` role assigned to it, even if you already are `Contributor` or `Owner` on the app configuration resource.
* **`connectionString`**: The app configuration connection string. Not recommended for use in production.

**Options:**

* `appConfigName`: The name of the app configuration resource.
* `endpoint`: The endpoint of the app configuration resource.
* `connectionString`: The connection string of the app configuration resource.
* `prefix`: Optional prefix for keys. This can be used to isolate keys from different applications in the same Azure App Configuration instance. E.g. "app01" results in keys like "app01:foo" and "app01:bar".
* `label`: Optional label for keys. If not provided, all keys will be created and listed without labels. This can be used to isolate keys from different environments in the same Azure App Configuration instance. E.g. "dev" results in keys like "foo" and "bar" with the label "dev".

Store data in Azure Cosmos DB NoSQL API documents.

### Usage

**Driver name:** `azure-cosmos`

Learn more about Azure Cosmos DB.

This driver stores KV information in a NoSQL API Cosmos DB collection as documents.
It uses the `id` field as the key and adds `value` and `modified` fields to the document.

To use it, you will need to install `@azure/cosmos` and `@azure/identity` in your project:

npm i @azure/cosmos @azure/identity

Usage:

```js
import { createStorage } from "unstorage";
import azureCosmos from "unstorage/drivers/azure-cosmos";

const storage = createStorage({
  driver: azureCosmos({
    endpoint: "ENDPOINT",
    accountKey: "ACCOUNT_KEY",
  }),
});
```

**Authentication:**

* **`DefaultAzureCredential`**: This is the recommended way to authenticate. It will use managed identity or environment variables to authenticate the request. It will also work in a local environment by trying to use Azure CLI or Azure PowerShell to authenticate.
  ⚠️ Make sure that your Managed Identity or personal account has at least the `Cosmos DB Built-in Data Contributor` role assigned to it. Being `Contributor` or `Owner` on the resource should also be enough, but that does not follow the model of least privilege.
* **`accountKey`**: The Cosmos DB account key. If not provided, the driver will use `DefaultAzureCredential` (recommended).

**Options:**

* **`endpoint`** (required): Cosmos DB endpoint in the format `https://<account>.documents.azure.com:443/`.
* `accountKey`: Cosmos DB account key. If not provided, the driver will use `DefaultAzureCredential` (recommended).
* `databaseName`: The name of the database to use. Defaults to `unstorage`.
* `containerName`: The name of the container to use. Defaults to `unstorage`.

Store data in Azure Key Vault secrets.

### Usage

**Driver name:** `azure-key-vault`

Learn more about Azure Key Vault secrets.

This driver stores KV information in Azure Key Vault secrets, using the key as the secret id and the value as the secret content.

Please be aware that Key Vault secrets don't have the fastest access time and are not designed for high throughput. You also have to disable purge protection for your key vault to be able to delete secrets.
This implementation deletes and purges a secret when it is removed, to avoid conflicts with soft delete.

⚠️ Be aware that this driver stores the keys of your `key:value` pairs in an encoded way in Key Vault, to avoid conflicts with the naming requirements for secrets. This means that you will not be able to access secrets created manually (outside of unstorage) inside your Key Vault, unless they are encoded in the same way.

To use it, you will need to install `@azure/keyvault-secrets` and `@azure/identity` in your project:

npm i @azure/keyvault-secrets @azure/identity

Usage:

```js
import { createStorage } from "unstorage";
import azureKeyVault from "unstorage/drivers/azure-key-vault";

const storage = createStorage({
  driver: azureKeyVault({
    vaultName: "testunstoragevault",
  }),
});
```

**Authentication:**

The driver supports the following authentication methods:

* **`DefaultAzureCredential`**: This is the recommended way to authenticate. It will use managed identity or environment variables to authenticate the request. It will also work in a local environment by trying to use Azure CLI or Azure PowerShell to authenticate.
  ⚠️ Make sure that your Managed Identity or personal account either has the `Key Vault Secrets Officer` RBAC role (or `Key Vault Secrets User` for read-only access) assigned, or is a member of an access policy that grants the `Get`, `List`, `Set`, `Delete` and `Purge` secret permissions.

**Options:**

* **`vaultName`** (required): The name of the key vault to use.
* `serviceVersion`: Version of the Azure Key Vault service to use. Defaults to 7.3.
* `pageSize`: The number of entries to retrieve per request. Impacts `getKeys()` and `clear()` performance. Maximum value is 25.

Store data in Azure Blob storage.

### Usage

**Driver name:** `azure-storage-blob`

Learn more about Azure Blob storage.

This driver stores KV information in an Azure Blob storage container. The same container is used for all entries.
Each entry is stored in a separate blob, with the key as the blob name and the value as the blob content.

To use it, you will need to install `@azure/storage-blob` and `@azure/identity` in your project:

npm i @azure/storage-blob @azure/identity

Please make sure that the container you want to use exists in your storage account.

```js
import { createStorage } from "unstorage";
import azureStorageBlobDriver from "unstorage/drivers/azure-storage-blob";

const storage = createStorage({
  driver: azureStorageBlobDriver({
    accountName: "myazurestorageaccount",
  }),
});
```

**Authentication:**

The driver supports the following authentication methods:

* **`DefaultAzureCredential`**: This is the recommended way to authenticate. It will use managed identity or environment variables to authenticate the request. It will also work in a local environment by trying to use Azure CLI or Azure PowerShell to authenticate.
  ⚠️ Make sure that your Managed Identity or personal account has the `Storage Blob Data Contributor` role assigned to it, even if you already are `Contributor` or `Owner` on the storage account.
* **`AzureNamedKeyCredential`** (only available in the Node.js runtime): This will use the `accountName` and `accountKey` to authenticate the request.
* **`AzureSASCredential`**: This will use the `accountName` and `sasToken` to authenticate the request.
* **connection string** (only available in the Node.js runtime): This will use the `connectionString` to authenticate the request. This is not recommended, as it will expose your account key in plain text.

**Options:**

* **`accountName`** (required): The name of your storage account.
* `containerName`: The name of the blob container to use. Defaults to `unstorage`.
* `accountKey`: The account key to use for authentication. Only required if you are using `AzureNamedKeyCredential`.
* `sasKey`: The SAS token to use for authentication. Only required if you are using `AzureSASCredential`.
* `connectionString`: The storage account's connection string. `accountKey` and `sasKey` take precedence.

Store data in Azure Table storage.

### Usage

**Driver name:** `azure-storage-table`

Learn more about Azure Table storage.

This driver is currently not compatible with edge workers like Cloudflare Workers or Vercel Edge Functions. There may be an HTTP-based driver in the future.

This driver stores KV information in an Azure Table storage table. The same partition key is used for all keys, and the `unstorageValue` field is used to store the value.

To use it, you will need to install `@azure/data-tables` and `@azure/identity` in your project:

npm i @azure/data-tables @azure/identity

Please make sure that the table you want to use exists in your storage account.

```js
import { createStorage } from "unstorage";
import azureStorageTableDriver from "unstorage/drivers/azure-storage-table";

const storage = createStorage({
  driver: azureStorageTableDriver({
    accountName: "myazurestorageaccount",
  }),
});
```

**Authentication:**

The driver supports the following authentication methods:

* **`DefaultAzureCredential`**: This is the recommended way to authenticate. It will use managed identity or environment variables to authenticate the request. It will also work in a local environment by trying to use Azure CLI or Azure PowerShell to authenticate.
  ⚠️ Make sure that your Managed Identity or personal account has the `Storage Table Data Contributor` role assigned to it, even if you already are `Contributor` or `Owner` on the storage account.
* **`AzureNamedKeyCredential`** (only available in the Node.js runtime): This will use the `accountName` and `accountKey` to authenticate the request.
* **`AzureSASCredential`**: This will use the `accountName` and `sasToken` to authenticate the request.
* **connection string** (only available in the Node.js runtime): This will use the `connectionString` to authenticate the request.
This is not recommended, as it will expose your account key in plain text.

**Options:**

* **`accountName`** (required): The name of your storage account.
* `tableName`: The name of the table to use. Defaults to `unstorage`.
* `partitionKey`: The partition key to use. Defaults to `unstorage`.
* `accountKey`: The account key to use for authentication. Only required if you are using `AzureNamedKeyCredential`.

---

## Page: https://unstorage.unjs.io/drivers/browser

## LocalStorage / SessionStorage

### Usage

**Driver name:** `localstorage` or `sessionstorage`

Store data in localStorage or sessionStorage.

```js
import { createStorage } from "unstorage";
import localStorageDriver from "unstorage/drivers/localstorage";

const storage = createStorage({
  driver: localStorageDriver({ base: "app:" }),
});
```

**Options:**

* `base`: Add base to all keys to avoid collision
* `storage`: (optional) provide a `localStorage` or `sessionStorage` compatible object.
* `windowKey`: (optional) Can be `"localStorage"` (default) or `"sessionStorage"`
* `window`: (optional) provide the `window` object

## IndexedDB

Store key-value data in IndexedDB.

### Usage

**Driver name:** `indexeddb`

Learn more about IndexedDB.

To use it, you will need to install `idb-keyval` in your project:

npm i idb-keyval

Usage:

```js
import { createStorage } from "unstorage";
import indexedDbDriver from "unstorage/drivers/indexedb";

const storage = createStorage({
  driver: indexedDbDriver({ base: "app:" }),
});
```

**Options:**

* `base`: Add `${base}:` to all keys to avoid collision
* `dbName`: Custom name for the database. Defaults to `keyval-store`
* `storeName`: Custom name for the store. Defaults to `keyval`

IndexedDB is a browser database. Avoid using this preset in server environments.

---

## Page: https://unstorage.unjs.io/drivers/capacitor-preferences

Learn more about the Capacitor Preferences API.
## Usage

**Driver name:** `capacitor-preferences`

To use this driver, you need to install and sync `@capacitor/preferences` inside your capacitor project:

npm i @capacitor/preferences
npx cap sync

Usage:

```js
import { createStorage } from "unstorage";
import capacitorPreferences from "unstorage/drivers/capacitor-preferences";

const storage = createStorage({
  driver: capacitorPreferences({
    base: "test",
  }),
});
```

**Options:**

* `base`: Add `${base}:` to all keys to avoid collision

---

## Page: https://unstorage.unjs.io/drivers/cloudflare

> Store data in Cloudflare KV and access it from worker bindings.

### Usage

**Driver name:** `cloudflare-kv-binding`

Learn more about Cloudflare KV.

**Note:** This driver only works in a Cloudflare Worker environment; use `cloudflare-kv-http` for other environments.

You need to create and assign a KV namespace. See KV Bindings for more information.

```js
import { createStorage } from "unstorage";
import cloudflareKVBindingDriver from "unstorage/drivers/cloudflare-kv-binding";

// Using binding name to be picked from globalThis
const storage = createStorage({
  driver: cloudflareKVBindingDriver({ binding: "STORAGE" }),
});

// Directly setting binding
const storage = createStorage({
  driver: cloudflareKVBindingDriver({ binding: globalThis.STORAGE }),
});

// Using from Durable Objects and Workers using Modules Syntax
const storage = createStorage({
  driver: cloudflareKVBindingDriver({ binding: this.env.STORAGE }),
});

// Using outside of Cloudflare Workers (like Node.js)
// Use cloudflare-kv-http
```

**Options:**

* `binding`: KV binding or name of the namespace. Default is `STORAGE`.
* `base`: Adds a prefix to all stored keys

> Store data in Cloudflare KV using the Cloudflare API v4.

### Usage

**Driver name:** `cloudflare-kv-http`

Learn more about the Cloudflare KV API.

You need to create a KV namespace. See KV Bindings for more information.

**Note:** This driver uses native fetch and works universally!
For direct usage in a Cloudflare Worker environment, please use the `cloudflare-kv-binding` driver for best performance!

```js
import { createStorage } from "unstorage";
import cloudflareKVHTTPDriver from "unstorage/drivers/cloudflare-kv-http";

// Using `apiToken`
const storage = createStorage({
  driver: cloudflareKVHTTPDriver({
    accountId: "my-account-id",
    namespaceId: "my-kv-namespace-id",
    apiToken: "supersecret-api-token",
  }),
});

// Using `email` and `apiKey`
const storage = createStorage({
  driver: cloudflareKVHTTPDriver({
    accountId: "my-account-id",
    namespaceId: "my-kv-namespace-id",
    email: "[email protected]",
    apiKey: "my-api-key",
  }),
});

// Using `userServiceKey`
const storage = createStorage({
  driver: cloudflareKVHTTPDriver({
    accountId: "my-account-id",
    namespaceId: "my-kv-namespace-id",
    userServiceKey: "v1.0-my-service-key",
  }),
});
```

**Options:**

* `accountId`: Cloudflare account ID.
* `namespaceId`: The ID of the KV namespace to target. **Note:** be sure to use the namespace's ID, not the name or binding used in a worker environment.
* `apiToken`: API Token generated from the User Profile 'API Tokens' page.
* `email`: Email address associated with your account. May be used along with `apiKey` to authenticate in place of `apiToken`.
* `apiKey`: API key generated on the "My Account" page of the Cloudflare console. May be used along with `email` to authenticate in place of `apiToken`.
* `userServiceKey`: A special Cloudflare API key good for a restricted set of endpoints. Always begins with "v1.0-"; may vary in length. May be used to authenticate in place of `apiToken`, or of `apiKey` and `email`.
* `apiURL`: Custom API URL. Default is `https://api.cloudflare.com`.
* `base`: Adds a prefix to all stored keys

**Transaction options:**

* `ttl`: Supported for `setItem(key, value, { ttl: number /* seconds, min 60 */ })`

**Supported methods:**

* `getItem`: Maps to Read key-value pair `GET accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`
* `hasItem`: Maps to Read key-value pair `GET accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`. Returns `true` if `<parsed response body>.success` is `true`.
* `setItem`: Maps to Write key-value pair `PUT accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`
* `removeItem`: Maps to Delete key-value pair `DELETE accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/values/:key_name`
* `getKeys`: Maps to List a Namespace's Keys `GET accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/keys`
* `clear`: Maps to Delete key-value pair `DELETE accounts/:account_identifier/storage/kv/namespaces/:namespace_identifier/bulk`

> Store data in Cloudflare R2 buckets and access it from worker bindings.

This is an experimental driver! It only works in a Cloudflare Worker environment and cannot be used in other runtimes such as Node.js (an r2-http driver is coming soon).

### Usage

**Driver name:** `cloudflare-r2-binding`

Learn more about Cloudflare R2 buckets.

You need to create and assign an R2 bucket. See R2 Bindings for more information.
```js
import { createStorage } from "unstorage";
import cloudflareR2BindingDriver from "unstorage/drivers/cloudflare-r2-binding";

// Using binding name to be picked from globalThis
const storage = createStorage({
  driver: cloudflareR2BindingDriver({ binding: "BUCKET" }),
});

// Directly setting binding
const storage = createStorage({
  driver: cloudflareR2BindingDriver({ binding: globalThis.BUCKET }),
});

// Using from Durable Objects and Workers using Modules Syntax
const storage = createStorage({
  driver: cloudflareR2BindingDriver({ binding: this.env.BUCKET }),
});
```

**Options:**

* `binding`: Bucket binding or name. Default is `BUCKET`.
* `base`: Prefix all keys with base.

**Transaction options:**

* `getItemRaw(key, { type: "..." })`
  * `type: "object"`: Return the R2 object body.
  * `type: "stream"`: Return the body stream.
  * `type: "blob"`: Return a `Blob`.
  * `type: "bytes"`: Return a `Uint8Array`.
  * `type: "arrayBuffer"`: Return an `ArrayBuffer` (default)

To use Cloudflare R2 over HTTP, you can use the s3 driver. Make sure to set `region` to `auto`.

---

## Page: https://unstorage.unjs.io/drivers/database

The database driver is experimental and its behavior may change in the future.

Select and configure the appropriate connector for your database. Learn more about configuring connectors in the `db0` documentation.

```js
import { createDatabase } from "db0";
import { createStorage } from "unstorage";
import dbDriver from "unstorage/drivers/db0";
import sqlite from "db0/connectors/better-sqlite3";

// Learn more: https://db0.unjs.io
const database = createDatabase(
  sqlite({
    /* db0 connector options */
  })
);

const storage = createStorage({
  driver: dbDriver({
    database,
    table: "custom_table_name", // Default is "unstorage"
  }),
});
```

The database table is created automatically; no additional setup is required! Before the first operation, the driver ensures that a table with the columns `id`, `value`, `blob`, `created_at` and `updated_at` exists.
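As a rough mental model of that table, the sketch below uses an in-memory Map standing in for the database, with the same column names. This is a hypothetical illustration of upsert semantics, not the driver's actual code:

```js
// Hypothetical sketch: an in-memory "table" keyed by id, mimicking the
// id/value/blob/created_at/updated_at columns the db0 driver maintains.
const table = new Map();
let clock = 0; // logical clock standing in for timestamps

function upsert(id, value) {
  clock++;
  const existing = table.get(id);
  if (existing) {
    // An update keeps created_at and refreshes updated_at
    table.set(id, { ...existing, value, updated_at: clock });
  } else {
    table.set(id, { id, value, blob: null, created_at: clock, updated_at: clock });
  }
}

upsert("foo:bar", "baz");
upsert("foo:bar", "qux");
// The row's value is now "qux"; created_at still reflects the first write
```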
---

## Page: https://unstorage.unjs.io/drivers/deno

Learn more about Deno KV.

The `deno-kv` driver requires Deno Deploy or the Deno runtime with the `--unstable-kv` CLI flag. See the Node.js section for other runtimes.

The driver automatically maps Unstorage keys to Deno KV keys. For example, the `"test:key"` key will be mapped to `["test", "key"]` and vice versa.

```js
import { createStorage } from "unstorage";
import denoKVdriver from "unstorage/drivers/deno-kv";

const storage = createStorage({
  driver: denoKVdriver({
    // path: ":memory:",
    // base: "",
  }),
});
```

Deno provides the `@deno/kv` npm package, a Deno KV client library optimized for Node.js.

```js
import { createStorage } from "unstorage";
import denoKVNodedriver from "unstorage/drivers/deno-kv-node";

const storage = createStorage({
  driver: denoKVNodedriver({
    // path: ":memory:",
    // base: "",
  }),
});
```

---

## Page: https://unstorage.unjs.io/drivers/fs

## Usage

**Driver name:** `fs` or `fs-lite`

Maps data to the real filesystem, using a directory structure for nested keys. Supports watching using chokidar.

This driver implements meta for each key, including `mtime` (last modified time), `atime` (last access time), and `size` (file size), using `fs.stat`.

```js
import { createStorage } from "unstorage";
import fsDriver from "unstorage/drivers/fs";

const storage = createStorage({
  driver: fsDriver({ base: "./tmp" }),
});
```

**Options:**

* `base`: Base directory to isolate operations on this directory
* `ignore`: Ignore patterns for watch
* `watchOptions`: Additional chokidar options

## Node.js Filesystem (Lite)

This driver uses the pure Node.js API without extra dependencies.
```js
import { createStorage } from "unstorage";
import fsLiteDriver from "unstorage/drivers/fs-lite";

const storage = createStorage({
  driver: fsLiteDriver({ base: "./tmp" }),
});
```

**Options:**

* `base`: Base directory to isolate operations on this directory
* `ignore`: Optional callback function `(path: string) => boolean`

---

## Page: https://unstorage.unjs.io/drivers/github

This driver fetches all possible keys once and keeps them in cache for 10 minutes. Because of GitHub rate limits, it is highly recommended to provide a token. This only applies to fetching keys.

```js
import { createStorage } from "unstorage";
import githubDriver from "unstorage/drivers/github";

const storage = createStorage({
  driver: githubDriver({
    repo: "nuxt/nuxt",
    branch: "main",
    dir: "/docs",
  }),
});
```

---

## Page: https://unstorage.unjs.io/drivers/http

## Usage

**Driver name:** `http`

This driver implements meta for each key, including `mtime` (last modified time) and `status`, from HTTP headers by making a `HEAD` request.

```js
import { createStorage } from "unstorage";
import httpDriver from "unstorage/drivers/http";

const storage = createStorage({
  driver: httpDriver({ base: "http://cdn.com" }),
});
```

**Options:**

* `base`: Base URL for requests (**required**)
* `headers`: Custom headers to send on all requests

**Supported HTTP Methods:**

* `getItem`: Maps to HTTP `GET`. Returns the deserialized value if the response is ok
* `hasItem`: Maps to HTTP `HEAD`. Returns `true` if the response is ok (200)
* `getMeta`: Maps to HTTP `HEAD` (headers: `last-modified` => `mtime`, `x-ttl` => `ttl`)
* `setItem`: Maps to HTTP `PUT`. Sends the serialized value in the request body (the `ttl` option will be sent as the `x-ttl` header)
* `removeItem`: Maps to HTTP `DELETE`
* `clear`: Not supported

**Transaction Options:**

* `headers`: Custom headers to be sent on each operation (`getItem`, `setItem`, etc.)
* `ttl`: Custom `ttl` (in seconds) for supported drivers. Will be mapped to the `x-ttl` HTTP header.
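The `getMeta` header mapping described above can be sketched in plain JavaScript. This is a simplified illustration, not the driver's actual implementation; the function name is hypothetical:

```javascript
// Simplified sketch of how HEAD response headers can map to
// unstorage meta: `last-modified` => mtime, `x-ttl` => ttl.
function headersToMeta(headers) {
  const meta = {};
  if (headers["last-modified"]) {
    meta.mtime = new Date(headers["last-modified"]);
  }
  if (headers["x-ttl"]) {
    meta.ttl = Number.parseInt(headers["x-ttl"], 10);
  }
  return meta;
}

const meta = headersToMeta({
  "last-modified": "Tue, 08 Apr 2025 12:00:00 GMT",
  "x-ttl": "60",
});
console.log(meta.ttl); // 60
```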
---

## Page: https://unstorage.unjs.io/drivers/lru-cache

## Usage

**Driver name:** `lru-cache`

Keeps cached data in memory using an LRU cache. See `lru-cache` for supported options.

By default, the `max` setting is set to `1000` items. A default behavior for the `sizeCalculation` option is implemented based on the buffer size of both key and value.

```js
import { createStorage } from "unstorage";
import lruCacheDriver from "unstorage/drivers/lru-cache";

const storage = createStorage({
  driver: lruCacheDriver(),
});
```

---

## Page: https://unstorage.unjs.io/drivers/mongodb

## Usage

**Driver name:** `mongodb`

Learn more about MongoDB.

This driver stores KV information in a MongoDB collection, with a separate document for each key-value pair.

To use it, you will need to install `mongodb` in your project:

```sh
npm i mongodb
```

Usage:

```js
import { createStorage } from "unstorage";
import mongodbDriver from "unstorage/drivers/mongodb";

const storage = createStorage({
  driver: mongodbDriver({
    connectionString: "CONNECTION_STRING",
    databaseName: "test",
    collectionName: "test",
  }),
});
```

**Authentication:**

The driver supports the following authentication methods:

* **`connectionString`**: The MongoDB connection string. This is the only way to authenticate.

**Options:**

* **`connectionString`** (required): The connection string to use to connect to the MongoDB database. It should be in the format `mongodb://<username>:<password>@<host>:<port>/<database>`.
* `databaseName`: The name of the database to use. Defaults to `unstorage`.
* `collectionName`: The name of the collection to use. Defaults to `unstorage`.

---

## Page: https://unstorage.unjs.io/drivers/netlify

Store data in a Netlify Blobs store. This is supported in both edge and Node.js function runtimes, as well as during builds. Read more in Netlify Blobs.
**Driver name:** `netlify-blobs`

```js
import { createStorage } from "unstorage";
import netlifyBlobsDriver from "unstorage/drivers/netlify-blobs";

const storage = createStorage({
  driver: netlifyBlobsDriver({
    name: "blob-store-name",
  }),
});
```

You can create a deploy-scoped store by setting the `deployScoped` option to `true`. This means that the deploy only has access to its own store. The store is managed alongside the deploy, with the same deploy previews, deletes, and rollbacks. This is required during builds, which only have access to deploy-scoped stores.

```js
import { createStorage } from "unstorage";
import netlifyBlobsDriver from "unstorage/drivers/netlify-blobs";

const storage = createStorage({
  driver: netlifyBlobsDriver({
    deployScoped: true,
  }),
});
```

To use, you will need to install `@netlify/blobs` as a dependency or devDependency in your project:

```json
{
  "devDependencies": {
    "@netlify/blobs": "latest"
  }
}
```

**Options:**

* `name` - The name of the store to use. It is created if needed. This is required, except for deploy-scoped stores.
* `deployScoped` - If set to `true`, the store is scoped to the deploy. This means that it is only available from that deploy, and will be deleted or rolled back alongside it.
* `consistency` - The consistency model to use for the store. This can be `eventual` or `strong`. Default is `eventual`.
* `siteID` - Required during builds, where it is available as `constants.SITE_ID`. At runtime this is set automatically.
* `token` - Required during builds, where it is available as `constants.NETLIFY_API_TOKEN`. At runtime this is set automatically.

**Advanced options:**

These are not normally needed, but are available for advanced use cases or for use in unit tests.

* `apiURL`
* `edgeURL`
* `uncachedEdgeURL`

When using Unstorage in a Netlify edge function, you should use a URL import. This does not apply if you are compiling your code in a framework; it only applies if you are creating your own edge functions.
```ts
import { createStorage } from "https://esm.sh/unstorage";
import netlifyBlobsDriver from "https://esm.sh/unstorage/drivers/netlify-blobs";

export default async function handler(request: Request) {
  const storage = createStorage({
    driver: netlifyBlobsDriver({
      name: "blob-store-name",
    }),
  });
  // ...
}
```

There has been a change in the way global blob stores are stored in `@netlify/blobs` version `7.0.0`, which means that you will not be able to access objects in global stores created by older versions until you migrate them. This does not affect deploy-scoped stores, nor does it affect objects created with the new version. You can migrate objects in your old stores by running the following command in the project directory, using the latest version of the Netlify CLI:

```sh
netlify recipes blobs-migrate <name of store>
```

---

## Page: https://unstorage.unjs.io/drivers/null

```js
import { createStorage } from "unstorage";
import nullDriver from "unstorage/drivers/null";

const storage = createStorage({
  driver: nullDriver(),
});
```

---

## Page: https://unstorage.unjs.io/drivers/overlay

This is a special driver that creates a multi-layer overlay driver. All write operations happen on the top-level layer, while values are read from all layers. When removing a key, a special value `__OVERLAY_REMOVED__` will be set on the top-level layer internally.

## Usage

**Driver name:** `overlay`

In the example below, we create an in-memory overlay on top of fs. No changes will actually be written to disk when setting new keys.

```js
import { createStorage } from "unstorage";
import overlay from "unstorage/drivers/overlay";
import memory from "unstorage/drivers/memory";
import fs from "unstorage/drivers/fs";

const storage = createStorage({
  driver: overlay({
    layers: [memory(), fs({ base: "./data" })],
  }),
});
```

---

## Page: https://unstorage.unjs.io/drivers/planetscale

Learn more about PlanetScale.
This driver stores KV information in a PlanetScale DB with the columns `id`, `value`, `created_at` and `updated_at`.

You can create a table to store your data by running the following query in your PlanetScale database, where `<storage>` is the name of the table you want to use:

```sql
create table <storage> (
  id varchar(255) not null primary key,
  value longtext,
  created_at timestamp default current_timestamp,
  updated_at timestamp default current_timestamp on update current_timestamp
);
```

```js
import { createStorage } from "unstorage";
import planetscaleDriver from "unstorage/drivers/planetscale";

const storage = createStorage({
  driver: planetscaleDriver({
    // This should certainly not be inlined in your code but loaded via runtime config
    // or environment variables depending on your framework/project.
    url: "mysql://xxxxxxxxx:************@xxxxxxxxxx.us-east-3.psdb.cloud/my-database?sslaccept=strict",
    // table: 'storage'
  }),
});
```

---

## Page: https://unstorage.unjs.io/drivers/redis

**Driver name:** `redis`

Learn more about Redis.

Unstorage uses `ioredis` internally to connect to Redis. To use it, you will need to install `ioredis` in your project:

```sh
npm i ioredis
```

Usage with a single Redis instance:

```ts
import { createStorage } from "unstorage";
import redisDriver from "unstorage/drivers/redis";

const storage = createStorage({
  driver: redisDriver({
    base: "unstorage",
    host: "HOSTNAME",
    tls: true as any,
    port: 6380,
    password: "REDIS_PASSWORD",
  }),
});
```

Usage with a Redis cluster (e.g. AWS ElastiCache or Azure Redis Cache):

⚠️ If you connect to a cluster, you have to use hashtags as the prefix to avoid the Redis error `CROSSSLOT Keys in request don't hash to the same slot`. This means the prefix has to be surrounded by curly braces, which forces the keys into the same hash slot. You can read more here.
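The warning above can be illustrated in plain JavaScript. The helper below is hypothetical (not part of unstorage); it only shows the key shape that a braced `base` produces, since Redis Cluster hashes only the substring inside `{...}` when computing the key slot:

```javascript
// Hypothetical helper: build a key under a hashtag prefix.
// Redis Cluster hashes only the part inside {braces}, so every
// key built this way maps to the same hash slot.
function clusterKey(base, key) {
  return `{${base}}:${key}`;
}

console.log(clusterKey("unstorage", "foo")); // {unstorage}:foo
```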
```js
const storage = createStorage({
  driver: redisDriver({
    base: "{unstorage}",
    cluster: [
      {
        port: 6380,
        host: "HOSTNAME",
      },
    ],
    clusterOptions: {
      redisOptions: {
        tls: { servername: "HOSTNAME" },
        password: "REDIS_PASSWORD",
      },
    },
  }),
});
```

**Options:**

* `base`: Optional prefix to use for all keys. Can be used for namespacing. Has to be used as a hashtag prefix in Redis cluster mode.
* `url`: URL to use for connecting to Redis. Takes precedence over the `host` option. Has the format `redis://<REDIS_USER>:<REDIS_PASSWORD>@<REDIS_HOST>:<REDIS_PORT>`
* `cluster`: List of Redis nodes to use for cluster mode. Takes precedence over the `url` and `host` options.
* `clusterOptions`: Options to use for cluster mode.
* `ttl`: Default TTL for all items in **seconds**.

See ioredis for all available options. The `lazyConnect` option is enabled by default, so that the connection happens on the first Redis operation.

**Transaction options:**

* `ttl`: Supported for `setItem(key, value, { ttl: number /* seconds */ })`

---

## Page: https://unstorage.unjs.io/drivers/s3

The S3 driver allows storing KV data in Amazon S3 or any other S3-compatible provider. The driver implementation is lightweight and based on fetch, working in Node.js as well as in edge workers.

Set up a "Bucket" in your S3-compatible provider. You need this info:

```js
import { createStorage } from "unstorage";
import s3Driver from "unstorage/drivers/s3";

const storage = createStorage({
  driver: s3Driver({
    accessKeyId: "", // Access Key ID
    secretAccessKey: "", // Secret Access Key
    endpoint: "",
    bucket: "",
    region: "",
  }),
});
```

Any S3-compatible provider should work out of the box. Pull requests are more than welcome to add info about any other tested provider. Read more in Cloudflare R2.

---

## Page: https://unstorage.unjs.io/drivers/uploadthing

Learn more about UploadThing.

UploadThing support is currently experimental! There is a known issue where the same key, once deleted, cannot be used again (see the tracker issue).
## Usage

**Driver name:** `uploadthing`

To use, you will need to install the `uploadthing` dependency in your project:

```sh
npm i uploadthing
```

```js
import { createStorage } from "unstorage";
import uploadthingDriver from "unstorage/drivers/uploadthing";

const storage = createStorage({
  driver: uploadthingDriver({
    // token: "<your token>", // UPLOADTHING_SECRET environment variable will be used if not provided.
  }),
});
```

**Options:**

* `token`: Your UploadThing API key. Will be automatically inferred from the `UPLOADTHING_SECRET` environment variable if not provided.

---

## Page: https://unstorage.unjs.io/drivers/upstash

## Usage

**Driver name:** `upstash`

Learn more about Upstash.

To use it, you will need to install `@upstash/redis` in your project:

```sh
npm i @upstash/redis
```

Usage with Upstash Redis:

```js
import { createStorage } from "unstorage";
import upstashDriver from "unstorage/drivers/upstash";

const storage = createStorage({
  driver: upstashDriver({
    base: "unstorage",
    // url: "", // or set UPSTASH_REDIS_REST_URL env
    // token: "", // or set UPSTASH_REDIS_REST_TOKEN env
  }),
});
```

**Options:**

* `base`: Optional prefix to use for all keys. Can be used for namespacing.
* `url`: The REST URL for your Upstash Redis database. Find it in the Upstash Redis console. The driver uses the `UPSTASH_REDIS_REST_URL` environment variable by default.
* `token`: The REST token for authentication with your Upstash Redis database. Find it in the Upstash Redis console. The driver uses the `UPSTASH_REDIS_REST_TOKEN` environment variable by default.
* `ttl`: Default TTL for all items in **seconds**.

See the @upstash/redis documentation for all available options.

**Transaction options:**

* `ttl`: Supported for `setItem(key, value, { ttl: number /* seconds */ })`

---

## Page: https://unstorage.unjs.io/drivers/vercel

> Store data in a Vercel KV Store. Learn more about Vercel KV.
### Usage

**Driver name:** `vercel-kv`

```js
import { createStorage } from "unstorage";
import vercelKVDriver from "unstorage/drivers/vercel-kv";

const storage = createStorage({
  driver: vercelKVDriver({
    // url: "https://<your-project-slug>.kv.vercel-storage.com", // KV_REST_API_URL
    // token: "<your secret token>", // KV_REST_API_TOKEN
    // base: "test",
    // env: "KV",
    // ttl: 60, // in seconds
  }),
});
```

To use, you will need to install the `@vercel/kv` dependency in your project:

```json
{
  "dependencies": {
    "@vercel/kv": "latest"
  }
}
```

**Note:** For driver options type support, you might need to install `@upstash/redis` as a dev dependency as well.

**Options:**

* `url`: REST API URL to use for connecting to your Vercel KV store. Defaults to the `KV_REST_API_URL` environment variable.
* `token`: REST API token to use for connecting to your Vercel KV store. Defaults to the `KV_REST_API_TOKEN` environment variable.
* `base`: Optional prefix to use for all keys. Can be used for namespacing.
* `env`: Optional flag to customize the environment variable prefix (default is `KV`). Set to `false` to disable env inference for the `url` and `token` options.

See @upstash/redis for all available options.

> Store data in a Vercel Blob Store. Learn more about Vercel Blob.

Currently, Vercel Blob stores all data with public access.

### Usage

**Driver name:** `vercel-blob`

To use, you will need to install the `@vercel/blob` dependency in your project:

```sh
npm i @vercel/blob
```

```js
import { createStorage } from "unstorage";
import vercelBlobDriver from "unstorage/drivers/vercel-blob";

const storage = createStorage({
  driver: vercelBlobDriver({
    access: "public", // Required! Beware that stored data is publicly accessible.
    // token: "<your secret token>", // or set BLOB_READ_WRITE_TOKEN
    // base: "unstorage",
    // envPrefix: "BLOB",
  }),
});
```

**Options:**

* `access`: Whether the blob should be publicly accessible (required, must be `public`).
* `base`: Prefix to prepend to all keys. Can be used for namespacing.
* `token`: REST API token to use for connecting to your Vercel Blob store.
If not provided, it will be read from the `BLOB_READ_WRITE_TOKEN` environment variable.
* `envPrefix`: Prefix to use for the token environment variable name. Default is `BLOB` (env name = `BLOB_READ_WRITE_TOKEN`).
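As a plain-JavaScript illustration of how `envPrefix` determines the token variable name (the helper below is hypothetical, not part of the driver):

```javascript
// Hypothetical helper: derive the environment variable name the
// vercel-blob driver reads its token from, given an envPrefix.
function tokenEnvName(envPrefix = "BLOB") {
  return `${envPrefix}_READ_WRITE_TOKEN`;
}

console.log(tokenEnvName()); // BLOB_READ_WRITE_TOKEN
```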