## Page: https://bun.sh/docs

Bun is an all-in-one toolkit for JavaScript and TypeScript apps. It ships as a single executable called `bun`.

At its core is the _Bun runtime_, a fast JavaScript runtime designed as **a drop-in replacement for Node.js**. It's written in Zig and powered by JavaScriptCore under the hood, dramatically reducing startup times and memory usage.

```sh
bun run index.tsx   # TS and JSX supported out of the box
```

The `bun` command-line tool also implements a test runner, script runner, and Node.js-compatible package manager, all significantly faster than existing tools and usable in existing Node.js projects with little to no changes necessary.

```sh
bun run start                 # run the `start` script
bun install <pkg>             # install a package
bun build ./index.tsx         # bundle a project for browsers
bun test                      # run tests
bunx cowsay 'Hello, world!'   # execute a package
```

Get started with one of the quick links below, or read on to learn more about Bun.

## What is a runtime?

JavaScript (or, more formally, ECMAScript) is just a _specification_ for a programming language. Anyone can write a JavaScript _engine_ that ingests a valid JavaScript program and executes it. The two most popular engines in use today are V8 (developed by Google) and JavaScriptCore (developed by Apple). Both are open source.

But most JavaScript programs don't run in a vacuum. They need a way to access the outside world to perform useful tasks. This is where _runtimes_ come in. They implement additional APIs that are then made available to the JavaScript programs they execute.

### Browsers

Notably, browsers ship with JavaScript runtimes that implement a set of Web-specific APIs exposed via the global `window` object. Any JavaScript code executed by the browser can use these APIs to implement interactive or dynamic behavior in the context of the current webpage.

### Node.js

Similarly, Node.js is a JavaScript runtime that can be used in non-browser environments, like servers. JavaScript programs executed by Node.js have access to a set of Node.js-specific globals like `Buffer`, `process`, and `__dirname`, in addition to built-in modules for performing OS-level tasks like reading/writing files (`node:fs`) and networking (`node:net`, `node:http`). Node.js also implements a CommonJS-based module system and resolution algorithm that pre-dates JavaScript's native module system.

Bun is designed as a faster, leaner, more modern replacement for Node.js.

## Design goals

Bun is designed from the ground up with today's JavaScript ecosystem in mind.

* **Speed.** Bun processes start 4x faster than Node.js currently (try it yourself!).
* **TypeScript & JSX support.** You can directly execute `.jsx`, `.ts`, and `.tsx` files; Bun's transpiler converts these to vanilla JavaScript before execution.
* **ESM & CommonJS compatibility.** The world is moving towards ES modules (ESM), but millions of packages on npm still require CommonJS. Bun recommends ES modules, but supports CommonJS.
* **Web-standard APIs.** Bun implements standard Web APIs like `fetch`, `WebSocket`, and `ReadableStream`. Bun is powered by the JavaScriptCore engine, which is developed by Apple for Safari, so some APIs like `Headers` and `URL` directly use Safari's implementation.
* **Node.js compatibility.** In addition to supporting Node-style module resolution, Bun aims for full compatibility with built-in Node.js globals (`process`, `Buffer`) and modules (`path`, `fs`, `http`, etc.).
_This is an ongoing effort that is not complete._ Refer to the compatibility page for the current status.

Bun is more than a runtime. The long-term goal is to be a cohesive, infrastructural toolkit for building apps with JavaScript/TypeScript, including a package manager, transpiler, bundler, script runner, test runner, and more.

---

## Page: https://bun.sh/docs/test/writing

Define tests with a Jest-like API imported from the built-in `bun:test` module. Long term, Bun aims for complete Jest compatibility; at the moment, a limited set of `expect` matchers is supported.

## Basic usage

To define a simple test:

```ts
// math.test.ts
import { expect, test } from "bun:test";

test("2 + 2", () => {
  expect(2 + 2).toBe(4);
});
```

Tests can be grouped into suites with `describe`.

```ts
// math.test.ts
import { expect, test, describe } from "bun:test";

describe("arithmetic", () => {
  test("2 + 2", () => {
    expect(2 + 2).toBe(4);
  });

  test("2 * 2", () => {
    expect(2 * 2).toBe(4);
  });
});
```

Tests can be `async`.

```ts
import { expect, test } from "bun:test";

test("2 * 2", async () => {
  const result = await Promise.resolve(2 * 2);
  expect(result).toEqual(4);
});
```

Alternatively, use the `done` callback to signal completion. If you include the `done` callback as a parameter in your test definition, you _must_ call it or the test will hang.

```ts
import { expect, test } from "bun:test";

test("2 * 2", done => {
  Promise.resolve(2 * 2).then(result => {
    expect(result).toEqual(4);
    done();
  });
});
```

## Timeouts

Optionally specify a per-test timeout in milliseconds by passing a number as the third argument to `test`.

```ts
import { expect, test } from "bun:test";

test("wat", async () => {
  const data = await slowOperation();
  expect(data).toBe(42);
}, 500); // test must run in <500ms
```

In `bun:test`, test timeouts throw an uncatchable exception to force the test to stop running and fail. We also kill any child processes that were spawned in the test to avoid leaving behind zombie processes lurking in the background.

The default timeout for each test is 5000ms (5 seconds) if not overridden by this timeout option or `jest.setDefaultTimeout()`.

### 🧟 Zombie process killer

When a test times out and processes spawned in the test via `Bun.spawn`, `Bun.spawnSync`, or `node:child_process` are not killed, they will be killed automatically, and a message will be logged to the console. This prevents zombie processes from lingering in the background after timed-out tests.

## `test.skip`

Skip individual tests with `test.skip`. These tests will not be run.

```ts
import { expect, test } from "bun:test";

test.skip("wat", () => {
  // TODO: fix this
  expect(0.1 + 0.2).toEqual(0.3);
});
```

## `test.todo`

Mark a test as a todo with `test.todo`. These tests will not be run.

```ts
import { expect, test } from "bun:test";

test.todo("fix this", () => {
  myTestFunction();
});
```

To run todo tests and find any which are passing, use `bun test --todo`.

```sh
$ bun test --todo
my.test.ts:
✗ unimplemented feature
  ^ this test is marked as todo but passes. Remove `.todo` or check that test is correct.

 0 pass
 1 fail
 1 expect() calls
```

With this flag, failing todo tests will not cause an error, but todo tests which pass will be marked as failing so you can remove the todo mark or fix the test.
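Though not shown on this page, the Jest-style API also allows a todo to be declared before any implementation exists. A minimal sketch, assuming `test.todo` accepts a name without a callback (as it does in Jest):

```ts
import { test } from "bun:test";

// No callback yet; the runner reports this entry as "todo".
test.todo("handle leap years");
```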
## `test.only`

To run a particular test or suite of tests, use `test.only()` or `describe.only()`.

```ts
import { test, describe } from "bun:test";

test("test #1", () => {
  // does not run
});

test.only("test #2", () => {
  // runs
});

describe.only("only", () => {
  test("test #3", () => {
    // runs
  });
});
```

The following command will only execute tests #2 and #3.

```sh
bun test --only
```

The following command will execute tests #1, #2, and #3.

```sh
bun test
```

## `test.if`

To run a test conditionally, use `test.if()`. The test will run if the condition is truthy. This is particularly useful for tests that should only run on specific architectures or operating systems.

```ts
import { test } from "bun:test";

test.if(Math.random() > 0.5)("runs half the time", () => {
  // ...
});

const macOS = process.platform === "darwin";
test.if(macOS)("runs on macOS", () => {
  // runs if macOS
});
```

## `test.skipIf`

To instead skip a test based on some condition, use `test.skipIf()` or `describe.skipIf()`.

```ts
import { test } from "bun:test";

const macOS = process.platform === "darwin";

test.skipIf(macOS)("runs on non-macOS", () => {
  // runs if *not* macOS
});
```

## `test.todoIf`

If instead you want to mark the test as TODO, use `test.todoIf()` or `describe.todoIf()`. Carefully choosing between `skipIf` and `todoIf` can express the difference between, for example, "invalid for this target" and "planned but not implemented yet."

```ts
import { test } from "bun:test";

const macOS = process.platform === "darwin";

// TODO: we've only implemented this for Linux so far.
test.todoIf(macOS)("runs on posix", () => {
  // runs if *not* macOS
});
```

## `test.failing`

Use `test.failing()` when you know a test is currently failing but you want to track it and be notified when it starts passing. This inverts the test result:

* A failing test marked with `.failing()` will pass
* A passing test marked with `.failing()` will fail (with a message indicating it's now passing and should be fixed)

```ts
import { expect, test } from "bun:test";

// This will pass because the test is failing as expected
test.failing("math is broken", () => {
  expect(0.1 + 0.2).toBe(0.3); // fails due to floating point precision
});

// This will fail with a message that the test is now passing
test.failing("fixed bug", () => {
  expect(1 + 1).toBe(2); // passes, but we expected it to fail
});
```

This is useful for tracking known bugs that you plan to fix later, or for practicing test-driven development.

## Conditional Tests for Describe Blocks

The conditional modifiers `.if()`, `.skipIf()`, and `.todoIf()` can also be applied to `describe` blocks, affecting all tests within the suite:

```ts
import { describe, test } from "bun:test";

const isMacOS = process.platform === "darwin";

// Only runs the entire suite on macOS
describe.if(isMacOS)("macOS-specific features", () => {
  test("feature A", () => {
    // only runs on macOS
  });
  test("feature B", () => {
    // only runs on macOS
  });
});

// Skips the entire suite on Windows
describe.skipIf(process.platform === "win32")("Unix features", () => {
  test("feature C", () => {
    // skipped on Windows
  });
});

// Marks the entire suite as TODO on Linux
describe.todoIf(process.platform === "linux")("Upcoming Linux support", () => {
  test("feature D", () => {
    // marked as TODO on Linux
  });
});
```

## `test.each` and `describe.each`

To run the same test with multiple sets of data, use `test.each`. This creates a parametrized test that runs once for each test case provided.
```ts
import { expect, test, describe } from "bun:test";

const cases = [
  [1, 2, 3],
  [3, 4, 7],
];

test.each(cases)("%p + %p should be %p", (a, b, expected) => {
  expect(a + b).toBe(expected);
});
```

You can also use `describe.each` to create a parametrized suite that runs once for each test case:

```ts
describe.each([
  [1, 2, 3],
  [3, 4, 7],
])("add(%i, %i)", (a, b, expected) => {
  test(`returns ${expected}`, () => {
    expect(a + b).toBe(expected);
  });

  test(`sum is greater than each value`, () => {
    expect(a + b).toBeGreaterThan(a);
    expect(a + b).toBeGreaterThan(b);
  });
});
```

### Argument Passing

How arguments are passed to your test function depends on the structure of your test cases:

* If a table row is an array (like `[1, 2, 3]`), each element is passed as an individual argument
* If a row is not an array (like an object), it's passed as a single argument

```ts
// Array items passed as individual arguments
test.each([
  [1, 2, 3],
  [4, 5, 9],
])("add(%i, %i) = %i", (a, b, expected) => {
  expect(a + b).toBe(expected);
});

// Object items passed as a single argument
test.each([
  { a: 1, b: 2, expected: 3 },
  { a: 4, b: 5, expected: 9 },
])("add($a, $b) = $expected", data => {
  expect(data.a + data.b).toBe(data.expected);
});
```

### Format Specifiers

There are a number of options available for formatting the test title:

| Specifier | Output |
| --- | --- |
| `%p` | [`pretty-format`](https://www.npmjs.com/package/pretty-format) |
| `%s` | String |
| `%d` | Number |
| `%i` | Integer |
| `%f` | Floating point |
| `%j` | JSON |
| `%o` | Object |
| `%#` | Index of the test case |
| `%%` | Single percent sign (`%`) |

#### Examples

```ts
// Basic specifiers
test.each([
  ["hello", 123],
  ["world", 456],
])("string: %s, number: %i", (str, num) => {
  // "string: hello, number: 123"
  // "string: world, number: 456"
});

// %p for pretty-format output
test.each([
  [{ name: "Alice" }, { a: 1, b: 2 }],
  [{ name: "Bob" }, { x: 5, y: 10 }],
])("user %p with data %p", (user, data) => {
  // "user { name: 'Alice' } with data { a: 1, b: 2 }"
  // "user { name: 'Bob' } with data { x: 5, y: 10 }"
});

// %# for index
test.each(["apple", "banana"])("fruit #%# is %s", fruit => {
  // "fruit #0 is apple"
  // "fruit #1 is banana"
});
```

## Assertion Counting

Bun supports verifying that a specific number of assertions were called during a test.

### expect.hasAssertions()

Use `expect.hasAssertions()` to verify that at least one assertion is called during a test:

```ts
test("async work calls assertions", async () => {
  expect.hasAssertions(); // Will fail if no assertions are called

  const data = await fetchData();
  expect(data).toBeDefined();
});
```

This is especially useful for async tests to ensure your assertions actually run.

### expect.assertions(count)

Use `expect.assertions(count)` to verify that a specific number of assertions are called during a test:

```ts
test("exactly two assertions", () => {
  expect.assertions(2); // Will fail if not exactly 2 assertions are called

  expect(1 + 1).toBe(2);
  expect("hello").toContain("ell");
});
```

This helps ensure all your assertions run, especially in complex async code with multiple code paths.
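One pattern this enables is guaranteeing that a `catch` block actually ran. A minimal sketch using only `expect.assertions` as documented above (the rejecting promise is illustrative):

```ts
import { test, expect } from "bun:test";

test("the catch block actually runs", async () => {
  expect.assertions(1); // fails if the rejection is silently swallowed

  try {
    await Promise.reject(new Error("boom"));
  } catch (err) {
    expect((err as Error).message).toBe("boom");
  }
});
```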
## Matchers

Bun implements the following matchers. Full Jest compatibility is on the roadmap; track progress here.

| Supported | Matcher |
| --- | --- |
| ✅ | [`.not`](https://jestjs.io/docs/expect#not) |
| ✅ | [`.toBe()`](https://jestjs.io/docs/expect#tobevalue) |
| ✅ | [`.toEqual()`](https://jestjs.io/docs/expect#toequalvalue) |
| ✅ | [`.toBeNull()`](https://jestjs.io/docs/expect#tobenull) |
| ✅ | [`.toBeUndefined()`](https://jestjs.io/docs/expect#tobeundefined) |
| ✅ | [`.toBeNaN()`](https://jestjs.io/docs/expect#tobenan) |
| ✅ | [`.toBeDefined()`](https://jestjs.io/docs/expect#tobedefined) |
| ✅ | [`.toBeFalsy()`](https://jestjs.io/docs/expect#tobefalsy) |
| ✅ | [`.toBeTruthy()`](https://jestjs.io/docs/expect#tobetruthy) |
| ✅ | [`.toContain()`](https://jestjs.io/docs/expect#tocontainitem) |
| ✅ | [`.toContainAllKeys()`](https://jest-extended.jestcommunity.dev/docs/matchers/Object#tocontainallkeyskeys) |
| ✅ | [`.toContainValue()`](https://jest-extended.jestcommunity.dev/docs/matchers/Object#tocontainvaluevalue) |
| ✅ | [`.toContainValues()`](https://jest-extended.jestcommunity.dev/docs/matchers/Object#tocontainvaluesvalues) |
| ✅ | [`.toContainAllValues()`](https://jest-extended.jestcommunity.dev/docs/matchers/Object#tocontainallvaluesvalues) |
| ✅ | [`.toContainAnyValues()`](https://jest-extended.jestcommunity.dev/docs/matchers/Object#tocontainanyvaluesvalues) |
| ✅ | [`.toStrictEqual()`](https://jestjs.io/docs/expect#tostrictequalvalue) |
| ✅ | [`.toThrow()`](https://jestjs.io/docs/expect#tothrowerror) |
| ✅ | [`.toHaveLength()`](https://jestjs.io/docs/expect#tohavelengthnumber) |
| ✅ | [`.toHaveProperty()`](https://jestjs.io/docs/expect#tohavepropertykeypath-value) |
| ✅ | [`.extend`](https://jestjs.io/docs/expect#expectextendmatchers) |
| ✅ | [`.anything()`](https://jestjs.io/docs/expect#expectanything) |
| ✅ | [`.any()`](https://jestjs.io/docs/expect#expectanyconstructor) |
| ✅ | [`.arrayContaining()`](https://jestjs.io/docs/expect#expectarraycontainingarray) |
| ✅ | [`.assertions()`](https://jestjs.io/docs/expect#expectassertionsnumber) |
| ✅ | [`.closeTo()`](https://jestjs.io/docs/expect#expectclosetonumber-numdigits) |
| ✅ | [`.hasAssertions()`](https://jestjs.io/docs/expect#expecthasassertions) |
| ✅ | [`.objectContaining()`](https://jestjs.io/docs/expect#expectobjectcontainingobject) |
| ✅ | [`.stringContaining()`](https://jestjs.io/docs/expect#expectstringcontainingstring) |
| ✅ | [`.stringMatching()`](https://jestjs.io/docs/expect#expectstringmatchingstring--regexp) |
| ❌ | [`.addSnapshotSerializer()`](https://jestjs.io/docs/expect#expectaddsnapshotserializerserializer) |
| ✅ | [`.resolves()`](https://jestjs.io/docs/expect#resolves) |
| ✅ | [`.rejects()`](https://jestjs.io/docs/expect#rejects) |
| ✅ | [`.toHaveBeenCalled()`](https://jestjs.io/docs/expect#tohavebeencalled) |
| ✅ | [`.toHaveBeenCalledTimes()`](https://jestjs.io/docs/expect#tohavebeencalledtimesnumber) |
| ✅ | [`.toHaveBeenCalledWith()`](https://jestjs.io/docs/expect#tohavebeencalledwitharg1-arg2-) |
| ✅ | [`.toHaveBeenLastCalledWith()`](https://jestjs.io/docs/expect#tohavebeenlastcalledwitharg1-arg2-) |
| ✅ | [`.toHaveBeenNthCalledWith()`](https://jestjs.io/docs/expect#tohavebeennthcalledwithnthcall-arg1-arg2-) |
| ✅ | [`.toHaveReturned()`](https://jestjs.io/docs/expect#tohavereturned) |
| ✅ | [`.toHaveReturnedTimes()`](https://jestjs.io/docs/expect#tohavereturnedtimesnumber) |
| ❌ | [`.toHaveReturnedWith()`](https://jestjs.io/docs/expect#tohavereturnedwithvalue) |
| ❌ | [`.toHaveLastReturnedWith()`](https://jestjs.io/docs/expect#tohavelastreturnedwithvalue) |
| ❌ | [`.toHaveNthReturnedWith()`](https://jestjs.io/docs/expect#tohaventhreturnedwithnthcall-value) |
| ✅ | [`.toBeCloseTo()`](https://jestjs.io/docs/expect#tobeclosetonumber-numdigits) |
| ✅ | [`.toBeGreaterThan()`](https://jestjs.io/docs/expect#tobegreaterthannumber--bigint) |
| ✅ | [`.toBeGreaterThanOrEqual()`](https://jestjs.io/docs/expect#tobegreaterthanorequalnumber--bigint) |
| ✅ | [`.toBeLessThan()`](https://jestjs.io/docs/expect#tobelessthannumber--bigint) |
| ✅ | [`.toBeLessThanOrEqual()`](https://jestjs.io/docs/expect#tobelessthanorequalnumber--bigint) |
| ✅ | [`.toBeInstanceOf()`](https://jestjs.io/docs/expect#tobeinstanceofclass) |
| ✅ | [`.toContainEqual()`](https://jestjs.io/docs/expect#tocontainequalitem) |
| ✅ | [`.toMatch()`](https://jestjs.io/docs/expect#tomatchregexp--string) |
| ✅ | [`.toMatchObject()`](https://jestjs.io/docs/expect#tomatchobjectobject) |
| ✅ | [`.toMatchSnapshot()`](https://jestjs.io/docs/expect#tomatchsnapshotpropertymatchers-hint) |
| ✅ | [`.toMatchInlineSnapshot()`](https://jestjs.io/docs/expect#tomatchinlinesnapshotpropertymatchers-inlinesnapshot) |
| ✅ | [`.toThrowErrorMatchingSnapshot()`](https://jestjs.io/docs/expect#tothrowerrormatchingsnapshothint) |
| ✅ | [`.toThrowErrorMatchingInlineSnapshot()`](https://jestjs.io/docs/expect#tothrowerrormatchinginlinesnapshotinlinesnapshot) |
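Since `.extend` is listed as supported above, custom matchers can be registered with `expect.extend`. A minimal sketch: the matcher name `toBeWithinRange` and its logic are illustrative, and the `as any` cast sidesteps type registration for brevity:

```ts
import { expect, test } from "bun:test";

expect.extend({
  toBeWithinRange(received: number, floor: number, ceiling: number) {
    const pass = received >= floor && received <= ceiling;
    return {
      pass,
      message: () => `expected ${received} to be within ${floor}..${ceiling}`,
    };
  },
});

test("custom matcher", () => {
  // Cast keeps the sketch self-contained without module augmentation.
  (expect(7) as any).toBeWithinRange(1, 10);
});
```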
---

## Page: https://bun.sh/docs/test/hot

To automatically re-run tests when files change, use the `--watch` flag:

```sh
bun test --watch
```

Bun will watch for changes to any files imported in a test file, and re-run tests when a change is detected. It's fast.

> "bun test --watch url" in a large folder with multiple files that start with "url" pic.twitter.com/aZV9BP4eFu
>
> — Jarred Sumner (@jarredsumner) March 29, 2023

---

## Page: https://bun.sh/docs/test/lifecycle

The test runner supports the following lifecycle hooks. This is useful for loading test fixtures, mocking data, and configuring the test environment.

| Hook | Description |
| --- | --- |
| `beforeAll` | Runs once before all tests. |
| `beforeEach` | Runs before each test. |
| `afterEach` | Runs after each test. |
| `afterAll` | Runs once after all tests. |

Perform per-test setup and teardown logic with `beforeEach` and `afterEach`.

```ts
import { beforeEach, afterEach } from "bun:test";

beforeEach(() => {
  console.log("running test.");
});

afterEach(() => {
  console.log("done with test.");
});

// tests...
```

Perform per-scope setup and teardown logic with `beforeAll` and `afterAll`. The _scope_ is determined by where the hook is defined.

To scope the hooks to a particular `describe` block:

```ts
import { describe, beforeAll } from "bun:test";

describe("test group", () => {
  beforeAll(() => {
    // setup
  });

  // tests...
});
```

To scope the hooks to a test file:

```ts
import { describe, beforeAll } from "bun:test";

beforeAll(() => {
  // setup
});

describe("test group", () => {
  // tests...
});
```

To scope the hooks to an entire multi-file test run, define the hooks in a separate file.

```ts
// setup.ts
import { beforeAll, afterAll } from "bun:test";

beforeAll(() => {
  // global setup
});

afterAll(() => {
  // global teardown
});
```

Then use `--preload` to run the setup script before any test files.

```sh
$ bun test --preload ./setup.ts
```

To avoid typing `--preload` every time you run tests, it can be added to your `bunfig.toml`:

```toml
[test]
preload = ["./setup.ts"]
```

---

## Page: https://bun.sh/docs/test/mocks

Create mocks with the `mock` function.

```ts
import { test, expect, mock } from "bun:test";

const random = mock(() => Math.random());

test("random", async () => {
  const val = random();
  expect(val).toBeGreaterThan(0);
  expect(random).toHaveBeenCalled();
  expect(random).toHaveBeenCalledTimes(1);
});
```

Alternatively, you can use the `jest.fn()` function, as in Jest. It behaves identically.

```ts
import { test, expect, jest } from "bun:test";

const random = jest.fn(() => Math.random());

test("random", async () => {
  const val = random();
  expect(val).toBeGreaterThan(0);
  expect(random).toHaveBeenCalled();
  expect(random).toHaveBeenCalledTimes(1);
});
```

The result of `mock()` is a new function that's been decorated with some additional properties.

```ts
import { mock } from "bun:test";

const random = mock((multiplier: number) => multiplier * Math.random());

random(2);
random(10);

random.mock.calls;
// [[ 2 ], [ 10 ]]

random.mock.results;
// [
//   { type: "return", value: 0.6533907460954099 },
//   { type: "return", value: 0.6452713933037312 }
// ]
```

Mock functions implement the familiar Jest-style properties and methods (such as `.mock.calls`, `.mock.results`, `.mockImplementation()`, and `.mockRestore()`), several of which appear in the examples below.
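Those helpers make it easy to swap a mock's behavior mid-test. A minimal sketch using only `mock()` and `.mockImplementation()` from this page; the `getUser` function is illustrative:

```ts
import { mock, test, expect } from "bun:test";

const getUser = mock(() => ({ name: "Alice" }));

test("implementation can be swapped", () => {
  expect(getUser().name).toBe("Alice");

  // Replace the implementation; subsequent calls use the new one.
  getUser.mockImplementation(() => ({ name: "Bob" }));
  expect(getUser().name).toBe("Bob");
});
```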
## `.spyOn()`

It's possible to track calls to a function without replacing it with a mock. Use `spyOn()` to create a spy; these spies can be passed to `.toHaveBeenCalled()` and `.toHaveBeenCalledTimes()`.

```ts
import { test, expect, spyOn } from "bun:test";

const ringo = {
  name: "Ringo",
  sayHi() {
    console.log(`Hello I'm ${this.name}`);
  },
};

const spy = spyOn(ringo, "sayHi");

test("spyon", () => {
  expect(spy).toHaveBeenCalledTimes(0);
  ringo.sayHi();
  expect(spy).toHaveBeenCalledTimes(1);
});
```

## Module mocks with `mock.module()`

Module mocking lets you override the behavior of a module. Use `mock.module(path: string, callback: () => Object)` to mock a module.

```ts
import { test, expect, mock } from "bun:test";

mock.module("./module", () => {
  return {
    foo: "bar",
  };
});

test("mock.module", async () => {
  const esm = await import("./module");
  expect(esm.foo).toBe("bar");

  const cjs = require("./module");
  expect(cjs.foo).toBe("bar");
});
```

Like the rest of Bun, module mocks support both `import` and `require`.

### Overriding already imported modules

If you need to override a module that's already been imported, there's nothing special you need to do. Just call `mock.module()` and the module will be overridden.

```ts
import { test, expect, mock } from "bun:test";

// The module we're going to mock is here:
import { foo } from "./module";

test("mock.module", async () => {
  const cjs = require("./module");
  expect(foo).toBe("bar");
  expect(cjs.foo).toBe("bar");

  // We update it here:
  mock.module("./module", () => {
    return {
      foo: "baz",
    };
  });

  // And the live bindings are updated.
  expect(foo).toBe("baz");

  // The module is also updated for CJS.
  expect(cjs.foo).toBe("baz");
});
```

### Hoisting & preloading

If you need to ensure a module is mocked before it's imported, use `--preload` to load your mocks before your tests run.

```ts
// my-preload.ts
import { mock } from "bun:test";

mock.module("./module", () => {
  return {
    foo: "bar",
  };
});
```

```sh
bun test --preload ./my-preload
```

To make your life easier, you can put `preload` in your `bunfig.toml`:

```toml
[test]
# Load these modules before running tests.
preload = ["./my-preload"]
```

#### What happens if I mock a module that's already been imported?

If you mock a module that's already been imported, the module will be updated in the module cache. This means that any modules that import the module will get the mocked version, BUT the original module will still have been evaluated. That means that any side effects from the original module will still have happened. If you want to prevent the original module from being evaluated, use `--preload` to load your mocks before your tests run.

### `__mocks__` directory and auto-mocking

Auto-mocking is not supported yet. If this is blocking you from switching to Bun, please file an issue.

### Implementation details

Module mocks have different implementations for ESM and CommonJS modules. For ES modules, we've added patches to JavaScriptCore that allow Bun to override export values at runtime and update live bindings recursively.

As of Bun v1.0.19, Bun automatically resolves the `specifier` argument to `mock.module()` as though you did an `import`. If it successfully resolves, the resolved specifier string is used as the key in the module cache. This means that you can use relative paths, absolute paths, and even module names. If the `specifier` doesn't resolve, the original `specifier` is used as the key instead.

After resolution, the mocked module is stored in the ES module registry **and** the CommonJS require cache. This means that you can use `import` and `require` interchangeably for mocked modules.

The callback function is called lazily, only if the module is imported or required.
This means that you can use `mock.module()` to mock modules that don't exist yet, and that you can mock modules that are imported by other modules.

### Module Mock Implementation Details

Understanding how `mock.module()` works helps you use it more effectively:

1. **Cache Interaction**: Module mocks interact with both the ESM and CommonJS module caches.
2. **Lazy Evaluation**: The mock factory callback is only evaluated when the module is actually imported or required.
3. **Path Resolution**: Bun automatically resolves the module specifier as though you were doing an import, supporting:
   * Relative paths (`'./module'`)
   * Absolute paths (`'/path/to/module'`)
   * Package names (`'lodash'`)
4. **Import Timing Effects**:
   * When mocking before first import: no side effects from the original module occur
   * When mocking after import: the original module's side effects have already happened
   * For this reason, using `--preload` is recommended for mocks that need to prevent side effects
5. **Live Bindings**: Mocked ESM modules maintain live bindings, so changing the mock will update all existing imports

## Global Mock Functions

### Clear all mocks with `mock.clearAllMocks()`

Reset all mock function state (calls, results, etc.) without restoring their original implementation:

```ts
import { expect, mock, test } from "bun:test";

const random1 = mock(() => Math.random());
const random2 = mock(() => Math.random());

test("clearing all mocks", () => {
  random1();
  random2();

  expect(random1).toHaveBeenCalledTimes(1);
  expect(random2).toHaveBeenCalledTimes(1);

  mock.clearAllMocks();

  expect(random1).toHaveBeenCalledTimes(0);
  expect(random2).toHaveBeenCalledTimes(0);

  // Note: implementations are preserved
  expect(typeof random1()).toBe("number");
  expect(typeof random2()).toBe("number");
});
```

This resets the `.mock.calls`, `.mock.instances`, `.mock.contexts`, and `.mock.results` properties of all mocks, but unlike `mock.restore()`, it does not restore the original implementation.

### Restore all function mocks with `mock.restore()`

Instead of manually restoring each mock individually with `mockFn.mockRestore()`, restore all mocks with one command by calling `mock.restore()`. Doing so does not reset the value of modules overridden with `mock.module()`.

Using `mock.restore()` can reduce the amount of code in your tests by adding it to `afterEach` blocks in each test file, or even to your test preload code.
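A minimal sketch of that pattern, using only `afterEach` and `mock.restore()` as documented here:

```ts
import { afterEach, mock } from "bun:test";

// Undo every spy/mock created during each test,
// so mock state never leaks between tests.
afterEach(() => {
  mock.restore();
});
```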
A fuller example, showing spies being created, overridden, and restored:

```ts
import { expect, mock, spyOn, test } from "bun:test";
// Assumes ./foo.ts, ./bar.ts, ./baz.ts export functions
// foo(), bar(), baz() that return "foo", "bar", "baz".
import * as fooModule from "./foo.ts";
import * as barModule from "./bar.ts";
import * as bazModule from "./baz.ts";

test("foo, bar, baz", () => {
  const fooSpy = spyOn(fooModule, "foo");
  const barSpy = spyOn(barModule, "bar");
  const bazSpy = spyOn(bazModule, "baz");

  expect(fooSpy()).toBe("foo");
  expect(barSpy()).toBe("bar");
  expect(bazSpy()).toBe("baz");

  fooSpy.mockImplementation(() => 42);
  barSpy.mockImplementation(() => 43);
  bazSpy.mockImplementation(() => 44);

  expect(fooSpy()).toBe(42);
  expect(barSpy()).toBe(43);
  expect(bazSpy()).toBe(44);

  mock.restore();

  expect(fooSpy()).toBe("foo");
  expect(barSpy()).toBe("bar");
  expect(bazSpy()).toBe("baz");
});
```

## Vitest Compatibility

For added compatibility with tests written for Vitest, Bun provides the `vi` global object as an alias for parts of the Jest mocking API:

```ts
import { test, expect } from "bun:test";

// Using the 'vi' alias, similar to Vitest
test("vitest compatibility", () => {
  const mockFn = vi.fn(() => 42);
  mockFn();
  expect(mockFn).toHaveBeenCalled();

  // The following functions are available on the vi object:
  // vi.fn
  // vi.spyOn
  // vi.mock
  // vi.restoreAllMocks
  // vi.clearAllMocks
});
```

This makes it easier to port tests from Vitest to Bun without having to rewrite all your mocks.

---

## Page: https://bun.sh/docs/test/snapshots

Snapshot testing saves the output of a value and compares it against future test runs. This is particularly useful for UI components, complex objects, or any output that needs to remain consistent.

## Basic snapshots

Snapshot tests are written using the `.toMatchSnapshot()` matcher:

```ts
import { test, expect } from "bun:test";

test("snap", () => {
  expect("foo").toMatchSnapshot();
});
```

The first time this test is run, the argument to `expect` will be serialized and written to a special snapshot file in a `__snapshots__` directory alongside the test file. On future runs, the argument is compared against the snapshot on disk. Snapshots can be re-generated with the following command:

```sh
bun test --update-snapshots
```

## Inline snapshots

For smaller values, you can use inline snapshots with `.toMatchInlineSnapshot()`. These snapshots are stored directly in your test file:

```ts
import { test, expect } from "bun:test";

test("inline snapshot", () => {
  // First run: snapshot will be inserted automatically
  expect({ hello: "world" }).toMatchInlineSnapshot();

  // After first run, the test file will be updated to:
  // expect({ hello: "world" }).toMatchInlineSnapshot(`
  //   {
  //     "hello": "world",
  //   }
  // `);
});
```

When you run the test, Bun automatically updates the test file itself with the generated snapshot string. This makes the tests more portable and easier to understand, since the expected output is right next to the test.

### Using inline snapshots

1. Write your test with `.toMatchInlineSnapshot()`
2. Run the test once
3. Bun automatically updates your test file with the snapshot
4. On subsequent runs, the value is compared against the inline snapshot

Inline snapshots are particularly useful for small, simple values where it's helpful to see the expected output right in the test file.
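For reference, the on-disk snapshot written by the earlier `toMatchSnapshot()` example looks roughly like this. Jest-style `.snap` serialization is assumed; the exact header and key format may differ:

```js
// __snapshots__/snap.test.ts.snap
exports[`snap 1`] = `"foo"`;
```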
## Error snapshots

You can also snapshot error messages using `.toThrowErrorMatchingSnapshot()` and `.toThrowErrorMatchingInlineSnapshot()`:

```ts
import { test, expect } from "bun:test";

test("error snapshot", () => {
  expect(() => {
    throw new Error("Something went wrong");
  }).toThrowErrorMatchingSnapshot();

  expect(() => {
    throw new Error("Another error");
  }).toThrowErrorMatchingInlineSnapshot();
});
```

---

## Page: https://bun.sh/docs/test/time

`bun:test` lets you change what time it is in your tests. This works with any of the following:

* `Date.now`
* `new Date()`
* `new Intl.DateTimeFormat().format()`

Timers are not impacted yet, but may be in a future release of Bun.

## `setSystemTime`

To change the system time, use `setSystemTime`:

```ts
import { setSystemTime, beforeAll, test, expect } from "bun:test";

beforeAll(() => {
  setSystemTime(new Date("2020-01-01T00:00:00.000Z"));
});

test("it is 2020", () => {
  expect(new Date().getFullYear()).toBe(2020);
});
```

To support existing tests written against Jest's fake timers, you can use `jest.useFakeTimers` and `jest.useRealTimers`:

```ts
import { jest, test, expect } from "bun:test";

test("just like in jest", () => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date("2020-01-01T00:00:00.000Z"));
  expect(new Date().getFullYear()).toBe(2020);
  jest.useRealTimers();
  expect(new Date().getFullYear()).toBeGreaterThan(2020);
});

test("unlike in jest", () => {
  const OriginalDate = Date;
  jest.useFakeTimers();
  if (typeof Bun === "undefined") {
    // In Jest, the Date constructor changes.
    // That can cause all sorts of bugs because suddenly Date !== Date before the test.
    expect(Date).not.toBe(OriginalDate);
    expect(Date.now).not.toBe(OriginalDate.now);
  } else {
    // In bun:test, the Date constructor does not change when you useFakeTimers.
    expect(Date).toBe(OriginalDate);
    expect(Date.now).toBe(OriginalDate.now);
  }
});
```

**Timers:** Note that we have not implemented built-in support for mocking timers yet, but this is on the roadmap.

### Reset the system time

To reset the system time, pass no arguments to `setSystemTime`:

```ts
import { setSystemTime, expect, test } from "bun:test";

test("it was 2020, for a moment.", () => {
  // Set it to something!
  setSystemTime(new Date("2020-01-01T00:00:00.000Z"));
  expect(new Date().getFullYear()).toBe(2020);

  // reset it!
  setSystemTime();
  expect(new Date().getFullYear()).toBeGreaterThan(2020);
});
```

## Get mocked time with `jest.now()`

When you're using mocked time (with `setSystemTime` or `useFakeTimers`), you can use `jest.now()` to get the current mocked timestamp:

```ts
import { test, expect, jest } from "bun:test";

test("get the current mocked time", () => {
  jest.useFakeTimers();
  jest.setSystemTime(new Date("2020-01-01T00:00:00.000Z"));

  expect(Date.now()).toBe(1577836800000); // Jan 1, 2020 timestamp
  expect(jest.now()).toBe(1577836800000); // Same value

  jest.useRealTimers();
});
```

This is useful when you need to access the mocked time directly without creating a new Date object.

## Set the time zone

By default, the time zone for all `bun test` runs is UTC (`Etc/UTC`) unless overridden. To change the time zone, either pass the `TZ` environment variable to `bun test`:
```sh
TZ=America/Los_Angeles bun test
```

Or set `process.env.TZ` at runtime:

```ts
import { test, expect } from "bun:test";

test("Welcome to California!", () => {
  process.env.TZ = "America/Los_Angeles";
  expect(new Date().getTimezoneOffset()).toBe(420);
  expect(new Intl.DateTimeFormat().resolvedOptions().timeZone).toBe(
    "America/Los_Angeles",
  );
});

test("Welcome to New York!", () => {
  // Unlike in Jest, you can set the timezone multiple times at runtime and it will work.
  process.env.TZ = "America/New_York";
  expect(new Date().getTimezoneOffset()).toBe(240);
  expect(new Intl.DateTimeFormat().resolvedOptions().timeZone).toBe(
    "America/New_York",
  );
});
```

---

## Page: https://bun.sh/docs/test/coverage

Bun's test runner supports built-in _code coverage reporting_. This makes it easy to see how much of the codebase is covered by tests, and to find areas that are not currently well-tested.

## Enabling coverage

`bun:test` supports seeing which lines of code are covered by tests. To use this feature, pass `--coverage` to the CLI. It will print out a coverage report to the console:

```
$ bun test --coverage
-------------|---------|---------|-------------------
File         | % Funcs | % Lines | Uncovered Line #s
-------------|---------|---------|-------------------
All files    |   38.89 |   42.11 |
 index-0.ts  |   33.33 |   36.84 | 10-15,19-24
 index-1.ts  |   33.33 |   36.84 | 10-15,19-24
 index-10.ts |   33.33 |   36.84 | 10-15,19-24
 index-2.ts  |   33.33 |   36.84 | 10-15,19-24
 index-3.ts  |   33.33 |   36.84 | 10-15,19-24
 index-4.ts  |   33.33 |   36.84 | 10-15,19-24
 index-5.ts  |   33.33 |   36.84 | 10-15,19-24
 index-6.ts  |   33.33 |   36.84 | 10-15,19-24
 index-7.ts  |   33.33 |   36.84 | 10-15,19-24
 index-8.ts  |   33.33 |   36.84 | 10-15,19-24
 index-9.ts  |   33.33 |   36.84 | 10-15,19-24
 index.ts    |  100.00 |  100.00 |
-------------|---------|---------|-------------------
```

To always enable coverage reporting by default, add the following line to your `bunfig.toml`:

```toml
[test]
# always enable coverage
coverage = true
```

By default, coverage reports _include_ test files and _exclude_ sourcemaps. This is usually what you want, but it can be configured otherwise in `bunfig.toml`.

```toml
[test]
coverageSkipTestFiles = true # default false
```

### Coverage thresholds

It is possible to specify a coverage threshold in `bunfig.toml`. If your test suite does not meet or exceed this threshold, `bun test` will exit with a non-zero exit code to indicate the failure.

```toml
[test]
# to require 90% line-level and function-level coverage
coverageThreshold = 0.9

# to set different thresholds for lines, functions, and statements
coverageThreshold = { lines = 0.9, functions = 0.9, statements = 0.9 }
```

Setting any of these thresholds enables `fail_on_low_coverage`, causing the test run to fail if coverage is below the threshold.

### Exclude test files from coverage

By default, test files themselves are included in coverage reports. You can exclude them with:

```toml
[test]
coverageSkipTestFiles = true # default false
```

This will exclude files matching test patterns (e.g., `*.test.ts`, `*_spec.js`) from the coverage report.

### Sourcemaps

Internally, Bun transpiles all files by default, so Bun automatically generates an internal source map that maps lines of your original source code onto Bun's internal representation. If for any reason you want to disable this, set `test.coverageIgnoreSourcemaps` to `true`; this will rarely be desirable outside of advanced use cases.

```toml
[test]
coverageIgnoreSourcemaps = true # default false
```

### Coverage defaults

By default, coverage reports:

1. Exclude `node_modules` directories
2. Exclude files loaded via non-JS/TS loaders (e.g., `.css`, `.txt`) unless a custom JS loader is specified
3. Include test files themselves (can be disabled with `coverageSkipTestFiles = true`, as shown above)

### Coverage reporters

By default, coverage reports are printed to the console. For persistent code coverage reports in CI environments and for other tools, pass a `--coverage-reporter=lcov` CLI option or set the `coverageReporter` option in `bunfig.toml`.

```toml
[test]
coverageReporter = ["text", "lcov"]  # default ["text"]
coverageDir = "path/to/somewhere"    # default "coverage"
```

| Reporter | Description |
| --- | --- |
| `text` | Prints a text summary of the coverage to the console. |
| `lcov` | Saves coverage in lcov format. |

#### lcov coverage reporter

To generate an lcov report, use the `lcov` reporter. This will generate an `lcov.info` file in the `coverage` directory.

```toml
[test]
coverageReporter = "lcov"
```

---

## Page: https://bun.sh/docs/test/reporters

`bun test` supports different output formats through reporters. This document covers both built-in reporters and how to implement your own custom reporters.

## Built-in Reporters

### Default Console Reporter

By default, `bun test` outputs results to the console in a human-readable format:

```
test/package-json-lint.test.ts:
✓ test/package.json [0.88ms]
✓ test/js/third_party/grpc-js/package.json [0.18ms]
✓ test/js/third_party/svelte/package.json [0.21ms]
✓ test/js/third_party/express/package.json [1.05ms]

 4 pass
 0 fail
 4 expect() calls
Ran 4 tests in 1.44ms
```

When a terminal doesn't support colors, the output avoids non-ASCII characters:

```
test/package-json-lint.test.ts:
(pass) test/package.json [0.48ms]
(pass) test/js/third_party/grpc-js/package.json [0.10ms]
(pass) test/js/third_party/svelte/package.json [0.04ms]
(pass) test/js/third_party/express/package.json [0.04ms]

 4 pass
 0 fail
 4 expect() calls
Ran 4 tests across 1 files. [0.66ms]
```

### JUnit XML Reporter

For CI/CD environments, Bun supports generating JUnit XML reports. JUnit XML is a widely adopted format for test results that can be parsed by many CI/CD systems, including GitLab, Jenkins, and others.

#### Using the JUnit Reporter

To generate a JUnit XML report, use the `--reporter=junit` flag along with `--reporter-outfile` to specify the output file:

```sh
bun test --reporter=junit --reporter-outfile=./junit.xml
```

This continues to output to the console as usual while also writing the JUnit XML report to the specified path at the end of the test run.

#### Configuring via bunfig.toml

You can also configure the JUnit reporter in your `bunfig.toml` file:

```toml
[test.reporter]
junit = "path/to/junit.xml" # Output path for JUnit XML report
```

#### Environment Variables in JUnit Reports

The JUnit reporter automatically includes environment information as `<properties>` in the XML output. This can be helpful for tracking test runs in CI environments. Specifically, it includes the following environment variables when available:

| Environment Variable | Property Name | Description |
| --- | --- | --- |
| `GITHUB_RUN_ID`, `GITHUB_SERVER_URL`, `GITHUB_REPOSITORY`, `CI_JOB_URL` | `ci` | CI build information |
| `GITHUB_SHA`, `CI_COMMIT_SHA`, `GIT_SHA` | `commit` | Git commit identifiers |
| System hostname | `hostname` | Machine hostname |

This makes it easier to track which environment and commit a particular test run was for.
#### Current Limitations

The JUnit reporter currently has a few limitations that will be addressed in future updates:

* `stdout` and `stderr` output from individual tests are not included in the report
* Precise timestamp fields per test case are not included

### GitHub Actions reporter

`bun test` automatically detects when it's running inside GitHub Actions and emits GitHub Actions annotations to the console directly. No special configuration is needed beyond installing Bun and running `bun test`. For a GitHub Actions workflow configuration example, see the CI/CD integration section of the CLI documentation.

## Custom Reporters

Bun allows developers to implement custom test reporters by extending the WebKit Inspector Protocol with additional testing-specific domains.

### Inspector Protocol for Testing

To support test reporting, Bun extends the standard WebKit Inspector Protocol with two custom domains:

1. **TestReporter**: Reports test discovery, execution start, and completion events
2. **LifecycleReporter**: Reports errors and exceptions during test execution

These extensions allow you to build custom reporting tools that can receive detailed information about test execution in real time.

### Key Events

Custom reporters can listen for these key events:

* `TestReporter.found`: Emitted when a test is discovered
* `TestReporter.start`: Emitted when a test starts running
* `TestReporter.end`: Emitted when a test completes
* `Console.messageAdded`: Emitted when console output occurs during a test
* `LifecycleReporter.error`: Emitted when an error or exception occurs

---

## Page: https://bun.sh/docs/test/configuration

Configure `bun test` via the `bunfig.toml` file and command-line options. This page documents the available configuration options for `bun test`.

## bunfig.toml options

You can configure `bun test` behavior by adding a `[test]` section to your `bunfig.toml` file:

```toml
[test]
# Options go here
```

### Test discovery

#### root

The `root` option specifies a root directory for test discovery, overriding the default behavior of scanning from the project root.

```toml
[test]
root = "src" # Only scan for tests in the src directory
```

### Reporters

#### reporter.junit

Configure the JUnit reporter output file path directly in the config file:

```toml
[test.reporter]
junit = "path/to/junit.xml" # Output path for JUnit XML report
```

This complements the `--reporter=junit` and `--reporter-outfile` CLI flags.

### Memory usage

#### smol

Enable the `--smol` memory-saving mode specifically for the test runner:

```toml
[test]
smol = true # Reduce memory usage during test runs
```

This is equivalent to using the `--smol` flag on the command line.

### Coverage options

In addition to the options documented in the coverage documentation, the following options are available:

#### coverageSkipTestFiles

Exclude files matching test patterns (e.g., `*.test.ts`) from the coverage report:

```toml
[test]
coverageSkipTestFiles = true # Exclude test files from coverage reports
```

#### coverageThreshold (object form)

The coverage threshold can be specified either as a number (as shown in the coverage documentation) or as an object with specific thresholds:

```toml
[test]
# Set specific thresholds for different coverage metrics
coverageThreshold = { lines = 0.9, functions = 0.8, statements = 0.85 }
```

Setting any of these enables `fail_on_low_coverage`, causing the test run to fail if coverage is below the threshold.

#### coverageIgnoreSourcemaps

Internally, Bun transpiles every file. That means code coverage must also pass through sourcemaps before it can be reported.
We expose this as a flag so you can opt out of that behavior, but the results can be confusing: during transpilation, Bun may move code around and rename variables. This option is mostly useful for debugging coverage issues.

```toml
[test]
coverageIgnoreSourcemaps = true # Don't use sourcemaps for coverage analysis
```

When using this option, you probably want to stick a `// @bun` comment at the top of the source file to opt out of the transpilation process.

### Install settings inheritance

The `bun test` command inherits relevant network and installation configuration (registry, cafile, prefer, exact, etc.) from the `[install]` section of `bunfig.toml`. This is important if tests need to interact with private registries or require specific install behaviors triggered during the test run.

---

## Page: https://bun.sh/docs/test/runtime-behavior

`bun test` is deeply integrated with Bun's runtime. This is part of what makes `bun test` fast and simple to use.

#### `$NODE_ENV` environment variable

`bun test` automatically sets `$NODE_ENV` to `"test"` unless it's already set in the environment or via `.env` files. This is standard behavior for most test runners and helps ensure consistent test behavior.

```ts
import { test, expect } from "bun:test";

test("NODE_ENV is set to test", () => {
  expect(process.env.NODE_ENV).toBe("test");
});
```

#### `$TZ` environment variable

By default, all `bun test` runs use UTC (`Etc/UTC`) as the time zone unless overridden by the `TZ` environment variable. This ensures consistent date and time behavior across different development environments.

#### Test Timeouts

Each test has a default timeout of 5000ms (5 seconds) if not explicitly overridden. Tests that exceed this timeout will fail. This can be changed globally with the `--timeout` flag (see the example after the next section) or per-test as the third parameter to the test function.

## Error Handling

### Unhandled Errors

`bun test` tracks unhandled promise rejections and errors that occur between tests. If such errors occur, the final exit code will be non-zero (specifically, the count of such errors), even if all tests pass. This helps catch errors in asynchronous code that might otherwise go unnoticed:

```ts
import { test } from "bun:test";

test("test 1", () => {
  // This test passes
});

// This error happens outside any test
setTimeout(() => {
  throw new Error("Unhandled error");
}, 0);

test("test 2", () => {
  // This test also passes
});

// The test run will still fail with a non-zero exit code
// because of the unhandled error
```

Internally, this handling takes precedence over `process.on("unhandledRejection")` and `process.on("uncaughtException")`, which makes it simpler to integrate with existing code.
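As referenced under Test Timeouts above, the default can be raised for an entire run with the `--timeout` flag (value in milliseconds):

```sh
# Allow every test up to 10 seconds before it fails with a timeout.
bun test --timeout 10000
```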
## Using General CLI Flags with Tests

Several Bun CLI flags can be used with `bun test` to modify its behavior:

### Memory Usage

* `--smol`: Reduces memory usage for the test runner VM

### Debugging

* `--inspect`, `--inspect-brk`: Attaches the debugger to the test runner process

### Module Loading

* `--preload`: Runs scripts before test files (useful for global setup/mocks)
* `--define`: Sets compile-time constants
* `--loader`: Configures custom loaders
* `--tsconfig-override`: Uses a different tsconfig
* `--conditions`: Sets package.json conditions for module resolution
* `--env-file`: Loads environment variables for tests
* `--prefer-offline`, `--frozen-lockfile`, etc.: Affect any network requests or auto-installs during test execution

## Watch and Hot Reloading

When running `bun test` with the `--watch` flag, the test runner will watch for file changes and re-run affected tests. The `--hot` flag provides similar functionality but is more aggressive about trying to preserve state between runs. For most test scenarios, `--watch` is the recommended option.

## Global Variables

The following globals are automatically available in test files without importing (though they can be imported from `bun:test` if preferred):

* `test`, `it`: Define tests
* `describe`: Group tests
* `expect`: Make assertions
* `beforeAll`, `beforeEach`, `afterAll`, `afterEach`: Lifecycle hooks
* `jest`: Jest global object
* `vi`: Vitest compatibility alias for common jest methods

---

## Page: https://bun.sh/docs/test/discovery

`bun test`'s file discovery mechanism determines which files to run as tests. Understanding how it works helps you structure your test files effectively.

## Default Discovery Logic

By default, `bun test` recursively searches the project directory for files that match specific patterns:

* `*.test.{js|jsx|ts|tsx}`: files ending with `.test.js`, `.test.jsx`, `.test.ts`, or `.test.tsx`
* `*_test.{js|jsx|ts|tsx}`: files ending with `_test.js`, `_test.jsx`, `_test.ts`, or `_test.tsx`
* `*.spec.{js|jsx|ts|tsx}`: files ending with `.spec.js`, `.spec.jsx`, `.spec.ts`, or `.spec.tsx`
* `*_spec.{js|jsx|ts|tsx}`: files ending with `_spec.js`, `_spec.jsx`, `_spec.ts`, or `_spec.tsx`

## Exclusions

By default, `bun test` ignores:

* `node_modules` directories
* Hidden directories (those starting with a period `.`)
* Files that don't have JavaScript-like extensions (based on available loaders)

## Customizing Test Discovery

### Positional Arguments as Filters

You can filter which test files run by passing additional positional arguments to `bun test`:

```sh
bun test <filter> <filter> ...
```

Any test file with a path that contains one of the filters will run. These filters are simple substring matches, not glob patterns.

For example, to run all tests in a `utils` directory:

```sh
bun test utils
```

This would match files like `src/utils/string.test.ts` and `lib/utils/array_test.js`.

### Specifying Exact File Paths

To run a specific file in the test runner, make sure the path starts with `./` or `/` to distinguish it from a filter name:

```sh
bun test ./test/specific-file.test.ts
```

### Filter by Test Name

To filter tests by name rather than file path, use the `-t`/`--test-name-pattern` flag with a regex pattern:

```sh
# run all tests with "addition" in the name
bun test --test-name-pattern addition
```

The pattern is matched against a concatenated string of the test name prepended with the labels of all its parent describe blocks, separated by spaces.
For example, a test defined as:

```ts
describe("Math", () => {
  describe("operations", () => {
    test("should add correctly", () => {
      // ...
    });
  });
});
```

would be matched against the string "Math operations should add correctly".

### Changing the Root Directory

By default, Bun looks for test files starting from the current working directory. You can change this with the `root` option in your `bunfig.toml`:

```toml
[test]
root = "src" # Only scan for tests in the src directory
```

## Execution Order

Tests are run in the following order:

1. Test files are executed sequentially (not in parallel)
2. Within each file, tests run sequentially based on their definition order

---

## Page: https://bun.sh/docs/test/dom

Bun's test runner plays well with existing component and DOM testing libraries, including React Testing Library and `happy-dom`.

## `happy-dom`

For writing headless tests for your frontend code and components, we recommend `happy-dom`. Happy DOM implements a complete set of HTML and DOM APIs in plain JavaScript, making it possible to simulate a browser environment with high fidelity.

To get started, install the `@happy-dom/global-registrator` package as a dev dependency.

```sh
bun add -d @happy-dom/global-registrator
```

We'll use Bun's _preload_ functionality to register the `happy-dom` globals before running our tests. This step makes browser APIs like `document` available in the global scope. Create a file called `happydom.ts` in the root of your project and add the following code:

```ts
import { GlobalRegistrator } from "@happy-dom/global-registrator";

GlobalRegistrator.register();
```

To preload this file before `bun test`, open or create a `bunfig.toml` file and add the following lines.

```toml
[test]
preload = "./happydom.ts"
```

This will execute `happydom.ts` when you run `bun test`. Now you can write tests that use browser APIs like `document` and `window`.

```ts
// dom.test.ts
import { test, expect } from "bun:test";

test("dom test", () => {
  document.body.innerHTML = `<button>My button</button>`;
  const button = document.querySelector("button");
  expect(button?.innerText).toEqual("My button");
});
```

Depending on your `tsconfig.json` setup, you may see a `"Cannot find name 'document'"` type error in the code above. To "inject" the types for `document` and other browser APIs, add the following triple-slash directive to the top of any test file.

```ts
// dom.test.ts
/// <reference lib="dom" />

import { test, expect } from "bun:test";

test("dom test", () => {
  document.body.innerHTML = `<button>My button</button>`;
  const button = document.querySelector("button");
  expect(button?.innerText).toEqual("My button");
});
```

Let's run this test with `bun test`:

```sh
$ bun test
bun test v1.2.8

dom.test.ts:
✓ dom test [0.82ms]

 1 pass
 0 fail
 1 expect() calls
Ran 1 tests across 1 files. 1 total [125.00ms]
```
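When many DOM tests share the registered `happy-dom` environment, resetting the document between tests keeps them independent. A minimal sketch; this reset strategy is a suggestion, not from the Bun docs:

```ts
import { afterEach } from "bun:test";

// Clear anything a test left in the DOM so the next test starts clean.
afterEach(() => {
  document.body.innerHTML = "";
});
```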