Integrations

Lighthouse CLI for Entire Sites

The CLI provides direct website scanning with a rich interactive interface. Ideal for one-off audits and development testing.

Installation

npm install -g @unlighthouse/cli

One-time run (no install)

npx unlighthouse --site example.com

Unlighthouse wraps the Lighthouse npm package to enable site-wide scanning.

Unlighthouse CLI vs Lighthouse CLI

Feature             lighthouse CLI    unlighthouse CLI
Package             lighthouse        @unlighthouse/cli
Pages per run       1                 Unlimited
URL discovery       Manual            Automatic
Interactive UI      No                Yes
Caching             No                Yes
Dynamic sampling    No                Yes
CI/CD ready         Manual setup      Built-in

When to use Lighthouse CLI

  • Single page audits
  • Quick manual checks
  • Debugging specific pages

When to use Unlighthouse CLI

  • Full site audits
  • Pre-launch checks
  • Ongoing monitoring
  • CI/CD integration
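For the CI/CD use case, a minimal sketch using the companion CI package is shown below. This assumes the @unlighthouse/ci package exposes an unlighthouse-ci binary with a --budget flag (a minimum score from 0-100 that fails the build when not met); check the CI integration docs for the authoritative interface.

```shell
# Hedged sketch: assumes @unlighthouse/ci provides the unlighthouse-ci
# binary and a --budget flag. The run exits non-zero (failing the build)
# if any scanned page scores below the budget.
npx @unlighthouse/ci --site example.com --budget 75
```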

Usage

Once installed globally, you'll have access to Unlighthouse through the unlighthouse binary.

Run the default scan.

unlighthouse --site example.com --debug

Run without caching, throttle the requests, and take 3 samples of each page.

unlighthouse --site example.com --debug --no-cache --throttle --samples 3

Configuration

The CLI can be configured either through CLI arguments or through a config file.

See the Configuration section for more details and the guides.
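As a sketch of how the CLI flags map onto config-file keys, the following is roughly equivalent to running unlighthouse --site example.com --desktop --throttle --samples 3 --no-cache. The key names here are assumptions based on the options listed below; verify them against the Configuration section.

```typescript
// Hedged sketch: key names (cache, scanner.device, scanner.throttle,
// scanner.samples) are assumptions -- check the Configuration docs.
import { defineUnlighthouseConfig } from 'unlighthouse/config'

export default defineUnlighthouseConfig({
  site: 'example.com',
  cache: false,
  scanner: {
    device: 'desktop',
    throttle: true,
    samples: 3,
  },
})
```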

CLI Options

-v, --version                      Display version number.
--site <url>                       Host URL to scan.
--root <path>                      Define the project root. Useful for changing where the config is read from or setting up sampling.
--config-file <path>               Path to the config file.
--output-path <path>               Path to save the contents of the client and reports to.
--cache                            Enable caching.
--no-cache                         Disable caching.
--desktop                          Simulate a desktop device.
--mobile                           Simulate a mobile device.
--user-agent <user-agent>          Specify a top-level user agent that all requests will use.
--router-prefix <path>             The URL path prefix for the client and API to run from.
--throttle                         Enable throttling.
--samples <samples>                Specify the number of samples to run.
--sitemaps <sitemaps>              Comma-separated list of sitemaps to use for scanning. Providing these will override any found in robots.txt.
--urls <urls>                      Specify explicit relative paths to scan as a comma-separated list, disabling the link crawler.
                                   e.g. unlighthouse --site unlighthouse.dev --urls /guide,/api,/config
--exclude-urls <urls>              Relative paths (string or regex) to exclude as a comma-separated list.
                                   e.g. unlighthouse --site unlighthouse.dev --exclude-urls /guide/.*,/api/.*
--include-urls <urls>              Relative paths (string or regex) to include as a comma-separated list.
                                   e.g. unlighthouse --site unlighthouse.dev --include-urls /guide/.*
--enable-javascript                When inspecting the HTML, wait for the JavaScript to execute. Useful for SPAs.
--disable-javascript               When inspecting the HTML, don't wait for the JavaScript to execute.
--enable-i18n-pages                Enable scanning pages which use x-default.
--disable-i18n-pages               Disable scanning pages which use x-default.
--disable-robots-txt               Disable robots.txt crawling.
--disable-sitemap                  Disable sitemap.xml crawling.
--disable-dynamic-sampling         Disable dynamic sampling of paths.
--extra-headers <headers>          Extra headers to send with requests. Example: --extra-headers foo=bar,bar=foo
--cookies <cookies>                Cookies to send with requests. Example: --cookies foo=bar;bar=foo
--auth <auth>                      Basic auth to send with requests. Example: --auth username:password
--default-query-params <params>    Default query params to send with requests. Example: --default-query-params foo=bar,bar=foo
-d, --debug                        Enable debugging in the logger.
-h, --help                         Display available CLI options.
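Putting several of these options together, a scan of a site behind basic auth might look like the following. The host, credentials, cookie, and excluded paths are placeholders; only the flags themselves come from the list above.

```shell
# Hypothetical invocation combining documented flags: basic auth,
# a session cookie, excluded admin routes, and desktop emulation.
unlighthouse --site staging.example.com \
  --auth username:password \
  --cookies "session=abc123" \
  --exclude-urls "/admin/.*" \
  --desktop \
  --debug
```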

Config File

If you want to configure Unlighthouse, you can create an unlighthouse.config.ts file in your current working directory.

import { defineUnlighthouseConfig } from 'unlighthouse/config'

export default defineUnlighthouseConfig({
  site: 'example.com',
  debug: true,
  scanner: {
    device: 'desktop',
  },
})