Unlighthouse discovers URLs through multiple methods:
- `site` - Provided by the `--site` flag or config.
- `urls` - Provided by the `--urls` flag or `urls` on the provider.
- `robotsTxt` - Reads `robots.txt`, if it exists. Provides sitemap URLs and disallowed paths.
- `sitemap` - Reads `sitemap.xml`, if it exists.
- `crawler` - Inspects internal links.

When a `robots.txt` is found, Unlighthouse will attempt to read the sitemap and disallowed paths from it.
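As a rough sketch, these discovery methods map onto the scanner config like so (the booleans shown are the defaults implied on this page; `site` mirrors the `--site` flag):

```ts
import { defineUnlighthouseConfig } from 'unlighthouse/config'

export default defineUnlighthouseConfig({
  site: 'https://example.com', // the site to scan
  scanner: {
    robotsTxt: true, // read robots.txt for sitemap URLs and disallowed paths
    sitemap: true, // read sitemap.xml
    crawler: true, // follow internal links
  },
})
```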
You may not want to use `robots.txt` in all cases, for example, when you want to scan URLs that it disallows.
```ts
import { defineUnlighthouseConfig } from 'unlighthouse/config'

export default defineUnlighthouseConfig({
  scanner: {
    // disable robots.txt scanning
    robotsTxt: false,
  },
})
```
By default, the sitemap path will be read from your `/robots.txt`. Otherwise, it will fall back to using `/sitemap.xml`.
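For example, a `robots.txt` like this hypothetical one would provide both a sitemap URL and a disallowed path:

```
Sitemap: https://example.com/sitemap.xml
Disallow: /admin
```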
Note: When a sitemap with over 50 paths exists, the crawler will be disabled.
You may provide an array of sitemap paths to scan.
```ts
export default defineUnlighthouseConfig({
  scanner: {
    sitemap: [
      '/sitemap.xml',
      '/sitemap2.xml',
    ],
  },
})
```
If you know your site doesn't have a sitemap, it may make sense to disable it.
```ts
export default defineUnlighthouseConfig({
  scanner: {
    // disable sitemap scanning
    sitemap: false,
  },
})
```
When enabled, the crawler inspects the HTML payload of each page and extracts internal links. These links are queued and scanned if they haven't been already.
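Conceptually, the extraction step looks something like this rough sketch (not Unlighthouse's actual crawler code):

```ts
// Conceptual sketch of internal link extraction.
function extractInternalLinks(html: string, origin: string): string[] {
  const links = new Set<string>()
  // Naive href matching; a real crawler parses the DOM properly.
  for (const match of html.matchAll(/href="([^"#]+)"/g)) {
    const url = new URL(match[1], origin)
    // Keep only same-origin links, deduplicated by pathname.
    if (url.origin === origin)
      links.add(url.pathname)
  }
  return [...links]
}
```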
If you have many pages with many internal links, it may be a good idea to disable crawling.
```ts
export default defineUnlighthouseConfig({
  scanner: {
    // disable internal link crawling
    crawler: false,
  },
})
```
While not recommended for most use cases, you may provide relative URLs within your configuration file, or use the --urls flag.
This will disable the crawler and sitemap scanning.
URLs can be provided statically.
```ts
export default defineUnlighthouseConfig({
  urls: [
    '/about',
    '/other-page',
  ],
})
```
Alternatively, you can provide a function or promise that resolves to the URLs.
```ts
export default defineUnlighthouseConfig({
  urls: async () => await getUrls(),
})
```
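Here, `getUrls` is your own function. A hypothetical sketch that fetches the paths from an API might look like this:

```ts
// Hypothetical helper - fetch the paths to scan from your own API.
async function getUrls(): Promise<string[]> {
  const res = await fetch('https://api.example.com/pages')
  const pages: { path: string }[] = await res.json()
  return pages.map(page => page.path)
}
```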
Specify explicit relative URLs as a comma-separated list.
```bash
unlighthouse --site https://example.com --urls /about,/other-page
```