# CSS Code Coverage

Takes your generated coverage files and turns them into something actually usable. Accepts coverage reports generated by browsers (Edge/Chrome/Chromium), Puppeteer, or Playwright.

Features include:

- 🤩 Prettifies CSS for easy inspection and updates coverage ranges after prettification
- 🪄 Marks each line of each CSS file as covered or uncovered
- 📑 Combines a stylesheet that's reported over multiple URLs into a single one, merging its coverage ranges
- 🗂️ Creates a report of total line coverage, byte coverage, and coverage details per individual stylesheet discovered

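To make the range-merging feature concrete: overlapping or adjacent `{ start, end }` ranges reported for the same stylesheet collapse into one. A minimal sketch of that idea (illustrative only, not the library's internal implementation):

```typescript
type Range = { start: number; end: number }

// Merge overlapping or touching byte ranges into the smallest equivalent set.
function merge_ranges(ranges: Range[]): Range[] {
	let sorted = [...ranges].sort((a, b) => a.start - b.start)
	let merged: Range[] = []
	for (let range of sorted) {
		let last = merged[merged.length - 1]
		if (last && range.start <= last.end) {
			// Overlaps or touches the previous range: extend it
			last.end = Math.max(last.end, range.end)
		} else {
			merged.push({ ...range })
		}
	}
	return merged
}
```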
## Installation

```sh
npm install @projectwallace/css-code-coverage
```

## Usage

### Prerequisites

1. You have collected browser coverage data of your CSS. There are several ways to do this:

   1. In the browser DevTools in [Edge](https://learn.microsoft.com/en-us/microsoft-edge/devtools-guide-chromium/coverage/)/[Chrome](https://developer.chrome.com/docs/devtools/coverage/)/Chromium
   1. Via the `coverage.startCSSCoverage()` API that headless browsers like [Playwright](https://playwright.dev/docs/api/class-coverage#coverage-start-css-coverage) or [Puppeteer](https://pptr.dev/api/puppeteer.coverage.startcsscoverage/) provide.

   Either way you end up with one or more JSON files that contain coverage data.

   ```ts
   // fs.glob requires Node 22 or newer
   import fs from 'node:fs/promises'

   // Read a single JSON file or a folder full of JSON files with coverage data
   // Coverage data looks like this:
   // {
   //   url: 'https://www.projectwallace.com/style.css',
   //   text: 'a { color: blue; text-decoration: underline; }', etc.
   //   ranges: [
   //     { start: 0, end: 46 }
   //   ]
   // }
   let coverage_data = []
   for await (let file of fs.glob('./css-coverage/**/*.json')) {
   	let json_content = await fs.readFile(file, 'utf-8')
   	coverage_data.push(JSON.parse(json_content))
   }
   ```

1. You provide an HTML parser that we use to extract CSS from the HTML, in case the browser reports a full HTML document instead of plain CSS contents. Depending on where you run this analysis you can use:

   1. Browser:
      ```ts
      function parse_html(html) {
      	return new DOMParser().parseFromString(html, 'text/html')
      }
      ```
   1. Node (using [linkedom](https://github.com/WebReflection/linkedom) in this example):

      ```ts
      // $ npm install linkedom
      import { DOMParser } from 'linkedom'

      function parse_html(html: string) {
      	return new DOMParser().parseFromString(html, 'text/html')
      }
      ```

### Bringing it together

```ts
import { calculate_coverage } from '@projectwallace/css-code-coverage'

// coverage_data and parse_html come from the steps above
let report = calculate_coverage(coverage_data, parse_html)
```
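For reference, the byte coverage in the report can be understood as covered bytes divided by total stylesheet length. A rough sketch of that calculation (the function below is a hypothetical illustration, not part of the library's API):

```typescript
type CoverageEntry = {
	url: string
	text: string
	ranges: { start: number; end: number }[]
}

// Share of a stylesheet's characters that fall inside a covered range.
// Assumes ranges don't overlap, which is how browsers report them.
// Note: string length stands in for byte length here (exact for ASCII).
function byte_coverage(entry: CoverageEntry): number {
	if (entry.text.length === 0) return 0
	let covered = entry.ranges.reduce((total, { start, end }) => total + (end - start), 0)
	return covered / entry.text.length
}
```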