Configure local Preact dependencies #12

Merged · 19 commits · Mar 28, 2024
4 changes: 4 additions & 0 deletions .gitignore
@@ -2,6 +2,10 @@
preact-bench.config.js
out

# Added here so tools like prettier ignore this file, even though we have
# force-added it to the git repository.
pnpm-lock.yaml


# Logs
logs
10 changes: 1 addition & 9 deletions .prettierrc
@@ -1,13 +1,5 @@
{
"singleQuote": false,
"arrowParens": "always",
"trailingComma": "all",
"overrides": [
{
"files": "pnpm-lock.yaml",
"options": {
"singleQuote": true
}
}
]
"trailingComma": "all"
}
52 changes: 52 additions & 0 deletions README.md
@@ -54,3 +54,55 @@ $ pnpm bench --help
$ preact-bench bench apps/todo/todo.html -d preact@local,signals@local -d preact@main,signals@local -i preact-signals -n 2 -t 0
$ preact-bench bench apps/todo/todo.html -d preact@local -d preact@main --trace
```

## Benchmarking within another repository

This repository is intended to be included as a submodule in another repository, so that you can run benchmarks against that repository's local changes. The `dev` script starts a benchmarking dev server that is useful while developing such changes.

```
$ pnpm dev --help

Description
Run a dev server to interactively run a benchmark while developing changes

Usage
$ preact-bench dev [benchmark_file] [options]

Options
--interactive Prompt for options (default false)
-d, --dependency What group of dependencies (comma-delimited) and version to
use for a run of the benchmark (package@version) (default latest)
-i, --impl What implementation of the benchmark to run (default preact-class)
-n, --sample-size Minimum number of times to run each benchmark (default 25)
-h, --horizon The degrees of difference to try and resolve when auto-sampling
("N%" or "Nms", comma-delimited) (default 5%)
-t, --timeout Maximum number of minutes to spend auto-sampling (default 1)
--trace Enable performance tracing (Chrome only) (default false)
--debug Enable debug logging (default false)
-b, --browser Which browser to run the benchmarks in: chrome, chrome-headless,
firefox, firefox-headless, safari, edge (default chrome-headless)
-p, --port What port to run the benchmark server on (default 5173)
-h, --help Displays this message

Examples
$ preact-bench dev apps/todo/todo.html -d preact@local -d preact@main -i preact-hooks
$ preact-bench dev apps/todo/todo.html -d preact@local -d preact@local-pinned -i preact-hooks
```
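For example, a host repository might wire these benchmarks in as a submodule along the following lines. This is an illustrative sketch: the URL placeholder and the `benchmarks` directory name are assumptions, not a prescribed setup.

```
$ git submodule add <benchmarks-repo-url> benchmarks
$ cd benchmarks
$ pnpm install
$ pnpm dev apps/todo/todo.html -d preact@local -d preact@main
```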

This command shares the same options as the `bench` command. Once you start the server, you can press `b⏎` to rebuild your local Preact repository (or whatever repository this one is embedded in) and re-run the configured benchmarks.

```text
$ pnpm dev apps/many-updates/many-updates.html -i preact -d preact@local -d preact@local-pinned -n 2 -t 0

> @preact/benchmarks@0.0.1 dev /Users/andre_wiggins/github/preactjs/preact-v10/benchmarks
> node cli/bin/preact-bench.js dev "apps/many-updates/many-updates.html" "-i" "preact" "-d" "preact@local" "-d" "preact@local-pinned" "-n" "2" "-t" "0"

➜ Local: http://localhost:5173/
➜ Network: use --host to expose
➜ press p + enter Pin current local changes into local-pinned
➜ press b + enter run Benchmarks
➜ press h + enter show help

```

You can also press `p⏎` to build your local repo's changes and copy them into the relevant `local-pinned` directory. This is useful when you want to compare different sets of local changes against each other.
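Put together, a compare session might look like the following. This is an illustrative sketch, not captured output:

```text
$ pnpm dev apps/todo/todo.html -i preact -d preact@local -d preact@local-pinned
# 1. press p + enter to pin the current local build as the local-pinned baseline
# 2. edit your local Preact source
# 3. press b + enter to rebuild and benchmark local vs. local-pinned
```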
124 changes: 75 additions & 49 deletions cli/bin/preact-bench.js
@@ -5,7 +5,11 @@ import { readdir } from "node:fs/promises";
import inquirer from "inquirer";
import sade from "sade";
import { analyze } from "../src/analyze.js";
import { runBenchServer, runBenchmarks } from "../src/index.js";
import {
runBenchServer,
runBenchmarks,
runBenchmarksInteractively,
} from "../src/index.js";
import { getAppConfig, getDepConfig } from "../src/config.js";
import {
baseTraceLogDir,
@@ -322,7 +326,12 @@ async function analyzeAction(requestedBench) {
return;
}

const benchmarkNames = await readdir(baseTraceLogDir());
const benchmarkNames = [];
for (let dirName of await readdir(baseTraceLogDir())) {
for (let benchmarkName of await readdir(baseTraceLogDir(dirName))) {
benchmarkNames.push(`${dirName}/${benchmarkName}`);
}
}

/** @type {string} */
let selectedBench;
@@ -360,8 +369,56 @@ async function analyzeAction(requestedBench) {

const prog = sade("preact-bench").version("0.0.0");

prog
.command("bench [benchmark_file]")
/** @type {(cmd: import('sade').Sade) => import('sade').Sade} */
function setupBenchmarkCLIArgs(cmd) {
cmd
.option("--interactive", "Prompt for options", false)
.option(
"-d, --dependency",
"What group of dependencies (comma-delimited) and version to use for a run of the benchmark (package@version)",
defaultBenchOptions.dependency,
)
.option(
"-i, --impl",
"What implementation of the benchmark to run",
defaultBenchOptions.impl,
)
.option(
"-n, --sample-size",
"Minimum number of times to run each benchmark",
defaultBenchOptions["sample-size"],
)
.option(
"-h, --horizon",
'The degrees of difference to try and resolve when auto-sampling ("N%" or "Nms", comma-delimited)',
defaultBenchOptions.horizon,
)
.option(
"-t, --timeout",
"Maximum number of minutes to spend auto-sampling",
defaultBenchOptions.timeout,
)
.option(
"--trace",
"Enable performance tracing (Chrome only)",
defaultBenchOptions.trace,
)
.option("--debug", "Enable debug logging", defaultBenchOptions.debug)
.option(
"-b, --browser",
"Which browser to run the benchmarks in: chrome, chrome-headless, firefox, firefox-headless, safari, edge",
defaultBenchOptions.browser,
)
.option(
"-p, --port",
"What port to run the benchmark server on",
defaultBenchOptions.port,
);

return cmd;
}

setupBenchmarkCLIArgs(prog.command("bench [benchmark_file]"))
.describe(
"Run the given benchmark using the specified implementation with the specified dependencies. If no benchmark file, no dependencies, or no implementations are specified, will prompt for one.",
)
@@ -375,57 +432,26 @@ prog
"bench apps/todo/todo.html -d preact@local,signals@local -d preact@main,signals@local -i preact-signals -n 2 -t 0",
)
.example("bench apps/todo/todo.html -d preact@local -d preact@main --trace")
.option(
"--interactive",
"Prompt for options. Defaults to true of no benchmark file, dependencies, or implementations are specified",
defaultBenchOptions.interactive,
)
.option(
"-d, --dependency",
"What group of dependencies (comma-delimited) and version to use for a run of the benchmark (package@version)",
defaultBenchOptions.dependency,
)
.option(
"-i, --impl",
"What implementation of the benchmark to run",
defaultBenchOptions.impl,
)
.option(
"-n, --sample-size",
"Minimum number of times to run each benchmark",
defaultBenchOptions["sample-size"],
)
.option(
"-h, --horizon",
'The degrees of difference to try and resolve when auto-sampling ("N%" or "Nms", comma-delimited)',
defaultBenchOptions.horizon,
)
.option(
"-t, --timeout",
"Maximum number of minutes to spend auto-sampling",
defaultBenchOptions.timeout,
)
.option(
"--trace",
"Enable performance tracing (Chrome only)",
defaultBenchOptions.trace,
.action(benchAction);

setupBenchmarkCLIArgs(prog.command("dev [benchmark_file]"))
.describe(
"Run a dev server to interactively run a benchmark while developing changes",
)
.option("--debug", "Enable debug logging", defaultBenchOptions.debug)
.option(
"-b, --browser",
"Which browser to run the benchmarks in: chrome, chrome-headless, firefox, firefox-headless, safari, edge",
defaultBenchOptions.browser,
.example(
"dev apps/todo/todo.html -d preact@local -d preact@main -i preact-hooks",
)
.option(
"-p, --port",
"What port to run the benchmark server on",
defaultBenchOptions.port,
.example(
"dev apps/todo/todo.html -d preact@local -d preact@local-pinned -i preact-hooks",
)
.action(benchAction);
.action((benchmarkFile, args) => {
const benchConfig = parseBenchmarkCLIArgs(args);
runBenchmarksInteractively(benchmarkFile, benchConfig);
});

prog
.command("start")
.describe("Run a dev server - useful when building benchmarks")
.describe("Run a server to serve benchmark HTML files")
.option(
"-p, --port",
"What port to run the benchmark server on",
2 changes: 1 addition & 1 deletion cli/package.json
@@ -47,8 +47,8 @@
"devDependencies": {
"@types/d3-array": "^3.2.1",
"@types/d3-scale": "^4.0.8",
"@types/node": "^20.11.6",
"@types/inquirer": "^9.0.7",
"@types/node": "^20.11.6",
"@types/selenium-webdriver": "^4.1.21"
}
}
17 changes: 7 additions & 10 deletions cli/src/analyze.js
@@ -1,10 +1,6 @@
import { existsSync } from "fs";
import { readFile, readdir } from "fs/promises";
import {
baseTraceLogDir,
makeBenchmarkLabel,
parseBenchmarkId,
} from "./utils.js";
import { baseTraceLogDir, makeBenchmarkLabel } from "./utils.js";

import { summaryStats, computeDifferences } from "tachometer/lib/stats.js";
import {
@@ -307,21 +303,22 @@ export async function analyze(selectedBench) {
/** @type {Map<string, ResultStats[]>} */
const resultStatsMap = new Map();
for (let benchName of benchmarkNames) {
const { implId, dependencies } = parseBenchmarkId(benchName);
const logDir = baseTraceLogDir(selectedBench, benchName);

let logFilePaths;
try {
logFilePaths = (await readdir(logDir)).map((fn) =>
baseTraceLogDir(selectedBench, benchName, fn),
);
logFilePaths = (await readdir(logDir, { withFileTypes: true }))
.filter((dirEntry) => dirEntry.isFile())
.map((dirEntry) =>
baseTraceLogDir(selectedBench, benchName, dirEntry.name),
);
} catch (e) {
// If directory doesn't exist or we fail to read it, just skip
continue;
}

const resultStats = await getStatsFromLogs(
makeBenchmarkLabel(implId, dependencies),
makeBenchmarkLabel(benchName),
logFilePaths,
getDurationThread,
isDurationLog,
13 changes: 11 additions & 2 deletions cli/src/config.js
@@ -1,4 +1,4 @@
import fs from "fs";
import fs, { existsSync } from "fs";
import { readdir } from "node:fs/promises";
import path from "node:path";
import { appFilePath, depFilePath } from "./utils.js";
@@ -117,7 +117,16 @@ export async function getDepConfig(useCache = false) {
const version = depDir.slice(index + 1);

if (!dependencies[depName]) dependencies[depName] = {};
dependencies[depName][version] = depDir;

const scriptsPath = depFilePath(depDir, "scripts.js");
if (existsSync(scriptsPath)) {
dependencies[depName][version] = {
path: depDir,
scriptsPath,
};
} else {
dependencies[depName][version] = depDir;
}
}

depConfigCache = dependencies;
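Note that after this change, a version entry in the dependency config is either a plain path string or an object carrying a `scriptsPath`. A consumer of `getDepConfig()` therefore needs to handle both shapes; a minimal sketch follows (the helper name is hypothetical, not part of this PR):

```js
// Hypothetical helper: normalize a dependency entry from getDepConfig().
// An entry is either a path string (when no scripts.js is present) or an
// object of the form { path, scriptsPath }.
function resolveDepEntry(entry) {
	if (typeof entry === "string") {
		return { path: entry, scriptsPath: null };
	}
	return entry;
}
```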