We have issues labeled as Good First Issue and Help Wanted, which are good opportunities for new contributors.
Rust, a C compiler, and CMake are required to build uv.
On Ubuntu and other Debian-based distributions, you can install the C compiler and CMake with:

```console
sudo apt install build-essential cmake
```
On macOS, you can install CMake with Homebrew:

```console
brew install cmake
```
See the Python section for instructions on installing the Python versions.
On Windows, you can install CMake from the installers or with `pipx install cmake`.
For running tests, we recommend nextest.
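For example, once cargo-nextest is installed, the full suite can be run with:

```console
cargo nextest run
```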
If tests fail due to a mismatch in the JSON Schema, run `cargo dev generate-json-schema`.
Testing uv requires multiple specific Python versions; they can be installed with:

```console
cargo run python install
```

The storage directory can be configured with `UV_PYTHON_INSTALL_DIR`.
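For example, to keep the test interpreters inside the working tree instead of the default location (the directory name here is only illustrative):

```console
UV_PYTHON_INSTALL_DIR="$PWD/test-pythons" cargo run python install
```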
uv uses insta for snapshot testing. It's recommended (but not necessary) to use `cargo-insta` for a better snapshot review experience; see the installation guide for more information.
In tests, you can use the `uv_snapshot!` macro to simplify creating snapshots for uv commands. For example:
```rust
#[test]
fn test_add() {
    let context = TestContext::new("3.12");
    uv_snapshot!(context.filters(), context.add().arg("requests"), @"");
}
```
To run and review a specific snapshot test:

```console
cargo test --package <package> --test <test> -- <test_name> -- --exact
cargo insta review
```
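For instance, for the `test_add` example above, that might look like the following; the package and test-target names are illustrative, so substitute the ones for the crate you are working in:

```console
cargo test --package uv --test it -- test_add -- --exact
```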
You can invoke your development version of uv with `cargo run -- <args>`. For example:

```console
cargo run -- venv
cargo run -- pip install requests
```
When testing debug builds on Windows, the stack can overflow, resulting in a `STATUS_STACK_OVERFLOW` error code. This is due to the small default stack size on Windows, which we exceed when running unoptimized builds; release builds do not have this problem. We added a `UV_STACK_SIZE` variable to work around this during testing. We recommend bumping the stack size from the default of 1MB to 3MB, for example:

```powershell
$Env:UV_STACK_SIZE = '3000000'
```
Source distributions can run arbitrary code on build and can make unwanted modifications to your system ("Someone's Been Messing With My Subnormals!" on Blogspot, "nvidia-pyindex" on PyPI), which can even occur when just resolving requirements. To prevent this, there's a Docker container you can run commands in:

```console
docker buildx build -t uv-builder -f builder.dockerfile --load .
# Build for musl to avoid glibc errors; might not be required with your OS version
cargo build --target x86_64-unknown-linux-musl --profile profiling
docker run --rm -it -v $(pwd):/app uv-builder /app/target/x86_64-unknown-linux-musl/profiling/uv-dev resolve-many --cache-dir /app/cache-docker /app/scripts/popular_packages/pypi_10k_most_dependents.txt
```
We recommend using this container if you don't trust the dependency tree of the package(s) you are trying to resolve or install.
Please refer to Ruff's Profiling Guide; it applies to uv, too.
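The general workflow applies directly: build uv with the `profiling` profile used elsewhere in this guide and run it under a sampling profiler. A minimal sketch, assuming samply as the profiler (any equivalent tool works):

```console
cargo build --profile profiling
samply record ./target/profiling/uv pip compile scripts/requirements/jupyter.in
```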
We provide diverse sets of requirements for testing and benchmarking the resolver in `scripts/requirements` and for the installer in `scripts/requirements/compiled`.
You can use `scripts/benchmark` to benchmark predefined workloads across uv versions and against other tools, e.g., from the `scripts/benchmark` directory:

```console
uv run resolver \
    --uv-pip \
    --poetry \
    --benchmark \
    resolve-cold \
    ../scripts/requirements/trio.in
```
You can use tracing-durations-export to visualize parallel requests and find any spots where uv is CPU-bound. Example usage, with `uv` and `uv-dev` respectively:

```console
RUST_LOG=uv=info TRACING_DURATIONS_FILE=target/traces/jupyter.ndjson cargo run --features tracing-durations-export --profile profiling -- pip compile scripts/requirements/jupyter.in
RUST_LOG=uv=info TRACING_DURATIONS_FILE=target/traces/jupyter.ndjson cargo run --features tracing-durations-export --bin uv-dev --profile profiling -- resolve jupyter
```
You can enable trace-level logging using the `RUST_LOG` environment variable, e.g.:

```console
RUST_LOG=trace uv
```
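`RUST_LOG` accepts the usual tracing filter syntax (as in the `RUST_LOG=uv=info` examples above), so you can also scope verbose logging to individual crates when full trace output is too noisy. A minimal sketch; the crate name and command are only illustrative:

```console
RUST_LOG=uv_resolver=trace cargo run -- pip compile scripts/requirements/trio.in
```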
To preview any changes to the documentation locally:
- Install the Rust toolchain.
- Run `cargo dev generate-all` to update any auto-generated documentation.
- Run the development server with:

  ```console
  # For contributors.
  uvx --with-requirements docs/requirements.txt -- mkdocs serve -f mkdocs.public.yml

  # For members of the Astral org, which has access to MkDocs Insiders via sponsorship.
  uvx --with-requirements docs/requirements-insiders.txt -- mkdocs serve -f mkdocs.insiders.yml
  ```
The documentation should then be available locally at http://127.0.0.1:8000/uv/.
To update the documentation dependencies, edit `docs/requirements.in` and `docs/requirements-insiders.in`, then run:

```console
uv pip compile docs/requirements.in -o docs/requirements.txt --universal -p 3.12
uv pip compile docs/requirements-insiders.in -o docs/requirements-insiders.txt --universal -p 3.12
```
Documentation is deployed automatically on release by publishing to the Astral documentation repository, which itself deploys via Cloudflare Pages.
After making changes to the documentation, format the markdown files with:

```console
npx prettier --prose-wrap always --write "**/*.md"
```
Releases can only be performed by Astral team members.
Changelog entries and version bumps are automated. First, run:

```console
./scripts/release.sh
```
Then, editorialize the `CHANGELOG.md` file to ensure entries are consistently styled. Next, open a pull request, e.g., "Bump version to ...".
Binary builds will automatically be tested for the release.
After merging the pull request, run the release workflow with the version tag. Do not include a leading `v`. The release will automatically be created on GitHub after everything else publishes.