
Contributing to Plutus

Setting up and working with our development tools

Make sure you have set up the prerequisites.

How to get a shell environment with tools

You can get an environment for developing the entire project using nix-shell in the root directory. This includes a variety of useful tools:

  • The right version of GHC with all the external Haskell dependencies in its package database.

  • cabal-install

  • Agda with the right libraries set up to build our Agda code

  • stylish-haskell

  • haskell-language-server

  • … and more

Have a look in shell.nix to see what’s there.
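
For example (the exact versions you see will depend on the current pins):

# from the repository root
nix-shell

# once inside, the pinned tools are on your PATH
ghc --version
cabal --version
haskell-language-server --version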

We rely heavily on this approach to ensure that everyone has consistent versions of the tools that we use. Please do make use of this, since problems due to mismatched versions of tools are particularly annoying to fix!

Note
You may want to consider using lorri as a convenient alternative to running nix-shell directly.

How to use helper scripts to fix some common issues

The shell comes with some tools for fixing various simple problems that the CI will complain about. Specifically:

  • fix-stylish-haskell will re-format all the Haskell sources correctly.

  • fix-purty will re-format all the Purescript sources correctly.

  • fix-png-optimization will optimize all PNGs in the repository.

  • updateMaterialized will update the materialized Nix files (see later).

If you’re not using nix-shell all the time and you want to run one of these, you can use nix-shell --run.
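
For example, to fix formatting complaints without entering the shell:

nix-shell --run fix-stylish-haskell
nix-shell --run fix-purty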

How to build the code during development

The nix-shell environment has the correct GHC with all the external Haskell dependencies of the project. From here you can build the project packages directly with cabal.

Note
You may need to run cabal update so that cabal knows about the index state we have pinned.
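
A typical session looks something like this (plutus-core is just an example package):

nix-shell                  # from the repository root
cabal update               # pick up the pinned index state
cabal build plutus-core    # or `cabal build all` to build every project package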

For stack, you may want to use the Nix integration and point it at shell.nix, which will make sure you at least get the right GHC.

Warning

You can also use cabal and stack outside the nix-shell environment to build the project. However there are two caveats:

  • You may get different versions of packages.

    • This shouldn’t happen, but we can’t guarantee it.

  • We are not currently enabling the Nix integration for these tools, so they will use your system GHC and libraries, rather than the ones that would be used by Nix.

    • We sometimes patch the GHC that we use in Nix, so this can at least potentially cause problems or cause you to be missing bug workarounds.

How to build the code with profiling

If you launch nix-shell with the enableHaskellProfiling argument set to true, you will get a shell where all the dependencies have been built with profiling.

Like this: nix-shell --arg enableHaskellProfiling true.

Warning

The shell with profiling dependencies is not currently cached, so this will result in you rebuilding all of our dependencies with profiling on your machine. This will take a long time.

Once you have a shell with profiling libraries for our dependencies, add profiling: true to cabal.project.local, which will tell cabal that you want profiling (in particular, that will cause it to build our libraries with profiling). Alternatively, you can pass the --enable-profiling option to cabal on an ad-hoc basis, but adding the option to cabal.project.local will make it apply to everything, which is probably what you want when you’re doing profiling work.
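
Concretely, something like this (run from the repository root, inside the profiling-enabled shell; plutus-core is just an example target):

echo "profiling: true" >> cabal.project.local
cabal build plutus-core    # now built with profiling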

At this point you need to configure which cost centres you want GHC to insert. The GHC user guide explains this very well. A typical way of doing this is to add -fprof-auto to either the ghc-options in the .cabal file for the project, or in an OPTIONS_GHC pragma in the module you care about.
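
For example, at the top of the module you care about:

{-# OPTIONS_GHC -fprof-auto #-}

or, for a whole component, in the ghc-options of its .cabal stanza:

ghc-options: -fprof-auto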

Warning

Do not set the -prof option yourself! This will enable profiling libraries unconditionally, which interferes with what cabal wants. Setting profiling: true already sorts this out properly.

Then you can use the RTS -p option to dump a profile, e.g. cabal run plc … -- +RTS -p.

There are various tools for visualizing the resulting profile, e.g. https://hackage.haskell.org/package/ghc-prof-flamegraph.

How to setup haskell-language-server

The nix-shell environment has a haskell-language-server binary for the right version of GHC.

Important
This binary is called haskell-language-server, rather than haskell-language-server-wrapper, which is what some of the editor integrations expect.
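
Most editor integrations let you point them at a specific server executable instead; if yours insists on the wrapper name, one possible workaround (just a sketch, not project policy) is a shim somewhere on your PATH:

# inside nix-shell; assumes ~/.local/bin is on your PATH
ln -sf "$(command -v haskell-language-server)" ~/.local/bin/haskell-language-server-wrapper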

We don’t have a hie.yaml; the implicit cradle support in HLS seems to work fine these days.

How to update the generated Haskell package set

Some of the Nix code that builds all the Haskell packages and their dependencies is generated automatically. However, to avoid doing too much work all the time, we have checked the generated output in.

Important
These files need to be regenerated if you change any dependencies in cabal files, but the CI will tell you if you’ve failed to do so.

You can regenerate the files by running updateMaterialized (provided by nix-shell) from the repository root.

This will also update the plan-sha256 hashes for the extra Haskell tools, if you have a problem with those.
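
That is, from the repository root:

nix-shell --run updateMaterialized
# or, if you are already inside the shell:
updateMaterialized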

How to add a new Haskell package

You need to do a few things when adding a new package, in the following order:

  1. Add the cabal file for the new package.

  2. Add the package to cabal.project (see the sketch after this list).

  3. Add the package to stack.yaml.

  4. Update the package set.

  5. Update the hie-*.yaml files.

  6. Check that you can run nix build -f default.nix plutus.haskell.projectPackages.<package name> successfully.
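
For steps 2 and 3 the new entries are one-liners, roughly like this (my-new-package is a hypothetical name). In cabal.project, alongside the existing entries:

packages: <existing packages>
          my-new-package

and in stack.yaml:

packages:
- my-new-package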

How to update our pinned Haskell dependencies

We have pinned versions of some Haskell packages specified via the usual source-repository-package (Cabal) and extra-dep (Stack) mechanisms.

These can be managed normally, but ensure that:

  • The specifications remain in sync between cabal.project and stack.yaml.

  • You update the package set.

  • If it is a source-repository-package/extra-dep from Git, you also need to update the sha256 mapping in nix/haskell.nix. For the moment you have to do this by hand: get the sha with nix-prefetch-git --quiet <repo-url> <rev> | jq .sha256, or just get it wrong and try the build, in which case Nix will give you the right value.
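
As a sketch (the repository URL and revision are purely illustrative), the pin in cabal.project looks like:

source-repository-package
  type: git
  location: https://github.com/example/some-dependency
  tag: 0123456789abcdef0123456789abcdef01234567

with a matching extra-dep in stack.yaml, and the hash for the mapping in nix/haskell.nix obtained with:

nix-prefetch-git --quiet https://github.com/example/some-dependency 0123456789abcdef0123456789abcdef01234567 | jq .sha256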

How to update our pinned Nix dependencies

We pin versions of some git repositories that are used by Nix, for example nixpkgs.

We use Nix flakes to manage these dependencies, even though we do not actually use flakes for normal Nix builds. To manage the dependencies, see the Nix flake inputs documentation and the Nix flake update command.

Inside of the project nix-shell, a version of Nix with flakes is available under the alias nix-flakes.
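
For example, inside the shell (the exact flag for updating a single input depends on the Nix version):

nix-flakes flake update                         # update all pinned inputs
nix-flakes flake lock --update-input nixpkgs    # or update just one input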

How to update the Hackage index state

The Hackage index state is pinned to a particular time in cabal.project. This helps with reproducibility: alongside using the same version of cabal, this ensures that everyone will get the same result from the cabal version solver. If you want to use a Hackage package from after the pinned index state time, you need to bump it. This is not a big deal, since all it does is change what packages cabal considers to be available when doing solving, but it may result in different versions being picked, so it’s not completely safe.

Note that cabal itself keeps track of what index states it knows about, so you may need to update this with cabal update in order for cabal to be happy.

The Nix code which builds our packages also cares about the index state. The set of index states which it knows about is controlled by hackage.nix, which is a Nix representation of Hackage. This therefore needs to be newer than the index state. You can update it with the Nix flake commands.

You will need to update the package set after this to reflect the new build plan that Cabal will pick.
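
Putting it together, a typical index state bump looks roughly like this (the timestamp is illustrative):

# 1. edit cabal.project and bump the index-state field, e.g.
#      index-state: 2021-08-10T00:00:00Z
cabal update              # let cabal learn about the newer index state
nix-flakes flake update   # refresh hackage.nix and the other pinned Nix inputs
updateMaterialized        # regenerate the materialized package set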

Working conventions

Code is communication

We are a relatively large team working on sometimes quite abstruse problems. As such, it’s important that future people who work on the project know how things work, and just as importantly, why. These future people may even be yourself - we forget things very quickly!

When writing, try to put yourself in the position of someone coming to this code for the first time. What do they need to do to understand it and do their job? Write it down!

Code review is a good lens for this: if you have to explain something to a reviewer, then it is probably not clear in the code and should have a note.

This applies both to the code itself (structure, naming, etc.) and also to comments. How to write useful comments is a large topic which we don’t attempt to cover here, but Antirez is good. If in doubt: write more!

"Notes"

One special kind of comment is worth drawing attention to. We adopt a convention (stolen from GHC) of writing fairly substantial notes in our code with a particular structure. These correspond to what Antirez calls "design comments", with some conventions about cross-referencing them.

The structure is:

  • The Note should be in a multiline comment (i.e. {- -})

  • The first line of the Note should be Note [Name of note]

  • Refer to a Note from where it is relevant with a comment saying See Note [Name of note]

For example:

{- Note [How to write a note]
A note should look a bit like this.

Go wild, write lots of stuff!

Here's a small diagram:
A ----> B >> C

And of course, you should see Note [Another note].
-}
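
A use site elsewhere in the code then refers back to it like this (the function is made up):

-- See Note [How to write a note]
frobnicate :: Integer -> Integer
frobnicate n = n + 1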

Notes are a great place to put substantial discussion that you need to refer to from multiple places. For example, if you used an encoding trick to fit more data into an output format, you could write a Note describing the trick (and justifying its usage!), and then refer to it from the encoder and the decoder.

Code formatting

We use stylish-haskell for Haskell code formatting, and purty for Purescript. CI checks that running these is a no-op, so if you don’t apply them your PR will not go green. To avoid annoyance, set up your editor to run them automatically. The nix-shell environment provides stylish-haskell and purty binaries of the correct versions.

You can run stylish-haskell or purty over your tree using the fix-stylish-haskell or fix-purty script provided by the nix-shell environment.

Compiler warnings

The CI builds Haskell code with -Werror, so it will fail if there are any compiler warnings. So fix your own warnings!

If the warnings are stupid, we can turn them off, e.g. sometimes it makes sense to add -Wno-orphans to a file where we know it’s safe.
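
For example, at the top of a module whose orphan instances are deliberate:

{-# OPTIONS_GHC -Wno-orphans #-}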

Commit messages

Please make informative commit messages! It makes it much easier to work out why things are the way they are when you’re debugging things later.

A commit message is communication, so as usual, put yourself in the position of the reader: what does a reviewer, or someone reading the commit message later need to do their job? Write it down! It is even better to include this information in the code itself, but sometimes it doesn’t belong there (e.g. ticket info).

Also, include any relevant meta-information, such as ticket numbers. If a commit completely addresses a ticket, you can put that in the headline if you want, but it’s fine to just put it in the body.

There is plenty to say on this topic, but broadly the guidelines in this post are good.

Commit signing

Set it up if you can, it’s relatively easy to do.

Making and reviewing changes

Opening a pull request

A pull request is a change to the codebase, but it is also an artifact which goes through a change acceptance process. There are a bunch of things which we can do to make this process smooth which may have nothing to do with the code itself.

The key bottleneck in getting a PR merged is code review. Code review is great (see below), but it can slow you down if you don’t take the time to make it easy.

The amount of time it’s worth spending doing this is probably much more than you think.

What changes to include

Having a sensible and comprehensible set of changes makes your reviewer’s life much easier.

  • Keep commits to a single logical change where possible. The reviewer will be happier, and you’ll be happier if you ever have to revert it. If you can’t do this (say because you have a huge mess), best to just have one commit with everything in it.

  • Keep your PRs to a single topic. Including unrelated changes makes things harder for your reviewers, slowing them down, and makes it harder to integrate new changes.

  • If you’re working on something that’s likely to conflict with someone else, talk to them. It’s not a race.

Pull request descriptions

A pull request is communication, so as usual, put yourself in the position of the reader: what does your audience (the reviewer) need to know to do their job? This information is easy for you to access, but hard for them to figure out, so write it down!

However, better to put information in the code or commit messages if possible: these persist but PR descriptions do not. It’s okay to repeat information from such places, or simply to point to it. For one-commit PRs, Github will automatically populate the PR description with the commit message, so if you’ve written a good commit message you’re done! Sometimes there is "change-related" information that doesn’t belong in a commit message but is useful ("Kris I think this will fix the issue you had yesterday").

Misc PR tips

  • Review the diff of your own PR at the last minute before hitting "create". It’s amazing how many obvious things you spot here, and it stops the reviewer having to point them all out.

  • It’s fine to make WIP PRs if you just want to show your code to someone else or have the CI check it. Use the Github "draft" feature for this.

Rebasing, force-pushing, and history

Until a PR is merged, the branch is yours to do with as you will. In particular, rebasing and force-pushing is fine. Indeed, if you need to update your branch with changes from master, rebasing is typically better than merging.

So please do use this ability where it helps, for example:

  • Add low-effort or WIP commits to fix review comments, and then squash them away before merging the PR.

  • If you have already had a PR review, don’t rebase away the old commits until the PR is ready to merge, so that the reviewer only has to look at the "new" commits.

  • Rewrite the commits to make the story clearer where possible.

  • Always prefer git push --force-with-lease to just git push --force to ensure that no work gets accidentally deleted.
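
For example, to update your branch with the latest master and publish it safely:

git fetch origin
git rebase origin/master
git push --force-with-lease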

Don’t be obsessive about history though: a little bit of effort making the history clear is nice, but you can rapidly hit diminishing returns. Use your judgement, but probably don’t merge a PR that has commits called "WIP" or "fix"!

If a PR is just a total mess, consider using Github’s squash-merge feature.

Code review and merging

All pull-requests should be approved by at least one other person. We don’t enforce this, though: a PR fixing a typo is fine to self-merge, beyond that use your judgement.

As an author, code review is an opportunity for you to get feedback from clear eyes. As a reviewer, code review is an opportunity for you to help your colleagues and learn about what they are doing. Make the best use of it you can!

For the author

  • Pick the right reviewer(s). If you don’t know who to pick, ask!

  • Respect your reviewers' time. Their time is as valuable as yours, and it’s typically more efficient for you to spend time explaining or clarifying something in advance than for them to puzzle it out or pose a question.

  • If someone had to ask about your code, it wasn’t clear enough, so change it or add a comment.

Read this blog post for more good tips: https://mtlynch.io/code-review-love/

For the reviewer

  • Respond to review requests as quickly as you can. If you can’t review it all, say what you can and come back to it. Waiting for review is often a blocker for other people, so prioritize it.

  • If you don’t understand something, ask. You are as clever as any person who will read this in the future; if it confuses you, it’s confusing.

  • Do spend the time to understand the code. This will help you make more useful comments, help you review future changes more easily, and help you if you ever need to work on it yourself.

  • More reviewing is usually helpful. If you think a PR is interesting, you can review it even if nobody asked you to, you will probably have things to contribute and you’ll learn something.

Supporting systems

Continuous integration

We have a few sources of CI checks at the moment:

  • Hydra

  • ReadTheDocs

  • Github Actions

  • Buildkite

The CI will report statuses on your PRs with links to the logs in case of failure. Pull requests cannot be merged without at least the Hydra CI check being green.

Note
This isn’t strictly true: repository admins (notably Michael) can force-merge PRs without the checks being green. If you really need this, ask.

CI checks are run on the tip of the PR branch, not on the merge commit that is created with master. As a result, it’s possible to create a "semantic" merge commit where the CI passes on commits C1 and C2, but not on the merge of C1 and C2. In this circumstance we can end up with the CI checks being broken on master. However, this is sufficiently infrequent that we just live with the possibility, since eliminating it is quite awkward.

Hydra

Hydra is the "standard" CI builder for Nix-based projects. It builds everything in the project, including all the tests, documentation, etc.

Hydra builds jobs based on release.nix, although currently this imports a lot of its jobs from ci.nix (was used for Hercules, may be used again in future). Hydra should report a failed status even if release.nix fails to evaluate.

Hydra can be a bit flaky, unfortunately:

  • If evaluation fails saying "out of memory" or "unexpected EOF reading line", then this is likely a transient failure. These will be automatically retried, but if you’re in a hurry Michael has permissions to force a new evaluation.

  • If a build fails spuriously, this is a problem: please report it to whoever is responsible for that build and we should try and iron it out. Nondeterministic failures are very annoying. Michael also has permissions to restart failed builds.

ReadTheDocs

The documentation site is built on ReadTheDocs. It will build a preview for each PR which is linked from the PR status. It’s useful to take a look if you’re changing any of the documentation.

Github Actions

These perform some of the same checks as Hydra, but Github Actions is often more available, so these checks return faster and act as a "smoke check".

Buildkite

Buildkite currently only performs the continuous deployment steps.

Continuous deployment

Several packages are continuously deployed to a staging environment. See deployment/README.md for detailed information on the deployment setup.

Integrating with other Cardano projects

The Cardano ledger (cardano-ledger-specs)

We have a cyclic dependency with cardano-ledger-specs:

  1. cardano-ledger-alonzo depends on plutus-ledger-api.

  2. plutus-ledger/plutus-contract etc. depend on cardano-api, which depends on cardano-node, which depends on cardano-ledger-alonzo.

This is possible because the cycle is not between packages, merely between the repositories. Still, it causes some problems: if we make a change to plutus-ledger-api or below that causes cardano-ledger-alonzo to not compile, we cannot immediately merge that change! To do that, we would need to change our source-repository-package pin to point to a version of cardano-ledger-specs that works with that change. But we can’t easily make such a PR to cardano-ledger-specs either: that would require them to pin a version of plutus where everything worked.

To make this less painful, we have a (slightly complicated) workflow for such changes.

Staging branch

Firstly, there is a branch plutus-staging in cardano-ledger-specs. The rules for plutus-staging are:

  1. It should pass the cardano-ledger-specs CI, so it is always mergeable. This means that you will need to update the plutus pin when you make changes to plutus-staging.

  2. It is allowed for the plutus pin in plutus-staging to point to a non-master commit, such as a commit in an unmerged PR.

  3. The commit pointed to by the plutus pin in plutus-staging must continue to exist: we should not squash or rebase it away.

  4. When merging plutus-staging into master, the plutus pin should be updated to point to a master commit.

PR workflow

  1. Create a plutus PR (#1) with your changes.

  2. Create a cardano-ledger-specs PR (#2) targeting plutus-staging with the necessary changes in cardano-ledger-specs. The plutus pin should point to a commit on the PR #1 branch. While you are working, you can point the cardano-ledger-specs pin in your PR #1 branch to the tip of PR #2, so that everything compiles.

  3. Once the changes to cardano-ledger-specs have settled down, we can merge PR #2. After this be careful not to remove the commit from PR #1 which you used in plutus-staging!

  4. Update the cardano-ledger-specs pin in PR #1 to point to plutus-staging.

  5. Once the changes overall settle down, we can merge PR #1.

Merging the staging branches

We can do several PRs that touch cardano-ledger-specs before we merge plutus-staging into master, but it’s best not to let it diverge too much. The process is:

  1. Open a PR in cardano-ledger-specs merging plutus-staging into master, and updating the plutus pin to point to master.

  2. You may need to rerun the plutus-preprocessor in cardano-ledger-specs before doing this. It generates some test scripts, and isn’t done automatically.

  3. Open a PR in plutus updating the cardano-ledger-specs pin to point to master. This can happen later.