Complete renaming process (#265)
* Bump ADTypes dep

* Update readme

* Uncomment docs content + update

* Update interop with LogDensityProblemsAD

* Update readme

* Bump patch

* Add additional ADgradient test

* Fix test

* Fix test properly

* Make tests work

* Tweak test

* More GC preserve statements
willtebbutt authored Sep 26, 2024
1 parent db3b696 commit b31cb44
Showing 6 changed files with 274 additions and 250 deletions.
4 changes: 2 additions & 2 deletions Project.toml
@@ -1,7 +1,7 @@
name = "Mooncake"
uuid = "da2b9cff-9c12-43a0-ae48-6db2b0edb7d6"
authors = ["Will Tebbutt, Hong Ge, and contributors"]
version = "0.4.0"
version = "0.4.1"

[deps]
ADTypes = "47edcb42-4c32-4615-8424-f2b9edc5f35b"
@@ -32,7 +32,7 @@ MooncakeLogDensityProblemsADExt = "LogDensityProblemsAD"
MooncakeSpecialFunctionsExt = "SpecialFunctions"

[compat]
ADTypes = "1.2"
ADTypes = "1.9"
BenchmarkTools = "1"
CUDA = "5"
ChainRulesCore = "1"
38 changes: 16 additions & 22 deletions README.md
@@ -1,4 +1,4 @@
# Mooncake
# Mooncake.jl (formerly Tapir.jl)

[![Build Status](https://github.com/compintell/Mooncake.jl/actions/workflows/CI.yml/badge.svg?branch=main)](https://github.com/compintell/Mooncake.jl/actions/workflows/CI.yml?query=branch%3Amain)
[![codecov](https://codecov.io/github/compintell/Mooncake.jl/graph/badge.svg?token=NUPWTB4IAP)](https://codecov.io/github/compintell/Mooncake.jl)
@@ -8,41 +8,32 @@

The goal of the `Mooncake.jl` project is to produce a reverse-mode AD package that is written entirely in Julia, improves over both `ReverseDiff.jl` and `Zygote.jl` in several ways, and is competitive with `Enzyme.jl`.

## Note on renaming

On 18/09/2024 this package was renamed from Tapir.jl to Mooncake.jl.
The last version while the package was called Tapir.jl was 0.2.51.
Upon renaming, the version was bumped to 0.3.0.

We are currently going through the process of updating the name of the package in the general registry and updating dependents to use the new package naming.
This should be largely complete in a few days.
During this time, there will be no new releases of Mooncake.jl, and there will be issues with its interaction with ADTypes.jl, LogDensityProblemsAD.jl, and possibly other things that we haven't thought of.

## Note on project status

`Mooncake.jl` is under active development.
You should presently expect releases involving breaking changes on a semi-regular basis.
We are trying to keep this README as up to date as possible, particularly with regards to the best examples of code to look at to understand how to use Mooncake.jl.
If you encounter a new version of Mooncake.jl in the wild, please consult this README for the most up-to-date advice.
We are trying to keep this README as up to date as possible, particularly with regards to the best examples of code to look at to understand how to use `Mooncake.jl`.
If you encounter a new version of `Mooncake.jl` in the wild, please consult this README for the most up-to-date advice.

# Getting Started

There are several ways to interact with Mooncake.jl.
The one that we recommend people begin with is [DifferentiationInterface.jl](https://github.com/gdalle/DifferentiationInterface.jl/). For example, use it as follows to compute the gradient of a function mapping a `Vector{Float64}` to `Float64`.
There are several ways to interact with `Mooncake.jl`.
The one that we recommend people begin with is [`DifferentiationInterface.jl`](https://github.com/gdalle/DifferentiationInterface.jl/).
For example, use it as follows to compute the gradient of a function mapping a `Vector{Float64}` to `Float64`.
```julia
using DifferentiationInterface
import Mooncake

f(x) = sum(abs2, x)
backend = AutoMooncake()
x = ones(3)
extras = prepare_gradient(f, backend, x)
gradient(f, backend, x, extras)
f(x) = sum(cos, x)
backend = AutoMooncake(; config=nothing)
x = ones(1_000)
prep = prepare_gradient(f, backend, x)
gradient(f, prep, backend, x)
```
You should expect that the first time you run `gradient` that it will take a little bit of time, but subsequent runs should be fast.
You should expect that `prep` takes a little bit of time to run, but that `gradient` is fast.

We are committed to ensuring support for DifferentiationInterface, which is why we recommend using that.
If you are interested in slightly more flexible functionality, you should consider `Mooncake.value_and_gradient!!`. See its docstring for more info.
If you are interested in interacting in a more direct fashion with `Mooncake.jl`, you should consider `Mooncake.value_and_gradient!!`. See its docstring for more info.
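For readers curious about that more direct interface, a minimal sketch might look as follows. This is an assumption-laden illustration, not part of the diff: `Mooncake.build_rrule` and the return shape of `Mooncake.value_and_gradient!!` are taken from Mooncake.jl's docstrings around this release and may change in later versions.

```julia
using Mooncake

f(x) = sum(cos, x)
x = ones(10)

# Construct a reverse-mode rule specialised to this call signature.
# (Assumed API: Mooncake.build_rrule(f, args...).)
rule = Mooncake.build_rrule(f, x)

# Evaluate f and compute gradients in one pass. The gradient tuple has one
# component per argument, including `f` itself.
y, (df, dx) = Mooncake.value_and_gradient!!(rule, f, x)

# `y` is f(x); `dx` is the gradient with respect to `x`, mathematically -sin.(x).
```

Because the rule is specialised to the argument types, building it once and reusing it across calls is what makes repeated gradient evaluations fast.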

# How it works

@@ -117,6 +108,9 @@ For about 48 hours it was called `Phi.jl`, but the community guidelines state th
We then chose `Tapir.jl`, and didn't initially feel that other work [of the same name](https://github.com/wsmoses/Tapir-LLVM) presented a serious name clash, as it isn't AD-specific or a Julia project.
As it turns out, there has been significant work attempting to integrate the ideas from this work into the [Julia compiler](https://github.com/JuliaLang/julia/pull/39773), so the clash is something of a problem.

On 18/09/2024 this package was renamed from `Tapir.jl` to `Mooncake.jl`.
The last version while the package was called `Tapir.jl` was 0.2.51.
Upon renaming, the version was bumped to 0.3.0.
We finally settled on `Mooncake.jl`. Hopefully this name will stick.

# Project Status
8 changes: 4 additions & 4 deletions docs/src/debug_mode.md
@@ -43,10 +43,10 @@ Mooncake.build_rrule
```

When using ADTypes.jl, you can choose whether or not to use it via the `debug_mode` kwarg:
# ```jldoctest
# julia> AutoMooncake(Mooncake.Config())
# AutoMooncake(Mooncake.Config())
# ```
```jldoctest
julia> AutoMooncake(; config=Mooncake.Config(; debug_mode=true))
AutoMooncake{Mooncake.Config}(Mooncake.Config(true, false))
```

### When Should You Use Debug Mode?

22 changes: 12 additions & 10 deletions ext/MooncakeLogDensityProblemsADExt.jl
@@ -58,16 +58,18 @@ function logdensity_and_gradient(∇l::MooncakeGradientLogDensity, x::Vector{Flo
return Mooncake.primal(y), dx
end

# # Interop with ADTypes.
# function ADgradient(x::ADTypes.AutoMooncake, ℓ)
# if x.debug_mode
# msg = "Running Mooncake in debug mode. This mode is computationally expensive, " *
# "should only be used when debugging a problem with AD, and turned off in " *
# "general use. Do this by using AutoMooncake(debug_mode=false)."
# @info msg
# end
# return ADgradient(Val(:Mooncake), ℓ; debug_mode=x.debug_mode)
# end
# Interop with ADTypes.
function ADgradient(x::ADTypes.AutoMooncake, ℓ)
debug_mode = x.config.debug_mode
if debug_mode
msg = "Running Mooncake in debug mode. This mode is computationally expensive, " *
"should only be used when debugging a problem with AD, and turned off in " *
"general use. Do this by using " *
"AutoMooncake(; config=Mooncake.Config(debug_mode=false))."
@info msg
end
return ADgradient(Val(:Mooncake), ℓ; debug_mode)
end

Base.parent(x::MooncakeGradientLogDensity) = Mooncake.primal(x.ℓ)


2 comments on commit b31cb44

@willtebbutt
Member Author

@JuliaRegistrator

Registration pull request created: JuliaRegistries/General/116048

Tip: Release Notes

Did you know you can add release notes too? Just add markdown-formatted text underneath the comment after the text
"Release notes:" and it will be added to the registry PR. If TagBot is installed, it will also be added to the
release that TagBot creates. For example:

@JuliaRegistrator register

Release notes:

## Breaking changes

- blah

To add them here just re-invoke and the PR will be updated.

Tagging

After the above pull request is merged, it is recommended that a tag is created on this repository for the registered package version.

This will be done automatically if the Julia TagBot GitHub Action is installed, or can be done manually through the GitHub interface, or via:

git tag -a v0.4.1 -m "<description of version>" b31cb44c715b89cfc06635fc932753731a2cb402
git push origin v0.4.1
