This guide helps you get started developing Grafana.
Make sure you have the following dependencies installed before setting up your developer environment:
- Git
- Go (see go.mod for minimum required version)
- Node.js (Long Term Support), with corepack enabled. See .nvmrc for supported version. We recommend that you use a version manager such as nvm, fnm, or similar.
- GCC (required for Cgo dependencies)
We recommend using Homebrew for installing any missing dependencies:
brew install git
brew install go
brew install node@20
brew install corepack
corepack enable
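Before installing anything, you can quickly check which of these tools are already on your `PATH`. This is only a convenience sketch; the tool list mirrors the dependencies above:

```shell
# Report which of the required tools are already installed
for tool in git go node gcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing (install it, e.g. via Homebrew)"
  fi
done
```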
If you are running Grafana on Windows 10, we recommend installing the Windows Subsystem for Linux (WSL). For installation instructions, refer to our Grafana setup guide for Windows environment.
We recommend using the Git command-line interface to download the source code for the Grafana project:
- Open a terminal and run `git clone https://github.com/grafana/grafana.git`. This command downloads Grafana to a new `grafana` directory in your current directory.
- Open the `grafana` directory in your favorite code editor.
For alternative ways of cloning the Grafana repository, refer to GitHub's documentation.
Caution: Do not use `go get` to download Grafana. Recent versions of Go have added behavior which isn't compatible with the way the Grafana repository is structured.
We use pre-commit hooks (via lefthook) to lint, fix, and format code as you commit your changes. Previously, the Grafana repository automatically installed these hooks when you ran `yarn install`, but they are now opt-in for all contributors.
To install the pre-commit hooks:
make lefthook-install
To remove the pre-commit hooks:
make lefthook-uninstall
We strongly encourage contributors who work on the frontend to install the pre-commit hooks, even if your IDE formats on save. By doing so, the `.betterer.results` file is kept in sync.
When building Grafana, be aware that it consists of two components:
- The frontend, and
- The backend.
Before you can build the frontend assets, you need to install the related dependencies:
yarn install --immutable
If you get the error `The remote archive doesn't match the expected checksum` for a dependency pulled from a link (for example, `"tether-drop": "https://github.com/torkelo/drop"`), this is a temporary mismatch. To work around the error (while someone corrects the issue), you can prefix your `yarn install --immutable` command with `YARN_CHECKSUM_BEHAVIOR=update`.
After the command has finished, you can start building the source code:
yarn start
This command generates SASS theme files, builds all external plugins, and then builds the frontend assets.
After `yarn start` has built the assets, it will continue to do so whenever any of the files change. This means you don't have to manually build the assets every time you change the code.
Troubleshooting: if your first build works but you see unexpected errors in the "Type-checking in progress..." stage after pulling updates, the errors can be caused by the tsbuildinfo cache that supports incremental builds. In this case, run `rm tsconfig.tsbuildinfo` and try again.
If you want to contribute to any of the plugins listed below (found within the `public/app/plugins` directory), you need to run additional commands to watch and rebuild them.
- azuremonitor
- cloud-monitoring
- grafana-postgresql-datasource
- grafana-pyroscope-datasource
- grafana-testdata-datasource
- jaeger
- mysql
- parca
- tempo
- zipkin
To build and watch all of these plugins, run the following command. Note that this can be quite resource intensive, as it starts a separate build process for each plugin.
yarn plugin:build:dev
If, instead, you would like to build and watch a specific plugin, you can run the following command. Make sure to substitute `<name_of_plugin>` with the plugin's `name` field found in its `package.json`, for example, `@grafana-plugins/tempo`.
yarn workspace <name_of_plugin> dev
Next, we'll explain how to build and run the web server that serves these frontend assets.
Build and run the backend by running `make run` in the root directory of the repository. This command compiles the Go source code and starts a web server.
Troubleshooting: If you have problems with too many open files, refer to the troubleshooting tips later in this guide.
By default, you can access the web server at `http://localhost:3000/`.
Log in using the default credentials:
| username | password |
| -------- | -------- |
| admin    | admin    |
When you log in for the first time, Grafana asks you to change your password.
The Grafana backend includes SQLite, a database which requires GCC to compile. So, in order to compile Grafana on Windows, you need to install GCC. We recommend TDM-GCC. Alternatively, if you use Scoop, you can install GCC through that.
You can build the backend as follows:
- Follow the instructions to install the Wire tool.
- Generate code using Wire. For example:
# Default Wire tool install path: $GOPATH/bin/wire.exe
<Wire tool install path> gen -tags oss ./pkg/server ./pkg/cmd/grafana-cli/runner
- Build the Grafana binaries:
go run build.go build
The Grafana binaries will be installed in `bin\windows-amd64`.
Alternatively, if you are on Windows and want to use the `make` command, install Make for Windows and use it in a UNIX shell (for example, Git Bash).
The test suite consists of three types of tests: Frontend tests, backend tests, and end-to-end tests.
We use Jest for our frontend tests. Run them using Yarn:
yarn test
If you're developing for the backend, run the tests with the standard Go tool:
go test -v ./pkg/...
Running the backend tests on Windows currently needs some tweaking, so use the `build.go` script:
go run build.go test
By default, Grafana runs SQLite. To run tests with SQLite:
go test -covermode=atomic -tags=integration ./pkg/...
To run the PostgreSQL and MySQL integration tests locally, start the Docker blocks for the MySQL or PostgreSQL test data sources (or both) by running `make devenv sources=mysql_tests,postgres_tests`.
When your test data sources are running, you can execute the integration tests. For MySQL, run:
make test-go-integration-mysql
For PostgreSQL, run:
make test-go-integration-postgres
Grafana uses Cypress to end-to-end test core features. Core plugins use Playwright to run automated end-to-end tests. You can find more information on how to add end-to-end tests to your core plugin in our end-to-end testing style guide.
To run all tests in a headless Chromium browser:
yarn e2e
By default, the end-to-end tests start a Grafana instance listening on `localhost:3001`. To use a different URL, set the `BASE_URL` environment variable:
BASE_URL=http://localhost:3333 yarn e2e
To follow all tests in the browser while they're running, use:
yarn e2e:debug
To choose a single test to follow in the browser as it runs, use:
yarn e2e:dev
Note: If you're using VS Code as your development editor, it's recommended to install the Playwright test extension. It allows you to run, debug and generate Playwright tests from within the editor. For more information about the extension and how to use reports to analyze failing tests, refer to the Playwright documentation.
Each version of Playwright needs specific versions of browser binaries to operate. You need to use the Playwright CLI to install these browsers.
yarn playwright install chromium
To run all tests in a headless Chromium browser and display the results in the terminal, run the following command. This assumes you have Grafana running on port 3000.
yarn e2e:playwright
The following script starts a Grafana development server (same server that is being used when running e2e tests in Drone CI) on port 3001 and runs the Playwright tests. The development server is provisioned with the devenv dashboards, data sources and apps.
yarn e2e:playwright:server
The default configuration, `defaults.ini`, is located in the `conf` directory.
To override the default configuration, create a `custom.ini` file in the `conf` directory. You only need to add the options you wish to override.
Enable development mode by adding the following line to your `custom.ini`:
app_mode = development
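Putting this together, a minimal `conf/custom.ini` might look like the following sketch. `app_mode = development` is the line described above; the `[server]` block is only an illustration of overriding a sectioned option (port 3333 is an arbitrary example, not something you need):

```ini
# conf/custom.ini - add only the options you wish to override
app_mode = development

# Illustrative override of a sectioned option
[server]
http_port = 3333
```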
By now, you should be able to build and test a change you've made to the Grafana source code. In most cases, you'll need to add at least one data source to verify the change.
To set up data sources for your development environment, go to the devenv directory in the Grafana repository:
cd devenv
Run the `setup.sh` script to set up a set of data sources and dashboards in your local Grafana instance. The script creates a set of data sources called `gdev-<type>`, and a set of dashboards located in a folder called gdev dashboards.
Some of the data sources require databases to run in the background.
Installing and configuring databases can be a tricky business. Grafana uses Docker to make the task of setting up databases a little easier. Make sure you install Docker before proceeding to the next step.
In the root directory of your Grafana repository, run the following command:
make devenv sources=influxdb,loki
The script generates a Docker Compose file with the databases you specify as `sources`, and runs them in the background.
See the repository for all the available data sources. Note that some data sources have specific Docker images for macOS, for example, `nginx_proxy_mac`.
To build a Docker image, run:
make build-docker-full
The resulting image will be tagged as `grafana/grafana:dev`.
Note: If you use Docker for macOS, be sure to set the memory limit to be larger than 2 GiB. Otherwise, `grunt build` may fail. The memory limit settings are available under Docker Desktop -> Preferences -> Advanced.
Are you having issues with setting up your environment? Here are some tips that might help.
Configure your IDE to use the TypeScript version from the Grafana repository. The version should match the TypeScript version in the `package.json` file, and the compiler is typically located at `node_modules/.bin/tsc`.
Previously, Grafana used Yarn PnP to install frontend dependencies, which required additional special IDE configuration. This is no longer the case. If you have custom paths in your IDE for ESLint, Prettier, or TypeScript, you can now remove them and use the defaults from `node_modules`.
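For VS Code specifically, one way to pin the workspace TypeScript version is a `.vscode/settings.json` fragment like this. The `typescript.tsdk` setting is VS Code's standard mechanism for selecting a workspace TypeScript; the path assumes dependencies were installed under `node_modules`:

```json
{
  "typescript.tsdk": "node_modules/typescript/lib"
}
```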
Depending on your environment, you may have to increase the maximum number of open files allowed. For the rest of this section, we will assume you are on a UNIX-like OS (for example, Linux or macOS), where you can control the maximum number of open files through the ulimit shell command.
To see how many open files are allowed, run:
ulimit -a
To change the number of open files allowed, run:
ulimit -S -n 4096
The number of files needed may differ depending on your environment. To determine the number of open files needed by `make run`, run:
find ./conf ./pkg ./public/views | wc -l
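As a sketch, you can compare the number of watched files against your current soft limit in one step. The directory list matches the `find` command above; run this from the repository root:

```shell
# Compare files watched by `make run` with the current open-files limit
needed=$(find ./conf ./pkg ./public/views 2>/dev/null | wc -l | tr -d ' ')
allowed=$(ulimit -S -n)
echo "needed=$needed allowed=$allowed"
```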
Another alternative is to limit the files being watched. The directories that are watched for changes are listed in the `.bra.toml` file in the root directory.
You can retain your `ulimit` configuration, that is, save it so it will be remembered for future sessions. To do this, commit it to your command line shell initialization file. Which file this is depends on the shell you are using. For example:
- zsh -> ~/.zshrc
- bash -> ~/.bashrc
Commit your `ulimit` configuration to your shell initialization file as follows (`$LIMIT` being your chosen limit and `$INIT_FILE` being the initialization file for your shell):
echo ulimit -S -n $LIMIT >> $INIT_FILE
Your command shell should read the initialization file in question every time it starts, and apply your `ulimit` command.
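As a concrete sketch of that step, the snippet below uses a temporary file as a stand-in for the init file, so you can see the effect without touching your real `~/.bashrc` or `~/.zshrc`:

```shell
# Persist a ulimit setting to a shell init file (temporary stand-in here)
INIT_FILE=$(mktemp)   # substitute ~/.bashrc or ~/.zshrc in real use
LIMIT=4096            # your chosen open-files limit
echo "ulimit -S -n $LIMIT" >> "$INIT_FILE"
cat "$INIT_FILE"      # the line the shell will run at startup
rm "$INIT_FILE"
```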
For some people, typically those using the bash shell, `ulimit` fails with an error similar to the following:
ulimit: open files: cannot modify limit: Operation not permitted
If that happens to you, chances are you've already set a lower limit and your shell won't let you set a higher one. Try looking in your shell initialization files (typically `~/.bashrc`) to see if there's already a `ulimit` command that you can tweak.
If you encounter an `AggregateError` when building new tests, this is probably due to a call to our client backend service not being mocked. Our backend service anticipates multiple responses being returned, and was built to return errors as an array. A test encountering errors from the service will group those errors as an `AggregateError` without breaking down the individual errors within. `backend_srv.processRequestError` is called once per error, and is a great place to inspect what the individual errors might contain.
- Read our style guides.
- Learn how to create a pull request.
- Read about the architecture.
- Read through the backend documentation.