- Introduction
- Requirements
- Setting up a Warehouse node
- Setting up a Google Cloud Bigtable instance
- Import Solana's Bigtable Instance
- Requirements
- Restoring Missing Blocks
## Introduction

By design, an RPC node started with the default `--limit-ledger-size` stores roughly two epochs' worth of data, so Solana relies on Google Cloud Bigtable for long-term storage.
The public endpoint that Solana provides, https://api.mainnet-beta.solana.com, is backed by its own Bigtable instance configured to serve requests going back to the genesis block.
This guide shows how anyone can run their own Bigtable instance for long-term storage of the Solana blockchain.
## Requirements

- A Warehouse node
- A Google Cloud Bigtable instance
- A Google Cloud Storage bucket (optional)
## Setting up a Warehouse node

A Warehouse node is responsible for feeding Bigtable with ledger data, so setting one up is the first step toward having your own Solana Bigtable instance.
Structurally, a Warehouse node is similar to an RPC node that doesn't serve RPC calls but instead uploads ledger data to Bigtable.
Keeping your ledger history consistent is very important on a Warehouse node, since any gap in your local ledger will translate to a gap in your Bigtable instance. Such gaps can potentially be patched with `solana-ledger-tool`.
Here you'll find all the necessary scripts to run your own Warehouse node.

What the different scripts do:

- `warehouse.sh` → Startup script for the Warehouse node.
- `warehouse-upload-to-storage-bucket.sh` → Script to upload the hourly snapshots to Google Cloud Storage every epoch.
- `service-env.sh` → Source file for `warehouse.sh`.
- `service-env-warehouse.sh` → Source file for `warehouse-upload-to-storage-bucket.sh`.
- `warehouse-basic.sh` → Simplified command to start the Warehouse node. Run this instead of `warehouse.sh`.
Before you begin:

- Install `solana-cli`.
- Install the gcloud SDK.
- Create a gcloud service account.
  - When creating the account, give it the `Bigtable User` role.
  - You will get back a file with a name similar to `play-gcp-329606-cccf2690b876.json`. This is the file you'll have to point the `GOOGLE_APPLICATION_CREDENTIALS` variable at (below). Needless to say, keep the file private and don't commit it to GitHub.
- Tune your system
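The service-account steps above can also be done from the CLI. The sketch below is one way to do it with the gcloud SDK; the project ID and account name are placeholders, substitute your own:

```shell
# Hypothetical project and account names -- substitute your own.
PROJECT_ID="my-gcp-project"
SA_NAME="solana-warehouse"

# Create the service account.
gcloud iam service-accounts create "$SA_NAME" --project "$PROJECT_ID"

# Grant it the Bigtable User role.
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "roles/bigtable.user"

# Download a JSON key file and point GOOGLE_APPLICATION_CREDENTIALS at it.
gcloud iam service-accounts keys create "${SA_NAME}-key.json" \
  --iam-account "${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
export GOOGLE_APPLICATION_CREDENTIALS="$PWD/${SA_NAME}-key.json"
```

Keep the downloaded key file out of version control, just like the Console-generated one.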
To start the validator:

- Fill in the missing variables (e.g. `<path_to_your_ledger>`) inside the files below. Hint: CTRL-F for "<" to find them all quickly.
  - `warehouse.sh`
  - `service-env.sh`
  - `service-env-warehouse.sh`
  - If it's the first time you're running a validator, you can leave `ledger_dir` and `ledger_snapshots_dir` blank. This will tell the node to fetch genesis & the latest snapshot from the cluster.
- `chmod +x` the following files:
  - `warehouse.sh`
  - `metrics-write-dashboard.sh`
- Update the `EXPECTED_SHRED_VERSION` in `service-env.sh` to the appropriate version.
- `./warehouse.sh`
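For reference, a filled-in environment file might look like the sketch below. The variable names come from the steps above; the values are placeholders, use your own paths and the cluster's current shred version:

```shell
# Hypothetical values -- adjust for your machine and the current cluster.
EXPECTED_SHRED_VERSION=13490          # placeholder; check the cluster's value
ledger_dir=/mnt/ledger                # leave blank on a first run
ledger_snapshots_dir=/mnt/snapshots   # leave blank on a first run
```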
To upload to Bigtable:

- Fill in the missing variables inside `<...>` in `warehouse-upload-to-storage-bucket.sh`.
- `chmod +x warehouse-upload-to-storage-bucket.sh`
- `./warehouse-upload-to-storage-bucket.sh`
To run as a continuous process under `systemctl`:

- Update the user in both `.service` files (currently set to `sol`).
- Fill in the missing variables inside `<...>` in both `.service` files.
- `cp` both files into `/etc/systemd/system`
- `sudo systemctl enable --now warehouse-upload-to-storage-bucket && sudo systemctl enable --now warehouse`
## Setting up a Google Cloud Bigtable instance

In order to import Solana's Bigtable instance, you'll first need to set up your own Bigtable instance:

- Enable the `Bigtable API` if you have not done so already, then click `Create Instance` inside the `Console`.
- Name your `Instance` and select the storage type (HDD or SSD). Set the instance ID and name to `solana-ledger`.
- Select a location → Region → Zone.
- Choose the number of `Nodes` for the cluster. Each HDD node provides 16 TB of storage (as of 09/12/21, at least 4 HDD nodes are required).
- Create the following tables with the respective column family names:

| Table ID | Column Family Name |
|---|---|
| blocks | x |
| tx | x |
| tx-by-addr | x |

It's very important to use these exact `Table ID` and `Column Family Name` values inside your Bigtable instance, or the Dataflow job will fail.
Alternatively, you can create the tables by running the following commands through the CLI:

- Update the `.cbtrc` file with the credentials of the project and Bigtable instance in which you want to do the read and write operations:

```
echo project = [PROJECT ID] > ~/.cbtrc
echo instance = [BIGTABLE INSTANCE ID] >> ~/.cbtrc
cat ~/.cbtrc
```

- Create the tables inside the Bigtable instance with the column family defined inside them:

```
cbt createtable [TABLE NAME] "families=[COLUMN FAMILY1]"
```

- When creating the tables inside the instance, remember that the transfer through Dataflow always occurs between tables having the same column family name; otherwise it will throw an error like "Requested column family not found = 1".
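Putting the two steps above together, the three tables this guide requires can be created in one short loop (assumes `cbt` is installed and `~/.cbtrc` already points at your project and instance):

```shell
# Create the three tables from the table above, each with column family "x".
for table in blocks tx tx-by-addr; do
  cbt createtable "$table" "families=x"
done

# Verify: the listing should show blocks, tx, and tx-by-addr.
cbt ls
```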
## Import Solana's Bigtable Instance

Once your Warehouse node has successfully stored ledger data for one epoch and you have set up your Bigtable instance as explained above, you are ready to import Solana's Bigtable into yours. The import is done through a Dataflow template that imports Cloud Storage SequenceFiles into Bigtable:

- Create a new `Service Account`.
- Assign the `Service Account Admin` role to it.
- Enable the `Dataflow API` in the project.
- Create the Dataflow job from the template `SequenceFile Files on Cloud Storage to Cloud Bigtable`.
- Fill in the `Required parameters`.

NOTE: For now the migration process is on demand, so before creating the Dataflow job you'll need to send an email with the service account credentials you created (xxx@xxx.iam.gserviceaccount.com) to joe@solana.com or axl@solana.com.
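If you prefer the CLI over the Console, the same Dataflow job can be launched roughly as sketched below. The template path, region, and parameter names are assumptions based on Google's public Dataflow templates; check the current template documentation before running:

```shell
# Hedged sketch: launch the SequenceFile -> Bigtable import from the CLI.
# Template location and parameter names are assumptions -- verify them first.
gcloud dataflow jobs run import-solana-blocks \
  --gcs-location gs://dataflow-templates/latest/GCS_SequenceFile_to_Cloud_Bigtable \
  --region us-central1 \
  --parameters \
bigtableProject=[PROJECT ID],\
bigtableInstanceId=solana-ledger,\
bigtableTableId=blocks,\
sourcePattern=[SOURCE BUCKET PATTERN]
```

Run one job per table (`blocks`, `tx`, `tx-by-addr`), matching the table IDs created earlier.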
## Restoring Missing Blocks

Sometimes blocks are missing from Bigtable. This is apparent on the Explorer, where the parent-slot and child-slot links won't form cycles. For example, before block 59437028 was restored, block 59437027 incorrectly listed 59437029 as its child:

- https://explorer.solana.com/block/59437029: parent is 59437028
- https://explorer.solana.com/block/59437028: missing
- https://explorer.solana.com/block/59437027: child is 59437029
The missing blocks can be restored from GCS as follows:

- Download the appropriate ledger data from GCS.
  - Not all the region buckets have all the data, but us-ny5 is a good starting point.
  - Find the bucket with the largest slot number that is smaller than the missing block. For example, block 59437028 is in 59183944.
  - Download `rocksdb.tar.bz2`:
    `~/missingBlocks/59183944$ wget https://storage.googleapis.com/mainnet-beta-ledger-us-ny5/59183944/rocksdb.tar.bz2`
  - Also note the version number in `version.txt`:
    `curl https://storage.googleapis.com/mainnet-beta-ledger-us-ny5/59183944/version.txt`
    `solana-ledger-tool 1.4.21 (src:50ebc3f4; feat:2221549166)`
- Extract the data:
  `~/missingBlocks/59183944$ tar -I lbzip2 -xf rocksdb.tar.bz2`
  - This can take a while, so use a screen session if your connection is unstable.
- Build the ledger tool from the version listed in `version.txt`:
  `~/solana$ git checkout 50ebc3f4` (you can also check out `v1.4.21`)
  `~/solana$ cd ledger-tool && ../cargo build --release`
  - The cargo script in the solana repo uses the Rust version associated with the release to solve backwards-compatibility problems.
- Check the blocks:
  `~/missingBlocks/59183944$ ~/solana/target/release/solana-ledger-tool slot 59437028 -l . | head -n 2`
  - The output should include the correct parent & child. If you get a SlotNotRooted error, see below.
- Upload the missing block(s) to Bigtable:
  `~/missingBlocks/59183944$ GOOGLE_APPLICATION_CREDENTIALS=<json credentials file with write permission> ~/solana/target/release/solana-ledger-tool bigtable upload 59437028 59437028 -l .`
  - Specify two blocks to upload a range, earlier block (smaller number) first.
  - `-l` should specify a directory that contains the rocksdb directory.
- If the previous steps produced a `SlotNotRooted` error, first run the repair-roots command:
  `~/missingBlocks/59183944$ ~/github/solana/target/release/solana-ledger-tool repair-roots --before 59437027 --until 59437029 -l .`
  - If you get `error: Found argument 'repair-roots' which wasn't expected, or isn't valid in this context`, then the ledger-tool version pre-dates the repair-roots command. Add it to your local code by cherry-picking `ddfbae2` or manually applying the changes from PR #17045.
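The "largest slot number that is smaller than the missing block" selection rule from the download step can be sketched in plain shell. The snapshot slot list here is hypothetical; in practice you would list the bucket (e.g. with `gsutil ls`) to get the real prefixes:

```shell
# Pick the largest snapshot slot that is still smaller than the missing block.
snapshots="58000000 59183944 59500000"   # hypothetical bucket prefixes
missing=59437028                         # the block we need to restore

best=0
for s in $snapshots; do
  if [ "$s" -lt "$missing" ] && [ "$s" -gt "$best" ]; then
    best=$s
  fi
done
echo "$best"   # 59183944 for this example list
```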