Guardian is a modular open-source solution that includes best-in-class identity management and decentralized ledger technology (DLT) libraries. At the heart of the Guardian solution is a sophisticated Policy Workflow Engine (PWE) that enables applications to offer a digital (or digitized) Measurement, Reporting, and Verification (MRV) requirements-based tokenization implementation.
HIP-19 · HIP-28 · HIP-29 · Report a Bug · Request a Policy or a Feature
As identified in Hedera Improvement Proposal 19 (HIP-19), each entity on the Hedera network may contain a specific identifier in the memo field for discoverability. Guardian demonstrates this by logging every transaction to a Hedera Consensus Service (HCS) Topic. By observing the HCS Topic, you can discover newly minted tokens.
In the memo field of each token mint transaction you will find a unique Hedera message timestamp. This message contains the URL of the Verifiable Presentation (VP) associated with the token. The VP can serve as a starting point from which you can traverse the entire sequence of documents produced by the Guardian policy workflow that led to the creation of the token. This includes a digital Methodology (Policy) HCS Topic, an associated Registry HCS Topic for that Policy, and a Project HCS Topic.
Please see p.17 in the FAQ for more information. This is further defined in Hedera Improvement Proposal 28 (HIP-28).
To get a local copy up and running quickly, follow the steps below. Please refer to https://docs.hedera.com/guardian for complete documentation.
Note: If you have already installed another version of Guardian, remember to perform a backup operation before upgrading.
Note: as of January 10th, 2024, the old web3.storage upload API (the main upload API before November 20, 2023) has been sunset. The new w3up service accounts/API must be used with Guardian going forward.
When building the reference implementation, you can manually build every component or run a single command with Docker.
If you build with Docker, MongoDB v6, Node.js v20, Yarn and NATS 1.12.2 will be installed and configured automatically.
The following steps need to be executed in order to start Guardian using docker:
- Clone the repo
- Configure project level .env file
- Update BC access variables
- Setup IPFS
- Build and launch with Docker
- Browse to http://localhost:3000
The steps are described in detail below.
git clone https://github.com/hashgraph/guardian.git
The main configuration file that needs to be provided to the Guardian system is the .env file. Copy the .env.template file and rename it to .env.
Here you may choose the name of the Guardian platform. Leave the field empty or unspecified if you are updating a production environment, to keep the previous data (for more details read here).
For this example, let's name the Guardian platform "develop":
GUARDIAN_ENV="develop"
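The copy-and-rename step can be sketched as a shell session. The scratch directory and the stand-in template content below are illustrative only; the real `.env.template` lives in the repo root and contains more variables:

```shell
# Work in a scratch directory so the sketch is self-contained.
mkdir -p /tmp/guardian-env-sketch && cd /tmp/guardian-env-sketch

# Stand-in for the repo's .env.template (the real file has more variables).
printf 'GUARDIAN_ENV=""\n' > .env.template

# Copy the template, rename it to .env, and set the platform name.
cp .env.template .env
sed -i 's/^GUARDIAN_ENV=.*/GUARDIAN_ENV="develop"/' .env
grep '^GUARDIAN_ENV' .env
```

In the real repository, the same `cp` and edit are performed on the root-level `.env.template`.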
NOTE: Every service's folder contains its own .env.template file; this set of files is only needed for manual installation.
Update the following files with your Hedera Testnet account info (see prerequisites) as indicated. Please check the complete steps to generate the Operator_ID and Operator_Key at the link: How to Create Operator_ID and Operator_Key.
The Operator_ID, Operator_Key and HEDERA_NET are all that Guardian needs to access the Hedera network by assuming a role on it. These parameters need to be configured in a file in the ./configs directory, using the following naming convention:
./configs/.env.\<GUARDIAN_ENV\>.guardian.system
There will be other steps in the Demo Usage Guide required for the generation of the Operator_ID and Operator_Key. It is important to mention that the Operator_ID and Operator_Key in ./configs/.env.<GUARDIAN_ENV>.guardian.system will be used to generate demo accounts.
The parameter HEDERA_NET may assume one of the following values: mainnet, testnet, previewnet or localnode. Choose the value matching the target Hedera network on which the OPERATOR_ID has been defined.
For example, following the previous naming convention, the file to configure should be named ./configs/.env.develop.guardian.system. This file is already provided in the folder as an example; only update the variables OPERATOR_ID, OPERATOR_KEY and HEDERA_NET.
OPERATOR_ID="..."
OPERATOR_KEY="..."
HEDERA_NET="..."
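A quick sanity check for the HEDERA_NET value can be sketched in shell. The hard-coded value below is illustrative; in practice, source the variable from your ./configs file:

```shell
# Illustrative value; in practice read HEDERA_NET from the configs file.
HEDERA_NET="testnet"

# HEDERA_NET must be one of the four supported networks.
case "$HEDERA_NET" in
  mainnet|testnet|previewnet|localnode)
    echo "HEDERA_NET ok: $HEDERA_NET" ;;
  *)
    echo "invalid HEDERA_NET: $HEDERA_NET" ;;
esac
```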
Starting from the Multi-environment release (2.13.0), a new parameter PREUSED_HEDERA_NET has been introduced.
Multi-environment is a breaking change, and this parameter is intended to smooth the upgrade.
The PREUSED_HEDERA_NET configuration depends on the installation context:
- If the installation is a completely new one, just remove the parameter and jump to the next paragraph.
- If you are upgrading from a release at or after the Multi-environment release (>= 2.13.0), do not change the state of this parameter (if you removed the parameter in a previous installation, do not reintroduce it).
- If you are upgrading from a release prior to the Multi-environment release (< 2.13.0) to a later one, you need to configure PREUSED_HEDERA_NET. Once set, the parameter stays in the configuration unchanged.
The PREUSED_HEDERA_NET parameter holds the target Hedera network to which the system already started notarizing data: it is the reference to the HEDERA_NET that was in use before the upgrade.
To let the Multi-environment transition happen transparently, the GUARDIAN_ENV parameter in the .env file has to be left empty, while PREUSED_HEDERA_NET has to be set to the same value as the HEDERA_NET parameter in the previous configuration file.
PREUSED_HEDERA_NET never needs to be changed after the first initialization. HEDERA_NET, on the contrary, may be changed to work with any of the different Hedera networks.
- First example:
When upgrading from a release earlier than 2.13.0 to a later one while keeping the same HEDERA_NET="mainnet" (as an example), configure the name of the Guardian platform as empty in the .env file:
GUARDIAN_ENV=""
In this case the configuration is stored in the file named ./configs/.env..guardian.system, which is already provided in the folder as an example; update the variables OPERATOR_ID and OPERATOR_KEY.
OPERATOR_ID="..."
OPERATOR_KEY="..."
PREUSED_HEDERA_NET is the reference to your previous HEDERA_NET configuration, so set its value to match that previous configuration:
HEDERA_NET="mainnet"
PREUSED_HEDERA_NET="mainnet"
Here HEDERA_NET keeps pointing to "mainnet", as it did in the previous installation.
- Second example: to test the new release, change HEDERA_NET to "testnet". This is the complete configuration:
Set the name of the Guardian platform to any descriptive name in the .env file:
GUARDIAN_ENV="testupgrading"
In this case the configuration is stored in the file named ./configs/.env.testupgrading.guardian.system; again, update the variables OPERATOR_ID and OPERATOR_KEY using your testnet account.
OPERATOR_ID="..."
OPERATOR_KEY="..."
Set HEDERA_NET="testnet" and set PREUSED_HEDERA_NET to refer to the mainnet, so that the mainnet data remains unchanged:
HEDERA_NET="testnet"
PREUSED_HEDERA_NET="mainnet"
This configuration leaves all the data referring to mainnet untouched in the database while testing on testnet. Refer to the Guardian documentation for more details.
NOTE: You can use the Schema Topic ID (
INITIALIZATION_TOPIC_ID
) already present in the configuration files, or you can specify your own.
NOTE: for any other GUARDIAN_ENV name of your choice, just copy the file ./configs/.env.template.guardian.system and rename it as ./configs/.env.<chosen name>.guardian.system
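How the per-environment filename is derived from GUARDIAN_ENV can be sketched as follows; note the double dot that results when GUARDIAN_ENV is empty (the pre-2.13.0 upgrade case described above):

```shell
# The config file name embeds GUARDIAN_ENV between two dots.
GUARDIAN_ENV="develop"
echo "./configs/.env.${GUARDIAN_ENV}.guardian.system"

# With an empty GUARDIAN_ENV the dots collapse together.
GUARDIAN_ENV=""
echo "./configs/.env.${GUARDIAN_ENV}.guardian.system"
```

This prints `./configs/.env.develop.guardian.system` and `./configs/.env..guardian.system` respectively.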
4. Now, we have three options to set up an IPFS node: 1. Local node 2. IPFS Web3Storage node 3. Filebase Bucket.
- 4.1.1 We need to install and configure an IPFS node (example).
- 4.1.2 To set up the IPFS local node, set the following variables in the same file ./configs/.env.develop.guardian.system:
IPFS_NODE_ADDRESS="..." # Default IPFS_NODE_ADDRESS="http://localhost:5001"
IPFS_PUBLIC_GATEWAY='...' # Default IPFS_PUBLIC_GATEWAY='https://localhost:8080/ipfs/${cid}'
IPFS_PROVIDER="local"
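The `${cid}` placeholder in IPFS_PUBLIC_GATEWAY is substituted with a content identifier when a gateway URL is built. A sketch of that substitution (the CID value is a placeholder, and simple text substitution is an illustration, not Guardian's exact internal logic):

```shell
# Gateway template as configured above; single quotes keep ${cid} literal.
IPFS_PUBLIC_GATEWAY='https://localhost:8080/ipfs/${cid}'

# Placeholder CID for illustration only.
cid="QmExampleCid123"

# Substitute the placeholder to get the final gateway URL.
url=$(printf '%s\n' "$IPFS_PUBLIC_GATEWAY" | sed "s/\${cid}/$cid/")
echo "$url"
```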
To select the Web3.Storage option, ensure that the IPFS_PROVIDER="web3storage" setting exists in your ./configs/.env.<environment>.guardian.system file.
To configure access to the w3up IPFS upload API from web3.storage for your Guardian instance, set correct values for the following variables in the ./configs/.env.<environment>.guardian.system file:
IPFS_STORAGE_KEY="..."
IPFS_STORAGE_PROOF="..."
NOTE: When Windows OS is used for creating the IPFS values, please use bash shell to prevent issues with base64 encoding.
To obtain the values for these variables please follow the steps below:
- Create an account on https://web3.storage. Please specify an email you have access to, as account authentication is based on email validation. Make sure to follow the registration process to the end: choose an appropriate billing plan for your needs (e.g. 'starter') and enter your payment details.
- Install w3cli as described in the corresponding section of the web3.storage documentation.
- Create your 'space' as described in the 'Create your first space' section of the documentation.
- Execute the following to select the Space you intend to delegate access to:
w3 space use
- Execute the following command to retrieve your Agent private key and DID:
npx ucan-key ed
The private key (starting with Mg...) is the value to be used in the environment variable IPFS_STORAGE_KEY.
- Retrieve the PROOF by executing the following:
w3 delegation create <did_from_ucan-key_command_above> | base64
The output of this command is the value to be used in the environment variable IPFS_STORAGE_PROOF.
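The `| base64` step above simply base64-encodes the binary delegation produced by `w3 delegation create`. The encoding itself can be demonstrated with placeholder bytes (the real input is the delegation archive emitted by the w3 CLI):

```shell
# Placeholder bytes standing in for the binary delegation output.
printf 'example-delegation-bytes' | base64
```

The resulting single-line base64 string is what goes into IPFS_STORAGE_PROOF.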
To summarise, the process of configuring delegated access to the w3up API consists of executing the following command sequence:
w3 login
w3 space create
w3 space use
npx ucan-key ed
w3 delegation
The complete guide to using the new w3up web3.storage API is available at https://web3.storage/docs/w3up-client.
To configure the Filebase IPFS provider, set the following variables in the file ./configs/.env.<environment>.guardian.system
IPFS_STORAGE_API_KEY="Generated Filebase Bucket Token"
IPFS_PROVIDER="filebase"
Create a new "bucket" on Filebase, since we utilize the IPFS Pinning Service API endpoint. The token generated for a bucket corresponds to the IPFS_STORAGE_API_KEY environment variable in the Guardian configuration.
For detailed setup instructions, refer to the official documentation: https://docs.filebase.com/api-documentation/ipfs-pinning-service-api.
For setting up AI and Guided Search, set the OPENAI_API_KEY variable in the ./configs/.env* files.
OPENAI_API_KEY="..."
6. Build and launch with Docker. Please note that this build is meant to be used in production and will not contain any debug information. From the project's root folder:
docker compose up -d --build
NOTE: About docker-compose: since the end of June 2023, Compose V1 is no longer supported and is being removed from all Docker Desktop versions. Make sure you use Docker Compose V2 (comes with Docker Desktop > 3.6.0), as described at https://docs.docker.com/compose/install/
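One way to check which Compose generation is installed is to inspect the version banner. This sketch parses a sample banner string; in practice, replace it with the real output of `docker compose version`:

```shell
# Sample banner; in practice use: sample="$(docker compose version 2>/dev/null)"
sample="Docker Compose version v2.24.5"

# V2 reports "Docker Compose version v2.x"; V1 reported "docker-compose version 1.x".
case "$sample" in
  *"version v2"*) echo "Compose V2 detected" ;;
  *)              echo "Compose V2 not found - see docs.docker.com/compose/install" ;;
esac
```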
7. Browse to http://localhost:3000 and complete the setup.
For other examples, go to:
- Deploying Guardian using a specific environment (DEVELOP)
- Steps to deploy Guardian using a specific environment (QA)
- Steps to deploy Guardian using default Environment
If you want to manually build every component with debug information, build and run the services and packages in the following sequence: Interfaces, Logger Helper, Message Broker, Logger Service, Auth Service, IPFS, Guardian Service, UI Service, and lastly, the MRV Sender Service. See below for commands.
Install, configure and start all the prerequisites, then build and start each component.
- For each of the services, create the file ./<service_name>/.env: copy the file ./<service_name>/.env.template and rename it.
For example, in ./guardian-service/.env:
GUARDIAN_ENV="develop"
If you need to configure OVERRIDE, uncomment the variable in the file ./guardian-service/.env:
OVERRIDE="false"
- Configure the file ./<service_name>/configs/.env.<service>.<GUARDIAN_ENV>: copy the file ./<service_name>/.env.<service>.template and rename it.
Following the previous example, in ./guardian-service/configs/.env.guardian.develop:
OPERATOR_ID="..."
OPERATOR_KEY="..."
- Setting up the ChatGPT API key to enable AI Search and Guided Search: set the OPENAI_API_KEY variable in the ./ai-service/configs/.env* files.
OPENAI_API_KEY="..."
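The per-service copy-and-rename steps above can be sketched as a loop. The scratch directory, the stand-in template contents, and the three-service list are illustrative only; the real repository contains more services and richer templates:

```shell
GUARDIAN_ENV="develop"
root=/tmp/guardian-manual-sketch   # stand-in for the repo checkout

# Illustrative subset of the services; the real repo has more.
for svc in logger-service auth-service guardian-service; do
  mkdir -p "$root/$svc"
  printf 'GUARDIAN_ENV=""\n' > "$root/$svc/.env.template"   # stand-in template
  # Copy the template to .env and set the environment name.
  sed "s/^GUARDIAN_ENV=.*/GUARDIAN_ENV=\"$GUARDIAN_ENV\"/" \
    "$root/$svc/.env.template" > "$root/$svc/.env"
done

# Every service should now carry the same GUARDIAN_ENV.
grep -h '^GUARDIAN_ENV' "$root"/*/.env | sort -u
```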
NOTE: Once you start each service, please wait for the initialization process to be completed.
git clone https://github.com/hashgraph/guardian.git
Yarn:
yarn
Npm:
npm install
Yarn:
yarn workspace @guardian/interfaces run build
Npm:
npm --workspace=@guardian/interfaces run build
Yarn:
yarn workspace @guardian/common run build
Npm:
npm --workspace=@guardian/common run build
To build the service:
Yarn:
yarn workspace logger-service run build
Npm:
npm --workspace=logger-service run build
Configure the service as previously described. No special configuration variables are needed.
To start the service:
Yarn:
yarn workspace logger-service start
Npm:
npm --workspace=logger-service start
To build the service:
Yarn:
yarn workspace auth-service run build
Npm:
npm --workspace=auth-service run build
Configure the service as previously described. No special configuration variables are needed.
To start the service:
Yarn:
yarn workspace auth-service start
Npm:
npm --workspace=auth-service start
To build the service:
Yarn:
yarn workspace policy-service run build
Npm:
npm --workspace=policy-service run build
Configure the service as previously described. No special configuration variables are needed.
To start the service:
Yarn:
yarn workspace policy-service start
Npm:
npm --workspace=policy-service start
To build the service:
Yarn:
yarn workspace worker-service run build
Npm:
npm --workspace=worker-service run build
Configure the service as previously described. Update the IPFS_STORAGE_API_KEY value in the ./worker-service/configs/.env.worker file.
To start the service:
Yarn:
yarn workspace worker-service start
Npm:
npm --workspace=worker-service start
To build the service:
Yarn:
yarn workspace notification-service run build
Npm:
npm --workspace=notification-service run build
Configure the service as previously described. Update OPERATOR_ID and OPERATOR_KEY values in ./guardian-service/configs/.env.worker
file as in the example above.
To start the service (found on http://localhost:3002):
Yarn:
yarn workspace notification-service start
Npm:
npm --workspace=notification-service start
To build the service:
Yarn:
yarn workspace guardian-service run build
Npm:
npm --workspace=guardian-service run build
Configure the service as previously described. Update OPERATOR_ID and OPERATOR_KEY values
in ./guardian-service/configs/.env.worker
file as in the example above.
To start the service (found on http://localhost:3002):
Yarn:
yarn workspace guardian-service start
Npm:
npm --workspace=guardian-service start
To build the service:
Yarn:
yarn workspace api-gateway run build
Npm:
npm --workspace=api-gateway run build
Configure the service as previously described. No special configuration variables are needed.
To start the service (found on http://localhost:3002):
Yarn:
yarn workspace api-gateway start
Npm:
npm --workspace=api-gateway start
To build the service:
```shell
npm install
npm run build
```
Configure the service as previously described. No special configuration variables are needed.
To start the service (found on <http://localhost:3005>):
```shell
npm start
```
To build the service:
Yarn:
yarn workspace ai-service run build
Npm:
npm --workspace=ai-service run build
Configure the service as previously described. No special configuration variables are needed.
Yarn:
yarn workspace ai-service start
Npm:
npm --workspace=ai-service start
To build the service:
```shell
npm install
npm run build
```
To start the service (found on <http://localhost:4200>):
```shell
npm start
```
1. Install a Hedera Local Network following the official documentation
OPERATOR_ID=""
OPERATOR_KEY=""
LOCALNODE_ADDRESS="11.11.11.11"
LOCALNODE_PROTOCOL="http"
HEDERA_NET="localnode"
Note:
- Set LOCALNODE_ADDRESS to the IP address of your local node instance. The value above is given as an example.
- Set HEDERA_NET to localnode. If not specified, the default value is testnet.
- Configure OPERATOR_ID and OPERATOR_KEY according to your local node configuration.
- Remove INITIALIZATION_TOPIC_ID, as the topic will be created automatically.
- Set LOCALNODE_PROTOCOL to http or https according to your local node configuration (it uses HTTP by default).
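Assuming the endpoint is composed by simply joining the two LOCALNODE_* variables (an illustration of the configuration's intent, not Guardian's exact internal logic), the resulting local node URL looks like:

```shell
# Values from the example configuration above.
LOCALNODE_PROTOCOL="http"
LOCALNODE_ADDRESS="11.11.11.11"

# Illustrative composition of the local node endpoint.
echo "${LOCALNODE_PROTOCOL}://${LOCALNODE_ADDRESS}"
```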
VAULT_PROVIDER = "hashicorp"
Note: VAULT_PROVIDER can be set to "database" or "hashicorp" to select a database instance or a Hashicorp Vault instance, respectively.
If the VAULT_PROVIDER value is set to "hashicorp", the following three parameters should be configured in the auth-service folder:
- HASHICORP_ADDRESS: http://localhost:8200 when using a local vault. For a remote vault, use the value from the configuration settings of the Hashicorp Vault service.
- HASHICORP_TOKEN: the token from the Hashicorp Vault.
- HASHICORP_WORKSPACE: only needed when using the cloud vault for Hashicorp. The default value is "admin".
2. Hashicorp should be configured with the created Key-Value storage, named "secret" by default, with Key=Value records for the following keys:
1. OPERATOR_ID
2. OPERATOR_KEY
3. IPFS_STORAGE_API_KEY
Note: These records in the vault will be created automatically if there are environment variables with the matching names.
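For example, exporting the matching environment variables before startup is enough for the records to be auto-created; the values below are placeholders:

```shell
# Placeholder values; use your real credentials in practice.
export OPERATOR_ID="0.0.12345"
export OPERATOR_KEY="302e0201..."
export IPFS_STORAGE_API_KEY="placeholder-token"

# Confirm all three variables are visible in the environment (prints 3).
env | grep -cE '^(OPERATOR_ID|OPERATOR_KEY|IPFS_STORAGE_API_KEY)='
```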
To import existing user keys from the DB into the vault, set the following configuration settings in the auth-service folder during Guardian services initialization:
IMPORT_KEYS_FROM_DB = 1
VAULT_PROVIDER = "hashicorp"
cp .env.example .env
docker compose -f docker-compose-dev.yml up --build
3. Access local development using http://localhost:3000 or http://localhost:4200
To clean up the Docker build cache:
docker builder prune --all
To rebuild without using the Docker cache:
docker compose build --no-cache
To run guardian-service unit tests, the following commands need to be executed:
cd guardian-service
npm run test
It is also possible to run Hedera network tests only. To do that, execute the following command:
npm run test:network
To run stability tests (certain transactions will be executed 10 times each), the following command needs to be executed:
npm run test:stability
Please refer to https://docs.hedera.com/guardian for complete documentation about the following topics:
- Swagger API
- Postman Collection
- Demo Usage guide
- Contribute a New Policy
- Reference Implementation
- Technologies Built on
- Roadmap
- Change Log
- Contributing
- License
- Security
For any questions, please reach out to the Envision Blockchain Solutions team at:
- Website: <www.envisionblockchain.com>
- Email: info@envisionblockchain.com