Mainnet karlsenhashv2 #58

Merged
48 commits merged on Aug 29, 2024

Commits
707297b
First fishhash tests
Dec 26, 2023
3f0537b
refresh seeder url
Dec 27, 2023
a7eaaea
add minor version
Dec 27, 2023
f239d00
fix logging
Dec 27, 2023
4c1311d
Merge pull request #2 from karlsen-network/master
wam-rd Dec 27, 2023
17b616d
vers
Dec 29, 2023
59e81f9
Fix issue on word32 usage in kernel mixing step
Dec 29, 2023
f19c8a5
Fix the dataset and light cache generation.
Jan 4, 2024
7e8bc86
Last fixes on kernel hash
Jan 5, 2024
b6f296a
Code cleanup and dag generation correct logging
Jan 21, 2024
223b0cd
Added lock on mainnet connection
Jan 21, 2024
b9c6d1a
Merge pull request #29 from wam-rd/fishhash
lemois-1337 Jan 21, 2024
c093cef
Merge pull request #5 from karlsen-network/master
okilisan Mar 13, 2024
cd5888a
Fishhash plus implementation with hard fork procedure
Mar 19, 2024
0bed91d
Remove blocking tests on block version because node contains both alg…
Mar 31, 2024
a8405b3
hard fork procedure from khashv1 to khashv2
Jun 26, 2024
5dea9c1
fix critical bug in matrix generation
Jun 28, 2024
e7ce09b
fix lint issues
Jun 28, 2024
39f0f3f
fix lint issues 2
Jun 28, 2024
4ab6b22
HF procedure with diff adjustment
okilisan Aug 11, 2024
687bb34
Merge pull request #52 from okilisan/mainnet_karlsenhashv2
lemois-1337 Aug 23, 2024
5707e8c
Removed periodic race detection workflow
lemois-1337 Aug 23, 2024
e99a2e1
Added 'HFDAAScore' to 'simnet' to pass tests
lemois-1337 Aug 23, 2024
d1e678a
align with rusty block version test
okilisan Aug 23, 2024
866f1d6
Merge pull request #53 from okilisan/mainnet_karlsenhashv2
lemois-1337 Aug 24, 2024
0d50f2b
Merge remote-tracking branch 'refs/remotes/origin/mainnet_karlsenhash…
lemois-1337 Aug 24, 2024
e135ccd
Fixed pruning_test in simnet and devnet genesis from Rust node
lemois-1337 Aug 25, 2024
0cab1d1
Fixed remaining integration tests and Go modules update
lemois-1337 Aug 26, 2024
75d4ccf
Use 4-char abbreviation as rest of KLS logging system (POW->POWK)
lemois-1337 Aug 26, 2024
32a6946
Increase windows runner pagefile to 32gb
lemois-1337 Aug 26, 2024
f8d8eae
Remove Go cache in test workflow due to its constant failures
lemois-1337 Aug 26, 2024
e34b69a
Increase code coverage timeout to 120m due to khashv2.
lemois-1337 Aug 26, 2024
98b0730
Increase timeout in integration tests and sequential execution
lemois-1337 Aug 26, 2024
03a8258
Fixed 'BlockVersionKHashV2' in debug output and removed linebreak
lemois-1337 Aug 27, 2024
629e525
Partially revert e135ccd6ca1dafef9fd06c72639793fe6708647e:
lemois-1337 Aug 27, 2024
2ff2931
Moving khashv2 pre-computed dag file during stability tests
lemois-1337 Aug 27, 2024
8862cc3
Partially revert 0cab1d18a7b5dad418f2aa7683d4af39ac19bb99
lemois-1337 Aug 27, 2024
23b175a
Give orphans stability test more time to process blocks
lemois-1337 Aug 27, 2024
c56f01e
Increase Linux swapfile size in GitHub runner to avoid OOM
lemois-1337 Aug 28, 2024
f2d4fb4
Merge pull request #54 from lemois-1337/mainnet_karlsenhashv2
lemois-1337 Aug 28, 2024
29726b7
Increase swap size for code coverage to support khashv2
lemois-1337 Aug 28, 2024
89f8351
Version bump to 2.1.0 for khashv2
lemois-1337 Aug 28, 2024
68bd3ca
Merge pull request #55 from lemois-1337/mainnet_karlsenhashv2
lemois-1337 Aug 28, 2024
c2d369f
Mainnet HFDAAScore set to 26962009 to switch to khashv2
lemois-1337 Aug 28, 2024
5582011
Updated README.md and added khashv2 paragraph
lemois-1337 Aug 29, 2024
825a94c
Merge pull request #56 from lemois-1337/mainnet_karlsenhashv2
lemois-1337 Aug 29, 2024
bc65e01
Re-enable mainnet sync
lemois-1337 Aug 29, 2024
a133806
Merge pull request #57 from lemois-1337/mainnet_karlsenhashv2
lemois-1337 Aug 29, 2024
6 changes: 3 additions & 3 deletions .github/workflows/SetPageFileSize.ps1
@@ -11,8 +11,8 @@
#>

param(
[System.UInt64] $MinimumSize = 16gb ,
[System.UInt64] $MaximumSize = 16gb ,
[System.UInt64] $MinimumSize = 32gb ,
[System.UInt64] $MaximumSize = 32gb ,
[System.String] $DiskRoot = "D:"
)

@@ -193,4 +193,4 @@ namespace Util
Add-Type -TypeDefinition $source

# Set SetPageFileSize
[Util.PageFile]::SetPageFileSize($minimumSize, $maximumSize, $diskRoot)
[Util.PageFile]::SetPageFileSize($minimumSize, $maximumSize, $diskRoot)
48 changes: 0 additions & 48 deletions .github/workflows/race.yaml

This file was deleted.

36 changes: 26 additions & 10 deletions .github/workflows/tests.yaml
@@ -28,28 +28,36 @@ jobs:
if: runner.os == 'Windows'
run: powershell -command .github\workflows\SetPageFileSize.ps1

# Increase the swap size on Linux to avoid running out of memory
- name: Increase swap size on Linux
if: runner.os == 'Linux'
uses: thejerrybao/setup-swap-space@v1
with:
swap-size-gb: 12

- name: Setup Go
uses: actions/setup-go@v4
with:
go-version: 1.21

# Source: https://github.com/actions/cache/blob/main/examples.md#go---modules
- name: Go Cache
uses: actions/cache@v4
with:
path: ~/go/pkg/mod
key: ${{ runner.os }}-go-${{ hashFiles('**/go.sum') }}
restore-keys: |
${{ runner.os }}-go-

- name: Test
shell: bash
env:
NO_PARALLEL: 1
run: ./build_and_test.sh

stability-test-fast:
runs-on: ubuntu-latest
name: Fast stability tests
steps:

# Increase the swap size on Linux to avoid running out of memory
- name: Increase swap size on Linux
if: runner.os == 'Linux'
uses: thejerrybao/setup-swap-space@v1
with:
swap-size-gb: 12

- name: Setup Go
uses: actions/setup-go@v4
with:
@@ -71,6 +79,14 @@
runs-on: ubuntu-latest
name: Produce code coverage
steps:

# Increase the swap size on Linux to avoid running out of memory
- name: Increase swap size on Linux
if: runner.os == 'Linux'
uses: thejerrybao/setup-swap-space@v1
with:
swap-size-gb: 12

- name: Check out code into the Go module directory
uses: actions/checkout@v4

@@ -83,7 +99,7 @@
run: rm -r stability-tests

- name: Create coverage file
run: go test -v -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...
run: go test -timeout 120m -parallel=1 -v -covermode=atomic -coverpkg=./... -coverprofile coverage.txt ./...

- name: Upload coverage file
run: bash <(curl -s https://codecov.io/bash)
43 changes: 34 additions & 9 deletions README.md
@@ -38,18 +38,15 @@ miners. We will ensure long-term GPU-friendly mining.
### Hashing Function

We initially started with `kHeavyHash` and `blake3` modifications
on-top. This algorithm is called `KarlsenHashv1`. However `kHeavyHash`
and `blake3` are not future proof in ASIC resistence. Therefore we've
launched already our `testnet-1` with [FishHash](https://github.com/iron-fish/fish-hash/blob/main/FishHash.pdf).
It is the worlds first implementation of FishHash with Golang in a
1bps blockchain.
on-top. This algorithm is called `KarlsenHashv1`.

`KarlsenHashv1` is currently used in [mainnet](https://github.com/karlsen-network/karlsend/releases/tag/v1.1.0)
and can be mined using the following miners maintained by the Karlsen
developers:

* Built-in CPU miner from `karlsend`
* Karlsen [GPU miner](https://github.com/karlsen-network/karlsen-miner) as reference implementation of `kHeavyHash` with `blake3`.
* Karlsen [GPU miner](https://github.com/karlsen-network/karlsen-miner)
as reference implementation of `kHeavyHash` with `blake3`.

The following third-party miners are available and have added
`KarlsenHashv1`:
@@ -61,14 +58,42 @@ The following third-party miners are available and have added
* [Rigel](https://github.com/rigelminer/rigel)
* [GMiner](https://github.com/develsoftware/GMinerRelease)

`KarlsenHashv2` is currently being investigated and tested in [testnet-1](https://github.com/karlsen-network/karlsend/releases/tag/v2.0.0-testnet-1-fishhash)
`KarlsenHashv2` will become active via hardfork at DAA score `26.962.009`.
It is based on [FishHash](https://github.com/iron-fish/fish-hash/blob/main/FishHash.pdf)
written from scratch in our Golang node implementation. It is FPGA/ASIC
resistant. It is the world's first implementation of FishHash in Golang
on `mainnet` in a 1bps blockchain.

`KarlsenHashv2` is currently used in [mainnet](https://github.com/karlsen-network/karlsend/releases/tag/v2.1.0)
and can be mined using the following miners maintained by the Karlsen
developers:

* Built-in CPU miner from `karlsend`
* Karlsen [GPU miner](https://github.com/wam-rd/karlsen-miner/releases/tag/v2.0.0-alpha) as bleeding edge and unoptimized reference implementation of FishHash.
* Karlsen [GPU miner](https://github.com/karlsen-network/karlsen-miner/releases/tag/v2.0.0)
as a bleeding-edge, unoptimized reference implementation of
`KarlsenHashv2`. Please follow the steps in the [README.md](https://github.com/karlsen-network/karlsen-miner/blob/main/README.md)
to generate a DAG file.

The following third-party miners are available and have added
`KarlsenHashv2`:

* [SRBMiner](https://github.com/doktor83/SRBMiner-Multi)
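
The hard-fork switch described above keys off the block DAA score. The sketch
below illustrates the idea in Go; it is hypothetical rather than karlsend code.
Only the mainnet fork score 26962009 (`HFDAAScore`) is taken from this PR, and
the two hash routines are stand-ins.

```go
// Hypothetical sketch of a DAA-score-driven PoW switch. All identifiers
// here are placeholders, not actual karlsend APIs.
package main

import (
	"crypto/sha256"
	"fmt"
)

// Stand-ins for the real KarlsenHashv1 (kHeavyHash + blake3) and
// KarlsenHashv2 (FishHash-based) routines.
func karlsenHashV1(data []byte) [32]byte { return sha256.Sum256(append([]byte("v1:"), data...)) }
func karlsenHashV2(data []byte) [32]byte { return sha256.Sum256(append([]byte("v2:"), data...)) }

// powHash switches algorithms once the block's DAA score reaches the
// configured hard-fork score (26962009 on mainnet per this PR).
func powHash(daaScore, hfDAAScore uint64, header []byte) [32]byte {
	if daaScore >= hfDAAScore {
		return karlsenHashV2(header)
	}
	return karlsenHashV1(header)
}

func main() {
	const hfDAAScore = 26962009
	fmt.Printf("pre-fork:  %x\n", powHash(hfDAAScore-1, hfDAAScore, []byte("header")))
	fmt.Printf("post-fork: %x\n", powHash(hfDAAScore, hfDAAScore, []byte("header")))
}
```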

### DAG Generation

To start mining with the built-in CPU miner, a pre-generated DAG file
is needed. The `KarlsenHashv2` miner uses a 4GB DAG for efficient mining.
It generates this DAG with 8 CPU threads and saves it as `hashes.dat`
for faster subsequent runs.

* First Run: Generates a 4GB DAG using 8 CPU threads. This may take
time depending on your computer. Saves the DAG as `hashes.dat` for
future use.
* Next Runs: Loads `hashes.dat` to skip DAG generation, speeding up
startup.

There are no third-party miners available as of now.
If you need to regenerate the DAG, delete `hashes.dat` and run
`karlsenminer` again.
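
A minimal sketch of the load-or-generate flow just described: the `hashes.dat`
name, the roughly 4GB size, and the 8 worker threads come from this README
section, while everything else is hypothetical and not the karlsen-miner
implementation.

```go
// Hypothetical sketch of the DAG caching flow; not karlsen-miner code.
package main

import (
	"fmt"
	"os"
	"sync"
)

const dagFile = "hashes.dat"

// loadOrGenerateDAG reuses hashes.dat when present, otherwise it builds
// the dataset with the given number of workers and caches it to disk.
func loadOrGenerateDAG(size, workers int) ([]byte, error) {
	if data, err := os.ReadFile(dagFile); err == nil && len(data) == size {
		return data, nil // fast path: skip generation on later runs
	}
	dag := make([]byte, size)
	chunk := (size + workers - 1) / workers
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(start int) {
			defer wg.Done()
			for i := start; i < start+chunk && i < size; i++ {
				dag[i] = byte(i) // placeholder for the real item derivation
			}
		}(w * chunk)
	}
	wg.Wait()
	return dag, os.WriteFile(dagFile, dag, 0o644) // cache for the next run
}

func main() {
	// Tiny size for the sketch; the real KarlsenHashv2 DAG is about 4GB.
	dag, err := loadOrGenerateDAG(1<<20, 8)
	if err != nil {
		panic(err)
	}
	fmt.Println("DAG ready, bytes:", len(dag))
}
```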

## Smart Contracts

2 changes: 2 additions & 0 deletions app/app.go
@@ -7,6 +7,7 @@ import (
"runtime"
"time"

"github.com/karlsen-network/karlsend/domain/consensus/utils/pow"
"github.com/karlsen-network/karlsend/infrastructure/config"
"github.com/karlsen-network/karlsend/infrastructure/db/database"
"github.com/karlsen-network/karlsend/infrastructure/db/database/ldb"
@@ -82,6 +83,7 @@ func (app *karlsendApp) main(startedChan chan<- struct{}) error {

// Show version at startup.
log.Infof("Version %s", version.Version())
log.Infof("Using KarlsenHashV2 impl: %s", pow.GetHashingAlgoVersion())

// Enable http profiling server if requested.
if app.cfg.Profile != "" {
3 changes: 3 additions & 0 deletions app/protocol/manager.go
@@ -70,6 +70,9 @@ func (m *Manager) AddTransaction(tx *externalapi.DomainTransaction, allowOrphan

// AddBlock adds the given block to the DAG and propagates it.
func (m *Manager) AddBlock(block *externalapi.DomainBlock) error {
//TODO switch this to debug level
log.Infof("NEW BLOCK ADDED ***************************************")
log.Infof("BlueWork[%s] BlueScore[%d] DAAScore[%d] Bits[%d] Version[%d]", block.Header.BlueWork(), block.Header.BlueScore(), block.Header.DAAScore(), block.Header.Bits(), block.Header.Version())
return m.context.AddBlock(block)
}

4 changes: 2 additions & 2 deletions build_and_test.sh
@@ -24,7 +24,7 @@ go build -v -o karlsend .

# check if parallel tests are enabled.
[ -n "${NO_PARALLEL}" ] && {
go test -timeout 20m -parallel=1 -v ./...
go test -timeout 30m -parallel=1 -v ./...
} || {
go test -timeout 20m -v ./...
go test -timeout 30m -v ./...
}
2 changes: 2 additions & 0 deletions cmd/karlsenminer/main.go
@@ -12,6 +12,7 @@ import (

_ "net/http/pprof"

"github.com/karlsen-network/karlsend/domain/consensus/utils/pow"
"github.com/karlsen-network/karlsend/infrastructure/os/signal"
"github.com/karlsen-network/karlsend/util/panics"
"github.com/karlsen-network/karlsend/util/profiling"
@@ -29,6 +30,7 @@ func main() {

// Show version at startup.
log.Infof("Version %s", version.Version())
log.Infof("Using KarlsenHashV2 impl: %s", pow.GetHashingAlgoVersion())

// Enable http profiling server if requested.
if cfg.Profile != "" {
17 changes: 15 additions & 2 deletions cmd/karlsenminer/mineloop.go
@@ -19,6 +19,7 @@ import (
)

var hashesTried uint64
var dagReady = false

const logHashRateInterval = 10 * time.Second

@@ -97,6 +98,12 @@ func logHashRate() {
spawn("logHashRate", func() {
lastCheck := time.Now()
for range time.Tick(logHashRateInterval) {

if !dagReady {
log.Infof("Generating DAG, please wait ...")
continue
}

currentHashesTried := atomic.LoadUint64(&hashesTried)
currentTime := time.Now()
kiloHashesTried := float64(currentHashesTried) / 1000.0
@@ -138,7 +145,11 @@ func handleFoundBlock(client *minerClient, block *externalapi.DomainBlock) error
func mineNextBlock(mineWhenNotSynced bool) *externalapi.DomainBlock {
nonce := rand.Uint64() // Use the global concurrent-safe random source.
for {
if !dagReady {
continue
}
nonce++
//fmt.Printf("mineNextBlock -- log1\n")
// For each nonce we try to build a block from the most up to date
// block template.
// In the rare case where the nonce space is exhausted for a specific
Expand All @@ -165,7 +176,6 @@ func getBlockForMining(mineWhenNotSynced bool) (*externalapi.DomainBlock, *pow.S

for {
tryCount++

shouldLog := (tryCount-1)%10 == 0
template, state, isSynced := templatemanager.Get()
if template == nil {
@@ -207,7 +217,10 @@ func templatesLoop(client *minerClient, miningAddr util.Address, errChan chan er
errChan <- errors.Wrapf(err, "Error getting block template from %s", client.Address())
return
}
err = templatemanager.Set(template)
err = templatemanager.Set(template, backendLog)
// after first template DAG is supposed to be ready
// TODO: refresh dag status in real time
dagReady = true
if err != nil {
errChan <- errors.Wrapf(err, "Error setting block template from %s", client.Address())
return
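A side note on the `dagReady` flag added in this diff: it is a plain
package-level bool written by the template loop and read from the mining and
hash-rate goroutines, and `mineNextBlock` busy-spins on it. The sketch below
shows one race-free alternative with `sync/atomic`; it is illustrative only
and not part of this PR.

```go
// Illustrative sketch of a race-free DAG readiness gate for the miner
// loops; not the mineloop.go implementation from this PR.
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

var dagReady atomic.Bool

// markDAGReady is called once the first block template (and therefore the
// DAG) is available.
func markDAGReady() { dagReady.Store(true) }

// waitForDAG sleeps instead of busy-spinning until the DAG is ready.
func waitForDAG() {
	for !dagReady.Load() {
		time.Sleep(100 * time.Millisecond)
	}
}

func main() {
	go func() {
		time.Sleep(300 * time.Millisecond) // stand-in for DAG generation
		markDAGReady()
	}()
	waitForDAG()
	fmt.Println("DAG ready, start mining")
}
```
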
6 changes: 4 additions & 2 deletions cmd/karlsenminer/templatemanager/templatemanager.go
@@ -6,6 +6,7 @@ import (
"github.com/karlsen-network/karlsend/app/appmessage"
"github.com/karlsen-network/karlsend/domain/consensus/model/externalapi"
"github.com/karlsen-network/karlsend/domain/consensus/utils/pow"
"github.com/karlsen-network/karlsend/infrastructure/logger"
)

var currentTemplate *externalapi.DomainBlock
@@ -27,15 +28,16 @@
}

// Set sets the current template to work on
func Set(template *appmessage.GetBlockTemplateResponseMessage) error {
func Set(template *appmessage.GetBlockTemplateResponseMessage, backendLog *logger.Backend) error {
block, err := appmessage.RPCBlockToDomainBlock(template.Block)
if err != nil {
return err
}
lock.Lock()
defer lock.Unlock()
currentTemplate = block
currentState = pow.NewState(block.Header.ToMutable())
pow.SetLogger(backendLog, logger.LevelTrace)
currentState = pow.NewState(block.Header.ToMutable(), true)
isSynced = template.IsSynced
return nil
}
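`Set` now hands the miner's log backend to the `pow` package before building
the state, presumably so that DAG-generation progress (note the `POWK` logging
tag introduced elsewhere in this PR) shows up through the miner's normal
logging system. The sketch below shows the general logger-injection pattern;
every identifier in it is a placeholder, not the actual `pow` package API.

```go
// Generic sketch of injecting a caller-supplied logger into a package
// that performs long-running work such as DAG generation. Placeholder
// names only; not the karlsend pow package.
package main

import (
	"fmt"
	"log"
	"os"
)

// powLog is the package logger, with a plain default backend.
var powLog = log.New(os.Stdout, "", 0)

// SetLogger routes the package's logging through the caller's backend.
func SetLogger(l *log.Logger) { powLog = l }

// NewState optionally generates the DAG and reports progress via powLog.
func NewState(generateDAG bool) string {
	if generateDAG {
		powLog.Printf("generating DAG, please wait ...")
	}
	return "pow state"
}

func main() {
	SetLogger(log.New(os.Stderr, "POWK ", log.LstdFlags))
	fmt.Println(NewState(true))
}
```
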
2 changes: 2 additions & 0 deletions domain/consensus/factory.go
@@ -349,6 +349,7 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
config.MaxBlockParents,
config.TimestampDeviationTolerance,
config.TargetTimePerBlock,
config.HFDAAScore,
config.MaxBlockLevel,

dbManager,
@@ -397,6 +398,7 @@ func (f *factory) NewConsensus(config *Config, db infrastructuredatabase.Databas
blockBuilder := blockbuilder.New(
dbManager,
genesisHash,
config.HFDAAScore,

difficultyManager,
pastMedianTimeManager,
@@ -10,4 +10,5 @@ type DifficultyManager interface {
StageDAADataAndReturnRequiredDifficulty(stagingArea *StagingArea, blockHash *externalapi.DomainHash, isBlockWithTrustedData bool) (uint32, error)
RequiredDifficulty(stagingArea *StagingArea, blockHash *externalapi.DomainHash) (uint32, error)
EstimateNetworkHashesPerSecond(startHash *externalapi.DomainHash, windowSize int) (uint64, error)
GenesisDifficulty() uint32
}
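The `HFDAAScore` threaded into the difficulty manager in `factory.go`, together
with the new `GenesisDifficulty()` accessor above, suggests the difficulty is
temporarily reset around the fork, where the DAA window still reflects
`khashv1` hashrate. The sketch below is only a guess at that idea; the reset
window and all identifiers are assumptions, not the karlsend implementation.

```go
// Illustrative sketch of resetting difficulty around the hard-fork DAA
// score. The reset window and all identifiers are assumptions, not the
// karlsend implementation.
package main

import "fmt"

type difficultyManager struct {
	hfDAAScore  uint64
	genesisBits uint32
}

// GenesisDifficulty returns the compact genesis target bits.
func (dm *difficultyManager) GenesisDifficulty() uint32 { return dm.genesisBits }

// requiredDifficulty falls back to the genesis difficulty for the first
// post-fork blocks, where the DAA window still reflects khashv1 hashrate.
func (dm *difficultyManager) requiredDifficulty(daaScore uint64, windowBits uint32) uint32 {
	const resetWindow = 1000 // hypothetical number of post-fork blocks
	if daaScore >= dm.hfDAAScore && daaScore < dm.hfDAAScore+resetWindow {
		return dm.GenesisDifficulty()
	}
	return windowBits // normal DAA-window-derived difficulty
}

func main() {
	dm := &difficultyManager{hfDAAScore: 26962009, genesisBits: 0x1e7fffff}
	fmt.Printf("just after fork: %#x\n", dm.requiredDifficulty(26962010, 0x1a2b3c4d))
	fmt.Printf("later:           %#x\n", dm.requiredDifficulty(27000000, 0x1a2b3c4d))
}
```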