
refactor: update execution client state change logs #5771

Merged

nflaig merged 2 commits into unstable from nflaig/update-ee-status-message on Jul 20, 2023

Conversation

@nflaig (Member) commented Jul 18, 2023

Motivation

These logs are meant for end users, as they are info/warn/error and not debug only. The term "ExecutionEngine" is quite technical in my opinion, and to avoid any confusion I would recommend we just stick to "Execution client", which is the term used by most users.

The second change of this PR updates the log level from info to warn if the execution client is syncing. This better aligns it with the validator log when the beacon node is syncing. We discussed in #5467 whether info or warn should be used there.

Description

  • Update execution client state change messages
  • Use warn instead of info if the execution client is syncing (see the sketch below)
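
For context, below is a minimal TypeScript sketch of the change, reconstructed from the diff fragments quoted in the review threads further down. Only the SYNCING and SYNCED cases appear in those diffs; the ONLINE and OFFLINE members, the Logger type, and the function name are illustrative assumptions, not Lodestar's actual source.

// Hedged sketch: the two log statements this PR changes, in context.
enum ExecutionEngineState {
  ONLINE = "ONLINE",   // assumed default state at startup (per discussion below)
  OFFLINE = "OFFLINE", // assumed, mentioned in the review discussion
  SYNCING = "SYNCING",
  SYNCED = "SYNCED",
}

type Logger = {info(msg: string): void; warn(msg: string): void};

function logStateChange(logger: Logger, newState: ExecutionEngineState): void {
  switch (newState) {
    case ExecutionEngineState.SYNCING:
      // was: logger.info("ExecutionEngine is syncing")
      logger.warn("Execution client is syncing");
      break;
    case ExecutionEngineState.SYNCED:
      // was: logger.info("ExecutionEngine is synced")
      logger.info("Execution client is synced");
      break;
  }
}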

@nflaig nflaig requested a review from a team as a code owner July 18, 2023 09:00
@nflaig nflaig changed the title from "refactor: update execution client state change messages" to "refactor: update execution client state change logs" on Jul 18, 2023
@github-actions (Contributor)

Performance Report

✔️ no performance regression detected

Full benchmark results
Benchmark suite Current: 77d2a04 Previous: ec81531 Ratio
getPubkeys - index2pubkey - req 1000 vs - 250000 vc 949.95 us/op 499.51 us/op 1.90
getPubkeys - validatorsArr - req 1000 vs - 250000 vc 104.05 us/op 77.714 us/op 1.34
BLS verify - blst-native 1.3658 ms/op 1.2404 ms/op 1.10
BLS verifyMultipleSignatures 3 - blst-native 2.9311 ms/op 2.4868 ms/op 1.18
BLS verifyMultipleSignatures 8 - blst-native 5.6414 ms/op 5.4670 ms/op 1.03
BLS verifyMultipleSignatures 32 - blst-native 21.181 ms/op 19.455 ms/op 1.09
BLS aggregatePubkeys 32 - blst-native 26.848 us/op 25.936 us/op 1.04
BLS aggregatePubkeys 128 - blst-native 104.45 us/op 102.79 us/op 1.02
getAttestationsForBlock 54.610 ms/op 58.167 ms/op 0.94
isKnown best case - 1 super set check 327.00 ns/op 361.00 ns/op 0.91
isKnown normal case - 2 super set checks 272.00 ns/op 296.00 ns/op 0.92
isKnown worse case - 16 super set checks 280.00 ns/op 309.00 ns/op 0.91
CheckpointStateCache - add get delete 4.9320 us/op 5.6110 us/op 0.88
validate api signedAggregateAndProof - struct 2.8099 ms/op 2.8282 ms/op 0.99
validate gossip signedAggregateAndProof - struct 2.8123 ms/op 2.8034 ms/op 1.00
validate api attestation - struct 1.3353 ms/op 1.3411 ms/op 1.00
validate gossip attestation - struct 1.3702 ms/op 1.3540 ms/op 1.01
pickEth1Vote - no votes 1.1838 ms/op 1.2052 ms/op 0.98
pickEth1Vote - max votes 12.133 ms/op 9.2216 ms/op 1.32
pickEth1Vote - Eth1Data hashTreeRoot value x2048 8.9636 ms/op 9.1892 ms/op 0.98
pickEth1Vote - Eth1Data hashTreeRoot tree x2048 15.835 ms/op 13.699 ms/op 1.16
pickEth1Vote - Eth1Data fastSerialize value x2048 656.86 us/op 725.07 us/op 0.91
pickEth1Vote - Eth1Data fastSerialize tree x2048 4.3923 ms/op 4.8911 ms/op 0.90
bytes32 toHexString 511.00 ns/op 540.00 ns/op 0.95
bytes32 Buffer.toString(hex) 289.00 ns/op 302.00 ns/op 0.96
bytes32 Buffer.toString(hex) from Uint8Array 431.00 ns/op 514.00 ns/op 0.84
bytes32 Buffer.toString(hex) + 0x 292.00 ns/op 344.00 ns/op 0.85
Object access 1 prop 0.16800 ns/op 0.18900 ns/op 0.89
Map access 1 prop 0.15500 ns/op 0.16000 ns/op 0.97
Object get x1000 7.4220 ns/op 8.6720 ns/op 0.86
Map get x1000 0.64300 ns/op 0.60800 ns/op 1.06
Object set x1000 51.168 ns/op 55.033 ns/op 0.93
Map set x1000 39.500 ns/op 45.812 ns/op 0.86
Return object 10000 times 0.23810 ns/op 0.24940 ns/op 0.95
Throw Error 10000 times 3.8595 us/op 3.8930 us/op 0.99
fastMsgIdFn sha256 / 200 bytes 3.3130 us/op 3.4850 us/op 0.95
fastMsgIdFn h32 xxhash / 200 bytes 273.00 ns/op 339.00 ns/op 0.81
fastMsgIdFn h64 xxhash / 200 bytes 354.00 ns/op 391.00 ns/op 0.91
fastMsgIdFn sha256 / 1000 bytes 11.633 us/op 11.885 us/op 0.98
fastMsgIdFn h32 xxhash / 1000 bytes 409.00 ns/op 515.00 ns/op 0.79
fastMsgIdFn h64 xxhash / 1000 bytes 448.00 ns/op 492.00 ns/op 0.91
fastMsgIdFn sha256 / 10000 bytes 104.98 us/op 110.40 us/op 0.95
fastMsgIdFn h32 xxhash / 10000 bytes 1.9740 us/op 2.2110 us/op 0.89
fastMsgIdFn h64 xxhash / 10000 bytes 1.3450 us/op 1.4460 us/op 0.93
enrSubnets - fastDeserialize 64 bits 1.2540 us/op 1.6580 us/op 0.76
enrSubnets - ssz BitVector 64 bits 432.00 ns/op 555.00 ns/op 0.78
enrSubnets - fastDeserialize 4 bits 175.00 ns/op 263.00 ns/op 0.67
enrSubnets - ssz BitVector 4 bits 435.00 ns/op 516.00 ns/op 0.84
prioritizePeers score -10:0 att 32-0.1 sync 2-0 107.86 us/op 117.40 us/op 0.92
prioritizePeers score 0:0 att 32-0.25 sync 2-0.25 137.81 us/op 146.99 us/op 0.94
prioritizePeers score 0:0 att 32-0.5 sync 2-0.5 166.45 us/op 201.14 us/op 0.83
prioritizePeers score 0:0 att 64-0.75 sync 4-0.75 313.53 us/op 352.50 us/op 0.89
prioritizePeers score 0:0 att 64-1 sync 4-1 368.11 us/op 388.15 us/op 0.95
array of 16000 items push then shift 1.6247 us/op 1.6361 us/op 0.99
LinkedList of 16000 items push then shift 8.9970 ns/op 9.3710 ns/op 0.96
array of 16000 items push then pop 56.866 ns/op 58.786 ns/op 0.97
LinkedList of 16000 items push then pop 8.8240 ns/op 9.6370 ns/op 0.92
array of 24000 items push then shift 2.4397 us/op 2.4937 us/op 0.98
LinkedList of 24000 items push then shift 9.0260 ns/op 9.5940 ns/op 0.94
array of 24000 items push then pop 121.44 ns/op 123.04 ns/op 0.99
LinkedList of 24000 items push then pop 8.6500 ns/op 9.2830 ns/op 0.93
intersect bitArray bitLen 8 6.9070 ns/op 6.9870 ns/op 0.99
intersect array and set length 8 57.895 ns/op 68.111 ns/op 0.85
intersect bitArray bitLen 128 33.011 ns/op 33.071 ns/op 1.00
intersect array and set length 128 782.63 ns/op 838.72 ns/op 0.93
bitArray.getTrueBitIndexes() bitLen 128 1.6080 us/op 1.8600 us/op 0.86
bitArray.getTrueBitIndexes() bitLen 248 2.6110 us/op 2.9950 us/op 0.87
bitArray.getTrueBitIndexes() bitLen 512 4.9100 us/op 5.7730 us/op 0.85
Buffer.concat 32 items 1.1180 us/op 1.2170 us/op 0.92
Uint8Array.set 32 items 2.6460 us/op 2.3020 us/op 1.15
transfer serialized Status (84 B) 2.2380 us/op 1.9430 us/op 1.15
copy serialized Status (84 B) 1.7450 us/op 1.7150 us/op 1.02
transfer serialized SignedVoluntaryExit (112 B) 2.3430 us/op 2.2160 us/op 1.06
copy serialized SignedVoluntaryExit (112 B) 1.7240 us/op 1.9360 us/op 0.89
transfer serialized ProposerSlashing (416 B) 2.7650 us/op 2.7160 us/op 1.02
copy serialized ProposerSlashing (416 B) 2.8450 us/op 2.5350 us/op 1.12
transfer serialized Attestation (485 B) 2.6230 us/op 2.6170 us/op 1.00
copy serialized Attestation (485 B) 2.5910 us/op 2.3920 us/op 1.08
transfer serialized AttesterSlashing (33232 B) 2.5110 us/op 2.3140 us/op 1.09
copy serialized AttesterSlashing (33232 B) 5.7110 us/op 7.6890 us/op 0.74
transfer serialized Small SignedBeaconBlock (128000 B) 2.7050 us/op 2.6050 us/op 1.04
copy serialized Small SignedBeaconBlock (128000 B) 12.401 us/op 16.162 us/op 0.77
transfer serialized Avg SignedBeaconBlock (200000 B) 3.3200 us/op 3.5800 us/op 0.93
copy serialized Avg SignedBeaconBlock (200000 B) 18.726 us/op 24.036 us/op 0.78
transfer serialized BlobsSidecar (524380 B) 3.0890 us/op 3.4520 us/op 0.89
copy serialized BlobsSidecar (524380 B) 81.136 us/op 118.30 us/op 0.69
transfer serialized Big SignedBeaconBlock (1000000 B) 3.5620 us/op 4.3110 us/op 0.83
copy serialized Big SignedBeaconBlock (1000000 B) 149.37 us/op 181.71 us/op 0.82
pass gossip attestations to forkchoice per slot 2.1978 ms/op 2.2886 ms/op 0.96
forkChoice updateHead vc 100000 bc 64 eq 0 2.1382 ms/op 2.3048 ms/op 0.93
forkChoice updateHead vc 600000 bc 64 eq 0 14.164 ms/op 14.659 ms/op 0.97
forkChoice updateHead vc 1000000 bc 64 eq 0 19.146 ms/op 23.681 ms/op 0.81
forkChoice updateHead vc 600000 bc 320 eq 0 19.805 ms/op 21.132 ms/op 0.94
forkChoice updateHead vc 600000 bc 1200 eq 0 88.320 ms/op 89.007 ms/op 0.99
forkChoice updateHead vc 600000 bc 64 eq 1000 19.660 ms/op 22.222 ms/op 0.88
forkChoice updateHead vc 600000 bc 64 eq 10000 21.251 ms/op 28.512 ms/op 0.75
forkChoice updateHead vc 600000 bc 64 eq 300000 28.885 ms/op 46.369 ms/op 0.62
computeDeltas 3.2764 ms/op 3.5909 ms/op 0.91
computeProposerBoostScoreFromBalances 398.77 us/op 408.21 us/op 0.98
altair processAttestation - 250000 vs - 7PWei normalcase 4.2759 ms/op 2.5074 ms/op 1.71
altair processAttestation - 250000 vs - 7PWei worstcase 4.7382 ms/op 3.6328 ms/op 1.30
altair processAttestation - setStatus - 1/6 committees join 259.33 us/op 234.39 us/op 1.11
altair processAttestation - setStatus - 1/3 committees join 491.85 us/op 440.40 us/op 1.12
altair processAttestation - setStatus - 1/2 committees join 720.31 us/op 620.63 us/op 1.16
altair processAttestation - setStatus - 2/3 committees join 696.98 us/op 714.34 us/op 0.98
altair processAttestation - setStatus - 4/5 committees join 987.97 us/op 1.0034 ms/op 0.98
altair processAttestation - setStatus - 100% committees join 1.0552 ms/op 1.2135 ms/op 0.87
altair processBlock - 250000 vs - 7PWei normalcase 10.164 ms/op 10.240 ms/op 0.99
altair processBlock - 250000 vs - 7PWei normalcase hashState 17.365 ms/op 17.942 ms/op 0.97
altair processBlock - 250000 vs - 7PWei worstcase 42.178 ms/op 41.606 ms/op 1.01
altair processBlock - 250000 vs - 7PWei worstcase hashState 58.196 ms/op 62.390 ms/op 0.93
phase0 processBlock - 250000 vs - 7PWei normalcase 2.0031 ms/op 2.4706 ms/op 0.81
phase0 processBlock - 250000 vs - 7PWei worstcase 28.048 ms/op 32.483 ms/op 0.86
altair processEth1Data - 250000 vs - 7PWei normalcase 480.90 us/op 635.69 us/op 0.76
getExpectedWithdrawals 250000 eb:1,eth1:1,we:0,wn:0,smpl:15 10.484 us/op 19.635 us/op 0.53
getExpectedWithdrawals 250000 eb:0.95,eth1:0.1,we:0.05,wn:0,smpl:219 63.654 us/op 93.603 us/op 0.68
getExpectedWithdrawals 250000 eb:0.95,eth1:0.3,we:0.05,wn:0,smpl:42 23.352 us/op 32.103 us/op 0.73
getExpectedWithdrawals 250000 eb:0.95,eth1:0.7,we:0.05,wn:0,smpl:18 15.669 us/op 22.196 us/op 0.71
getExpectedWithdrawals 250000 eb:0.1,eth1:0.1,we:0,wn:0,smpl:1020 221.09 us/op 194.57 us/op 1.14
getExpectedWithdrawals 250000 eb:0.03,eth1:0.03,we:0,wn:0,smpl:11777 2.1454 ms/op 1.8345 ms/op 1.17
getExpectedWithdrawals 250000 eb:0.01,eth1:0.01,we:0,wn:0,smpl:16384 2.1288 ms/op 2.3220 ms/op 0.92
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,smpl:16384 2.1075 ms/op 2.2242 ms/op 0.95
getExpectedWithdrawals 250000 eb:0,eth1:0,we:0,wn:0,nocache,smpl:16384 3.5675 ms/op 4.1340 ms/op 0.86
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,smpl:16384 2.7349 ms/op 2.8205 ms/op 0.97
getExpectedWithdrawals 250000 eb:0,eth1:1,we:0,wn:0,nocache,smpl:16384 5.1047 ms/op 7.3576 ms/op 0.69
Tree 40 250000 create 352.52 ms/op 431.55 ms/op 0.82
Tree 40 250000 get(125000) 213.18 ns/op 212.47 ns/op 1.00
Tree 40 250000 set(125000) 986.95 ns/op 1.7240 us/op 0.57
Tree 40 250000 toArray() 22.053 ms/op 27.023 ms/op 0.82
Tree 40 250000 iterate all - toArray() + loop 21.367 ms/op 24.356 ms/op 0.88
Tree 40 250000 iterate all - get(i) 70.833 ms/op 84.792 ms/op 0.84
MutableVector 250000 create 11.470 ms/op 14.785 ms/op 0.78
MutableVector 250000 get(125000) 6.7790 ns/op 7.0570 ns/op 0.96
MutableVector 250000 set(125000) 250.74 ns/op 516.80 ns/op 0.49
MutableVector 250000 toArray() 3.4555 ms/op 3.0889 ms/op 1.12
MutableVector 250000 iterate all - toArray() + loop 3.1857 ms/op 3.4674 ms/op 0.92
MutableVector 250000 iterate all - get(i) 1.5424 ms/op 1.8186 ms/op 0.85
Array 250000 create 3.0081 ms/op 3.0940 ms/op 0.97
Array 250000 clone - spread 1.1979 ms/op 1.6260 ms/op 0.74
Array 250000 get(125000) 0.58600 ns/op 0.83000 ns/op 0.71
Array 250000 set(125000) 0.65800 ns/op 0.91700 ns/op 0.72
Array 250000 iterate all - loop 89.314 us/op 89.398 us/op 1.00
effectiveBalanceIncrements clone Uint8Array 300000 58.892 us/op 57.068 us/op 1.03
effectiveBalanceIncrements clone MutableVector 300000 391.00 ns/op 394.00 ns/op 0.99
effectiveBalanceIncrements rw all Uint8Array 300000 194.13 us/op 191.16 us/op 1.02
effectiveBalanceIncrements rw all MutableVector 300000 96.282 ms/op 112.46 ms/op 0.86
phase0 afterProcessEpoch - 250000 vs - 7PWei 122.55 ms/op 119.53 ms/op 1.03
phase0 beforeProcessEpoch - 250000 vs - 7PWei 41.681 ms/op 43.228 ms/op 0.96
altair processEpoch - mainnet_e81889 361.58 ms/op 389.34 ms/op 0.93
mainnet_e81889 - altair beforeProcessEpoch 70.789 ms/op 76.800 ms/op 0.92
mainnet_e81889 - altair processJustificationAndFinalization 28.116 us/op 21.296 us/op 1.32
mainnet_e81889 - altair processInactivityUpdates 6.6547 ms/op 6.9439 ms/op 0.96
mainnet_e81889 - altair processRewardsAndPenalties 78.784 ms/op 55.163 ms/op 1.43
mainnet_e81889 - altair processRegistryUpdates 4.1630 us/op 4.7590 us/op 0.87
mainnet_e81889 - altair processSlashings 639.00 ns/op 873.00 ns/op 0.73
mainnet_e81889 - altair processEth1DataReset 619.00 ns/op 989.00 ns/op 0.63
mainnet_e81889 - altair processEffectiveBalanceUpdates 2.3823 ms/op 2.0109 ms/op 1.18
mainnet_e81889 - altair processSlashingsReset 3.9810 us/op 5.7410 us/op 0.69
mainnet_e81889 - altair processRandaoMixesReset 8.6710 us/op 9.8020 us/op 0.88
mainnet_e81889 - altair processHistoricalRootsUpdate 866.00 ns/op 1.9570 us/op 0.44
mainnet_e81889 - altair processParticipationFlagUpdates 4.0720 us/op 5.0420 us/op 0.81
mainnet_e81889 - altair processSyncCommitteeUpdates 958.00 ns/op 1.4940 us/op 0.64
mainnet_e81889 - altair afterProcessEpoch 128.17 ms/op 140.29 ms/op 0.91
phase0 processEpoch - mainnet_e58758 410.17 ms/op 425.49 ms/op 0.96
mainnet_e58758 - phase0 beforeProcessEpoch 156.82 ms/op 181.74 ms/op 0.86
mainnet_e58758 - phase0 processJustificationAndFinalization 22.383 us/op 24.079 us/op 0.93
mainnet_e58758 - phase0 processRewardsAndPenalties 81.352 ms/op 73.725 ms/op 1.10
mainnet_e58758 - phase0 processRegistryUpdates 13.709 us/op 20.131 us/op 0.68
mainnet_e58758 - phase0 processSlashings 737.00 ns/op 1.0620 us/op 0.69
mainnet_e58758 - phase0 processEth1DataReset 664.00 ns/op 884.00 ns/op 0.75
mainnet_e58758 - phase0 processEffectiveBalanceUpdates 1.9768 ms/op 3.6006 ms/op 0.55
mainnet_e58758 - phase0 processSlashingsReset 2.9830 us/op 3.5000 us/op 0.85
mainnet_e58758 - phase0 processRandaoMixesReset 7.2070 us/op 4.9680 us/op 1.45
mainnet_e58758 - phase0 processHistoricalRootsUpdate 891.00 ns/op 833.00 ns/op 1.07
mainnet_e58758 - phase0 processParticipationRecordUpdates 5.4380 us/op 5.3940 us/op 1.01
mainnet_e58758 - phase0 afterProcessEpoch 96.838 ms/op 101.36 ms/op 0.96
phase0 processEffectiveBalanceUpdates - 250000 normalcase 1.5446 ms/op 1.2894 ms/op 1.20
phase0 processEffectiveBalanceUpdates - 250000 worstcase 0.5 2.1197 ms/op 1.5259 ms/op 1.39
altair processInactivityUpdates - 250000 normalcase 23.737 ms/op 25.146 ms/op 0.94
altair processInactivityUpdates - 250000 worstcase 23.148 ms/op 23.977 ms/op 0.97
phase0 processRegistryUpdates - 250000 normalcase 10.178 us/op 11.120 us/op 0.92
phase0 processRegistryUpdates - 250000 badcase_full_deposits 588.07 us/op 383.63 us/op 1.53
phase0 processRegistryUpdates - 250000 worstcase 0.5 114.66 ms/op 165.40 ms/op 0.69
altair processRewardsAndPenalties - 250000 normalcase 76.901 ms/op 80.914 ms/op 0.95
altair processRewardsAndPenalties - 250000 worstcase 65.316 ms/op 74.366 ms/op 0.88
phase0 getAttestationDeltas - 250000 normalcase 7.6737 ms/op 9.1916 ms/op 0.83
phase0 getAttestationDeltas - 250000 worstcase 7.8178 ms/op 11.086 ms/op 0.71
phase0 processSlashings - 250000 worstcase 2.2308 ms/op 2.4875 ms/op 0.90
altair processSyncCommitteeUpdates - 250000 162.10 ms/op 169.12 ms/op 0.96
BeaconState.hashTreeRoot - No change 288.00 ns/op 367.00 ns/op 0.78
BeaconState.hashTreeRoot - 1 full validator 50.674 us/op 52.018 us/op 0.97
BeaconState.hashTreeRoot - 32 full validator 488.63 us/op 508.35 us/op 0.96
BeaconState.hashTreeRoot - 512 full validator 5.1399 ms/op 5.5139 ms/op 0.93
BeaconState.hashTreeRoot - 1 validator.effectiveBalance 61.466 us/op 63.497 us/op 0.97
BeaconState.hashTreeRoot - 32 validator.effectiveBalance 857.96 us/op 882.49 us/op 0.97
BeaconState.hashTreeRoot - 512 validator.effectiveBalance 10.998 ms/op 11.409 ms/op 0.96
BeaconState.hashTreeRoot - 1 balances 48.285 us/op 47.965 us/op 1.01
BeaconState.hashTreeRoot - 32 balances 435.95 us/op 439.06 us/op 0.99
BeaconState.hashTreeRoot - 512 balances 4.0313 ms/op 4.2768 ms/op 0.94
BeaconState.hashTreeRoot - 250000 balances 74.256 ms/op 74.826 ms/op 0.99
aggregationBits - 2048 els - zipIndexesInBitList 18.462 us/op 21.594 us/op 0.85
regular array get 100000 times 34.269 us/op 35.948 us/op 0.95
wrappedArray get 100000 times 34.188 us/op 35.992 us/op 0.95
arrayWithProxy get 100000 times 15.046 ms/op 15.326 ms/op 0.98
ssz.Root.equals 262.00 ns/op 256.00 ns/op 1.02
byteArrayEquals 259.00 ns/op 258.00 ns/op 1.00
shuffle list - 16384 els 7.3668 ms/op 7.5896 ms/op 0.97
shuffle list - 250000 els 105.72 ms/op 106.62 ms/op 0.99
processSlot - 1 slots 7.9620 us/op 8.2370 us/op 0.97
processSlot - 32 slots 1.3894 ms/op 1.5065 ms/op 0.92
getEffectiveBalanceIncrementsZeroInactive - 250000 vs - 7PWei 51.070 ms/op 66.345 ms/op 0.77
getCommitteeAssignments - req 1 vs - 250000 vc 2.5029 ms/op 2.8123 ms/op 0.89
getCommitteeAssignments - req 100 vs - 250000 vc 3.7080 ms/op 4.1924 ms/op 0.88
getCommitteeAssignments - req 1000 vs - 250000 vc 4.0787 ms/op 4.3252 ms/op 0.94
RootCache.getBlockRootAtSlot - 250000 vs - 7PWei 4.5400 ns/op 5.7400 ns/op 0.79
state getBlockRootAtSlot - 250000 vs - 7PWei 533.24 ns/op 665.09 ns/op 0.80
computeProposers - vc 250000 8.7246 ms/op 9.5163 ms/op 0.92
computeEpochShuffling - vc 250000 106.43 ms/op 113.83 ms/op 0.94
getNextSyncCommittee - vc 250000 150.26 ms/op 173.85 ms/op 0.86
computeSigningRoot for AttestationData 13.837 us/op 14.199 us/op 0.97
hash AttestationData serialized data then Buffer.toString(base64) 2.3803 us/op 2.3946 us/op 0.99
toHexString serialized data 1.1611 us/op 1.2598 us/op 0.92
Buffer.toString(base64) 225.38 ns/op 250.01 ns/op 0.90

by benchmarkbot/action

@wemeetagain (Member) left a comment

lgtm, I agree with you re the log level discussion

  break;
case ExecutionEngineState.SYNCING:
-  this.logger.info("ExecutionEngine is syncing");
+  this.logger.warn("Execution client is syncing");
Contributor

I think this should not be a warning. It's a normal scenario that will happen a lot of times, and syncing is a normal process, not something a user should be worried about.

Member Author

My assumption here is that this is only printed if the state changes, therefore it should not happen often during normal operation. If the EL gets into syncing mode there is degraded performance; the node might not be able to perform correct head votes and may miss block proposals.
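
A hedged sketch of that assumption, reusing the enum and logStateChange from the sketch under the description (the tracker class itself is hypothetical): the message is only emitted when the tracked state actually transitions, so a healthy setup prints it rarely.

// Hypothetical tracker: logs on state transitions only, never on repeated polls.
class ExecutionEngineStateTracker {
  private state = ExecutionEngineState.ONLINE; // assumed default at startup

  constructor(private logger: Logger) {}

  updateState(newState: ExecutionEngineState): void {
    if (newState === this.state) return; // unchanged state, nothing is logged
    logStateChange(this.logger, newState);
    this.state = newState;
  }
}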

@g11tech (Contributor) commented Jul 18, 2023

I think warn level is good here because it will only come up if execution goes from synced into syncing, which is actually not supposed to happen often once execution syncs up with the CL, and users get notified that the node won't be able to perform its duties till execution is back up.

Contributor

I think it's the sync stats among the EL nodes, not with the CL. And EL nodes can get unsynced and synced a lot?

@nflaig (Member Author) commented Jul 18, 2023

sync stats among the EL nodes

That's an interesting topic: how does this actually work if you specify multiple ELs in --execution.urls?

I see issues with this:

  • if one beacon node has multiple ELs, only one gets payload and fork choice updates, so how are the fallback ELs able to sync? I thought this requires some sort of multiplexer / middleware that sends requests to all ELs.
  • if an execution client is connected to multiple beacon nodes, it causes frequent reorgs as it might receive different fork choice updates

@nflaig (Member Author) commented Jul 18, 2023

EL nodes can get unsynced and synced a lot

At least in the normal case with just one execution client, I have not yet seen the EL go back into syncing mode if it was already synced before and has been up 24/7.

Contributor

I think it's the sync stats among the EL nodes, not with the CL. And EL nodes can get unsynced and synced a lot?

No, they can't. Once synced, they should stay synced with the CL head as long as the CL doesn't go out of sync, and then the CL doesn't have to do batch updates to the EL (our req/resp is batch processing) to get back to head.

Member Author

I think warn is better then, let's keep the change.

  break;
case ExecutionEngineState.SYNCING:
-  this.logger.info("ExecutionEngine is syncing");
+  this.logger.warn("Execution client is syncing");
@g11tech (Contributor) commented Jul 18, 2023

Suggested change
-  this.logger.warn("Execution client is syncing");
+  this.logger.warn("Execution client is syncing hence client would not be able to fully participate in the network till its synced");

@nflaig (Member Author) commented Jul 18, 2023

Not really sure about adding this. We are not printing this out when the beacon node itself is syncing, and my general feeling is that users/operators generally understand what a syncing client implies (I guess it becomes more complicated in the case of the EL, see my other comment below).

Member Author

hence client would not be able to fully participate in the network till its synced

@g11tech on this topic, what does "not fully" mean in that case? I saw users complain about missed head votes, which was due to the EL lagging behind / being unsynced.

My understanding is:

  1. if the EL is really far behind --> missed attestations / proposals
  2. if the EL is close to being synced (~1-2 epochs behind) --> incorrect / missing head votes, while attestation target / source votes still work; missed proposals
  3. if the EL is slow to process blocks (can't quite keep up) --> same degraded performance as point 2

Contributor

If the EL is not synced with the CL head then, except for gossip, we do not participate in the network at all, including attestations, proposals, aggregates etc., so none of the validator duties.

Member Author

we do not participate in the network at all

In that case I think it should be fine to just print "Execution client is syncing" as warn. Maybe the scenario where all head votes are missed is related to the CL not being able to keep up with head (while staying close to it).

Member

I'm really not a fan of long explanations in the logs. I think something more concise would be: Execution client is syncing. Validator duties on hold.

Member Author

I'm really not a fan of long explanations in the logs

Yeah, I think we should keep logs simple and use commonly understood terms. For errors that crash the process or fail a CLI command we can add more details, as it is a one-time message and we should help the user fix the issue without having to guess what went wrong or ask on Discord because the error is unclear.

Validator duties on hold

Hmm, personally not a fan; we would have to add this to the offline state change message as well, and then add another message to online/synced that indicates those duties are not "on hold" anymore.

  break;
case ExecutionEngineState.SYNCED:
-  this.logger.info("ExecutionEngine is synced");
+  this.logger.info("Execution client is synced");
Contributor

Suggested change
-  this.logger.info("Execution client is synced");
+  this.logger.info("Execution client is synced and client can now participate fully in the network");

Member Author

Just a note on this log "ExecutionEngine is synced": right now it is printed any time the beacon node starts, because the default state is online and it then goes to the synced state if the EL is synced.

Therefore, printing "can now participate" is not that accurate in that case. I would suggest we keep the logs simple for now and revisit it if we get user feedback on it or if we notice that there is confusion around what this means.

Member

How long does it usually take for Lodestar to know whether the EL is actually in a synced state or if this is just the default online state due to beacon startup? If there's some sort of way to delay/timeout this message to ensure it actually is synced, that might work better?

Member Author

How long does it usually take for Lodestar to know whether the EL is actually in a synced state

After the first request to the EL we know the actual status and print out a state change message.

ensure it actually is synced

We know the correct state at this point in time since we made a request to the EL; "can now participate" is just unnecessary here because the beacon node was just starting up and the EL was likely synced the whole time.
The message only makes sense to me if the EL goes from syncing to synced.
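
A hedged sketch of that startup flow, reusing the hypothetical tracker from above (the response shape and function name are assumptions, not Lodestar's actual API): the tracker starts in ONLINE, and the outcome of the first request to the EL moves it to its real state, producing exactly one state-change log, e.g. ONLINE to SYNCED on a node whose EL was synced all along.

// Hypothetical: derive the engine state from the first EL request after startup.
async function onFirstEngineRequest(
  tracker: ExecutionEngineStateTracker,
  callEngine: () => Promise<{status: "VALID" | "SYNCING"}> // assumed response shape
): Promise<void> {
  try {
    const res = await callEngine();
    tracker.updateState(
      res.status === "SYNCING" ? ExecutionEngineState.SYNCING : ExecutionEngineState.SYNCED
    );
  } catch {
    tracker.updateState(ExecutionEngineState.OFFLINE); // request failed: EL unreachable
  }
}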

@nflaig nflaig requested a review from nazarhussain July 19, 2023 16:03
@nflaig nflaig merged commit 4e6408a into unstable Jul 20, 2023
12 checks passed
@nflaig nflaig deleted the nflaig/update-ee-status-message branch July 20, 2023 13:26
@wemeetagain (Member)

🎉 This PR is included in v1.10.0 🎉
