Hi, we used this tool to run TPC-C against a 3-node YugabyteDB cluster.
Each node has identical hardware:
1. 16 CPU cores
2. 64 GB memory
3. 1.8 TB NVMe SSD
4. 2500 Mb/s LAN
The client application runs on a separate machine that is not one of the three cluster nodes.
To generate high load, we set the sleep time to zero, i.e., no sleep() is called in any worker thread.
With that, we ran TPC-C with the following configuration (a sketch of a full config is given below):
warehouses = 1000
terminals = 600
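For reference, here is a minimal sketch of such a run configuration in BenchmarkSQL 5.0 props-file style. The host, credentials, and throttling values are illustrative assumptions, not a verbatim copy of our file:

```properties
# Hypothetical BenchmarkSQL-style run configuration (illustrative values)
db=postgres
driver=org.postgresql.Driver
conn=jdbc:postgresql://pg-host:5432/tpcc
user=benchmarksql
password=********

warehouses=1000
terminals=600

# Run for one hour by elapsed time rather than a fixed txn count.
runTxnsPerTerminal=0
runMins=60

# Set the per-minute cap far above the achievable rate so the
# terminals run effectively unthrottled, matching sleep time = 0.
limitTxnsPerMin=100000000
```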
We compared the tpmC of the 3-node YB cluster with PG running on ONLY ONE machine. (We use BenchmarkSQL 5.0 for PG.)
No compression is used for either YB or PG, and for PG we raised the shared_buffers GUC from its 128MB default to 20GB (see the snippet below).
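For completeness, this is the standard way to apply such a change on PostgreSQL; note that shared_buffers only takes effect after a server restart:

```sql
-- Raise shared_buffers from the 128MB default to 20GB.
ALTER SYSTEM SET shared_buffers = '20GB';
-- Restart the server, then verify:
SHOW shared_buffers;
```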
Each test runs for quite a long time, usually one hour.
The result is surprising.
For the 3-node YB cluster, tpmC is around 200K.
For single-node PG, tpmC is around 300K.
We also compared CPU usage, disk I/O, and network I/O (collected as sketched below).
Disk I/O is far below the available bandwidth: each test consumes about 200 MB/s, while the disk (a Samsung 990 Pro NVMe) sustains over 1 GB/s for reads or writes.
Network I/O is also far below the bandwidth: about 100 MB/s (read + write) for YB; there is no figure for PG because it is a single node.
Average CPU usage is around 80% for YB but only 50% for PG.
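For reference, these are the kind of standard sysstat tools typically used to collect such numbers on each node (assuming the sysstat package is installed; the sampling intervals are illustrative):

```sh
# CPU utilization, sampled every 5 seconds
mpstat 5

# Per-device disk throughput and utilization (-m reports MB/s)
iostat -xm 5

# Per-interface network throughput
sar -n DEV 5
```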
My questions are:
Why is the tpmC of YB on 3 machines worse than PG on 1 machine?
We thought YB was good at scaling out, but this test suggests one PG node equals a 4.5-node YB cluster (300K / 200K × 3 nodes = 4.5).
Why does YB use more CPU yet deliver less tpmC?
BTW, we have run many TPC-C tests on YB, and we suspect the bottleneck is CPU, because every YB test shows higher CPU usage.