TPC-C test comparison between Klustron and TiDB
In the spirit of open exchange and learning, after each release the Klustron team runs a series of performance comparisons between Klustron and several other common distributed database systems, for reference by industry practitioners.
We will publish the comparison results for each product in the near future. You are welcome to read them and verify the results yourself; if you have any questions, please raise them on our official website forum.
Specifically, we compare Klustron against TiDB, CockroachDB, and OceanBase on sysbench, TPC-C, TPC-H, and TPC-DS, and against Greenplum on TPC-H and TPC-DS.
We also compare Klustron-storage (that is, the storage node of Klustron) against Percona MySQL, PostgreSQL, and openGauss on sysbench and TPC-C. We welcome you to follow, share, and comment.
Server information
This test uses four servers, identified below by the last octet of their IP addresses: servers 132, 134, and 140 host the database clusters, and server 126 runs the go-tpc client.
Cluster deployment
Klustron
- Klustron version: 1.1.1.
- Each of the three servers 132, 134, and 140 hosts one compute node and one metadata node.
- Two storage shards are deployed across 132, 134, and 140 (placement summarized below):
  - 132 and 134 each host the primary node of one shard;
  - 132 hosts the standby node of the shard whose primary is on 134;
  - 134 hosts the standby node of the shard whose primary is on 132;
  - 140 hosts one standby node of each of the two shards.
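For clarity, the resulting shard placement is as follows (shard names are placeholders):

shard1: primary on 132, standby nodes on 134 and 140
shard2: primary on 134, standby nodes on 132 and 140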
TiDB
- Use tiup to deploy one TiKV, one TiDB, and one PD instance on each of 132, 134, and 140 (a topology sketch follows this list).
- The deployed TiDB cluster version is 6.1.2.
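A minimal tiup topology matching this layout might look like the sketch below. Only 192.168.0.140 is confirmed by the test commands; the other full IP addresses are assumptions, and directory and port settings (the tests connect to TiDB on port 40002) are omitted:

# topology.yaml: one PD, TiDB, and TiKV instance per server
pd_servers:
  - host: 192.168.0.132   # assumed IP
  - host: 192.168.0.134   # assumed IP
  - host: 192.168.0.140
tidb_servers:
  - host: 192.168.0.132
  - host: 192.168.0.134
  - host: 192.168.0.140
tikv_servers:
  - host: 192.168.0.132
  - host: 192.168.0.134
  - host: 192.168.0.140

The cluster can then be deployed and started with standard tiup commands, e.g. tiup cluster deploy tidb-test v6.1.2 topology.yaml followed by tiup cluster start tidb-test ("tidb-test" is a placeholder cluster name).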
Cluster configuration parameters
Klustron
innodb_buffer_pool_size: 10737418240 (10 GiB)
enable_fullsync: true
sync_binlog: 1
innodb_flush_log_at_trx_commit: 1
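On a storage node (Klustron-storage is MySQL-based), these settings correspond to a my.cnf fragment like the sketch below; enable_fullsync is a Klustron-specific variable, and how Klustron actually distributes instance configuration is not shown here:

[mysqld]
innodb_buffer_pool_size = 10737418240   # 10 GiB buffer pool
enable_fullsync = true                  # Klustron full-sync replication
sync_binlog = 1                         # fsync the binlog at every commit
innodb_flush_log_at_trx_commit = 1      # flush the redo log at every commit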
TiDB
server_configs:
  tidb:
    log.slow-threshold: 300
    log.level: "error"
  tikv:
    readpool.storage.use-unified-pool: false
    log-level: "error"
    rocksdb.defaultcf.block-cache-size: "30GB"
    rocksdb.writecf.block-cache-size: "24GB"
    storage.block-cache.capacity: "30GB"
    readpool.coprocessor.use-unified-pool: true
  pd:
    schedule.leader-schedule-limit: 4
    schedule.region-schedule-limit: 2048
    schedule.replica-schedule-limit: 64
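These values go under server_configs in the tiup topology file at deploy time; they can also be changed on a running cluster with standard tiup commands, for example (cluster name is a placeholder):

tiup cluster edit-config tidb-test   # edit server_configs in the stored topology
tiup cluster reload tidb-test        # push the new configuration to all nodes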
Commands used for testing
Klustron
Load 100 warehouses:
./go-tpc tpcc --conn-params sslmode=disable --db postgres --driver postgres \
--password abc --port 35001 --threads 10 --user abc --warehouses 100 --host 192.168.0.140 prepare
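After loading, the data can optionally be validated with go-tpc's built-in consistency check (a sketch reusing the same connection parameters; not part of the original procedure):

./go-tpc tpcc --conn-params sslmode=disable --db postgres --driver postgres \
--password abc --port 35001 --threads 10 --user abc --warehouses 100 --host 192.168.0.140 check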
Run the test at the RC (Read Committed) isolation level; $i is the number of client threads (see the driver loop sketch under Results):
./go-tpc tpcc --conn-params sslmode=disable --db postgres --driver postgres \
--password abc --port 35001 --threads $i --time 5m --user abc --host 192.168.0.140 \
--isolation 2 --warehouses 100 run > result/$i
Run the test at the RR (Repeatable Read) isolation level:
./go-tpc tpcc --conn-params sslmode=disable --db postgres --driver postgres \
--password abc --port 35001 --threads $i --time 5m --user abc --host 192.168.0.140 \
--isolation 4 --warehouses 100 run > result/$i
TiDB
Load 100 warehouses:
./go-tpc tpcc --db tpcc --password root --port 40002 --threads 10 --time 5m \
--user root --host 192.168.0.140 --warehouses 100 prepare
Run the test at the RC isolation level:
./go-tpc tpcc --db tpcc --isolation 2 --password root --port 40002 --threads $i \
--time 5m --user root --host 192.168.0.140 --warehouses 100 run > result/$i
Run the test at the RR isolation level:
./go-tpc tpcc --db tpcc --isolation 4 --password root --port 40002 --threads $i \
--time 5m --user root --host 192.168.0.140 --warehouses 100 run > result/$i
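Once a round of tests is complete, the TPC-C schema can be dropped with go-tpc's cleanup subcommand (a sketch; not part of the original procedure):

./go-tpc tpcc --db tpcc --password root --port 40002 --threads 10 \
--user root --host 192.168.0.140 --warehouses 100 cleanup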
Results
The go-tpc version used is v1.12.0; the binary is located at ~/.tiup/components/bench/v1.12.0 on server 126.
After each test run, the client pauses for 3 minutes before starting the run at the next concurrency level.
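The runs at increasing concurrency were therefore likely driven by a loop along the following lines (a sketch: the thread counts are assumptions, and the flags are copied from the Klustron RC command above):

mkdir -p result
for i in 50 100 200 400; do   # assumed concurrency levels
  ./go-tpc tpcc --conn-params sslmode=disable --db postgres --driver postgres \
    --password abc --port 35001 --threads $i --time 5m --user abc \
    --host 192.168.0.140 --isolation 2 --warehouses 100 run > result/$i
  sleep 180                   # 3-minute pause between runs
done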
RC isolation level:
RR isolation level: