Klustron vs. Oracle: TPCC Performance Comparison
Objective:
Use the popular TPCC benchmark tool (BenchmarkSQL) to run performance tests against Klustron and Oracle separately, then compare the results and draw a conclusion.
Test Environment One: (Oracle 19c)
Oracle Version: 19.14.0.0.0
Deployment Mode: Single Instance
Key Parameters: SGA: 32G, PGA_TARGET: 12G, Archiving not enabled
Operating Environment: CentOS 8.5 Linux 64-bit, CPU: AMD Ryzen 9 7950X 16-Core Processor, MEM: 128G, Storage: M.2 NVME SSD PCIE 4, 2TB Solid State
IP: 192.168.0.20
Test Environment Two: (Klustron)
Node Type | IP | Port |
---|---|---|
Compute Node | 192.168.0.20 | 47001 |
Shard1 Primary Node | 192.168.0.21 | 57003 |
Shard2 Primary Node | 192.168.0.22 | 57005 |
XPanel | 192.168.0.20 | 40580 |
Klustron Version: Klustron-1.2.1
Deployment Mode: Matching the Oracle setup, no standby machines are used; the cluster consists of two storage shards and one compute node.
Operating Environment: CentOS 8.5 Linux 64-bit, CPU: AMD Ryzen 9 7950X 16-Core Processor, Memory: 128GB, Storage: M.2 NVME SSD PCIE 4, 2TB Solid State.
Test Environment Three: (BenchmarkSQL)
BenchmarkSQL: v5.0
Deployment Mode: Single Node
Key Parameters:
warehouses=50
terminals=10
runMins=1
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4
Operating Environment: CentOS 8.5 Linux 64-bit, CPU: AMD Ryzen 9 7950X 16-Core Processor, MEM: 128G, Storage: M.2 NVME SSD PCIE 4, 2TB Solid State
IP: 192.168.0.19
Note: The Linux host running BenchmarkSQL should have R installed; otherwise the generated test reports will not include the graphical analysis.
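If R is missing, it can usually be installed from the EPEL repository on CentOS 8. A minimal sketch (assuming internet access and that the PowerTools repository is available for R's dependencies):
sudo dnf install -y epel-release
sudo dnf config-manager --set-enabled powertools   # some R dependencies live in PowerTools
sudo dnf install -y R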
01 Oracle Performance Testing
1.1. Test Preparation
1.1.1. Setting up Resource Monitoring on the Oracle Machine
In the BenchmarkSQL configuration file (props.ora), configure OS resource monitoring so that CPU, storage I/O, and network I/O on the Oracle host are collected during the stress test. Key parameters:
osCollectorSSHAddr=oracle@192.168.0.20 #Password-free access should be configured.
osCollectorDevices=net_enp4s0 blk_nvme1n1 #Adjust according to the target machine's network interface and storage device names.
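For context, the connection portion of props.ora follows the stock BenchmarkSQL Oracle template; the listener port, SID, and password below are placeholders assumed for this environment, with the user matching the BENCH schema referenced in 1.1.3:
db=oracle
driver=oracle.jdbc.driver.OracleDriver
conn=jdbc:oracle:thin:@192.168.0.20:1521:ORCL
user=bench
password=bench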
1.1.2. Generating Test Data
Run runDatabaseBuild.sh props.ora to generate the test data. The output will resemble the following:
1.1.3. Gathering Optimizer Statistics on the Test Data
On the Oracle server, connect to the database as sysdba and gather optimizer statistics for the BENCH schema (where the test data was loaded):
SQL> conn / as sysdba
Connected.
SQL> exec dbms_stats.gather_schema_stats('BENCH');
PL/SQL procedure successfully completed.
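To confirm that statistics were gathered, a quick check against the data dictionary can be run in the same session, for example:
SQL> select table_name, num_rows, last_analyzed from dba_tables where owner = 'BENCH' order by table_name;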
1.2. Conducting the Test
Run runBenchMark.sh props.ora, and you'll get an output similar to:
Upon test completion, a directory similar to my_result_2023-04-15_222553 will be created in the current directory.
1.3. Generating and Reviewing the Test Report
Execute generateReport.sh <my_result_directory> to generate an HTML analysis report. Given the current test case configuration (warehouses=50, terminals=20, runMins=1), the key output information in the report file is as follows:
Transaction Type | Latency (90th %) | Latency (Maximum) | Count | Percent | Rollback | Errors | Skipped Deliveries
---|---|---|---|---|---|---|---
NEW_ORDER | 0.005s | 8.794s | 79230 | 44.886% | 0.963% | 0 | N/A
PAYMENT | 0.003s | 8.792s | 76026 | 43.071% | N/A | 0 | N/A
ORDER_STATUS | 0.001s | 8.788s | 7083 | 4.013% | N/A | 0 | N/A
STOCK_LEVEL | 0.001s | 0.016s | 7025 | 3.980% | N/A | 0 | N/A
DELIVERY | 0.000s | 0.001s | 7148 | 4.050% | N/A | 0 | N/A
DELIVERY_BG | 0.008s | 8.801s | 7148 | N/A | N/A | 0 | 0
Overall tpmC: 79230.00
Overall tpmTotal: 176512.00
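Since runMins=1, tpmC is simply the NEW_ORDER count over the one-minute run (79230 / 1 min), and tpmTotal is the sum of the foreground transaction counts over the same minute (79230 + 76026 + 7083 + 7025 + 7148 = 176512).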
The load monitoring for the machine where Oracle is located is as follows:
1.4. Results obtained at different concurrency levels during the stress test
50 warehouses:
Concurrent Users | tpmC | tpmTotal | Notes |
---|---|---|---|
20 | 78995.64 | 175990.89 | |
50 | 84918.44 | 188703.54 | |
100 | 97179.37 | 215940.76 | |
200 | 96457.92 | 214142.63 |
500 warehouses:
Concurrent Users | tpmC | tpmTotal | Notes |
---|---|---|---|
50 | 136071.81 | 302484.41 | |
100 | 149889.71 | 332070.93 | |
200 | 98523.34 | 218430.36 | |
400 | 62831.25 | 139880.84 | |
600 | 87321.82 | 194175.92 | |
800 | 76169.41 | 169711.68 | |
1000 | 65598.93 | 145822.86 |
02 Klustron Performance Testing
2.1. Test Preparation
2.1.1. Klustron Installation
Details omitted
Attached are the parameters set for the test cluster:
2.1.2. Adjusting Parameters for the Computing and Storage Nodes
Key Parameters for Compute Node (PG):
shared_buffers = 34359738368 (25% of RAM)
statement_timeout=6000000
mysql_read_timeout=1200
mysql_write_timeout=1200
lock_timeout=1200000
log_min_duration_statement=1200000
effective_cache_size = 68719476736 (50% of RAM)
work_mem = 8MB
wal_buffers = '64MB'
autovacuum=false
psql -h 192.168.0.20 -p 47001 -U abc postgres
alter system set shared_buffers='32GB';
alter system set statement_timeout=6000000;
alter system set mysql_read_timeout=1200;
alter system set mysql_write_timeout=1200;
alter system set lock_timeout=1200000;
alter system set log_min_duration_statement=1200000;
alter system set effective_cache_size = '64GB';
alter system set work_mem = '8MB';
alter system set wal_buffers='64MB';
alter system set autovacuum=false;
alter system set log_min_messages='DEBUG1'; -- enable only for SQL debugging
alter system set enable_sql_log=on; -- enable only for SQL debugging
pg_ctl restart -D /nvme2/kunlun/server_datadir/47001
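After the restart, the new values can be spot-checked from any client host, for example:
psql -h 192.168.0.20 -p 47001 -U abc postgres -c 'show shared_buffers;'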
Key Parameters for Storage Node (MYSQL):
innodb_buffer_pool_size=32*1024*1024*1024
lock_wait_timeout=1200
innodb_lock_wait_timeout=1200
fullsync_timeout=1200000
enable_fullsync=false
innodb_flush_log_at_trx_commit=2
sync_binlog=0 # Disk bandwidth is limited; binlog is turned off
max_binlog_size=1*1024*1024*1024
mysql -h 192.168.0.21 -P 57003 -upgx -ppgx_pwd
mysql -h 192.168.0.22 -P 57005 -upgx -ppgx_pwd
set global innodb_buffer_pool_size=32*1024*1024*1024;
set global lock_wait_timeout=1200;
set global innodb_lock_wait_timeout=1200;
set global fullsync_timeout=1200000;
set global enable_fullsync=false;
set global innodb_flush_log_at_trx_commit=2;
set global sync_binlog=0;
set global max_binlog_size=1*1024*1024*1024;
set global enable_fullsync=off; # Disable full synchronization on all shard primary nodes
set enable_fullsync=off; # Disable full synchronization on all shard primary nodes
set global general_log=on ; # Enable for SQL debugging
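The effective values can be confirmed on each shard primary, for example:
mysql -h 192.168.0.21 -P 57003 -upgx -ppgx_pwd -e "show global variables like 'sync_binlog'; show global variables like 'innodb_flush_log_at_trx_commit'; show global variables like 'enable_fullsync';"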
Failover timeout: set to 10800 seconds via XPanel.
For the 3-node MySQL MGR metadata cluster:
export LD_LIBRARY_PATH=/nvme2/kunlun/instance_binaries/storage/57001/Klustron-storage-1.2.1/lib:$LD_LIBRARY_PATH
/nvme2/kunlun/instance_binaries/storage/57001/Klustron-storage-1.2.1/bin/mysql --connect-expired-password -S/nvme2/kunlun/storage_logdir/57001/mysql.sock -uroot -proot
set global innodb_buffer_pool_size=1*1024*1024*1024; # raise from the 128MB default to 1GB
2.1.3. Adjust BenchmarkSQL for Klustron
Modify BenchmarkSQL's table creation script so that the tables are hash-partitioned and the partitions are pinned to specific shards.
cd benchmarksql5.0/run/sql.common
cat << EOF > tableCreates.sql
create table bmsql_config (
cfg_name varchar(30) primary key,
cfg_value varchar(50)
);
create table bmsql_warehouse (
w_id integer not null,
w_ytd decimal(12,2),
w_tax decimal(4,4),
w_name varchar(10),
w_street_1 varchar(20),
w_street_2 varchar(20),
w_city varchar(20),
w_state char(2),
w_zip char(9)
) partition by hash(w_id);
create table bmsql_warehouse_0 partition of bmsql_warehouse for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_warehouse_1 partition of bmsql_warehouse for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_warehouse_2 partition of bmsql_warehouse for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_warehouse_3 partition of bmsql_warehouse for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_warehouse_4 partition of bmsql_warehouse for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_warehouse_5 partition of bmsql_warehouse for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
create table bmsql_district (
d_w_id integer not null,
d_id integer not null,
d_ytd decimal(12,2),
d_tax decimal(4,4),
d_next_o_id integer,
d_name varchar(10),
d_street_1 varchar(20),
d_street_2 varchar(20),
d_city varchar(20),
d_state char(2),
d_zip char(9)
) partition by hash(d_w_id);
create table bmsql_district_0 partition of bmsql_district for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_district_1 partition of bmsql_district for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_district_2 partition of bmsql_district for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_district_3 partition of bmsql_district for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_district_4 partition of bmsql_district for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_district_5 partition of bmsql_district for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
create table bmsql_customer (
c_w_id integer not null,
c_d_id integer not null,
c_id integer not null,
c_discount decimal(4,4),
c_credit char(2),
c_last varchar(16),
c_first varchar(16),
c_credit_lim decimal(12,2),
c_balance decimal(12,2),
c_ytd_payment decimal(12,2),
c_payment_cnt integer,
c_delivery_cnt integer,
c_street_1 varchar(20),
c_street_2 varchar(20),
c_city varchar(20),
c_state char(2),
c_zip char(9),
c_phone char(16),
c_since timestamp,
c_middle char(2),
c_data varchar(500)
) partition by hash(c_w_id);
create table bmsql_customer_0 partition of bmsql_customer for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_customer_1 partition of bmsql_customer for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_customer_2 partition of bmsql_customer for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_customer_3 partition of bmsql_customer for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_customer_4 partition of bmsql_customer for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_customer_5 partition of bmsql_customer for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
create sequence bmsql_hist_id_seq;
create table bmsql_history (
hist_id integer,
h_c_id integer,
h_c_d_id integer,
h_c_w_id integer,
h_d_id integer,
h_w_id integer,
h_date timestamp,
h_amount decimal(6,2),
h_data varchar(24)
) partition by hash(h_c_w_id);
create table bmsql_history_0 partition of bmsql_history for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_history_1 partition of bmsql_history for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_history_2 partition of bmsql_history for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_history_3 partition of bmsql_history for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_history_4 partition of bmsql_history for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_history_5 partition of bmsql_history for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
create table bmsql_new_order (
no_w_id integer not null,
no_d_id integer not null,
no_o_id integer not null
) partition by hash(no_w_id);
create table bmsql_new_order_0 partition of bmsql_new_order for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_new_order_1 partition of bmsql_new_order for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_new_order_2 partition of bmsql_new_order for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_new_order_3 partition of bmsql_new_order for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_new_order_4 partition of bmsql_new_order for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_new_order_5 partition of bmsql_new_order for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
create table bmsql_oorder (
o_w_id integer not null,
o_d_id integer not null,
o_id integer not null,
o_c_id integer,
o_carrier_id integer,
o_ol_cnt integer,
o_all_local integer,
o_entry_d timestamp
) partition by hash(o_w_id);
create table bmsql_oorder_0 partition of bmsql_oorder for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_oorder_1 partition of bmsql_oorder for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_oorder_2 partition of bmsql_oorder for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_oorder_3 partition of bmsql_oorder for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_oorder_4 partition of bmsql_oorder for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_oorder_5 partition of bmsql_oorder for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
create table bmsql_order_line (
ol_w_id integer not null,
ol_d_id integer not null,
ol_o_id integer not null,
ol_number integer not null,
ol_i_id integer not null,
ol_delivery_d timestamp,
ol_amount decimal(6,2),
ol_supply_w_id integer,
ol_quantity integer,
ol_dist_info char(24)
) partition by hash(ol_w_id);
create table bmsql_order_line_0 partition of bmsql_order_line for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_order_line_1 partition of bmsql_order_line for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_order_line_2 partition of bmsql_order_line for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_order_line_3 partition of bmsql_order_line for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_order_line_4 partition of bmsql_order_line for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_order_line_5 partition of bmsql_order_line for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
create table bmsql_item (
i_id integer not null,
i_name varchar(24),
i_price decimal(5,2),
i_data varchar(50),
i_im_id integer
) with (shard = all);
create table bmsql_stock (
s_w_id integer not null,
s_i_id integer not null,
s_quantity integer,
s_ytd integer,
s_order_cnt integer,
s_remote_cnt integer,
s_data varchar(50),
s_dist_01 char(24),
s_dist_02 char(24),
s_dist_03 char(24),
s_dist_04 char(24),
s_dist_05 char(24),
s_dist_06 char(24),
s_dist_07 char(24),
s_dist_08 char(24),
s_dist_09 char(24),
s_dist_10 char(24)
) partition by hash(s_w_id);
create table bmsql_stock_0 partition of bmsql_stock for values with (MODULUS 6, REMAINDER 0) with (shard = 1);
create table bmsql_stock_1 partition of bmsql_stock for values with (MODULUS 6, REMAINDER 1) with (shard = 1);
create table bmsql_stock_2 partition of bmsql_stock for values with (MODULUS 6, REMAINDER 2) with (shard = 1);
create table bmsql_stock_3 partition of bmsql_stock for values with (MODULUS 6, REMAINDER 3) with (shard = 2);
create table bmsql_stock_4 partition of bmsql_stock for values with (MODULUS 6, REMAINDER 4) with (shard = 2);
create table bmsql_stock_5 partition of bmsql_stock for values with (MODULUS 6, REMAINDER 5) with (shard = 2);
alter table bmsql_warehouse add constraint warehouse_pkey primary key (w_id);
alter table bmsql_district add constraint district_pkey primary key (d_w_id, d_id);
alter table bmsql_customer add constraint customer_pkey primary key (c_w_id, c_d_id, c_id);
create index bmsql_customer_idx1 on bmsql_customer (c_w_id, c_d_id, c_last, c_first);
alter table bmsql_oorder add constraint oorder_pkey primary key (o_w_id, o_d_id, o_id);
create unique index bmsql_oorder_idx1 on bmsql_oorder (o_w_id, o_d_id, o_carrier_id, o_id);
alter table bmsql_new_order add constraint new_order_pkey primary key (no_w_id, no_d_id, no_o_id);
alter table bmsql_order_line add constraint order_line_pkey primary key (ol_w_id, ol_d_id, ol_o_id, ol_number);
alter table bmsql_stock add constraint stock_pkey primary key (s_w_id, s_i_id);
alter table bmsql_item add constraint item_pkey primary key (i_id);
EOF
mv indexCreates.sql indexCreates.sql_old
mv foreignKeys.sql foreignKeys.sql.bak
Modify /home/kunlun/benchmarksql-5.0/run/sql.postgres/extraHistID.sql, changing
alter table bmsql_history add primary key (hist_id);
to the following, because the primary key of a partitioned table must include the partition key (h_c_w_id):
alter table bmsql_history add primary key (hist_id, h_c_w_id);
mv /home/kunlun/benchmarksql-5.0/run/sql.postgres/buildFinish.sql /home/kunlun/benchmarksql-5.0/run/sql.postgres/buildFinish.sql.bak
2.1.4. Creating the Test Database 'bench' and Test User 'bench' in Klustron
postgres=# create user bench with password 'bench';
postgres=# create database bench with owner bench ;
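You can verify the new account by connecting with it from the BenchmarkSQL host, for example:
psql -h 192.168.0.20 -p 47001 -U bench bench -c 'select current_user, current_database();'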
2.1.5. Modifying parameters in benchmarksql5.0/run/props.pg
db=postgres
driver=org.postgresql.Driver
conn=jdbc:postgresql://192.168.0.20:47001/bench
user=bench
password=bench
warehouses=50
loadWorkers=50
terminals=20
//To run specified transactions per terminal- runMins must equal zero
runTxnsPerTerminal=0
//To run for specified minutes- runTxnsPerTerminal must equal zero
runMins=1
//Number of total transactions per minute
limitTxnsPerMin=0
//Set to true to run in 4.x compatible mode. Set to false to use the
//entire configured database evenly.
terminalWarehouseFixed=false
//The following five values must add up to 100
//The default percentages of 45, 43, 4, 4 & 4 match the TPC-C spec
newOrderWeight=45
paymentWeight=43
orderStatusWeight=4
deliveryWeight=4
stockLevelWeight=4
// Directory name to create for collecting detailed result data.
// Comment this out to suppress.
resultDirectory=my_result_%tY-%tm-%td_%tH%tM%tS
osCollectorScript=./misc/os_collector_linux.py
osCollectorInterval=1
osCollectorSSHAddr=kunlun@192.168.0.21
osCollectorDevices=net_enp4s0 blk_nvme1n1
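osCollectorSSHAddr requires password-free SSH from the BenchmarkSQL host (192.168.0.19) to the monitored node. If it is not already configured, something like the following is typically sufficient:
ssh-keygen -t rsa   # accept the defaults if no key pair exists yet
ssh-copy-id kunlun@192.168.0.21
ssh kunlun@192.168.0.21 hostname   # should succeed without a password prompt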
2.1.6. Generating Test Data
cd benchmarksql5.0/run/
./runDatabaseBuild.sh props.pg
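Once the load completes, a quick sanity check confirms that the data landed and the partitioned tables were created as intended (with warehouses=50, bmsql_warehouse should contain 50 rows):
psql -h 192.168.0.20 -p 47001 -U bench bench -c 'select count(*) from bmsql_warehouse;'
psql -h 192.168.0.20 -p 47001 -U bench bench -c '\d+ bmsql_warehouse'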
2.1.7. Analyzing the Test Data Tables and Columns, Collecting Statistical Information
Copy tableCreates.sql to the compute node and, in the same directory, create a script named ana.sh with the following content. The script extracts each table's column list from a one-row SELECT and then runs ANALYZE table(columns) against the compute node:
for i in `cat ./tableCreates.sql | grep 'create table' | grep -v partition | awk '{print $3}'`
do
echo "select * from $i limit 1;" > a.sql
columns=`psql -h 192.168.0.20 -p 47001 -U bench bench -f a.sql | sed -n 1p | sed 's/|/,/g'`   # first output line is the column header; turn '|' into ','
analyze="analyze $i($columns)"
echo $analyze
echo $analyze > tmp.sql
psql -h 192.168.0.20 -p 47001 -U bench bench -f tmp.sql
done
Execute the script to complete statistical analysis for all test tables.
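For example, run it from the directory that contains tableCreates.sql and ana.sh:
sh ana.sh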
2.1.8. Cleaning Up Pending Transactions on All Storage Nodes: Create a Script Named xa.sh
echo 'xa recover' > tmp.sql
echo $1
for i in `mysql -h 192.168.0.$1 -ppgx_pwd -upgx -P$2 < tmp.sql | awk '{print $4}' | sed '1d'`   # column 4 of the 'xa recover' output is the xid data
do
echo "xa rollback '$i'" > tmp.sql
echo "========$i========"
mysql -h 192.168.0.$1 -ppgx_pwd -upgx -P$2 < tmp.sql
done
Copy the script to the storage nodes shard1 (192.168.0.21) and shard2 (192.168.0.22) and run it with each node's IP suffix and port as arguments.
Example: sh xa.sh 21 57003
2.2. Running the Test
Execute runBenchMark.sh props.pg, and you will see output similar to the following:
Upon completing the stress test, a directory similar to my_result_2023-04-16_074338 will be generated in the current directory.
2.3. Generating and Reading the Test Report
Running generateReport.sh <my_result_directory> will produce an HTML analysis report. The key output information, based on the current stress test configuration (warehouses=50, terminals=20, runMins=1), is as follows:
Transaction Type | Latency (90th %) | Latency (Maximum) | Count | Percent | Rollback | Errors | Skipped Deliveries
---|---|---|---|---|---|---|---
NEW_ORDER | 0.222s | 1.255s | 12903 | 44.794% | 0.930% | 0 | N/A
PAYMENT | 0.013s | 1.042s | 12383 | 42.989% | N/A | 0 | N/A
ORDER_STATUS | 0.004s | 0.418s | 1237 | 4.294% | N/A | 0 | N/A
STOCK_LEVEL | 0.025s | 1.067s | 1153 | 4.003% | N/A | 0 | N/A
DELIVERY | 0.000s | 0.001s | 1129 | 3.919% | N/A | 0 | N/A
DELIVERY_BG | 0.250s | 1.096s | 1129 | N/A | N/A | 0 | 0
Overall tpmC: 12903.00
Overall tpmTotal: 28805.00
Below is the load monitoring situation of the Klustron storage node (192.168.0.21):
2.4. Results obtained at different concurrency levels during the stress test
50 warehouses:
Concurrent Users | tpmC | tpmTotal | Notes |
---|---|---|---|
20 | 12765.76 | 28499.62 | |
50 | 37030.55 | 81530.63 | |
100 | 69464.26 | 154490.6 | |
200 | 66644.32 | 148092.77 |
500 warehouses:
Concurrent Users | tpmC | tpmTotal | Notes |
---|---|---|---|
50 | 31018.69 | 68931.3 | |
100 | 57587.56 | 128006.8 | |
200 | 59698.94 | 131773.86 | |
400 | 33875.61 | 75435.1 | |
600 | 26498.45 | 58929.34 | |
800 | 22157.08 | 48785.76 |
03 TPCC Performance Testing Comparison Results
Comparison for 50 warehouses:
Concurrent Users | Klustron tpmC | Klustron tpmTotal | Oracle tpmC | Oracle tpmTotal | Notes
---|---|---|---|---|---
20 | 12765.76 | 28499.62 | 78995.64 | 175990.89 |
50 | 37030.55 | 81530.63 | 84918.44 | 188703.54 |
100 | 69464.26 | 154490.6 | 97179.37 | 215940.76 |
200 | 66644.32 | 148092.77 | 96457.92 | 214142.63 |
Comparison for 500 warehouses:
Concurrent Users | Klustron tpmC | Klustron tpmTotal | Oracle tpmC | Oracle tpmTotal | Notes
---|---|---|---|---|---
50 | 31018.69 | 68931.3 | 136071.81 | 302484.41 |
100 | 57587.56 | 128006.8 | 149889.71 | 332070.93 |
200 | 59698.94 | 131773.86 | 98523.34 | 218430.36 |
400 | 33875.61 | 75435.1 | 62831.25 | 139880.84 |
600 | 26498.45 | 58929.34 | 87321.82 | 194175.92 |
800 | 22157.08 | 48785.76 | 76169.41 | 169711.68 |
Analysis of Results
- Disk Bandwidth Constraints: Because disk bandwidth was limited, sync_binlog=0 had to be set on the KunlunBase shards.
On server-grade disks, setting sync_binlog=1 would not significantly change the results of this test. Since Oracle runs without a standby machine, Klustron's shards likewise run without standbys, which eliminates the delay of waiting for the standby's acknowledgment during fullsync operations.
- Consistency in Key Settings: According to Oracle's documentation for COMMIT_WAIT, "If the parameter is set to FORCE_WAIT, the default behavior (immediate flushing of the redo log buffer with wait) is used."
That default behavior corresponds to MySQL's innodb_flush_log_at_trx_commit=2, which flushes the redo log buffer to the redo log file; the Oracle documentation does not mention syncing the file to persistent storage. We therefore set innodb_flush_log_at_trx_commit=2 on each shard's Kunlun-storage node, aligning commit I/O latency and overhead with Oracle's.
- Computational Layer Overhead: Because KunlunBase adds a compute layer, its tpmC is about 2/3 of Oracle's.
However, when the data volume grows beyond what a single server's compute resources can handle, KunlunBase can scale horizontally: DBAs can raise the cluster's overall throughput by adding servers while keeping latency steady, whereas the throughput of a single-instance Oracle database cannot be increased this way. KunlunBase's distributed architecture is like a train to which powered carriages can be added as needed, so capacity and performance scale linearly; a single-server database is more like a truck that cannot carry more once it is full.
- Connection Scaling Issues: Both Oracle and KunlunBase experience performance degradation with increased connection counts.
However, Oracle's performance starts to decline after about 100 connections, while KunlunBase sustains up to 200 connections before degrading. This is primarily because both Oracle and KunlunBase's compute nodes use a multi-process architecture. As the number of processes grows, the Linux scheduler shrinks the time slice allocated to each process (or thread; Linux treats both as tasks). There is a lower limit to each time slice, because shrinking it indefinitely would leave each process barely any time to execute before the next context switch, making switching overhead dominate. Consequently, when many processes run on the same server, per-process performance visibly drops, execution latency grows, and process-switching overhead becomes a significant share of system overhead, reducing overall throughput.
KunlunBase’s compute nodes will evolve into a thread pool architecture to avoid these issues and maintain stable performance even with thousands of connections.