sbc-bench v0.9.9 Radxa ROCK 5B (Tue, 06 Dec 2022 22:08:57 +0100)
Distributor ID: Ubuntu
Description: Ubuntu 22.04.1 LTS
Release: 22.04
Codename: jammy
Armbian info: Rock 5B, rockchip-rk3588, rockchip-rk3588, 22.11.1, https://github.com/armbian/build
/usr/bin/gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
Uptime: 22:08:57 up 3:12, 3 users, load average: 0.27, 0.30, 0.18, 37.9°C, 9.37V, 152958635
Linux 5.10.72-rockchip-rk3588 (rock-5b) 12/06/22 _aarch64_ (8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.23 0.00 0.12 0.00 0.00 99.65
Device tps kB_read/s kB_wrtn/s kB_dscd/s kB_read kB_wrtn kB_dscd
mmcblk1 1.29 45.34 48.55 0.00 524234 561374 0
mtdblock0 0.00 0.03 0.00 0.00 336 0 0
zram0 0.05 0.20 0.00 0.00 2264 4 0
zram1 0.06 0.04 0.58 0.00 408 6688 0
total used free shared buff/cache available
Mem: 7.5Gi 198Mi 7.2Gi 42Mi 145Mi 7.2Gi
Swap: 3.8Gi 0B 3.8Gi
Filename Type Size Used Priority
/dev/zram0 partition 3937420 0 5
##########################################################################
Checking cpufreq OPP for cpu0-cpu3 (Cortex-A55):
Cpufreq OPP: 1800 Measured: 1822 (1822.123/1822.083/1822.003) (+1.2%)
Cpufreq OPP: 1608 Measured: 1638 (1638.797/1638.797/1638.675) (+1.9%)
Cpufreq OPP: 1416 Measured: 1416 (1416.840/1416.810/1416.658)
Cpufreq OPP: 1200 Measured: 1230 (1230.543/1230.457/1230.429) (+2.5%)
Cpufreq OPP: 1008 Measured: 1060 (1060.093/1059.965/1059.944) (+5.2%)
Cpufreq OPP: 816 Measured: 846 (846.154/846.154/846.048) (+3.7%)
Cpufreq OPP: 600 Measured: 591 (591.249/591.236/591.211) (-1.5%)
Cpufreq OPP: 408 Measured: 393 (393.364/393.364/393.355) (-3.7%)
Checking cpufreq OPP for cpu4-cpu5 (Cortex-A76):
Cpufreq OPP: 2400 Measured: 2344 (2344.492/2344.492/2344.492) (-2.3%)
Cpufreq OPP: 2208 Measured: 2181 (2181.413/2181.321/2181.321) (-1.2%)
Cpufreq OPP: 2016 Measured: 2012 (2012.429/2012.429/2012.331)
Cpufreq OPP: 1800 Measured: 1812 (1812.651/1812.612/1812.572)
Cpufreq OPP: 1608 Measured: 1621 (1621.753/1621.673/1621.554)
Cpufreq OPP: 1416 Measured: 1434 (1434.205/1434.205/1434.174) (+1.3%)
Cpufreq OPP: 1200 Measured: 1252 (1252.248/1252.218/1252.189) (+4.3%)
Cpufreq OPP: 1008 Measured: 1053 (1053.348/1053.296/1053.165) (+4.5%)
Cpufreq OPP: 816 Measured: 845 (845.499/845.478/845.415) (+3.6%)
Cpufreq OPP: 600 Measured: 592 (592.958/592.932/592.880) (-1.3%)
Cpufreq OPP: 408 Measured: 394 (395.022/394.968/394.950) (-3.4%)
Checking cpufreq OPP for cpu6-cpu7 (Cortex-A76):
Cpufreq OPP: 2400 Measured: 2343 (2343.216/2343.216/2343.162) (-2.4%)
Cpufreq OPP: 2208 Measured: 2180 (2180.953/2180.907/2180.907) (-1.3%)
Cpufreq OPP: 2016 Measured: 2012 (2012.870/2012.870/2012.772)
Cpufreq OPP: 1800 Measured: 1814 (1814.323/1814.283/1814.044)
Cpufreq OPP: 1608 Measured: 1623 (1623.306/1623.306/1623.186)
Cpufreq OPP: 1416 Measured: 1433 (1433.490/1433.304/1433.179) (+1.2%)
Cpufreq OPP: 1200 Measured: 1254 (1254.476/1254.357/1254.327) (+4.5%)
Cpufreq OPP: 1008 Measured: 1055 (1055.766/1055.740/1055.687) (+4.7%)
Cpufreq OPP: 816 Measured: 846 (846.070/846.048/845.985) (+3.7%)
Cpufreq OPP: 600 Measured: 592 (592.945/592.945/592.906) (-1.3%)
Cpufreq OPP: 408 Measured: 394 (394.995/394.986/394.959) (-3.4%)
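The percentages in parentheses above are simply the relative gap between the measured clock and the nominal cpufreq OPP. A quick sketch (plain Python, not part of sbc-bench) that reproduces them:

```python
# Reproduce the "(+x.x%)"/"(-x.x%)" figures above: relative deviation of the
# measured clock from the nominal cpufreq OPP, in percent.
def opp_deviation(opp_mhz: float, measured_mhz: float) -> float:
    return (measured_mhz / opp_mhz - 1.0) * 100.0

# Examples taken from the cpu4-cpu5 table:
print(f"{opp_deviation(2400, 2344):+.1f}%")  # -2.3%
print(f"{opp_deviation(1008, 1053):+.1f}%")  # +4.5%
```

sbc-bench only prints the percentage when the deviation is large enough to matter; values within roughly a percent are shown without it.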
##########################################################################
Hardware sensors:
gpu_thermal-virtual-0
temp1: +37.0°C
littlecore_thermal-virtual-0
temp1: +37.9°C
bigcore0_thermal-virtual-0
temp1: +37.0°C
tcpm_source_psy_4_0022-i2c-4-22
in0: 9.00 V (min = +9.00 V, max = +9.00 V)
curr1: 1.67 A (max = +1.67 A)
npu_thermal-virtual-0
temp1: +37.0°C
center_thermal-virtual-0
temp1: +36.1°C
bigcore1_thermal-virtual-0
temp1: +37.0°C
soc_thermal-virtual-0
temp1: +37.9°C (crit = +115.0°C)
##########################################################################
Executing benchmark on cpu0 (Cortex-A55):
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests ==
== ==
== Note 1: 1MB = 1000000 bytes ==
== Note 2: Results for 'copy' tests show how many bytes can be ==
== copied per second (adding together read and written ==
== bytes would have given numbers twice as high) ==
== Note 3: 2-pass copy means that we are using a small temporary buffer ==
== to first fetch data into it, and only then write it to the ==
== destination (source -> L1 cache, L1 cache -> destination) ==
== Note 4: If sample standard deviation exceeds 0.1%, it is shown in ==
== brackets ==
==========================================================================
C copy backwards : 3202.2 MB/s (0.5%)
C copy backwards (32 byte blocks) : 3180.7 MB/s
C copy backwards (64 byte blocks) : 3204.2 MB/s
C copy : 5741.6 MB/s (1.8%)
C copy prefetched (32 bytes step) : 2261.8 MB/s (4.0%)
C copy prefetched (64 bytes step) : 5555.9 MB/s (1.0%)
C 2-pass copy : 2769.6 MB/s
C 2-pass copy prefetched (32 bytes step) : 1671.1 MB/s (1.1%)
C 2-pass copy prefetched (64 bytes step) : 2675.8 MB/s
C fill : 12543.7 MB/s (0.1%)
C fill (shuffle within 16 byte blocks) : 12539.3 MB/s
C fill (shuffle within 32 byte blocks) : 12537.1 MB/s
C fill (shuffle within 64 byte blocks) : 12243.7 MB/s
---
standard memcpy : 5861.6 MB/s
standard memset : 21075.3 MB/s
---
NEON LDP/STP copy : 5183.1 MB/s
NEON LDP/STP copy pldl2strm (32 bytes step) : 1820.4 MB/s (11.0%)
NEON LDP/STP copy pldl2strm (64 bytes step) : 3086.5 MB/s
NEON LDP/STP copy pldl1keep (32 bytes step) : 2091.9 MB/s
NEON LDP/STP copy pldl1keep (64 bytes step) : 4941.1 MB/s (0.3%)
NEON LD1/ST1 copy : 4925.4 MB/s
NEON STP fill : 20803.2 MB/s
NEON STNP fill : 13372.4 MB/s (3.0%)
ARM LDP/STP copy : 5086.0 MB/s
ARM STP fill : 20786.4 MB/s (0.4%)
ARM STNP fill : 13350.1 MB/s (2.2%)
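As a rough illustration of what a 'copy' bandwidth test measures, here is a hypothetical Python sketch; it is nowhere near tinymembench's optimized C/NEON loops, but the accounting (1 MB = 1,000,000 bytes, copied bytes counted once) matches the notes above:

```python
import time

def copy_bandwidth_mb_s(size: int = 8 * 1024 * 1024, iterations: int = 10) -> float:
    """Copy a buffer repeatedly and report MB/s, counting copied bytes once
    (adding read + written bytes would double the figure)."""
    src = bytearray(size)
    start = time.perf_counter()
    for _ in range(iterations):
        dst = bytes(src)  # one full read of src plus one full write of dst
    elapsed = time.perf_counter() - start
    return size * iterations / elapsed / 1e6

print(f"standard copy: {copy_bandwidth_mb_s():.1f} MB/s")
```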
==========================================================================
== Framebuffer read tests. ==
== ==
== Many ARM devices use a part of the system memory as the framebuffer, ==
== typically mapped as uncached but with write-combining enabled. ==
== Writes to such framebuffers are quite fast, but reads are much ==
== slower and very sensitive to the alignment and the selection of ==
== CPU instructions which are used for accessing memory. ==
== ==
== Many x86 systems allocate the framebuffer in the GPU memory, ==
== accessible for the CPU via a relatively slow PCI-E bus. Moreover, ==
== PCI-E is asymmetric and handles reads a lot worse than writes. ==
== ==
== If uncached framebuffer reads are reasonably fast (at least 100 MB/s ==
== or preferably >300 MB/s), then using the shadow framebuffer layer ==
== is not necessary in Xorg DDX drivers, resulting in a nice overall ==
== performance improvement. For example, the xf86-video-fbturbo DDX ==
== uses this trick. ==
==========================================================================
NEON LDP/STP copy (from framebuffer) : 186.2 MB/s (9.0%)
NEON LDP/STP 2-pass copy (from framebuffer) : 130.8 MB/s
NEON LD1/ST1 copy (from framebuffer) : 34.6 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 34.3 MB/s
ARM LDP/STP copy (from framebuffer) : 69.0 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 67.5 MB/s
==========================================================================
== Memory latency test ==
== ==
== Average time is measured for random memory accesses in buffers of ==
== different sizes. The larger the buffer, the more significant the ==
== relative contributions of TLB and L1/L2 cache misses and SDRAM ==
== accesses. For extremely large buffers we expect a page table walk ==
== with several SDRAM requests for almost every memory access (though ==
== 64MiB is not nearly large enough to experience this effect to its ==
== fullest). ==
== ==
== Note 1: All numbers represent extra time that must be added to the ==
== L1 cache latency; cycle timings for L1 cache latency can ==
== usually be found in the processor documentation. ==
== Note 2: Dual random read means two independent memory accesses are ==
== performed simultaneously. If the memory subsystem cannot ==
== handle multiple outstanding requests, dual random read shows ==
== the same timings as two single reads performed back to back. ==
==========================================================================
block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.1 ns / 0.1 ns
32768 : 0.6 ns / 1.0 ns
65536 : 1.5 ns / 2.6 ns
131072 : 2.7 ns / 4.3 ns
262144 : 8.0 ns / 11.8 ns
524288 : 11.7 ns / 15.1 ns
1048576 : 13.8 ns / 16.1 ns
2097152 : 15.8 ns / 18.5 ns
4194304 : 43.9 ns / 66.8 ns
8388608 : 93.5 ns / 149.2 ns
16777216 : 123.2 ns / 147.4 ns
33554432 : 250.3 ns / 312.1 ns
67108864 : 128.2 ns / 156.4 ns
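The 'single random read' column comes from a dependent pointer chase. A hypothetical Python sketch of the access pattern follows; interpreter overhead dominates, so only the pattern is representative, not the nanosecond figures:

```python
import random
import time

def chase_ns_per_read(n_slots: int = 1 << 15, steps: int = 100_000) -> float:
    # Build one random cycle through all slots, so every load depends on
    # the previous one and hardware prefetchers cannot help.
    order = list(range(n_slots))
    random.shuffle(order)
    chain = [0] * n_slots
    for i in range(n_slots):
        chain[order[i]] = order[(i + 1) % n_slots]
    idx = 0
    start = time.perf_counter()
    for _ in range(steps):
        idx = chain[idx]  # each read depends on the previous result
    return (time.perf_counter() - start) / steps * 1e9

print(f"{chase_ns_per_read():.1f} ns per dependent read")
```

'Dual random read' runs two such independent chains at once; on a memory subsystem with multiple outstanding requests the per-read cost barely rises, which is exactly what the tables above show.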
Executing benchmark on cpu4 (Cortex-A76):
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests (notes identical to the cpu0 run above) ==
==========================================================================
C copy backwards : 9245.8 MB/s
C copy backwards (32 byte blocks) : 9194.0 MB/s
C copy backwards (64 byte blocks) : 9189.8 MB/s
C copy : 9406.0 MB/s (0.3%)
C copy prefetched (32 bytes step) : 9569.8 MB/s
C copy prefetched (64 bytes step) : 9595.9 MB/s
C 2-pass copy : 4775.4 MB/s (0.1%)
C 2-pass copy prefetched (32 bytes step) : 7160.6 MB/s
C 2-pass copy prefetched (64 bytes step) : 7421.6 MB/s
C fill : 24687.4 MB/s (0.7%)
C fill (shuffle within 16 byte blocks) : 24695.8 MB/s (1.0%)
C fill (shuffle within 32 byte blocks) : 24753.1 MB/s (0.4%)
C fill (shuffle within 64 byte blocks) : 24675.9 MB/s (0.8%)
---
standard memcpy : 9580.9 MB/s
standard memset : 24751.4 MB/s (0.7%)
---
NEON LDP/STP copy : 9600.4 MB/s
NEON LDP/STP copy pldl2strm (32 bytes step) : 9637.7 MB/s
NEON LDP/STP copy pldl2strm (64 bytes step) : 9656.2 MB/s
NEON LDP/STP copy pldl1keep (32 bytes step) : 9677.7 MB/s
NEON LDP/STP copy pldl1keep (64 bytes step) : 9673.5 MB/s
NEON LD1/ST1 copy : 9499.7 MB/s
NEON STP fill : 24730.4 MB/s (0.6%)
NEON STNP fill : 24747.7 MB/s (0.3%)
ARM LDP/STP copy : 9580.3 MB/s
ARM STP fill : 24533.1 MB/s (0.4%)
ARM STNP fill : 24710.7 MB/s (0.3%)
==========================================================================
== Framebuffer read tests (notes identical to the cpu0 run above) ==
==========================================================================
NEON LDP/STP copy (from framebuffer) : 1738.2 MB/s (12.5%)
NEON LDP/STP 2-pass copy (from framebuffer) : 985.3 MB/s
NEON LD1/ST1 copy (from framebuffer) : 1211.8 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 1007.4 MB/s
ARM LDP/STP copy (from framebuffer) : 1196.0 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 1004.4 MB/s
==========================================================================
== Memory latency test (notes identical to the cpu0 run above) ==
==========================================================================
block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 0.0 ns / 0.0 ns
131072 : 1.1 ns / 1.5 ns
262144 : 2.2 ns / 2.8 ns
524288 : 4.5 ns / 5.8 ns
1048576 : 10.1 ns / 13.0 ns
2097152 : 13.9 ns / 15.6 ns
4194304 : 68.7 ns / 112.1 ns
8388608 : 165.6 ns / 241.0 ns
16777216 : 104.6 ns / 133.8 ns
33554432 : 116.5 ns / 139.6 ns
67108864 : 124.0 ns / 144.3 ns
Executing benchmark on cpu6 (Cortex-A76):
tinymembench v0.4.9 (simple benchmark for memory throughput and latency)
==========================================================================
== Memory bandwidth tests (notes identical to the cpu0 run above) ==
==========================================================================
C copy backwards : 9265.7 MB/s
C copy backwards (32 byte blocks) : 9207.6 MB/s
C copy backwards (64 byte blocks) : 9210.3 MB/s
C copy : 9359.4 MB/s
C copy prefetched (32 bytes step) : 9539.0 MB/s
C copy prefetched (64 bytes step) : 9564.7 MB/s (0.2%)
C 2-pass copy : 4764.9 MB/s
C 2-pass copy prefetched (32 bytes step) : 7081.3 MB/s
C 2-pass copy prefetched (64 bytes step) : 7361.5 MB/s
C fill : 24439.7 MB/s (0.9%)
C fill (shuffle within 16 byte blocks) : 24558.2 MB/s (0.4%)
C fill (shuffle within 32 byte blocks) : 24429.7 MB/s
C fill (shuffle within 64 byte blocks) : 24532.1 MB/s (0.3%)
---
standard memcpy : 9538.2 MB/s
standard memset : 24443.0 MB/s (0.8%)
---
NEON LDP/STP copy : 9556.9 MB/s
NEON LDP/STP copy pldl2strm (32 bytes step) : 9612.9 MB/s
NEON LDP/STP copy pldl2strm (64 bytes step) : 9631.6 MB/s
NEON LDP/STP copy pldl1keep (32 bytes step) : 9659.0 MB/s (0.2%)
NEON LDP/STP copy pldl1keep (64 bytes step) : 9653.4 MB/s
NEON LD1/ST1 copy : 9475.0 MB/s
NEON STP fill : 24576.4 MB/s (0.9%)
NEON STNP fill : 24445.3 MB/s (0.4%)
ARM LDP/STP copy : 9535.9 MB/s
ARM STP fill : 24555.4 MB/s (0.5%)
ARM STNP fill : 24512.2 MB/s (0.5%)
==========================================================================
== Framebuffer read tests (notes identical to the cpu0 run above) ==
==========================================================================
NEON LDP/STP copy (from framebuffer) : 1735.0 MB/s
NEON LDP/STP 2-pass copy (from framebuffer) : 1492.4 MB/s (12.1%)
NEON LD1/ST1 copy (from framebuffer) : 1214.7 MB/s
NEON LD1/ST1 2-pass copy (from framebuffer) : 1025.4 MB/s
ARM LDP/STP copy (from framebuffer) : 1193.9 MB/s
ARM LDP/STP 2-pass copy (from framebuffer) : 1024.8 MB/s
==========================================================================
== Memory latency test (notes identical to the cpu0 run above) ==
==========================================================================
block size : single random read / dual random read
1024 : 0.0 ns / 0.0 ns
2048 : 0.0 ns / 0.0 ns
4096 : 0.0 ns / 0.0 ns
8192 : 0.0 ns / 0.0 ns
16384 : 0.0 ns / 0.0 ns
32768 : 0.0 ns / 0.0 ns
65536 : 0.0 ns / 0.0 ns
131072 : 1.1 ns / 1.5 ns
262144 : 2.2 ns / 2.8 ns
524288 : 4.1 ns / 5.3 ns
1048576 : 9.9 ns / 13.0 ns
2097152 : 13.4 ns / 15.7 ns
4194304 : 36.5 ns / 55.1 ns
8388608 : 80.4 ns / 112.8 ns
16777216 : 219.3 ns / 288.2 ns
33554432 : 247.3 ns / 306.2 ns
67108864 : 262.1 ns / 311.8 ns
##########################################################################
Executing ramlat on cpu0 (Cortex-A55), results in ns:
size: 1x32 2x32 1x64 2x64 1xPTR 2xPTR 4xPTR 8xPTR
4k: 1.651 1.651 1.651 1.651 1.101 1.651 2.236 4.507
8k: 1.651 1.651 1.650 1.651 1.101 1.651 2.236 4.507
16k: 1.666 1.651 1.664 1.651 1.108 1.651 2.236 4.506
32k: 1.672 1.653 1.676 1.653 1.115 1.653 2.240 4.512
64k: 9.397 10.94 9.396 10.93 9.618 10.96 16.03 29.20
128k: 13.72 14.79 13.71 14.77 14.24 14.76 21.85 41.45
256k: 15.88 16.35 15.88 16.35 15.22 16.39 25.54 49.48
512k: 16.71 16.82 16.64 16.82 15.97 17.03 26.58 52.83
1024k: 16.92 16.91 16.75 16.91 16.16 17.10 27.72 52.98
2048k: 21.68 24.12 21.11 23.97 20.38 24.34 39.16 76.59
4096k: 151.5 192.6 166.6 187.7 149.9 185.7 283.1 542.0
8192k: 266.0 252.5 200.3 234.7 196.8 236.8 415.3 819.7
16384k: 275.6 285.8 248.8 264.4 239.1 265.7 459.1 838.2
Executing ramlat on cpu4 (Cortex-A76), results in ns:
size: 1x32 2x32 1x64 2x64 1xPTR 2xPTR 4xPTR 8xPTR
4k: 1.712 1.712 1.712 1.712 1.712 1.712 1.712 3.258
8k: 1.712 1.712 1.712 1.712 1.712 1.712 1.712 3.336
16k: 1.712 1.712 1.712 1.712 1.712 1.712 1.712 3.336
32k: 1.712 1.712 1.712 1.712 1.712 1.712 1.712 3.338
64k: 1.713 1.712 1.713 1.712 1.713 1.713 1.713 3.339
128k: 5.143 5.139 5.136 5.139 5.136 5.676 7.185 12.97
256k: 6.045 6.114 6.055 6.132 6.047 6.033 7.536 12.97
512k: 9.826 9.327 9.735 9.325 9.723 9.833 11.48 17.62
1024k: 17.95 17.49 17.71 17.48 17.70 17.59 19.44 28.83
2048k: 25.63 22.69 24.59 22.65 24.39 23.35 27.22 42.27
4096k: 123.6 107.3 122.4 106.5 121.8 105.3 101.1 68.71
8192k: 118.9 87.77 101.6 86.73 101.2 87.96 116.8 141.7
16384k: 195.0 235.6 241.3 236.5 240.5 218.4 143.7 129.0
Executing ramlat on cpu6 (Cortex-A76), results in ns:
size: 1x32 2x32 1x64 2x64 1xPTR 2xPTR 4xPTR 8xPTR
4k: 1.713 1.713 1.713 1.713 1.713 1.713 1.714 3.258
8k: 1.713 1.713 1.713 1.713 1.713 1.713 1.713 3.338
16k: 1.713 1.713 1.713 1.713 1.713 1.713 1.713 3.338
32k: 1.713 1.713 1.713 1.713 1.713 1.713 1.713 3.340
64k: 1.714 1.713 1.714 1.713 1.714 1.714 1.714 3.341
128k: 5.144 5.139 5.138 5.140 5.138 5.674 7.208 12.97
256k: 7.151 7.218 7.053 7.219 7.028 7.360 8.523 14.27
512k: 9.953 9.571 9.912 9.566 9.874 10.13 11.56 17.70
1024k: 17.92 17.48 17.72 17.47 17.66 17.83 19.67 28.89
2048k: 22.31 20.79 21.67 20.79 21.64 21.36 24.54 38.13
4096k: 125.5 107.9 124.9 107.6 123.3 107.9 101.5 70.26
8192k: 119.5 87.24 100.2 86.53 99.35 87.86 93.70 97.85
16384k: 131.1 115.1 121.2 111.6 122.0 147.9 154.2 162.8
##########################################################################
Executing benchmark on each cluster individually
OpenSSL 3.0.2, built on 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
type 16 bytes 64 bytes 256 bytes 1024 bytes 8192 bytes 16384 bytes
aes-128-cbc 161383.67k 477860.16k 935484.42k 1236525.06k 1363058.69k 1373650.94k (Cortex-A55)
aes-128-cbc 661648.33k 1317811.18k 1690648.32k 1811161.09k 1858052.10k 1862718.81k (Cortex-A76)
aes-128-cbc 653606.50k 1304631.21k 1688533.08k 1809831.59k 1856678.57k 1861479.08k (Cortex-A76)
aes-192-cbc 153751.49k 425465.19k 761016.75k 950681.94k 1025952.43k 1032088.23k (Cortex-A55)
aes-192-cbc 615589.00k 1153821.53k 1432311.47k 1507698.69k 1549727.06k 1552596.99k (Cortex-A76)
aes-192-cbc 615469.00k 1151637.78k 1430890.33k 1505556.82k 1548151.47k 1551329.96k (Cortex-A76)
aes-256-cbc 149009.40k 391002.26k 656184.06k 794822.31k 846686.89k 850875.73k (Cortex-A55)
aes-256-cbc 582298.33k 1026549.01k 1241051.14k 1304572.93k 1328704.17k 1331074.39k (Cortex-A76)
aes-256-cbc 593542.07k 1024435.50k 1238821.80k 1303066.97k 1327658.33k 1330113.19k (Cortex-A76)
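Throughput in the table is in kB/s (the trailing 'k'), one thread per cluster. A quick, hypothetical calculation over the aes-128-cbc rows above shows how the per-core A76 advantage over the A55 shrinks as block size grows (both cores use the ARMv8 crypto extensions, so large blocks converge toward memory limits):

```python
# aes-128-cbc throughput in kB/s, values copied from the table above
a55 = {16: 161383.67, 16384: 1373650.94}
a76 = {16: 661648.33, 16384: 1862718.81}
for size in (16, 16384):
    print(f"{size:>5}-byte blocks: A76/A55 = {a76[size] / a55[size]:.2f}x")
# -> 4.10x at 16-byte blocks, 1.36x at 16384-byte blocks
```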
##########################################################################
Executing benchmark single-threaded on cpu0 (Cortex-A55)
7-Zip (a) [64] 16.02 : Copyright (c) 1999-2016 Igor Pavlov : 2016-05-21
p7zip Version 16.02 (locale=C,Utf16=off,HugeFiles=on,64 bits,8 CPUs LE)
LE
CPU Freq: - - - - - - - - 2048000000
RAM size: 7690 MB, # CPU hardware threads: 8
RAM usage: 435 MB, # Benchmark threads: 1
Compressing | Decompressing
Dict Speed Usage R/U Rating | Speed Usage R/U Rating
KiB/s % MIPS MIPS | KiB/s % MIPS MIPS
22: 1118 99 1094 1088 | 21148 98 1845 1806
23: 982 100 1001 1001 | 21159 100 1832 1832
24: 978 99 1060 1052 | 20614 100 1810 1810
25: 915 100 1045 1045 | 19867 100 1768 1768
---------------------------------- | ------------------------------
Avr: 100 1050 1047 | 99 1814 1804
Tot: 100 1432 1425
Executing benchmark single-threaded on cpu4 (Cortex-A76)
CPU Freq: - - - - - - - - -
RAM size: 7690 MB, # CPU hardware threads: 8
RAM usage: 435 MB, # Benchmark threads: 1
Compressing | Decompressing
Dict Speed Usage R/U Rating | Speed Usage R/U Rating
KiB/s % MIPS MIPS | KiB/s % MIPS MIPS
22: 2054 100 1999 1999 | 38000 100 3245 3245
23: 1879 100 1915 1915 | 37182 100 3219 3218
24: 1782 100 1917 1917 | 35843 100 3147 3147
25: 1675 100 1913 1913 | 34176 100 3042 3042
---------------------------------- | ------------------------------
Avr: 100 1936 1936 | 100 3163 3163
Tot: 100 2550 2549
Executing benchmark single-threaded on cpu6 (Cortex-A76)
CPU Freq: - - - 64000000 - - - - -
RAM size: 7690 MB, # CPU hardware threads: 8
RAM usage: 435 MB, # Benchmark threads: 1
Compressing | Decompressing
Dict Speed Usage R/U Rating | Speed Usage R/U Rating
KiB/s % MIPS MIPS | KiB/s % MIPS MIPS
22: 2050 100 1995 1995 | 38205 100 3262 3262
23: 1881 100 1917 1917 | 37152 100 3216 3216
24: 1780 100 1914 1914 | 35806 100 3143 3143
25: 1678 100 1916 1916 | 34211 100 3045 3045
---------------------------------- | ------------------------------
Avr: 100 1936 1936 | 100 3167 3167
Tot: 100 2551 2551
##########################################################################
Executing benchmark 3 times multi-threaded on CPUs 0-7
CPU Freq: - 64000000 - - - 256000000 - - -
RAM size: 7690 MB, # CPU hardware threads: 8
RAM usage: 1765 MB, # Benchmark threads: 8
Compressing | Decompressing
Dict Speed Usage R/U Rating | Speed Usage R/U Rating
KiB/s % MIPS MIPS | KiB/s % MIPS MIPS
22: 14585 742 1912 14189 | 206010 679 2588 17572
23: 13773 729 1926 14034 | 200107 680 2546 17317
24: 13184 754 1880 14176 | 193942 681 2500 17022
25: 12478 761 1873 14248 | 186819 679 2449 16626
---------------------------------- | ------------------------------
Avr: 746 1898 14162 | 680 2521 17134
Tot: 713 2209 15648
CPU Freq: - - - - - - - - -
RAM size: 7690 MB, # CPU hardware threads: 8
RAM usage: 1765 MB, # Benchmark threads: 8
Compressing | Decompressing
Dict Speed Usage R/U Rating | Speed Usage R/U Rating
KiB/s % MIPS MIPS | KiB/s % MIPS MIPS
22: 14672 753 1895 14274 | 204069 677 2573 17406
23: 13807 744 1892 14068 | 199209 680 2536 17239
24: 13416 775 1861 14425 | 193112 680 2493 16949
25: 12412 778 1823 14172 | 186625 679 2445 16609
---------------------------------- | ------------------------------
Avr: 762 1868 14235 | 679 2512 17051
Tot: 721 2190 15643
CPU Freq: - - - - - - - - -
RAM size: 7690 MB, # CPU hardware threads: 8
RAM usage: 1765 MB, # Benchmark threads: 8
Compressing | Decompressing
Dict Speed Usage R/U Rating | Speed Usage R/U Rating
KiB/s % MIPS MIPS | KiB/s % MIPS MIPS
22: 14644 741 1922 14246 | 204560 679 2569 17448
23: 13552 727 1900 13808 | 199081 680 2533 17228
24: 13063 742 1892 14046 | 193545 681 2493 16987
25: 12445 771 1842 14209 | 187078 681 2443 16649
---------------------------------- | ------------------------------
Avr: 745 1889 14077 | 681 2510 17078
Tot: 713 2199 15578
Compression: 14162,14235,14077
Decompression: 17134,17051,17078
Total: 15648,15643,15578
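Averaging the three multi-threaded runs gives a single figure per metric (a quick calculation over the summary lines above, not sbc-bench output):

```python
# MIPS ratings of the three 7-zip multi-threaded runs, from the lines above
runs = {
    "Compression":   [14162, 14235, 14077],
    "Decompression": [17134, 17051, 17078],
    "Total":         [15648, 15643, 15578],
}
for name, values in runs.items():
    print(f"{name}: avg {sum(values) / len(values):.0f} MIPS")
```

The spread between runs is well under 1%, so the board held its clocks across all three passes.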
##########################################################################
Testing maximum cpufreq again, still under full load. System health now:
Time big.LITTLE load %cpu %sys %usr %nice %io %irq Temp DC(V)
22:40:17: 2400/1800MHz 6.76 91% 1% 89% 0% 0% 0% 66.5°C 9.31
Checking cpufreq OPP for cpu0-cpu3 (Cortex-A55):
Cpufreq OPP: 1800 Measured: 1803 (1803.592/1803.553/1802.963)
Checking cpufreq OPP for cpu4-cpu5 (Cortex-A76):
Cpufreq OPP: 2400 Measured: 2315 (2315.588/2315.121/2314.861) (-3.5%)
Checking cpufreq OPP for cpu6-cpu7 (Cortex-A76):
Cpufreq OPP: 2400 Measured: 2316 (2316.990/2316.938/2316.574) (-3.5%)
##########################################################################
Hardware sensors:
gpu_thermal-virtual-0
temp1: +54.5°C
littlecore_thermal-virtual-0
temp1: +55.5°C
bigcore0_thermal-virtual-0
temp1: +55.5°C
tcpm_source_psy_4_0022-i2c-4-22
in0: 9.00 V (min = +9.00 V, max = +9.00 V)
curr1: 1.67 A (max = +1.67 A)
npu_thermal-virtual-0
temp1: +54.5°C
center_thermal-virtual-0
temp1: +54.5°C
bigcore1_thermal-virtual-0
temp1: +55.5°C
soc_thermal-virtual-0
temp1: +55.5°C (crit = +115.0°C)
##########################################################################
Transitions since last boot (13453520ms ago):
/sys/devices/platform/dmc/devfreq/dmc:
   From     :   To
             :  528000000 1068000000 1560000000 2112000000   time(ms)
*  528000000:          0          0          0        218   12361740
  1068000000:         58          0          0         16     213303
  1560000000:          3         17          0          3      84273
  2112000000:        158         57         23          0     791323
Total transition : 553
##########################################################################
Thermal source: /sys/devices/virtual/thermal/thermal_zone0/ (soc-thermal)
System health while running tinymembench:
Time big.LITTLE load %cpu %sys %usr %nice %io %irq Temp DC(V)
22:10:41: 2400/1800MHz 0.95 0% 0% 0% 0% 0% 0% 37.9°C 9.34
22:12:41: 2400/1800MHz 1.07 12% 0% 12% 0% 0% 0% 44.4°C 9.39
22:14:41: 2400/1800MHz 1.01 12% 0% 12% 0% 0% 0% 40.7°C 9.35
22:16:41: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 39.8°C 9.39
22:18:41: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 51.8°C 9.28
22:20:41: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.37
22:22:42: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.39
22:24:42: 2400/1800MHz 1.03 12% 0% 12% 0% 0% 0% 55.5°C 9.33
22:26:42: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 46.2°C 9.36
22:28:42: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.39
System health while running ramlat:
Time big.LITTLE load %cpu %sys %usr %nice %io %irq Temp DC(V)
22:29:50: 2400/1800MHz 1.00 1% 0% 1% 0% 0% 0% 43.5°C 9.32
22:29:59: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.39
22:30:08: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.33
22:30:17: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.34
22:30:26: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.37
22:30:35: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.27
22:30:44: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.33
22:30:54: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.32
22:31:03: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.30
System health while running OpenSSL benchmark:
Time big.LITTLE load %cpu %sys %usr %nice %io %irq Temp DC(V)
22:31:04: 2400/1800MHz 1.00 1% 0% 1% 0% 0% 0% 44.4°C 9.33
22:31:20: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.34
22:31:36: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.33
22:31:52: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.30
22:32:08: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.39
22:32:24: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.40
22:32:40: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.32
22:32:56: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.39
22:33:12: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.32
22:33:28: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.31
22:33:44: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.34
System health while running 7-zip single core benchmark:
Time big.LITTLE load %cpu %sys %usr %nice %io %irq Temp DC(V)
22:33:46: 2400/1800MHz 1.00 1% 0% 1% 0% 0% 0% 44.4°C 9.40
22:33:59: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.33
22:34:12: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.40
22:34:25: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.38
22:34:38: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.36
22:34:51: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.34
22:35:04: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.35
22:35:17: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 42.5°C 9.39
22:35:30: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 41.6°C 9.35
22:35:43: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.31
22:35:56: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.36
22:36:09: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.34
22:36:23: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 43.5°C 9.37
22:36:36: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.32
22:36:49: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.34
22:37:02: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.36
22:37:15: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.35
22:37:28: 2400/1800MHz 1.00 12% 0% 12% 0% 0% 0% 44.4°C 9.30
System health while running 7-zip multi core benchmark:

Time        big.LITTLE   load %cpu %sys %usr %nice %io %irq   Temp    DC(V)
22:37:37: 2400/1800MHz  1.00   1%   0%   1%   0%   0%   0%  45.3°C   9.38
22:37:47: 2400/1800MHz  2.08  90%   0%  90%   0%   0%   0%  55.5°C   9.32
22:37:57: 2400/1800MHz  3.21  86%   0%  85%   0%   0%   0%  58.2°C   9.30
22:38:09: 2400/1800MHz  3.19  85%   1%  84%   0%   0%   0%  61.0°C   9.33
22:38:20: 2400/1800MHz  4.33  80%   1%  79%   0%   0%   0%  61.0°C   9.34
22:38:30: 2400/1800MHz  4.57  88%   0%  87%   0%   0%   0%  61.0°C   9.33
22:38:40: 2400/1800MHz  4.66  91%   0%  90%   0%   0%   0%  62.8°C   9.37
22:38:50: 2400/1800MHz  5.25  87%   0%  86%   0%   0%   0%  63.8°C   9.24
22:39:03: 2400/1800MHz  5.82  84%   1%  83%   0%   0%   0%  66.5°C   9.28
22:39:13: 2400/1800MHz  6.02  81%   1%  80%   0%   0%   0%  64.7°C   9.33
22:39:23: 2400/1800MHz  6.62  91%   1%  90%   0%   0%   0%  64.7°C   9.32
22:39:33: 2400/1800MHz  6.39  91%   0%  90%   0%   0%   0%  66.5°C   9.28
22:39:43: 2400/1800MHz  6.34  85%   0%  85%   0%   0%   0%  68.4°C   9.32
22:39:57: 2400/1800MHz  6.52  82%   1%  81%   0%   0%   0%  68.4°C   9.33
22:40:07: 2400/1800MHz  6.53  81%   1%  80%   0%   0%   0%  67.5°C   9.34
22:40:17: 2400/1800MHz  6.76  91%   1%  89%   0%   0%   0%  66.5°C   9.31
##########################################################################
dmesg output while running the benchmarks:
[11680.469203] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[11765.108375] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[11809.707272] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[11895.908371] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[11898.108380] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[11929.835311] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[11938.700385] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[11940.044367] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[11940.540389] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[11941.780389] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[12049.963775] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[12149.636814] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[12151.220823] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[12170.092208] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[12290.220781] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[12410.348901] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[12530.473073] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[12659.694239] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[12779.823119] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[12855.092382] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[12899.951337] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[13020.079133] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[13062.212371] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[13072.116364] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[13074.036358] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[13074.628381] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[13096.756371] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[13097.580529] dwhdmi-rockchip fde80000.hdmi: use tmds mode
[13140.207523] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[13260.337472] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
[13380.462244] r8125 0004:41:00.0 enP4p65s0: rss get rxnfc
##########################################################################
Linux 5.10.72-rockchip-rk3588 (rock-5b)    12/06/22    _aarch64_    (8 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           2.83    0.00    0.13    0.00    0.00   97.04

Device             tps    kB_read/s    kB_wrtn/s    kB_dscd/s    kB_read    kB_wrtn    kB_dscd
mmcblk1           1.24        50.66        41.89         0.00     681510     563574          0
mtdblock0         0.00         0.02         0.00         0.00        336          0          0
zram0             0.04         0.17         0.00         0.00       2264          4          0
zram1             0.06         0.03         0.52         0.00        416       7008          0

               total        used        free      shared  buff/cache   available
Mem:           7.5Gi       195Mi       7.0Gi        43Mi       301Mi       7.2Gi
Swap:          3.8Gi          0B       3.8Gi

Filename                                Type            Size            Used            Priority
/dev/zram0                              partition       3937420         0               5
CPU sysfs topology (clusters, cpufreq members, clockspeeds)
                 cpufreq   min    max
 CPU    cluster  policy   speed  speed   core type
  0        0        0      408   1800    Cortex-A55 / r2p0
  1        0        0      408   1800    Cortex-A55 / r2p0
  2        0        0      408   1800    Cortex-A55 / r2p0
  3        0        0      408   1800    Cortex-A55 / r2p0
  4        1        4      408   2400    Cortex-A76 / r4p0
  5        1        4      408   2400    Cortex-A76 / r4p0
  6        2        6      408   2400    Cortex-A76 / r4p0
  7        2        6      408   2400    Cortex-A76 / r4p0
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: ARM
Model name: Cortex-A55
Model: 0
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Stepping: r2p0
CPU max MHz: 1800.0000
CPU min MHz: 408.0000
BogoMIPS: 48.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
Model name: Cortex-A76
Model: 0
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 2
Stepping: r4p0
CPU max MHz: 2400.0000
CPU min MHz: 408.0000
BogoMIPS: 48.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm lrcpc dcpop asimddp
L1d cache: 384 KiB (8 instances)
L1i cache: 384 KiB (8 instances)
L2 cache: 2.5 MiB (8 instances)
L3 cache: 3 MiB (1 instance)
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
SoC guess: Rockchip RK3588/RK3588s (35880000)
DMC gov: dmc_ondemand (upthreshold: 40)
DT compat: radxa,rock-5b
rockchip,rk3588
Compiler: /usr/bin/gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0 / aarch64-linux-gnu
Userland: arm64
Kernel: 5.10.72-rockchip-rk3588/aarch64
CONFIG_HZ=300
CONFIG_HZ_300=y
CONFIG_PREEMPT_NOTIFIERS=y
CONFIG_PREEMPT_VOLUNTARY=y
raid6: neonx8 gen() 5983 MB/s
raid6: neonx8 xor() 4607 MB/s
raid6: neonx4 gen() 5992 MB/s
raid6: neonx4 xor() 4683 MB/s
raid6: neonx2 gen() 5835 MB/s
raid6: neonx2 xor() 4378 MB/s
raid6: neonx1 gen() 4758 MB/s
raid6: neonx1 xor() 3423 MB/s
raid6: int64x8 gen() 1474 MB/s
raid6: int64x8 xor() 985 MB/s
raid6: int64x4 gen() 1925 MB/s
raid6: int64x4 xor() 1072 MB/s
raid6: int64x2 gen() 2660 MB/s
raid6: int64x2 xor() 1453 MB/s
raid6: int64x1 gen() 2191 MB/s
raid6: int64x1 xor() 1074 MB/s
raid6: using algorithm neonx4 gen() 5992 MB/s
raid6: .... xor() 4683 MB/s, rmw enabled
raid6: using neon recovery algorithm
xor: measuring software checksum speed
xor: using function: arm64_neon (10689 MB/sec)
cpu cpu0: pvtm=1521
cpu cpu0: pvtm-volt-sel=5
cpu cpu4: pvtm=1782
cpu cpu4: pvtm-volt-sel=7
cpu cpu6: pvtm=1784
cpu cpu6: pvtm-volt-sel=7
cpu0/index0: 32K, level: 1, type: Data
cpu0/index1: 32K, level: 1, type: Instruction
cpu0/index2: 128K, level: 2, type: Unified
cpu0/index3: 3072K, level: 3, type: Unified
cpu1/index0: 32K, level: 1, type: Data
cpu1/index1: 32K, level: 1, type: Instruction
cpu1/index2: 128K, level: 2, type: Unified
cpu1/index3: 3072K, level: 3, type: Unified
cpu2/index0: 32K, level: 1, type: Data
cpu2/index1: 32K, level: 1, type: Instruction
cpu2/index2: 128K, level: 2, type: Unified
cpu2/index3: 3072K, level: 3, type: Unified
cpu3/index0: 32K, level: 1, type: Data
cpu3/index1: 32K, level: 1, type: Instruction
cpu3/index2: 128K, level: 2, type: Unified
cpu3/index3: 3072K, level: 3, type: Unified
cpu4/index0: 64K, level: 1, type: Data
cpu4/index1: 64K, level: 1, type: Instruction
cpu4/index2: 512K, level: 2, type: Unified
cpu4/index3: 3072K, level: 3, type: Unified
cpu5/index0: 64K, level: 1, type: Data
cpu5/index1: 64K, level: 1, type: Instruction
cpu5/index2: 512K, level: 2, type: Unified
cpu5/index3: 3072K, level: 3, type: Unified
cpu6/index0: 64K, level: 1, type: Data
cpu6/index1: 64K, level: 1, type: Instruction
cpu6/index2: 512K, level: 2, type: Unified
cpu6/index3: 3072K, level: 3, type: Unified
cpu7/index0: 64K, level: 1, type: Data
cpu7/index1: 64K, level: 1, type: Instruction
cpu7/index2: 512K, level: 2, type: Unified
cpu7/index3: 3072K, level: 3, type: Unified
##########################################################################
vdd_cpu_big0_s0: 1000 mV (1050 mV max)
vdd_cpu_big1_s0: 1000 mV (1050 mV max)
vdd_npu_s0: 788 mV (950 mV max)
cluster0-opp-table:
408 MHz 675.0 mV (00ff ffff)
600 MHz 675.0 mV (00ff ffff)
816 MHz 675.0 mV (00ff ffff)
1008 MHz 675.0 mV (00ff ffff)
1200 MHz 712.5 mV (00ff ffff)
1416 MHz 762.5 mV (00ff ffff)
1608 MHz 850.0 mV (00ff ffff)
1800 MHz 950.0 mV (00ff ffff)
cluster1-opp-table:
408 MHz 675.0 mV (00ff ffff)
600 MHz 675.0 mV (00ff ffff)
816 MHz 675.0 mV (00ff ffff)
1008 MHz 675.0 mV (00ff ffff)
1200 MHz 675.0 mV (00ff ffff)
1416 MHz 725.0 mV (00ff ffff)
1608 MHz 762.5 mV (00ff ffff)
1800 MHz 850.0 mV (00ff ffff)
2016 MHz 925.0 mV (00ff ffff)
2208 MHz 987.5 mV (00ff ffff)
2256 MHz 1000.0 mV (00ff 0000)
2304 MHz 1000.0 mV (00ff 0000)
2352 MHz 1000.0 mV (00ff 0000)
2400 MHz 1000.0 mV (00ff ffff)
cluster2-opp-table:
408 MHz 675.0 mV (00ff ffff)
600 MHz 675.0 mV (00ff ffff)
816 MHz 675.0 mV (00ff ffff)
1008 MHz 675.0 mV (00ff ffff)
1200 MHz 675.0 mV (00ff ffff)
1416 MHz 725.0 mV (00ff ffff)
1608 MHz 762.5 mV (00ff ffff)
1800 MHz 850.0 mV (00ff ffff)
2016 MHz 925.0 mV (00ff ffff)
2208 MHz 987.5 mV (00ff ffff)
2256 MHz 1000.0 mV (00ff 0000)
2304 MHz 1000.0 mV (00ff 0000)
2352 MHz 1000.0 mV (00ff 0000)
2400 MHz 1000.0 mV (00ff ffff)
dmc-opp-table:
528 MHz 675.0 mV
1068 MHz 725.0 mV
1560 MHz 800.0 mV
2750 MHz 875.0 mV
gpu-opp-table:
300 MHz 675.0 mV
400 MHz 675.0 mV
500 MHz 675.0 mV
600 MHz 675.0 mV
700 MHz 700.0 mV
800 MHz 750.0 mV
900 MHz 800.0 mV
1000 MHz 850.0 mV
npu-opp-table:
300 MHz 700.0 mV
400 MHz 700.0 mV
500 MHz 700.0 mV
600 MHz 700.0 mV
700 MHz 700.0 mV
800 MHz 750.0 mV
900 MHz 800.0 mV
1000 MHz 850.0 mV
| Radxa ROCK 5B | 2400/1800 MHz | 5.10 | Armbian 22.11.1 Jammy arm64 | 15620 | 2551 | 1331070 | 9580 | 24750 | - |