1. Environment

  • Source code: OpenHarmony 4.1 Release
  • Hardware platform: RK3568
  • Platform software: OpenHarmony 4.1 Release 32-bit (4.1.7.8)
  • fio version: 3.37

2. About fio

fio is an I/O benchmarking tool that runs on Linux, Windows, and other systems, and can be used to measure the I/O performance of local disks, network storage, and more.
fio documentation: https://fio.readthedocs.io/en/latest/fio_doc.html
fio source downloads: https://brick.kernel.dk/snaps/
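
If the archive is not already on hand, it can be fetched from the download page above, for example (assuming the 3.37 release tarball is published directly under that directory):

$ wget https://brick.kernel.dk/snaps/fio-3.37.tar.gz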

3. Cross-Compiling the fio Source

Copy the downloaded fio source archive fio-3.37.tar.gz into the third_party/ directory of the OpenHarmony source tree and extract it:

 

$ cd third_party
$ tar -zxvf fio-3.37.tar.gz

3.1 Setting Up the Cross-Compilation Environment

In the third_party/fio-3.37/ directory, create the cross-compilation environment scripts fio_env32.sh and fio_env64.sh.

fio_env32.sh is as follows:

#!/bin/bash
make clean
#arm-linux-ohos ./configure --cpu=aarch64 --prefix=$PWD/install
SDK_PATH=../../prebuilts/ohos-sdk/linux/11/native
CUR_PATH=$(pwd)
cd $SDK_PATH
export OHOS_SDK=$(pwd)
cd $CUR_PATH
export AS=${OHOS_SDK}/llvm/bin/llvm-as
export CC="${OHOS_SDK}/llvm/bin/clang --target=arm-linux-ohos"
export CXX="${OHOS_SDK}/llvm/bin/clang++ --target=arm-linux-ohos"
export LD="${OHOS_SDK}/llvm/bin/lld --target=arm-linux-ohos"
export STRIP=${OHOS_SDK}/llvm/bin/llvm-strip
export RANLIB=${OHOS_SDK}/llvm/bin/llvm-ranlib
export OBJDUMP=${OHOS_SDK}/llvm/bin/llvm-objdump
export OBJCOPY=${OHOS_SDK}/llvm/bin/llvm-objcopy
export NM=${OHOS_SDK}/llvm/bin/llvm-nm
export AR=${OHOS_SDK}/llvm/bin/llvm-ar
export CFLAGS="-fPIC -D__MUSL__=1"
export CXXFLAGS="-fPIC -D__MUSL__=1"

fio_env64.sh is as follows:

#!/bin/bash
#aarch64-linux-ohos ./configure --cpu=aarch64 --prefix=$PWD/install
SDK_PATH=../../prebuilts/ohos-sdk/linux/11/native
CUR_PATH=$(pwd)
cd $SDK_PATH
export OHOS_SDK=$(pwd)
cd $CUR_PATH
export AS=${OHOS_SDK}/llvm/bin/llvm-as
export CC="${OHOS_SDK}/llvm/bin/clang --target=aarch64-linux-ohos"
export CXX="${OHOS_SDK}/llvm/bin/clang++ --target=aarch64-linux-ohos"
export LD="${OHOS_SDK}/llvm/bin/lld --target=aarch64-linux-ohos"
export STRIP=${OHOS_SDK}/llvm/bin/llvm-strip
export RANLIB=${OHOS_SDK}/llvm/bin/llvm-ranlib
export OBJDUMP=${OHOS_SDK}/llvm/bin/llvm-objdump
export OBJCOPY=${OHOS_SDK}/llvm/bin/llvm-objcopy
export NM=${OHOS_SDK}/llvm/bin/llvm-nm
export AR=${OHOS_SDK}/llvm/bin/llvm-ar
export CFLAGS="-fPIC -D__MUSL__=1"
export CXXFLAGS="-fPIC -D__MUSL__=1"

 

3.2 Building fio

Before building, it is worth being clear about the difference between the CPU architecture and the bitness of the OS userland: together they decide whether to use fio_env32.sh or fio_env64.sh, and whether to pass --cpu=aarch64 or --cpu=arm.

First, hdc shell getconf LONG_BIT reports the OS word size, which decides between fio_env32.sh and fio_env64.sh. A 64-bit OS can also run 32-bit binaries, but a 32-bit OS cannot run 64-bit ones, so it is best to use the env script that matches the OS.

Second, hdc shell uname -a reports the CPU architecture, which decides whether to configure with --cpu=arm or --cpu=aarch64. Other values such as x86 or x86_64 are possible as well; choose the one that matches your device.

Note: the cross-compilation environment depends on the device's CPU architecture and OS software bitness; configure and adjust the scripts according to your actual setup.

  • The RK3568 CPU architecture is aarch64, checked as follows:
> hdc shell uname -a
Linux localhost 5.10.184 #1 SMP Sun Jun 16 01:10:03 CST 2024 aarch64
  • The device's OS software is 32-bit, checked as follows:
> hdc shell getconf LONG_BIT
32
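
These two checks can be combined into a small host-side helper that suggests which environment script to source and which --cpu value to pass. A rough sketch (the script name check_target.sh is illustrative, and it assumes uname -m is available on the device):

#!/bin/bash
# check_target.sh: query the connected device and suggest the matching fio env script (illustrative)
ARCH=$(hdc shell uname -m | tr -d '\r')
BITS=$(hdc shell getconf LONG_BIT | tr -d '\r')
echo "CPU architecture: $ARCH"
echo "OS word size    : $BITS-bit"
if [ "$BITS" = "32" ]; then
    echo "Source fio_env32.sh"
else
    echo "Source fio_env64.sh"
fi
echo "Then configure with: ./configure --cpu=$ARCH --prefix=\$PWD/install"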

Run fio_env32.sh in the third_party/fio-3.37/ directory (the 32-bit configuration is used as the example here):

    $ cd third_party/fio-3.37/
    $ source fio_env32.sh
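
After sourcing the script, it is worth confirming that the SDK path resolved and that the cross compiler is reachable, for example:

    # Quick sanity check; assumes the SDK is installed at the path referenced in the script
    $ echo $OHOS_SDK
    $ $CC --version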

     

Continuing from the step above, run the following commands in the third_party/fio-3.37/ directory:

    # Configure the fio build: CPU architecture aarch64, install prefix ./install
    
    
    $ ./configure --cpu=aarch64 --prefix=$PWD/install
    Operating system              Linux
    CPU                           arm64
    Big endian                    no
    Compiler                      /home/oh4.1/prebuilts/ohos-sdk/linux/11/native/llvm/bin/clang --target=arm-linux-ohos
    Cross compile                 yes
    
    Static build                  no
    Wordsize                      32
    ...
    TCMalloc support              no
    seed_buckets                  4
    
    
    # Build and install
    $ make && make install
    ...
    install -m 755 -d /home/oh4.1/third_party/fio-3.37/install/bin
    install fio t/fio-genzipf t/fio-btrace2fio t/fio-dedupe t/fio-verify-state ./tools/fio_generate_plots ./tools/plot/fio2gnuplot ./tools/genfio ./tools/fiologparser.py ./tools/hist/fiologparser_hist.py ./tools/hist/fio-histo-log-pctiles.py ./tools/fio_jsonplus_clat2csv /home/oh4.1/third_party/fio-3.37/install/bin
    install -m 755 -d /home/oh4.1/third_party/fio-3.37/install/man/man1
    install -m 644 ./fio.1 /home/oh4.1/third_party/fio-3.37/install/man/man1
    install -m 644 ./tools/fio_generate_plots.1 /home/oh4.1/third_party/fio-3.37/install/man/man1
    install -m 644 ./tools/plot/fio2gnuplot.1 /home/oh4.1/third_party/fio-3.37/install/man/man1
    install -m 644 ./tools/hist/fiologparser_hist.py.1 /home/oh4.1/third_party/fio-3.37/install/man/man1
    install -m 755 -d /home/oh4.1/third_party/fio-3.37/install/share/fio
    install -m 644 ./tools/plot/*gpm /home/oh4.1/third_party/fio-3.37/install/share/fio/
    

Note: when switching between the fio_env32.sh and fio_env64.sh configurations, remove the old build artifacts first by running make clean.
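
For example, to rebuild with the 64-bit toolchain instead, a typical sequence would be (a sketch; the 64-bit build is not exercised in this article):

    $ make clean
    $ source fio_env64.sh
    $ ./configure --cpu=aarch64 --prefix=$PWD/install
    $ make && make install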

After a successful build, the fio executable is generated under third_party/fio-3.37/install/bin:

    $ tree install
    install
    ├── bin
    │   ├── fio
    │   ├── fio2gnuplot
    │   ├── fio-btrace2fio
    │   ├── fio-dedupe
    │   ├── fio_generate_plots
    │   ├── fio-genzipf
    │   ├── fio-histo-log-pctiles.py
    │   ├── fio_jsonplus_clat2csv
    │   ├── fiologparser_hist.py
    │   ├── fiologparser.py
    │   ├── fio-verify-state
    │   └── genfio
    ├── man
    │   └── man1
    │       ├── fio.1
    │       ├── fio2gnuplot.1
    │       ├── fio_generate_plots.1
    │       └── fiologparser_hist.py.1
    └── share
        └── fio
            ├── graph2D.gpm
            ├── graph3D.gpm
            └── math.gpm
    
    5 directories, 19 files

Check the file attributes of fio with file:

    $ file install/bin/fio
    install/bin/fio: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (SYSV), dynamically linked, interpreter /lib/ld-musl-arm.so.1, with debug_info, not stripped
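
The file output shows that the binary still carries debug info and is not stripped. If a smaller file is preferred before pushing it to the device, it can optionally be stripped with the llvm-strip exported by the env script (file should then report it as stripped):

    $ ${STRIP} install/bin/fio
    $ file install/bin/fio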

3.3 Verifying the fio Executable

Push the fio binary built above into /system/bin on the development board:

    # Remount the root filesystem as read-write
    > hdc shell "mount -o remount,rw /"
    
    # Send the file (it can first be copied to the Windows host and sent from there); the second argument is the destination on the device
    > hdc file send xxx\third_party\fio-3.37\install\bin\fio /system/bin
    
    > hdc shell
    $ cd system/bin
    $ chmod 777 fio
    
    # Print the fio version to check that it runs correctly
    $ fio -v
    fio-3.37
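
The steps above can be wrapped into a single host-side script so the deployment is repeatable after each rebuild. A sketch assuming a Linux host with hdc in PATH (deploy_fio.sh is an illustrative name; run it from third_party/fio-3.37/):

    #!/bin/bash
    # deploy_fio.sh: push a freshly built fio to the device and verify it (illustrative)
    set -e
    hdc shell "mount -o remount,rw /"
    hdc file send install/bin/fio /system/bin
    hdc shell "chmod 755 /system/bin/fio"   # execute permission is enough; 777 as above also works
    hdc shell "fio -v"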

     

4. Testing RK3568 I/O Performance

Before running the I/O tests, it is worth going over the commonly used fio parameters:

  • filename (e.g. filename=/data/fio_read_test): name of the test file; the data partition is usually used
  • direct (e.g. direct=1): 1 bypasses the I/O cache, 0 uses it; usually set to 1 so the results are closer to real device performance
  • rw (e.g. rw=read / write / randread / randwrite / randrw): I/O pattern, i.e. sequential read, sequential write, random read, random write, or mixed random read/write
  • bs (e.g. bs=4k): block size of a single I/O, typically 2k, 4k, 8k, and so on
  • bsrange (e.g. bsrange=512-2048): range of block sizes for a single I/O
  • size (e.g. size=5G): total amount of data for the test (5 GB here), transferred in bs-sized I/Os
  • numjobs (e.g. numjobs=30): number of test threads, 30 here
  • runtime (e.g. runtime=120): test duration of 120 seconds; if unset, the test runs until the full size has been written
  • ioengine (e.g. ioengine=psync): I/O engine, such as libaio, sync, or psync
  • rwmixwrite (e.g. rwmixwrite=30): in mixed read/write mode, writes account for 30% of the I/O
  • group_reporting: report results for the whole group rather than per job
  • output (e.g. output=/data/fio_test_result): write the test results to the file /data/fio_test_result
  • output-format (e.g. output-format=json): format of fio's output
  • lockmem (e.g. lockmem=1g): use only 1 GB of memory for the test
  • iodepth (e.g. iodepth=1): number of I/O units kept in flight per file (an iodepth greater than 1 is meaningless for synchronous I/O)
  • thread: create workers with pthread_create instead of forking processes, which saves some system overhead
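
The same options can also be collected into a fio job file instead of a long command line, which makes tests easier to reuse. A minimal sketch equivalent to the mixed random read/write example in section 4.1 below (the file name /data/randrw.fio and the section name randrw_50 are arbitrary):

    [global]
    ioengine=psync
    direct=1
    iodepth=1
    thread
    bs=1M
    size=256M
    numjobs=5
    group_reporting

    [randrw_50]
    filename=/data/fio_test_randrw
    rw=randrw
    rwmixread=50

Run it on the device with fio /data/randrw.fio.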

4.1 Test Examples

  • Sequential write
    $ fio -filename=/data/fio_test_ordwrite -direct=1 -iodepth 1 -thread -rw=write -ioengine=psync -bs=1M -size=1G -numjobs=5 -group_reporting -name=mytest
    
    mytest: (g=0): rw=write, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
    ...
    fio-3.37
    Starting 5 threads
    mytest: Laying out IO file (1 file / 1024MiB)
    Jobs: 5 (f=5): [W(5)][99.0%][w=73.9MiB/s][w=73 IOPS][eta 00m:01s]
    mytest: (groupid=0, jobs=5): err= 0: pid=25849: Sat Aug  5 17:21:06 2017
      write: IOPS=51, BW=51.3MiB/s (53.8MB/s)(5120MiB/99751msec); 0 zone resets
        clat (msec): min=8, max=1339, avg=97.14, stdev=136.03
         lat (msec): min=8, max=1339, avg=97.33, stdev=136.02
        clat percentiles (msec):
         |  1.00th=[   10],  5.00th=[   46], 10.00th=[   46], 20.00th=[   47],
         | 30.00th=[   48], 40.00th=[   50], 50.00th=[   52], 60.00th=[   54],
         | 70.00th=[   58], 80.00th=[   93], 90.00th=[  211], 95.00th=[  255],
         | 99.00th=[  869], 99.50th=[  953], 99.90th=[ 1099], 99.95th=[ 1217],
         | 99.99th=[ 1334]
       bw (  KiB/s): min=10215, max=114688, per=100.00%, avg=57312.65, stdev=6463.58, samples=913
       iops        : min=    5, max=  112, avg=54.24, stdev= 6.38, samples=913
      lat (msec)   : 10=1.76%, 20=0.78%, 50=39.10%, 100=38.81%, 250=14.20%
      lat (msec)   : 500=2.77%, 750=1.02%, 1000=1.21%, 2000=0.35%
      cpu          : usr=0.28%, sys=1.26%, ctx=10954, majf=0, minf=0
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,5120,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    Run status group 0 (all jobs):
      WRITE: bw=51.3MiB/s (53.8MB/s), 51.3MiB/s-51.3MiB/s (53.8MB/s-53.8MB/s), io=5120MiB (5369MB), run=99751-99751msec
    
    Disk stats (read/write):
      mmcblk0: ios=0/11171, sectors=0/10477592, merge=0/621, ticks=0/189218, in_queue=190316, util=99.87%

     

  • Sequential read
    $ fio -filename=/data/fio_test_ordread -direct=1 -iodepth 1 -thread -rw=read -ioengine=psync -bs=1M -size=1G -numjobs=5 -group_reporting -name=mytest
    
    mytest: (g=0): rw=read, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
    ...
    fio-3.37
    Starting 5 threads
    mytest: Laying out IO file (1 file / 1024MiB)
    Jobs: 5 (f=5): [R(5)][100.0%][r=166MiB/s][r=165 IOPS][eta 00m:00s]
    mytest: (groupid=0, jobs=5): err= 0: pid=30504: Sat Aug  5 17:23:04 2017
      read: IOPS=164, BW=164MiB/s (172MB/s)(5120MiB/31144msec)
        clat (usec): min=10759, max=47676, avg=30373.46, stdev=4712.25
         lat (usec): min=10762, max=47679, avg=30376.29, stdev=4712.22
        clat percentiles (usec):
         |  1.00th=[23725],  5.00th=[23987], 10.00th=[23987], 20.00th=[24249],
         | 30.00th=[27919], 40.00th=[30016], 50.00th=[30278], 60.00th=[30540],
         | 70.00th=[31851], 80.00th=[35914], 90.00th=[36439], 95.00th=[36963],
         | 99.00th=[39584], 99.50th=[40109], 99.90th=[43254], 99.95th=[46400],
         | 99.99th=[47449]
       bw (  KiB/s): min=161466, max=174080, per=100.00%, avg=168434.35, stdev=1036.44, samples=310
       iops        : min=  153, max=  170, avg=163.58, stdev= 1.12, samples=310
      lat (msec)   : 20=0.06%, 50=99.94%
      cpu          : usr=0.12%, sys=0.95%, ctx=5556, majf=0, minf=1280
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=5120,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    Run status group 0 (all jobs):
       READ: bw=164MiB/s (172MB/s), 164MiB/s-164MiB/s (172MB/s-172MB/s), io=5120MiB (5369MB), run=31144-31144msec
    
    Disk stats (read/write):
      mmcblk0: ios=10252/264, sectors=10454184/4296, merge=0/197, ticks=262110/10413, in_queue=273505, util=99.89%
  • Random write
    $ fio -filename=/data/fio_test_randw -direct=1 -iodepth 1 -thread -rw=randwrite -ioengine=psync -bs=1M -size=256M -numjobs=5 -group_reporting -name=mytest
    
    mytest: (g=0): rw=randwrite, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
    ...
    fio-3.37
    Starting 5 threads
    Jobs: 5 (f=5): [w(5)][96.9%][w=49.0MiB/s][w=49 IOPS][eta 00m:01s]
    mytest: (groupid=0, jobs=5): err= 0: pid=9827: Sat Aug  5 17:30:18 2017
      write: IOPS=40, BW=40.9MiB/s (42.9MB/s)(1280MiB/31269msec); 0 zone resets
        clat (msec): min=8, max=1223, avg=121.70, stdev=176.89
         lat (msec): min=8, max=1223, avg=121.89, stdev=176.89
        clat percentiles (msec):
         |  1.00th=[   10],  5.00th=[   45], 10.00th=[   46], 20.00th=[   47],
         | 30.00th=[   48], 40.00th=[   51], 50.00th=[   52], 60.00th=[   56],
         | 70.00th=[   69], 80.00th=[  180], 90.00th=[  234], 95.00th=[  447],
         | 99.00th=[ 1003], 99.50th=[ 1045], 99.90th=[ 1200], 99.95th=[ 1217],
         | 99.99th=[ 1217]
       bw (  KiB/s): min=10215, max=102400, per=100.00%, avg=47362.62, stdev=5641.43, samples=272
       iops        : min=    5, max=  100, avg=44.30, stdev= 5.53, samples=272
      lat (msec)   : 10=2.27%, 20=0.86%, 50=36.88%, 100=34.14%, 250=16.64%
      lat (msec)   : 500=4.53%, 750=1.88%, 1000=1.80%, 2000=1.02%
      cpu          : usr=0.22%, sys=0.76%, ctx=2911, majf=0, minf=0
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,1280,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    Run status group 0 (all jobs):
      WRITE: bw=40.9MiB/s (42.9MB/s), 40.9MiB/s-40.9MiB/s (42.9MB/s-42.9MB/s), io=1280MiB (1342MB), run=31269-31269msec
    
    Disk stats (read/write):
      mmcblk0: ios=0/2779, sectors=0/2590656, merge=0/188, ticks=0/63665, in_queue=64209, util=99.33%
  • Random read
    $ fio -filename=/data/fio_test_randw -direct=1 -iodepth 1 -thread -rw=randread -ioengine=psync -bs=1M -size=256M -numjobs=5 -group_reporting -name=mytest
    
    mytest: (g=0): rw=randread, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
    ...
    fio-3.37
    Starting 5 threads
    Jobs: 5 (f=5): [r(5)][87.5%][r=166MiB/s][r=166 IOPS][eta 00m:01s]
    mytest: (groupid=0, jobs=5): err= 0: pid=11352: Sat Aug  5 17:30:55 2017
      read: IOPS=164, BW=165MiB/s (173MB/s)(1280MiB/7759msec)
        clat (usec): min=6379, max=91263, avg=30039.40, stdev=5829.52
         lat (usec): min=6382, max=91266, avg=30042.45, stdev=5829.48
        clat percentiles (usec):
         |  1.00th=[17695],  5.00th=[18220], 10.00th=[23987], 20.00th=[24511],
         | 30.00th=[30016], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278],
         | 70.00th=[30540], 80.00th=[35914], 90.00th=[36439], 95.00th=[37487],
         | 99.00th=[42730], 99.50th=[42730], 99.90th=[54789], 99.95th=[91751],
         | 99.99th=[91751]
       bw (  KiB/s): min=151370, max=184320, per=100.00%, avg=169014.07, stdev=1977.80, samples=75
       iops        : min=  145, max=  180, avg=162.80, stdev= 2.01, samples=75
      lat (msec)   : 10=0.55%, 20=6.25%, 50=93.05%, 100=0.16%
      cpu          : usr=0.15%, sys=1.01%, ctx=4052, majf=0, minf=1280
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    Run status group 0 (all jobs):
       READ: bw=165MiB/s (173MB/s), 165MiB/s-165MiB/s (173MB/s-173MB/s), io=1280MiB (1342MB), run=7759-7759msec
    
    Disk stats (read/write):
      mmcblk0: ios=2532/94, sectors=2578464/1488, merge=0/65, ticks=71827/3738, in_queue=75836, util=98.92%
    

     

  • Mixed random read/write (50% read)
    $ fio -filename=/data/fio_test_randrw -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=50 -ioengine=psync -bs=1M -size=256M -numjobs=5 -group_reporting -name=mytest
    
    mytest: (g=0): rw=randrw, bs=(R) 1024KiB-1024KiB, (W) 1024KiB-1024KiB, (T) 1024KiB-1024KiB, ioengine=psync, iodepth=1
    ...
    fio-3.37
    Starting 5 threads
    mytest: Laying out IO file (1 file / 256MiB)
    Jobs: 5 (f=5): [m(5)][90.5%][r=25.0MiB/s,w=14.0MiB/s][r=25,w=14 IOPS][eta 00m:02s]
    mytest: (groupid=0, jobs=5): err= 0: pid=12467: Sat Aug  5 17:31:50 2017
      read: IOPS=32, BW=32.9MiB/s (34.5MB/s)(638MiB/19390msec)
        clat (msec): min=6, max=295, avg=21.22, stdev=27.10
         lat (msec): min=6, max=295, avg=21.22, stdev=27.10
        clat percentiles (msec):
         |  1.00th=[    8],  5.00th=[    8], 10.00th=[    8], 20.00th=[    9],
         | 30.00th=[   10], 40.00th=[   14], 50.00th=[   17], 60.00th=[   17],
         | 70.00th=[   19], 80.00th=[   23], 90.00th=[   48], 95.00th=[   54],
         | 99.00th=[  207], 99.50th=[  213], 99.90th=[  296], 99.95th=[  296],
         | 99.99th=[  296]
       bw (  KiB/s): min=10225, max=96215, per=100.00%, avg=42005.03, stdev=5053.73, samples=150
       iops        : min=    7, max=   93, avg=40.69, stdev= 4.96, samples=150
      write: IOPS=33, BW=33.1MiB/s (34.7MB/s)(642MiB/19390msec); 0 zone resets
        clat (msec): min=9, max=993, avg=127.99, stdev=152.36
         lat (msec): min=9, max=994, avg=128.16, stdev=152.36
        clat percentiles (msec):
         |  1.00th=[   10],  5.00th=[   43], 10.00th=[   52], 20.00th=[   58],
         | 30.00th=[   63], 40.00th=[   68], 50.00th=[   73], 60.00th=[   79],
         | 70.00th=[   89], 80.00th=[  178], 90.00th=[  262], 95.00th=[  405],
         | 99.00th=[  844], 99.50th=[  944], 99.90th=[  995], 99.95th=[  995],
         | 99.99th=[  995]
       bw (  KiB/s): min=10230, max=71680, per=100.00%, avg=37243.13, stdev=4108.85, samples=169
       iops        : min=    8, max=   70, avg=36.08, stdev= 4.04, samples=169
      lat (msec)   : 10=16.02%, 20=22.03%, 50=13.28%, 100=34.14%, 250=8.91%
      lat (msec)   : 500=3.67%, 750=0.86%, 1000=1.09%
      cpu          : usr=0.20%, sys=0.54%, ctx=2019, majf=0, minf=0
      IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=638,642,0,0 short=0,0,0,0 dropped=0,0,0,0
         latency   : target=0, window=0, percentile=100.00%, depth=1
    
    Run status group 0 (all jobs):
       READ: bw=32.9MiB/s (34.5MB/s), 32.9MiB/s-32.9MiB/s (34.5MB/s-34.5MB/s), io=638MiB (669MB), run=19390-19390msec
      WRITE: bw=33.1MiB/s (34.7MB/s), 33.1MiB/s-33.1MiB/s (34.7MB/s-34.7MB/s), io=642MiB (673MB), run=19390-19390msec
    
    Disk stats (read/write):
      mmcblk0: ios=1278/1458, sectors=1306632/1305464, merge=0/127, ticks=24824/39502, in_queue=64971, util=99.33%
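
For automated runs it is often easier to let fio write machine-readable output (the output and output-format parameters from the table above) and pull the file back to the host. A sketch; the JSON field names follow recent fio versions, and jq on the host is an assumption, not something set up in this article:

    # On the device: same mixed random read/write test, but saving JSON output
    $ fio -filename=/data/fio_test_randrw -direct=1 -iodepth 1 -thread -rw=randrw -rwmixread=50 -ioengine=psync -bs=1M -size=256M -numjobs=5 -group_reporting -name=mytest --output-format=json --output=/data/fio_result.json

    # On the host: fetch the result and extract read/write bandwidth (KiB/s)
    > hdc file recv /data/fio_result.json .
    > jq '.jobs[0].read.bw, .jobs[0].write.bw' fio_result.json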

