Message-ID: <e587c572-bcd7-87c4-5eea-30ccdc7455db@acm.org>
Date: Mon, 2 Aug 2021 09:17:31 -0700
From: Bart Van Assche <bvanassche@....org>
To: "yukuai (C)" <yukuai3@...wei.com>, axboe@...nel.dk,
ming.lei@...hat.com
Cc: linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
yi.zhang@...wei.com
Subject: Re: [PATCH] blk-mq: allow hardware queue to get more tag while
sharing a tag set
On 8/2/21 6:34 AM, yukuai (C) wrote:
> I ran a test on both null_blk and nvme; the results show no performance
> degradation:
>
> test platform: x86
> test cpu: 2 nodes, total 72
> test scheduler: none
> test device: null_blk / nvme
>
> test cmd: fio -filename=/dev/xxx -name=test -ioengine=libaio -direct=1
> -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G
> -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting
> -runtime=120
>
> test results: iops
> 1) null_blk before this patch: 280k
> 2) null_blk after this patch: 282k
> 3) nvme before this patch: 378k
> 4) nvme after this patch: 384k
Please use io_uring for performance tests.
The null_blk numbers seem way too low to me. If I run a null_blk
performance test inside a VM with 6 CPU cores (Xeon W-2135 CPU), I see
about 6 million IOPS for synchronous I/O and about 4.4 million IOPS when
using libaio. The options I used that are not in the above command line
are: --thread --gtod_reduce=1 --ioscheduler=none.
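For reference, a possible adaptation of the above command (untested; it
simply switches the ioengine to io_uring and appends the options listed
here):

fio -filename=/dev/xxx -name=test -ioengine=io_uring -direct=1 \
    -numjobs=72 -iodepth=16 -bs=4k -rw=write -offset_increment=1G \
    -cpus_allowed=0:71 -cpus_allowed_policy=split -group_reporting \
    -runtime=120 --thread --gtod_reduce=1 --ioscheduler=none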
Thanks,
Bart.