Message-ID: <74f6fb34-c4c2-6a7e-3614-78c34246c6bd@gmail.com>
Date: Mon, 23 Nov 2020 23:42:35 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: David Howells <dhowells@...hat.com>,
Christoph Hellwig <hch@...radead.org>
Cc: Matthew Wilcox <willy@...radead.org>, Jens Axboe <axboe@...nel.dk>,
Alexander Viro <viro@...iv.linux.org.uk>,
Linus Torvalds <torvalds@...ux-foundation.org>,
linux-fsdevel@...r.kernel.org, linux-block@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 01/29] iov_iter: Switch to using a table of operations
On 23/11/2020 10:31, David Howells wrote:
> Christoph Hellwig <hch@...radead.org> wrote:
>
>> Please run performance tests. I think the indirect calls could totally
>> wreck things like high performance direct I/O, especially using io_uring
>> on x86.
>
> Here's an initial test using fio and null_blk. I left null_blk in its default
> configuration and used the following command line:
I'd prefer something along the lines of no_sched=1 submit_queues=$(nproc) to reduce overhead.
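E.g. (a minimal sketch; no_sched and submit_queues are standard null_blk
module parameters, adjust as needed):

    modprobe -r null_blk
    modprobe null_blk no_sched=1 submit_queues=$(nproc)

no_sched=1 bypasses the I/O scheduler and submit_queues matches the number
of hardware queues to CPUs, so less block-layer overhead ends up in the
measurement.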
>
> fio --ioengine=libaio --direct=1 --gtod_reduce=1 --name=readtest --filename=/dev/nullb0 --bs=4k --iodepth=128 --time_based --runtime=120 --readwrite=randread --iodepth_low=96 --iodepth_batch=16 --numjobs=4
fio is relatively heavy; I'd suggest trying fio/t/io_uring with null_blk instead.
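E.g. something like this (a sketch, not a tuned config; -d is the io depth,
-s/-c are the submit/complete batch sizes, -p0 disables polled I/O):

    t/io_uring -d128 -s32 -c32 -p0 /dev/nullb0

It's a thin io_uring submission loop, so the per-call cost of the iov_iter
changes should be much more visible than under fio + libaio.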
>
> I borrowed some of the parameters from an email I found online, so I'm not
> sure if they're that useful.
>
> I tried three configurations: no patches, just the first patch (which adds
> the jump table without getting rid of the conditional branches), and all of
> them.
>
> I'm not sure which stats are of particular interest here, so I took the two
> summary stats from the output of fio and also added together the "issued rwts:
> total=a,b,c,d" from each test thread (only the first of which is non-zero).
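FWIW, assuming the default fio output format, those per-thread totals can be
summed with something like:

    grep 'issued rwts' fio.log | sed 's/.*total=//;s/,.*//' | paste -sd+ - | bc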
>
> The CPU is an Intel(R) Core(TM) i3-4170 CPU @ 3.70GHz (2 hyperthreaded
> cores, i.e. 4 logical CPUs) and the machine has 16G of RAM. No
> virtualisation is involved.
>
> Unpatched:
>
> READ: bw=4109MiB/s (4308MB/s), 1025MiB/s-1029MiB/s (1074MB/s-1079MB/s), io=482GiB (517GB), run=120001-120001msec
> READ: bw=4097MiB/s (4296MB/s), 1020MiB/s-1029MiB/s (1070MB/s-1079MB/s), io=480GiB (516GB), run=120001-120001msec
> READ: bw=4113MiB/s (4312MB/s), 1025MiB/s-1031MiB/s (1075MB/s-1082MB/s), io=482GiB (517GB), run=120001-120001msec
> READ: bw=4125MiB/s (4325MB/s), 1028MiB/s-1033MiB/s (1078MB/s-1084MB/s), io=483GiB (519GB), run=120001-120001msec
>
> nullb0: ios=126017326/0, merge=53/0, ticks=3538817/0, in_queue=3538817, util=100.00%
> nullb0: ios=125655193/0, merge=55/0, ticks=3548157/0, in_queue=3548157, util=100.00%
> nullb0: ios=126133014/0, merge=58/0, ticks=3545621/0, in_queue=3545621, util=100.00%
> nullb0: ios=126512562/0, merge=57/0, ticks=3531600/0, in_queue=3531600, util=100.00%
>
> sum issued rwts = 126224632
> sum issued rwts = 125861368
> sum issued rwts = 126340344
> sum issued rwts = 126718648
>
> Just the first patch:
>
> READ: bw=4106MiB/s (4306MB/s), 1023MiB/s-1030MiB/s (1073MB/s-1080MB/s), io=481GiB (517GB), run=120001-120001msec
> READ: bw=4126MiB/s (4327MB/s), 1029MiB/s-1034MiB/s (1079MB/s-1084MB/s), io=484GiB (519GB), run=120001-120001msec
> READ: bw=4109MiB/s (4308MB/s), 1025MiB/s-1029MiB/s (1075MB/s-1079MB/s), io=481GiB (517GB), run=120001-120001msec
> READ: bw=4097MiB/s (4296MB/s), 1023MiB/s-1025MiB/s (1073MB/s-1074MB/s), io=480GiB (516GB), run=120001-120001msec
>
> nullb0: ios=125939152/0, merge=62/0, ticks=3534917/0, in_queue=3534917, util=100.00%
> nullb0: ios=126554181/0, merge=61/0, ticks=3532067/0, in_queue=3532067, util=100.00%
> nullb0: ios=126012346/0, merge=54/0, ticks=3530504/0, in_queue=3530504, util=100.00%
> nullb0: ios=125653775/0, merge=54/0, ticks=3537438/0, in_queue=3537438, util=100.00%
>
> sum issued rwts = 126144952
> sum issued rwts = 126765368
> sum issued rwts = 126215928
> sum issued rwts = 125864120
>
> All patches:
>
> nullb0: ios=10477062/0, merge=2/0, ticks=284992/0, in_queue=284992, util=95.87%
> nullb0: ios=10405246/0, merge=2/0, ticks=291886/0, in_queue=291886, util=99.82%
> nullb0: ios=10425583/0, merge=1/0, ticks=291699/0, in_queue=291699, util=99.22%
> nullb0: ios=10438845/0, merge=3/0, ticks=292445/0, in_queue=292445, util=99.31%
>
> READ: bw=4118MiB/s (4318MB/s), 1028MiB/s-1032MiB/s (1078MB/s-1082MB/s), io=483GiB (518GB), run=120001-120001msec
> READ: bw=4109MiB/s (4308MB/s), 1024MiB/s-1030MiB/s (1073MB/s-1080MB/s), io=481GiB (517GB), run=120001-120001msec
> READ: bw=4108MiB/s (4308MB/s), 1026MiB/s-1029MiB/s (1076MB/s-1079MB/s), io=481GiB (517GB), run=120001-120001msec
> READ: bw=4112MiB/s (4312MB/s), 1025MiB/s-1031MiB/s (1075MB/s-1081MB/s), io=482GiB (517GB), run=120001-120001msec
>
> nullb0: ios=126282410/0, merge=58/0, ticks=3557384/0, in_queue=3557384, util=100.00%
> nullb0: ios=126004837/0, merge=67/0, ticks=3565235/0, in_queue=3565235, util=100.00%
> nullb0: ios=125988876/0, merge=59/0, ticks=3563026/0, in_queue=3563026, util=100.00%
> nullb0: ios=126118279/0, merge=57/0, ticks=3566122/0, in_queue=3566122, util=100.00%
>
> sum issued rwts = 126494904
> sum issued rwts = 126214200
> sum issued rwts = 126198200
> sum issued rwts = 126328312
>
>
> David
>
--
Pavel Begunkov