Message-ID: <da0c7aea-d917-4f3a-5136-89c30d12ba1f@grimberg.me>
Date: Fri, 13 Nov 2020 12:34:55 -0800
From: Sagi Grimberg <sagi@...mberg.me>
To: Jens Axboe <axboe@...nel.dk>, Rachit Agarwal <rach4x0r@...il.com>,
Christoph Hellwig <hch@....de>
Cc: linux-block@...r.kernel.org, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org, Keith Busch <kbusch@...nel.org>,
Ming Lei <ming.lei@...hat.com>,
Jaehyun Hwang <jaehyun.hwang@...nell.edu>,
Qizhe Cai <qc228@...nell.edu>,
Midhul Vuppalapati <mvv25@...nell.edu>,
Rachit Agarwal <ragarwal@...cornell.edu>,
Sagi Grimberg <sagi@...htbitslabs.com>,
Rachit Agarwal <ragarwal@...nell.edu>
Subject: Re: [PATCH] iosched: Add i10 I/O Scheduler
> I haven't taken a close look at the code yet so far, but one quick note
> that patches like this should be against the branches for 5.11. In fact,
> this one doesn't even compile against current -git, as
> blk_mq_bio_list_merge is now called blk_bio_list_merge.
Ugh, I guess Jaehyun had this patch bottled up and didn't rebase it
before submitting... Sorry about that.
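For reference, the fixup when rebasing should be mechanical, since the
helper kept its arguments and only changed name. A minimal sketch (the
i10 structure and function names below are placeholders, not the actual
patch, and the include for the new declaration may differ):

#include <linux/bio.h>
#include <linux/blkdev.h>
#include "blk.h"	/* blk_bio_list_merge() now lives in the block layer's private headers */

/* Hypothetical per-queue scheduler state, for illustration only. */
struct i10_queue {
	struct list_head rq_list;	/* requests held back for batching */
};

static bool i10_attempt_merge(struct request_queue *q, struct i10_queue *i10q,
			      struct bio *bio, unsigned int nr_segs)
{
	/* Formerly blk_mq_bio_list_merge(); same arguments, new name. */
	return blk_bio_list_merge(q, &i10q->rq_list, bio, nr_segs);
}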
> In any case, I did run this through some quick peak testing as I was
> curious, and I'm seeing about 20% drop in peak IOPS over none running
> this. Perf diff:
>
> 10.71% -2.44% [kernel.vmlinux] [k] read_tsc
> 2.33% -1.99% [kernel.vmlinux] [k] _raw_spin_lock
Did you run this with nvme or null_blk? I guess neither would benefit
from this: if the underlying device doesn't gain from batching (at
least enough to offset the extra cost of accounting for it), using this
scheduler will be counterproductive.
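To make the tradeoff concrete, here is a rough userspace-style sketch of
the batching decision (the thresholds are illustrative placeholders, not
the patch's defaults):

#include <stdbool.h>
#include <stddef.h>

struct batch_state {
	unsigned int nr_reqs;	/* requests currently held back */
	size_t bytes;		/* payload bytes held back */
	bool timer_fired;	/* doorbell timer expired */
};

static bool should_dispatch(const struct batch_state *b)
{
	/*
	 * Each doorbell/network send is amortized over the whole batch.
	 * If the underlying device gains little from that amortization
	 * (local NVMe, null_blk), holding requests back plus the extra
	 * accounting only adds latency and drops peak IOPS.
	 */
	return b->nr_reqs >= 16 || b->bytes >= 64 * 1024 || b->timer_fired;
}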
> Also:
>
>> [5] https://github.com/i10-kernel/upstream-linux/blob/master/dss-evaluation.pdf
>
> Was curious and wanted to look it up, but it doesn't exist.
I think this is the right one:
https://github.com/i10-kernel/upstream-linux/blob/master/i10-evaluation.pdf
We had some back and forth around the naming, hence the link probably
wasn't updated.