lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <26a1cd20-6b25-eaa6-7ab6-ba7f5afaf6dd@kernel.dk>
Date:   Fri, 13 Nov 2020 14:44:17 -0700
From:   Jens Axboe <axboe@...nel.dk>
To:     Sagi Grimberg <sagi@...mberg.me>,
        Rachit Agarwal <rach4x0r@...il.com>,
        Christoph Hellwig <hch@....de>
Cc:     linux-block@...r.kernel.org, linux-nvme@...ts.infradead.org,
        linux-kernel@...r.kernel.org, Keith Busch <kbusch@...nel.org>,
        Ming Lei <ming.lei@...hat.com>,
        Jaehyun Hwang <jaehyun.hwang@...nell.edu>,
        Qizhe Cai <qc228@...nell.edu>,
        Midhul Vuppalapati <mvv25@...nell.edu>,
        Rachit Agarwal <ragarwal@...cornell.edu>,
        Sagi Grimberg <sagi@...htbitslabs.com>,
        Rachit Agarwal <ragarwal@...nell.edu>
Subject: Re: [PATCH] iosched: Add i10 I/O Scheduler

On 11/13/20 2:36 PM, Sagi Grimberg wrote:
> 
>>> But if you think this has a better home, I'm assuming that the guys
>>> will be open to that.
>>
>> Also see the reply from Ming. It's a balancing act - don't want to add
>> extra overhead to the core, but also don't want to carry an extra
>> scheduler if the main change is really just variable dispatch batching.
>> And since we already have a notion of that, seems worthwhile to explore
>> that venue.
> 
> I agree,
> 
> The main difference is that this balancing is not driven from device
> resource pressure, but rather from an assumption of device specific
> optimization (and also with a specific optimization target), hence a
> scheduler a user would need to opt-in seemed like a good compromise.
> 
> But maybe Ming has some good ideas on a different way to add it..

So here's another case - virtualized nvme. The commit overhead is
suitably large there that performance suffers quite a bit, similarly to
your remote storage case. If we had suitable logic in the core, then we
could easily propagate this knowledge when setting up the queue. Then it
could happen automatically, without needing a configuration to switch to
a specific scheduler.

-- 
Jens Axboe

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ