Message-ID: <20171113141849.GH983427@devbig577.frc2.facebook.com>
Date:   Mon, 13 Nov 2017 06:18:49 -0800
From:   Tejun Heo <tj@...nel.org>
To:     Shaohua Li <shli@...nel.org>
Cc:     Jens Axboe <axboe@...nel.dk>, linux-kernel@...r.kernel.org,
        kernel-team@...com
Subject: Re: [PATCH 1/2] blk-throtl: make latency= absolute

Hello, Shaohua.  Just a bit of an addition.

On Mon, Nov 13, 2017 at 03:27:10AM -0800, Tejun Heo wrote:
> What I'm trying to say is that the latency is defined as "from bio
> issue to completion", not "in-flight time on device".  Whether the
> on-device latency is 50us or 500us, the host side queueing latency
> can be orders of magnitude higher.
> 
> For things like starvation protection for managerial workloads which
> work fine on rotating disks, the only thing we need to protect
> against is the host side queue overflowing so badly that such
> workloads get starved.  IOW, we're talking about a latency target in
> the tens or low hundreds of millisecs.  Whether the on-device time is
> 50us or 500us doesn't matter that much.

So, the absolute latency target can express the requirements of the
workload in question - it's saying "if the IO latency stays within
this boundary, regardless of the underlying device, this workload is
gonna be happy enough".  There are workloads which are like this -
e.g. a workload that has some IOs to do and a deadline to meet (like
a heartbeat period).  For those workloads, it doesn't matter what the
underlying device is.  It can be a rotating disk, or a slow or
lightning-fast SSD.  As long as the absolute target latency is met,
the workload will be happy.
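
Just to make that concrete, and purely as an illustrative sketch (the
8:16 device numbers and the 75000 value are made up, and I'm assuming
the io.low latency= key is taken in microseconds), an absolute target
would be configured along the lines of:

  echo "8:16 latency=75000" > io.low    # made-up maj:min and value

i.e. "keep issue-to-completion latency for this cgroup within 75ms",
without any reference to what the device's own baseline happens to be.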

The % notation can express how much proportional hit the workload is
willing to take to share the underlying device with others - "I'm
willing to take 20% extra hit in latency so that I can be a nice
neighbor", which also makes sense to me.

The baseline + slack scheme (the current one) is a mix of the two.
IOW, the configuration depends on both the workload requirements and
the performance characteristics of the underlying device - you can't
use a single value across different workloads or devices.  We can
absolutely keep supporting this, but I think it fits worse than the
previous two and I'm having a hard time coming up with a reason why
we'd want it.
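
To put made-up numbers on it: under the current scheme, latency=40
means a 100us target on a device whose measured baseline is 60us but
a 540us target on one whose baseline is 500us, so the same number
means different things on different devices.  With the absolute
interpretation, latency=100 means 100us everywhere.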

Thanks.

-- 
tejun
