Message-ID: <Zj3qJP_J-D3DEP6W@ryzen.lan>
Date: Fri, 10 May 2024 11:34:28 +0200
From: Niklas Cassel <cassel@...nel.org>
To: John Meneghini <jmeneghi@...hat.com>
Cc: tj@...nel.org, josef@...icpanda.com, axboe@...nel.dk, kbusch@...nel.org,
	hch@....de, sagi@...mberg.me, emilne@...hat.com, hare@...nel.org,
	linux-block@...r.kernel.org, cgroups@...r.kernel.org,
	linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
	jrani@...estorage.com, randyj@...estorage.com, aviv.coro@....com
Subject: Re: [PATCH v3 0/3] block,nvme: latency-based I/O scheduler

On Thu, May 09, 2024 at 04:43:21PM -0400, John Meneghini wrote:
> I'm re-issuing Hannes's latency patches in preparation for LSFMM

Hello John,

Just a small note.

Please don't reply to the previous version of the series (v2) when
sending out a v3.

It creates "an unmanageable forest of references in email clients".

See:
https://www.kernel.org/doc/html/latest/process/submitting-patches.html#explicit-in-reply-to-headers

Instead, just add the URL of the v2 posting on lore.kernel.org.
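
For example, the cover letter of the v3 can simply carry a line like
this (the message-id here is a placeholder, not the actual v2 thread):

  v2: https://lore.kernel.org/linux-nvme/<v2-message-id>/

That way each version gets its own thread, and reviewers can still
find the earlier discussion.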

See you at LSFMM!


Kind regards,
Niklas

> 
> Changes since V2:
> 
> I've done quite a bit of work cleaning up these patches. There were a
> number of checkpatch.pl problems as well as some compile-time errors
> when CONFIG_BLK_NODE_LATENCY was turned off. After the cleanup I
> rebased these patches onto Ewan's "nvme: queue-depth multipath iopolicy"
> patches. This allowed me to test both iopolicy changes together.
> 
> All of my test results, together with the scripts I used to generate these
> graphs, are available at:
> 
>   https://github.com/johnmeneghini/iopolicy
> 
> Please use the scripts in this repository to do your own testing.
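> 
> For instance (URL taken from above):
> 
>   git clone https://github.com/johnmeneghini/iopolicy.git
>   cd iopolicy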
> 
> Changes since V1:
> 
> Hi all,
> 
> There have been several attempts to implement a latency-based I/O
> scheduler for native nvme multipath, all of which had their issues.
> 
> So it's time to start afresh, this time using the QoS framework
> already present in the block layer.
> It consists of two parts:
> - a new 'blk-nlatency' QoS module, which is just a simple per-node
>   latency tracker
> - a 'latency' nvme I/O policy
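> 
> As a sketch of how the policy is meant to be selected at runtime (the
> subsystem name below is just an example, adjust it to your setup):
> 
>   # switch the multipath path selector to the new latency policy
>   echo latency > /sys/class/nvme-subsystem/nvme-subsys0/iopolicy
>   cat /sys/class/nvme-subsystem/nvme-subsys0/iopolicy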
> 
> Using the 'tiobench' fio script with a 512-byte blocksize, I'm getting
> the following latencies (in usecs) as a baseline:
> - seq write: avg 186 stddev 331
> - rand write: avg 4598 stddev 7903
> - seq read: avg 149 stddev 65
> - rand read: avg 150 stddev 68
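> 
> A roughly equivalent fio invocation for one of those patterns, assuming
> a hypothetical multipath node /dev/nvme0n1 (swap --rw for the other
> three cases):
> 
>   fio --name=randread --rw=randread --bs=512 --direct=1 \
>       --ioengine=libaio --numjobs=48 --runtime=60 --time_based \
>       --filename=/dev/nvme0n1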
> 
> Enabling the 'latency' iopolicy:
> - seq write: avg 178 stddev 113
> - rand write: avg 3427 stddev 6703
> - seq read: avg 140 stddev 59
> - rand read: avg 141 stddev 58
> 
> Setting the 'decay' parameter to 10:
> - seq write: avg 182 stddev 65
> - rand write: avg 2619 stddev 5894
> - seq read: avg 142 stddev 57
> - rand read: avg 140 stddev 57  
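> 
> Where exactly 'decay' is exposed is an implementation detail; the path
> below is only an assumption, modeled on how other rq-qos policies
> surface their state through debugfs:
> 
>   # assumed location, for illustration only
>   echo 10 > /sys/kernel/debug/block/nvme0n1/rqos/nlatency/decay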
> 
> That's on a 32G FC testbed running against a brd target, with fio
> running 48 threads. So the promise is met: latency goes down, and we
> can even control the standard deviation via the 'decay' parameter.
> 
> As usual, comments and reviews are welcome.
> 
> Changes to the original version:
> - Split the rqos debugfs entries
> - Modify the commit message to indicate latency
> - Rename to blk-nlatency
> 
> Hannes Reinecke (2):
>   block: track per-node I/O latency
>   nvme: add 'latency' iopolicy
> 
> John Meneghini (1):
>   nvme: multipath: pr_notice when iopolicy changes
> 
>  MAINTAINERS                   |   1 +
>  block/Kconfig                 |   9 +
>  block/Makefile                |   1 +
>  block/blk-mq-debugfs.c        |   2 +
>  block/blk-nlatency.c          | 389 ++++++++++++++++++++++++++++++++++
>  block/blk-rq-qos.h            |   6 +
>  drivers/nvme/host/multipath.c |  73 ++++++-
>  drivers/nvme/host/nvme.h      |   1 +
>  include/linux/blk-mq.h        |  11 +
>  9 files changed, 484 insertions(+), 9 deletions(-)
>  create mode 100644 block/blk-nlatency.c
> 
> -- 
> 2.39.3
> 
> 
