lists.openwall.net - Open Source and information security mailing list archives
Date: Thu, 22 Feb 2024 09:40:07 +0100
From: Paul Menzel <pmenzel@...gen.mpg.de>
To: Kuai Yu <yukuai1@...weicloud.com>,
 Paul E Luse <paul.e.luse@...ux.intel.com>
Cc: song@...nel.org, neilb@...e.com, shli@...com, linux-raid@...r.kernel.org,
 linux-kernel@...r.kernel.org, yukuai3@...wei.com, yi.zhang@...wei.com,
 yangerkun@...wei.com
Subject: Re: [PATCH md-6.9 00/10] md/raid1: refactor read_balance() and some
 minor fix

Dear Kuai, dear Paul,


Thank you for your work. Some nits and a request for benchmarks below.


Am 22.02.24 um 08:57 schrieb Yu Kuai:
> From: Yu Kuai <yukuai3@...wei.com>
> 
> The orignial idea is that Paul want to optimize raid1 read

original

> performance([1]), however, we think that the orignial code for

original

> read_balance() is quite complex, and we don't want to add more
> complexity. Hence we decide to refactor read_balance() first, to make
> code cleaner and easier for follow up.
> 
> Before this patchset, read_balance() has many local variables and many
> braches, it want to consider all the scenarios in one iteration. The

branches

> idea of this patch is to devide them into 4 different steps:

divide

> 1) If resync is in progress, find the first usable disk, patch 5;
> Otherwise:
> 2) Loop through all disks and skipping slow disks and disks with bad
> blocks, choose the best disk, patch 10. If no disk is found:
> 3) Look for disks with bad blocks and choose the one with most number of
> sectors, patch 8. If no disk is found:
> 4) Choose first found slow disk with no bad blocks, or slow disk with
> most number of sectors, patch 7.
> 
> Note that step 3) and step 4) are super code path, and performance
> should not be considered.
> 
> And after this patchset, we'll continue to optimize read_balance for
> step 2), specifically how to choose the best rdev to read.
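For readers following along, the four-step selection described in the cover letter can be sketched as a small standalone program. This is a minimal illustration of the dispatch order only, not the actual kernel code: the `struct rdev` fields and the tie-breaking inside each helper are simplified stand-ins, and only the helper names (`read_first_rdev`, `choose_bb_rdev`, `choose_slow_rdev`) are taken from the patch titles.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified stand-in for the real struct md_rdev. */
struct rdev {
	bool usable;
	bool slow;
	bool has_badblocks;
	int  bb_sectors;	/* readable sectors before the first bad block */
};

/* Step 1: resync in progress -> first usable disk (cf. patch 5/6). */
static int read_first_rdev(struct rdev *rdevs, int n)
{
	for (int i = 0; i < n; i++)
		if (rdevs[i].usable)
			return i;
	return -1;
}

/* Step 2: best disk among fast disks without bad blocks (cf. patch 10).
 * The real code also weighs head distance, pending IO, etc. */
static int choose_best_rdev(struct rdev *rdevs, int n)
{
	for (int i = 0; i < n; i++)
		if (rdevs[i].usable && !rdevs[i].slow && !rdevs[i].has_badblocks)
			return i;
	return -1;
}

/* Step 3: disk with bad blocks offering the most readable sectors (cf. patch 8). */
static int choose_bb_rdev(struct rdev *rdevs, int n)
{
	int best = -1;

	for (int i = 0; i < n; i++)
		if (rdevs[i].usable && rdevs[i].has_badblocks &&
		    (best < 0 || rdevs[i].bb_sectors > rdevs[best].bb_sectors))
			best = i;
	return best;
}

/* Step 4: first slow disk without bad blocks, else the slow disk with the
 * most readable sectors (cf. patch 7). */
static int choose_slow_rdev(struct rdev *rdevs, int n)
{
	int best_bb = -1;

	for (int i = 0; i < n; i++) {
		if (!rdevs[i].usable || !rdevs[i].slow)
			continue;
		if (!rdevs[i].has_badblocks)
			return i;
		if (best_bb < 0 || rdevs[i].bb_sectors > rdevs[best_bb].bb_sectors)
			best_bb = i;
	}
	return best_bb;
}

/* The refactored shape: one step per helper, falling through in order. */
static int read_balance(struct rdev *rdevs, int n, bool resync)
{
	int disk;

	if (resync)					/* step 1 */
		return read_first_rdev(rdevs, n);
	disk = choose_best_rdev(rdevs, n);		/* step 2 */
	if (disk < 0)
		disk = choose_bb_rdev(rdevs, n);	/* step 3 */
	if (disk < 0)
		disk = choose_slow_rdev(rdevs, n);	/* step 4 */
	return disk;
}
```

The point of the structure is that steps 3 and 4 only run when step 2 finds nothing, which is why their performance does not matter.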

Is there a change in performance with the current patch set? Is raid1 
well enough covered by the test suite?


Kind regards,

Paul


> [1] https://lore.kernel.org/all/20240102125115.129261-1-paul.e.luse@linux.intel.com/
> 
> Yu Kuai (10):
>    md: add a new helper rdev_has_badblock()
>    md: record nonrot rdevs while adding/removing rdevs to conf
>    md/raid1: fix choose next idle in read_balance()
>    md/raid1-10: add a helper raid1_check_read_range()
>    md/raid1-10: factor out a new helper raid1_should_read_first()
>    md/raid1: factor out read_first_rdev() from read_balance()
>    md/raid1: factor out choose_slow_rdev() from read_balance()
>    md/raid1: factor out choose_bb_rdev() from read_balance()
>    md/raid1: factor out the code to manage sequential IO
>    md/raid1: factor out helpers to choose the best rdev from
>      read_balance()
> 
>   drivers/md/md.c       |  28 ++-
>   drivers/md/md.h       |  12 ++
>   drivers/md/raid1-10.c |  69 +++++++
>   drivers/md/raid1.c    | 454 ++++++++++++++++++++++++------------------
>   drivers/md/raid10.c   |  66 ++----
>   drivers/md/raid5.c    |  35 ++--
>   6 files changed, 402 insertions(+), 262 deletions(-)
