Message-ID: <CALTww29ajdNAXQwAYG90HC26b_hZQz=s28nCsJazwkQ+YsW53w@mail.gmail.com>
Date: Tue, 27 Feb 2024 10:04:06 +0800
From: Xiao Ni <xni@...hat.com>
To: Yu Kuai <yukuai1@...weicloud.com>
Cc: paul.e.luse@...ux.intel.com, song@...nel.org, neilb@...e.com, shli@...com, 
	linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org, yukuai3@...wei.com, 
	yi.zhang@...wei.com, yangerkun@...wei.com
Subject: Re: [PATCH md-6.9 09/10] md/raid1: factor out the code to manage
 sequential IO

On Thu, Feb 22, 2024 at 4:05 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@...wei.com>
>
> No functional changes for now; this makes read_balance() cleaner and
> prepares to fix problems and refactor the handling of sequential IO.
>
> Co-developed-by: Paul Luse <paul.e.luse@...ux.intel.com>
> Signed-off-by: Paul Luse <paul.e.luse@...ux.intel.com>
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> ---
>  drivers/md/raid1.c | 71 +++++++++++++++++++++++++---------------------
>  1 file changed, 38 insertions(+), 33 deletions(-)
>
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 4694e0e71e36..223ef8d06f67 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -705,6 +705,31 @@ static int choose_slow_rdev(struct r1conf *conf, struct r1bio *r1_bio,
>         return bb_disk;
>  }
>
> +static bool is_sequential(struct r1conf *conf, int disk, struct r1bio *r1_bio)
> +{
> +       /* TODO: address issues with this check and concurrency. */
> +       return conf->mirrors[disk].next_seq_sect == r1_bio->sector ||
> +              conf->mirrors[disk].head_position == r1_bio->sector;
> +}
> +
> +/*
> + * If buffered sequential IO size exceeds optimal iosize, check if there is idle
> + * disk. If yes, choose the idle disk.
> + */
> +static bool should_choose_next(struct r1conf *conf, int disk)
> +{
> +       struct raid1_info *mirror = &conf->mirrors[disk];
> +       int opt_iosize;
> +
> +       if (!test_bit(Nonrot, &mirror->rdev->flags))
> +               return false;
> +
> +       opt_iosize = bdev_io_opt(mirror->rdev->bdev) >> 9;
> +       return opt_iosize > 0 && mirror->seq_start != MaxSector &&
> +              mirror->next_seq_sect > opt_iosize &&
> +              mirror->next_seq_sect - opt_iosize >= mirror->seq_start;
> +}
> +
>  /*
>   * This routine returns the disk from which the requested read should
>   * be done. There is a per-array 'next expected sequential IO' sector
> @@ -767,42 +792,22 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
>                 pending = atomic_read(&rdev->nr_pending);
>                 dist = abs(this_sector - conf->mirrors[disk].head_position);
>                 /* Don't change to another disk for sequential reads */
> -               if (conf->mirrors[disk].next_seq_sect == this_sector
> -                   || dist == 0) {
> -                       int opt_iosize = bdev_io_opt(rdev->bdev) >> 9;
> -                       struct raid1_info *mirror = &conf->mirrors[disk];
> -
> -                       /*
> -                        * If buffered sequential IO size exceeds optimal
> -                        * iosize, check if there is idle disk. If yes, choose
> -                        * the idle disk. read_balance could already choose an
> -                        * idle disk before noticing it's a sequential IO in
> -                        * this disk. This doesn't matter because this disk
> -                        * will idle, next time it will be utilized after the
> -                        * first disk has IO size exceeds optimal iosize. In
> -                        * this way, iosize of the first disk will be optimal
> -                        * iosize at least. iosize of the second disk might be
> -                        * small, but not a big deal since when the second disk
> -                        * starts IO, the first disk is likely still busy.
> -                        */
> -                       if (test_bit(Nonrot, &rdev->flags) && opt_iosize > 0 &&
> -                           mirror->seq_start != MaxSector &&
> -                           mirror->next_seq_sect > opt_iosize &&
> -                           mirror->next_seq_sect - opt_iosize >=
> -                           mirror->seq_start) {
> -                               /*
> -                                * Add 'pending' to avoid choosing this disk if
> -                                * there is other idle disk.
> -                                * Set 'dist' to 0, so that if there is no other
> -                                * idle disk and all disks are rotational, this
> -                                * disk will still be chosen.
> -                                */
> -                               pending++;
> -                               dist = 0;
> -                       } else {
> +               if (is_sequential(conf, disk, r1_bio)) {
> +                       if (!should_choose_next(conf, disk)) {
>                                 best_disk = disk;
>                                 break;
>                         }
> +
> +                       /*
> +                        * Add 'pending' to avoid choosing this disk if there is
> +                        * other idle disk.
> +                        */
> +                       pending++;
> +                       /*
> +                        * Set 'dist' to 0, so that if there is no other idle
> +                        * disk, this disk will still be chosen.
> +                        */
> +                       dist = 0;
>                 }
>
>                 if (min_pending > pending) {
> --
> 2.39.2
>
>
Hi
This patch looks good to me.
Reviewed-by: Xiao Ni <xni@...hat.com>
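
For anyone following the new helpers outside the kernel tree, here is a
minimal userspace sketch of the same decision logic. The struct layout,
field names, and sample values below are simplified stand-ins for
illustration only, not the actual md/raid1 code:

/*
 * Userspace sketch of the refactored sequential-read handling.
 * Mirrors the shape of is_sequential()/should_choose_next() from the
 * patch, but all types and values here are illustrative.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef uint64_t sector_t;
#define MaxSector ((sector_t)~0ULL)

struct mirror_info {
	sector_t next_seq_sect;  /* next sector expected if reads stay sequential */
	sector_t head_position;  /* assumed current head position */
	sector_t seq_start;      /* start of the current sequential run */
	bool nonrot;             /* non-rotational (SSD) device */
	int opt_iosize;          /* optimal IO size in sectors, 0 if unknown */
};

/* Does this read continue the sequential stream already on this disk? */
static bool is_sequential(const struct mirror_info *m, sector_t sector)
{
	return m->next_seq_sect == sector || m->head_position == sector;
}

/*
 * On a non-rotational disk, once the buffered sequential run exceeds the
 * optimal IO size, prefer to hand the stream to another idle disk.
 */
static bool should_choose_next(const struct mirror_info *m)
{
	if (!m->nonrot || m->opt_iosize <= 0 || m->seq_start == MaxSector)
		return false;
	return m->next_seq_sect > (sector_t)m->opt_iosize &&
	       m->next_seq_sect - m->opt_iosize >= m->seq_start;
}

int main(void)
{
	struct mirror_info m = {
		.next_seq_sect = 2048, .head_position = 1024,
		.seq_start = 0, .nonrot = true, .opt_iosize = 1024,
	};

	/* A read at sector 2048 is sequential, but the run already spans
	 * one optimal IO size, so the balancer may switch to an idle disk. */
	printf("sequential=%d choose_next=%d\n",
	       is_sequential(&m, 2048), should_choose_next(&m));
	return 0;
}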

