Message-ID: <b0bd9204-1f76-aba3-b754-464e28763f59@molgen.mpg.de>
Date: Thu, 21 Apr 2022 11:17:50 +0200
From: Paul Menzel <pmenzel@...gen.mpg.de>
To: Logan Gunthorpe <logang@...tatee.com>
Cc: linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
Song Liu <song@...nel.org>,
Christoph Hellwig <hch@...radead.org>,
Guoqing Jiang <guoqing.jiang@...ux.dev>,
Stephen Bates <sbates@...thlin.com>,
Martin Oliveira <Martin.Oliveira@...eticom.com>,
David Sloan <David.Sloan@...eticom.com>,
Christoph Hellwig <hch@....de>
Subject: Re: [PATCH v2 01/12] md/raid5: Factor out ahead_of_reshape() function
Dear Logan,
Thank you for these patches.
Am 20.04.22 um 21:54 schrieb Logan Gunthorpe:
> There are a few uses of an ugly ternary operator in raid5_make_request()
> to check if a sector is a head of a reshape sector.
>
> Factor this out into a simple helper called ahead_of_reshape().
>
> This also appears to fix the first bio_wouldblock_error() check, whose
> comparison operators did not match the check below that causes a
> schedule. Besides this, no functional changes intended.
If there is an error, could that be fixed in a separate commit, which
could be applied to the stable series?
> Suggested-by: Christoph Hellwig <hch@....de>
> Signed-off-by: Logan Gunthorpe <logang@...tatee.com>
> ---
> drivers/md/raid5.c | 29 +++++++++++++++++------------
> 1 file changed, 17 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 7f7d1546b9ba..97b23c18402b 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -5787,6 +5787,15 @@ static void make_discard_request(struct mddev *mddev, struct bio *bi)
> bio_endio(bi);
> }
>
> +static bool ahead_of_reshape(struct mddev *mddev, sector_t sector,
> + sector_t reshape_sector)
> +{
> + if (mddev->reshape_backwards)
> + return sector < reshape_sector;
> + else
> + return sector >= reshape_sector;
I like the ternary operator. ;-)
	return mddev->reshape_backwards ? sector < reshape_sector
					: sector >= reshape_sector;
Sorry, does not matter.
Reviewed-by: Paul Menzel <pmenzel@...gen.mpg.de>
Kind regards,
Paul
> +}
> +
> static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
> {
> struct r5conf *conf = mddev->private;
> @@ -5843,9 +5852,8 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
> /* Bail out if conflicts with reshape and REQ_NOWAIT is set */
> if ((bi->bi_opf & REQ_NOWAIT) &&
> (conf->reshape_progress != MaxSector) &&
> - (mddev->reshape_backwards
> - ? (logical_sector > conf->reshape_progress && logical_sector <= conf->reshape_safe)
> - : (logical_sector >= conf->reshape_safe && logical_sector < conf->reshape_progress))) {
> + !ahead_of_reshape(mddev, logical_sector, conf->reshape_progress) &&
> + ahead_of_reshape(mddev, logical_sector, conf->reshape_safe)) {
> bio_wouldblock_error(bi);
> if (rw == WRITE)
> md_write_end(mddev);
> @@ -5874,14 +5882,12 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
> * to check again.
> */
> spin_lock_irq(&conf->device_lock);
> - if (mddev->reshape_backwards
> - ? logical_sector < conf->reshape_progress
> - : logical_sector >= conf->reshape_progress) {
> + if (ahead_of_reshape(mddev, logical_sector,
> + conf->reshape_progress)) {
> previous = 1;
> } else {
> - if (mddev->reshape_backwards
> - ? logical_sector < conf->reshape_safe
> - : logical_sector >= conf->reshape_safe) {
> + if (ahead_of_reshape(mddev, logical_sector,
> + conf->reshape_safe)) {
> spin_unlock_irq(&conf->device_lock);
> schedule();
> do_prepare = true;
> @@ -5912,9 +5918,8 @@ static bool raid5_make_request(struct mddev *mddev, struct bio * bi)
> */
> int must_retry = 0;
> spin_lock_irq(&conf->device_lock);
> - if (mddev->reshape_backwards
> - ? logical_sector >= conf->reshape_progress
> - : logical_sector < conf->reshape_progress)
> + if (!ahead_of_reshape(mddev, logical_sector,
> + conf->reshape_progress))
> /* mismatch, need to try again */
> must_retry = 1;
> spin_unlock_irq(&conf->device_lock);