Date:   Wed, 27 Apr 2022 10:18:05 -0600
From:   Logan Gunthorpe <logang@...tatee.com>
To:     Guoqing Jiang <guoqing.jiang@...ux.dev>,
        linux-kernel@...r.kernel.org, linux-raid@...r.kernel.org,
        Song Liu <song@...nel.org>
Cc:     Christoph Hellwig <hch@...radead.org>,
        Stephen Bates <sbates@...thlin.com>,
        Martin Oliveira <Martin.Oliveira@...eticom.com>,
        David Sloan <David.Sloan@...eticom.com>
Subject: Re: [PATCH v2 12/12] md/raid5: Pivot raid5_make_request()



On 2022-04-26 20:06, Guoqing Jiang wrote:
>>   +static int add_all_stripe_bios(struct stripe_head *sh, struct bio *bi,
>> +        sector_t first_logical_sector, sector_t last_sector,
>> +        int forwrite, int previous)
>> +{
>> +    int dd_idx;
>> +    int ret = 1;
>> +
>> +    spin_lock_irq(&sh->stripe_lock);
>> +
>> +    for (dd_idx = 0; dd_idx < sh->disks; dd_idx++) {
>> +        struct r5dev *dev = &sh->dev[dd_idx];
>> +
>> +        clear_bit(R5_BioReady, &dev->flags);
>> +
>> +        if (dd_idx == sh->pd_idx)
>> +            continue;
>> +
>> +        if (dev->sector < first_logical_sector ||
>> +            dev->sector >= last_sector)
>> +            continue;
>> +
>> +        if (stripe_bio_overlaps(sh, bi, dd_idx, forwrite)) {
>> +            set_bit(R5_Overlap, &dev->flags);
>> +            ret = 0;
>> +            continue;
>> +        }
>> +
>> +        set_bit(R5_BioReady, &dev->flags);
> 
> Is it possible to just call __add_stripe_bio here, and change the
> "continue" above to "return"?

No. It was done this way because we have to add the bio either to all of
the disks or to none of them. Otherwise, if one disk fails partway
through and we have to retry, we can't know which disks the bio was
already added to.
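The all-or-nothing pattern above can be modeled in userspace roughly as
follows. This is only an illustrative sketch: struct disk, struct
stripe, and the function names are simplified stand-ins, not the kernel
types from the patch.

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Model of the all-or-nothing rule: a bio may only be committed to a
 * stripe if it can be added to *every* data disk. Phase 1 checks all
 * disks without side effects beyond the ready flags; phase 2 commits
 * only if phase 1 succeeded everywhere, so a retry never has to guess
 * which disks were already touched.
 */
#define NDISKS 4

struct disk {
	bool overlap;   /* a pending request already covers this range */
	bool bio_ready; /* phase 1 marked this disk as addable */
	bool committed; /* phase 2 actually added the bio */
};

struct stripe {
	struct disk dev[NDISKS];
};

/* Phase 1: check every disk; return false if any would overlap. */
static bool mark_all_ready(struct stripe *sh)
{
	bool ok = true;

	for (int i = 0; i < NDISKS; i++) {
		sh->dev[i].bio_ready = false;
		if (sh->dev[i].overlap)
			ok = false;   /* keep scanning to clear stale flags */
		else
			sh->dev[i].bio_ready = true;
	}
	return ok;
}

/* Phase 2: commit only when phase 1 succeeded for all disks. */
static bool add_all_or_none(struct stripe *sh)
{
	if (!mark_all_ready(sh))
		return false;         /* nothing committed; caller retries */
	for (int i = 0; i < NDISKS; i++)
		sh->dev[i].committed = true;
	return true;
}
```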

>> @@ -5869,6 +5911,10 @@ enum stripe_result {
>>   struct stripe_request_ctx {
>>       bool do_flush;
>>       struct stripe_head *batch_last;
>> +    sector_t disk_sector_done;
>> +    sector_t start_disk_sector;
>> +    bool first_wrap;
>> +    sector_t last_sector;
>>   };
> 
> Could you add some comments to above members if possible?

Yes, Christoph already asked for this and I've reworked this patch to
make it much clearer for v3.
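For reference while v3 is pending, one plausible way to document the
members, inferred purely from how they are used later in this patch.
The comments here are guesses, not the actual v3 wording:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t sector_t;  /* stand-in for the kernel's sector_t */

struct stripe_head;         /* opaque here; defined in raid5.h */

/*
 * Hypothetical commented version of the context struct; the comments
 * are inferred from make_stripe_request(), not taken from v3.
 */
struct stripe_request_ctx {
	bool do_flush;                  /* bio needs a preceding flush */
	struct stripe_head *batch_last; /* last stripe added to a batch */
	sector_t disk_sector_done;      /* disk sector of the last
					 * completed stripe_head */
	sector_t start_disk_sector;     /* disk sector of the first
					 * stripe_head visited */
	bool first_wrap;                /* still below the start sector
					 * after wrapping around */
	sector_t last_sector;           /* first sector beyond the bio */
};
```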

>>   static enum stripe_result make_stripe_request(struct mddev *mddev,
>> @@ -5908,6 +5954,36 @@ static enum stripe_result make_stripe_request(struct mddev *mddev,
>>         new_sector = raid5_compute_sector(conf, logical_sector, previous,
>>                         &dd_idx, NULL);
>> +
>> +    /*
>> +     * This is a tricky algorithm to figure out which stripe_heads
>> +     * have already been visited, and to exit early if the
>> +     * stripe_head has already been done. (All disks are added to a
>> +     * stripe_head at once in add_all_stripe_bios().)
>> +     *
>> +     * To start with, the disk sector of the last stripe that has
>> +     * been completed is stored in ctx->disk_sector_done. If
>> +     * new_sector is less than this value, the stripe_head has
>> +     * already been done.
>> +     *
>> +     * There's one issue with this: if the request starts in the
>> +     * middle of a chunk, all the stripe heads before the starting
>> +     * offset will be missed. To account for this, set the
>> +     * first_wrap boolean to true if new_sector is less than the
>> +     * starting sector. Clear the boolean once the start sector is
>> +     * hit for the second time. When first_wrap is set, ignore
>> +     * disk_sector_done.
>> +     */
>> +    if (ctx->start_disk_sector == MaxSector) {
>> +        ctx->start_disk_sector = new_sector;
>> +    } else if (new_sector < ctx->start_disk_sector) {
>> +        ctx->first_wrap = true;
>> +    } else if (new_sector == ctx->start_disk_sector) {
>> +        ctx->first_wrap = false;
>> +        ctx->start_disk_sector = 0;
>> +        return STRIPE_SUCCESS;
>> +    } else if (!ctx->first_wrap && new_sector <= ctx->disk_sector_done) {
>> +        return STRIPE_SUCCESS;
>> +    }
>> +
> 
> Hmm, with the above tricky algorithm, I guess the point is that we
> can avoid calling stripe_add_to_batch_list below, which has hash_lock
> contention. If so, maybe we can change stripe_can_batch for that
> purpose.

No, that's not the purpose. The purpose is to add the bio to the stripe
for every disk, so that we can avoid calling find_get_stripe() for every
single page and only call it once per stripe head.
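The early-exit logic quoted above can be sketched as a userspace state
machine. This is a simplified model of the quoted hunk, not the kernel
code: struct req_ctx stands in for stripe_request_ctx, and updating
disk_sector_done is folded into the function for self-containment.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t sector_t;
#define MaxSector (~(sector_t)0)

/* Simplified stand-in for struct stripe_request_ctx. */
struct req_ctx {
	sector_t start_disk_sector; /* first stripe seen; MaxSector at init */
	sector_t disk_sector_done;  /* highest stripe sector handled so far */
	bool first_wrap;            /* true while below the starting sector */
};

/*
 * Return true if the stripe_head at new_sector was already handled and
 * the caller should skip it; mirrors the quoted early-exit checks.
 */
static bool stripe_already_done(struct req_ctx *ctx, sector_t new_sector)
{
	if (ctx->start_disk_sector == MaxSector) {
		ctx->start_disk_sector = new_sector;  /* first stripe */
	} else if (new_sector < ctx->start_disk_sector) {
		ctx->first_wrap = true;               /* wrapped below start */
	} else if (new_sector == ctx->start_disk_sector) {
		ctx->first_wrap = false;              /* start hit again */
		ctx->start_disk_sector = 0;
		return true;                          /* whole request done */
	} else if (!ctx->first_wrap &&
		   new_sector <= ctx->disk_sector_done) {
		return true;                          /* already processed */
	}
	/* Caller would process the stripe here; record the progress. */
	if (new_sector > ctx->disk_sector_done)
		ctx->disk_sector_done = new_sector;
	return false;
}
```

A request starting mid-chunk then visits, say, sectors 16, 24, wraps to
8, and finally returns to 16, at which point everything has been seen.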


>> diff --git a/drivers/md/raid5.h b/drivers/md/raid5.h
>> index 638d29863503..e73b58844f83 100644
>> --- a/drivers/md/raid5.h
>> +++ b/drivers/md/raid5.h
>> @@ -308,6 +308,7 @@ enum r5dev_flags {
>>       R5_Wantwrite,
>>       R5_Overlap,    /* There is a pending overlapping request
>>                * on this block */
>> +    R5_BioReady,    /* The current bio can be added to this disk */
> 
> This doesn't seem right to me, since the comment describes the bio's
> status while the others are probably for r5dev.

I'm not sure I understand the objection. If you have a better option
please let me know.

I'm still working on this patch. I've caught a couple more rare bugs
that I'm working to fix. The next version should also hopefully be
clearer.

Thanks,

Logan

