Message-ID: <ae66036a-0673-297a-69b8-81721d6b8efc@huaweicloud.com>
Date:   Tue, 20 Jun 2023 21:38:05 +0800
From:   Yu Kuai <yukuai1@...weicloud.com>
To:     Xiao Ni <xni@...hat.com>, Yu Kuai <yukuai1@...weicloud.com>
Cc:     song@...nel.org, linux-raid@...r.kernel.org,
        linux-kernel@...r.kernel.org, yi.zhang@...wei.com,
        yangerkun@...wei.com, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH -next 4/8] md/raid1: switch to use md_account_bio() for io
 accounting

Hi,

On 2023/06/20 17:07, Xiao Ni wrote:
> On Mon, Jun 19, 2023 at 8:49 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
>>
>> From: Yu Kuai <yukuai3@...wei.com>
>>
>> Two problems can be fixed this way:
>>
>> 1) 'active_io' will represent inflight io instead of io that is being
>> dispatched.
>>
>> 2) If io accounting is enabled or disabled while io is still inflight,
>> bio_start_io_acct() and bio_end_io_acct() are not balanced and the
>> inflight counter will leak.
>>
>> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
>> ---
>>   drivers/md/raid1.c | 14 ++++++--------
>>   drivers/md/raid1.h |  1 -
>>   2 files changed, 6 insertions(+), 9 deletions(-)
>>
>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
>> index dd25832eb045..06fa1580501f 100644
>> --- a/drivers/md/raid1.c
>> +++ b/drivers/md/raid1.c
>> @@ -304,8 +304,6 @@ static void call_bio_endio(struct r1bio *r1_bio)
>>          if (!test_bit(R1BIO_Uptodate, &r1_bio->state))
>>                  bio->bi_status = BLK_STS_IOERR;
>>
>> -       if (blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
>> -               bio_end_io_acct(bio, r1_bio->start_time);
>>          bio_endio(bio);
>>   }
>>
>> @@ -1303,10 +1301,10 @@ static void raid1_read_request(struct mddev *mddev, struct bio *bio,
>>          }
>>
>>          r1_bio->read_disk = rdisk;
>> -
>> -       if (!r1bio_existed && blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
>> -               r1_bio->start_time = bio_start_io_acct(bio);
>> -
>> +       if (!r1bio_existed) {
>> +               md_account_bio(mddev, &bio);
>> +               r1_bio->master_bio = bio;
>> +       }
>>          read_bio = bio_alloc_clone(mirror->rdev->bdev, bio, gfp,
>>                                     &mddev->bio_set);
>>
>> @@ -1500,8 +1498,8 @@ static void raid1_write_request(struct mddev *mddev, struct bio *bio,
>>                  r1_bio->sectors = max_sectors;
>>          }
>>
>> -       if (blk_queue_io_stat(bio->bi_bdev->bd_disk->queue))
>> -               r1_bio->start_time = bio_start_io_acct(bio);
>> +       md_account_bio(mddev, &bio);
>> +       r1_bio->master_bio = bio;
>>          atomic_set(&r1_bio->remaining, 1);
>>          atomic_set(&r1_bio->behind_remaining, 0);
>>
>> diff --git a/drivers/md/raid1.h b/drivers/md/raid1.h
>> index 468f189da7a0..14d4211a123a 100644
>> --- a/drivers/md/raid1.h
>> +++ b/drivers/md/raid1.h
>> @@ -157,7 +157,6 @@ struct r1bio {
>>          sector_t                sector;
>>          int                     sectors;
>>          unsigned long           state;
>> -       unsigned long           start_time;
>>          struct mddev            *mddev;
>>          /*
>>           * original bio going to /dev/mdx
>> --
>> 2.39.2
>>
> 
> Hi Kuai
> 
> After this patch, raid1 will have one more memory allocation in the
> I/O path. I'm not sure whether it can affect performance. Besides
> this, the patch looks good to me.

Yes, I'm aware of this additional memory allocation. However, raid1
(and the other levels, similarly) already needs to allocate an r1bio
and some bios (one for a read, and one copy per mirror for a write),
so this is not a none -> one case, just an allocate-2 -> allocate-3
case.
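To be specific, md_account_bio() does roughly the following on the
submit side (a simplified sketch, the real code is in
drivers/md/md.c and helper/field names may differ a little):

struct md_io_acct {
        struct mddev    *mddev;
        struct bio      *orig_bio;
        unsigned long   start_time;
        struct bio      bio_clone;      /* front_pad of 'io_acct_set' */
};

void md_account_bio(struct mddev *mddev, struct bio **bio)
{
        struct block_device *bdev = (*bio)->bi_bdev;
        struct md_io_acct *io_acct;
        struct bio *clone;

        /* 'active_io' is held for the whole lifetime of the io */
        percpu_ref_get(&mddev->active_io);

        /*
         * The one extra allocation: the clone comes from a
         * mempool-backed bioset, so under memory pressure it waits
         * on the pool instead of failing.
         */
        clone = bio_alloc_clone(bdev, *bio, GFP_NOIO,
                                &mddev->io_acct_set);
        io_acct = container_of(clone, struct md_io_acct, bio_clone);
        io_acct->mddev = mddev;
        io_acct->orig_bio = *bio;
        io_acct->start_time = 0;

        /* the decision to account is captured once, at submit time */
        if (blk_queue_io_stat(bdev->bd_disk->queue))
                io_acct->start_time = bio_start_io_acct(*bio);

        clone->bi_end_io = md_end_io_acct;
        clone->bi_private = io_acct;
        *bio = clone;   /* the caller now tracks the clone */
}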

I think performance under memory pressure is bad both with and without
this patch, and the latency of one extra bio clone that doesn't
trigger memory reclaim should be fine.
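
And on the completion side the clone's bi_end_io finishes the
accounting with the start_time saved at submit, so bio_start_io_acct()
and bio_end_io_acct() always stay paired even if the io_stat flag is
flipped while the io is inflight (again simplified):

static void md_end_io_acct(struct bio *clone)
{
        struct md_io_acct *io_acct = clone->bi_private;
        struct bio *orig_bio = io_acct->orig_bio;
        struct mddev *mddev = io_acct->mddev;

        if (clone->bi_status && !orig_bio->bi_status)
                orig_bio->bi_status = clone->bi_status;

        /* end accounting iff we started it, whatever the flag says now */
        if (io_acct->start_time)
                bio_end_io_acct(orig_bio, io_acct->start_time);

        bio_put(clone);
        bio_endio(orig_bio);

        /* the io is truly finished only here */
        percpu_ref_put(&mddev->active_io);
}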

Thanks,
Kuai
> 
> Reviewed-by: Xiao Ni <xni@...hat.com>
> 
