Message-ID: <302127e5-67fa-1ff0-0551-4719005640b8@huaweicloud.com>
Date: Wed, 14 Sep 2022 18:28:17 +0800
From: Yu Kuai <yukuai1@...weicloud.com>
To: Paul Menzel <pmenzel@...gen.mpg.de>,
Yu Kuai <yukuai1@...weicloud.com>
Cc: song@...nel.org, logang@...tatee.com, guoqing.jiang@...ux.dev,
linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org,
yi.zhang@...wei.com, "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH v2 1/4] md/raid10: cleanup wait_barrier()
Hi, Paul
On 2022/09/14 14:36, Paul Menzel wrote:
> Dear Yu,
>
>
> Thank you for the improved patch. Three minor nits.
>
> Am 14.09.22 um 03:49 schrieb Yu Kuai:
>> From: Yu Kuai <yukuai3@...wei.com>
>
> In the summary/title, I’d spell *Clean up* with a space. Maybe even use:
>
> md/raid10: Factor out code from wait_barrier() to stop_waiting_barrier()
Ok, sounds good, I'll change that in the next version.
>
>> Currently the nasty condition is wait_barrier() is hard to read. This
>> patch factor out the condition into a function.
>
> The first *is* above can be removed, and factor*s* needs an s.
Yes, you're right.
>> There are no functional changes.
>>
>> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
>> ---
>> drivers/md/raid10.c | 56 ++++++++++++++++++++++++++-------------------
>> 1 file changed, 32 insertions(+), 24 deletions(-)
>>
>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
>> index 64d6e4cd8a3a..56458a53043d 100644
>> --- a/drivers/md/raid10.c
>> +++ b/drivers/md/raid10.c
>> @@ -957,44 +957,52 @@ static void lower_barrier(struct r10conf *conf)
>> wake_up(&conf->wait_barrier);
>> }
>> +static bool stop_waiting_barrier(struct r10conf *conf)
>> +{
>> + /* barrier is dropped */
>> + if (!conf->barrier)
>> + return true;
>> +
>> + /*
>> + * If there are already pending requests (preventing the barrier from
>> + * rising completely), and the pre-process bio queue isn't empty, then
>> + * don't wait, as we need to empty that queue to get the nr_pending
>> + * count down.
>> + */
>> + if (atomic_read(&conf->nr_pending)) {
>> + struct bio_list *bio_list = current->bio_list;
>> +
>> + if (bio_list && (!bio_list_empty(&bio_list[0]) ||
>> + !bio_list_empty(&bio_list[1])))
>> + return true;
>> + }
>> +
>> + /* move on if recovery thread is blocked by us */
>> + if (conf->mddev->thread->tsk == current &&
>> + test_bit(MD_RECOVERY_RUNNING, &conf->mddev->recovery) &&
>> + conf->nr_queued > 0)
>> + return true;
>> +
>> + return false;
>> +}
>> +
>> static bool wait_barrier(struct r10conf *conf, bool nowait)
>> {
>> bool ret = true;
>> spin_lock_irq(&conf->resync_lock);
>> if (conf->barrier) {
>> - struct bio_list *bio_list = current->bio_list;
>> - conf->nr_waiting++;
>> - /* Wait for the barrier to drop.
>> - * However if there are already pending
>> - * requests (preventing the barrier from
>> - * rising completely), and the
>> - * pre-process bio queue isn't empty,
>> - * then don't wait, as we need to empty
>> - * that queue to get the nr_pending
>> - * count down.
>> - */
>> /* Return false when nowait flag is set */
>> if (nowait) {
>> ret = false;
>> } else {
>> + conf->nr_waiting++;
>> raid10_log(conf->mddev, "wait barrier");
>> wait_event_lock_irq(conf->wait_barrier,
>> - !conf->barrier ||
>> - (atomic_read(&conf->nr_pending) &&
>> - bio_list &&
>> - (!bio_list_empty(&bio_list[0]) ||
>> - !bio_list_empty(&bio_list[1]))) ||
>> - /* move on if recovery thread is
>> - * blocked by us
>> - */
>> - (conf->mddev->thread->tsk == current &&
>> - test_bit(MD_RECOVERY_RUNNING,
>> - &conf->mddev->recovery) &&
>> - conf->nr_queued > 0),
>> + stop_waiting_barrier(conf),
>> conf->resync_lock);
>> + conf->nr_waiting--;
>> }
>> - conf->nr_waiting--;
>> if (!conf->nr_waiting)
>> wake_up(&conf->wait_barrier);
>> }
>
> Acked-by: Paul Menzel <pmenzel@...gen.mpg.de>
Thanks,
Kuai
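
FWIW, the shape of the refactoring, reduced to a standalone sketch: the hairy
wake-up condition becomes a small boolean helper, and the wait site reads as a
single call. The struct and fields below are made up for illustration only
(they are not the real raid10 state, and the real code waits with
wait_event_lock_irq() under resync_lock rather than the plain checks shown
here):

	#include <assert.h>
	#include <stdbool.h>

	/* made-up stand-in for the per-array state, illustration only */
	struct fake_conf {
		bool barrier;
		int nr_pending;
		int nr_queued;
	};

	/* predicate helper: true when the waiter may stop waiting */
	static bool stop_waiting(const struct fake_conf *conf)
	{
		/* barrier is dropped */
		if (!conf->barrier)
			return true;

		/* waiting would block forward progress, so move on */
		if (conf->nr_pending && conf->nr_queued)
			return true;

		return false;
	}

	int main(void)
	{
		struct fake_conf conf = {
			.barrier = true, .nr_pending = 1, .nr_queued = 1,
		};

		/* the wait condition is now one readable call */
		assert(stop_waiting(&conf));

		conf.nr_queued = 0;
		assert(!stop_waiting(&conf));
		return 0;
	}

The gain is purely readability: the call site no longer embeds several
unrelated checks in one expression.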
>
>
> Kind regards,
>
> Paul
>