Message-ID: <CALTww2_8vcszjZF5emGHjcm+XngHptGV+i11TVs00o8E0fPWGw@mail.gmail.com>
Date:   Wed, 31 May 2023 23:23:41 +0800
From:   Xiao Ni <xni@...hat.com>
To:     Yu Kuai <yukuai1@...weicloud.com>
Cc:     song@...nel.org, neilb@...e.de, akpm@...l.org,
        linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org,
        yi.zhang@...wei.com, yangerkun@...wei.com,
        "yukuai (C)" <yukuai3@...wei.com>
Subject: Re: [PATCH -next v3 6/7] md/raid1-10: don't handle plugged bio by
 daemon thread

On Wed, May 31, 2023 at 4:06 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
>
> Hi,
>
> > On 2023/05/31 16:00, Xiao Ni wrote:
> > On Wed, May 31, 2023 at 3:55 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
> >>
> >> Hi,
> >>
> >> On 2023/05/31 15:50, Xiao Ni wrote:
> >>> On Mon, May 29, 2023 at 9:14 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
> >>>>
> >>>> From: Yu Kuai <yukuai3@...wei.com>
> >>>>
> >>>> current->bio_list will be set in the submit_bio() context; in this case
> >>>> bitmap io will be added to the list and will wait for the current io
> >>>> submission to finish, while the current io submission must wait for the
> >>>> bitmap io to be done. Commit 874807a83139 ("md/raid1{,0}: fix deadlock in
> >>>> bitmap_unplug.") fixes the deadlock by handling plugged bio in the
> >>>> daemon thread.
> >>>
> >>> Thanks for the historical introduction. I did a test and printed the
> >>> logs in raid10_unplug. The tools I used are dd and mkfs. from_schedule
> >>> is always true during I/O and it's 0 when the io finishes. So I have a
> >>> question here: how can I trigger the condition where from_schedule is 0
> >>> and current->bio_list is not NULL? In other words, is there really a
> >>> deadlock here? Before your patch it looks like all bios are merged
> >>> into conf->pending_bio_list and are handled by raid10d. It can't
> >>> submit bios directly in the originating process, which is mentioned in
> >>> 57c67df48866.
> >>>
> >> As I mentioned below, after commit a214b949d8e3, this deadlock doesn't
> >> exist anymore, and without this patch, patch 7 will introduce this
> >> scenario again.
> >>
> >> Thanks,
> >> Kuai
> >>>>
> >>>> On the one hand, the deadlock won't exist after commit a214b949d8e3
> >>>> ("blk-mq: only flush requests from the plug in blk_mq_submit_bio"). On
> >>>> the other hand, the current solution makes it impossible to flush plugged
> >>>> bio in raid1/10_make_request(), because it would cause all writes to go
> >>>> through the daemon thread.
> >>>>
> >>>> In order to limit the number of plugged bio, commit 874807a83139
> >>>> ("md/raid1{,0}: fix deadlock in bitmap_unplug.") is reverted, and the
> >>>> deadlock is fixed by handling bitmap io asynchronously.
> >>>>
> >>>> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> >>>> ---
> >>>>    drivers/md/raid1-10.c | 14 ++++++++++++++
> >>>>    drivers/md/raid1.c    |  4 ++--
> >>>>    drivers/md/raid10.c   |  8 +++-----
> >>>>    3 files changed, 19 insertions(+), 7 deletions(-)
> >>>>
> >>>> diff --git a/drivers/md/raid1-10.c b/drivers/md/raid1-10.c
> >>>> index 73cc3cb9154d..17e55c1fd5a1 100644
> >>>> --- a/drivers/md/raid1-10.c
> >>>> +++ b/drivers/md/raid1-10.c
> >>>> @@ -151,3 +151,17 @@ static inline bool raid1_add_bio_to_plug(struct mddev *mddev, struct bio *bio,
> >>>>
> >>>>           return true;
> >>>>    }
> >>>> +
> >>>> +/*
> >>>> + * current->bio_list will be set under submit_bio() context, in this case bitmap
> >>>> + * io will be added to the list and wait for current io submission to finish,
> >>>> + * while current io submission must wait for bitmap io to be done. In order to
> >>>> + * avoid such deadlock, submit bitmap io asynchronously.
> >>>> + */
> >>>> +static inline void raid1_prepare_flush_writes(struct bitmap *bitmap)
> >>>> +{
> >>>> +       if (current->bio_list)
> >>>> +               md_bitmap_unplug_async(bitmap);
> >>>> +       else
> >>>> +               md_bitmap_unplug(bitmap);
> >>>> +}
> >>>> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> >>>> index 0778e398584c..006620fed595 100644
> >>>> --- a/drivers/md/raid1.c
> >>>> +++ b/drivers/md/raid1.c
> >>>> @@ -794,7 +794,7 @@ static int read_balance(struct r1conf *conf, struct r1bio *r1_bio, int *max_sect
> >>>>    static void flush_bio_list(struct r1conf *conf, struct bio *bio)
> >>>>    {
> >>>>           /* flush any pending bitmap writes to disk before proceeding w/ I/O */
> >>>> -       md_bitmap_unplug(conf->mddev->bitmap);
> >>>> +       raid1_prepare_flush_writes(conf->mddev->bitmap);
> >>>
> >>> If we unplug the bitmap asynchronously, can we make sure the bitmap is
> >>> flushed before the corresponding data?
> >
> > Could you explain this question?
>
> Sorry that I missed this... See the new helper in patch 5:
> md_bitmap_unplug_async() will still wait for the bitmap io to finish.
>
> md_bitmap_unplug_async
>   DECLARE_COMPLETION_ONSTACK(done)
>   ...
>   wait_for_completion(&done)

Ah, I see. You use this approach to avoid putting the bitmap io on
current->bio_list. Thanks for the explanation :)
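
For my own understanding, a rough sketch of how such a completion-based
helper could look (unplug_work, unplug_work_fn() and md_bitmap_wq below are
assumed names for illustration only, not the actual patch 5 code):

extern struct workqueue_struct *md_bitmap_wq;	/* assumed to exist */

struct unplug_work {
	struct work_struct work;
	struct bitmap *bitmap;
	struct completion *done;
};

static void unplug_work_fn(struct work_struct *work)
{
	struct unplug_work *uw = container_of(work, struct unplug_work, work);

	/*
	 * Runs in worker context, where current->bio_list is NULL, so the
	 * bitmap io is really issued instead of being queued on the
	 * caller's bio_list.
	 */
	md_bitmap_unplug(uw->bitmap);
	complete(uw->done);
}

void md_bitmap_unplug_async(struct bitmap *bitmap)
{
	DECLARE_COMPLETION_ONSTACK(done);
	struct unplug_work uw = {
		.bitmap = bitmap,
		.done = &done,
	};

	INIT_WORK_ONSTACK(&uw.work, unplug_work_fn);
	queue_work(md_bitmap_wq, &uw.work);
	/*
	 * The caller still blocks until the bitmap pages are written, so
	 * ordering against the data writes that follow is preserved.
	 */
	wait_for_completion(&done);
	destroy_work_on_stack(&uw.work);
}

If that is roughly the shape of it, then the ordering concern above is
covered: the submission just moves to a context without a bio_list.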

Regards
Xiao
>
> Thanks,
> Kuai
> >
> > Regards
> > Xiao
> >
> >
> >>>
> >>> Regards
> >>> Xiao
> >>>
> >>>>           wake_up(&conf->wait_barrier);
> >>>>
> >>>>           while (bio) { /* submit pending writes */
> >>>> @@ -1166,7 +1166,7 @@ static void raid1_unplug(struct blk_plug_cb *cb, bool from_schedule)
> >>>>           struct r1conf *conf = mddev->private;
> >>>>           struct bio *bio;
> >>>>
> >>>> -       if (from_schedule || current->bio_list) {
> >>>> +       if (from_schedule) {
> >>>>                   spin_lock_irq(&conf->device_lock);
> >>>>                   bio_list_merge(&conf->pending_bio_list, &plug->pending);
> >>>>                   spin_unlock_irq(&conf->device_lock);
> >>>> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> >>>> index 6640507ecb0d..fb22cfe94d32 100644
> >>>> --- a/drivers/md/raid10.c
> >>>> +++ b/drivers/md/raid10.c
> >>>> @@ -902,9 +902,7 @@ static void flush_pending_writes(struct r10conf *conf)
> >>>>                   __set_current_state(TASK_RUNNING);
> >>>>
> >>>>                   blk_start_plug(&plug);
> >>>> -               /* flush any pending bitmap writes to disk
> >>>> -                * before proceeding w/ I/O */
> >>>> -               md_bitmap_unplug(conf->mddev->bitmap);
> >>>> +               raid1_prepare_flush_writes(conf->mddev->bitmap);
> >>>>                   wake_up(&conf->wait_barrier);
> >>>>
> >>>>                   while (bio) { /* submit pending writes */
> >>>> @@ -1108,7 +1106,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
> >>>>           struct r10conf *conf = mddev->private;
> >>>>           struct bio *bio;
> >>>>
> >>>> -       if (from_schedule || current->bio_list) {
> >>>> +       if (from_schedule) {
> >>>>                   spin_lock_irq(&conf->device_lock);
> >>>>                   bio_list_merge(&conf->pending_bio_list, &plug->pending);
> >>>>                   spin_unlock_irq(&conf->device_lock);
> >>>> @@ -1120,7 +1118,7 @@ static void raid10_unplug(struct blk_plug_cb *cb, bool from_schedule)
> >>>>
> >>>>           /* we aren't scheduling, so we can do the write-out directly. */
> >>>>           bio = bio_list_get(&plug->pending);
> >>>> -       md_bitmap_unplug(mddev->bitmap);
> >>>> +       raid1_prepare_flush_writes(mddev->bitmap);
> >>>>           wake_up(&conf->wait_barrier);
> >>>>
> >>>>           while (bio) { /* submit pending writes */
> >>>> --
> >>>> 2.39.2
> >>>>
> >>>
> >>> .
> >>>
> >>
> >
> > .
> >
>
