Message-ID: <CAPhsuW4dXrwt7VTifcdbdwH6Uz3b4m4Z54fBfD3LDjXy89PTkQ@mail.gmail.com>
Date:   Tue, 26 Sep 2023 17:15:13 -0700
From:   Song Liu <song@...nel.org>
To:     Yu Kuai <yukuai1@...weicloud.com>
Cc:     mariusz.tkaczyk@...ux.intel.com, xni@...hat.com,
        linux-raid@...r.kernel.org, linux-kernel@...r.kernel.org,
        yukuai3@...wei.com, yi.zhang@...wei.com, yangerkun@...wei.com
Subject: Re: [PATCH v2 1/2] md: factor out a new helper to put mddev

On Mon, Sep 25, 2023 at 8:04 PM Yu Kuai <yukuai1@...weicloud.com> wrote:
>
> From: Yu Kuai <yukuai3@...wei.com>
>
> There are no functional changes, the new helper will still hold
> 'all_mddevs_lock' after putting mddev, and it will be used to simplify
> md_seq_ops.
>
> Signed-off-by: Yu Kuai <yukuai3@...wei.com>
> ---
>  drivers/md/md.c | 18 +++++++++++++++---
>  1 file changed, 15 insertions(+), 3 deletions(-)
>
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 10cb4dfbf4ae..a5ef6f7da8ec 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -616,10 +616,15 @@ static inline struct mddev *mddev_get(struct mddev *mddev)
>
>  static void mddev_delayed_delete(struct work_struct *ws);
>
> -void mddev_put(struct mddev *mddev)
> +static void __mddev_put(struct mddev *mddev, bool locked)
>  {
> -       if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
> +       if (locked) {
> +               spin_lock(&all_mddevs_lock);
> +               if (!atomic_dec_and_test(&mddev->active))
> +                       return;
> +       } else if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
>                 return;
> +

This condition is indeed very confusing. Whether we call the flag
"locked" or "do_lock", the name is not really accurate.

How about we factor out a helper with the following logic:

        if (!mddev->raid_disks && list_empty(&mddev->disks) &&
            mddev->ctime == 0 && !mddev->hold_active) {
                /* Array is not configured at all, and not held active,
                 * so destroy it */
                set_bit(MD_DELETED, &mddev->flags);

                /*
                 * Call queue_work inside the spinlock so that
                 * flush_workqueue() after mddev_find will succeed in waiting
                 * for the work to be done.
                 */
                queue_work(md_misc_wq, &mddev->del_work);
        }

and then use it in the two callers?
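
One possible shape for that (a rough, untested sketch; the helper names
mddev_free_if_unused and mddev_put_locked are just illustrative, not
taken from the patch):

static void mddev_free_if_unused(struct mddev *mddev)
{
	lockdep_assert_held(&all_mddevs_lock);

	if (!mddev->raid_disks && list_empty(&mddev->disks) &&
	    mddev->ctime == 0 && !mddev->hold_active) {
		/* Array is not configured at all, and not held active,
		 * so destroy it */
		set_bit(MD_DELETED, &mddev->flags);

		/*
		 * Call queue_work inside the spinlock so that
		 * flush_workqueue() after mddev_find will succeed in waiting
		 * for the work to be done.
		 */
		queue_work(md_misc_wq, &mddev->del_work);
	}
}

/* Caller does not hold all_mddevs_lock; the lock is dropped on return. */
void mddev_put(struct mddev *mddev)
{
	if (!atomic_dec_and_lock(&mddev->active, &all_mddevs_lock))
		return;
	mddev_free_if_unused(mddev);
	spin_unlock(&all_mddevs_lock);
}

/* Caller already holds all_mddevs_lock, e.g. from md_seq_ops. */
static void mddev_put_locked(struct mddev *mddev)
{
	lockdep_assert_held(&all_mddevs_lock);

	if (atomic_dec_and_test(&mddev->active))
		mddev_free_if_unused(mddev);
}

That way neither caller needs a bool flag, and the locking rule is
explicit in each function's name instead of in a branch.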

Does this make sense?

Thanks,
Song
