Message-ID: <CACVXFVMADNe1qwMK5Wg7T8_FNWm7U_u5WAtcEyM6CTA3H_1dMg@mail.gmail.com>
Date:	Fri, 30 Oct 2015 08:37:05 +0800
From:	Ming Lei <tom.leiming@...il.com>
To:	Jeff Moyer <jmoyer@...hat.com>
Cc:	Jens Axboe <axboe@...nel.dk>, Jason Luo <zhangqing.luo@...cle.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Guru Anbalagane <guru.anbalagane@...cle.com>,
	Feng Jin <joe.jin@...cle.com>, Tejun Heo <tj@...nel.org>
Subject: Re: [patch] blk-mq: avoid excessive boot delays with large lun counts

On Thu, Oct 29, 2015 at 11:18 PM, Jeff Moyer <jmoyer@...hat.com> wrote:
> Ming Lei <tom.leiming@...il.com> writes:
>
>> Looks we should have cleared the TAG_SHARED flag during
>> blk_mq_init_hctx() and just let blk_mq_update_tag_set_depth()
>> deal with that, then the race can be avoided.
>
> The whole point of the patch set is to propagate the flag up to the tag
> set so that we can avoid iterating all hctxs in all queues.

Yes, the idea is correct; the difference is that I don't see any race
of the kind your original patch tried to address.

>
>>> At this point, neither queue's hctxs have the shared flag set.  Next,
>>> both will race to get the tag_list_lock for the tag_set inside of
>>> blk_mq_add_queue_tag_set.  Only one will win and mark the initial
> queue's hctxs as shared (as well as its own).  Then, when the second
>>> queue gets the lock, it will find that the shared tag is already set,
>>> and assume that it doesn't have to do anything.  But, because its
>>
>> As I suggested, we can always set it when TAG_SHARED is set in
>> set->flags, because we know the queue isn't ready yet at that time.
>
> I see.  You are suggesting that I just get rid of the conditional.  We
> could do that, but you will get the exact same result as what I posted.
> I'm not sure why you would prefer that over the explicit check.  With
> the patch I posted, we can avoid walking the list of hctxs a second
> time.
>
> Anyway, here's a patch that I think implements your suggestion.  I
> prefer the original, but this should achieve the same exact result.
> Let me know if I've misunderstood.

Yes, this one is better, and the big difference is that there is no
such race: every update happens under set->tag_list_lock, and a queue
added to an already-shared set now marks its own hctxs unconditionally.
The new version is also cleaner.

>
> Cheers,
> Jeff
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 85f0143..7bf717a 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1860,27 +1860,26 @@ static void blk_mq_map_swqueue(struct request_queue *q,
>         }
>  }

The following change should be made in blk_mq_init_hctx() too:

     hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;

The idea is that the shared flag is always computed afterwards, in
blk_mq_update_tag_set_depth().
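
In diff form that would look like the hunk below (a sketch: I'm
assuming the pre-patch assignment in blk_mq_init_hctx() is the plain
"hctx->flags = set->flags;" from the current tree):

    --- a/block/blk-mq.c
    +++ b/block/blk-mq.c
    @@ static int blk_mq_init_hctx(struct request_queue *q,
    -	hctx->flags = set->flags;
    +	/*
    +	 * Never copy TAG_SHARED from the set here;
    +	 * blk_mq_update_tag_set_depth() computes it under
    +	 * set->tag_list_lock once the queue is registered.
    +	 */
    +	hctx->flags = set->flags & ~BLK_MQ_F_TAG_SHARED;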

>
> -static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set)
> +static void queue_set_hctx_shared(struct request_queue *q, bool shared)
>  {
>         struct blk_mq_hw_ctx *hctx;
> -       struct request_queue *q;
> -       bool shared;
>         int i;
>
> -       if (set->tag_list.next == set->tag_list.prev)
> -               shared = false;
> -       else
> -               shared = true;
> +       queue_for_each_hw_ctx(q, hctx, i) {
> +               if (shared)
> +                       hctx->flags |= BLK_MQ_F_TAG_SHARED;
> +               else
> +                       hctx->flags &= ~BLK_MQ_F_TAG_SHARED;
> +       }
> +}
> +
> +static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set, bool shared)
> +{
> +       struct request_queue *q;
>
>         list_for_each_entry(q, &set->tag_list, tag_set_list) {
>                 blk_mq_freeze_queue(q);
> -
> -               queue_for_each_hw_ctx(q, hctx, i) {
> -                       if (shared)
> -                               hctx->flags |= BLK_MQ_F_TAG_SHARED;
> -                       else
> -                               hctx->flags &= ~BLK_MQ_F_TAG_SHARED;
> -               }
> +               queue_set_hctx_shared(q, shared);
>                 blk_mq_unfreeze_queue(q);
>         }
>  }
> @@ -1891,7 +1890,13 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
>
>         mutex_lock(&set->tag_list_lock);
>         list_del_init(&q->tag_set_list);
> -       blk_mq_update_tag_set_depth(set);
> +
> +       if (set->tag_list.next == set->tag_list.prev) {
> +               /* just transitioned to unshared */
> +               set->flags &= ~BLK_MQ_F_TAG_SHARED;
> +               /* update existing queue */
> +               blk_mq_update_tag_set_depth(set, false);
> +       }
>         mutex_unlock(&set->tag_list_lock);
>  }
>
> @@ -1902,7 +1907,21 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
>
>         mutex_lock(&set->tag_list_lock);
>         list_add_tail(&q->tag_set_list, &set->tag_list);
> -       blk_mq_update_tag_set_depth(set);
> +
> +       if (set->tag_list.next != set->tag_list.prev) {
> +               /*
> +                * Only update the tag set state if the state has
> +                * actually changed.
> +                */
> +               if (!(set->flags & BLK_MQ_F_TAG_SHARED)) {
> +                       /* just transitioned to shared tags */
> +                       set->flags |= BLK_MQ_F_TAG_SHARED;
> +                       blk_mq_update_tag_set_depth(set, true);
> +               } else {
> +                       /* ensure we didn't race with another addition */

The above comment should be removed; I don't see any race now.

> +                       queue_set_hctx_shared(q, true);
> +               }
> +       }
>         mutex_unlock(&set->tag_list_lock);
>  }
>

This one is a bit cleaner than the original, and the
queue_first_hw_ctx() trick can be avoided.
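
To summarize the state machine the patch ends up implementing, here
is a tiny userspace model (hypothetical toy names, compile-and-run C;
it mirrors only the shared/unshared transitions, not the kernel's
locking or queue freezing):

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Toy model of the tag-set sharing transitions: the set-wide
     * flag flips only on the 1->2 and 2->1 queue-count transitions;
     * otherwise only the newly added queue needs marking.
     */
    struct toy_set {
    	int nr_queues;
    	bool shared;
    };

    static void add_queue(struct toy_set *s)
    {
    	s->nr_queues++;
    	if (s->nr_queues > 1 && !s->shared) {
    		s->shared = true;
    		printf("transition to shared: walk all %d queues\n",
    		       s->nr_queues);
    	} else if (s->shared) {
    		printf("already shared: mark only the new queue\n");
    	}
    }

    static void del_queue(struct toy_set *s)
    {
    	s->nr_queues--;
    	if (s->nr_queues <= 1 && s->shared) {
    		s->shared = false;
    		printf("transition to unshared: update remaining queue\n");
    	}
    }

    int main(void)
    {
    	struct toy_set s = { 0, false };

    	add_queue(&s);	/* first queue: stays unshared */
    	add_queue(&s);	/* second queue: flips the whole set */
    	add_queue(&s);	/* third queue: cheap per-queue mark */
    	del_queue(&s);
    	del_queue(&s);	/* back to one queue: flips to unshared */
    	return 0;
    }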

-- 
Ming Lei