Message-ID: <CACVXFVP2kNfVKK3rQoZAyRMds8GJ2_oiDnYPrp9f5v0nCQHnLQ@mail.gmail.com>
Date: Tue, 3 Nov 2015 09:12:16 +0800
From: Ming Lei <tom.leiming@...il.com>
To: Jeff Moyer <jmoyer@...hat.com>
Cc: Jens Axboe <axboe@...nel.dk>, Jason Luo <zhangqing.luo@...cle.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Guru Anbalagane <guru.anbalagane@...cle.com>,
Feng Jin <joe.jin@...cle.com>, Tejun Heo <tj@...nel.org>
Subject: Re: [patch, v2] blk-mq: avoid excessive boot delays with large lun counts
On Mon, Nov 2, 2015 at 10:04 PM, Jeff Moyer <jmoyer@...hat.com> wrote:
> Ming Lei <tom.leiming@...il.com> writes:
>
>> You can add
>> Reviewed-by: Ming Lei <ming.lei@...onical.com>
>> if the following trivial issues (especially the 2nd one) are addressed.
>
> [snip]
>
>>> @@ -1891,7 +1890,12 @@ static void blk_mq_del_queue_tag_set(struct request_queue *q)
>>>
>>> mutex_lock(&set->tag_list_lock);
>>> list_del_init(&q->tag_set_list);
>>> - blk_mq_update_tag_set_depth(set);
>>> + if (set->tag_list.next == set->tag_list.prev) {
>>
>> list_is_singular() should be better.
>
> Didn't even know that existed. Thanks.
>
>>> + /* just transitioned to unshared */
>>> + set->flags &= ~BLK_MQ_F_TAG_SHARED;
>>> + /* update existing queue */
>>> + blk_mq_update_tag_set_depth(set, false);
>>> + }
>>> mutex_unlock(&set->tag_list_lock);
>>> }
>>>
>>> @@ -1901,8 +1905,17 @@ static void blk_mq_add_queue_tag_set(struct blk_mq_tag_set *set,
>>> q->tag_set = set;
>>>
>>> mutex_lock(&set->tag_list_lock);
>>> +
>>> + /* Check to see if we're transitioning to shared (from 1 to 2 queues). */
>>> + if (!list_empty(&set->tag_list) && !(set->flags & BLK_MQ_F_TAG_SHARED)) {
>>> + set->flags |= BLK_MQ_F_TAG_SHARED;
>>> + /* update existing queue */
>>> + blk_mq_update_tag_set_depth(set, true);
>>> + }
>>> + if (set->flags & BLK_MQ_F_TAG_SHARED)
>>
>> The above should be 'else if', otherwise the current queue will be set
>> twice.
>
> I moved the list add below this to avoid that very issue. See:
>
>>> + queue_set_hctx_shared(q, true);
>>> list_add_tail(&q->tag_set_list, &set->tag_list);
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>
> This seemed the cleanest way to structure the code to avoid the double
> walking of the hctx list for the current q.
OK, it is correct, so v1 is fine.
Reviewed-by: Ming Lei <ming.lei@...onical.com>
>
> -Jeff
>
>>> - blk_mq_update_tag_set_depth(set);
>>> +
>>> mutex_unlock(&set->tag_list_lock);
>>> }
>>>