Message-ID: <YVBr9Km1p7+uDioG@T590>
Date:   Sun, 26 Sep 2021 20:47:48 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Laibin Qiu <qiulaibin@...wei.com>
Cc:     axboe@...nel.dk, linux-block@...r.kernel.org,
        linux-kernel@...r.kernel.org, martin.petersen@...cle.com,
        hare@...e.de, asml.silence@...il.com, bvanassche@....org
Subject: Re: [PATCH -next] blk-mq: fix tag_get wait task can't be awakened

Hi Laibin,

On Mon, Sep 13, 2021 at 04:12:48PM +0800, Laibin Qiu wrote:
> When multiple hctxs share one tagset, the wake_batch is calculated
> at initialization time from the queue depth. But when multiple hctxs
> share one tagset, the queue depth assigned to each user may be smaller
> than wake_batch. This can cause the wait queue to never be woken up,
> which leads to a hang.

In case of shared tags, there may be more than one hctx which
allocates tags from the single tag set, and each hctx is limited to
allocating at most:

	hctx_max_depth = max((bt->sb.depth + users - 1) / users, 4U);

	and

	users = atomic_read(&hctx->tags->active_queues)

See hctx_may_queue().
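To make the limit concrete, here is a minimal userspace sketch (not the kernel code itself) of the per-hctx cap that hctx_may_queue() enforces: the shared depth is divided evenly across the hctxs currently accounted as active, with a floor of 4 tags per hctx.

```c
#include <assert.h>

/* Sketch of the per-hctx tag limit from hctx_may_queue():
 * share the total depth across the accounted-active users,
 * rounding up, but never allow fewer than 4 tags per hctx. */
static unsigned int hctx_max_depth(unsigned int sb_depth, unsigned int users)
{
	unsigned int depth = (sb_depth + users - 1) / users; /* round up */

	return depth > 4U ? depth : 4U;                      /* max(depth, 4U) */
}
```

For example, with a shared depth of 256 and 64 accounted-active queues, each hctx is limited to 4 tags; with only 2 active queues, each gets 128.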

Tag idle detection is lazy and may be delayed for up to 30 seconds, so
there could be just one really active hctx (queue) while all the others
are actually idle but are still accounted as active because of the lazy
idle detection. Then, if wake_batch is > hctx_max_depth, driver
tag allocation may wait forever on the really active hctx.

Correct me if my understanding is wrong.
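The scenario above can be sketched numerically. Assuming the simplified wake_batch formula from sbq_calc_wake_batch() (roughly clamp(depth / 8, 1, 8), with SBQ_WAIT_QUEUES == 8 and SBQ_WAKE_BATCH == 8), the concrete numbers below (depth 256, 64 accounted-active queues) are illustrative only, not from a real report:

```c
#include <assert.h>

/* Simplified sbq_calc_wake_batch(): wake_batch ~= clamp(depth / 8, 1, 8). */
static unsigned int calc_wake_batch(unsigned int depth)
{
	unsigned int wb = depth / 8;	/* SBQ_WAIT_QUEUES == 8 */

	if (wb < 1)
		wb = 1;
	if (wb > 8)			/* SBQ_WAKE_BATCH == 8 */
		wb = 8;
	return wb;
}

/* Per-hctx cap from hctx_may_queue(), as above. */
static unsigned int hctx_limit(unsigned int depth, unsigned int users)
{
	unsigned int d = (depth + users - 1) / users;

	return d > 4U ? d : 4U;
}
```

With depth = 256 and 64 accounted-active queues, wake_batch is 8 but the one really active hctx may hold at most 4 tags in flight, so fewer than wake_batch tags can ever be freed between waits and the waiters are never woken.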

> 
> Fix this by recalculating wake_batch when inc or dec active_queues.
> 
> Fixes: 0d2602ca30e41 ("blk-mq: improve support for shared tags maps")
> Signed-off-by: Laibin Qiu <qiulaibin@...wei.com>
> ---
>  block/blk-mq-tag.c      | 44 +++++++++++++++++++++++++++++++++++++++--
>  include/linux/sbitmap.h |  8 ++++++++
>  lib/sbitmap.c           |  3 ++-
>  3 files changed, 52 insertions(+), 3 deletions(-)
> 
> diff --git a/block/blk-mq-tag.c b/block/blk-mq-tag.c
> index 86f87346232a..d02f5ac0004c 100644
> --- a/block/blk-mq-tag.c
> +++ b/block/blk-mq-tag.c
> @@ -16,6 +16,27 @@
>  #include "blk-mq-sched.h"
>  #include "blk-mq-tag.h"
>  
> +static void bt_update_wake_batch(struct sbitmap_queue *bt, unsigned int users)
> +{
> +	unsigned int depth;
> +
> +	depth = max((bt->sb.depth + users - 1) / users, 4U);
> +	sbitmap_queue_update_wake_batch(bt, depth);
> +}

Using the hctx's max queue depth could reduce wake_batch a lot, and
then performance may be degraded.

Just wondering why not set sbq->wake_batch to hctx_max_depth only when
sbq->wake_batch is > hctx_max_depth?
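A minimal sketch of that alternative (an assumption about the intent, not kernel code): leave wake_batch untouched unless it exceeds the per-hctx limit, and only then clamp it down, so the common case keeps the larger batch and its batching benefit.

```c
#include <assert.h>

/* Hypothetical helper: only shrink wake_batch when it is large
 * enough to cause the hang (wake_batch > hctx_max_depth); otherwise
 * keep the original, larger batch for performance. */
static unsigned int adjust_wake_batch(unsigned int wake_batch,
				      unsigned int hctx_max_depth)
{
	return wake_batch > hctx_max_depth ? hctx_max_depth : wake_batch;
}
```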

Thanks,
Ming
