Message-ID: <CACVXFVNi_US+e=YJ=n-6PFrv5XN+Q3Zfx5c0MbrAOL6BvSMRfA@mail.gmail.com>
Date: Sun, 19 Jul 2015 18:24:16 +0800
From: Ming Lei <tom.leiming@...il.com>
To: Akinobu Mita <akinobu.mita@...il.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Keith Busch <keith.busch@...el.com>,
Jens Axboe <axboe@...nel.dk>
Subject: Re: [PATCH v3 1/7] blk-mq: avoid access hctx->tags->cpumask before allocation
On Sun, Jul 19, 2015 at 12:28 AM, Akinobu Mita <akinobu.mita@...il.com> wrote:
> When an unmapped hw queue is remapped after the CPU topology has
> changed, hctx->tags->cpumask is set before hctx->tags is allocated in
> blk_mq_map_swqueue().
>
> In order to fix this null pointer dereference, hctx->tags must be
> allocated before configuring hctx->tags->cpumask.
The root cause is that the mapping between hctx and ctx can change
after the CPU topology changes, and hctx->tags can change with it, so
hctx->tags->cpumask has to be set after hctx->tags is set up.
>
> Fixes: f26cdc8536 ("blk-mq: Shared tag enhancements")
I am wondering if the above commit considered CPU hotplug; nvme uses
tags->cpumask to set the irq affinity hint only while starting the
queue. It looks reasonable to introduce a mapping_changed() callback
for handling this kind of thing, but that isn't related to this patch.
> Signed-off-by: Akinobu Mita <akinobu.mita@...il.com>
> Cc: Keith Busch <keith.busch@...el.com>
> Cc: Jens Axboe <axboe@...nel.dk>
> Cc: Ming Lei <tom.leiming@...il.com>
> ---
> block/blk-mq.c | 9 ++++++++-
> 1 file changed, 8 insertions(+), 1 deletion(-)
>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 7d842db..f29f766 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1821,7 +1821,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
>
> hctx = q->mq_ops->map_queue(q, i);
> cpumask_set_cpu(i, hctx->cpumask);
> - cpumask_set_cpu(i, hctx->tags->cpumask);
> ctx->index_hw = hctx->nr_ctx;
> hctx->ctxs[hctx->nr_ctx++] = ctx;
> }
> @@ -1861,6 +1860,14 @@ static void blk_mq_map_swqueue(struct request_queue *q)
> hctx->next_cpu = cpumask_first(hctx->cpumask);
> hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
> }
> +
> + queue_for_each_ctx(q, ctx, i) {
> + if (!cpu_online(i))
> + continue;
> +
> + hctx = q->mq_ops->map_queue(q, i);
> + cpumask_set_cpu(i, hctx->tags->cpumask);
If tags->cpumask is always the same as hctx->cpumask, this per-CPU
iteration can be avoided.
> + }
> }
>
> static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set)
> --
> 1.9.1
>
--
Ming Lei