Message-ID: <1437487125.7490.5.camel@mita-ThinkPad-T540p>
Date: Tue, 21 Jul 2015 22:58:45 +0900
From: Akinobu Mita <akinobu.mita@...il.com>
To: Ming Lei <tom.leiming@...il.com>
Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Keith Busch <keith.busch@...el.com>,
Jens Axboe <axboe@...nel.dk>
Subject: Re: [PATCH v3 1/7] blk-mq: avoid access hctx->tags->cpumask before
allocation
On Sun, 2015-07-19 at 18:24 +0800, Ming Lei wrote:
> On Sun, Jul 19, 2015 at 12:28 AM, Akinobu Mita <akinobu.mita@...il.com> wrote:
> > When an unmapped hw queue is remapped after the CPU topology is
> > changed, hctx->tags->cpumask is set in blk_mq_map_swqueue() before
> > hctx->tags is allocated.
> >
> > In order to fix this NULL pointer dereference, hctx->tags must be
> > allocated before hctx->tags->cpumask is configured.
>
> The root cause is that the mapping between hctx and ctx can change
> after the CPU topology is changed; then hctx->tags can change too,
> so hctx->tags->cpumask has to be set after hctx->tags is set up.
>
> >
> > Fixes: f26cdc8536 ("blk-mq: Shared tag enhancements")
>
> I am wondering if the above commit considers CPU hotplug; nvme uses
> tags->cpumask to set the irq affinity hint only while starting the
> queue. It looks reasonable to introduce a mapping_changed() callback
> for handling this kind of thing, but that isn't related to this patch.
>
> > Signed-off-by: Akinobu Mita <akinobu.mita@...il.com>
> > Cc: Keith Busch <keith.busch@...el.com>
> > Cc: Jens Axboe <axboe@...nel.dk>
> > Cc: Ming Lei <tom.leiming@...il.com>
> > ---
> > block/blk-mq.c | 9 ++++++++-
> > 1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index 7d842db..f29f766 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -1821,7 +1821,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
> >
> >  		hctx = q->mq_ops->map_queue(q, i);
> >  		cpumask_set_cpu(i, hctx->cpumask);
> > -		cpumask_set_cpu(i, hctx->tags->cpumask);
> >  		ctx->index_hw = hctx->nr_ctx;
> >  		hctx->ctxs[hctx->nr_ctx++] = ctx;
> >  	}
> > @@ -1861,6 +1860,14 @@ static void blk_mq_map_swqueue(struct request_queue *q)
> >  		hctx->next_cpu = cpumask_first(hctx->cpumask);
> >  		hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
> >  	}
> > +
> > +	queue_for_each_ctx(q, ctx, i) {
> > +		if (!cpu_online(i))
> > +			continue;
> > +
> > +		hctx = q->mq_ops->map_queue(q, i);
> > +		cpumask_set_cpu(i, hctx->tags->cpumask);
>
> If tags->cpumask is always the same as hctx->cpumask, this
> CPU iterator can be avoided.
How about this patch?
Or should we use cpumask_or() instead of cpumask_copy()?
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7d842db..56f814a 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1821,7 +1821,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 
 		hctx = q->mq_ops->map_queue(q, i);
 		cpumask_set_cpu(i, hctx->cpumask);
-		cpumask_set_cpu(i, hctx->tags->cpumask);
 		ctx->index_hw = hctx->nr_ctx;
 		hctx->ctxs[hctx->nr_ctx++] = ctx;
 	}
@@ -1846,7 +1845,10 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 		if (!set->tags[i])
 			set->tags[i] = blk_mq_init_rq_map(set, i);
 		hctx->tags = set->tags[i];
-		WARN_ON(!hctx->tags);
+		if (hctx->tags)
+			cpumask_copy(hctx->tags->cpumask, hctx->cpumask);
+		else
+			WARN_ON(1);
 
 		/*
 		 * Set the map size to the number of mapped software queues.
--