Message-ID: <20180117035159.GA9487@ming.t460p>
Date: Wed, 17 Jan 2018 11:52:05 +0800
From: Ming Lei <ming.lei@...hat.com>
To: "jianchao.wang" <jianchao.w.wang@...cle.com>
Cc: Jens Axboe <axboe@...com>, linux-block@...r.kernel.org,
Christoph Hellwig <hch@...radead.org>,
Christian Borntraeger <borntraeger@...ibm.com>,
Stefan Haberland <sth@...ux.vnet.ibm.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-kernel@...r.kernel.org, Christoph Hellwig <hch@....de>
Subject: Re: [PATCH 2/2] blk-mq: simplify queue mapping & schedule with each
possible CPU
Hi Jianchao,
On Wed, Jan 17, 2018 at 10:56:13AM +0800, jianchao.wang wrote:
> Hi ming
>
> Thanks for your patch and kindly response.
You are welcome!
>
> On 01/16/2018 11:32 PM, Ming Lei wrote:
> > OK, I got it, and it should have been the only corner case in which
> > all CPUs mapped to this hctx become offline, and I believe the following
> > patch should address this case, could you give a test?
> >
> > ---
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index c376d1b6309a..23f0f3ddffcf 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -1416,21 +1416,44 @@ static void __blk_mq_run_hw_queue(struct blk_mq_hw_ctx *hctx)
> > */
> > static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
> > {
> > + bool tried = false;
> > +
> > if (hctx->queue->nr_hw_queues == 1)
> > return WORK_CPU_UNBOUND;
> >
> > if (--hctx->next_cpu_batch <= 0) {
> > int next_cpu;
> > +select_cpu:
> >
> > next_cpu = cpumask_next_and(hctx->next_cpu, hctx->cpumask,
> > cpu_online_mask);
> > if (next_cpu >= nr_cpu_ids)
> > next_cpu = cpumask_first_and(hctx->cpumask, cpu_online_mask);
> >
> > - hctx->next_cpu = next_cpu;
> > + /*
> > + * No online CPU can be found here when running from
> > + * blk_mq_hctx_notify_dead(), so make sure hctx->next_cpu
> > + * is set correctly.
> > + */
> > + if (next_cpu >= nr_cpu_ids)
> > + hctx->next_cpu = cpumask_first_and(hctx->cpumask,
> > + cpu_possible_mask);
> > + else
> > + hctx->next_cpu = next_cpu;
> > hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
> > }
> >
> > + /*
> > + * Do unbound schedule if we can't find an online CPU for this hctx,
> > + * which should only happen when hctx->next_cpu is going DEAD.
> > + */
> > + if (!cpu_online(hctx->next_cpu)) {
> > + if (!tried) {
> > + tried = true;
> > + goto select_cpu;
> > + }
> > + return WORK_CPU_UNBOUND;
> > + }
> > return hctx->next_cpu;
> > }
>
> I have tested this patch. The panic was gone, but I got the following:
>
> [ 231.674464] WARNING: CPU: 0 PID: 263 at /home/will/u04/source_code/linux-block/block/blk-mq.c:1315 __blk_mq_run_hw_queue+0x92/0xa0
>
......
> It is here.
> __blk_mq_run_hw_queue()
> ....
> WARN_ON(!cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) &&
> cpu_online(hctx->next_cpu));
I think this warning is triggered after the CPU in hctx->next_cpu comes
online again, and it should be addressed by the change below, which applies
on top of the previous patch. Please test it and see whether the warning
goes away.
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 23f0f3ddffcf..0620ccb65e4e 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1452,6 +1452,9 @@ static int blk_mq_hctx_next_cpu(struct blk_mq_hw_ctx *hctx)
tried = true;
goto select_cpu;
}
+
+ /* re-select a CPU next time, in case hctx->next_cpu comes online again */
+ hctx->next_cpu_batch = 1;
return WORK_CPU_UNBOUND;
}
return hctx->next_cpu;
> ....
>
> To eliminate this risk totally, we could let blk_mq_hctx_next_cpu() return the CPU even if it is offline, and change the cpu_online() above to cpu_active().
> The kworkers of the per-cpu pool must have been migrated back by the time the CPU is set active.
> But there seems to be issues in DASD as your previous comment.
Yes, we can't break DASD.
> >>>>
> That is the original version of this patch, and both Christian and Stefan
> reported that system can't boot from DASD in this way[2], and I changed
> to AND with cpu_online_mask, then their system can boot well
> >>>>
>
> On the other hand, there is also risk in
>
> @@ -440,7 +440,7 @@ struct request *blk_mq_alloc_request_hctx(struct request_queue *q,
> blk_queue_exit(q);
> return ERR_PTR(-EXDEV);
> }
> - cpu = cpumask_first(alloc_data.hctx->cpumask);
> + cpu = cpumask_first_and(alloc_data.hctx->cpumask, cpu_online_mask);
> alloc_data.ctx = __blk_mq_get_ctx(q, cpu);
>
> what if the cpus in alloc_data.hctx->cpumask are all offlined ?
This one is crazy, and is used by NVMe only. It should be fine if the
passed 'hctx_idx' is derived from the currently running CPU, as in
blk_mq_map_queue(); but if not, bad things may happen.
Thanks,
Ming