Message-ID: <20190625015512.GC23777@ming.t460p>
Date: Tue, 25 Jun 2019 09:55:13 +0800
From: Ming Lei <ming.lei@...hat.com>
To: Wenbin Zeng <wenbin.zeng@...il.com>
Cc: axboe@...nel.dk, keith.busch@...el.com, hare@...e.com,
osandov@...com, sagi@...mberg.me, bvanassche@....org,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
Wenbin Zeng <wenbinzeng@...cent.com>
Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug
On Mon, Jun 24, 2019 at 11:24:07PM +0800, Wenbin Zeng wrote:
> Currently hctx->cpumask is not updated when hot-plugging new CPUs.
> Since there are many chances of kblockd_mod_delayed_work_on() getting
> called with WORK_CPU_UNBOUND, the workqueue work blk_mq_run_work_fn may run
There are only two cases in which WORK_CPU_UNBOUND is applied:

1) single hw queue

2) multiple hw queues, and all CPUs mapped to this hctx become offline

For 1), all CPUs can be found in hctx->cpumask.
> on the newly-plugged CPUs; consequently, __blk_mq_run_hw_queue()
> reports excessive "run queue from wrong CPU" messages because
> cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) returns false.
The message means a CPU-hotplug race has been triggered.

Yeah, there is a big problem in blk_mq_hctx_notify_dead(): it is called
after one CPU is dead, but it still runs this hw queue to dispatch
requests, even though all CPUs mapped to this hctx might have become
offline by then.
We have discussed this issue before:
https://lore.kernel.org/linux-block/CACVXFVN729SgFQGUgmu1iN7P6Mv5+puE78STz8hj9J5bS828Ng@mail.gmail.com/
>
> This patch added a cpu-hotplug handler into blk-mq, updating
> hctx->cpumask at cpu-hotplug.
This way isn't correct; hctx->cpumask should be kept in sync with the
queue mapping.
Thanks,
Ming