Message-ID: <2421555d500d4efcad07956eead119cc@tencent.com>
Date: Tue, 25 Jun 2019 02:21:47 +0000
From: wenbinzeng(曾文斌) <wenbinzeng@...cent.com>
To: Dongli Zhang <dongli.zhang@...cle.com>,
Wenbin Zeng <wenbin.zeng@...il.com>
CC: "axboe@...nel.dk" <axboe@...nel.dk>,
"keith.busch@...el.com" <keith.busch@...el.com>,
"hare@...e.com" <hare@...e.com>,
"ming.lei@...hat.com" <ming.lei@...hat.com>,
"osandov@...com" <osandov@...com>,
"sagi@...mberg.me" <sagi@...mberg.me>,
"bvanassche@....org" <bvanassche@....org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet
mail)
Hi Dongli,
> -----Original Message-----
> From: Dongli Zhang <dongli.zhang@...cle.com>
> Sent: Tuesday, June 25, 2019 9:30 AM
> To: Wenbin Zeng <wenbin.zeng@...il.com>
> Cc: axboe@...nel.dk; keith.busch@...el.com; hare@...e.com; ming.lei@...hat.com;
> osandov@...com; sagi@...mberg.me; bvanassche@....org;
> linux-block@...r.kernel.org; linux-kernel@...r.kernel.org; wenbinzeng(曾文斌)
> <wenbinzeng@...cent.com>
> Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)
>
> Hi Wenbin,
>
> On 6/24/19 11:24 PM, Wenbin Zeng wrote:
> > Currently hctx->cpumask is not updated when new cpus are hot-plugged.
> > Since kblockd_mod_delayed_work_on() is often called with
> > WORK_CPU_UNBOUND, the blk_mq_run_work_fn work item may run on a
> > newly-plugged cpu, and __blk_mq_run_hw_queue() then reports excessive
> > "run queue from wrong CPU" messages because
> > cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) returns false.
> >
> > This patch adds a cpu-hotplug handler to blk-mq that updates
> > hctx->cpumask at cpu-hotplug time.
> >
> > Signed-off-by: Wenbin Zeng <wenbinzeng@...cent.com>
> > ---
> > block/blk-mq.c | 29 +++++++++++++++++++++++++++++
> > include/linux/blk-mq.h | 1 +
> > 2 files changed, 30 insertions(+)
> >
> > diff --git a/block/blk-mq.c b/block/blk-mq.c
> > index ce0f5f4..2e465fc 100644
> > --- a/block/blk-mq.c
> > +++ b/block/blk-mq.c
> > @@ -39,6 +39,8 @@
> > #include "blk-mq-sched.h"
> > #include "blk-rq-qos.h"
> >
> > +static enum cpuhp_state cpuhp_blk_mq_online;
> > +
> > static void blk_mq_poll_stats_start(struct request_queue *q);
> > static void blk_mq_poll_stats_fn(struct blk_stat_callback *cb);
> >
> > @@ -2215,6 +2217,21 @@ int blk_mq_alloc_rqs(struct blk_mq_tag_set *set, struct blk_mq_tags *tags,
> > return -ENOMEM;
> > }
> >
> > +static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
> > +{
> > + struct blk_mq_hw_ctx *hctx;
> > +
> > + hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);
> > +
> > + if (!cpumask_test_cpu(cpu, hctx->cpumask)) {
> > + mutex_lock(&hctx->queue->sysfs_lock);
> > + cpumask_set_cpu(cpu, hctx->cpumask);
> > + mutex_unlock(&hctx->queue->sysfs_lock);
> > + }
> > +
> > + return 0;
> > +}
> > +
>
> As this callback is registered for each hctx, when a cpu comes online
> it is called once per hctx.
>
> Take a 4-queue nvme device as an example (leaving aside other block
> devices such as loop). Suppose cpu=2 (out of 0, 1, 2 and 3) is offline.
> When we online cpu=2,
>
> blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=3
> blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=2
> blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=1
> blk_mq_hctx_notify_online() called: cpu=2 and blk_mq_hw_ctx->queue_num=0
>
> There is no need to set cpu 2 for blk_mq_hw_ctx->queue_num=[3, 1, 0]. I am
> afraid this patch would erroneously set cpumask for blk_mq_hw_ctx->queue_num=[3,
> 1, 0].
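
(Illustrative only: a minimal sketch of how the callback could restrict
itself to the hctx that the onlined cpu actually maps to. The lookup via
set->map[HCTX_TYPE_DEFAULT].mq_map is an assumption about the blk-mq
mapping layout and is not part of the posted patch.)

static int blk_mq_hctx_notify_online(unsigned int cpu, struct hlist_node *node)
{
	struct blk_mq_hw_ctx *hctx;
	struct blk_mq_tag_set *set;

	hctx = hlist_entry_safe(node, struct blk_mq_hw_ctx, cpuhp_online);
	set = hctx->queue->tag_set;

	/* Skip hctxs that the newly-onlined cpu does not map to. */
	if (set->map[HCTX_TYPE_DEFAULT].mq_map[cpu] != hctx->queue_num)
		return 0;

	if (!cpumask_test_cpu(cpu, hctx->cpumask)) {
		mutex_lock(&hctx->queue->sysfs_lock);
		cpumask_set_cpu(cpu, hctx->cpumask);
		mutex_unlock(&hctx->queue->sysfs_lock);
	}

	return 0;
}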
>
> I previously submitted the patch below, which explains the same issue
> for the cpu-removal case; unfortunately it has not been merged yet.
>
> https://patchwork.kernel.org/patch/10889307/
>
>
> Another thing: during initialization, hctx->cpumask should already be
> set, even for cpus that are offline. Would you please explain the case
> where hctx->cpumask is not set correctly, e.g., how to reproduce it with
> a kvm guest running scsi/virtio/nvme/loop?
My scenario is:
The kvm guest is started with a single cpu, so during initialization only one cpu is visible to the kernel.
After boot, I hot-add some cpus via the qemu monitor (I believe virsh setvcpus --live can do the same thing), for example:
(qemu) cpu-add 1
(qemu) cpu-add 2
(qemu) cpu-add 3
In this scenario, hctx->cpumask doesn't get updated when these cpus are added.
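
(For illustration, a rough sketch of how the new cpuhp state could be
registered so that blk_mq_hctx_notify_online() fires when such cpus are
added. The registration hunks were not quoted above; the helper names
blk_mq_cpuhp_init() and blk_mq_hctx_add_cpuhp() below are hypothetical,
not the actual remainder of the patch.)

/* Once at init time: allocate a dynamic multi-instance hotplug state. */
static int blk_mq_cpuhp_init(void)
{
	int ret;

	ret = cpuhp_setup_state_multi(CPUHP_AP_ONLINE_DYN, "block/mq:online",
				      blk_mq_hctx_notify_online, NULL);
	if (ret < 0)
		return ret;
	cpuhp_blk_mq_online = ret;
	return 0;
}

/* Per hctx (e.g. from blk_mq_init_hctx()): hook this hctx into the state
 * so the online callback is invoked for it on every cpu-online event. */
static void blk_mq_hctx_add_cpuhp(struct blk_mq_hw_ctx *hctx)
{
	cpuhp_state_add_instance_nocalls(cpuhp_blk_mq_online,
					 &hctx->cpuhp_online);
}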
>
> Dongli Zhang