Message-ID: <9bae2938-dcb6-de91-b16f-36ce8af8b7fb@oracle.com>
Date: Tue, 25 Jun 2019 10:51:31 +0800
From: Dongli Zhang <dongli.zhang@...cle.com>
To: wenbinzeng(曾文斌) <wenbinzeng@...cent.com>
Cc: Ming Lei <ming.lei@...hat.com>,
Wenbin Zeng <wenbin.zeng@...il.com>,
"axboe@...nel.dk" <axboe@...nel.dk>,
"keith.busch@...el.com" <keith.busch@...el.com>,
"hare@...e.com" <hare@...e.com>, "osandov@...com" <osandov@...com>,
"sagi@...mberg.me" <sagi@...mberg.me>,
"bvanassche@....org" <bvanassche@....org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet
mail)
On 6/25/19 10:27 AM, Ming Lei wrote:
> On Tue, Jun 25, 2019 at 02:14:46AM +0000, wenbinzeng(曾文斌) wrote:
>> Hi Ming,
>>
>>> -----Original Message-----
>>> From: Ming Lei <ming.lei@...hat.com>
>>> Sent: Tuesday, June 25, 2019 9:55 AM
>>> To: Wenbin Zeng <wenbin.zeng@...il.com>
>>> Cc: axboe@...nel.dk; keith.busch@...el.com; hare@...e.com; osandov@...com;
>>> sagi@...mberg.me; bvanassche@....org; linux-block@...r.kernel.org;
>>> linux-kernel@...r.kernel.org; wenbinzeng(曾文斌) <wenbinzeng@...cent.com>
>>> Subject: Re: [PATCH] blk-mq: update hctx->cpumask at cpu-hotplug(Internet mail)
>>>
>>> On Mon, Jun 24, 2019 at 11:24:07PM +0800, Wenbin Zeng wrote:
>>>> Currently hctx->cpumask is not updated when hot-plugging new cpus;
>>>> since kblockd_mod_delayed_work_on() is often called with
>>>> WORK_CPU_UNBOUND, the blk_mq_run_work_fn work item may run
>>>
>>> There are only two cases in which WORK_CPU_UNBOUND is applied:
>>>
>>> 1) single hw queue
>>>
>>> 2) multiple hw queue, and all CPUs in this hctx become offline
>>>
>>> For 1), all CPUs can be found in hctx->cpumask.
>>>
>>>> on the newly-plugged cpus; consequently, __blk_mq_run_hw_queue()
>>>> reports excessive "run queue from wrong CPU" messages because
>>>> cpumask_test_cpu(raw_smp_processor_id(), hctx->cpumask) returns false.
>>>
>>> The message means CPU hotplug race is triggered.
>>>
>>> Yeah, there is a big problem in blk_mq_hctx_notify_dead(): it is called
>>> after one CPU is dead but still runs this hw queue to dispatch requests,
>>> even though all CPUs in this hctx might have become offline.
>>>
>>> We have some discussion before on this issue:
>>>
>>> https://lore.kernel.org/linux-block/CACVXFVN729SgFQGUgmu1iN7P6Mv5+puE78STz8hj9J5bS828Ng@...l.gmail.com/
>>>
>>
>> There is another scenario; you can reproduce it by hot-plugging cpus into kvm guests via the qemu monitor (I believe virsh setvcpus --live can do the same thing), for example:
>> (qemu) cpu-add 1
>> (qemu) cpu-add 2
>> (qemu) cpu-add 3
>>
>> In this scenario, cpus 1, 2 and 3 are not visible at boot, and hctx->cpumask does not get synced when these cpus are added.
Here is how I played with it using the most recent qemu and linux,
booting the VM with 1 out of 4 vcpus online:
# qemu-system-x86_64 -hda disk.img \
-smp 1,maxcpus=4 \
-m 4096M -enable-kvm \
-device nvme,drive=lightnvme,serial=deadbeaf1 \
-drive file=nvme.img,if=none,id=lightnvme \
-vnc :0 \
-kernel /.../mainline-linux/arch/x86_64/boot/bzImage \
-append "root=/dev/sda1 init=/sbin/init text" \
-monitor stdio -net nic -net user,hostfwd=tcp::5022-:22
As Ming mentioned, after boot:
# cat /proc/cpuinfo | grep processor
processor : 0
# cat /sys/block/nvme0n1/mq/0/cpu_list
0
# cat /sys/block/nvme0n1/mq/1/cpu_list
1
# cat /sys/block/nvme0n1/mq/2/cpu_list
2
# cat /sys/block/nvme0n1/mq/3/cpu_list
3
# cat /proc/interrupts | grep nvme
24: 11 PCI-MSI 65536-edge nvme0q0
25: 78 PCI-MSI 65537-edge nvme0q1
26: 0 PCI-MSI 65538-edge nvme0q2
27: 0 PCI-MSI 65539-edge nvme0q3
28: 0 PCI-MSI 65540-edge nvme0q4
Then I hotplug a vcpu with "device_add
qemu64-x86_64-cpu,id=core1,socket-id=1,core-id=0,thread-id=0" in the qemu monitor.
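The one-queue-per-CPU cpu_list layout shown above is static: queues are spread over all *possible* CPUs at probe time, not just the online ones, which is why each queue already lists an offline vcpu before the hotplug. A rough Python sketch of that idea (greatly simplified; the real blk_mq_map_queues() also groups CPU siblings onto the same queue):

```python
def map_queues(nr_queues, possible_cpus):
    # Spread hw queues round-robin across all possible CPUs,
    # whether online or not -- a simplified model of the static
    # CPU-to-queue mapping blk-mq builds at probe time.
    return {cpu: cpu % nr_queues for cpu in possible_cpus}

# 4 possible vCPUs and 4 nvme I/O queues: each queue owns one CPU,
# matching the cpu_list output above even though only CPU 0 is
# online at boot.
print(map_queues(4, range(4)))  # {0: 0, 1: 1, 2: 2, 3: 3}
```

Because the mapping is fixed over possible CPUs, a newly onlined vcpu should already be present in its hctx->cpumask on a current kernel.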
Dongli Zhang
>
> It is CPU cold-plug; we are supposed to support it.
>
> The newly added CPUs should be visible to hctx, since we spread queues
> among all possible CPUs; please see blk_mq_map_queues() and
> irq_build_affinity_masks(), which is like static allocation of CPU
> resources.
>
> Otherwise, you might be using an old kernel, or there is a bug somewhere.
>
>>
>>>>
>>>> This patch added a cpu-hotplug handler into blk-mq, updating
>>>> hctx->cpumask at cpu-hotplug.
>>>
>>> This way isn't correct; hctx->cpumask should be kept in sync with the
>>> queue mapping.
>>
>> Please advise what I should do to deal with the above situation? Thanks a lot.
>
> As I shared in last email, there is one approach discussed, which seems
> doable.
>
> Thanks,
> Ming
>