Message-ID: <YNqL+3LDsIPKm1ol@T590>
Date:   Tue, 29 Jun 2021 10:56:59 +0800
From:   Ming Lei <ming.lei@...hat.com>
To:     Wen Xiong <wenxiong@...ibm.com>
Cc:     dwagner@...e.de, james.smart@...adcom.com,
        linux-kernel@...r.kernel.org, sagi@...mberg.me
Subject: Re: [PATCH 1/1] block: System crashes when cpu hotplug + bouncing port

Hi Wen Xiong,

On Tue, Jun 29, 2021 at 02:43:42AM +0000, Wen Xiong wrote:
> >> NVMe users have to pass a correct hctx_idx to blk_mq_alloc_request_hctx(),
> >> but from the info you provided, they don't pass a valid hctx_idx to
> >> blk-mq, so q->queue_hw_ctx[hctx_idx] is NULL and the kernel panics.
>
> Hi Ming,
>
> Daniel's two patches didn't fix the crash. My patch is on top of those
> two patches; that is why I am continuing to debug the issue.

Can you provide the dmesg log after applying Daniel's patches?

Yeah, one known issue is that the following line in blk_mq_alloc_request_hctx()
won't work correctly even with Daniel's patches applied:

	data.ctx = __blk_mq_get_ctx(q, cpu);

Is that the kernel crash you observed?
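
For reference, that line sits in a path that looks roughly like this (a
sketch paraphrased from mainline blk-mq of that period; exact details may
differ in your tree):

	data.hctx = q->queue_hw_ctx[hctx_idx];	/* NULL if hctx_idx is invalid */
	if (!blk_mq_hw_queue_mapped(data.hctx))
		goto out_queue_exit;
	/*
	 * If every CPU in this hctx's cpumask is offline,
	 * cpumask_first_and() returns nr_cpu_ids, and __blk_mq_get_ctx()
	 * then reads an out-of-range per-cpu software context.
	 */
	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
	data.ctx = __blk_mq_get_ctx(q, cpu);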

>
> What hctx_idx do you suggest providing to blk-mq for this issue?
>
> Before cpu hotplug, num_online_cpus() is 16: CPUs 0-15 are online.
> After cpu hotplug, num_online_cpus() is 8: CPUs 0,1,2,3,8,9,10,11 are
> online and CPUs 4,5,6,7,12,13,14,15 are offline.
>
> What hctx_idx do you suggest providing to blk-mq when calling
> blk_mq_alloc_request_hctx() in this case?

At the very least, hctx_idx shouldn't be >= q->nr_hw_queues (i.e. set->nr_hw_queues).
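
Something along these lines in the caller would avoid the NULL dereference
(illustration only; the variable names and error handling are hypothetical,
not taken from any posted patch):

	/* Hypothetical caller-side bound check before blk-mq dereferences
	 * q->queue_hw_ctx[hctx_idx]. */
	if (hctx_idx >= q->nr_hw_queues)
		return ERR_PTR(-EINVAL);

	/* blk_mq_alloc_request_hctx() expects BLK_MQ_REQ_NOWAIT (or
	 * BLK_MQ_REQ_RESERVED) to be set. */
	rq = blk_mq_alloc_request_hctx(q, REQ_OP_DRV_OUT,
				       BLK_MQ_REQ_NOWAIT, hctx_idx);
	if (IS_ERR(rq))
		return rq;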

Also, can you collect the queue mapping log?

# ./dump-qmap /dev/nvme1n1


[1] http://people.redhat.com/minlei/tests/tools/dump-qmap


Thanks, 
Ming
