Message-ID: <ZxWwvF0Er-Aj-rtX@fedora>
Date: Mon, 21 Oct 2024 09:39:08 +0800
From: Ming Lei <ming.lei@...hat.com>
To: zhuxiaohui <zhuxiaohui400@...il.com>
Cc: axboe@...nel.dk, kbusch@...nel.org, hch@....de, sagi@...mberg.me,
linux-block@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-nvme@...ts.infradead.org,
Zhu Xiaohui <zhuxiaohui.400@...edance.com>
Subject: Re: [PATCH v1] blk-mq: add one blk_mq_req_flags_t type to support mq
ctx fallback
On Sun, Oct 20, 2024 at 10:40:41PM +0800, zhuxiaohui wrote:
> From: Zhu Xiaohui <zhuxiaohui.400@...edance.com>
>
> It is observed that connecting to an NVMe over Fabrics target always
> fails when 'nohz_full' is set.
>
> Commit a46c27026da1 ("blk-mq: don't schedule block kworker on
> isolated CPUs") clears isolated CPUs from hctx->cpumask, and when
> nvme connects to a remote target, it may fail on this stack:
>
> blk_mq_alloc_request_hctx+1
> __nvme_submit_sync_cmd+106
> nvmf_connect_io_queue+181
> nvme_tcp_start_queue+293
> nvme_tcp_setup_ctrl+948
> nvme_tcp_create_ctrl+735
> nvmf_dev_write+532
> vfs_write+237
> ksys_write+107
> do_syscall_64+128
> entry_SYSCALL_64_after_hwframe+118
>
> because the given blk_mq_hw_ctx->cpumask has been cleared, leaving no
> available blk_mq_ctx on the hw queue.
>
> This patch introduces a new blk_mq_req_flags_t flag 'BLK_MQ_REQ_ARB_MQ'
> as well as an nvme_submit_flags_t flag 'NVME_SUBMIT_ARB_MQ', which
> indicate that the block layer can fall back to a blk_mq_ctx whose CPU
> is not isolated.
blk_mq_alloc_request_hctx()
...
cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
...
It can happen without CPU isolation too, such as when this hctx has no
online CPUs; the two cases are actually the same from this viewpoint.
It is a long-standing problem for nvme-fc.
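
For clarity, below is a minimal userspace sketch (illustrative only, not
kernel code; the simplified first_and() helper and the fixed 8-CPU masks
are assumptions) of why an empty intersection between hctx->cpumask and
the online mask leaves blk_mq_alloc_request_hctx() with no blk_mq_ctx to
pick:

/*
 * Illustrative userspace sketch, not kernel code: when none of the CPUs
 * left in an hctx's mask (isolated CPUs removed) are online, the "first
 * CPU in both masks" lookup finds nothing, so there is no blk_mq_ctx to
 * run the request on and the allocation fails.
 */
#include <stdio.h>

#define NR_CPUS 8

/*
 * Simplified stand-in for cpumask_first_and(): lowest bit set in both
 * masks, or NR_CPUS if the intersection is empty (mirrors the kernel's
 * "cpu >= nr_cpu_ids" check).
 */
static int first_and(unsigned int a, unsigned int b)
{
	unsigned int both = a & b;

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		if (both & (1u << cpu))
			return cpu;
	return NR_CPUS;
}

int main(void)
{
	unsigned int online_mask = 0xff; /* CPUs 0-7 are online */
	unsigned int hctx_mask   = 0x00; /* every CPU of this hctx was isolated */

	int cpu = first_and(hctx_mask, online_mask);

	if (cpu >= NR_CPUS)
		printf("no usable blk_mq_ctx: request allocation fails\n");
	else
		printf("would use the blk_mq_ctx of CPU %d\n", cpu);
	return 0;
}
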
Thanks,
Ming