Message-ID: <0caa9d00-3f69-4ade-b93b-eea307fe6f72@linux.ibm.com>
Date: Tue, 25 Nov 2025 16:43:25 +0530
From: Nilay Shroff <nilay@...ux.ibm.com>
To: Chaitanya Kulkarni <chaitanyak@...dia.com>,
Christoph Hellwig <hch@...radead.org>,
Chaitanya Kulkarni <ckulkarnilinux@...il.com>
Cc: "kbusch@...nel.org" <kbusch@...nel.org>, "hch@....de" <hch@....de>,
"hare@...e.de" <hare@...e.de>, "sagi@...mberg.me" <sagi@...mberg.me>,
"axboe@...nel.dk" <axboe@...nel.dk>,
"dlemoal@...nel.org" <dlemoal@...nel.org>,
"wagi@...nel.org" <wagi@...nel.org>,
"mpatocka@...hat.com" <mpatocka@...hat.com>,
"yukuai3@...wei.com" <yukuai3@...wei.com>,
"xni@...hat.com" <xni@...hat.com>,
"linan122@...wei.com" <linan122@...wei.com>,
"bmarzins@...hat.com" <bmarzins@...hat.com>,
"john.g.garry@...cle.com" <john.g.garry@...cle.com>,
"edumazet@...gle.com" <edumazet@...gle.com>,
"ncardwell@...gle.com" <ncardwell@...gle.com>,
"kuniyu@...gle.com" <kuniyu@...gle.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"dsahern@...nel.org" <dsahern@...nel.org>,
"kuba@...nel.org" <kuba@...nel.org>,
"pabeni@...hat.com" <pabeni@...hat.com>,
"horms@...nel.org" <horms@...nel.org>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-nvme@...ts.infradead.org" <linux-nvme@...ts.infradead.org>,
"linux-block@...r.kernel.org" <linux-block@...r.kernel.org>
Subject: Re: [RFC blktests fix PATCH] tcp: use GFP_ATOMIC in tcp_disconnect
On 11/25/25 12:58 PM, Chaitanya Kulkarni wrote:
> On 11/24/25 22:27, Christoph Hellwig wrote:
>> I don't think GFP_ATOMIC is right here, you want GFP_NOIO.
>>
>> And just use the scope API so that you don't have to pass a gfp_t
>> several layers down.
>>
>>
> are you saying something like this ?
>
> diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
> index 29ad4735fac6..56d0a3777a4d 100644
> --- a/drivers/nvme/host/tcp.c
> +++ b/drivers/nvme/host/tcp.c
> @@ -1438,17 +1438,28 @@ static void nvme_tcp_free_queue(struct nvme_ctrl *nctrl, int qid)
> struct nvme_tcp_ctrl *ctrl = to_tcp_ctrl(nctrl);
> struct nvme_tcp_queue *queue = &ctrl->queues[qid];
> unsigned int noreclaim_flag;
> + unsigned int noio_flag;
>
> if (!test_and_clear_bit(NVME_TCP_Q_ALLOCATED, &queue->flags))
> return;
>
> page_frag_cache_drain(&queue->pf_cache);
>
> + /**
> + * Prevent memory reclaim from triggering block I/O during socket
> + * teardown. The socket release path fput -> tcp_close ->
> + * tcp_disconnect -> tcp_send_active_reset may allocate memory, and
> + * allowing reclaim to issue I/O could deadlock if we're being called
> + * from block device teardown (e.g., del_gendisk -> elevator cleanup)
> + * which holds locks that the I/O completion path needs.
> + */
> + noio_flag = memalloc_noio_save();
> noreclaim_flag = memalloc_noreclaim_save();
> /* ->sock will be released by fput() */
> fput(queue->sock->file);
> queue->sock = NULL;
> memalloc_noreclaim_restore(noreclaim_flag);
> + memalloc_noio_restore(noio_flag);
>
> kfree(queue->pdu);
> mutex_destroy(&queue->send_mutex);
The memalloc_noreclaim_save() above should already prevent filesystem reclaim
(it sets PF_MEMALLOC, which disables direct reclaim altogether), so if the goal
is to avoid fs_reclaim, we should not need an additional memalloc_noio_save()
here. That makes me wonder whether we are looking at the correct code path: if
this teardown path (nvme_tcp_free_queue()) is indeed the one being executed, it
should already be avoiding filesystem reclaim in the first place.
Thanks,
--Nilay