Message-ID: <2025021019163296203221@cestc.cn>
Date: Mon, 10 Feb 2025 19:16:33 +0800
From: "zhang.guanghui@...tc.cn" <zhang.guanghui@...tc.cn>
To: mgurtovoy <mgurtovoy@...dia.com>, 
	"Maurizio Lombardi" <mlombard@...backstore.eu>, 
	sagi <sagi@...mberg.me>, 
	kbusch <kbusch@...nel.org>, 
	sashal <sashal@...nel.org>, 
	chunguang.xu <chunguang.xu@...pee.com>
Cc: linux-kernel <linux-kernel@...r.kernel.org>, 
	linux-nvme <linux-nvme@...ts.infradead.org>, 
	linux-block <linux-block@...r.kernel.org>
Subject: Re: Re: nvme-tcp: fix a possible UAF when failing to send request

Hi,

Thank you for your reply.

In nvme-rdma, nvme_host_path_error(rq) is only called when submission fails with -EIO; should that prerequisite be ignored here? In nvme-tcp the failure condition is different: if we call nvme_host_path_error() there as well, the request can still reach nvme_complete_rq() -> nvme_retry_req(), at which point request->mq_hctx has already been freed and is NULL.
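For reference, the nvme-rdma error path I mean looks roughly like this (paraphrased from nvme_rdma_queue_rq() in drivers/nvme/host/rdma.c; exact details vary by kernel version):

err:
	if (err == -EIO)
		/* only a transport-level -EIO is completed as a host path error */
		ret = nvme_host_path_error(rq);
	else if (err == -ENOMEM || err == -EAGAIN)
		ret = BLK_STS_RESOURCE;
	else
		ret = BLK_STS_IOERR;
	nvme_cleanup_cmd(rq);
	return ret;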




zhang.guanghui@...tc.cn

From: Max Gurtovoy
Date: 2025-02-10 18:24
To: Maurizio Lombardi; zhang.guanghui@...tc.cn; sagi; kbusch; sashal; chunguang.xu
CC: linux-kernel; linux-nvme; linux-block
Subject: Re: nvme-tcp: fix a possible UAF when failing to send request

On 10/02/2025 12:01, Maurizio Lombardi wrote:
> On Mon Feb 10, 2025 at 8:41 AM CET, zhang.guanghui@...tc.cn wrote:
>> Hello
>>
> I guess you have to fix your mail client.
>
>> When using the nvme-tcp driver in a storage cluster, the driver may trigger a null-pointer dereference, causing the host to crash several times.
>> By analyzing the vmcore, we know the direct cause is that request->mq_hctx was used after free.
>>
>> CPU1                                                                   CPU2
>>
>> nvme_tcp_poll                                                          nvme_tcp_try_send  -- failed to send request 13
>
> This simply looks like a race condition between nvme_tcp_poll() and nvme_tcp_try_send().
> Personally, I would try to fix it inside the nvme-tcp driver without
> touching the core functions.
>
> Maybe nvme_tcp_poll should just ensure that io_work completes before
> calling nvme_tcp_try_recv(); the POLLING flag should then prevent io_work
> from getting rescheduled by the nvme_tcp_data_ready() callback.
>
> Maurizio
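For illustration, that idea might look roughly like the sketch below (untested, based on the current shape of nvme_tcp_poll(); details vary by kernel version):

static int nvme_tcp_poll(struct blk_mq_hw_ctx *hctx, struct io_comp_batch *iob)
{
	struct nvme_tcp_queue *queue = hctx->driver_data;
	struct sock *sk = queue->sock->sk;

	if (!test_bit(NVME_TCP_Q_LIVE, &queue->flags))
		return 0;

	set_bit(NVME_TCP_Q_POLLING, &queue->flags);
	/* wait for a concurrently running io_work to finish; with the
	 * POLLING bit set, nvme_tcp_data_ready() will not requeue it */
	flush_work(&queue->io_work);
	if (sk_can_busy_loop(sk) &&
	    skb_queue_empty_lockless(&sk->sk_receive_queue))
		sk_busy_loop(sk, true);
	nvme_tcp_try_recv(queue);
	clear_bit(NVME_TCP_Q_POLLING, &queue->flags);
	return queue->nr_cqe;
}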



 



It seems to me that the HOST_PATH_ERROR handling can be improved in
nvme-tcp.

In nvme-rdma we use nvme_host_path_error(rq) and nvme_cleanup_cmd(rq) in
case we fail to submit a command.

Can you try replacing the nvme_tcp_end_request(blk_mq_rq_from_pdu(req),
NVME_SC_HOST_PATH_ERROR) call with the similar logic we use in
nvme-rdma for host path error handling?
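A sketch of that replacement (untested; assuming nvme_tcp_fail_request() in drivers/nvme/host/tcp.c is the call site, and ignoring the async-request case):

static void nvme_tcp_fail_request(struct nvme_tcp_request *req)
{
	struct request *rq = blk_mq_rq_from_pdu(req);

	nvme_cleanup_cmd(rq);
	/* as in nvme-rdma: set NVME_SC_HOST_PATH_ERROR, mark the request
	 * complete, and finish it through nvme_complete_rq() */
	nvme_host_path_error(rq);
}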



 



 



 



 

