Message-ID: <8a3093b6-d5b2-496d-828a-0667abbf1670@suse.de>
Date: Thu, 24 Apr 2025 14:10:20 +0200
From: Hannes Reinecke <hare@...e.de>
To: Daniel Wagner <dwagner@...e.de>
Cc: Daniel Wagner <wagi@...nel.org>, James Smart <james.smart@...adcom.com>,
 Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
 Chaitanya Kulkarni <kch@...dia.com>, Keith Busch <kbusch@...nel.org>,
 linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 10/14] nvmet-fcloop: don't wait for lport cleanup

On 4/24/25 13:48, Daniel Wagner wrote:
> On Thu, Apr 24, 2025 at 12:21:09PM +0200, Hannes Reinecke wrote:
>>> @@ -1776,7 +1763,7 @@ static void __exit fcloop_exit(void)
>>>    		spin_unlock_irqrestore(&fcloop_lock, flags);
>>> -		ret = __wait_localport_unreg(lport);
>>> +		ret = __localport_unreg(lport);
>>>    		if (ret)
>>>    			pr_warn("%s: Failed deleting local port\n", __func__);
>>>
>> And who is freeing the allocated fcloop_lsreq structures?
> 
> After a fcloop_lsreq is allocated, it is put on either rport->ls_list
> or tport->ls_list and later freed in fcloop_rport_lsrqst_work or
> fcloop_tport_lsrqst_work, respectively, in the non-error case (or in
> the callbacks).
> 
> That means when the localport is unregistered successfully there are no
> fcloop_lsreq structures left around. The issue is what to do when
> __localport_unreg returns an error.
> 
> This code could loop forever when port_state is not FC_OBJSTATE_ONLINE
> at the time nvme_fc_unregister_localport() is called. That could happen
> before this series, but it shouldn't be possible anymore. I suppose a
> pr_warn_once() and a sleep might be a good idea to avoid a busy loop?
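> Something like this, just as a sketch (untested; msleep() comes from
> linux/delay.h, and the interval is picked arbitrarily):
> 
> 	/* retry with a back-off instead of busy-looping */
> 	while ((ret = __localport_unreg(lport))) {
> 		pr_warn_once("%s: failed deleting local port, retrying\n",
> 			     __func__);
> 		msleep(100);
> 	}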

My point was rather: you call kmem_cache_destroy() unconditionally upon
exit, but the documentation says that all objects allocated from the
cache have to be freed before that call.
Yet the exit function doesn't do that.
The question is: what guarantees that the cache is empty upon exit?
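If there is no such guarantee, the exit path would have to drain
whatever is left before destroying the cache; roughly like this (a
sketch only; the cache and list member names are assumed from this
series, and locking is omitted):

	struct fcloop_lsreq *lsreq, *tmp;

	/* free any leftover requests; ditto for rport->ls_list */
	list_for_each_entry_safe(lsreq, tmp, &tport->ls_list, ls_list)
		kmem_cache_free(lsreq_cache, lsreq);

	kmem_cache_destroy(lsreq_cache);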

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@...e.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
