Message-ID: <b89fcad1-ed97-4354-9892-14631794d95d@suse.de>
Date: Tue, 18 Mar 2025 15:10:29 +0100
From: Hannes Reinecke <hare@...e.de>
To: Daniel Wagner <dwagner@...e.de>
Cc: Daniel Wagner <wagi@...nel.org>, James Smart <james.smart@...adcom.com>,
 Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
 Chaitanya Kulkarni <kch@...dia.com>, Keith Busch <kbusch@...nel.org>,
 linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 04/18] nvmet-fcloop: refactor fcloop_nport_alloc

On 3/18/25 14:38, Daniel Wagner wrote:
> On Tue, Mar 18, 2025 at 12:02:48PM +0100, Hannes Reinecke wrote:
>>> -	list_for_each_entry(tmplport, &fcloop_lports, lport_list) {
>>> -		if (tmplport->localport->node_name == opts->wwnn &&
>>> -		    tmplport->localport->port_name == opts->wwpn)
>>> -			goto out_invalid_opts;
>>> +		INIT_LIST_HEAD(&nport->nport_list);
>>> +		nport->node_name = opts->wwnn;
>>> +		nport->port_name = opts->wwpn;
>>> +		refcount_set(&nport->ref, 1);
>>> -		if (tmplport->localport->node_name == opts->lpwwnn &&
>>> -		    tmplport->localport->port_name == opts->lpwwpn)
>>> -			lport = tmplport;
>>> +		spin_lock_irqsave(&fcloop_lock, flags);
>>> +		list_add_tail(&nport->nport_list, &fcloop_nports);
>>> +		spin_unlock_irqrestore(&fcloop_lock, flags);
>>>    	}
>> Hmm. I don't really like this pattern; there is a race condition
>> between lookup and allocation which can lead to duplicate entries
>> on the list.
> 
> Yes, that's not a good thing.
> 
>> Lookup and allocation really need to be under the same lock.
> 
> This means the new entry always has to be allocated first, and then we
> either free it again or insert it into the list, because it's not
> possible to allocate under the spinlock. Not that beautiful, but
> correctness wins.
Allocate first, and then free it if the entry is already present.
Slightly wasteful, but that's what it is.
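
Something like this (completely untested sketch; the struct and list names
follow your patch, while the options type and the inlined duplicate check
are only for illustration):

static struct fcloop_nport *
fcloop_nport_alloc(struct fcloop_ctrl_options *opts)
{
	struct fcloop_nport *newnport, *nport;
	unsigned long flags;

	/* Allocate outside the spinlock; GFP_KERNEL may sleep. */
	newnport = kzalloc(sizeof(*newnport), GFP_KERNEL);
	if (!newnport)
		return NULL;

	INIT_LIST_HEAD(&newnport->nport_list);
	newnport->node_name = opts->wwnn;
	newnport->port_name = opts->wwpn;
	refcount_set(&newnport->ref, 1);

	spin_lock_irqsave(&fcloop_lock, flags);
	/*
	 * Lookup and insertion under the same lock, so there is no
	 * window in which a second caller can add a duplicate.
	 */
	list_for_each_entry(nport, &fcloop_nports, nport_list) {
		if (nport->node_name == opts->wwnn &&
		    nport->port_name == opts->wwpn) {
			spin_unlock_irqrestore(&fcloop_lock, flags);
			/*
			 * Entry already present: drop the unused allocation.
			 * A real version would also take a reference on the
			 * existing entry before returning it.
			 */
			kfree(newnport);
			return nport;
		}
	}
	list_add_tail(&newnport->nport_list, &fcloop_nports);
	spin_unlock_irqrestore(&fcloop_lock, flags);

	return newnport;
}

The wasted kzalloc()/kfree() pair only happens on the duplicate path,
which should be rare.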

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@...e.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich
