Message-ID: <9d18c084-afd5-4bd7-8650-496b88584ed4@flourine.local>
Date: Thu, 24 Apr 2025 13:13:48 +0200
From: Daniel Wagner <dwagner@...e.de>
To: Hannes Reinecke <hare@...e.de>
Cc: Daniel Wagner <wagi@...nel.org>,
James Smart <james.smart@...adcom.com>, Christoph Hellwig <hch@....de>, Sagi Grimberg <sagi@...mberg.me>,
Chaitanya Kulkarni <kch@...dia.com>, Keith Busch <kbusch@...nel.org>, linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v5 01/14] nvmet-fcloop: track ref counts for nports
On Thu, Apr 24, 2025 at 12:08:33PM +0200, Hannes Reinecke wrote:
> > + nport = fcloop_nport_lookup(nodename, portname);
> > + if (!nport)
> > + return -ENOENT;
> > + spin_lock_irqsave(&fcloop_lock, flags);
> > + tport = __unlink_target_port(nport);
> > spin_unlock_irqrestore(&fcloop_lock, flags);
> Hmm. This now has a race condition: we take the lock
> during lookup, drop the lock, then take the lock again to unlink
> the port.
> Please add a __fcloop_nport_lookup() function which doesn't
> take the lock, to avoid this race.
The lock protects the list iteration, not the object itself. The
lookup function increases the refcount, so the object doesn't get
freed after the unlock. How is this racing?