Message-ID: <20240513092426.12297-1-kuniyu@amazon.com>
Date: Mon, 13 May 2024 18:24:26 +0900
From: Kuniyuki Iwashima <kuniyu@...zon.com>
To: <mhal@...x.co>
CC: <billy@...rlabs.sg>, <davem@...emloft.net>, <edumazet@...gle.com>,
<kuba@...nel.org>, <kuni1840@...il.com>, <kuniyu@...zon.com>,
<netdev@...r.kernel.org>, <pabeni@...hat.com>
Subject: Re: [PATCH v2 net] af_unix: Update unix_sk(sk)->oob_skb under sk_receive_queue lock.
From: Michal Luczaj <mhal@...x.co>
Date: Mon, 13 May 2024 11:14:39 +0200
> On 5/13/24 09:44, Kuniyuki Iwashima wrote:
> > From: Michal Luczaj <mhal@...x.co>
> > Date: Mon, 13 May 2024 08:40:34 +0200
> >> What I'm talking about is the quoted above (unchanged) part in manage_oob():
> >>
> >> if (!WARN_ON_ONCE(skb_unref(skb)))
> >> kfree_skb(skb);
> >
> > Ah, I got your point, good catch!
> >
> > Somehow I was thinking of the new GC, where an alive socket's recvq is
> > not touched and lockdep would end up with a false positive.
> >
> > We need to delay freeing oob_skb in that case like below.
> > ...
>
> So this is not a lockdep false positive after all?
>
> Here's my understanding: the only way manage_oob() can lead to an inverted locking
> order is when the receiver socket is _not_ in gc_candidates. And when it's not
> there, no risk of deadlock. What do you think?
For the new GC, it's a false positive, but for the old GC, it's not.

The old GC takes unix_gc_lock and could iterate over alive sockets if
they are linked to gc_inflight_list, and then each socket's recvq is
locked.

The new GC only touches a recvq when the socket is detected as dead,
meaning there's no recv() in progress for that socket.

This patch is for both GC implementations, so we need to free the skb
after unlocking recvq in manage_oob().
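
Something like this in manage_oob(), just a rough sketch (unlinked_skb
is an illustrative name here, not the final diff):

	struct sk_buff *unlinked_skb = NULL;

	spin_lock(&sk->sk_receive_queue.lock);
	...
	__skb_unlink(skb, &sk->sk_receive_queue);
	WRITE_ONCE(u->oob_skb, NULL);

	/* Drop the oob_skb reference here, but defer kfree_skb():
	 * the skb's destructor ends up in unix_notinflight(), which
	 * takes unix_gc_lock, and the old GC takes unix_gc_lock
	 * before each recvq lock, so freeing under recvq lock would
	 * invert the locking order.
	 */
	if (!WARN_ON_ONCE(skb_unref(skb)))
		unlinked_skb = skb;
	...
	spin_unlock(&sk->sk_receive_queue.lock);

	kfree_skb(unlinked_skb);	/* no-op if NULL */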