Message-ID: <CAAVpQUCshzAAHZQQ7sVE+3UdKmBV42bKudcdDR0KtaTcTqn5gA@mail.gmail.com>
Date: Tue, 3 Feb 2026 23:58:24 -0800
From: Kuniyuki Iwashima <kuniyu@...gle.com>
To: Martin KaFai Lau <martin.lau@...ux.dev>
Cc: mhal@...x.co, bpf@...r.kernel.org, daniel@...earbox.net,
davem@...emloft.net, edumazet@...gle.com, horms@...nel.org,
jakub@...udflare.com, john.fastabend@...il.com, kuba@...nel.org,
linux-kernel@...r.kernel.org, netdev@...r.kernel.org, pabeni@...hat.com
Subject: Re: [PATCH bpf] bpf, sockmap: Fix af_unix null-ptr-deref in proto update
On Tue, Feb 3, 2026 at 11:15 PM Martin KaFai Lau <martin.lau@...ux.dev> wrote:
>
> On 2/3/26 11:47 AM, Kuniyuki Iwashima wrote:
> > From: Michal Luczaj <mhal@...x.co>
> > Date: Tue, 3 Feb 2026 10:57:46 +0100
> >> On 2/3/26 04:53, Martin KaFai Lau wrote:
> >>> On 2/2/26 7:10 AM, Michal Luczaj wrote:
> >>>> In related news, it looks like bpf_iter_unix_seq_show() is missing
> >>>> unix_state_lock(): lock_sock_fast() won't stop unix_release_sock(). E.g.
> >>>> a bpf iterator can grab unix_sock::peer while it is being released.
> >>>
> >>> If the concern is the bpf iterator prog may use a released unix_peer(sk)
> >>> pointer, it should be fine. The unix_peer(sk) pointer is not a trusted
> >>> pointer to the bpf prog, so nothing bad will happen other than
> >>> potentially reading incorrect values.
> >>
> >> But if the prog passes a released peer pointer to a bpf helper:
> >>
> >> BUG: KASAN: slab-use-after-free in bpf_skc_to_unix_sock+0x95/0xb0
> >> Read of size 1 at addr ffff888110654c92 by task test_progs/1936
>
> hmm... bpf_skc_to_unix_sock is exposed to tracing. bpf_iter is a tracing
> bpf prog.
>
> >
> > Can you cook a patch for this? Probably like below:
>
> This can help the bpf_iter case, but not other tracing progs such as fentry.
Oh well... then bpf_skc_to_unix_sock() can be used even with
SEQ_START_TOKEN, from an fentry prog on bpf_iter_unix_seq_show()?
How about adding notrace to all af_unix bpf iterator functions?
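
For reference, the racy access Michal reported is roughly the shape
below (a minimal hypothetical sketch of an iter/unix prog, not his
actual reproducer; the prog name is made up):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char _license[] SEC("license") = "GPL";

SEC("iter/unix")
int dump_unix_peer(struct bpf_iter__unix *ctx)
{
	struct unix_sock *unix_sk = ctx->unix_sk;

	/* NULL on passes where no socket is attached (e.g. ->stop()). */
	if (!unix_sk)
		return 0;

	/* unix_sk->peer is an untrusted pointer here; unix_release_sock()
	 * can free the peer while the iterator is still running.
	 */
	bpf_skc_to_unix_sock(unix_sk->peer);	/* helper derefs it -> UAF */

	return 0;
}

unix_sk itself is pinned by lock_sock_fast() in the seq_show path, but
the peer it points at is not.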
The procfs iterator holds the hash table's bucket spinlock from
->start()/->next() until ->stop() to prevent this race with
unix_release_sock(). I think tracing progs attached to other
(non-iterator) functions cannot trigger such a racy access.
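
The procfs pattern mentioned above looks roughly like this (simplified
and paraphrased from net/unix/af_unix.c, not verbatim):

static struct sock *unix_get_first(struct seq_file *seq, loff_t *pos)
{
	unsigned long bucket = get_bucket(*pos);
	struct net *net = seq_file_net(seq);
	struct sock *sk;

	while (bucket < UNIX_HASH_SIZE) {
		spin_lock(&net->unx.table.locks[bucket]);

		sk = unix_from_bucket(seq, pos);
		if (sk)
			return sk;	/* bucket lock held across ->show() */

		spin_unlock(&net->unx.table.locks[bucket]);
		bucket = next_bucket(pos);
	}

	return NULL;
}

static void unix_seq_stop(struct seq_file *seq, void *v)
{
	struct sock *sk = v;

	/* The lock taken in ->start()/->next() is only dropped here, so
	 * unix_release_sock() cannot run on the socket being dumped.
	 */
	if (sk)
		spin_unlock(&seq_file_net(seq)->unx.table.locks[sk->sk_hash]);
}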
>
> >
> > ---8<---
> > diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
> > index 02ebad6afac7..9c7e9fbde362 100644
> > --- a/net/unix/af_unix.c
> > +++ b/net/unix/af_unix.c
> > @@ -3740,8 +3740,9 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
> >  		return 0;
> >  
> >  	slow = lock_sock_fast(sk);
> > +	unix_state_lock(sk);
> >  
> > -	if (unlikely(sk_unhashed(sk))) {
> > +	if (unlikely(sock_flag(sk, SOCK_DEAD))) {
> >  		ret = SEQ_SKIP;
> >  		goto unlock;
> >  	}
> > @@ -3751,6 +3752,7 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
> >  	prog = bpf_iter_get_info(&meta, false);
> >  	ret = unix_prog_seq_show(prog, &meta, v, uid);
> >  unlock:
> > +	unix_state_unlock(sk);
> >  	unlock_sock_fast(sk, slow);
> >  	return ret;
> >  }
> > ---8<---
> >
> > Thanks!
>
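
FWIW, the reason SOCK_DEAD under unix_state_lock() works: the release
path sets the flag and detaches the peer under the same lock, so the
locked check in the iterator cannot observe a half-released socket.
Heavily simplified sketch of unix_release_sock() (paraphrased, not
verbatim kernel code):

	unix_state_lock(sk);
	sock_orphan(sk);		/* sets SOCK_DEAD */
	WRITE_ONCE(sk->sk_state, TCP_CLOSE);
	skpair = unix_peer(sk);
	unix_peer(sk) = NULL;		/* peer detached under the same lock */
	unix_state_unlock(sk);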