Message-ID: <408569e7-2b82-4eff-b767-79ce6ef6cae0@rbox.co>
Date: Wed, 4 Feb 2026 16:41:23 +0100
From: Michal Luczaj <mhal@...x.co>
To: Kuniyuki Iwashima <kuniyu@...gle.com>,
Martin KaFai Lau <martin.lau@...ux.dev>
Cc: bpf@...r.kernel.org, daniel@...earbox.net, davem@...emloft.net,
edumazet@...gle.com, horms@...nel.org, jakub@...udflare.com,
john.fastabend@...il.com, kuba@...nel.org, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org, pabeni@...hat.com
Subject: Re: [PATCH bpf] bpf, sockmap: Fix af_unix null-ptr-deref in proto update

On 2/4/26 08:58, Kuniyuki Iwashima wrote:
> On Tue, Feb 3, 2026 at 11:15 PM Martin KaFai Lau <martin.lau@...ux.dev> wrote:
>>
>> On 2/3/26 11:47 AM, Kuniyuki Iwashima wrote:
>>> From: Michal Luczaj <mhal@...x.co>
>>> Date: Tue, 3 Feb 2026 10:57:46 +0100
>>>> On 2/3/26 04:53, Martin KaFai Lau wrote:
>>>>> On 2/2/26 7:10 AM, Michal Luczaj wrote:
>>>>>> In related news, looks like bpf_iter_unix_seq_show() is missing
>>>>>> unix_state_lock(): lock_sock_fast() won't stop unix_release_sock(). E.g.
>>>>>> a bpf iterator can grab unix_sock::peer as it is being released.
>>>>>
>>>>> If the concern is the bpf iterator prog may use a released unix_peer(sk)
>>>>> pointer, it should be fine. The unix_peer(sk) pointer is not a trusted
>>>>> pointer to the bpf prog, so nothing bad will happen other than
>>>>> potentially reading incorrect values.
>>>>
>>>> But if the prog passes a released peer pointer to a bpf helper:
>>>>
>>>> BUG: KASAN: slab-use-after-free in bpf_skc_to_unix_sock+0x95/0xb0
>>>> Read of size 1 at addr ffff888110654c92 by task test_progs/1936
>>
>> hmm... bpf_skc_to_unix_sock is exposed to tracing. bpf_iter is a tracing
>> bpf prog.
>>
>>>
>>> Can you cook a patch for this? Probably something like below:
>>
>> This can help the bpf_iter but not the other tracing prog such as fentry.
>
> Oh well ... then bpf_skc_to_unix_sock() can be used even
> with SEQ_START_TOKEN at fentry of bpf_iter_unix_seq_show()??
>
> How about adding notrace to all af_unix bpf iterator functions?
>
> The procfs iterator holds a spinlock of the hashtable from
> ->start/next() to ->stop() to prevent the race with unix_release_sock().
>
> I think the other (non-iterator) functions can't be raced like this
> by a tracing prog.

But then there's SOCK_DGRAM, where unix_peer(sk) can be dropped without
releasing sk; see the AF_UNSPEC path in unix_dgram_connect(). I think
Martin is right: we can crash at many fentries.
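
Roughly, a prog along these lines should do it (a minimal sketch, not
necessarily the exact reproducer behind the splat below; prog name made
up, untested as written):

---8<---
// SPDX-License-Identifier: GPL-2.0
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("fentry/unix_shutdown")
int BPF_PROG(unix_shutdown_entry, struct socket *sock, int mode)
{
	struct unix_sock *u;
	struct sock *peer;

	u = bpf_skc_to_unix_sock(sock->sk);
	if (!u)
		return 0;

	/* Racy unix_peer(sk) read: no unix_state_lock() here, so a
	 * concurrent connect(AF_UNSPEC) can free peer before the
	 * helper below dereferences it.
	 */
	peer = u->peer;

	bpf_skc_to_unix_sock(peer);
	return 0;
}
---8<---

The kind of splat this race produces: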
BUG: KASAN: slab-use-after-free in bpf_skc_to_unix_sock+0xa4/0xb0
Read of size 2 at addr ffff888147d38890 by task test_progs/2495
Call Trace:
dump_stack_lvl+0x5d/0x80
print_report+0x170/0x4f3
kasan_report+0xe1/0x180
bpf_skc_to_unix_sock+0xa4/0xb0
bpf_prog_564a1c39c35d86a2_unix_shutdown_entry+0x8a/0x8e
bpf_trampoline_6442564662+0x47/0xab
unix_shutdown+0x9/0x880
__sys_shutdown+0xe1/0x160
__x64_sys_shutdown+0x52/0x90
do_syscall_64+0x6b/0x3a0
entry_SYSCALL_64_after_hwframe+0x76/0x7e
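
For completeness, the userspace side is just two racing loops. Again a
rough sketch (untested, error handling omitted, names made up):

---8<---
#include <pthread.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

static int s;

static void *shutter(void *arg)
{
	for (;;)
		shutdown(s, SHUT_RDWR);	/* triggers the fentry prog */
	return NULL;
}

int main(void)
{
	struct sockaddr unspec = { .sa_family = AF_UNSPEC };
	pthread_t t;

	s = socket(AF_UNIX, SOCK_DGRAM, 0);
	pthread_create(&t, NULL, shutter, NULL);

	for (;;) {
		struct sockaddr_un a = { .sun_family = AF_UNIX };
		socklen_t alen = sizeof(a);
		int p = socket(AF_UNIX, SOCK_DGRAM, 0);

		/* autobind p, then point s's unix_peer() at it */
		bind(p, (struct sockaddr *)&a, sizeof(sa_family_t));
		getsockname(p, (struct sockaddr *)&a, &alen);
		connect(s, (struct sockaddr *)&a, alen);

		/* close() leaves s holding the last reference ... */
		close(p);
		/* ... and connect(AF_UNSPEC) drops it, freeing the
		 * peer the fentry prog may have just read.
		 */
		connect(s, &unspec, sizeof(unspec));
	}
}
---8<---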
>
>>
>>>
>>> ---8<---
>>> diff --git a/net/unix/af_unix.c b/net/unix/af_unix.c
>>> index 02ebad6afac7..9c7e9fbde362 100644
>>> --- a/net/unix/af_unix.c
>>> +++ b/net/unix/af_unix.c
>>> @@ -3740,8 +3740,9 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
>>> return 0;
>>>
>>> slow = lock_sock_fast(sk);
>>> + unix_state_lock(sk);
>>>
>>> - if (unlikely(sk_unhashed(sk))) {
>>> +	if (unlikely(sock_flag(sk, SOCK_DEAD))) {
>>> ret = SEQ_SKIP;
>>> goto unlock;
>>> }
>>> @@ -3751,6 +3752,7 @@ static int bpf_iter_unix_seq_show(struct seq_file *seq, void *v)
>>> prog = bpf_iter_get_info(&meta, false);
>>> ret = unix_prog_seq_show(prog, &meta, v, uid);
>>> unlock:
>>> +	unix_state_unlock(sk);
>>> unlock_sock_fast(sk, slow);
>>> return ret;
>>> }
>>> ---8<---
>>>
>>> Thanks!
>>