Message-ID: <ae28b0bc62c76db342c7d5c552eedb1dbc143e49.camel@kernel.org>
Date: Thu, 27 Jun 2024 15:38:43 +0800
From: Geliang Tang <geliang@...nel.org>
To: John Fastabend <john.fastabend@...il.com>
Cc: Jakub Sitnicki <jakub@...udflare.com>, "David S. Miller"
<davem@...emloft.net>, Jakub Kicinski <kuba@...nel.org>, Paolo Abeni
<pabeni@...hat.com>, David Ahern <dsahern@...nel.org>, Andrii Nakryiko
<andrii@...nel.org>, Eduard Zingerman <eddyz87@...il.com>, Mykola Lysenko
<mykolal@...com>, Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann
<daniel@...earbox.net>, Martin KaFai Lau <martin.lau@...ux.dev>, Song Liu
<song@...nel.org>, Yonghong Song <yonghong.song@...ux.dev>, KP Singh
<kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>, Eric Dumazet
<edumazet@...gle.com>, Hao Luo <haoluo@...gle.com>, Jiri Olsa
<jolsa@...nel.org>, Shuah Khan <shuah@...nel.org>, Mykyta Yatsenko
<yatsenko@...a.com>, Miao Xu <miaxu@...a.com>, Yuran Pereira
<yuran.pereira@...mail.com>, Huacai Chen <chenhuacai@...nel.org>, Tiezhu
Yang <yangtiezhu@...ngson.cn>, Geliang Tang <tanggeliang@...inos.cn>,
netdev@...r.kernel.org, bpf@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [PATCH bpf-next v2 1/4] skmsg: null check for sg_page in
sk_msg_recvmsg
Hi John,
On Wed, 2024-06-26 at 21:05 +0800, Geliang Tang wrote:
> On Tue, 2024-06-25 at 12:37 -0700, John Fastabend wrote:
> > Eric Dumazet wrote:
> > > On Tue, Jun 25, 2024 at 10:25 AM Geliang Tang
> > > <geliang@...nel.org>
> > > wrote:
> > > >
> > > > From: Geliang Tang <tanggeliang@...inos.cn>
> > > >
> > > > Run the following BPF selftests on Loongarch:
> > > >
> > > > ./test_progs -t sockmap_basic
> > > >
> > > > A Kernel panic occurs:
> > > >
> > > > '''
> > > > Oops[#1]:
> > > > CPU: 22 PID: 2824 Comm: test_progs Tainted: G
> > > > OE
> > > > 6.10.0-rc2+ #18
> > > > Hardware name: LOONGSON Dabieshan/Loongson-TC542F0, BIOS
> > > > Loongson-UDK2018-V4.0.11
> > > > pc 9000000004162774 ra 90000000048bf6c0 tp 90001000aa16c000 sp
> > > > 90001000aa16fb90
> > > > a0 0000000000000000 a1 0000000000000000 a2 0000000000000000 a3
> > > > 90001000aa16fd70
> > > > a4 0000000000000800 a5 0000000000000000 a6 000055557b63aae8 a7
> > > > 00000000000000cf
> > > > t0 0000000000000000 t1 0000000000004000 t2 0000000000000048 t3
> > > > 0000000000000000
> > > > t4 0000000000000001 t5 0000000000000002 t6 0000000000000001 t7
> > > > 0000000000000002
> > > > t8 0000000000000018 u0 9000000004856150 s9 0000000000000000 s0
> > > > 0000000000000000
> > > > s1 0000000000000000 s2 90001000aa16fd70 s3 0000000000000000 s4
> > > > 0000000000000000
> > > > s5 0000000000004000 s6 900010009284dc00 s7 0000000000000001 s8
> > > > 900010009284dc00
> > > > ra: 90000000048bf6c0 sk_msg_recvmsg+0x120/0x560
> > > > ERA: 9000000004162774 copy_page_to_iter+0x74/0x1c0
> > > > CRMD: 000000b0 (PLV0 -IE -DA +PG DACF=CC DACM=CC -WE)
> > > > PRMD: 0000000c (PPLV0 +PIE +PWE)
> > > > EUEN: 00000007 (+FPE +SXE +ASXE -BTE)
> > > > ECFG: 00071c1d (LIE=0,2-4,10-12 VS=7)
> > > > ESTAT: 00010000 [PIL] (IS= ECode=1 EsubCode=0)
> > > > BADV: 0000000000000040
> > > > PRID: 0014c011 (Loongson-64bit, Loongson-3C5000)
> > > > Modules linked in: bpf_testmod(OE) xt_CHECKSUM xt_MASQUERADE
> > > > xt_conntrack
> > > > Process test_progs (pid: 2824, threadinfo=0000000000863a31,
> > > > task=000000001cba0874)
> > > > Stack : 0000000000000001 fffffffffffffffc 0000000000000000
> > > > 0000000000000000
> > > > 0000000000000018 0000000000000000 0000000000000000
> > > > 90000000048bf6c0
> > > > 90000000052cd638 90001000aa16fd70 900010008bf51580
> > > > 900010009284f000
> > > > 90000000049f2b90 900010009284f188 900010009284f178
> > > > 90001000861d4780
> > > > 9000100084dccd00 0000000000000800 0000000000000007
> > > > fffffffffffffff2
> > > > 000000000453e92f 90000000049aae34 90001000aa16fd60
> > > > 900010009284f000
> > > > 0000000000000000 0000000000000000 900010008bf51580
> > > > 90000000049f2b90
> > > > 0000000000000001 0000000000000000 9000100084dc3a10
> > > > 900010009284f1ac
> > > > 90001000aa16fd40 0000555559953278 0000000000000001
> > > > 0000000000000000
> > > > 90001000aa16fdc8 9000000005a5a000 90001000861d4780
> > > > 0000000000000800
> > > > ...
> > > > Call Trace:
> > > > [<9000000004162774>] copy_page_to_iter+0x74/0x1c0
> > > > [<90000000048bf6c0>] sk_msg_recvmsg+0x120/0x560
> > > > [<90000000049f2b90>] tcp_bpf_recvmsg_parser+0x170/0x4e0
> > > > [<90000000049aae34>] inet_recvmsg+0x54/0x100
> > > > [<900000000481ad5c>] sock_recvmsg+0x7c/0xe0
> > > > [<900000000481e1a8>] __sys_recvfrom+0x108/0x1c0
> > > > [<900000000481e27c>] sys_recvfrom+0x1c/0x40
> > > > [<9000000004c076ec>] do_syscall+0x8c/0xc0
> > > > [<9000000003731da4>] handle_syscall+0xc4/0x160
> > > >
> > > > Code: 0010b09b 440125a0 0011df8d <28c10364> 0012b70c
> > > > 00133305 0013b1ac 0010dc84 00151585
> > > >
> > > > ---[ end trace 0000000000000000 ]---
> > > > Kernel panic - not syncing: Fatal exception
> > > > Kernel relocated by 0x3510000
> > > > .text @ 0x9000000003710000
> > > > .data @ 0x9000000004d70000
> > > > .bss @ 0x9000000006469400
> > > > ---[ end Kernel panic - not syncing: Fatal exception ]---
> > > > '''
> > > >
> > > > This is because "sg_page(sge)" is NULL in that case. This patch
> > > > adds a null check for it in sk_msg_recvmsg() to fix this error.
> > > >
> > > > Fixes: 604326b41a6f ("bpf, sockmap: convert to generic sk_msg
> > > > interface")
> > > > Signed-off-by: Geliang Tang <tanggeliang@...inos.cn>
> > > > ---
> > > > net/core/skmsg.c | 2 ++
> > > > 1 file changed, 2 insertions(+)
> > > >
> > > > diff --git a/net/core/skmsg.c b/net/core/skmsg.c
> > > > index fd20aae30be2..bafcc1e2eadf 100644
> > > > --- a/net/core/skmsg.c
> > > > +++ b/net/core/skmsg.c
> > > > @@ -432,6 +432,8 @@ int sk_msg_recvmsg(struct sock *sk, struct sk_psock *psock, struct msghdr *msg,
> > > >  			sge = sk_msg_elem(msg_rx, i);
> > > >  			copy = sge->length;
> > > >  			page = sg_page(sge);
> > > > +			if (!page)
> > > > +				goto out;
> > > >  			if (copied + copy > len)
> > > >  				copy = len - copied;
> > > >  			copy = copy_page_to_iter(page, sge->offset, copy, iter);
> > > > --
> > > > 2.43.0
> > > >
> > >
> > > This looks pretty much random to me.
> > >
> > > Please find the root cause, instead of desperately trying to fix
> > > 'tests'.
> >
> > If this happens then either we put a bad msg_rx on the queue (see a
> > few lines up) and we need to sort out why that msg_rx was built. Or
> > we walked
I think I have figured out the issue. It's caused by an empty skb
(skb->len == 0) being put on the queue.

In this case, in sk_psock_skb_ingress_enqueue(), num_sge is zero and no
page is attached to this sge (see sg_set_page()), but this empty sge is
still queued into the ingress_msg list.

Then in sk_msg_recvmsg(), this empty sge is dequeued, sg_page(sge)
returns a NULL page, and passing that NULL page to copy_page_to_iter()
makes the kernel panic.
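
To make that sequence concrete, here is a rough standalone C model of
the two steps. It is only an illustration: fake_sge, fake_msg,
build_msg and consume_msg are made-up stand-ins, not the actual kernel
structures or functions.

#include <stdio.h>

/* Simplified stand-ins for struct scatterlist / struct sk_msg. */
struct fake_sge {
	void *page;			/* what sg_page(sge) would return */
	unsigned int length;
};

struct fake_msg {
	struct fake_sge sg[4];
	int end;			/* number of populated entries */
};

/* Models the enqueue step: with skb->len == 0 no sg entry gets a page,
 * but the msg is still handed over to the consumer. */
static void build_msg(struct fake_msg *msg, unsigned int skb_len)
{
	int num_sge = skb_len ? 1 : 0;	/* empty skb -> zero entries */

	if (num_sge) {
		msg->sg[0].page = (void *)0x1000;
		msg->sg[0].length = skb_len;
	}
	msg->end = num_sge;		/* queued even when num_sge == 0 */
}

/* Models the start of the receive loop: it reads the first element
 * before knowing whether any data is actually there. */
static void consume_msg(const struct fake_msg *msg)
{
	const struct fake_sge *sge = &msg->sg[0];

	/* With an empty msg this prints page=(nil); the real code would
	 * pass that NULL page on to copy_page_to_iter() and oops. */
	printf("page=%p len=%u\n", sge->page, sge->length);
}

int main(void)
{
	struct fake_msg msg = { 0 };

	build_msg(&msg, 0);		/* enqueue an "empty skb" */
	consume_msg(&msg);
	return 0;
}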
To solve this, I think we should prevent an empty skb from being put on
the queue. My new modification is as follows:
diff --git a/net/core/skmsg.c b/net/core/skmsg.c
index fd20aae30be2..44952cdd1425 100644
--- a/net/core/skmsg.c
+++ b/net/core/skmsg.c
@@ -1184,7 +1184,7 @@ static int sk_psock_verdict_recv(struct sock *sk, struct sk_buff *skb)
 	rcu_read_lock();
 	psock = sk_psock(sk);
-	if (unlikely(!psock)) {
+	if (unlikely(!psock || !len)) {
 		len = 0;
 		tcp_eat_skb(sk, skb);
 		sock_drop(sk, skb);
--
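
For context, going by its name the failing test exercises a recv after
the peer has shut down its write side. A rough userspace sketch of that
read-after-shutdown pattern is below; it is an illustration only, using
a plain AF_UNIX socket pair instead of the TCP-in-sockmap setup the
real selftest uses.

#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
	int sv[2];
	char buf[16];
	ssize_t n;

	if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv))
		return 1;

	/* The selftest attaches a verdict program to a sockmap and uses
	 * TCP sockets; this only shows the shutdown + recv ordering that
	 * ends in a zero-length read on the receiver. */
	shutdown(sv[1], SHUT_WR);

	n = recv(sv[0], buf, sizeof(buf), 0);
	printf("recv returned %zd (expecting 0 = EOF)\n", n);

	close(sv[0]);
	close(sv[1]);
	return 0;
}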
WDYT? I'd like to hear your opinion.
Thanks,
-Geliang
> > off the
> > end of a scatter gather list and need to see why this test isn't
> > sufficient?
> >
> > } while ((i != msg_rx->sg.end) && !sg_is_last(sge))
> >
> > is this happening every time you run the command or did you run
> > this
> > for
> > a long iteration and eventually hit this? I don't see why this
> > would
> > be
>
> This happens every time the test_sockmap_skb_verdict_shutdown test in
> sockmap_basic is run. It hits this null page case on the X86_64
> platform too.
>
> > specific to your arch though.
>
> The kernel panics when a null page is passed to kmap_local_page() on
> Loongarch only, and this function is arch-specific. I think this
> issue is somehow related to Loongarch's memory management.
>
> Thanks,
> -Geliang
>
>
>