Message-ID: <YxhFe3EwqchC/fYf@krava>
Date: Wed, 7 Sep 2022 09:17:15 +0200
From: Jiri Olsa <olsajiri@...il.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Jiri Olsa <olsajiri@...il.com>,
syzbot <syzbot+2251879aa068ad9c960d@...kaller.appspotmail.com>,
Andrii Nakryiko <andrii@...nel.org>,
Alexei Starovoitov <ast@...nel.org>, bpf <bpf@...r.kernel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Hao Luo <haoluo@...gle.com>,
John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Stanislav Fomichev <sdf@...gle.com>,
Song Liu <song@...nel.org>,
syzkaller-bugs <syzkaller-bugs@...glegroups.com>,
Yonghong Song <yhs@...com>
Subject: Re: [syzbot] WARNING in bpf_bprintf_prepare (2)
On Tue, Sep 06, 2022 at 08:02:39PM -0700, Alexei Starovoitov wrote:
SNIP
> > > __mutex_lock_common kernel/locking/mutex.c:605 [inline]
> > > __mutex_lock+0x13c/0x1350 kernel/locking/mutex.c:747
> > > __pipe_lock fs/pipe.c:103 [inline]
> > > pipe_write+0x132/0x1be0 fs/pipe.c:431
> > > call_write_iter include/linux/fs.h:2188 [inline]
> > > new_sync_write fs/read_write.c:491 [inline]
> > > vfs_write+0x9e9/0xdd0 fs/read_write.c:578
> > > ksys_write+0x1e8/0x250 fs/read_write.c:631
> > > do_syscall_x64 arch/x86/entry/common.c:50 [inline]
> > > do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
> > > entry_SYSCALL_64_after_hwframe+0x63/0xcd
> >
> > looks like __bpf_trace_contention_begin needs a bpf_prog_active
> > check (like below, untested), which would prevent the recursion
> > and bail out after the 2nd invocation
> >
> > should be easy to reproduce, will check
> >
> > jirka
> >
> >
> > ---
> > diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> > index 6a13220d2d27..481b057cc8d9 100644
> > --- a/include/trace/bpf_probe.h
> > +++ b/include/trace/bpf_probe.h
> > @@ -4,6 +4,8 @@
> >  
> >  #ifdef CONFIG_BPF_EVENTS
> >  
> > +DECLARE_PER_CPU(int, bpf_prog_active);
> > +
> >  #undef __entry
> >  #define __entry entry
> >
> > @@ -82,7 +84,11 @@ static notrace void \
> >  __bpf_trace_##call(void *__data, proto) \
> >  { \
> >  	struct bpf_prog *prog = __data; \
> > +	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) \
> > +		goto out; \
> >  	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(prog, CAST_TO_U64(args)); \
> > +out: \
> > +	__this_cpu_dec(bpf_prog_active); \
> >  }
>
> I don't think we can use this big hammer here.
> raw_tp progs attached to different hooks need to be able to
> run on the same cpu, otherwise we will lose events.
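
right, to spell the concern out: with one shared per-cpu counter,
a prog whose execution trips a *different* hook also silences the
prog attached there, not just its own recursion.. quick standalone
userspace sketch of that (invented names, not kernel code):

  /* Models the shared per-cpu counter: prog A's own execution trips
   * the hook prog B is attached to, and B is skipped even though it
   * never recurses into itself. With a per-prog counter both would
   * run; only true self-recursion (A triggering A) is suppressed. */
  #include <stdio.h>

  static int bpf_prog_active;	/* models the shared this-cpu counter */

  static void hook_b(void)
  {
  	if (++bpf_prog_active != 1)
  		printf("prog B dropped - event lost\n");
  	else
  		printf("prog B runs\n");
  	bpf_prog_active--;
  }

  static void hook_a(void)
  {
  	if (++bpf_prog_active == 1) {
  		printf("prog A runs\n");
  		hook_b();	/* A's execution triggers B's hook */
  	}
  	bpf_prog_active--;
  }

  int main(void)
  {
  	hook_a();	/* prints "prog A runs", then "prog B dropped" */
  	return 0;
  }
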
might be a good place to use prog->active
I managed to reproduce it locally, will try that
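
something like below is what I have in mind - rough sketch, untested,
assuming the bpf_trace_run*() helpers in kernel/trace/bpf_trace.c keep
funnelling through __bpf_trace_run():

  /* Per-prog recursion guard instead of the global bpf_prog_active
   * counter: prog->active is the per-cpu counter the trampoline code
   * already uses, so only the prog that is actually recursing on this
   * cpu bails out, and other progs on this cpu keep running. */
  static __always_inline void
  __bpf_trace_run(struct bpf_prog *prog, u64 *args)
  {
  	cant_sleep();
  	if (unlikely(this_cpu_inc_return(*(prog->active)) != 1))
  		goto out;	/* this prog already runs on this cpu */
  	rcu_read_lock();
  	(void) bpf_prog_run(prog, args);
  	rcu_read_unlock();
  out:
  	this_cpu_dec(*(prog->active));
  }
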
jirka