Message-ID: <CALOAHbAa-iMD4k2DEOun+RivUXiSMKR6ndCsqGZMseUbX_9+ww@mail.gmail.com>
Date: Tue, 26 Oct 2021 22:02:36 +0800
From: Yafang Shao <laoar.shao@...il.com>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Kees Cook <keescook@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Arnaldo Carvalho de Melo <arnaldo.melo@...il.com>,
Petr Mladek <pmladek@...e.com>,
Peter Zijlstra <peterz@...radead.org>,
Al Viro <viro@...iv.linux.org.uk>,
Valentin Schneider <valentin.schneider@....com>,
Qiang Zhang <qiang.zhang@...driver.com>,
robdclark <robdclark@...omium.org>,
christian <christian@...uner.io>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Ingo Molnar <mingo@...hat.com>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Martin Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
john fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>,
dennis.dalessandro@...nelisnetworks.com,
mike.marciniszyn@...nelisnetworks.com, dledford@...hat.com,
jgg@...pe.ca, linux-rdma@...r.kernel.org,
netdev <netdev@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
"linux-perf-use." <linux-perf-users@...r.kernel.org>,
linux-fsdevel@...r.kernel.org, Linux MM <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
kernel test robot <oliver.sang@...el.com>,
kbuild test robot <lkp@...el.com>,
Andrii Nakryiko <andrii.nakryiko@...il.com>
Subject: Re: [PATCH v6 08/12] tools/bpf/bpftool/skeleton: make it adopt to
task comm size change
On Tue, Oct 26, 2021 at 9:55 PM Yafang Shao <laoar.shao@...il.com> wrote:
>
> On Tue, Oct 26, 2021 at 9:12 PM Steven Rostedt <rostedt@...dmis.org> wrote:
> >
> > On Tue, 26 Oct 2021 10:18:51 +0800
> > Yafang Shao <laoar.shao@...il.com> wrote:
> >
> > > > So, if we're ever going to copy these buffers out of the kernel (I
> > > > don't know what the object lifetime here in bpf is for "e", etc), we
> > > > should be zero-padding (as get_task_comm() does).
> > > >
> > > > Should this, instead, be using a bounce buffer?
> > >
> > > The comment in bpf_probe_read_kernel_str_common() says
> > >
> > > : /*
> > > : * The strncpy_from_kernel_nofault() call will likely not fill the
> > > : * entire buffer, but that's okay in this circumstance as we're probing
> > > : * arbitrary memory anyway similar to bpf_probe_read_*() and might
> > > : * as well probe the stack. Thus, memory is explicitly cleared
> > > : * only in error case, so that improper users ignoring return
> > > : * code altogether don't copy garbage; otherwise length of string
> > > : * is returned that can be used for bpf_perf_event_output() et al.
> > > : */
> > >
> > > It seems it doesn't matter whether the buffer is fully filled, since
> > > this is probing arbitrary memory anyway.
> > >
> > > >
> > > > get_task_comm(comm, task->group_leader);
> > >
> > > This helper can't be used by the BPF programs, as it is not exported to BPF.
> > >
> > > > bpf_probe_read_kernel_str(&e.comm, sizeof(e.comm), comm);
> >
> > I guess Kees is worried that e.comm will have something exported to user
> > space that it shouldn't. But since e is part of the BPF program, does the
> > BPF JIT take care to make sure everything on its stack is zeroed out, such
> > that a user BPF couldn't just read various items off its stack and, by
> > doing so, see kernel memory it shouldn't be seeing?
> >
>
Ah, you mean the BPF JIT may already avoid leaking information to user space.
I will check the BPF JIT code first.
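To make the concern concrete, here is a minimal sketch of the kind of
program under discussion (illustrative only; the struct and map names
are invented for the example and this is not code from the patch):

/*
 * Illustrative sketch only, not code from the patch. "struct event"
 * and the "events" map are made-up names.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>

struct event {
	__u32 pid;
	char comm[16];
};

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(__u32));
	__uint(value_size, sizeof(__u32));
} events SEC(".maps");

SEC("tracepoint/sched/sched_switch")
int sketch(void *ctx)
{
	struct task_struct *task = (struct task_struct *)bpf_get_current_task();
	struct task_struct *leader = BPF_CORE_READ(task, group_leader);
	/* Zero-init matters here: bpf_probe_read_kernel_str() may write
	 * fewer than sizeof(e.comm) bytes, while bpf_perf_event_output()
	 * copies the whole struct out regardless. */
	struct event e = {};

	e.pid = bpf_get_current_pid_tgid() >> 32;
	bpf_probe_read_kernel_str(&e.comm, sizeof(e.comm), leader->comm);
	bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
			      &e, sizeof(e));
	return 0;
}

char LICENSE[] SEC("license") = "GPL";

Without the zero-initialization of 'e', the bytes of e.comm past the
copied string would be whatever the program previously left in those
stack slots, and they would be copied out to user space verbatim.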
> Understood.
> It can leak information to the user if the user buffer is large enough.
>
>
> > I'm guessing it does, otherwise this would be a bigger issue than this
> > patch series.
> >
>
> I will think about how to fix it.
> At first glance, it seems we'd better introduce a new BPF helper like
> bpf_probe_read_kernel_str_pad().
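Something along these lines, perhaps. This is a hypothetical sketch
only: bpf_probe_read_kernel_str_pad() does not exist, and it assumes
strncpy_from_kernel_nofault() returns the copied length including the
trailing NUL on success, as its kerneldoc says; it is modeled on
bpf_probe_read_kernel_str_common() in kernel/trace/bpf_trace.c:

/* Hypothetical sketch of the proposed helper; not in the tree. */
static __always_inline long
probe_read_kernel_str_pad_common(void *dst, u32 size,
				 const void *unsafe_ptr)
{
	long ret;

	ret = strncpy_from_kernel_nofault(dst, unsafe_ptr, size);
	if (unlikely(ret < 0)) {
		/* Same as the existing helper: clear everything on
		 * error so careless callers don't copy garbage. */
		memset(dst, 0, size);
		return ret;
	}
	/* ret includes the trailing NUL on success, so dst[ret..size)
	 * is the stale tail; zero it, as get_task_comm() would. */
	memset(dst + ret, 0, size - ret);
	return ret;
}

That would keep the current error-path behavior while guaranteeing the
tail of the buffer never carries stale stack bytes.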
>
> --
> Thanks
> Yafang
--
Thanks
Yafang