Date:   Thu, 30 Jul 2020 13:46:28 -0700
From:   Roman Gushchin <guro@...com>
To:     Andrii Nakryiko <andrii.nakryiko@...il.com>
CC:     bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <kernel-team@...com>,
        open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next v2 29/35] bpf: libbpf: cleanup RLIMIT_MEMLOCK usage

On Thu, Jul 30, 2020 at 12:39:40PM -0700, Andrii Nakryiko wrote:
> On Wed, Jul 29, 2020 at 6:38 PM Roman Gushchin <guro@...com> wrote:
> >
> > On Mon, Jul 27, 2020 at 10:59:33PM -0700, Andrii Nakryiko wrote:
> > > On Mon, Jul 27, 2020 at 4:15 PM Roman Gushchin <guro@...com> wrote:
> > > >
> > > > On Mon, Jul 27, 2020 at 03:05:11PM -0700, Andrii Nakryiko wrote:
> > > > > On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin <guro@...com> wrote:
> > > > > >
> > > > > > As bpf no longer uses the memlock rlimit for memory accounting,
> > > > > > let's remove the related code from libbpf.
> > > > > >
> > > > > > BPF operations can no longer fail because of exceeding this limit.
> > > > > >
> > > > >
> > > > > They can't in the newest kernel, but libbpf will keep working with
> > > > > and supporting old kernels for a very long time to come. So please
> > > > > don't remove any of this.
> > > >
> > > > Yeah, good point, agreed.
> > > > So we can just drop this patch from the series; no other changes
> > > > are needed.
> > > >
> > > > >
> > > > > But it would be nice to add detection of whether the kernel needs a
> > > > > RLIMIT_MEMLOCK bump or not. Is there some simple and reliable way to
> > > > > detect this from user space?
> >
> > Btw, do you mean we should add a new function to the libbpf API?
> > Or just extend pr_perm_msg() to skip guessing on new kernels?
> >
> 
> I think we have to do both. There is libbpf_util.h in libbpf; we could
> add two functions there:
> 
> - libbpf_needs_memlock(), which would return true/false depending on
> whether the kernel is old and needs a RLIMIT_MEMLOCK bump;
> - as a convenience, we could also add libbpf_inc_memlock_by() and
> libbpf_set_memlock_to(), which would optionally (if the kernel needs it)
> adjust RLIMIT_MEMLOCK.
> 
> I think for your patch set, given it's pretty big already, let's not
> touch the runqslower, libbpf, and perf code (I think it's fine to just
> remove the memlock adjustment from samples/bpf), and we'll deal with
> detection and optional bumping of RLIMIT_MEMLOCK as a separate patch
> once your changes land.

Ok, works for me. Let me repost the kernel part + samples as v3.
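
For context, the RLIMIT_MEMLOCK adjustment being discussed is the usual
setrlimit() bump that samples/bpf and similar tools have carried so far.
A minimal sketch of that pattern (not code from this series):

#include <stdio.h>
#include <sys/resource.h>

/* Classic RLIMIT_MEMLOCK bump used by BPF tooling on kernels that still
 * charge BPF objects against the memlock rlimit; with memcg-based
 * accounting it becomes unnecessary.
 */
static int bump_memlock_rlimit(void)
{
	struct rlimit r = {
		.rlim_cur = RLIM_INFINITY,
		.rlim_max = RLIM_INFINITY,
	};

	if (setrlimit(RLIMIT_MEMLOCK, &r)) {
		perror("setrlimit(RLIMIT_MEMLOCK)");
		return -1;
	}
	return 0;
}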

> 
> 
> > The problem with the latter one is that it's called on a failed attempt
> > to create a map, so it's unlikely we'll be able to create a new one just
> > to test for the "memlock" value. But it also raises the question of what
> > we should do if the creation of this temporary map fails. Assume the old
> > kernel and bump the limit?
> 
> Yeah, I think we'll have to make assumptions like that. Ideally, of
> course, detecting this would just be a simple sysfs value or something,
> I don't know. Maybe there is already a way for the kernel to
> communicate something like that?

For instance, we have /sys/kernel/cgroup/features for cgroup features:
it's a list of supported mount options for the cgroup filesystem.
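
A user-space check against that file can just scan it line by line; the
"nsdelegate" token below is only an example of a feature name that may
be listed there:

#include <stdio.h>
#include <string.h>

/* Check whether a feature token (e.g. "nsdelegate") is listed in
 * /sys/kernel/cgroup/features, which reports one feature per line.
 */
static int cgroup_feature_supported(const char *feature)
{
	FILE *f = fopen("/sys/kernel/cgroup/features", "r");
	char line[64];
	int found = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		line[strcspn(line, "\n")] = '\0';
		if (!strcmp(line, feature)) {
			found = 1;
			break;
		}
	}
	fclose(f);
	return found;
}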

I don't know if bpf deserves something similar, but as far as I remember,
we discussed it a couple of years ago, and at that time the consensus
was that it's too hard to keep such a list up to date, so userspace
should just try and fail. I don't know if that's still valid.
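
As a sketch of that "just try and fail" approach, user space could
temporarily drop the soft RLIMIT_MEMLOCK to zero and attempt to create a
tiny map: on kernels that still charge BPF memory against memlock the
creation fails with EPERM, while on memcg-accounted kernels it succeeds.
The helper below is only illustrative, not an existing libbpf API:

#include <errno.h>
#include <linux/bpf.h>
#include <string.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative probe, not an existing libbpf API. Returns 1 if the
 * kernel needs a RLIMIT_MEMLOCK bump, 0 if not, <0 on error.
 */
static int probe_needs_memlock_bump(void)
{
	union bpf_attr attr;
	struct rlimit saved, probe;
	int fd, needs;

	if (getrlimit(RLIMIT_MEMLOCK, &saved))
		return -errno;
	probe = saved;
	probe.rlim_cur = 0;	/* keep rlim_max so we can restore it */
	if (setrlimit(RLIMIT_MEMLOCK, &probe))
		return -errno;

	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_ARRAY;
	attr.key_size = 4;
	attr.value_size = 4;
	attr.max_entries = 1;

	/* EPERM here means the old memlock-based accounting is in effect. */
	fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
	needs = (fd < 0 && errno == EPERM) ? 1 : 0;
	if (fd >= 0)
		close(fd);

	setrlimit(RLIMIT_MEMLOCK, &saved);	/* restore the caller's limit */
	return needs;
}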

Thank you!
