Message-ID: <20200730013836.GA637520@carbon.dhcp.thefacebook.com>
Date: Wed, 29 Jul 2020 18:38:36 -0700
From: Roman Gushchin <guro@...com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
CC: bpf <bpf@...r.kernel.org>, Networking <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Kernel Team <kernel-team@...com>,
open list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH bpf-next v2 29/35] bpf: libbpf: cleanup RLIMIT_MEMLOCK usage

On Mon, Jul 27, 2020 at 10:59:33PM -0700, Andrii Nakryiko wrote:
> On Mon, Jul 27, 2020 at 4:15 PM Roman Gushchin <guro@...com> wrote:
> >
> > On Mon, Jul 27, 2020 at 03:05:11PM -0700, Andrii Nakryiko wrote:
> > > On Mon, Jul 27, 2020 at 12:21 PM Roman Gushchin <guro@...com> wrote:
> > > >
> > > > As BPF no longer uses the memlock rlimit for memory accounting,
> > > > let's remove the related code from libbpf.
> > > >
> > > > BPF operations can no longer fail because of exceeding the limit.
> > > >
> > >
> > > They can't in the newest kernel, but libbpf will keep working and
> > > supporting old kernels for a very long time now. So please don't
> > > remove any of this.
> >
> > Yeah, good point, agree.
> > So we can just drop this patch from the series; no other changes
> > are needed.
> >
> > >
> > > But it would be nice to add a detection of whether kernel needs a
> > > RLIMIT_MEMLOCK bump or not. Is there some simple and reliable way to
> > > detect this from user-space?
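One way to probe it from user-space (just a sketch, not tested; the
helper name is made up): temporarily drop RLIMIT_MEMLOCK to 0 and try
to create a trivial map. With memcg-based accounting the creation
should succeed even under a zero limit; on older kernels the memlock
charge should fail with EPERM. Note the probe is process-wide and not
thread-safe, because setrlimit() affects the whole process.

#include <errno.h>
#include <string.h>
#include <unistd.h>
#include <sys/resource.h>
#include <sys/syscall.h>
#include <linux/bpf.h>

/* Returns 1 if the kernel uses memcg-based accounting (no
 * RLIMIT_MEMLOCK bump needed), 0 if it still charges against the
 * memlock rlimit, -1 if the probe was inconclusive.
 */
static int probe_memcg_accounting(void)
{
	struct rlimit old_rlim, zero_rlim = { 0, 0 };
	union bpf_attr attr;
	int map_fd, ret;

	if (getrlimit(RLIMIT_MEMLOCK, &old_rlim))
		return -1;
	if (setrlimit(RLIMIT_MEMLOCK, &zero_rlim))
		return -1;

	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_ARRAY;
	attr.key_size = 4;
	attr.value_size = 4;
	attr.max_entries = 1;

	map_fd = syscall(__NR_bpf, BPF_MAP_CREATE, &attr, sizeof(attr));
	if (map_fd >= 0) {
		close(map_fd);
		ret = 1;
	} else {
		ret = errno == EPERM ? 0 : -1;
	}

	/* restore the original limit regardless of the outcome */
	setrlimit(RLIMIT_MEMLOCK, &old_rlim);
	return ret;
}
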
Btw, do you mean we should add a new function to the libbpf API?
Or just extend pr_perm_msg() to skip the guessing on new kernels?
The problem with the latter is that it's called after an attempt to
create a map has already failed, so it's unlikely we'll be able to
create another map just to test whether "memlock" was the culprit.
It also raises the question of what we should do if the creation of
this temporary map fails: assume the old kernel and bump the limit?
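
If we went the pr_perm_msg() route, I guess the conservative option
would be to treat an inconclusive probe the same as an old kernel and
keep the current behavior. A rough sketch, reusing the hypothetical
probe_memcg_accounting() from above:

/* bump RLIMIT_MEMLOCK unless we positively detected memcg-based
 * accounting; any probe error falls back to the old-kernel behavior
 */
static void maybe_bump_memlock(void)
{
	struct rlimit r = { RLIM_INFINITY, RLIM_INFINITY };

	if (probe_memcg_accounting() == 1)
		return;

	setrlimit(RLIMIT_MEMLOCK, &r);
}
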
Idk, maybe it's better to just leave the userspace code as it is for some time.
Thanks!