Message-ID: <CA+khW7i+TScwPZ6-rcFKiXtxMm8hiZYJGH-wYb=7jBvDWg8pJQ@mail.gmail.com>
Date: Fri, 4 Feb 2022 10:26:59 -0800
From: Hao Luo <haoluo@...gle.com>
To: Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc: Alexei Starovoitov <ast@...nel.org>,
Andrii Nakryiko <andrii@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Martin KaFai Lau <kafai@...com>,
Song Liu <songliubraving@...com>, Yonghong Song <yhs@...com>,
KP Singh <kpsingh@...nel.org>,
Shakeel Butt <shakeelb@...gle.com>,
Joe Burton <jevburton.kernel@...il.com>,
Stanislav Fomichev <sdf@...gle.com>, bpf <bpf@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH RFC bpf-next v2 5/5] selftests/bpf: test for pinning for
cgroup_view link
On Thu, Feb 3, 2022 at 7:33 PM Alexei Starovoitov
<alexei.starovoitov@...il.com> wrote:
>
> On Thu, Feb 3, 2022 at 2:46 PM Hao Luo <haoluo@...gle.com> wrote:
> >
> > On Thu, Feb 3, 2022 at 10:04 AM Alexei Starovoitov
> > <alexei.starovoitov@...il.com> wrote:
> > >
> > > On Tue, Feb 01, 2022 at 12:55:34PM -0800, Hao Luo wrote:
> > > > +
> > > > +SEC("iter/cgroup_view")
> > > > +int dump_cgroup_lat(struct bpf_iter__cgroup_view *ctx)
> > > > +{
> > > > + struct seq_file *seq = ctx->meta->seq;
> > > > + struct cgroup *cgroup = ctx->cgroup;
> > > > + struct wait_lat *lat;
> > > > + u64 id;
> > > > +
> > > > + BPF_SEQ_PRINTF(seq, "cgroup_id: %8lu\n", cgroup->kn->id);
> > > > + lat = bpf_map_lookup_elem(&cgroup_lat, &id);
> > >
> > > Looks like "id = cgroup->kn->id" assignment is missing here?
> > >
> >
> > Ah, yes. I'll fix it.
> >
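Concretely, the fix is just the missing assignment before the lookup; the
corrected lines will read:

	BPF_SEQ_PRINTF(seq, "cgroup_id: %8lu\n", cgroup->kn->id);
	id = cgroup->kn->id;	/* the previously missing key assignment */
	lat = bpf_map_lookup_elem(&cgroup_lat, &id);
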
> > > Thanks a lot for this test. It explains the motivation well.
> > >
> > > It seems that the patches 1-4 are there to automatically
> > > supply cgroup pointer into bpf_iter__cgroup_view.
> > >
> > > Since user space needs to track a good part of cgroup dir operations,
> > > can we task it with the job of patches 1-4 as well?
> > > It can register notifier for cgroupfs operations and
> > > do mkdir in bpffs similarly _and_ parametrize 'view' bpf program
> > > with corresponding cgroup_id.
> > > Ideally there is no new 'view' program and it's a subset of 'iter'
> > > bpf program. They're already parametrizable.
> > > When 'iter' is pinned the user space can tell it which object it should
> > > iterate on. The 'view' will be an iterator of one element and
> > > argument to it can be cgroup_id.
> > > When user space pins the same 'view' program in a newly created bpffs
> > > directory it will parametrize it with a different cgroup_id.
> > > At the end the same 'view' program will be pinned in multiple directories
> > > with different cgroup_id arguments.
> > > This patch 5 will look very much the same, but patches 1-4 will not be
> > > necessary.
> > > Of course there are races between cgroup create/destroy and bpffs
> > > mkdir, prog pin operations, but they will be there regardless.
> > > The patch 1-4 approach is not race free either.
> >
> > Right. I tried to minimize the races between cgroupfs and bpffs in
> > this patchset. The cgroup and kernfs APIs called in this patchset
> > guarantee that the cgroup and kernfs objects stay alive once a
> > reference is taken, so at least some of their state, such as 'id',
> > remains valid.
> >
> > > Will that work?
> >
> > Thanks Alexei for the idea.
> >
> > The parameterization part sounds good. By 'parametrize', do you mean a
> > variable in iter prog (like the 'pid' variable in bpf_iter_task_vma.c
> > [1])? or some metadata of the program? I assume it's program's
> > metadata. Either parameterizing with cgroup_id or passing cgroup
> > object to the prog should work. The problem is at pinning.
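
To make my question concrete: by a variable in the prog I mean the pattern
below, i.e. a global that the loader fills in before load, so the program
reports only one cgroup. (Rough sketch; the names are made up and it assumes
the iter/cgroup_view context type from this series.)

#include "vmlinux.h"			/* or the selftests' bpf_iter.h */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>		/* BPF_SEQ_PRINTF */

char _license[] SEC("license") = "GPL";

/* Set by user space through the skeleton before load,
 * e.g. skel->rodata->target_cgroup_id = id; */
const volatile __u64 target_cgroup_id;

SEC("iter/cgroup_view")
int dump_one_cgroup(struct bpf_iter__cgroup_view *ctx)
{
	struct cgroup *cgroup = ctx->cgroup;

	if (!cgroup || cgroup->kn->id != target_cgroup_id)
		return 0;		/* skip everything but the requested cgroup */

	BPF_SEQ_PRINTF(ctx->meta->seq, "cgroup_id: %8llu\n", cgroup->kn->id);
	return 0;
}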
>
> The bpf_iter_link_info is used to parametrize the iterator.
> The map iterator will iterate the given map_fd.
> iirc pinning is not parameterizable yet,
> but that's not difficult to add.
>
I can take a look at that. This will be useful in our use case.
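To be specific about what I understand the existing mechanism to be: for the
map-element iterator the parameter is passed at attach time, roughly like the
sketch below (libbpf, the helper name is made up). If I read it right, the
missing piece is a way to pass the same kind of link_info at pin time, per
directory.

#include <bpf/libbpf.h>

/* 'prog' is an iter/bpf_map_elem program and 'map_fd' the map it should
 * walk; error handling is mostly omitted. */
static struct bpf_link *attach_and_pin_map_iter(struct bpf_program *prog,
						int map_fd, const char *path)
{
	union bpf_iter_link_info linfo = { .map.map_fd = map_fd };
	DECLARE_LIBBPF_OPTS(bpf_iter_attach_opts, opts,
			    .link_info = &linfo,
			    .link_info_len = sizeof(linfo));
	struct bpf_link *link;

	/* the parameter (which map to iterate) travels with the link */
	link = bpf_program__attach_iter(prog, &opts);
	if (libbpf_get_error(link))
		return NULL;

	/* pinning only exposes the already-parametrized link; there is
	 * currently no way to supply a different parameter per pin */
	bpf_link__pin(link, path);
	return link;
}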
>
> > In our use case, we can't ask the users who create cgroups to do the
> > pinning. Pinning requires root privilege. In our use case, we have
> > non-root users who can create cgroup directories and still want to
> > read bpf stats. They can't do pinning by themselves. This is why
> > inheritance is a requirement for us. With inheritance, they only need
> > to mkdir in cgroupfs and bpffs (unprivileged operations), no pinning
> > operation is required. Patches 1-4 are needed to implement inheritance.
> >
> > It's also not a good idea in our use case to add a userspace
> > privileged process to monitor cgroupfs operations and perform the
> > pinning. It's more complex and has a higher maintenance cost and
> > runtime overhead, compared to the solution of asking whoever makes
> > cgroups to mkdir in bpffs. The other problem is: if there are nodes in
> > the data center that don't have the userspace process deployed, the
> > stats will be unavailable, which is a no-no for some of our users.
>
> The commit log says that there will be a daemon that does that
> monitoring of cgroupfs. And that daemon needs to mkdir
> directories in bpffs when a new cgroup is created, no?
> The kernel is only doing inheritance of bpf progs into
> new dirs. I think that daemon can pin as well.
>
> The cgroup creation is typically managed by an agent like systemd.
> Sounds like you have your own agent that creates cgroups?
> If so it has to be privileged and it can mkdir in bpffs and pin too ?
Ah, yes, we have our own daemon to manage cgroups. That daemon creates
the top-level cgroup for each job to run inside. However, the job can
create its own cgroups inside the top-level cgroup, for fine-grained
resource control. This doesn't go through the daemon. The job-created
cgroups don't have the pinned objects and this is a no-no for our
users.