Date:   Thu, 3 Mar 2022 13:52:12 -0800
From:   Hao Luo <haoluo@...gle.com>
To:     Yonghong Song <yhs@...com>
Cc:     Kumar Kartikeya Dwivedi <memxor@...il.com>,
        Alexei Starovoitov <ast@...nel.org>,
        Andrii Nakryiko <andrii@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Martin KaFai Lau <kafai@...com>,
        Song Liu <songliubraving@...com>,
        KP Singh <kpsingh@...nel.org>,
        Shakeel Butt <shakeelb@...gle.com>,
        Joe Burton <jevburton.kernel@...il.com>,
        Tejun Heo <tj@...nel.org>, joshdon@...gle.com, sdf@...gle.com,
        bpf@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf-next v1 8/9] bpf: Introduce cgroup iter

On Wed, Mar 2, 2022 at 11:34 PM Yonghong Song <yhs@...com> wrote:
>
>
>
> On 3/2/22 7:03 PM, Kumar Kartikeya Dwivedi wrote:
> > On Thu, Mar 03, 2022 at 07:33:16AM IST, Yonghong Song wrote:
> >>
> >>
> >> On 3/2/22 2:45 PM, Kumar Kartikeya Dwivedi wrote:
> >>> On Sat, Feb 26, 2022 at 05:13:38AM IST, Hao Luo wrote:
> >>>> Introduce a new type of iter prog: cgroup. Unlike other bpf_iter
> >>>> types, this iter doesn't iterate over a set of kernel objects.
> >>>> Instead, it is parameterized by a cgroup id and prints only that
> >>>> cgroup, so one needs to specify a target cgroup id when attaching
> >>>> this iter.
> >>>>
> >>>> The target cgroup's state can be read out via a link of this iter.
> >>>> Typically, we can monitor cgroup creation and deletion using
> >>>> sleepable tracing, create a corresponding directory in bpffs for
> >>>> each new cgroup, and pin a link parameterized by that cgroup's id
> >>>> in the directory. Reading the auto-pinned iter link then yields the
> >>>> cgroup's state. The output of the iter link is determined by the
> >>>> program. See the selftest test_cgroup_stats.c for an example.
> >>>>
> >>>> Signed-off-by: Hao Luo <haoluo@...gle.com>
> >>>> ---
> >>>>    include/linux/bpf.h            |   1 +
> >>>>    include/uapi/linux/bpf.h       |   6 ++
> >>>>    kernel/bpf/Makefile            |   2 +-
> >>>>    kernel/bpf/cgroup_iter.c       | 141 +++++++++++++++++++++++++++++++++
> >>>>    tools/include/uapi/linux/bpf.h |   6 ++
> >>>>    5 files changed, 155 insertions(+), 1 deletion(-)
> >>>>    create mode 100644 kernel/bpf/cgroup_iter.c
[...]
> >>>
> >>> I think in existing iterators we make a final call to seq_show with
> >>> v as NULL; is there a specific reason to do it differently here?
> >>> There is logic in bpf_iter.c to trigger the ->stop() callback again
> >>> when ->start() or ->next() returns NULL, to execute the BPF program
> >>> with a NULL p; see the comment above the stop label.
> >>>
> >>> If you do add the seq_show call with NULL, you'd also need to change
> >>> the ctx_arg_info from PTR_TO_BTF_ID to PTR_TO_BTF_ID_OR_NULL.
> >>
> >> Kumar, PTR_TO_BTF_ID should be okay since show() never takes a NULL
> >> cgroup. But we do have an issue with cgroup_iter_seq_stop() which I
> >> missed earlier.
> >>
> >
> > Right, I was wondering whether it should call seq_show for the
> > v == NULL case. All other iterators seem to do so. It's a bit
> > different here since it only iterates over a single cgroup, but it
> > would be nice to have some consistency.
>
> You are correct. I think it is okay since this iterates over only one
> cgroup, which is different from the other cases so far, where more
> than one object may be traversed. We may have other such use cases in
> the future, e.g., a single task, and I think we can abstract out
> common start()/next()/stop() callbacks for them. So it is okay that
> this differs from the existing iterators, since it is indeed different.
>

Right. This iter is special: it has a single element, so we don't
really need a preamble and epilogue; those can be coded directly in
the iter program. We can also guarantee that the cgroup passed in is
always valid, otherwise we wouldn't invoke show(), so passing
PTR_TO_BTF_ID is fine. I did it this way mainly to save a NULL check
inside the prog.
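
For illustration, a prog for this iter could then look roughly like
the following (untested sketch; the ctx struct name and its fields are
my assumption, modeled after the other iters rather than copied from
the selftest):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

char _license[] SEC("license") = "GPL";

SEC("iter/cgroup")
int dump_cgroup(struct bpf_iter__cgroup *ctx)
{
        struct seq_file *seq = ctx->meta->seq;
        struct cgroup *cgrp = ctx->cgroup;

        /* No NULL check on cgrp: show() only runs with a valid cgroup
         * because ctx_arg_info stays PTR_TO_BTF_ID. */
        BPF_SEQ_PRINTF(seq, "cgroup id: %llu\n", cgrp->kn->id);
        return 0;
}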

> >
> >> For cgroup_iter, the following is the current workflow:
> >>     start -> not NULL -> show -> next -> NULL -> stop
> >> or
> >>     start -> NULL -> stop
> >>
> >> So for cgroup_iter_seq_stop(), the input parameter 'v' will be NULL,
> >> so cgroup_put() is not actually called, i.e., the corresponding
> >> cgroup is never freed.
> >>
> >> There are two ways to fix the issue:
> >>    . call cgroup_put() in next() before returning NULL. This way,
> >>      stop() will be a noop.
> >>    . move cgroup_get_from_id() and cgroup_put() into
> >>      bpf_iter_attach_cgroup() and bpf_iter_detach_cgroup().
> >>
> >> I prefer the second approach as it is cleaner.
> >>

Yeah, the second approach should be fine. I had been thinking of
holding the cgroup's reference only while we are actually reading, so
that a cgroup can go away at any time and this iter takes a reference
only on a best-effort basis. With the second approach a reference is
held from attach to detach, but I think that should be fine. Let me
test.
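
That is, something along these lines (untested sketch; stashing the
held cgroup in bpf_iter_aux_info and the cgroup_id field in
bpf_iter_link_info are assumptions of mine):

static int bpf_iter_attach_cgroup(struct bpf_prog *prog,
                                  union bpf_iter_link_info *linfo,
                                  struct bpf_iter_aux_info *aux)
{
        struct cgroup *cgrp = cgroup_get_from_id(linfo->cgroup.cgroup_id);

        if (!cgrp)
                return -ENOENT;

        /* Hold the reference for the whole lifetime of the link. */
        aux->cgroup = cgrp;
        return 0;
}

static void bpf_iter_detach_cgroup(struct bpf_iter_aux_info *aux)
{
        /* Drop the reference taken at attach time; seq stop() can
         * then be a noop. */
        cgroup_put(aux->cgroup);
}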

> >
> > I think the current approach is also not safe if the cgroup id gets
> > reused, right? I.e. it only does cgroup_get_from_id() in seq_start(),
> > not at attach time, so it may not be the same cgroup when read(2) is
> > called. kernfs uses idr_alloc_cyclic(), which makes this less likely,
> > but since it wraps around to find a free ID the concern may not be
> > purely theoretical.
>
> As Alexei mentioned, the cgroup id is 64-bit, so a collision should
> be nearly impossible. Another option is to get an fd from the cgroup
> path and send the fd to the kernel. That would probably work.
>

A 64-bit cgroup id should be fine. Using a cgroup path and fd is
unnecessarily complicated, IMHO.
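
For the record, the fd variant would just be an open() of the cgroup
directory on the user side, e.g. (sketch; the path is made up, and the
kernel would resolve the fd back to a cgroup, cf. cgroup_get_from_fd()):

#include <fcntl.h>

/* Opening the cgroup directory yields an fd that identifies the
 * cgroup without relying on id reuse semantics. */
int cgroup_fd = open("/sys/fs/cgroup/my_cgroup", O_RDONLY);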

> [...]
