Open Source and information security mailing list archives
Date:   Thu, 13 May 2021 21:20:27 -0700
From:   Alexei Starovoitov <alexei.starovoitov@...il.com>
To:     xufeng zhang <yunbo.xufeng@...ux.alibaba.com>
Cc:     KP Singh <kpsingh@...nel.org>, Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        bpf <bpf@...r.kernel.org>, LKML <linux-kernel@...r.kernel.org>,
        LSM List <linux-security-module@...r.kernel.org>,
        Florent Revest <revest@...omium.org>,
        Brendan Jackman <jackmanb@...omium.org>,
        Yonghong Song <yhs@...com>, Song Liu <songliubraving@...com>,
        Martin KaFai Lau <kafai@...com>,
        John Fastabend <john.fastabend@...il.com>,
        Joe Stringer <joe@...ium.io>,
        Quentin Monnet <quentin@...valent.com>
Subject: Re: [RFC] [PATCH bpf-next 1/1] bpf: Add a BPF helper for getting the
 cgroup path of current task

On Thu, May 13, 2021 at 1:57 AM xufeng zhang
<yunbo.xufeng@...ux.alibaba.com> wrote:
>
> On 2021/5/13 6:55 AM, Alexei Starovoitov wrote:
>
> > On Wed, May 12, 2021 at 05:58:23PM +0800, Xufeng Zhang wrote:
> >> To implement security rules for application containers by utilizing
> >> bpf LSM, the container to which the current running task belongs need
> >> to be known in bpf context. Think about this scenario: kubernetes
> >> schedules a pod into one host, before the application container can run,
> >> the security rules for this application need to be loaded into bpf
> >> maps firstly, so that LSM bpf programs can make decisions based on
> >> this rule maps.
> >>
> >> However, there is no effective bpf helper to achieve this goal,
> >> especially for cgroup v1. In the above case, the only available information
> >> from user side is container-id, and the cgroup path for this container
> >> is certain based on container-id, so in order to make a bridge between
> >> user side and bpf programs, bpf programs also need to know the current
> >> cgroup path of running task.
> > ...
> >> +#ifdef CONFIG_CGROUPS
> >> +BPF_CALL_2(bpf_get_current_cpuset_cgroup_path, char *, buf, u32, buf_len)
> >> +{
> >> +    struct cgroup_subsys_state *css;
> >> +    int retval;
> >> +
> >> +    css = task_get_css(current, cpuset_cgrp_id);
> >> +    retval = cgroup_path_ns(css->cgroup, buf, buf_len, &init_cgroup_ns);
> >> +    css_put(css);
> >> +    if (retval >= buf_len)
> >> +            retval = -ENAMETOOLONG;
> > Manipulating string path to check the hierarchy will be difficult to do
> > inside bpf prog. It seems to me this helper will be useful only for
> > simplest cgroup setups where there is no additional cgroup nesting
> > within containers.
> > Have you looked at *ancestor_cgroup_id and *cgroup_id helpers?
> > They're a bit more flexible when dealing with hierarchy and
> > can be used to achieve the same correlation between kernel and user cgroup ids.
>
>
> Thanks for your timely reply, Alexei!
>
> Yes, this helper is not so general; it does not work for nested cgroup
> hierarchies within containers.
>
> About your suggestion: the *cgroup_id helpers only work for cgroup v2,
> but we're still using cgroup v1 in production. Even for cgroup v2,
> I'm not sure whether there is any way for user space to get this cgroup id
> in time (after the container is created, but before it starts to run).
>
> So is there any effective way that works for cgroup v1?

https://github.com/systemd/systemd/blob/main/NEWS#L379
