Message-ID: <31904afe-1d8a-4169-a3bd-d6d1c86cac5f@linux.dev>
Date: Wed, 10 Apr 2024 17:26:18 -0700
From: Yonghong Song <yonghong.song@...ux.dev>
To: Michal Koutný <mkoutny@...e.com>,
 Djalal Harouni <tixxdz@...il.com>
Cc: Tejun Heo <tj@...nel.org>, Zefan Li <lizefan.x@...edance.com>,
 Johannes Weiner <hannes@...xchg.org>, Alexei Starovoitov <ast@...nel.org>,
 Daniel Borkmann <daniel@...earbox.net>, Andrii Nakryiko <andrii@...nel.org>,
 Martin KaFai Lau <martin.lau@...ux.dev>, Eduard Zingerman
 <eddyz87@...il.com>, Song Liu <song@...nel.org>,
 John Fastabend <john.fastabend@...il.com>, KP Singh <kpsingh@...nel.org>,
 Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>,
 Jiri Olsa <jolsa@...nel.org>, Mykola Lysenko <mykolal@...com>,
 Shuah Khan <shuah@...nel.org>, linux-kernel@...r.kernel.org,
 cgroups@...r.kernel.org, bpf@...r.kernel.org, linux-kselftest@...r.kernel.org
Subject: Re: [RFC PATCH bpf-next 0/3] bpf: freeze a task cgroup from bpf


On 4/9/24 8:32 AM, Michal Koutný wrote:
> Hi.
>
> On Tue, Apr 02, 2024 at 07:20:45PM +0100, Djalal Harouni <tixxdz@...il.com> wrote:
>> Thanks, yes. I would expect freeze to behave like a signal, and if one
>> wants to block immediately there is the LSM override return. The
>> attached selftest tries to do exactly that.
> Are you referring to this part:
>
> 	int BPF_PROG(lsm_freeze_cgroup, int cmd, union bpf_attr *attr, unsigned int size)
> 		...
> 		ret = bpf_task_freeze_cgroup(task, 1);
> 		if (!ret) {
> 			ret = -EPERM;
> 			/* reset for next call */
> ?
>
>
>> Could be security signals, reading sensitive files, or anything related
>> to operations management: for X reasons this user session should be
>> frozen or killed.
> What can be done with a frozen cgroup after any of that happens?
> Anything besides killing it anyway?
>
> Killing an offending process could be caught by its supervisor (like a
> container runtime or systemd) and propagated accordingly to the whole
> cgroup.
>
>> The kill is an effective defense against fork bombs, as an example.
> There are already several ways to prevent fork bombs in the kernel; it
> looks like a contrived example.
>
>> Today some container/pod operations are performed at the BPF level;
>> having freeze and kill available there makes it straightforward to
>> perform this.
> It seems to me like an extra step when the same operation can be done from
> (the managing) userspace already.
>
>> As for generalizing this, I haven't thought about it that much. The
>> first use case is to get freeze and possibly kill support, and to use a
>> common interface as requested.
> BTW, I notice that there is a bpf_sys_bpf() helper that allows calling
> an arbitrary syscall. Wouldn't that be sufficient for everything?

This is not true. Currently, only the bpf() and close() syscalls are supported:

static const struct bpf_func_proto *
syscall_prog_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
{
         switch (func_id) {
         case BPF_FUNC_sys_bpf:
                 return !bpf_token_capable(prog->aux->token, CAP_PERFMON)
                        ? NULL : &bpf_sys_bpf_proto;
         case BPF_FUNC_btf_find_by_name_kind:
                 return &bpf_btf_find_by_name_kind_proto;
         case BPF_FUNC_sys_close:
                 return &bpf_sys_close_proto;
         case BPF_FUNC_kallsyms_lookup_name:
                 return &bpf_kallsyms_lookup_name_proto;
         default:
                 return tracing_prog_func_proto(func_id, prog);
         }
}
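
For illustration, a SEC("syscall") program can only relay bpf(2) commands
through bpf_sys_bpf(); it cannot reach arbitrary syscalls such as the
cgroup freezer. A minimal sketch along the lines of the existing syscall
selftests (the map parameters below are made up):

#include <vmlinux.h>
#include <bpf/bpf_helpers.h>

int map_fd;

SEC("syscall")
int create_map(void *ctx)
{
        /* Attributes for a single BPF_MAP_CREATE command; kept in .data
         * to avoid a large on-stack union.
         */
        static union bpf_attr attr = {
                .map_type    = BPF_MAP_TYPE_ARRAY,
                .key_size    = 4,
                .value_size  = 8,
                .max_entries = 16,
        };

        /* bpf_sys_bpf() relays exactly one bpf(2) command. */
        map_fd = bpf_sys_bpf(BPF_MAP_CREATE, &attr, sizeof(attr));
        return map_fd < 0 ? map_fd : 0;
}

char _license[] SEC("license") = "GPL";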

More syscalls can be added (through kfuncs) if there is a use case for that.
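
Purely as a rough sketch (the kfunc name bpf_freeze_cgroup_demo and the
registration site below are hypothetical, not taken from this RFC), the
kernel-side wiring for exposing such a kfunc to BPF_PROG_TYPE_SYSCALL
programs could look roughly like:

#include <linux/bpf.h>
#include <linux/btf.h>
#include <linux/btf_ids.h>
#include <linux/cgroup.h>
#include <linux/init.h>
#include <linux/module.h>

__bpf_kfunc_start_defs();

/* Hypothetical kfunc for illustration; a real patch would decide the
 * actual helper, arguments and semantics.
 */
__bpf_kfunc int bpf_freeze_cgroup_demo(struct cgroup *cgrp, int freeze)
{
        /* ... call into the cgroup freezer here ... */
        return 0;
}

__bpf_kfunc_end_defs();

BTF_KFUNCS_START(syscall_kfunc_ids)
BTF_ID_FLAGS(func, bpf_freeze_cgroup_demo, KF_TRUSTED_ARGS)
BTF_KFUNCS_END(syscall_kfunc_ids)

static const struct btf_kfunc_id_set syscall_kfunc_set = {
        .owner = THIS_MODULE,
        .set   = &syscall_kfunc_ids,
};

static int __init syscall_kfunc_demo_init(void)
{
        /* Make the kfunc callable from BPF_PROG_TYPE_SYSCALL programs. */
        return register_btf_kfunc_id_set(BPF_PROG_TYPE_SYSCALL,
                                         &syscall_kfunc_set);
}
late_initcall(syscall_kfunc_demo_init);

Whether such a kfunc should exist at all is of course the question being
discussed in this thread; the sketch only shows the shape of the plumbing.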

>
> (Based on how I still understand the problem: either you must respond
> immediately, and then the direct return from the LSM is appropriate, or
> timing is not sensitive but you want to act on the whole cgroup.)
>
> Thanks,
> Michal
