Message-ID: <b4865347-110f-4648-a16e-2d453d849323@linux.dev>
Date: Fri, 15 Dec 2023 06:31:08 -0800
From: Yonghong Song <yonghong.song@...ux.dev>
To: Michael Weiß <michael.weiss@...ec.fraunhofer.de>,
Christian Brauner <brauner@...nel.org>,
Alexander Mikhalitsyn <alexander@...alicyn.com>,
Alexei Starovoitov <ast@...nel.org>, Paul Moore <paul@...l-moore.com>
Cc: Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau
<martin.lau@...ux.dev>, Song Liu <song@...nel.org>,
Yonghong Song <yhs@...com>, John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>,
Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
Quentin Monnet <quentin@...valent.com>,
Alexander Viro <viro@...iv.linux.org.uk>, Miklos Szeredi
<miklos@...redi.hu>, Amir Goldstein <amir73il@...il.com>,
"Serge E. Hallyn" <serge@...lyn.com>, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
linux-security-module@...r.kernel.org, gyroidos@...ec.fraunhofer.de,
Alexander Mikhalitsyn <aleksandr.mikhalitsyn@...onical.com>
Subject: Re: [RFC PATCH v3 1/3] bpf: cgroup: Introduce helper
cgroup_bpf_current_enabled()
On 12/14/23 12:17 AM, Michael Weiß wrote:
> On 13.12.23 17:59, Yonghong Song wrote:
>> On 12/13/23 6:38 AM, Michael Weiß wrote:
>>> This helper can be used to check if a cgroup-bpf specific program is
>>> active for the current task.
>>>
>>> Signed-off-by: Michael Weiß <michael.weiss@...ec.fraunhofer.de>
>>> Reviewed-by: Alexander Mikhalitsyn <aleksandr.mikhalitsyn@...onical.com>
>>> ---
>>> include/linux/bpf-cgroup.h | 2 ++
>>> kernel/bpf/cgroup.c | 14 ++++++++++++++
>>> 2 files changed, 16 insertions(+)
>>>
>>> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
>>> index a789266feac3..7cb49bde09ff 100644
>>> --- a/include/linux/bpf-cgroup.h
>>> +++ b/include/linux/bpf-cgroup.h
>>> @@ -191,6 +191,8 @@ static inline bool cgroup_bpf_sock_enabled(struct sock *sk,
>>> return array != &bpf_empty_prog_array.hdr;
>>> }
>>>
>>> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type);
>>> +
>>> /* Wrappers for __cgroup_bpf_run_filter_skb() guarded by cgroup_bpf_enabled. */
>>> #define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb) \
>>> ({ \
>>> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
>>> index 491d20038cbe..9007165abe8c 100644
>>> --- a/kernel/bpf/cgroup.c
>>> +++ b/kernel/bpf/cgroup.c
>>> @@ -24,6 +24,20 @@
>>> DEFINE_STATIC_KEY_ARRAY_FALSE(cgroup_bpf_enabled_key, MAX_CGROUP_BPF_ATTACH_TYPE);
>>> EXPORT_SYMBOL(cgroup_bpf_enabled_key);
>>>
>>> +bool cgroup_bpf_current_enabled(enum cgroup_bpf_attach_type type)
>>> +{
>>> + struct cgroup *cgrp;
>>> + struct bpf_prog_array *array;
>>> +
>>> + rcu_read_lock();
>>> + cgrp = task_dfl_cgroup(current);
>>> + rcu_read_unlock();
>>> +
>>> + array = rcu_access_pointer(cgrp->bpf.effective[type]);
>> This seems wrong here. The cgrp could become invalid once leaving
>> rcu critical section.
> You are right, maybe we were too opportunistic here. We just wanted
> to hold the lock for as short a time as possible.
>
>>> + return array != &bpf_empty_prog_array.hdr;
>> I guess you need include 'array' usage as well in the rcu cs.
>> So overall should look like:
>>
>> rcu_read_lock();
>> cgrp = task_dfl_cgroup(current);
>> array = rcu_access_pointer(cgrp->bpf.effective[type]);
> Looks reasonable, but that we are in the cs now I would change this to
> rcu_dereference() then.
copy-paste error. Right, should use rcu_dereference() indeed.
>
>> bpf_prog_exists = array != &bpf_empty_prog_array.hdr;
>> rcu_read_unlock();
>>
>> return bpf_prog_exists;
>>
>>> +}
>>> +EXPORT_SYMBOL(cgroup_bpf_current_enabled);
>>> +
>>> /* __always_inline is necessary to prevent indirect call through run_prog
>>> * function pointer.
>>> */