Message-ID: <fa707ef9-d612-a3a4-1b2a-fc2b28a3ec5f@gmail.com>
Date: Sat, 11 Dec 2021 02:20:13 +0000
From: Pavel Begunkov <asml.silence@...il.com>
To: Martin KaFai Lau <kafai@...com>
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>,
Song Liu <songliubraving@...com>, linux-kernel@...r.kernel.org
Subject: Re: [BPF PATCH for-next] cgroup/bpf: fast path for not loaded skb BPF
filtering
On 12/11/21 01:56, Martin KaFai Lau wrote:
> On Sat, Dec 11, 2021 at 01:15:05AM +0000, Pavel Begunkov wrote:
>> On 12/11/21 00:38, Martin KaFai Lau wrote:
>>> On Fri, Dec 10, 2021 at 02:23:34AM +0000, Pavel Begunkov wrote:
>>>> The cgroup_bpf_enabled_key static key guards against overhead in cases
>>>> where no cgroup bpf program of a specific type is loaded in any cgroup.
>>>> It turns out that's not always good enough, e.g. when there are many
>>>> cgroups but the ones we're interested in have no bpf attached. It's
>>>> seen in server environments, but the problem seems to be even wider,
>>>> as apparently systemd loads some BPF that affects my laptop.
>>>>
>>>> Profiles for small-packet or zerocopy transmissions over a fast network
>>>> show __cgroup_bpf_run_filter_skb() taking 2-3%, 1% of which comes from
>>>> migrate_disable/enable(), and similarly on the receive side. Local
>>>> testing also shows a 4-5% throughput gain.
>>>>
>>>> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
>>>> ---
>>>> include/linux/bpf-cgroup.h | 24 +++++++++++++++++++++---
>>>> kernel/bpf/cgroup.c | 23 +++++++----------------
>>>> 2 files changed, 28 insertions(+), 19 deletions(-)
>>>>
>>>> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
>>>> index 11820a430d6c..99b01201d7db 100644
>>>> --- a/include/linux/bpf-cgroup.h
>>>> +++ b/include/linux/bpf-cgroup.h
>>>> @@ -141,6 +141,9 @@ struct cgroup_bpf {
>>>> struct list_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
>>>> u32 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
>>>> + /* for each type tracks whether effective prog array is not empty */
>>>> + unsigned long enabled_mask;
>>>> +
>>>> /* list of cgroup shared storages */
>>>> struct list_head storages;
>>>> @@ -219,11 +222,25 @@ int bpf_percpu_cgroup_storage_copy(struct bpf_map *map, void *key, void *value);
>>>> int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
>>>> void *value, u64 flags);
>>>> +static inline bool __cgroup_bpf_type_enabled(struct cgroup_bpf *cgrp_bpf,
>>>> + enum cgroup_bpf_attach_type atype)
>>>> +{
>>>> + return test_bit(atype, &cgrp_bpf->enabled_mask);
>>>> +}
>>>> +
>>>> +#define CGROUP_BPF_TYPE_ENABLED(sk, atype) \
>>>> +({ \
>>>> + struct cgroup *__cgrp = sock_cgroup_ptr(&(sk)->sk_cgrp_data); \
>>>> + \
>>>> + __cgroup_bpf_type_enabled(&__cgrp->bpf, (atype)); \
>>>> +})
>>> I think it should directly test whether the array is empty instead of
>>> adding another bit.
>>>
>>> Can the existing __cgroup_bpf_prog_array_is_empty(cgrp, ...) test be used instead?
>>
>> That was the first idea, but it's still heavier than I'd like: 0.3-0.7%
>> in profiles, and something similar in reqs/s. The rcu_read_lock/unlock()
>> pair is cheap but still adds two barrier()s, whereas with the bitmask we
>> can inline the check.
> It sounds like there is an opportunity to optimize
> __cgroup_bpf_prog_array_is_empty() itself.
>
> How about using rcu_access_pointer(), comparing against
> &empty_prog_array.hdr, and then inlining it? The cgroup prog array
> should never consist entirely of dummy_bpf_prog.prog entries; if that
> can happen, the array should have been replaced with
> &empty_prog_array.hdr earlier, so please check.
I'd need to expose and export empty_prog_array, but that should do.
Will try it out, thanks.
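
For reference, a rough sketch of what that could look like, assuming
empty_prog_array gets exposed and exported under some name (the
bpf_empty_prog_array spelling below is illustrative, not final):

	/* include/linux/bpf-cgroup.h, sketch only */
	static inline bool
	__cgroup_bpf_prog_array_is_empty(struct cgroup *cgrp,
					 enum cgroup_bpf_attach_type atype)
	{
		/* Pure pointer comparison: rcu_access_pointer() is
		 * enough here, so no rcu_read_lock()/unlock() pair and
		 * none of the barrier()s it implies. */
		return rcu_access_pointer(cgrp->bpf.effective[atype]) ==
		       &bpf_empty_prog_array.hdr;
	}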
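
The call sites could then keep the same shape as with the bitmask
version, e.g. (again only a sketch; the real macro also checks
sk_fullsock() and friends):

	/* sketch: inlined emptiness check at the call site */
	#define BPF_CGROUP_RUN_PROG_INET_INGRESS(sk, skb)		      \
	({								      \
		int __ret = 0;						      \
		if (cgroup_bpf_enabled(CGROUP_INET_INGRESS) &&		      \
		    !__cgroup_bpf_prog_array_is_empty(			      \
				sock_cgroup_ptr(&(sk)->sk_cgrp_data),	      \
				CGROUP_INET_INGRESS))			      \
			__ret = __cgroup_bpf_run_filter_skb(sk, skb,	      \
						    CGROUP_INET_INGRESS);     \
		__ret;							      \
	})

Everything stays inlinable, so the fast path is just the static branch
plus one pointer load and compare.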
--
Pavel Begunkov