Message-ID: <5f7d2f60-b833-04e5-7710-fdd2ef3b6f67@gmail.com>
Date:   Sat, 11 Dec 2021 01:15:05 +0000
From:   Pavel Begunkov <asml.silence@...il.com>
To:     Martin KaFai Lau <kafai@...com>
Cc:     netdev@...r.kernel.org, bpf@...r.kernel.org,
        Alexei Starovoitov <ast@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Song Liu <songliubraving@...com>, linux-kernel@...r.kernel.org
Subject: Re: [BPF PATCH for-next] cgroup/bpf: fast path for not loaded skb BPF
 filtering

On 12/11/21 00:38, Martin KaFai Lau wrote:
> On Fri, Dec 10, 2021 at 02:23:34AM +0000, Pavel Begunkov wrote:
>> The cgroup_bpf_enabled_key static key guards against overhead in cases
>> where no cgroup bpf program of a specific type is loaded in any cgroup.
>> It turns out that's not always good enough, e.g. when there are many
>> cgroups but the ones we're interested in have no bpf attached. This is
>> seen in server environments, but the problem seems to be even wider, as
>> apparently systemd loads some BPF affecting my laptop.
>>
>> Profiles for small-packet or zerocopy transmissions over a fast network
>> show __cgroup_bpf_run_filter_skb() taking 2-3%, 1% of which is from
>> migrate_disable/enable(), and similarly on the receive side. Local
>> testing also shows a 4-5% throughput gain.
>>
>> Signed-off-by: Pavel Begunkov <asml.silence@...il.com>
>> ---
>>   include/linux/bpf-cgroup.h | 24 +++++++++++++++++++++---
>>   kernel/bpf/cgroup.c        | 23 +++++++----------------
>>   2 files changed, 28 insertions(+), 19 deletions(-)
>>
>> diff --git a/include/linux/bpf-cgroup.h b/include/linux/bpf-cgroup.h
>> index 11820a430d6c..99b01201d7db 100644
>> --- a/include/linux/bpf-cgroup.h
>> +++ b/include/linux/bpf-cgroup.h
>> @@ -141,6 +141,9 @@ struct cgroup_bpf {
>>   	struct list_head progs[MAX_CGROUP_BPF_ATTACH_TYPE];
>>   	u32 flags[MAX_CGROUP_BPF_ATTACH_TYPE];
>>   
>> +	/* for each type tracks whether effective prog array is not empty */
>> +	unsigned long enabled_mask;
>> +
>>   	/* list of cgroup shared storages */
>>   	struct list_head storages;
>>   
>> @@ -219,11 +222,25 @@ int bpf_percpu_cgroup_storage_copy(struct bpf_map *map, void *key, void *value);
>>   int bpf_percpu_cgroup_storage_update(struct bpf_map *map, void *key,
>>   				     void *value, u64 flags);
>>   
>> +static inline bool __cgroup_bpf_type_enabled(struct cgroup_bpf *cgrp_bpf,
>> +					     enum cgroup_bpf_attach_type atype)
>> +{
>> +	return test_bit(atype, &cgrp_bpf->enabled_mask);
>> +}
>> +
>> +#define CGROUP_BPF_TYPE_ENABLED(sk, atype)				       \
>> +({									       \
>> +	struct cgroup *__cgrp = sock_cgroup_ptr(&(sk)->sk_cgrp_data);	       \
>> +									       \
>> +	__cgroup_bpf_type_enabled(&__cgrp->bpf, (atype));		       \
>> +})
> I think it should directly test whether the array is empty instead of
> adding another bit.
> 
> Can the existing __cgroup_bpf_prog_array_is_empty(cgrp, ...) test be used instead?

That was the first idea, but it's still heavier than I'd like: 0.3-0.7%
in profiles, and something similar in reqs/s. The rcu_read_lock/unlock()
pair is cheap but still adds two barrier()s, whereas with the bitmask we
can inline the check.
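
To make the difference concrete, here is a rough sketch of the two
checks side by side. The kernel/bpf/cgroup.c hunks that keep
enabled_mask in sync are trimmed from the quote above, so the
maintenance part below is only an assumed shape, not the posted diff:

	/* Emptiness test on the effective array: needs an RCU
	 * read-side section, i.e. two barrier()s around the
	 * dereference, and lives out of line in cgroup.c. */
	rcu_read_lock();
	empty = bpf_prog_array_is_empty(
			rcu_dereference(cgrp->bpf.effective[atype]));
	rcu_read_unlock();

	/* Bitmask test from this patch: a single inlined load+test
	 * on the cgroup pointer already cached in sk->sk_cgrp_data. */
	if (CGROUP_BPF_TYPE_ENABLED(sk, atype))
		ret = __cgroup_bpf_run_filter_skb(sk, skb, atype);

	/* Assumed maintenance when the effective array is recomputed
	 * on attach/detach (the corresponding hunk is not quoted): */
	if (bpf_prog_array_is_empty(array))
		clear_bit(atype, &cgrp->bpf.enabled_mask);
	else
		set_bit(atype, &cgrp->bpf.enabled_mask);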

-- 
Pavel Begunkov
