Message-ID: <20200916230045.b6ofgnbibwealhiz@kafai-mbp>
Date: Wed, 16 Sep 2020 16:00:45 -0700
From: Martin KaFai Lau <kafai@...com>
To: Yonghong Song <yhs@...com>
CC: <bpf@...r.kernel.org>, <netdev@...r.kernel.org>,
Alexei Starovoitov <ast@...nel.org>,
Daniel Borkmann <daniel@...earbox.net>, <kernel-team@...com>,
Song Liu <songliubraving@...com>
Subject: Re: [PATCH bpf-next v4] bpf: using rcu_read_lock for bpf_sk_storage_map iterator

On Wed, Sep 16, 2020 at 03:46:45PM -0700, Yonghong Song wrote:
> If a bucket contains a lot of sockets, then while bpf_iter
> is traversing that bucket, concurrent userspace
> bpf_map_update_elem() calls and bpf program
> bpf_sk_storage_{get,delete}() calls may experience
> undesirable delays, since they compete with bpf_iter
> for the bucket lock.
>
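A minimal sketch of that contention point (illustrative C; the
struct layout and lock calls are assumptions, not verbatim kernel
source):

	/* Illustrative sketch: update/delete and the old iterator
	 * serialize on the same per-bucket spinlock.
	 */
	struct bucket {
		struct hlist_head list;
		raw_spinlock_t lock;
	};

	/* update/delete path, e.g. bpf_map_update_elem() */
	raw_spin_lock_bh(&b->lock);
	/* ... link/unlink the storage element in b->list ... */
	raw_spin_unlock_bh(&b->lock);

	/* The old iterator held the same lock for the whole bucket
	 * walk, so a large bucket stalled every concurrent update
	 * or delete targeting that bucket.
	 */
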
> Note that the number of buckets for bpf_sk_storage_map
> is roughly the same as the number of cpus. So if there
> are lots of sockets in the system, each bucket could
> contain lots of sockets.
>
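A sketch of how the bucket count is derived (assumed to match the
map allocation logic; simplified):

	/* Bucket count scales with CPUs, not with the number of
	 * stored sockets, so busy systems can still end up with
	 * long per-bucket lists.
	 */
	u32 nbuckets = roundup_pow_of_two(num_possible_cpus());
	nbuckets = max_t(u32, 2, nbuckets);	/* at least 2 buckets */
	smap->bucket_log = ilog2(nbuckets);
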
> Different actual use cases may experience different delays.
> Here, using the selftest bpf_iter subtest bpf_sk_storage_map,
> I hacked the kernel with ktime_get_mono_fast_ns()
> to collect how long a bucket stayed locked while the
> bpf_iter prog traversed that bucket. This way, the
> maximum incurred delay was measured w.r.t. the
> number of elements in a bucket:
>   # elems in each bucket    delay(ns)
>     64                         17000
>     256                        72512
>     2048                      875246
>
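A hedged sketch of that measurement hack (not part of the patch;
the list walk and member names are illustrative):

	/* Illustrative only: time how long the bucket lock is held
	 * while the iterator walks one bucket.
	 */
	u64 start = ktime_get_mono_fast_ns();

	raw_spin_lock_bh(&b->lock);
	hlist_for_each_entry(selem, &b->list, map_node)
		; /* run the bpf_iter prog on each element */
	raw_spin_unlock_bh(&b->lock);

	pr_info("bucket walk held lock for %llu ns\n",
		ktime_get_mono_fast_ns() - start);
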
> The potential delay will grow further if we have even
> more elements in a bucket. Using rcu_read_lock() is a
> reasonable compromise here. It may lose some precision,
> e.g., access stale sockets, but it will not hurt the
> performance of the bpf program or of user space
> applications that also try to get/delete or update map
> elements.
Acked-by: Martin KaFai Lau <kafai@...com>