Message-ID: <CAPhsuW7BKW40-NzJTPNPP90zz9ydP_9FSMa_qkGGwfQpp0thfg@mail.gmail.com>
Date: Mon, 5 Feb 2024 15:02:31 -0800
From: Song Liu <song@...nel.org>
To: Marco Elver <elver@...gle.com>
Cc: Alexei Starovoitov <ast@...nel.org>, Daniel Borkmann <daniel@...earbox.net>,
Andrii Nakryiko <andrii@...nel.org>, Martin KaFai Lau <martin.lau@...ux.dev>,
Yonghong Song <yonghong.song@...ux.dev>, John Fastabend <john.fastabend@...il.com>,
KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...gle.com>, Hao Luo <haoluo@...gle.com>,
Jiri Olsa <jolsa@...nel.org>, Mykola Lysenko <mykolal@...com>, Shuah Khan <shuah@...nel.org>,
bpf@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-kselftest@...r.kernel.org
Subject: Re: [PATCH] bpf: Separate bpf_local_storage_lookup() fast and slow paths

On Wed, Jan 31, 2024 at 6:19 AM Marco Elver <elver@...gle.com> wrote:
>
[...]
>
> Signed-off-by: Marco Elver <elver@...gle.com>
> ---
> include/linux/bpf_local_storage.h | 17 ++++++++++++++++-
> kernel/bpf/bpf_local_storage.c | 14 ++++----------
> .../selftests/bpf/progs/cgrp_ls_recursion.c | 2 +-
> .../selftests/bpf/progs/task_ls_recursion.c | 2 +-
> 4 files changed, 22 insertions(+), 13 deletions(-)
>
> diff --git a/include/linux/bpf_local_storage.h b/include/linux/bpf_local_storage.h
> index 173ec7f43ed1..c8cecf7fff87 100644
> --- a/include/linux/bpf_local_storage.h
> +++ b/include/linux/bpf_local_storage.h
> @@ -130,9 +130,24 @@ bpf_local_storage_map_alloc(union bpf_attr *attr,
> bool bpf_ma);
>
> struct bpf_local_storage_data *
> +bpf_local_storage_lookup_slowpath(struct bpf_local_storage *local_storage,
> + struct bpf_local_storage_map *smap,
> + bool cacheit_lockit);
> +static inline struct bpf_local_storage_data *
> bpf_local_storage_lookup(struct bpf_local_storage *local_storage,
> struct bpf_local_storage_map *smap,
> - bool cacheit_lockit);
> + bool cacheit_lockit)
> +{
> + struct bpf_local_storage_data *sdata;
> +
> + /* Fast path (cache hit) */
> + sdata = rcu_dereference_check(local_storage->cache[smap->cache_idx],
> + bpf_rcu_lock_held());
> + if (likely(sdata && rcu_access_pointer(sdata->smap) == smap))
> + return sdata;

We have two changes here: 1) inlining; 2) the likely() annotation.
Could you please include in the commit log how much each of the two
contributes to the performance improvement?
Thanks,
Song
> +
> + return bpf_local_storage_lookup_slowpath(local_storage, smap, cacheit_lockit);
> +}
>
[...]