Message-Id: <20220610023308.93798-1-zhoufeng.zf@bytedance.com>
Date: Fri, 10 Jun 2022 10:33:06 +0800
From: Feng zhou <zhoufeng.zf@...edance.com>
To: ast@...nel.org, daniel@...earbox.net, andrii@...nel.org,
kafai@...com, songliubraving@...com, yhs@...com,
john.fastabend@...il.com, kpsingh@...nel.org
Cc: netdev@...r.kernel.org, bpf@...r.kernel.org,
linux-kernel@...r.kernel.org, duanxiongchun@...edance.com,
songmuchun@...edance.com, wangdongdong.6@...edance.com,
cong.wang@...edance.com, zhouchengming@...edance.com,
zhoufeng.zf@...edance.com
Subject: [PATCH v6 0/2] Optimize performance of update hash-map when free is zero
From: Feng Zhou <zhoufeng.zf@...edance.com>
We encountered a bad case on a big system with 96 CPUs where
alloc_htab_elem() took up to 1ms. The reason is that once the
preallocated hashtab has no free elements, every update attempt still
grabs the spin_locks of all CPUs. With multiple concurrent updaters,
the lock contention becomes severe.
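For illustration, a minimal userspace repro in the spirit of this
worst case (an assumption-laden sketch, not the actual patch-0002
benchmark; the map size, thread count and iteration count are made
up): fill a preallocated hash map to max_entries, then hammer it with
updates for new keys from several threads. Each such update fails
with -E2BIG, but before this series it still walked every CPU's free
list under its spinlock. Needs libbpf >= 0.7 and root:

#include <bpf/bpf.h>
#include <linux/bpf.h>
#include <pthread.h>
#include <stdio.h>

#define MAX_ENTRIES 4096
#define NR_THREADS  8

static int map_fd;

static void *updater(void *arg)
{
	/* Give each thread its own range of never-inserted keys. */
	__u64 key = MAX_ENTRIES + (long)arg * MAX_ENTRIES;
	__u64 val = 1;
	int n;

	for (n = 0; n < 1000000; n++, key++)
		/* Map is full: this fails with -E2BIG, but (pre-series)
		 * only after scanning all per-CPU free lists under
		 * their locks. */
		bpf_map_update_elem(map_fd, &key, &val, BPF_ANY);
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_THREADS];
	__u64 key, val = 1;
	long i;

	map_fd = bpf_map_create(BPF_MAP_TYPE_HASH, "full_map",
				sizeof(key), sizeof(val), MAX_ENTRIES, NULL);
	if (map_fd < 0) {
		perror("bpf_map_create");
		return 1;
	}

	/* Exhaust the preallocated elements. */
	for (key = 0; key < MAX_ENTRIES; key++)
		bpf_map_update_elem(map_fd, &key, &val, BPF_ANY);

	for (i = 0; i < NR_THREADS; i++)
		pthread_create(&tids[i], NULL, updater, (void *)i);
	for (i = 0; i < NR_THREADS; i++)
		pthread_join(tids[i], NULL);
	return 0;
}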
0001: Use head->first to check whether the free list is empty before
taking the lock (a userspace sketch of the idea follows below).
0002: Add a benchmark to reproduce this worst case.
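The sketch below shows the idea of patch 0001 in plain C (the real
change lives in kernel/bpf/percpu_freelist.c and differs in detail;
the type and helper names here are illustrative): peek at head->first
without the lock, skip CPUs whose free list is empty, and re-check
under the lock before actually popping. READ_ONCE/WRITE_ONCE are
open-coded the way the kernel defines them:

#include <pthread.h>
#include <stddef.h>

#define NR_CPUS 96

/* Open-coded READ_ONCE/WRITE_ONCE, mirroring the kernel macros. */
#define READ_ONCE(x)     (*(volatile typeof(x) *)&(x))
#define WRITE_ONCE(x, v) (*(volatile typeof(x) *)&(x) = (v))

struct freelist_node {
	struct freelist_node *next;
};

struct freelist_head {
	struct freelist_node *first;
	pthread_spinlock_t lock;
};

static struct freelist_head heads[NR_CPUS];

static void freelist_init(void)
{
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++)
		pthread_spin_init(&heads[cpu].lock, PTHREAD_PROCESS_PRIVATE);
}

static void freelist_push(int cpu, struct freelist_node *node)
{
	struct freelist_head *head = &heads[cpu];

	pthread_spin_lock(&head->lock);
	node->next = head->first;
	WRITE_ONCE(head->first, node);
	pthread_spin_unlock(&head->lock);
}

static struct freelist_node *freelist_pop(void)
{
	struct freelist_node *node;
	int cpu;

	for (cpu = 0; cpu < NR_CPUS; cpu++) {
		struct freelist_head *head = &heads[cpu];

		/* Lock-free peek: skip CPUs whose list is empty, so a
		 * full map no longer takes every CPU's spinlock. */
		if (!READ_ONCE(head->first))
			continue;

		pthread_spin_lock(&head->lock);
		node = head->first;
		if (node) { /* re-check: a racing pop may have emptied it */
			WRITE_ONCE(head->first, node->next);
			pthread_spin_unlock(&head->lock);
			return node;
		}
		pthread_spin_unlock(&head->lock);
	}
	return NULL; /* all per-CPU free lists are empty */
}

The unlocked peek can race with a concurrent push, but that is the
same window the lock-based scan already had; the re-check under the
lock keeps the pop itself correct.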
Changelog:
v5->v6: Addressed comments from Alexei Starovoitov.
- Adjust the commit log.
More details here:
https://lore.kernel.org/all/20220608021050.47279-1-zhoufeng.zf@bytedance.com/
v4->v5: Addressed comments from Alexei Starovoitov.
- Use head->first.
- Use cpu+max_entries.
More details here:
https://lore.kernel.org/bpf/20220601084149.13097-1-zhoufeng.zf@bytedance.com/
v3->v4: Addressed comments from Daniel Borkmann.
- Use READ_ONCE/WRITE_ONCE.
More details here:
https://lore.kernel.org/all/20220530091340.53443-1-zhoufeng.zf@bytedance.com/
v2->v3: Addressed comments from Alexei Starovoitov, Andrii Nakryiko.
- Adjust the way the benchmark is tested.
- Adjust the code format.
More details here:
https://lore.kernel.org/all/20220524075306.32306-1-zhoufeng.zf@bytedance.com/T/
v1->v2: Addressed comments from Alexei Starovoitov.
- Add a benchmark to reproduce the issue.
- Adjust the code format to avoid adding indentation.
More details here:
https://lore.kernel.org/all/877ac441-045b-1844-6938-fcaee5eee7f2@bytedance.com/T/
Feng Zhou (2):
bpf: avoid grabbing spin_locks of all cpus when no free elems
selftest/bpf/benchs: Add bpf_map benchmark
kernel/bpf/percpu_freelist.c | 20 ++--
tools/testing/selftests/bpf/Makefile | 4 +-
tools/testing/selftests/bpf/bench.c | 2 +
.../benchs/bench_bpf_hashmap_full_update.c | 96 +++++++++++++++++++
.../run_bench_bpf_hashmap_full_update.sh | 11 +++
.../bpf/progs/bpf_hashmap_full_update_bench.c | 40 ++++++++
6 files changed, 166 insertions(+), 7 deletions(-)
create mode 100644 tools/testing/selftests/bpf/benchs/bench_bpf_hashmap_full_update.c
create mode 100755 tools/testing/selftests/bpf/benchs/run_bench_bpf_hashmap_full_update.sh
create mode 100644 tools/testing/selftests/bpf/progs/bpf_hashmap_full_update_bench.c
--
2.20.1