Message-Id: <20250106081900.1665573-12-houtao@huaweicloud.com>
Date: Mon, 6 Jan 2025 16:18:52 +0800
From: Hou Tao <houtao@...weicloud.com>
To: bpf@...r.kernel.org,
netdev@...r.kernel.org
Cc: Martin KaFai Lau <martin.lau@...ux.dev>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Andrii Nakryiko <andrii@...nel.org>,
Eduard Zingerman <eddyz87@...il.com>,
Song Liu <song@...nel.org>,
Hao Luo <haoluo@...gle.com>,
Yonghong Song <yonghong.song@...ux.dev>,
Daniel Borkmann <daniel@...earbox.net>,
KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...ichev.me>,
Jiri Olsa <jolsa@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
houtao1@...wei.com,
xukuohai@...wei.com
Subject: [PATCH bpf-next 11/19] bpf: Disable migration in htab_map_free()
From: Hou Tao <houtao1@...wei.com>
When freeing the hash map, the destroy procedure may invoke
bpf_obj_free_fields() to free the special fields in pre-allocated values
or dynamically-allocated values. Since these special fields may be
allocated from the bpf memory allocator, migrate_{disable|enable} pairs
are necessary for the freeing of these objects.
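As a rough illustration of the contract (a simplified sketch, not the
literal kernel code; "rec" and "elem_value" are placeholders), a caller
is expected to look like:

	/* The caller, not bpf_obj_free_fields() itself, disables migration,
	 * because freeing the special fields may end up in
	 * bpf_mem_cache_free(), which relies on migration being disabled.
	 */
	migrate_disable();
	bpf_obj_free_fields(rec, elem_value);	/* rec: the map's struct btf_record * */
	migrate_enable();
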
To simplify reasoning about when migrate_disable() is needed for the
freeing of these dynamically-allocated objects, let the caller guarantee
that migration is disabled before invoking bpf_obj_free_fields().
For dynamically-allocated values, delete_all_elements() already disables
migration before invoking bpf_obj_free_fields(). Therefore, the patch
moves the migrate_{disable|enable} pair from delete_all_elements() to
htab_map_free() to cover all bpf_obj_free_fields() invocations. The
migrate_{disable|enable} pairs in the underlying implementation of
bpf_obj_free_fields() will be removed by the following patch.
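After this change, the relevant part of htab_map_free() is bracketed by
a single migrate_{disable|enable} pair (reconstructed here from the hunk
below, for readability):

	migrate_disable();
	if (!htab_is_prealloc(htab)) {
		delete_all_elements(htab);
	} else {
		htab_free_prealloced_fields(htab);
		prealloc_destroy(htab);
	}
	migrate_enable();
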
Signed-off-by: Hou Tao <houtao1@...wei.com>
---
kernel/bpf/hashtab.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
index 1db71d25836e..8bf1ad326e02 100644
--- a/kernel/bpf/hashtab.c
+++ b/kernel/bpf/hashtab.c
@@ -1502,10 +1502,9 @@ static void delete_all_elements(struct bpf_htab *htab)
{
int i;
- /* It's called from a worker thread, so disable migration here,
- * since bpf_mem_cache_free() relies on that.
+ /* It's called from a worker thread and migration has been disabled,
+ * therefore, it is OK to invoke bpf_mem_cache_free() directly.
*/
- migrate_disable();
for (i = 0; i < htab->n_buckets; i++) {
struct hlist_nulls_head *head = select_bucket(htab, i);
struct hlist_nulls_node *n;
@@ -1517,7 +1516,6 @@ static void delete_all_elements(struct bpf_htab *htab)
}
cond_resched();
}
- migrate_enable();
}
static void htab_free_malloced_timers_and_wq(struct bpf_htab *htab)
@@ -1572,12 +1570,14 @@ static void htab_map_free(struct bpf_map *map)
* underneath and is responsible for waiting for callbacks to finish
* during bpf_mem_alloc_destroy().
*/
+ migrate_disable();
if (!htab_is_prealloc(htab)) {
delete_all_elements(htab);
} else {
htab_free_prealloced_fields(htab);
prealloc_destroy(htab);
}
+ migrate_enable();
bpf_map_free_elem_count(map);
free_percpu(htab->extra_elems);
--
2.29.2