[<prev] [next>] [<thread-prev] [day] [month] [year] [list]
Message-ID: <CAADnVQKQ6bCFVwaFUb0fpnhMyGDH9-HRDOFDkR3Mdjotk39jPw@mail.gmail.com>
Date: Mon, 6 Jan 2025 18:24:04 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Hou Tao <houtao@...weicloud.com>
Cc: bpf <bpf@...r.kernel.org>, Network Development <netdev@...r.kernel.org>,
Martin KaFai Lau <martin.lau@...ux.dev>, Andrii Nakryiko <andrii@...nel.org>,
Eduard Zingerman <eddyz87@...il.com>, Song Liu <song@...nel.org>, Hao Luo <haoluo@...gle.com>,
Yonghong Song <yonghong.song@...ux.dev>, Daniel Borkmann <daniel@...earbox.net>,
KP Singh <kpsingh@...nel.org>, Stanislav Fomichev <sdf@...ichev.me>, Jiri Olsa <jolsa@...nel.org>,
John Fastabend <john.fastabend@...il.com>, Hou Tao <houtao1@...wei.com>,
Xu Kuohai <xukuohai@...wei.com>
Subject: Re: [PATCH bpf-next 15/19] bpf: Disable migration before calling ops->map_free()
On Mon, Jan 6, 2025 at 5:40 PM Hou Tao <houtao@...weicloud.com> wrote:
>
> Hi,
>
> On 1/7/2025 6:24 AM, Alexei Starovoitov wrote:
> > On Mon, Jan 6, 2025 at 12:07 AM Hou Tao <houtao@...weicloud.com> wrote:
> >> From: Hou Tao <houtao1@...wei.com>
> >>
> >> Disabling migration before calling ops->map_free() to simplify the
> >> freeing of map values or special fields allocated from bpf memory
> >> allocator.
> >>
> >> After disabling migration in bpf_map_free(), there is no need for
> >> additional migration_{disable|enable} pairs in the ->map_free()
> >> callbacks. Remove these redundant invocations.
> >>
> >> Signed-off-by: Hou Tao <houtao1@...wei.com>
> >> ---
> >> kernel/bpf/arraymap.c | 2 --
> >> kernel/bpf/bpf_local_storage.c | 2 --
> >> kernel/bpf/hashtab.c | 2 --
> >> kernel/bpf/range_tree.c | 2 --
> >> kernel/bpf/syscall.c | 8 +++++++-
> >> 5 files changed, 7 insertions(+), 9 deletions(-)
> >>
> >> diff --git a/kernel/bpf/arraymap.c b/kernel/bpf/arraymap.c
> >> index 451737493b17..eb28c0f219ee 100644
> >> --- a/kernel/bpf/arraymap.c
> >> +++ b/kernel/bpf/arraymap.c
> >> @@ -455,7 +455,6 @@ static void array_map_free(struct bpf_map *map)
> >> struct bpf_array *array = container_of(map, struct bpf_array, map);
> >> int i;
> >>
> >> - migrate_disable();
> >> if (!IS_ERR_OR_NULL(map->record)) {
> >> if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY) {
> >> for (i = 0; i < array->map.max_entries; i++) {
> >> @@ -472,7 +471,6 @@ static void array_map_free(struct bpf_map *map)
> >> bpf_obj_free_fields(map->record, array_map_elem_ptr(array, i));
> >> }
> >> }
> >> - migrate_enable();
> >>
> >> if (array->map.map_type == BPF_MAP_TYPE_PERCPU_ARRAY)
> >> bpf_array_free_percpu(array);
> >> diff --git a/kernel/bpf/bpf_local_storage.c b/kernel/bpf/bpf_local_storage.c
> >> index b649cf736438..12cf6382175e 100644
> >> --- a/kernel/bpf/bpf_local_storage.c
> >> +++ b/kernel/bpf/bpf_local_storage.c
> >> @@ -905,13 +905,11 @@ void bpf_local_storage_map_free(struct bpf_map *map,
> >> while ((selem = hlist_entry_safe(
> >> rcu_dereference_raw(hlist_first_rcu(&b->list)),
> >> struct bpf_local_storage_elem, map_node))) {
> >> - migrate_disable();
> >> if (busy_counter)
> >> this_cpu_inc(*busy_counter);
> >> bpf_selem_unlink(selem, true);
> >> if (busy_counter)
> >> this_cpu_dec(*busy_counter);
> >> - migrate_enable();
> >> cond_resched_rcu();
> >> }
> >> rcu_read_unlock();
> >> diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> >> index 8bf1ad326e02..6051f8a39fec 100644
> >> --- a/kernel/bpf/hashtab.c
> >> +++ b/kernel/bpf/hashtab.c
> >> @@ -1570,14 +1570,12 @@ static void htab_map_free(struct bpf_map *map)
> >> * underneath and is responsible for waiting for callbacks to finish
> >> * during bpf_mem_alloc_destroy().
> >> */
> >> - migrate_disable();
> >> if (!htab_is_prealloc(htab)) {
> >> delete_all_elements(htab);
> >> } else {
> >> htab_free_prealloced_fields(htab);
> >> prealloc_destroy(htab);
> >> }
> >> - migrate_enable();
> >>
> >> bpf_map_free_elem_count(map);
> >> free_percpu(htab->extra_elems);
> >> diff --git a/kernel/bpf/range_tree.c b/kernel/bpf/range_tree.c
> >> index 5bdf9aadca3a..37b80a23ae1a 100644
> >> --- a/kernel/bpf/range_tree.c
> >> +++ b/kernel/bpf/range_tree.c
> >> @@ -259,9 +259,7 @@ void range_tree_destroy(struct range_tree *rt)
> >>
> >> while ((rn = range_it_iter_first(rt, 0, -1U))) {
> >> range_it_remove(rn, rt);
> >> - migrate_disable();
> >> bpf_mem_free(&bpf_global_ma, rn);
> >> - migrate_enable();
> >> }
> >> }
> >>
> >> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> >> index 0503ce1916b6..e7a41abe4809 100644
> >> --- a/kernel/bpf/syscall.c
> >> +++ b/kernel/bpf/syscall.c
> >> @@ -835,8 +835,14 @@ static void bpf_map_free(struct bpf_map *map)
> >> struct btf_record *rec = map->record;
> >> struct btf *btf = map->btf;
> >>
> >> - /* implementation dependent freeing */
> >> + /* implementation dependent freeing. Disabling migration to simplify
> >> + * the free of values or special fields allocated from bpf memory
> >> + * allocator.
> >> + */
> >> + migrate_disable();
> >> map->ops->map_free(map);
> >> + migrate_enable();
> >> +
> > I was about to comment on patches 10-13 that it's
> > better to do it in bpf_map_free(), but then I got to this patch.
> > All makes sense, but the patch breakdown is too fine grain.
> > Patches 10-13 introduce migrate pairs only to be deleted
> > in patch 15. Please squash them into one patch.
>
> OK. However I need to argue for the fine grained break down. The
> original though is that if disabling migration for ->map_free callback
> for all maps introduces some problems, we could revert the patch #15
> separately instead of reverting the squashed patch and moving the
> migrate_{disable|enable}() pair to maps which are OK with that change
> again. What do you think ?
Feels overkill.
If migrate disable for the duration of map_free callback causes issues
we can introduce individual migrate pairs per map type
or revert the whole thing,
but imo it's all too theoretical at this point.
> >
> > Also you mention in the cover letter:
> >
> >> Considering the bpf-next CI is broken
> > What is this about?
>
> Er, I said it wrong. It is my local bpf-next setup. A few days ago, when
> I tried to verify the patch by using bpf_next/for-next treee, the
> running of test_maps and test_progs failed. Will check today that
> whether it is OK.
I see. /for-next maybe having issues. That needs to be investigated
separately.
Make sure /master is working well.
> >
> > The cant_migrate() additions throughout looks
> > a bit out of place. All that code doesn't care about migrations.
> > Only bpf_ma code does. Let's add it there instead?
> > The stack trace will tell us the caller anyway,
> > so no information lost.
>
> OK. However bpf_ma is not the only one which needs disabled migration.
> The reason that bpf_ma needs migrate_disable() is the use of
> this_cpu_ptr(). However, there are many places in bpf which use
> this_cpu_ptr() (e.g., bpf_for_each_array_elem) and this_cpu_{in|del}
> pair (e.g., bpf_cgrp_storage_lock). I will check the cant_migrate which
> can be removed in v2.
Well, maybe not all cant_migrate() hunks across all patches.
But patches 16, 17, 18, 19 don't look like the right places
for cant_migrate().
Powered by blists - more mailing lists