Message-ID: <3d4c4abf-0529-72a3-22e3-af87d8f008c4@huaweicloud.com>
Date: Fri, 11 Aug 2023 18:23:22 +0800
From: Hou Tao <houtao@...weicloud.com>
To: Toke Høiland-Jørgensen <toke@...hat.com>,
bpf@...r.kernel.org
Cc: netdev@...r.kernel.org, "David S . Miller" <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>, Jesper Dangaard Brouer <hawk@...nel.org>,
John Fastabend <john.fastabend@...il.com>,
Björn Töpel <bjorn.topel@...il.com>,
Martin KaFai Lau <martin.lau@...ux.dev>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
Andrii Nakryiko <andrii@...nel.org>, Song Liu <song@...nel.org>,
Hao Luo <haoluo@...gle.com>, Yonghong Song <yonghong.song@...ux.dev>,
Daniel Borkmann <daniel@...earbox.net>, KP Singh <kpsingh@...nel.org>,
Stanislav Fomichev <sdf@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
houtao1@...wei.com
Subject: Re: [RFC PATCH bpf-next 2/2] bpf, cpumap: Clean up bpf_cpu_map_entry
directly in cpu_map_free
Hi,
On 8/10/2023 6:22 PM, Toke Høiland-Jørgensen wrote:
> Hou Tao <houtao@...weicloud.com> writes:
>
>> From: Hou Tao <houtao1@...wei.com>
>>
>> After synchronize_rcu(), both the detached XDP program and any
>> pending xdp_do_flush() operations have completed, and the only
>> remaining user of bpf_cpu_map_entry will be cpu_map_kthread_run(),
>> so instead of calling __cpu_map_entry_replace() to empty the queue
>> and do cleanup after an RCU grace period, do these things directly.
>>
>> Signed-off-by: Hou Tao <houtao1@...wei.com>
> With one nit below:
>
> Reviewed-by: Toke Høiland-Jørgensen <toke@...hat.com>
Thanks for the review.
>> ---
>> kernel/bpf/cpumap.c | 17 ++++++++---------
>> 1 file changed, 8 insertions(+), 9 deletions(-)
>>
>> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
>> index 24f39c37526f..f8e2b24320c0 100644
>> --- a/kernel/bpf/cpumap.c
>> +++ b/kernel/bpf/cpumap.c
>> @@ -554,16 +554,15 @@ static void cpu_map_free(struct bpf_map *map)
>> /* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
>> * so the bpf programs (can be more than one that used this map) were
>> * disconnected from events. Wait for outstanding critical sections in
>> - * these programs to complete. The rcu critical section only guarantees
>> - * no further "XDP/bpf-side" reads against bpf_cpu_map->cpu_map.
>> - * It does __not__ ensure pending flush operations (if any) are
>> - * complete.
>> + * these programs to complete. The synchronize_rcu() below not only
>> + * guarantees no further "XDP/bpf-side" reads against
>> + * bpf_cpu_map->cpu_map, but also ensures pending flush operations
>> + * (if any) are complete.
>> */
>> -
>> synchronize_rcu();
>>
>> - /* For cpu_map the remote CPUs can still be using the entries
>> - * (struct bpf_cpu_map_entry).
>> + /* The only possible user of bpf_cpu_map_entry is
>> + * cpu_map_kthread_run().
>> */
>> for (i = 0; i < cmap->map.max_entries; i++) {
>> struct bpf_cpu_map_entry *rcpu;
>> @@ -572,8 +571,8 @@ static void cpu_map_free(struct bpf_map *map)
>> if (!rcpu)
>> continue;
>>
>> - /* bq flush and cleanup happens after RCU grace-period */
>> - __cpu_map_entry_replace(cmap, i, NULL); /* call_rcu */
>> + /* Empty queue and do cleanup directly */
> The "empty queue" here is a bit ambiguous, maybe "Stop kthread and
> cleanup entry"?
Sure. Will do in v1.
>
>> + __cpu_map_entry_free(&rcpu->free_work.work);
>> }
>> bpf_map_area_free(cmap->cpu_map);
>> bpf_map_area_free(cmap);
>> --
>> 2.29.2