Message-ID: <5e14c0b1740ca_67962afd051fc5c0a5@john-XPS-13-9370.notmuch>
Date: Tue, 07 Jan 2020 09:32:33 -0800
From: John Fastabend <john.fastabend@...il.com>
To: Björn Töpel <bjorn.topel@...il.com>,
netdev@...r.kernel.org, ast@...nel.org, daniel@...earbox.net
Cc: Björn Töpel <bjorn.topel@...el.com>,
bpf@...r.kernel.org, davem@...emloft.net,
jakub.kicinski@...ronome.com, hawk@...nel.org,
john.fastabend@...il.com, magnus.karlsson@...el.com,
jonathan.lemon@...il.com,
Toke Høiland-Jørgensen <toke@...hat.com>
Subject: RE: [PATCH bpf-next v2 1/8] xdp: simplify devmap cleanup
Björn Töpel wrote:
> From: Björn Töpel <bjorn.topel@...el.com>
>
> After the RCU flavor consolidation [1], call_rcu() and
> synchronize_rcu() wait for preempt-disable regions (NAPI) in addition
> to the read-side critical sections. As a result, the cleanup
> code in devmap can be simplified.
OK, great, makes sense. One comment below.
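For anyone following along, the ordering this relies on (as I read the
patch; a paraphrase, not the actual code) can be sketched as:

```c
/* After the RCU flavor consolidation, a NAPI poll loop running with
 * preemption disabled counts as an RCU read-side critical section, so
 * in dev_map_free():
 *
 *   synchronize_rcu();  // all XDP programs that used this map,
 *                       // including those running in NAPI context,
 *                       // have finished; no new flush-list entries
 *                       // can be queued after this point
 *   rcu_barrier();      // all queued __dev_map_entry_free()
 *                       // callbacks have completed, so entries are
 *                       // freed before the map structure itself
 */
```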
>
> * There is no longer a need to flush in __dev_map_entry_free, since we
> know that this has been done when the call_rcu() callback is
> triggered.
>
> * When freeing the map, there is no need to explicitly wait for a
> flush. It's guaranteed to be done after the synchronize_rcu() call
> in dev_map_free(). The rcu_barrier() is still needed, so that the
> map is not freed before its elements.
>
> [1] https://lwn.net/Articles/777036/
>
> Acked-by: Toke Høiland-Jørgensen <toke@...hat.com>
> Signed-off-by: Björn Töpel <bjorn.topel@...el.com>
> ---
> kernel/bpf/devmap.c | 43 +++++--------------------------------------
> 1 file changed, 5 insertions(+), 38 deletions(-)
>
> diff --git a/kernel/bpf/devmap.c b/kernel/bpf/devmap.c
> index 3d3d61b5985b..b7595de6a91a 100644
> --- a/kernel/bpf/devmap.c
> +++ b/kernel/bpf/devmap.c
> @@ -201,7 +201,7 @@ static struct bpf_map *dev_map_alloc(union bpf_attr *attr)
> static void dev_map_free(struct bpf_map *map)
> {
> struct bpf_dtab *dtab = container_of(map, struct bpf_dtab, map);
> - int i, cpu;
> + int i;
>
> /* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
> * so the programs (can be more than one that used this map) were
> @@ -221,18 +221,6 @@ static void dev_map_free(struct bpf_map *map)
> /* Make sure prior __dev_map_entry_free() have completed. */
> rcu_barrier();
>
The comment at the start of this function also needs to be fixed; it currently says:
/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
* so the programs (can be more than one that used this map) were
* disconnected from events. Wait for outstanding critical sections in
* these programs to complete. The rcu critical section only guarantees
* no further reads against netdev_map. It does __not__ ensure pending
* flush operations (if any) are complete.
*/
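Maybe something like this (untested, just a suggestion):

```c
/* At this point bpf_prog->aux->refcnt == 0 and this map->refcnt == 0,
 * so the programs (can be more than one that used this map) were
 * disconnected from events. The following synchronize_rcu() guarantees
 * both no further reads against netdev_map and no further pending
 * flush operations, since the consolidated RCU flavors also wait for
 * preempt-disable (NAPI) regions.
 */
```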
The comment in dev_map_delete_elem() also needs an update.
> - /* To ensure all pending flush operations have completed wait for flush
> - * list to empty on _all_ cpus.
> - * Because the above synchronize_rcu() ensures the map is disconnected
> - * from the program we can assume no new items will be added.
> - */
> - for_each_online_cpu(cpu) {
> - struct list_head *flush_list = per_cpu_ptr(dtab->flush_list, cpu);
> -
> - while (!list_empty(flush_list))
> - cond_resched();
> - }
> -
> if (dtab->map.map_type == BPF_MAP_TYPE_DEVMAP_HASH) {
> for (i = 0; i < dtab->n_buckets; i++) {
> struct bpf_dtab_netdev *dev;
> @@ -345,8 +333,7 @@ static int dev_map_hash_get_next_key(struct bpf_map *map, void *key,
> return -ENOENT;
> }
Otherwise LGTM, thanks.