Message-ID: <20201124105413.0406e879@kicinski-fedora-pc1c0hjn.dhcp.thefacebook.com>
Date:   Tue, 24 Nov 2020 10:54:13 -0800
From:   Jakub Kicinski <kuba@...nel.org>
To:     Eric Dumazet <eric.dumazet@...il.com>
Cc:     davem@...emloft.net, netdev@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH net-next 1/3] net: remove napi_hash_del() from driver-facing API

On Tue, 24 Nov 2020 19:00:50 +0100 Eric Dumazet wrote:
> On 9/9/20 7:37 PM, Jakub Kicinski wrote:
> > We allow drivers to call napi_hash_del() before calling
> > netif_napi_del() to batch RCU grace periods. This makes
> > the API asymmetric and leaks internal implementation details.
> > Soon we will want the grace period to protect more than just
> > the NAPI hash table.
> > 
> > Restructure the API and have drivers call a new function -
> > __netif_napi_del() if they want to take care of RCU waits.
> > 
> > Note that only core was checking the return status from
> > napi_hash_del() so the new helper does not report if the
> > NAPI was actually deleted.
> > 
> > Some notes on driver oddness:
> >  - veth observed the grace period before calling netif_napi_del()
> >    but that should not matter
> >  - myri10ge observed normal RCU flavor
> >  - bnx2x and enic did not actually observe the grace period
> >    (unless they did so implicitly)
> >  - virtio_net and enic only unhashed Rx NAPIs
> > 
> > The last two points seem to indicate that the calls to
> > napi_hash_del() were a leftover rather than an optimization.
> > Regardless, it's easy enough to correct them.
> > 
> > This patch may introduce extra synchronize_net() calls for
> > interfaces which set NAPI_STATE_NO_BUSY_POLL and depend on
> > free_netdev() to call netif_napi_del(). This seems inevitable
> > since we want to use RCU for netpoll dev->napi_list traversal,
> > and almost no drivers set IFF_DISABLE_NETPOLL.
> > 
> > Signed-off-by: Jakub Kicinski <kuba@...nel.org>  
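
For illustration only, the driver-facing pattern described above looks
roughly like the sketch below. The driver struct and field names
(my_priv, napis, num_napis) are invented; only __netif_napi_del() and
synchronize_net() come from the patch.

/* Tear several NAPIs down under one RCU grace period instead of
 * paying one synchronize_net() per netif_napi_del() call.
 */
static void my_driver_del_napis(struct my_priv *priv)
{
	int i;

	/* Unhash and unlink each NAPI; no RCU wait happens here. */
	for (i = 0; i < priv->num_napis; i++)
		__netif_napi_del(&priv->napis[i]);

	/* A single grace period covers every deletion above. */
	synchronize_net();

	/* Only now is it safe to free the memory backing the NAPIs. */
}
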
> 
> After this patch, gro_cells_destroy() became damn slow
> on hosts with a lot of cores.
> 
> After your change, we have one additional synchronize_net() per cpu as
> you stated in your changelog.

Sorry :S  I hope it didn't waste too much of your time...

> gro_cells_init() sets NAPI_STATE_NO_BUSY_POLL, and that used to be enough
> to avoid one synchronize_net() call per netif_napi_del().
> 
> I am not yet convinced the synchronize_net() is needed, since these
> NAPI structs are not involved in busy polling. I will test something
> like the diff below:

IDK how this squares against netpoll, though?

> diff --git a/net/core/gro_cells.c b/net/core/gro_cells.c
> index e095fb871d9120787bfdf62149f4d82e0e3b0a51..8cfa6ce0738977290cc9f76a3f5daa617308e107 100644
> --- a/net/core/gro_cells.c
> +++ b/net/core/gro_cells.c
> @@ -99,9 +99,10 @@ void gro_cells_destroy(struct gro_cells *gcells)
>                 struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);
>  
>                 napi_disable(&cell->napi);
> -               netif_napi_del(&cell->napi);
> +               __netif_napi_del(&cell->napi);
>                 __skb_queue_purge(&cell->napi_skbs);
>         }
> +       synchronize_net();
>         free_percpu(gcells->cells);
>         gcells->cells = NULL;
>  }
> 
> 
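
For readability, here is roughly how gro_cells_destroy() reads with that
hunk applied. The lines not visible in the hunk (the NULL check and the
loop header) are reconstructed from memory and may differ slightly from
the tree.

void gro_cells_destroy(struct gro_cells *gcells)
{
	int i;

	if (!gcells->cells)
		return;
	for_each_possible_cpu(i) {
		struct gro_cell *cell = per_cpu_ptr(gcells->cells, i);

		napi_disable(&cell->napi);
		/* Unhash/unlink only; the RCU wait is deferred. */
		__netif_napi_del(&cell->napi);
		__skb_queue_purge(&cell->napi_skbs);
	}
	/* One grace period for all cells, instead of one synchronize_net()
	 * per CPU inside netif_napi_del().
	 */
	synchronize_net();
	free_percpu(gcells->cells);
	gcells->cells = NULL;
}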
