Message-ID: <20200110193651.GA14384@localhost.localdomain>
Date:   Fri, 10 Jan 2020 20:36:51 +0100
From:   Lorenzo Bianconi <lorenzo@...nel.org>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
        Ilias Apalodimas <ilias.apalodimas@...aro.org>,
        netdev@...r.kernel.org, davem@...emloft.net
Subject: Re: [PATCH v2 net-next] net: socionext: get rid of huge dma sync in
 netsec_alloc_rx_data

> On Fri, 10 Jan 2020 19:19:40 +0100
> Lorenzo Bianconi <lorenzo.bianconi@...hat.com> wrote:
> 
> > > On Fri, 10 Jan 2020 16:34:13 +0100
> > > Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> > >   
> > > > > On Fri, Jan 10, 2020 at 02:57:44PM +0100, Lorenzo Bianconi wrote:    
> > > > > > The Socionext driver can run on both DMA-coherent and non-coherent
> > > > > > devices. Get rid of the huge dma_sync_single_for_device() in
> > > > > > netsec_alloc_rx_data() since the driver can now let the page_pool
> > > > > > API manage the needed DMA sync.
> > > > > > 
> > > > > > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > > > > > ---
> > > > > > Changes since v1:
> > > > > > - rely on original frame size for dma sync
> > > > > > ---
> > > > > >  drivers/net/ethernet/socionext/netsec.c | 43 +++++++++++++++----------
> > > > > >  1 file changed, 26 insertions(+), 17 deletions(-)
> > > > > >     
> > > > 
> > > > [...]
> > > >   
> > > > > > @@ -883,6 +881,8 @@ static u32 netsec_xdp_xmit_back(struct netsec_priv *priv, struct xdp_buff *xdp)
> > > > > >  static u32 netsec_run_xdp(struct netsec_priv *priv, struct bpf_prog *prog,
> > > > > >  			  struct xdp_buff *xdp)
> > > > > >  {
> > > > > > +	struct netsec_desc_ring *dring = &priv->desc_ring[NETSEC_RING_RX];
> > > > > > +	unsigned int len = xdp->data_end - xdp->data;    
> > > > > 
> > > > > We need to account for XDP expanding the headers as well here. 
> > > > > So something like max(xdp->data_end(before bpf), xdp->data_end(after bpf)) -
> > > > > xdp->data (original)    
> > > > 
> > > > correct, the corner case that is not covered at the moment is when data_end is
> > > > moved forward by the bpf program. I will fix it in v3. Thx  
> > > 
> > > Maybe we can simplify and do:
> > > 
> > >  void *data_start = NETSEC_RXBUF_HEADROOM + xdp->data_hard_start;
> > >  unsigned int len = xdp->data_end - data_start;
> > >   
> > 
> > Hi Jesper,
> > 
> > please correct me if I am wrong but this seems to me the same as v2.
> 
> No, this is v2, where you do:
>    len = xdp->data_end - xdp->data;

I mean in the solution you proposed you set (before running the bpf program):

len = xdp->data_end - data_start
where:
data_start = NETSEC_RXBUF_HEADROOM + xdp->data_hard_start

that is equivalent to what I did in v2 (before running the bpf program):
len = xdp->data_end - xdp->data

since:
xdp->data = xdp->data_hard_start + NETSEC_RXBUF_HEADROOM
(set in netsec_process_rx())

Am I missing something?

> 
> Maybe you mean v1? where you calc len like:
>    len = xdp->data_end - xdp->data_hard_start;
>    
> 
> > The leftover corner case is if xdp->data_end is moved 'forward' by
> > the bpf program (I guess it is possible, right?). In this case we
> > will not sync xdp->data_end(new) - xdp->data_end(old)
> 
> Currently xdp->data_end can only shrink (but I plan to extend it). Yes,
> this corner case is left, but I don't think we need to handle it.  When
> a BPF prog shrinks xdp->data_end, then I believe it cannot change the
> shrunk part any longer.
> 

Ack, fine by me.

Regards,
Lorenzo

> 
> > 
> > > The cache-lines that need to be flushed/synced for_device is the area
> > > used by NIC DMA engine.  We know it will always start at a certain
> > > point (given driver configured hardware to this).
> 
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 

