Date:   Tue, 19 Nov 2019 17:23:40 +0200
From:   Ilias Apalodimas <ilias.apalodimas@...aro.org>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
        davem@...emloft.net, lorenzo.bianconi@...hat.com,
        mcroce@...hat.com, jonathan.lemon@...il.com
Subject: Re: [PATCH v4 net-next 2/3] net: page_pool: add the possibility to
 sync DMA memory for device

On Tue, Nov 19, 2019 at 04:11:09PM +0100, Jesper Dangaard Brouer wrote:
> On Tue, 19 Nov 2019 13:33:36 +0200
> Ilias Apalodimas <ilias.apalodimas@...aro.org> wrote:
> 
> > > > diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> > > > index dfc2501c35d9..4f9aed7bce5a 100644
> > > > --- a/net/core/page_pool.c
> > > > +++ b/net/core/page_pool.c
> > > > @@ -47,6 +47,13 @@ static int page_pool_init(struct page_pool *pool,
> > > >  	    (pool->p.dma_dir != DMA_BIDIRECTIONAL))
> > > >  		return -EINVAL;
> > > >  
> > > > +	/* In order to request DMA-sync-for-device the page needs to
> > > > +	 * be mapped
> > > > +	 */
> > > > +	if ((pool->p.flags & PP_FLAG_DMA_SYNC_DEV) &&
> > > > +	    !(pool->p.flags & PP_FLAG_DMA_MAP))
> > > > +		return -EINVAL;
> > > > +  
> > > 
> > > I like that you have moved this check to setup time.
> > > 
> > > There are two other parameters the DMA_SYNC_DEV depend on:
> > > 
> > >  	struct page_pool_params pp_params = {
> > >  		.order = 0,
> > > -		.flags = PP_FLAG_DMA_MAP,
> > > +		.flags = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
> > >  		.pool_size = size,
> > >  		.nid = cpu_to_node(0),
> > >  		.dev = pp->dev->dev.parent,
> > >  		.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
> > > +		.offset = pp->rx_offset_correction,
> > > +		.max_len = MVNETA_MAX_RX_BUF_SIZE,
> > >  	};
> > > 
> > > Can you add a check, that .max_len must not be zero.  The reason is
> > > that I can easily see people misconfiguring this.  And the effect is
> > > that the DMA-sync-for-device is essentially disabled, without user
> > > realizing this. The not-realizing part is really bad, especially
> > > because bugs that can occur from this are very rare and hard to catch.  
> > 
> > +1, we sync based on the min() of those two values
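As a sketch, the suggested setup-time validation could look something like the following trimmed-down userspace model (`validate_params`, `pp_params`, and the flag values are illustrative stand-ins; the real definitions live in include/net/page_pool.h):

```c
#include <errno.h>

/* Illustrative stand-ins for the real kernel flag definitions. */
#define PP_FLAG_DMA_MAP		(1 << 0)
#define PP_FLAG_DMA_SYNC_DEV	(1 << 1)

struct pp_params {
	unsigned int flags;
	unsigned int max_len;	/* largest size the HW may write per buffer */
};

/* Setup-time validation in the spirit of page_pool_init(): syncing for
 * device requires a mapped page, and a zero max_len would silently turn
 * every sync into a no-op, so reject both misconfigurations up front. */
static int validate_params(const struct pp_params *p)
{
	if ((p->flags & PP_FLAG_DMA_SYNC_DEV) &&
	    !(p->flags & PP_FLAG_DMA_MAP))
		return -EINVAL;

	if ((p->flags & PP_FLAG_DMA_SYNC_DEV) && p->max_len == 0)
		return -EINVAL;

	return 0;
}
```

Catching max_len == 0 at init time costs nothing on the fast path.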
> > 
> > > 
> > > I'm up for discussing if there should be a similar check for .offset.
> > > IMHO we should also check .offset is configured, and then be open to
> > > remove this check once a driver user want to use offset=0.  Does the
> > > mvneta driver already have a use-case for this (in non-XDP mode)?  
> > 
> > Not sure about this, since it does not break anything apart from a
> > performance hit.
> 
> I don't follow the 'performance hit' comment.  This is checked at setup
> time (page_pool_init), thus it doesn't affect runtime.

If the offset is 0, you'll end up syncing a few unneeded bytes (whatever
headers the buffer has in front, which don't need syncing).
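To make the cost concrete, here is a small hypothetical model (`wasted_bytes` is not a kernel function): with a real headroom of, say, 64 bytes but a configured offset of 0, every recycled buffer syncs 64 bytes the device never touched.

```c
/* Simplified model of the per-buffer sync range; the real code calls
 * dma_sync_single_range_for_device(dev, dma_addr, offset, size, dir).
 * `wasted_bytes` is a hypothetical helper for illustration. */
static unsigned int wasted_bytes(unsigned int configured_offset,
				 unsigned int actual_headroom)
{
	/* When the configured offset is smaller than the real headroom,
	 * the bytes in between still get synced even though the device
	 * never writes them. */
	return actual_headroom > configured_offset ?
	       actual_headroom - configured_offset : 0;
}
```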

> 
> This is a generic optimization principle that I use a lot. Moving code
> checks out of fast-path, and instead do more at setup/load-time, or
> even at shutdown-time (like we do for page_pool e.g. check refcnt
> invariance).  This principle is also heavily used by BPF, that adjust
> BPF-instructions at load-time.  It is core to getting the performance
> we need for high-speed networking.

The offset itself, though, does affect the fast-path code.

What I am worried about is that XDP and SKB pools will have different needs for
offsets. In the netsec driver I deal with this by reserving the same headroom
whether the packet becomes an SKB or an XDP buffer. If we check the offset, we
are practically forcing people to do something similar.
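A sketch of that "same headroom for both paths" approach (the macro values below are the common kernel defaults, assumed here for illustration; `shared_rx_offset` is a hypothetical helper):

```c
/* Common kernel values, treated here as assumptions. */
#define XDP_PACKET_HEADROOM	256
#define NET_SKB_PAD		32

/* One pool serving both paths: reserve the larger of the two headrooms
 * for every buffer, so the offset is valid whether the frame ends up
 * as an XDP buffer or as an skb. */
static unsigned int shared_rx_offset(void)
{
	return XDP_PACKET_HEADROOM > NET_SKB_PAD ?
	       XDP_PACKET_HEADROOM : NET_SKB_PAD;
}
```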

Thanks
/Ilias
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 
