Date:   Wed, 28 Oct 2020 12:43:01 +0100
From:   Lorenzo Bianconi <lorenzo@...nel.org>
To:     Jesper Dangaard Brouer <brouer@...hat.com>
Cc:     netdev@...r.kernel.org, bpf@...r.kernel.org,
        lorenzo.bianconi@...hat.com, davem@...emloft.net, kuba@...nel.org,
        ilias.apalodimas@...aro.org
Subject: Re: [PATCH net-next 1/4] net: xdp: introduce bulking for xdp tx
 return path

> On Tue, 27 Oct 2020 20:04:07 +0100
> Lorenzo Bianconi <lorenzo@...nel.org> wrote:
> 
> > Introduce bulking capability in the xdp tx return path (XDP_REDIRECT).
> > xdp_return_frame is usually run inside the driver NAPI tx completion
> > loop, so it is possible to batch the returns.
> > The current implementation considers only the page_pool memory model.
> > Convert the mvneta driver to the xdp_return_frame_bulk APIs.
> > 
> > Suggested-by: Jesper Dangaard Brouer <brouer@...hat.com>

Hi Jesper,

thanks for the review.

> 
> I think you/we have to explain better in this commit message what the
> idea/concept behind this bulk return is.  Or even explain this as a
> comment above "xdp_return_frame_bulk".
> 
> Maybe add/append the text below to the commit message:
> 
> The bulk API introduced is a defer-and-flush API that will defer
> the return if the xdp_mem_allocator object is the same, identified
> via the mem.id field (xdp_mem_info).  Thus, the flush operation will
> operate on the same xdp_mem_allocator object.
> 
> The bulk queue size of 16 is no coincidence.  It is connected to how
> XDP redirect will bulk xmit (up to 16) frames.  Thus, the idea is for the
> API to find these boundaries (via the mem.id match), which is optimal for
> both the I-cache and D-cache for the memory allocator code and object.
> 
> The data structure (xdp_frame_bulk) used for the deferred elements is
> stored/allocated on the function call-stack, which allows lock-free
> access.

ack, I will add it in v2
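
Maybe I can also sketch the expected driver usage in the commit message.
Something along these lines for the NAPI tx completion loop (tx_done and
get_tx_frame() are just illustrative placeholders, not real driver code):

	struct xdp_frame_bulk bq;

	bq.xa = NULL;	/* first frame triggers the allocator lookup */
	bq.count = 0;

	while (tx_done--) {
		struct xdp_frame *xdpf = get_tx_frame(); /* illustrative */

		xdp_return_frame_bulk(xdpf, &bq);
	}
	/* return any still-deferred frames before leaving the loop */
	xdp_flush_frame_bulk(&bq);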

> 
> 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
> > ---
> [...]
> > diff --git a/include/net/xdp.h b/include/net/xdp.h
> > index 3814fb631d52..9567110845ef 100644
> > --- a/include/net/xdp.h
> > +++ b/include/net/xdp.h
> > @@ -104,6 +104,12 @@ struct xdp_frame {
> >  	struct net_device *dev_rx; /* used by cpumap */
> >  };
> >  
> > +#define XDP_BULK_QUEUE_SIZE	16
> 
> Maybe "#define DEV_MAP_BULK_SIZE 16" should be defined as
> XDP_BULK_QUEUE_SIZE, to express the described connection.

ack, I guess we can fix it in a follow-up patch
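
i.e. the follow-up would just be something like (sketch):

	#define XDP_BULK_QUEUE_SIZE	16
	#define DEV_MAP_BULK_SIZE	XDP_BULK_QUEUE_SIZE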

> 
> > +struct xdp_frame_bulk {
> > +	void *q[XDP_BULK_QUEUE_SIZE];
> > +	int count;
> > +	void *xa;
> 
> Just a hunch (not benchmarked), but I think it will be more efficient to
> place 'count' and '*xa' above the '*q' array.  (It might not matter at
> all, as we cannot control the start alignment when this is on the
> stack.)

ack. I will fix in v2.
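
i.e. for v2 (sketch):

	struct xdp_frame_bulk {
		int count;			/* filled slots in q[] */
		void *xa;			/* current xdp_mem_allocator */
		void *q[XDP_BULK_QUEUE_SIZE];
	};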

> 
> > +};
> [...]
> 
> > diff --git a/net/core/xdp.c b/net/core/xdp.c
> > index 48aba933a5a8..93eabd789246 100644
> > --- a/net/core/xdp.c
> > +++ b/net/core/xdp.c
> > @@ -380,6 +380,57 @@ void xdp_return_frame_rx_napi(struct xdp_frame *xdpf)
> >  }
> >  EXPORT_SYMBOL_GPL(xdp_return_frame_rx_napi);
> >  
> > +void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
> > +{
> > +	struct xdp_mem_allocator *xa = bq->xa;
> > +	int i;
> > +
> > +	if (unlikely(!xa))
> > +		return;
> > +
> > +	for (i = 0; i < bq->count; i++) {
> > +		struct page *page = virt_to_head_page(bq->q[i]);
> > +
> > +		page_pool_put_full_page(xa->page_pool, page, false);
> > +	}
> > +	bq->count = 0;
> > +}
> > +EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk);
> > +
> 
> Wondering if we should have a comment that explains the intent and idea
> behind this function?
> 
> /* Defers return when frame belongs to same mem.id as previous frame */
> 

ack.
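
e.g. for v2 I will put it right above the function (sketch):

	/* Defers return when frame belongs to same mem.id as previous frame */
	void xdp_return_frame_bulk(struct xdp_frame *xdpf,
				   struct xdp_frame_bulk *bq)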

Regards,
Lorenzo

> > +void xdp_return_frame_bulk(struct xdp_frame *xdpf,
> > +			   struct xdp_frame_bulk *bq)
> > +{
> > +	struct xdp_mem_info *mem = &xdpf->mem;
> > +	struct xdp_mem_allocator *xa, *nxa;
> > +
> > +	if (mem->type != MEM_TYPE_PAGE_POOL) {
> > +		__xdp_return(xdpf->data, &xdpf->mem, false);
> > +		return;
> > +	}
> > +
> > +	rcu_read_lock();
> > +
> > +	xa = bq->xa;
> > +	if (unlikely(!xa || mem->id != xa->mem.id)) {
> > +		nxa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> > +		if (unlikely(!xa)) {
> > +			bq->count = 0;
> > +			bq->xa = nxa;
> > +			xa = nxa;
> > +		}
> > +	}
> > +
> > +	if (mem->id != xa->mem.id || bq->count == XDP_BULK_QUEUE_SIZE)
> > +		xdp_flush_frame_bulk(bq);
> > +
> > +	bq->q[bq->count++] = xdpf->data;
> > +	if (mem->id != xa->mem.id)
> > +		bq->xa = nxa;
> > +
> > +	rcu_read_unlock();
> > +}
> > +EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
> 
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer
> 
