Message-ID: <20201104111902.GA11993@lore-desk>
Date: Wed, 4 Nov 2020 12:19:02 +0100
From: Lorenzo Bianconi <lorenzo.bianconi@...hat.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: Lorenzo Bianconi <lorenzo@...nel.org>, netdev@...r.kernel.org,
bpf@...r.kernel.org, davem@...emloft.net, kuba@...nel.org,
ilias.apalodimas@...aro.org, Ioana Ciornei <ioana.ciornei@....com>,
Ioana Radulescu <ruxandra.radulescu@....com>
Subject: Re: [PATCH v3 net-next 1/5] net: xdp: introduce bulking for xdp tx
return path
> On Wed, 4 Nov 2020 11:22:54 +0100
> Lorenzo Bianconi <lorenzo@...nel.org> wrote:
>
[...]
> > +/* XDP bulk APIs introduce a defer/flush mechanism to return
> > + * pages belonging to the same xdp_mem_allocator object
> > + * (identified via the mem.id field) in bulk to optimize
> > + * I-cache and D-cache.
> > + * The bulk queue size is set to 16 to be aligned to how
> > + * XDP_REDIRECT bulking works. The bulk is flushed when
>
> If this is connected, then why have you not redefined DEV_MAP_BULK_SIZE?
>
> Cc. DPAA2 maintainers as they use this define in their drivers.
> You want to make sure these drivers stay flexible enough for future changes.
>
> Like:
>
> diff --git a/include/net/xdp.h b/include/net/xdp.h
> index 3814fb631d52..44440a36f96f 100644
> --- a/include/net/xdp.h
> +++ b/include/net/xdp.h
> @@ -245,6 +245,6 @@ bool xdp_attachment_flags_ok(struct xdp_attachment_info *info,
> void xdp_attachment_setup(struct xdp_attachment_info *info,
> struct netdev_bpf *bpf);
>
> -#define DEV_MAP_BULK_SIZE 16
> +#define DEV_MAP_BULK_SIZE XDP_BULK_QUEUE_SIZE
My idea was to address it in a separate patch, but if you prefer I can
merge this change into v4.
>
> #endif /* __LINUX_NET_XDP_H__ */
>
>
> > + * it is full or when mem.id changes.
> > + * xdp_frame_bulk is usually stored/allocated on the function
> > + * call-stack to avoid locking penalties.
> > + */
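
For readers without the rest of the patch at hand: the bq fields used
below imply a structure roughly like the following (reconstructed from
the usage in these two functions, not quoted verbatim from the patch):

    #define XDP_BULK_QUEUE_SIZE     16

    struct xdp_frame_bulk {
            int count;                      /* frames currently queued */
            void *xa;                       /* cached xdp_mem_allocator */
            void *q[XDP_BULK_QUEUE_SIZE];   /* queued xdpf->data pointers */
    };
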
> > +void xdp_flush_frame_bulk(struct xdp_frame_bulk *bq)
> > +{
> > + struct xdp_mem_allocator *xa = bq->xa;
> > + int i;
> > +
> > + if (unlikely(!xa))
> > + return;
> > +
> > + for (i = 0; i < bq->count; i++) {
> > + struct page *page = virt_to_head_page(bq->q[i]);
> > +
> > + page_pool_put_full_page(xa->page_pool, page, false);
> > + }
> > + bq->count = 0;
> > +}
> > +EXPORT_SYMBOL_GPL(xdp_flush_frame_bulk);
> > +
> > +void xdp_return_frame_bulk(struct xdp_frame *xdpf,
> > + struct xdp_frame_bulk *bq)
> > +{
> > + struct xdp_mem_info *mem = &xdpf->mem;
> > + struct xdp_mem_allocator *xa;
> > +
> > + if (mem->type != MEM_TYPE_PAGE_POOL) {
> > + __xdp_return(xdpf->data, &xdpf->mem, false);
> > + return;
> > + }
> >
>
> I cannot make up my mind: it would be a micro-optimization to move
> this if-statement to include/net/xdp.h, but it would make the code
> harder to read/follow, and the call you are replacing,
> xdp_return_frame(), is also in xdp.c with the same call to
> __xdp_return(). Let's keep it as-is. (We can follow up with
> micro-optimizations.)
ack
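
For the record, moving it would look more or less like the untested
sketch below (hypothetical, not part of this series). As you note,
__xdp_return() currently lives in net/core/xdp.c, so it would have to
be made visible to the header first:

    /* Hypothetical inline fast path in include/net/xdp.h: only the
     * mem-type check is inlined; the page_pool path stays out-of-line.
     * The existing xdp_return_frame_bulk() could then drop its own
     * mem->type check.
     */
    static __always_inline void
    xdp_return_frame_bulk_inline(struct xdp_frame *xdpf,
                                 struct xdp_frame_bulk *bq)
    {
            if (xdpf->mem.type != MEM_TYPE_PAGE_POOL) {
                    /* needs __xdp_return() exposed outside net/core/xdp.c */
                    __xdp_return(xdpf->data, &xdpf->mem, false);
                    return;
            }
            xdp_return_frame_bulk(xdpf, bq);
    }
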
Regards,
Lorenzo
>
>
> > + rcu_read_lock();
> > +
> > + xa = bq->xa;
> > + if (unlikely(!xa)) {
> > + xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> > + bq->count = 0;
> > + bq->xa = xa;
> > + }
> > +
> > + if (bq->count == XDP_BULK_QUEUE_SIZE)
> > + xdp_flush_frame_bulk(bq);
> > +
> > + if (mem->id != xa->mem.id) {
> > + xdp_flush_frame_bulk(bq);
> > + bq->xa = rhashtable_lookup(mem_id_ht, &mem->id, mem_id_rht_params);
> > + }
> > +
> > + bq->q[bq->count++] = xdpf->data;
> > +
> > + rcu_read_unlock();
> > +}
> > +EXPORT_SYMBOL_GPL(xdp_return_frame_bulk);
> > +
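
To make the intended usage concrete, a driver's tx completion path
would do something along these lines. Illustrative sketch only:
drv_tx_ring and drv_next_completed_frame() are made-up placeholders,
and the zero-init relies on xdp_return_frame_bulk() doing the first
mem_id_ht lookup when bq->xa is NULL:

    /* Sketch of a driver tx-completion loop using the bulk API.
     * The bulk queue lives on the call-stack, so no locking is needed.
     */
    static void drv_tx_clean(struct drv_tx_ring *ring)      /* placeholder */
    {
            struct xdp_frame_bulk bq = { 0 };   /* xa = NULL, count = 0 */
            struct xdp_frame *xdpf;

            while ((xdpf = drv_next_completed_frame(ring))) /* placeholder */
                    xdp_return_frame_bulk(xdpf, &bq);

            /* return whatever is still queued for the last mem.id */
            xdp_flush_frame_bulk(&bq);
    }
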
> > void xdp_return_buff(struct xdp_buff *xdp)
> > {
> > __xdp_return(xdp->data, &xdp->rxq->mem, true);
>
>
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
>