Message-ID: <20180523172715.2d16d2c9@redhat.com>
Date:   Wed, 23 May 2018 17:27:15 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     John Fastabend <john.fastabend@...il.com>
Cc:     netdev@...r.kernel.org, Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Christoph Hellwig <hch@...radead.org>,
        Björn Töpel <bjorn.topel@...el.com>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        makita.toshiaki@....ntt.co.jp, brouer@...hat.com
Subject: Re: [bpf-next V4 PATCH 6/8] xdp: change ndo_xdp_xmit API to support
 bulking

On Wed, 23 May 2018 07:42:14 -0700
John Fastabend <john.fastabend@...il.com> wrote:

> On 05/18/2018 06:35 AM, Jesper Dangaard Brouer wrote:
>  [...]  
> 
> Couple suggestions for some optimizations/improvements but otherwise
> looks good to me.
> 
> Thanks,
> John
> 
[...]
> > diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > index 5efa68de935b..9b698c5acd05 100644
> > --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> > @@ -3664,14 +3664,19 @@ netdev_tx_t i40e_lan_xmit_frame(struct sk_buff *skb, struct net_device *netdev)
> >   * @dev: netdev
> >   * @xdp: XDP buffer
> >   *
> > - * Returns Zero if sent, else an error code
> > + * Returns number of frames successfully sent. Frames that fail are
> > + * freed via the XDP return API.
> > + *
> > + * For error cases, a negative errno code is returned and no frames
> > + * are transmitted (caller must handle freeing frames).
> >   **/
> > -int i40e_xdp_xmit(struct net_device *dev, struct xdp_frame *xdpf)
> > +int i40e_xdp_xmit(struct net_device *dev, int n, struct xdp_frame **frames)
> >  {
> >  	struct i40e_netdev_priv *np = netdev_priv(dev);
> >  	unsigned int queue_index = smp_processor_id();
> >  	struct i40e_vsi *vsi = np->vsi;
> > -	int err;
> > +	int drops = 0;
> > +	int i;
> >  
> >  	if (test_bit(__I40E_VSI_DOWN, vsi->state))
> >  		return -ENETDOWN;
> > @@ -3679,11 +3684,18 @@ int i40e_xdp_xmit(struct net_device *dev, struct xdp_frame *xdpf)
> >  	if (!i40e_enabled_xdp_vsi(vsi) || queue_index >= vsi->num_queue_pairs)
> >  		return -ENXIO;
> >  
> > -	err = i40e_xmit_xdp_ring(xdpf, vsi->xdp_rings[queue_index]);
> > -	if (err != I40E_XDP_TX)
> > -		return -ENOSPC;
> > +	for (i = 0; i < n; i++) {
> > +		struct xdp_frame *xdpf = frames[i];
> > +		int err;
> >  
> > -	return 0;
> > +		err = i40e_xmit_xdp_ring(xdpf, vsi->xdp_rings[queue_index]);
> > +		if (err != I40E_XDP_TX) {
> > +			xdp_return_frame_rx_napi(xdpf);
> > +			drops++;
> > +		}
> > +	}
> > +  
> 
> Perhaps not needed in this series, but I wonder how useful it is to hit
> the ring with the remaining frames after the first error. The error will
> typically be due to the TX queue being full. In this case it might make
> more sense to just drop the entire train of frames vs beating up an
> already full queue.

Yes, that would be an optimization, but I think we can do that in
another series.
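For reference, a minimal userspace sketch of that early-bail idea (a toy
model, not i40e code: toy_xmit() stands in for i40e_xmit_xdp_ring() failing
on a full ring) could look like:

```c
#include <assert.h>

/* Hypothetical stand-in for the TX ring: 'space' descriptors left. */
struct toy_ring { int space; };

/* Mimics the per-frame xmit: consume one slot or report a full ring. */
static int toy_xmit(struct toy_ring *ring)
{
	if (ring->space == 0)
		return -1;	/* ring full */
	ring->space--;
	return 0;		/* frame queued */
}

/* Early-bail variant: once the ring is full, drop the rest of the bulk
 * instead of probing the full ring for every remaining frame. */
static int toy_xdp_xmit_bulk(struct toy_ring *ring, int n)
{
	int drops = 0, i;

	for (i = 0; i < n; i++) {
		if (toy_xmit(ring) != 0) {
			/* here the driver would xdp_return_frame_rx_napi()
			 * each remaining frame */
			drops += n - i;
			break;
		}
	}
	return n - drops;	/* frames actually sent */
}
```

This keeps the same "return frames sent" contract as the patch, it only
changes how the tail of the bulk is handled after the first failure.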

This patchset actually opens up a lot of follow-up optimizations.
I imagine/plan to optimize i40e_xmit_xdp_ring() further, e.g. by
implementing a bulking variant (e.g. check I40E_DESC_UNUSED and update
xdp_ring->next_to_use once per bulk, map several DMA pages in one
call, only call smp_wmb() once per bulk, etc.).
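As a rough illustration of the amortization (again a toy model, not the
actual driver code: toy_write_tail() stands in for the tail/doorbell write
plus barrier we want to issue once per bulk instead of once per frame):

```c
#include <assert.h>

/* Toy descriptor ring; 'doorbell_writes' counts the expensive tail
 * updates (writel() + barrier in the real driver) we want to amortize. */
struct toy_txring {
	int next_to_use;
	int size;
	int doorbell_writes;
};

static void toy_write_tail(struct toy_txring *r)
{
	/* stands in for the smp_wmb()/writel() pair */
	r->doorbell_writes++;
}

/* Per-frame variant: one tail write per frame. */
static void xmit_per_frame(struct toy_txring *r, int n)
{
	for (int i = 0; i < n; i++) {
		r->next_to_use = (r->next_to_use + 1) % r->size;
		toy_write_tail(r);
	}
}

/* Bulked variant: fill all descriptors first, then one tail write. */
static void xmit_bulked(struct toy_txring *r, int n)
{
	for (int i = 0; i < n; i++)
		r->next_to_use = (r->next_to_use + 1) % r->size;
	toy_write_tail(r);
}
```

Same descriptors end up on the ring either way; the bulked variant just
pays the barrier/MMIO cost once per ndo_xdp_xmit() call.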

> > +	return n - drops;
> >  }
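To restate the new return-value contract from the caller's side (a
hypothetical helper for illustration, not code from the series):

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical caller-side helper for the new ndo_xdp_xmit() contract:
 * 'ret' is the driver's return value for a bulk of 'n' frames.  On a
 * negative errno nothing was sent and the caller must free all n
 * frames; on success the driver already freed the (n - ret) failures
 * via the XDP return API, so the caller frees nothing. */
static int frames_caller_must_free(int ret, int n)
{
	if (ret < 0)
		return n;	/* e.g. -ENETDOWN / -ENXIO */
	return 0;		/* failed frames freed by the driver */
}
```

So a caller tracking a drop counter would add (n - ret) on success and n
on a negative return, without ever double-freeing a frame.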


-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
