Date:   Wed, 23 May 2018 13:12:09 +0200
From:   Jesper Dangaard Brouer <brouer@...hat.com>
To:     Daniel Borkmann <daniel@...earbox.net>
Cc:     netdev@...r.kernel.org, Daniel Borkmann <borkmann@...earbox.net>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>,
        Christoph Hellwig <hch@...radead.org>,
        Björn Töpel <bjorn.topel@...el.com>,
        Magnus Karlsson <magnus.karlsson@...el.com>,
        makita.toshiaki@....ntt.co.jp, brouer@...hat.com
Subject: Re: [bpf-next V4 PATCH 1/8] bpf: devmap introduce dev_map_enqueue


On Wed, 23 May 2018 11:34:22 +0200 Daniel Borkmann <daniel@...earbox.net> wrote:

> > +int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp)
> > +{
> > +	struct net_device *dev = dst->dev;
> > +	struct xdp_frame *xdpf;
> > +	int err;
> > +
> > +	if (!dev->netdev_ops->ndo_xdp_xmit)
> > +		return -EOPNOTSUPP;
> > +
> > +	xdpf = convert_to_xdp_frame(xdp);
> > +	if (unlikely(!xdpf))
> > +		return -EOVERFLOW;
> > +
> > +	/* TODO: implement a bulking/enqueue step later */
> > +	err = dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
> > +	if (err)
> > +		return err;
> > +
> > +	return 0;  
> 
> The 'err' variable is just unnecessary; let's just do:
> 
>   return dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
> 
> Later after the other patches this becomes:
> 
>   return bq_enqueue(dst, xdpf, dev_rx);

I agree, I'll fix this up in V5.
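
For reference, with that simplification folded in, the function would look
like this (a sketch derived from the snippet quoted above, not the actual
V5 patch):

	int dev_map_enqueue(struct bpf_dtab_netdev *dst, struct xdp_buff *xdp)
	{
		struct net_device *dev = dst->dev;
		struct xdp_frame *xdpf;

		/* The target device must implement the XDP transmit hook */
		if (!dev->netdev_ops->ndo_xdp_xmit)
			return -EOPNOTSUPP;

		/* Convert the (driver-local) xdp_buff into an xdp_frame
		 * that can outlive the driver's RX processing loop.
		 */
		xdpf = convert_to_xdp_frame(xdp);
		if (unlikely(!xdpf))
			return -EOVERFLOW;

		/* TODO: implement a bulking/enqueue step later */
		return dev->netdev_ops->ndo_xdp_xmit(dev, xdpf);
	}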

After this patchset gets applied, there are also other opportunities to
do similar micro-optimizations.  I have a branch (on top of this
patchset) which does such micro-optimizations (including this one), and
I've looked at the resulting asm-code layout.  However, my benchmarks
only show a ~2 nanosecond improvement from all of these
micro-optimizations combined (where the focus is to reduce the
asm-code I-cache footprint of xdp_do_redirect).

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer
