Date:   Tue, 22 Aug 2017 11:17:20 -0700
From:   John Fastabend <john.fastabend@...il.com>
To:     Michael Chan <michael.chan@...adcom.com>,
        Jesper Dangaard Brouer <brouer@...hat.com>
CC:     "xdp-newbies@...r.kernel.org" <xdp-newbies@...r.kernel.org>,
        Daniel Borkmann <borkmann@...earbox.net>,
        Andy Gospodarek <andy@...yhouse.net>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Paweł Staszewski <pstaszewski@...are.pl>,
        Alexander H Duyck <alexander.h.duyck@...el.com>
Subject: Re: XDP redirect measurements, gotchas and tracepoints

On 08/22/2017 11:02 AM, Michael Chan wrote:
> On Mon, Aug 21, 2017 at 12:25 PM, Jesper Dangaard Brouer
> <brouer@...hat.com> wrote:
>>
>> I've been playing with the latest XDP_REDIRECT feature, which was
>> accepted into net-next (for ixgbe); see merge commit [1].
>>  [1] https://git.kernel.org/davem/net-next/c/6093ec2dc31
>>
> 
> Just catching up on XDP_REDIRECT, and I have a very basic question.  The
> ingress device passes the XDP buffer to the egress device for XDP
> redirect transmission.  When the egress device has transmitted the
> packet, is it supposed to just free the buffer?  Or is it supposed to
> be recycled?
> 
> In XDP_TX, the buffer is recycled back to the rx ring.
> 

With XDP_REDIRECT we must "just free the buffer"; in ixgbe this means
page_frag_free() on the data. There is no way to know where the xdp
buffer came from; it could be from a different NIC, for example.
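Roughly, the TX completion path ends up looking like this (a simplified
sketch modeled on ixgbe's clean-up routine; field names approximate):

  /* Sketch of TX completion on an XDP ring, modeled on ixgbe's
   * ixgbe_clean_tx_irq() (simplified). An XDP ring buffer holds a
   * raw data pointer rather than an skb, so completion just drops
   * the page-fragment reference; the page may well belong to a
   * different NIC's RX ring.
   */
  if (ring_is_xdp(tx_ring))
          page_frag_free(tx_buffer->data);
  else
          napi_consume_skb(tx_buffer->skb, napi_budget);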

However, with how ixgbe is coded up, recycling will still work as long
as the memory is freed before the RX ring tries to use it again. In
normal usage this should be the case. And if we are over-running a
device, it doesn't really hurt to slow down the sender a bit.
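The check that makes this safe is the reuse test on the RX side
(simplified sketch along the lines of ixgbe_can_reuse_rx_page()):

  /* Simplified sketch of the RX reuse test, along the lines of
   * ixgbe_can_reuse_rx_page(): the page is only recycled back into
   * the RX ring once the driver is the sole owner again, i.e. the
   * redirect path has dropped its reference via page_frag_free().
   */
  if (page_ref_count(page) - pagecnt_bias > 1)
          return false;   /* still in flight on some TX ring */
  return true;            /* safe to reuse for RX */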

I think this is a pretty good model. We could probably provide a set
of APIs for drivers to use so that we get some consistency across
vendors here, a la Jesper's page pool ideas.
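Something along these lines, just to sketch the shape of it (all names
here are hypothetical, not an existing kernel interface):

  /* Hypothetical common allocator API, purely to sketch the shape;
   * none of these names exist in the kernel today. */
  struct xdp_mem_pool *pool;

  pool = xdp_mem_pool_create(dev, buf_size, numa_node); /* per-ring pool */
  data = xdp_mem_pool_alloc(pool);                      /* RX refill */
  xdp_mem_pool_put(pool, data);   /* TX completion, from any device */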

(+Alex, for ixgbe details)

Thanks,
John
