Message-Id: <20190701.192733.26575663343081553.davem@davemloft.net>
Date: Mon, 01 Jul 2019 19:27:33 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: ilias.apalodimas@...aro.org
Cc: netdev@...r.kernel.org, jaswinder.singh@...aro.org,
ard.biesheuvel@...aro.org, bjorn.topel@...el.com,
magnus.karlsson@...el.com, brouer@...hat.com, daniel@...earbox.net,
ast@...nel.org, makita.toshiaki@....ntt.co.jp,
jakub.kicinski@...ronome.com, john.fastabend@...il.com,
maciejromanfijalkowski@...il.com
Subject: Re: [PATCH 0/3, net-next, v2] net: netsec: Add XDP Support
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
Date: Sat, 29 Jun 2019 08:23:22 +0300
> This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
> Since the page_pool API fixes are merged into net-next, we can now safely use
> its DMA mapping capabilities.
>
> The first patch changes the buffer allocation from napi/netdev_alloc_frag()
> to the page_pool API. Although this will lead to slightly reduced performance
> (on raw packet drops only), we can use the API for XDP buffer recycling.
> Another side effect is a slight increase in memory usage, due to using a
> single page per packet.
>
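For context, a minimal sketch of the page_pool setup described above; the
function names and layout here are illustrative, not the actual netsec code,
and assume the page_pool DMA helpers present in net-next at the time:

#include <linux/dma-mapping.h>
#include <linux/numa.h>
#include <net/page_pool.h>

/* Illustrative only: one pool per RX ring, with PP_FLAG_DMA_MAP so the pool
 * maps pages itself and the mappings can be reused when buffers are recycled
 * for XDP. Order-0 means one page per packet, which is where the slight
 * memory-usage increase mentioned above comes from.
 */
static struct page_pool *rx_create_page_pool(struct device *dev,
					     unsigned int pool_size)
{
	struct page_pool_params pp_params = {
		.order		= 0,
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= pool_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,
	};

	/* Returns an ERR_PTR() on failure, so callers check with IS_ERR(). */
	return page_pool_create(&pp_params);
}

/* Refill one RX slot: the DMA address comes from the pool, with no
 * per-packet dma_map_single() as in the old netdev_alloc_frag() scheme.
 */
static int rx_refill_one(struct page_pool *pool, dma_addr_t *dma)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return -ENOMEM;

	*dma = page_pool_get_dma_addr(page);
	return 0;
}
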
> The second patch adds XDP support to the driver.
> There are a few interesting considerations that come up due to the single
> Tx queue. Locking is needed (to avoid messing up the Tx queue, since
> ndo_xdp_xmit and the normal stack can co-exist). We also need to track the
> 'buffer type' for TX and properly free or recycle the packet depending on
> its nature.
>
>
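To make the buffer-type tracking and locking concrete, here is a rough,
hypothetical sketch; the enum, structure, and function names are made up and
are not the real netsec ones:

#include <linux/netdevice.h>
#include <linux/spinlock.h>
#include <net/xdp.h>

/* Hypothetical per-descriptor bookkeeping: remember what kind of buffer was
 * queued so the completion path knows how to release it.
 */
enum tx_buf_type {
	TX_BUF_SKB,	/* regular stack traffic               */
	TX_BUF_XDP_TX,	/* XDP_TX from our own RX page_pool    */
	TX_BUF_XDP_NDO,	/* frame handed to us via ndo_xdp_xmit */
};

struct tx_buf {
	enum tx_buf_type type;
	union {
		struct sk_buff *skb;
		struct xdp_frame *xdpf;
	};
};

struct tx_ring {
	spinlock_t lock;	/* shared by ndo_start_xmit and ndo_xdp_xmit */
	/* ... hardware descriptors, tx_buf array, head/tail, ... */
};

/* Both the normal xmit path and ndo_xdp_xmit() end up here, so the single
 * hardware Tx queue is only ever touched under the same lock.
 */
static void tx_enqueue(struct tx_ring *ring, const struct tx_buf *buf)
{
	spin_lock_bh(&ring->lock);
	/* ... fill the next hardware descriptor, store *buf alongside it ... */
	spin_unlock_bh(&ring->lock);
}

/* On Tx completion (NAPI context), release the buffer according to its type. */
static void tx_complete(struct tx_buf *buf)
{
	switch (buf->type) {
	case TX_BUF_SKB:
		dev_kfree_skb_any(buf->skb);
		break;
	case TX_BUF_XDP_TX:
		/* Returns through MEM_TYPE_PAGE_POOL, i.e. gets recycled. */
		xdp_return_frame_rx_napi(buf->xdpf);
		break;
	case TX_BUF_XDP_NDO:
		/* Memory is owned by the redirecting driver's model. */
		xdp_return_frame(buf->xdpf);
		break;
	}
}
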
> Changes since RFC:
> - Bug fixes from Jesper and Maciej
> - Added page pool API to retrieve the DMA direction
>
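The DMA-direction helper mentioned above lets the driver sync buffers with
whatever direction the pool mapped them in, rather than hard-coding it. A
hypothetical use, assuming a pool created as in the earlier sketch:

#include <linux/dma-mapping.h>
#include <net/page_pool.h>

/* Hypothetical helper: sync a pool-owned buffer for the CPU before running
 * the XDP program over it, using the pool's own mapping direction.
 */
static void rx_sync_for_cpu(struct device *dev, struct page_pool *pool,
			    dma_addr_t dma, unsigned int offset,
			    unsigned int len)
{
	dma_sync_single_for_cpu(dev, dma + offset, len,
				page_pool_get_dma_dir(pool));
}
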
> Changes since v1:
> - Use page_pool_free correctly if xdp_rxq_info_reg() failed
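On that last item: until the pool has been attached to the xdp_rxq_info
memory model, the driver still owns it and has to release it itself. A rough
sketch of that error path, with illustrative names and assuming the
page_pool_free() helper present in net-next at the time:

#include <net/page_pool.h>
#include <net/xdp.h>

/* Illustrative RX setup error handling: if registration fails before the
 * pool is attached to the xdp_rxq_info memory model, free it with
 * page_pool_free(); once attached, xdp_rxq_info_unreg() takes care of it.
 */
static int rx_setup_xdp(struct net_device *ndev, struct xdp_rxq_info *rxq,
			struct page_pool *pool)
{
	int err;

	err = xdp_rxq_info_reg(rxq, ndev, 0);
	if (err)
		goto err_free_pool;

	err = xdp_rxq_info_reg_mem_model(rxq, MEM_TYPE_PAGE_POOL, pool);
	if (err)
		goto err_unreg_rxq;

	return 0;

err_unreg_rxq:
	xdp_rxq_info_unreg(rxq);
err_free_pool:
	page_pool_free(pool);
	return err;
}
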
Series applied, thanks.
I realize from the discussion on patch #3 there will be follow-ups to this.