Message-Id: <1561475179-7686-1-git-send-email-ilias.apalodimas@linaro.org>
Date: Tue, 25 Jun 2019 18:06:17 +0300
From: Ilias Apalodimas <ilias.apalodimas@...aro.org>
To: netdev@...r.kernel.org, jaswinder.singh@...aro.org
Cc: ard.biesheuvel@...aro.org, bjorn.topel@...el.com,
magnus.karlsson@...el.com, brouer@...hat.com, daniel@...earbox.net,
ast@...nel.org, makita.toshiaki@....ntt.co.jp,
jakub.kicinski@...ronome.com, john.fastabend@...il.com,
davem@...emloft.net, Ilias Apalodimas <ilias.apalodimas@...aro.org>
Subject: [RFC, PATCH 0/2, net-next] net: netsec: Add XDP Support
This is a respin of https://www.spinics.net/lists/netdev/msg526066.html
Since the page_pool API fixes are merged into net-next, we can now safely use
its DMA mapping capabilities.
The first patch switches the buffer allocation from netdev_alloc_frag() to
the page_pool API. Although this slightly reduces performance (on raw packet
drops only), it lets us use the API for XDP buffer recycling.
Another side effect is a slight increase in memory usage, since we now use a
single page per packet.
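
To give an idea of the intended page_pool usage, here is a rough sketch
(the function names below are illustrative, not the actual driver code):
a pool is created per RX queue with PP_FLAG_DMA_MAP so the API maps the
pages for us, and RX descriptors are then filled from it.

#include <net/page_pool.h>

static struct page_pool *netsec_rx_pool_create(struct device *dev,
					       unsigned int pool_size)
{
	struct page_pool_params pp_params = {
		.flags		= PP_FLAG_DMA_MAP,	/* pool maps pages for us */
		.order		= 0,			/* single page per packet */
		.pool_size	= pool_size,
		.nid		= NUMA_NO_NODE,
		.dev		= dev,
		.dma_dir	= DMA_FROM_DEVICE,	/* BIDIRECTIONAL once XDP_TX is in use */
	};

	return page_pool_create(&pp_params);	/* ERR_PTR() on failure */
}

static int netsec_rx_fill_one(struct device *dev, struct page_pool *pool,
			      dma_addr_t *dma_handle, struct page **pg)
{
	struct page *page = page_pool_dev_alloc_pages(pool);

	if (!page)
		return -ENOMEM;

	/* The pool already mapped the page; just fetch the DMA address. */
	*dma_handle = page_pool_get_dma_addr(page);
	*pg = page;
	return 0;
}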
The second patch adds XDP support to the driver.
A few interesting questions come up due to the single Tx queue.
One is locking: we need it to avoid corrupting the Tx queue, since
ndo_xdp_xmit and the normal stack can co-exist.
We also need to track the 'buffer type' on Tx and free or recycle the packet
accordingly. Since we use the page_pool API, in the XDP_TX case the buffers
are already mapped for us and we only need to sync them, while on the
ndo_xdp_xmit path we need to map them before sending.
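
Roughly, the Tx path ends up looking like the sketch below (struct and
function names are approximations for illustration, not the final code,
and driver-internal types such as struct netsec_priv are assumed):
XDP_TX buffers only get a DMA sync since the page_pool already mapped them,
ndo_xdp_xmit frames get mapped on the spot, and both take the Tx ring lock
shared with the normal stack.

enum netsec_xdp_buf_type {
	TYPE_NETSEC_XDP_TX,	/* came from our RX ring, page_pool-mapped */
	TYPE_NETSEC_XDP_NDO,	/* redirected frame from another device */
};

static int netsec_xdp_queue_one(struct netsec_priv *priv,
				struct xdp_frame *xdpf,
				enum netsec_xdp_buf_type type)
{
	struct netsec_desc_ring *tx = &priv->desc_ring[NETSEC_RING_TX];
	dma_addr_t dma;

	if (type == TYPE_NETSEC_XDP_TX) {
		/* page_pool already mapped the page at allocation time,
		 * so only a sync for the device is needed.
		 */
		struct page *page = virt_to_page(xdpf->data);

		dma = page_pool_get_dma_addr(page) +
		      (xdpf->data - page_address(page));
		dma_sync_single_for_device(priv->dev, dma, xdpf->len,
					   DMA_BIDIRECTIONAL);
	} else {
		/* ndo_xdp_xmit frames are foreign memory: map them now. */
		dma = dma_map_single(priv->dev, xdpf->data, xdpf->len,
				     DMA_TO_DEVICE);
		if (dma_mapping_error(priv->dev, dma))
			return -ENOMEM;
	}

	/* The single Tx ring is shared with the regular stack, so take the
	 * same lock the skb xmit path uses before touching descriptors.
	 */
	spin_lock_bh(&tx->lock);
	/* ... write the Tx descriptor and remember 'type' so completion can
	 * either recycle to the page_pool or dma_unmap + xdp_return_frame() ...
	 */
	spin_unlock_bh(&tx->lock);

	return 0;
}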
Ilias Apalodimas (2):
net: netsec: Use page_pool API
net: netsec: add XDP support
drivers/net/ethernet/socionext/Kconfig | 1 +
drivers/net/ethernet/socionext/netsec.c | 459 ++++++++++++++++++++----
2 files changed, 394 insertions(+), 66 deletions(-)
--
2.20.1