Message-ID: <d4106b81-31cb-2569-6b49-9393bd2c2b34@gmail.com>
Date: Sat, 9 Apr 2022 02:32:51 +0900
From: Taehee Yoo <ap420073@...il.com>
To: davem@...emloft.net, kuba@...nel.org, pabeni@...hat.com,
netdev@...r.kernel.org, irusskikh@...vell.com, ast@...nel.org,
daniel@...earbox.net, hawk@...nel.org, john.fastabend@...il.com,
andrii@...nel.org, kafai@...com, songliubraving@...com, yhs@...com,
kpsingh@...nel.org, bpf@...r.kernel.org
Subject: Re: [PATCH net-next v3 0/3] net: atlantic: Add XDP support
On 4/9/22 01:59, Taehee Yoo wrote:
> This patchset makes the atlantic driver support multi-buffer XDP.
>
> The first patch implements the XDP control plane.
> aq_xdp(), the callback for .ndo_bpf, is added.
>
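> As a rough sketch (the helper aq_xdp_setup() and its argument list are
> assumptions for illustration, not the exact patch code), such a
> .ndo_bpf handler looks like:
>
>     static int aq_xdp(struct net_device *ndev, struct netdev_bpf *bpf)
>     {
>             switch (bpf->command) {
>             case XDP_SETUP_PROG:
>                     /* swap the program and reconfigure the RX rings */
>                     return aq_xdp_setup(ndev, bpf->prog, bpf->extack);
>             default:
>                     return -EINVAL;
>             }
>     }
>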
> The second patch implements the XDP data plane.
> XDP_TX, XDP_DROP, and XDP_PASS are supported.
> __aq_ring_xdp_clean() is added to receive packets and run the XDP
> program.
> aq_nic_xmit_xdpf() is added to send packets via XDP.
>
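> The heart of such an RX clean routine is the verdict switch; a
> simplified excerpt from inside the loop (the aq_nic_xmit_xdpf()
> argument list and return convention are assumptions):
>
>     act = bpf_prog_run_xdp(prog, &xdp);
>     switch (act) {
>     case XDP_PASS:
>             /* build an skb around the buffer, hand it to the stack */
>             break;
>     case XDP_TX:
>             xdpf = xdp_convert_buff_to_frame(&xdp);
>             if (!xdpf || aq_nic_xmit_xdpf(aq_nic, tx_ring, xdpf) < 0)
>                     goto drop;
>             break;
>     default:
>             trace_xdp_exception(ndev, prog, act);
>             fallthrough;
>     case XDP_DROP:
>     drop:
>             /* free or recycle the page */
>             break;
>     }
>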
> The third patch implements the .ndo_xdp_xmit callback.
> aq_xdp_xmit() is added to send redirected packets, and it internally
> calls aq_nic_xmit_xdpf().
>
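> .ndo_xdp_xmit has a fixed kernel signature; a sketch of the handler
> shape (aq_nic_get_xdp_ring() is a hypothetical helper for ring
> selection, not the patch code):
>
>     static int aq_xdp_xmit(struct net_device *ndev, int num_frames,
>                            struct xdp_frame **frames, u32 flags)
>     {
>             struct aq_nic_s *aq_nic = netdev_priv(ndev);
>             struct aq_ring_s *tx_ring;
>             int i, drop = 0;
>
>             if (unlikely(flags & ~XDP_XMIT_FLUSH))
>                     return -EINVAL;
>
>             tx_ring = aq_nic_get_xdp_ring(aq_nic); /* hypothetical */
>
>             for (i = 0; i < num_frames; i++)
>                     if (aq_nic_xmit_xdpf(aq_nic, tx_ring,
>                                          frames[i]) < 0)
>                             drop++;
>
>             return num_frames - drop;
>     }
>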
> The memory model is MEM_TYPE_PAGE_ORDER0, so the RX page is not
> reused for XDP_TX, XDP_PASS, and XDP_REDIRECT.
>
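> Registering that model per RX ring would look roughly like this
> (sketch; the xdp_rxq/idx field names and the napi pointer are
> assumptions):
>
>     err = xdp_rxq_info_reg(&rx_ring->xdp_rxq, ndev, rx_ring->idx,
>                            napi->napi_id);
>     if (!err)
>             err = xdp_rxq_info_reg_mem_model(&rx_ring->xdp_rxq,
>                                              MEM_TYPE_PAGE_ORDER0,
>                                              NULL);
>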
> By default, the maximum RX frame size is 2K.
> If an XDP program is attached, the frame size is changed to about 3K.
> The page can be reused for XDP_DROP and XDP_ABORTED.
>
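> The ~3K figure follows from fitting one order-0 page; rough
> arithmetic (the macro name is hypothetical, 4K pages assumed):
>
>     #define AQ_XDP_FRAME_MAX                                    \
>             (PAGE_SIZE - XDP_PACKET_HEADROOM -                  \
>              SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
>     /* 4096 - 256 - ~320 leaves roughly 3.5K per page */
>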
> The atlantic driver has an AQ_CFG_RX_PAGEORDER option, and it is
> always 0 if an XDP program is attached.
>
> LRO will be disabled if the attached XDP program supports only
> single-buffer packets.
>
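> Whether a program handles multi-buffer frames is visible in
> prog->aux->xdp_has_frags, so the check is presumably along these
> lines (sketch):
>
>     if (prog && !prog->aux->xdp_has_frags)
>             /* a single-buffer program cannot take LRO'd frames */
>             ndev->features &= ~NETIF_F_LRO;
>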
> The AQC chip supports 32 queues and 8 vectors (IRQs).
> There are two options:
> 1. up to 8 cores, with a maximum of 4 TX queues per core.
> 2. up to 4 cores, with a maximum of 8 TX queues per core.
>
> Like other drivers, these TX queues could be dedicated to XDP_TX and
> XDP_REDIRECT; in that case no tx_lock would be needed.
> But this patchset doesn't use that strategy, because the cost of
> looking up the hardware TX queue index is too high.
> So tx_lock is used in aq_nic_xmit_xdpf(), as sketched below.
>
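> A sketch of that locking (the function name is from this series, but
> the body, the lock field, and aq_ring_xmit_xdpf() are assumptions):
>
>     static int aq_nic_xmit_xdpf(struct aq_nic_s *aq_nic,
>                                 struct aq_ring_s *tx_ring,
>                                 struct xdp_frame *xdpf)
>     {
>             int err;
>
>             /* the ring is shared with the regular xmit path */
>             spin_lock(&tx_ring->lock);
>             err = aq_ring_xmit_xdpf(tx_ring, xdpf);
>             spin_unlock(&tx_ring->lock);
>
>             return err;
>     }
>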
> Single core, single queue, 80% CPU utilization:
>
> 30.75% bpf_prog_xxx_xdp_prog_tx [k] bpf_prog_xxx_xdp_prog_tx
> 10.35% [kernel] [k] aq_hw_read_reg <---------- here
> 4.38% [kernel] [k] get_page_from_freelist
>
> Single core, 8 queues, 100% CPU utilization, half the PPS:
>
> 45.56% [kernel] [k] aq_hw_read_reg <---------- here
> 17.58% bpf_prog_xxx_xdp_prog_tx [k] bpf_prog_xxx_xdp_prog_tx
> 4.72% [kernel] [k] hw_atl_b0_hw_ring_rx_receive
>
> Performance results (64-byte packets)
> 1. XDP_TX
> a. xdp_generic, single core
> - 2.5 Mpps, 100% cpu
> b. xdp_driver, single core
> - 4.5 Mpps, 80% cpu
> c. xdp_generic, 8 cores (hyper-threading)
> - 6.3 Mpps, 5~10% cpu
> d. xdp_driver, 8 cores (hyper-threading)
> - 6.3 Mpps, 5% cpu
>
> 2. XDP_REDIRECT
> a. xdp_generic, single core
> - 2.3 Mpps
> b. xdp_driver, single core
> - 4.5 Mpps
>
> v3:
> - Fix the wrong PPS performance result: 40% -> 80% CPU on a single
> core (Intel i3-12100)
> - Separate aq_nic_map_xdp() from aq_nic_map_skb()
> - Drop multi buffer packets if single buffer XDP is attached
> - Disable LRO when single buffer XDP is attached
> - Use xdp_get_{frame/buff}_len()
>
> v2:
> - Do not use inline in C file
>
> Taehee Yoo (3):
> net: atlantic: Implement xdp control plane
> net: atlantic: Implement xdp data plane
> net: atlantic: Implement .ndo_xdp_xmit handler
>
> .../net/ethernet/aquantia/atlantic/aq_cfg.h | 1 +
> .../ethernet/aquantia/atlantic/aq_ethtool.c | 8 +
> .../net/ethernet/aquantia/atlantic/aq_main.c | 87 ++++
> .../net/ethernet/aquantia/atlantic/aq_main.h | 2 +
> .../net/ethernet/aquantia/atlantic/aq_nic.c | 137 ++++++
> .../net/ethernet/aquantia/atlantic/aq_nic.h | 5 +
> .../net/ethernet/aquantia/atlantic/aq_ring.c | 415 ++++++++++++++++--
> .../net/ethernet/aquantia/atlantic/aq_ring.h | 17 +
> .../net/ethernet/aquantia/atlantic/aq_vec.c | 23 +-
> .../net/ethernet/aquantia/atlantic/aq_vec.h | 6 +
> .../aquantia/atlantic/hw_atl/hw_atl_a0.c | 6 +-
> .../aquantia/atlantic/hw_atl/hw_atl_b0.c | 10 +-
> 12 files changed, 675 insertions(+), 42 deletions(-)
>
I will send a v4 patch because of a compile warning.
Thanks,
Taehee Yoo