Message-ID: <6d625989-5be4-a780-b4a4-c53e6f219ee8@engleder-embedded.com>
Date: Fri, 6 Jan 2023 00:22:32 +0100
From: Gerhard Engleder <gerhard@...leder-embedded.com>
To: Alexander Lobakin <alexandr.lobakin@...el.com>
Cc: davem@...emloft.net, kuba@...nel.org, edumazet@...gle.com,
pabeni@...hat.com, netdev@...r.kernel.org
Subject: Re: [PATCH net-next v3 9/9] tsnep: Add XDP RX support
On 05.01.23 18:52, Alexander Lobakin wrote:
> From: Gerhard Engleder <gerhard@...leder-embedded.com>
> Date: Wed Jan 04 2023 20:41:32 GMT+0100
>
>> If BPF program is set up, then run BPF program for every received frame
>> and execute the selected action.
>>
>> Test results with A53 1.2GHz:
>>
>> XDP_DROP (samples/bpf/xdp1)
>> proto 17: 883878 pkt/s
>>
>> XDP_TX (samples/bpf/xdp2)
>> proto 17: 255693 pkt/s
>>
>> XDP_REDIRECT (samples/bpf/xdpsock)
>> sock0@...2:0 rxdrop xdp-drv
>> pps pkts 1.00
>> rx 855,582 5,404,523
>> tx 0 0
>>
>> XDP_REDIRECT (samples/bpf/xdp_redirect)
>> eth2->eth1 613,267 rx/s 0 err,drop/s 613,272 xmit/s
>>
>> Signed-off-by: Gerhard Engleder <gerhard@...leder-embedded.com>
>> ---
>> drivers/net/ethernet/engleder/tsnep_main.c | 129 ++++++++++++++++++++-
>> 1 file changed, 127 insertions(+), 2 deletions(-)
>
> [...]
>
>> @@ -624,6 +628,34 @@ static void tsnep_xdp_xmit_flush(struct tsnep_tx *tx)
>> iowrite32(TSNEP_CONTROL_TX_ENABLE, tx->addr + TSNEP_CONTROL);
>> }
>>
>> +static bool tsnep_xdp_xmit_back(struct tsnep_adapter *adapter,
>> + struct xdp_buff *xdp)
>> +{
>> + struct xdp_frame *xdpf = xdp_convert_buff_to_frame(xdp);
>> + int cpu = smp_processor_id();
>> + struct netdev_queue *nq;
>> + int queue;
>
> Squash with @cpu above (or make @cpu u32)?
Will change to u32.
>> + bool xmit;
>> +
>> + if (unlikely(!xdpf))
>> + return -EFAULT;
>> +
>> + queue = cpu % adapter->num_tx_queues;
>> + nq = netdev_get_tx_queue(adapter->netdev, queue);
>> +
>> + __netif_tx_lock(nq, cpu);
>
> [...]
>
>> @@ -788,6 +820,11 @@ static unsigned int tsnep_rx_offset(struct tsnep_rx *rx)
>> return TSNEP_SKB_PAD;
>> }
>>
>> +static unsigned int tsnep_rx_offset_xdp(void)
>> +{
>> + return XDP_PACKET_HEADROOM;
>> +}
>
> The reason for creating a function to always return a constant?
It is a variant of tsnep_rx_offset() for the XDP path that avoids
unneeded calls to tsnep_xdp_is_enabled(). With this function I keep
the RX offset logic local. But yes, it actually provides no
functionality. I will add a comment.
What about always using XDP_PACKET_HEADROOM as the offset in the RX
buffer? NET_IP_ALIGN would not be considered then, but it is zero
anyway on the main platforms x86 and arm64. This would simplify the
code.
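For illustration, the simplification could look like this (a sketch
only; the unified helper is hypothetical, while tsnep_rx_offset() and
tsnep_rx_offset_xdp() are from the patch):

```c
/* Hypothetical unified variant: always reserve XDP_PACKET_HEADROOM
 * for both the skb and the XDP path, giving up NET_IP_ALIGN (which
 * is zero on x86 and arm64 anyway). This would replace both
 * tsnep_rx_offset() and tsnep_rx_offset_xdp().
 */
static unsigned int tsnep_rx_offset(struct tsnep_rx *rx)
{
	return XDP_PACKET_HEADROOM;
}
```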
>> +
>> static void tsnep_rx_ring_cleanup(struct tsnep_rx *rx)
>> {
>> struct device *dmadev = rx->adapter->dmadev;
>
> [...]
>
>> +static void tsnep_finalize_xdp(struct tsnep_adapter *adapter, int status)
>> +{
>> + int cpu = smp_processor_id();
>> + struct netdev_queue *nq;
>> + int queue;
>
> (same re squashing)
u32.
>> +
>> + if (status & TSNEP_XDP_TX) {
>> + queue = cpu % adapter->num_tx_queues;
>> + nq = netdev_get_tx_queue(adapter->netdev, queue);
>> +
>> + __netif_tx_lock(nq, cpu);
>> + tsnep_xdp_xmit_flush(&adapter->tx[queue]);
>> + __netif_tx_unlock(nq);
>> + }
>
> This can be optimized. Given that one NAPI cycle is always being run on
> one CPU, you can get both @queue and @nq once at the beginning of a
> polling cycle and then pass it to perform %XDP_TX and this flush.
> Alternatively, if you don't want to do that not knowing in advance if
> you'll need it at all during the cycle, you can obtain them at the first
> tsnep_xdp_xmit_back() invocation.
I will give it a try.
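Something along these lines, I guess (a rough sketch under the
assumption that the TX queue is resolved once at the start of
tsnep_rx_poll() and handed down; the tsnep_rx_poll() changes and the
extra parameters are hypothetical, the helpers are from the patch):

```c
/* Sketch: one NAPI poll always runs on one CPU, so @queue and @nq
 * can be computed once per cycle instead of per packet in
 * tsnep_xdp_xmit_back() and again in tsnep_finalize_xdp().
 */
static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi,
			 int budget)
{
	struct tsnep_adapter *adapter = rx->adapter;
	u32 cpu = smp_processor_id();
	u32 queue = cpu % adapter->num_tx_queues;
	struct netdev_queue *nq = netdev_get_tx_queue(adapter->netdev,
						      queue);

	/* ... per-descriptor loop; XDP_TX frames are queued via
	 * tsnep_xdp_xmit_back(adapter, &xdp, nq, &adapter->tx[queue]) ...
	 */

	/* flush once at the end, reusing the cached @nq */
	__netif_tx_lock(nq, cpu);
	tsnep_xdp_xmit_flush(&adapter->tx[queue]);
	__netif_tx_unlock(nq);

	/* ... */
}
```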
>> +
>> + if (status & TSNEP_XDP_REDIRECT)
>> + xdp_do_flush();
>> +}
>> +
>> static struct sk_buff *tsnep_build_skb(struct tsnep_rx *rx, struct page *page,
>> int length)
>> {
>
> [...]
>
>> @@ -1087,6 +1189,26 @@ static int tsnep_rx_poll(struct tsnep_rx *rx, struct napi_struct *napi,
>> rx->read = (rx->read + 1) % TSNEP_RING_SIZE;
>> desc_available++;
>>
>> + if (prog) {
>> + bool consume;
>> +
>> + xdp_init_buff(&xdp, PAGE_SIZE, &rx->xdp_rxq);
>
> xdp_init_buff() is designed to be called once per NAPI cycle, at the
> beginning. You don't need to reinit it given that the values you pass
> are always the same.
Will be done.
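So roughly (sketch of the hoisted init, assuming xdp_prepare_buff()
is used per frame as in other drivers; placement inside the poll loop
is hypothetical):

```c
/* Sketch: xdp_init_buff() once per NAPI cycle, since frame size and
 * rxq never change; only xdp_prepare_buff() runs per descriptor.
 */
struct xdp_buff xdp;

if (prog)
	xdp_init_buff(&xdp, PAGE_SIZE, &rx->xdp_rxq);

while (likely(done < budget) && (rx->read != rx->write)) {
	/* ... */
	if (prog) {
		xdp_prepare_buff(&xdp, page_address(entry->page),
				 tsnep_rx_offset_xdp(), length, false);
		/* run program, handle verdict ... */
	}
}
```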
Thanks for the review!
Gerhard