Message-ID: <8f6624b3-b197-1cc7-ede3-03198c6f1936@ti.com>
Date: Wed, 9 Jun 2021 20:03:19 +0300
From: Grygorii Strashko <grygorii.strashko@...com>
To: Matteo Croce <mcroce@...ux.microsoft.com>,
Lorenzo Bianconi <lorenzo@...nel.org>
CC: <netdev@...r.kernel.org>,
Lorenzo Bianconi <lorenzo.bianconi@...hat.com>,
David Miller <davem@...emloft.net>,
Jakub Kicinski <kuba@...nel.org>,
Ilias Apalodimas <ilias.apalodimas@...aro.org>,
Jesper Dangaard Brouer <brouer@...hat.com>
Subject: Re: [RFT net-next] net: ti: add pp skb recycling support
On 09/06/2021 15:20, Matteo Croce wrote:
> On Wed, Jun 9, 2021 at 2:01 PM Lorenzo Bianconi <lorenzo@...nel.org> wrote:
>>
>> As already done for mvneta and mvpp2, enable skb recycling for ti
>> ethernet drivers
>>
>> Signed-off-by: Lorenzo Bianconi <lorenzo@...nel.org>
>
> Looks good! If someone with the HW could provide numbers with and without
> the patch, that would be nice!
>
Not sure which mail to reply to, so answering here - thanks, all.
1) I've simulated packet drop using iperf (the setup is sketched below)
Host:
- arp -s <some IP> <unknown MAC>
- iperf -c <some IP> -u -l60 -b700M -t60 -i1
DUT:
- place the interface in promisc mode
- check the rx_packets stats
I see a big improvement: ~47 Kpps before vs ~64 Kpps after.
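For reference, the steps above expand to roughly the following commands - a
minimal sketch; eth0, 192.168.1.99 and 02:00:de:ad:be:ef are hypothetical
placeholders for my actual interface, IP and MAC:

  # Host: pin a static ARP entry for an unused IP to an unknown MAC, so
  # the UDP flood is put on the wire but no real host answers it.
  arp -s 192.168.1.99 02:00:de:ad:be:ef
  # 60-byte UDP payloads at 700 Mbit/s for 60 s, reporting every second.
  iperf -c 192.168.1.99 -u -l60 -b700M -t60 -i1

  # DUT: accept the frames despite the foreign destination MAC so they
  # traverse the RX path before being dropped, then watch the counter.
  ip link set eth0 promisc on
  cat /sys/class/net/eth0/statistics/rx_packets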
2) I've run iperf3 tests - I see no regressions, but also not much improvement
3) I've applied 2 patches:
- this one
- and [1]
The results are below. Thank you.
Tested-by: Grygorii Strashko <grygorii.strashko@...com>
Reviewed-by: Grygorii Strashko <grygorii.strashko@...com>
[1] https://patchwork.kernel.org/project/netdevbpf/patch/20210609103326.278782-18-toke@redhat.com/
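The "rxdiff" numbers in the results are per-second deltas of the DUT's
rx_packets counter, collected with something like the loop below (a sketch;
eth0 is a hypothetical interface name, the exact script may differ):

  # Print how many packets were received during each 1 s window.
  prev=$(cat /sys/class/net/eth0/statistics/rx_packets)
  while sleep 1; do
      cur=$(cat /sys/class/net/eth0/statistics/rx_packets)
      echo "rxdiff $((cur - prev))"
      prev=$cur
  done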
=========== Before:
[perf top - with packet drop]
47.15% [kernel] [k] _raw_spin_unlock_irqrestore
11.77% [kernel] [k] __cpdma_chan_free
3.16% [kernel] [k] ___bpf_prog_run
2.52% [kernel] [k] cpsw_rx_vlan_encap
2.34% [kernel] [k] __netif_receive_skb_core
2.27% [kernel] [k] free_unref_page
2.26% [kernel] [k] kmem_cache_free
2.24% [kernel] [k] kmem_cache_alloc
1.69% [kernel] [k] __softirqentry_text_start
1.61% [kernel] [k] cpsw_rx_handler
1.19% [kernel] [k] page_pool_release_page
1.19% [kernel] [k] clear_bits_ll
1.15% [kernel] [k] page_frag_free
1.06% [kernel] [k] __dma_page_dev_to_cpu
0.99% [kernel] [k] memset
0.94% [kernel] [k] __alloc_pages_bulk
0.92% [kernel] [k] kfree_skb
0.85% [kernel] [k] packet_rcv
0.78% [kernel] [k] page_address
0.75% [kernel] [k] v7_dma_inv_range
0.71% [kernel] [k] __lock_text_start
[rx packets - with packet drop]
rxdiff 48004
rxdiff 47630
rxdiff 47538
...
[iperf3 TCP]
iperf3 -c 192.168.1.1 -i1
[ 5] 0.00-10.00 sec 873 MBytes 732 Mbits/sec 0 sender
[ 5] 0.00-10.01 sec 866 MBytes 726 Mbits/sec receiver
=========== After:
[perf top - with packet drop]
40.58% [kernel] [k] _raw_spin_unlock_irqrestore
16.18% [kernel] [k] __softirqentry_text_start
10.33% [kernel] [k] __cpdma_chan_free
2.62% [kernel] [k] ___bpf_prog_run
2.05% [kernel] [k] cpsw_rx_vlan_encap
2.00% [kernel] [k] kmem_cache_alloc
1.86% [kernel] [k] __netif_receive_skb_core
1.80% [kernel] [k] kmem_cache_free
1.63% [kernel] [k] cpsw_rx_handler
1.12% [kernel] [k] cpsw_rx_mq_poll
1.11% [kernel] [k] page_pool_put_page
1.04% [kernel] [k] _raw_spin_unlock
0.97% [kernel] [k] clear_bits_ll
0.90% [kernel] [k] packet_rcv
0.88% [kernel] [k] __dma_page_dev_to_cpu
0.85% [kernel] [k] kfree_skb
0.80% [kernel] [k] memset
0.71% [kernel] [k] __lock_text_start
0.66% [kernel] [k] v7_dma_inv_range
0.64% [kernel] [k] gen_pool_free_owner
0.58% [kernel] [k] __rcu_read_unlock
[rx packets - with packet drop]
rxdiff 65843
rxdiff 66722
rxdiff 65264
[iperf3 TCP]
iperf3 -c 192.168.1.1 -i1
[ 5] 0.00-10.00 sec 884 MBytes 742 Mbits/sec 0 sender
[ 5] 0.00-10.01 sec 878 MBytes 735 Mbits/sec receiver
--
Best regards,
grygorii