Message-ID: <e3f1c071-3609-d6e7-81d6-9ee73f9f4f6a@solarflare.com>
Date: Fri, 1 Nov 2019 10:07:30 +0000
From: Martin Habets <mhabets@...arflare.com>
To: David Ahern <dsahern@...il.com>,
Charles McLachlan <cmclachlan@...arflare.com>
CC: <davem@...emloft.net>, <netdev@...r.kernel.org>,
<linux-net-drivers@...arflare.com>, <brouer@...hat.com>
Subject: Re: [PATCH net-next v4 0/6] sfc: Add XDP support
On 31/10/2019 22:18, David Ahern wrote:
> On 10/31/19 4:21 AM, Charles McLachlan wrote:
>> Supply the XDP callbacks in netdevice ops that enable lower level processing
>> of XDP frames.
>>
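For context, the callbacks referred to above are the standard net_device_ops XDP hooks. A minimal sketch, with illustrative efx_* names and placeholder bodies rather than the exact code from the series:

#include <linux/netdevice.h>

/* Illustrative only: names and bodies are placeholders, not the code
 * added by this series. */
static int efx_xdp_setup(struct net_device *dev, struct netdev_bpf *bpf)
{
	switch (bpf->command) {
	case XDP_SETUP_PROG:
		/* A real implementation would install bpf->prog on the
		 * RX queues; this placeholder just accepts it. */
		return 0;
	default:
		return -EINVAL;
	}
}

static int efx_xdp_xmit_sketch(struct net_device *dev, int n,
			       struct xdp_frame **frames, u32 flags)
{
	/* A real implementation queues the frames on a TX queue reserved
	 * for XDP; this placeholder simply reports them all as accepted. */
	return n;
}

static const struct net_device_ops efx_netdev_ops_sketch = {
	/* ... regular ops ... */
	.ndo_bpf	= efx_xdp_setup,
	.ndo_xdp_xmit	= efx_xdp_xmit_sketch,
};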
>> Changes in v4:
>> - Handle the failure to send some frames in efx_xdp_tx_buffers() properly.
>>
>> Changes in v3:
>> - Fix a BUG_ON when trying to allocate piobufs to xdp queues.
>> - Add a missed trace_xdp_exception.
>>
>> Changes in v2:
>> - Use of xdp_return_frame_rx_napi() in tx.c
>> - Addition of xdp_rxq_info_valid and xdp_rxq_info_failed to track when
>> xdp_rxq_info failures occur.
>> - Renaming of rc to err and more use of unlikely().
>> - Cut some duplicated code and fix an array overrun.
>> - Actually increment n_rx_xdp_tx when packets are transmitted.
>>
>
> Something is up with this version versus v2. I am seeing a huge
> performance drop with my L2 forwarding program - something I was not
> seeing with v2 and I do not see with the experimental version of XDP in
> the out of tree sfc driver.
>
> Without XDP:
>
> $ netperf -H 10.39.16.7 -l 30 -t TCP_STREAM
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 10.39.16.7 () port 0 AF_INET : demo
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    30.00    9386.73
>
>
> With XDP:
>
> $ netperf -H 10.39.16.7 -l 30 -t TCP_STREAM
> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
> 10.39.16.7 () port 0 AF_INET : demo
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  16384  16384    30.01    384.11
>
>
> Prior versions were showing throughput of at least 4000 (depending on the
> test and VM setup).
Thanks for testing this. It's a good thing we have counters for this.
Are the rx_xdp_drops or rx_xdp_bad_drops non-zero/increasing?
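They should show up in the ethtool stats; something like "ethtool -S <iface> | grep xdp" (with <iface> being the sfc port) will list them.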
Martin