Message-ID: <236ff7c4-95da-da0e-8ba7-12bfb92c7d55@gmail.com>
Date: Sun, 3 Nov 2019 10:24:13 -0700
From: David Ahern <dsahern@...il.com>
To: Martin Habets <mhabets@...arflare.com>,
Charles McLachlan <cmclachlan@...arflare.com>
Cc: davem@...emloft.net, netdev@...r.kernel.org,
linux-net-drivers@...arflare.com, brouer@...hat.com
Subject: Re: [PATCH net-next v4 0/6] sfc: Add XDP support
On 11/1/19 4:07 AM, Martin Habets wrote:
> On 31/10/2019 22:18, David Ahern wrote:
>> On 10/31/19 4:21 AM, Charles McLachlan wrote:
>>> Supply the XDP callbacks in netdevice ops that enable lower-level processing
>>> of XDP frames.
>>>
>>> Changes in v4:
>>> - Handle the failure to send some frames in efx_xdp_tx_buffers() properly.
>>>
>>> Changes in v3:
>>> - Fix a BUG_ON when trying to allocate piobufs to xdp queues.
>>> - Add a missed trace_xdp_exception.
>>>
>>> Changes in v2:
>>> - Use of xdp_return_frame_rx_napi() in tx.c
>>> - Addition of xdp_rxq_info_valid and xdp_rxq_info_failed to track when
>>> xdp_rxq_info failures occur.
>>> - Renaming of rc to err and more use of unlikely().
>>> - Cut some duplicated code and fix an array overrun.
>>> - Actually increment n_rx_xdp_tx when packets are transmitted.
>>>
>>
>> Something is up with this version versus v2. I am seeing a huge
>> performance drop with my L2 forwarding program - something I was not
>> seeing with v2 and I do not see with the experimental version of XDP in
>> the out of tree sfc driver.
>>
>> Without XDP:
>>
>> $ netperf -H 10.39.16.7 -l 30 -t TCP_STREAM
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 10.39.16.7 () port 0 AF_INET : demo
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384   16384    30.00    9386.73
>>
>>
>> With XDP
>>
>> $ netperf -H 10.39.16.7 -l 30 -t TCP_STREAM
>> MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to
>> 10.39.16.7 () port 0 AF_INET : demo
>> Recv   Send    Send
>> Socket Socket  Message  Elapsed
>> Size   Size    Size     Time     Throughput
>> bytes  bytes   bytes    secs.    10^6bits/sec
>>
>>  87380  16384   16384    30.01     384.11
>>
>>
>> Prior versions were showing throughput of at least 4000 (depending on the
>> test and VM setup).
>
> Thanks for testing this. And a good thing we have counters for this.
> Are the rx_xdp_drops or rx_xdp_bad_drops non-zero/increasing?
>
The patches are now in the kernel. Tests on Friday and yesterday were showing
a lot of variability, so it may be an issue with the servers / lab setup I am
using. I need to follow up on that.
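
For reference, the counters Martin asked about can be read with `ethtool -S`.
A minimal sketch of pulling out only the XDP-related stats; the interface name
(`sfc0`) and the sample values are placeholders, and counter names beyond the
rx_xdp_drops / rx_xdp_bad_drops / rx_xdp_tx ones mentioned in this thread are
assumptions:

```shell
# On a live system you would run something like:
#   ethtool -S sfc0 | grep xdp
# and re-run it during the netperf test to see which counters increase.
# Below we filter a sample stats dump so the pipeline itself is shown.
sample_stats='
     rx_xdp_drops: 0
     rx_xdp_bad_drops: 0
     rx_xdp_tx: 12345
     rx_xdp_redirect: 0
     rx_packets: 987654
'
printf '%s\n' "$sample_stats" | grep xdp
```

If rx_xdp_bad_drops is climbing during the run, frames are being dropped
before the program even sees them, which would point at a driver-side setup
problem rather than the XDP program itself.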