Message-ID: <e2d2db0f-94d7-dc1e-f99c-b419d50fbdcc@mellanox.com>
Date:   Wed, 24 Jan 2018 16:42:47 +0200
From:   Tal Gilboa <talgi@...lanox.com>
To:     Eric Dumazet <edumazet@...gle.com>
Cc:     David Miller <davem@...emloft.net>,
        "ncardwell@...gle.com" <ncardwell@...gle.com>,
        "ycheng@...gle.com" <ycheng@...gle.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "eric.dumazet@...il.com" <eric.dumazet@...il.com>,
        Saeed Mahameed <saeedm@...lanox.com>,
        Tariq Toukan <tariqt@...lanox.com>,
        Amir Ancel <amira@...lanox.com>
Subject: Re: [PATCH net-next 0/7] tcp: implement rb-tree based retransmit
 queue

Hi Eric,
My choice of words in my comment was misplaced, and I apologize. It 
completely missed the point. I understand, of course, the importance of 
optimizing real-life scenarios.

We are currently evaluating this patch and if/how it might affect our 
customers. We will also evaluate your suggestion below.

We will contact you if and when we have a real concern.
Thanks.

On 1/22/2018 1:47 AM, Eric Dumazet wrote:
> On Sun, Jan 21, 2018 at 12:52 PM, Tal Gilboa <talgi@...lanox.com> wrote:
>> Hi Eric,
>> We have noticed a degradation on both of our drivers (mlx4 and mlx5) when
>> running TCP. Exact scenario is single stream TCP with 1KB packets. The
>> degradation is a steady 50% drop.
>> We tracked the offending commit to be:
>> 75c119a ("tcp: implement rb-tree based retransmit queue")
>>
>> Since mlx4 and mlx5 code base is completely different and by looking at the
>> changes in this commit, we believe the issue is external to the mlx4/5
>> drivers.
>>
>> I see in the comment below you anticipated some overhead, but this may be a
>> too common case to ignore.
>>
>> Can you please review and consider reverting/fixing it?
>>
> 
> Hi Tal
> 
> You have to provide way more details than a simple mail, asking for a
> " revert or a fix " ...
> 
> On our GFEs, we got a win, while I was expecting a small overhead,
> given the apparent complexity of dealing with RB tree instead of
> linear list.
> 
> And on the stress scenario described in my patch set, the win was
> absolutely abysmal.
> 
> A "single stream TCP with 1KB packets" is not something we need to optimize,
> unless there is some really strange setup for one of your customers ?
> 
> Here we deal with millions of TCP flows, and this is what we need to
> take care of.
> 
> Thanks.
> 
>> Thanks,
>>
>> Tal G.
>>
>>
>> On 10/7/2017 2:31 AM, David Miller wrote:
>>>
>>> From: Eric Dumazet <edumazet@...gle.com>
>>> Date: Thu,  5 Oct 2017 22:21:20 -0700
>>>
>>>> This patch series implement RB-tree based retransmit queue for TCP,
>>>> to better match modern BDP.
>>>
>>>
>>> Indeed, there was a lot of resistance to this due to the overhead
>>> for small retransmit queue sizes, but with today's scale this is
>>> long overdue.
>>>
>>> So, series applied, nice work!
>>>
>>> Maybe we can look into dynamic schemes where when the queue never
>>> goes over N entries we elide the rbtree and use a list.  I'm not
>>> so sure how practical that would be.
>>>
>>> Thanks!
>>>
>>
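David's closing speculation about eliding the rb-tree for small queues can be sketched as a toy model (this is not kernel code; the threshold `N`, the class name, and the use of Python's `bisect` as a stand-in for the rb-tree are all assumptions for illustration):

```python
# Toy sketch of a hybrid retransmit-queue index: linear scan while the
# queue stays small (cheap constant factors, cache-friendly), O(log n)
# search once it grows past a threshold, standing in for the rb-tree.
import bisect

N = 16  # hypothetical cutoff; a real value would need benchmarking

class RtxQueue:
    def __init__(self):
        self.seqs = []  # start sequence numbers, kept sorted

    def insert(self, seq):
        if len(self.seqs) < N:
            # Small queue: walk the list linearly, as the old
            # linked-list retransmit queue effectively did.
            i = 0
            while i < len(self.seqs) and self.seqs[i] < seq:
                i += 1
            self.seqs.insert(i, seq)
        else:
            # Large queue: binary search models the rb-tree's
            # O(log n) descent.
            bisect.insort(self.seqs, seq)

    def lookup(self, seq):
        # Find the skb starting at `seq`, if queued.
        i = bisect.bisect_left(self.seqs, seq)
        return i < len(self.seqs) and self.seqs[i] == seq
```

As David notes, the practical question is whether the bookkeeping needed to switch representations at the threshold is worth the savings for small queues.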
