Date:   Fri, 25 Mar 2022 16:44:37 +0300
From:   Sagi Grimberg <sagi@...mberg.me>
To:     Mingbao Sun <sunmingbao@....com>
Cc:     Keith Busch <kbusch@...nel.org>, Jens Axboe <axboe@...com>,
        Christoph Hellwig <hch@....de>,
        Chaitanya Kulkarni <kch@...dia.com>,
        linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
        Eric Dumazet <edumazet@...gle.com>,
        "David S . Miller" <davem@...emloft.net>,
        Hideaki YOSHIFUJI <yoshfuji@...ux-ipv6.org>,
        David Ahern <dsahern@...nel.org>,
        Jakub Kicinski <kuba@...nel.org>, netdev@...r.kernel.org,
        tyler.sun@...l.com, ping.gan@...l.com, yanxiu.cai@...l.com,
        libin.zhang@...l.com, ao.sun@...l.com
Subject: Re: [PATCH v2 2/3] nvme-tcp: support specifying the
 congestion-control



On 3/25/22 15:11, Mingbao Sun wrote:
>> 1. Can you please provide your measurements that support your claims?
> 
> Yes, I will provide a series of testing results.
> The first one is at the bottom of this mail.
> 
>>
>> 2. Can you please provide a real, existing use-case where this provides
>> true, measurable value? And more specifically, please clarify how the
>> use-case needs a local tuning for nvme-tcp that would not hold for
>> other tcp streams that are running on the host (and vice-versa).
>>
> 
> As for the use-case:
> I think multiple NVMe/TCP hosts simultaneously writing data to a single
> target is a very common use-case.
> And this patchset addresses exactly the performance issue of that
> use-case.

Thanks Mingbao,

Long email, haven't read it all yet.

But this doesn't answer my specific question. I was asking why the tcp
congestion control should be set locally in nvme-tcp. You could just as
easily change these knobs via sysctl and achieve the expected result
that dctcp handles congestion better than cubic (and that comparison
was not even testing nvme, btw).
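
To be concrete: the global default is a one-line sysctl, and an
application that really wants a per-connection algorithm can already
set one today with the standard TCP_CONGESTION socket option, no
driver knob needed. A minimal userspace sketch (dctcp here is just
the example from this thread; the algorithm must be available on the
box, and non-root callers are limited to what is listed in
net.ipv4.tcp_allowed_congestion_control):

/*
 * Minimal sketch: pick a congestion control algorithm for a single
 * socket via the standard TCP_CONGESTION socket option. The
 * system-wide equivalent is:
 *     sysctl -w net.ipv4.tcp_congestion_control=dctcp
 */
#include <stdio.h>
#include <string.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void)
{
        const char algo[] = "dctcp";
        char buf[16];
        socklen_t len = sizeof(buf);
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
                perror("socket");
                return 1;
        }

        /* Use dctcp on this socket only; everything else is untouched. */
        if (setsockopt(fd, IPPROTO_TCP, TCP_CONGESTION,
                       algo, strlen(algo)) < 0) {
                perror("setsockopt(TCP_CONGESTION)");
                return 1;
        }

        /* Read back what is actually in effect on the socket. */
        if (getsockopt(fd, IPPROTO_TCP, TCP_CONGESTION, buf, &len) == 0)
                printf("cc: %.*s\n", (int)len, buf);
        return 0;
}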

As I said, TCP can be tuned in various ways, congestion control being
just one of them. I'm sure you can find a workload where rmem/wmem
tuning will make a difference.
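
For instance, the global knobs are the net.ipv4.tcp_rmem/tcp_wmem
"min default max" triples, and a single socket can already override
its own buffers with SO_RCVBUF/SO_SNDBUF. A rough sketch of the
per-socket form (the helper name and byte counts are made up):

#include <stdio.h>
#include <sys/socket.h>

/*
 * Sketch: override one socket's buffers with the standard
 * SO_RCVBUF/SO_SNDBUF options. The kernel doubles the requested
 * value for bookkeeping and clamps it to net.core.rmem_max /
 * net.core.wmem_max; the global autotuning triples live in
 * net.ipv4.tcp_rmem and net.ipv4.tcp_wmem.
 */
int set_bufs(int fd, int rcv_bytes, int snd_bytes)
{
        if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF,
                       &rcv_bytes, sizeof(rcv_bytes)) < 0 ||
            setsockopt(fd, SOL_SOCKET, SO_SNDBUF,
                       &snd_bytes, sizeof(snd_bytes)) < 0) {
                perror("setsockopt(SO_RCVBUF/SO_SNDBUF)");
                return -1;
        }
        return 0;
}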

In addition, to my knowledge, application-specific TCP-level tuning
(like congestion control) is not really a common thing to do. So why
do it in nvme-tcp?

So to me at least, it is not clear why we should add it to the driver.
