Date:   Fri, 25 Mar 2022 16:44:37 +0300
From:   Sagi Grimberg <>
To:     Mingbao Sun <>
Cc:     Keith Busch <>, Jens Axboe <>,
        Christoph Hellwig <>,
        Chaitanya Kulkarni <>,
        Eric Dumazet <>,
        "David S . Miller" <>,
        Hideaki YOSHIFUJI <>,
        David Ahern <>,
        Jakub Kicinski <>
Subject: Re: [PATCH v2 2/3] nvme-tcp: support specifying the

On 3/25/22 15:11, Mingbao Sun wrote:
>> 1. Can you please provide your measurements that support your claims?
> Yes, I will provide a series of testing results.
> The first one is at the bottom of this mail.
>> 2. Can you please provide a real, existing use-case where this provides
>> true, measurable value? And more specifically, please clarify how the
>> use-case needs a local tuning for nvme-tcp that would not hold for
>> other tcp streams that are running on the host (and vice-versa).
> As for the use-case: multiple NVMe/TCP hosts simultaneously writing data
> to a single target is a very common one, and this patchset addresses the
> performance issue in exactly that use-case.

Thanks Mingbao,

Long email, haven't read it all yet.

But this doesn't answer my specific question: why should the tcp
congestion control be set locally by nvme? You could just as easily
change these knobs via sysctl and achieve the expected result that
dctcp handles congestion better than cubic (a result which was not even
measured with nvme, btw).

As I said, TCP can be tuned in various ways, congestion being just one
of them. I'm sure you can find a workload where rmem/wmem will make
a difference.

In addition, based on my knowledge, application-specific TCP-level
tuning (like congestion control) is not really a common thing to do. So
why should nvme-tcp be the exception?
So to me at least, it is not clear why we should add it to the driver.
