Message-ID: <a79bf503-b1d5-8d18-5f02-c63e665e2e07@grimberg.me>
Date:   Tue, 14 Sep 2021 17:20:46 +0300
From:   Sagi Grimberg <sagi@...mberg.me>
To:     Daniel Wagner <dwagner@...e.de>,
        Christoph Hellwig <hch@...radead.org>
Cc:     linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
        netdev@...r.kernel.org
Subject: Re: [RFC v1] nvme-tcp: enable linger socket option on shutdown


>>> When no linger is set, the networking stack sends a FIN followed
>>> immediately by an RST when shutting down the socket. By enabling linger
>>> when shutting down we get a proper shutdown sequence on the wire.
>>>
>>> Signed-off-by: Daniel Wagner <dwagner@...e.de>
>>> ---
>>> The current shutdown sequence on the wire is a bit harsh and
>>> doesn't let the remote host react. I suppose we should
>>> introduce a short (how long?) linger pause when shutting down
>>> the connection. Thoughts?
>>
>> Why?  I'm not really a TCP expert, but why is this different from
>> say iSCSI or NBD?
> 
> I am also no TCP expert. Adding netdev to Cc.
> 
> During testing of the nvme-tcp subsystem by one of our partners we
> observed this. Maybe this is perfectly fine. As I said, it just looks a
> bit weird that on a proper shutdown of the connection an RST is sent
> out right after the FIN.

The point here is that when we close the connection we may have in-flight
requests that we have already failed to the upper layers, and we don't want
them to get through as we proceed to error handling. This is why we want
the socket to go away asap.
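
For anyone reading along who is not deep in the socket options involved,
here is a minimal userspace sketch of the semantics; it only illustrates
standard SO_LINGER behaviour, it is not the nvme-tcp code itself, and the
function name and fd handling are hypothetical:

	/*
	 * Illustration only: with linger enabled and a zero timeout,
	 * close() discards unsent data and aborts the connection with
	 * an RST instead of performing the normal FIN handshake,
	 * i.e. the "go away asap" behaviour described above.
	 */
	#include <sys/socket.h>
	#include <unistd.h>

	static void abortive_close(int fd)
	{
		struct linger lo = {
			.l_onoff  = 1,	/* enable linger */
			.l_linger = 0,	/* zero timeout -> RST on close */
		};

		setsockopt(fd, SOL_SOCKET, SO_LINGER, &lo, sizeof(lo));
		close(fd);	/* connection is reset, no orderly FIN/ACK */
	}

A non-zero l_linger, which is roughly what the "short linger pause"
suggested above would correspond to, instead makes close() wait up to that
many seconds for the orderly shutdown to complete.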

> No idea how iSCSI or NBD handle this. I'll check.

iSCSI does the same thing in essence (with a minor variation, because in
iSCSI we have a logout message which we don't have in NVMe).
