Message-ID: <8eb26510-03e1-7923-0f47-2a8d3d539963@suse.de>
Date:   Tue, 2 Mar 2021 08:18:40 +0100
From:   Hannes Reinecke <hare@...e.de>
To:     Keith Busch <kbusch@...nel.org>
Cc:     Daniel Wagner <dwagner@...e.de>, Sagi Grimberg <sagi@...mberg.me>,
        Jens Axboe <axboe@...com>, Christoph Hellwig <hch@....de>,
        linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nvme-tcp: Check if request has started before processing it

On 3/1/21 9:59 PM, Keith Busch wrote:
> On Mon, Mar 01, 2021 at 05:53:25PM +0100, Hannes Reinecke wrote:
>> On 3/1/21 5:05 PM, Keith Busch wrote:
>>> On Mon, Mar 01, 2021 at 02:55:30PM +0100, Hannes Reinecke wrote:
>>>> On 3/1/21 2:26 PM, Daniel Wagner wrote:
>>>>> On Sat, Feb 27, 2021 at 02:19:01AM +0900, Keith Busch wrote:
>>>>>> Crashing is bad, silent data corruption is worse. Is there truly no
>>>>>> defense against that? If not, why should anyone rely on this?
>>>>>
>>>>> If we receive a response for which we don't have a started request, we
>>>>> know that something is wrong. Couldn't we just reset the connection
>>>>> in this case? We don't have to pretend nothing has happened and
>>>>> continue normally. This would avoid a host crash and would not create
>>>>> (more) data corruption. Or am I just too naive?
>>>>>
>>>> This is actually a sensible solution.
>>>> Please send a patch for that.
>>>
>>> Is a bad frame a problem that can be resolved with a reset?
>>>
>>> Even if so, the reset doesn't indicate to the user if previous commands
>>> completed with bad data, so it still seems unreliable.
>>>
>> We need to distinguish two cases here.
>> The first is us receiving a frame with an invalid tag, leading to a crash.
>> This can easily be resolved by issuing a reset, as the command was clearly
>> garbage and we need to invoke error handling (which is a reset).
>>
>> The other case is us receiving a frame with a _duplicate_ tag, i.e. a tag
>> which is _currently_ valid. This case will fail _even now_, as we simply
>> have no way of detecting it.
>>
>> So what again do we miss by fixing the first case?
>> Apart from a system which does _not_ crash?
> 
> I'm just saying each case is a symptom of the same problem. The only
> difference between observing one vs. the other is a race with the host's
> dispatch. And since you're proposing this patch, it sounds like this
> condition does happen on TCP, unlike on other transports where we don't
> observe it. I just thought the implication that data corruption happens
> is alarming.
> 
Oh yes, it is.
But sadly TCP inherently suffers from this, as literally anyone can
spoof frames on the network.
Other transports like RDMA or FC do not suffer to the same extent, as
spoofing frames there takes far more effort and is not really possible
without dedicated hardware.

That's why there are header and data digests; they will protect you
against accidental frame corruption (as this case clearly is; the
remainder of the frame is filled with zeroes).
They would not protect you against deliberate frame corruption; that's
why there is TPAR 8010 (TLS encryption for NVMe-TCP).
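(For reference, and assuming a reasonably recent nvme-cli, both digests
can be requested at connect time; the address and NQN below are example
values only:

  nvme connect -t tcp -a 192.168.0.10 -s 4420 -n <subsys-nqn> \
       --hdr-digest --data-digest

The target still has to support and negotiate them, of course.)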

Be that as it may, none of these methods are in use here, and none of
these methods can be made mandatory. So we need to deal with the case at
hand.

And in my opinion, crashing is the _worst_ option of all.
Tear the connection down, reset the thing, whatever.

But do not crash.
Customers tend to take a very dim view of crashing machines, and have
very limited patience for our reasoning in these cases.
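
To make it concrete, roughly this is the shape of check I'd expect; a
sketch only, not the actual patch, with helper names borrowed from
drivers/nvme/host/tcp.c (nvme_tcp_tagset(), nvme_tcp_queue_id(),
nvme_tcp_error_recovery()):

/*
 * Sketch: validate the command id of a received PDU before touching the
 * request, and escalate to error recovery instead of dereferencing a
 * bogus request pointer.
 */
static struct request *nvme_tcp_lookup_rq(struct nvme_tcp_queue *queue,
		u16 command_id)
{
	struct request *rq;

	rq = blk_mq_tag_to_rq(nvme_tcp_tagset(queue), command_id);
	if (unlikely(!rq || !blk_mq_request_started(rq))) {
		/* Invalid or not-yet-started tag: the frame is garbage. */
		dev_err(queue->ctrl->ctrl.device,
			"queue %d: bad command id %#x, resetting\n",
			nvme_tcp_queue_id(queue), command_id);
		nvme_tcp_error_recovery(&queue->ctrl->ctrl);
		return NULL;
	}
	return rq;
}

Note that a _duplicate_ of a currently valid tag still sails through
this check; that is the second case from above, which no amount of
host-side checking will catch.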

Cheers,

Hannes
-- 
Dr. Hannes Reinecke		           Kernel Storage Architect
hare@...e.de			                  +49 911 74053 688
SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), GF: Felix Imendörffer
