Message-ID: <0605ac36-16d5-2026-d3c6-62d346db6dfb@gmail.com>
Date:   Tue, 11 Jul 2023 15:47:45 -0700
From:   James Smart <jsmart2021@...il.com>
To:     Daniel Wagner <dwagner@...e.de>
Cc:     linux-nvme@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-block@...r.kernel.org, Chaitanya Kulkarni <kch@...dia.com>,
        Shin'ichiro Kawasaki <shinichiro@...tmail.com>,
        Sagi Grimberg <sagi@...mberg.me>,
        Hannes Reinecke <hare@...e.de>, Ewan Milne <emilne@...hat.com>
Subject: Re: [PATCH v2 4/5] nvme-fc: Make initial connect attempt synchronous

On 7/6/2023 5:07 AM, Daniel Wagner wrote:
> Hi James,
> 
> On Sat, Jul 01, 2023 at 05:11:11AM -0700, James Smart wrote:
>> As much as you want to make this change to make the transports "similar",
>> I am dead set against it unless you complete a long qualification of the
>> change on real FC hardware and FC-NVME devices. There was probably 1.5
>> years of testing of different race conditions behind this change. You
>> cannot declare success based on a simplistic toy tool such as fcloop.
>>
>> The original issues still exist, and have probably even morphed in the
>> time since the original change, and this will seriously disrupt the
>> transport and any downstream releases. So I have a very strong NACK on
>> this change.
>>
>> Yes - things such as connect failure results are difficult to return to
>> nvme-cli. I have had many gripes about nvme-cli's behavior over the
>> years, especially in negative cases involving race conditions that
>> required retries. It still handles these miserably. The async reconnect
>> path solved many of these issues for fc.
>>
>> For the auth failure, how do we deal with things if auth fails over time
>> as reconnects fail due to credential changes? I would think commonality
>> of this behavior drives part of the choice.
> 
> Alright, what do you think about the idea of introducing a new '--sync'
> option to nvme-cli which tells the kernel that we want to wait for the
> initial connect to succeed or fail? Obviously, this needs to handle
> signals too.
> 
> From what I understood, this is also what Ewan would like to have.
To me this is not a sync vs. non-sync option; it's a max_reconnects value, 
tested in nvmf_should_reconnect(), which, if set to 0 (or 1), should make 
the connect fail when the initial attempt fails.
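
For reference, that check boils down to roughly the following (a sketch of 
the fabrics helper from memory; treat the details as approximate):

  static inline bool nvmf_should_reconnect(struct nvme_ctrl *ctrl)
  {
          /* max_reconnects == -1 means retry indefinitely */
          if (ctrl->opts->max_reconnects == -1 ||
              ctrl->nr_reconnects < ctrl->opts->max_reconnects)
                  return true;

          return false;
  }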

Right now max_reconnects is calculated from ctrl_loss_tmo and 
reconnect_delay, so there's already a way via the cli to make sure there's 
only 1 connect attempt. I wouldn't mind seeing an explicit cli option that 
sets it to 1 connection attempt without the user having to do the 
calculation and specify two values.
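
The option parsing derives it roughly like this (again a sketch, based on 
nvmf_parse_options() in drivers/nvme/host/fabrics.c; exact behavior may 
differ):

          /* ctrl_loss_tmo < 0 means reconnect forever */
          if (opts->ctrl_loss_tmo < 0)
                  opts->max_reconnects = -1;
          else
                  opts->max_reconnects = DIV_ROUND_UP(opts->ctrl_loss_tmo,
                                                      opts->reconnect_delay);

So e.g. "nvme connect ... --reconnect-delay=10 --ctrl-loss-tmo=10" yields 
max_reconnects = 1 - that's the two-value calculation a dedicated option 
could hide from the user.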

I also assume that this is not something that would be set by default in 
the auto-connect scripts or automated cli startup scripts.


> 
> Hannes thought it would make sense to use the same initial connect logic
> in tcp/rdma, because there could also be transient errors (e.g. spanning
> tree protocol). In short, make tcp/rdma do the same thing as fc?

I agree that the same connect logic makes sense for tcp/rdma. Certainly one 
connect/teardown path, vs. one at create and one at reconnect, makes sense. 
Transient errors during the 1st connect were why FC added it, and I would 
assume tcp/rdma have their own transient errors or timing relationships at 
initial connection setup.

For FC, we're trying to work around errors in transport commands (FC-NVME 
ELS's) that fail (dropped or timed out), or in commands used to initialize 
the controller, which may be dropped or time out and thus fail controller 
init. Although NVMe-FC does have a retransmission option, it generally 
doesn't apply to the FC-NVME LS's, and few FC devices have turned the 
retransmission option on yet to deal with these errors. So the general 
behavior is connection termination and/or association termination, which 
then depends on the reconnect path to retry. This is also critical because 
connection requests on FC are automated based on connectivity events. If we 
fail out to the cli because the fabric dropped some up-front command, 
there's no guarantee there will be another connectivity event to restart 
the controller create, and we end up without device connectivity. The other 
issue we had to deal with was how long the sysadmin hung waiting for the 
auto-connect script to complete. We couldn't wait out the entire multi-retry 
case, and returning before the 1st attempt completed was against the spirit 
of the cli - so we wait for the 1st attempt, release the sysadmin, and let 
the reconnect go on in the background.
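
In code terms the pattern is roughly this (a sketch, not the literal 
nvme-fc code):

          /* kick off the first association attempt asynchronously */
          queue_delayed_work(nvme_wq, &ctrl->connect_work, 0);

          /*
           * Wait for that first attempt only, so the cli gets a timely
           * return. If it failed, the normal reconnect path (bounded by
           * max_reconnects) keeps retrying in the background, driven by
           * FC connectivity events.
           */
          flush_delayed_work(&ctrl->connect_work);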


> 
> So let's drop the final patch from this series for the time being. Could
> you give some feedback on the rest of the patches?
> 
> Thanks,
> Daniel

I'll look at them.

-- james

