Message-ID: <02f12db0-4f66-8691-72cb-7531395c7990@grimberg.me>
Date: Thu, 14 Jan 2021 13:07:34 -0800
From: Sagi Grimberg <sagi@...mberg.me>
To: Boris Pismenny <borispismenny@...il.com>,
Boris Pismenny <borisp@...lanox.com>, kuba@...nel.org,
davem@...emloft.net, saeedm@...dia.com, hch@....de, axboe@...com,
kbusch@...nel.org, viro@...iv.linux.org.uk, edumazet@...gle.com
Cc: yorayz@...dia.com, boris.pismenny@...il.com, benishay@...dia.com,
linux-nvme@...ts.infradead.org, netdev@...r.kernel.org,
ogerlitz@...dia.com
Subject: Re: [PATCH v1 net-next 00/15] nvme-tcp receive offloads
>> Hey Boris, sorry for some delays on my end...
>>
>> I saw some long discussions on this set with David, what is
>> the status here?
>>
>
> The main purpose of this series is to address these.
>
>> I'll take some more look into the patches, but if you
>> addressed the feedback from the last iteration I don't
>> expect major issues with this patch set (at least from
>> nvme-tcp side).
>>
>>> Changes since RFC v1:
>>> =========================================
>>> * Split mlx5 driver patches to several commits
>>> * Fix nvme-tcp handling of recovery flows. In particular, move queue offload
>>> init/teardown to the start/stop functions.
>>
>> I'm assuming that you tested controller resets and network hiccups
>> during traffic right?
>>
>
> Network hiccups were tested through netem packet drops and reordering.
> We tested error recovery by taking the controller down and bringing it
> back up while the system is quiescent and during traffic.
>
> If you have another test in mind, please let me know.
I suggest also performing interface down/up during traffic, on both
the host and the target sides.
Other than that we should be in decent shape...
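For reference, the network-hiccup and link-flap tests discussed above could be driven along these lines (a sketch only; `eth0`, the loss/delay numbers, and the sleep are placeholders, and the I/O load itself is assumed to run separately):

```shell
# Inject packet loss, delay, and reordering on the interface carrying
# nvme-tcp traffic (netem reordering requires a delay to be set).
tc qdisc add dev eth0 root netem loss 1% delay 10ms reorder 25% 50%

# ... run I/O against the nvme-tcp namespace while impaired ...

# Remove the impairment.
tc qdisc del dev eth0 root

# Flap the interface during traffic to exercise error recovery.
ip link set dev eth0 down
sleep 5
ip link set dev eth0 up
```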