Message-ID: <6b9da1e6-c4d0-7853-15fb-edf655ead33f@gmail.com>
Date: Thu, 14 Jan 2021 21:17:43 +0200
From: Boris Pismenny <borispismenny@...il.com>
To: Sagi Grimberg <sagi@...mberg.me>,
Boris Pismenny <borisp@...lanox.com>, kuba@...nel.org,
davem@...emloft.net, saeedm@...dia.com, hch@....de, axboe@...com,
kbusch@...nel.org, viro@...iv.linux.org.uk, edumazet@...gle.com
Cc: yorayz@...dia.com, boris.pismenny@...il.com, benishay@...dia.com,
linux-nvme@...ts.infradead.org, netdev@...r.kernel.org,
ogerlitz@...dia.com
Subject: Re: [PATCH v1 net-next 00/15] nvme-tcp receive offloads

On 14/01/2021 3:27, Sagi Grimberg wrote:
> Hey Boris, sorry for some delays on my end...
>
> I saw some long discussions on this set with David, what is
> the status here?
>
The main purpose of this series is to address the feedback raised in those discussions.
> I'll take some more look into the patches, but if you
> addressed the feedback from the last iteration I don't
> expect major issues with this patch set (at least from
> nvme-tcp side).
>
>> Changes since RFC v1:
>> =========================================
>> * Split mlx5 driver patches to several commits
>> * Fix nvme-tcp handling of recovery flows. In particular, move queue offload
>> init/teardown to the start/stop functions.
>
> I'm assuming that you tested controller resets and network hiccups
> during traffic right?
>
Network hiccups were tested by injecting packet drops and reordering with netem.
We tested error recovery by taking the controller down and bringing it back up,
both while the system was quiescent and under active traffic.
If you have another test in mind, please let me know.
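
For reference, a netem setup of this form covers the drop/reorder cases
(illustrative parameters; eth0 stands in for the actual test interface):

  tc qdisc add dev eth0 root netem loss 1%
  tc qdisc change dev eth0 root netem delay 10ms reorder 25% 50%

and controller resets can be driven from the host side with, e.g.:

  echo 1 > /sys/class/nvme/nvme0/reset_controller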