Message-ID: <20250813200744.17975-1-bgurney@redhat.com>
Date: Wed, 13 Aug 2025 16:07:35 -0400
From: Bryan Gurney <bgurney@...hat.com>
To: linux-nvme@...ts.infradead.org,
kbusch@...nel.org,
hch@....de,
sagi@...mberg.me,
axboe@...nel.dk
Cc: james.smart@...adcom.com,
njavali@...vell.com,
linux-scsi@...r.kernel.org,
hare@...e.de,
linux-hardening@...r.kernel.org,
kees@...nel.org,
gustavoars@...nel.org,
bgurney@...hat.com,
jmeneghi@...hat.com,
emilne@...hat.com
Subject: [PATCH v9 0/8] nvme-fc: FPIN link integrity handling
FPIN LI (link integrity) messages are received when the attached
fabric detects hardware errors. In response to these messages, I/O
should be directed away from the affected ports, and those ports
should only be used if no 'optimized' paths are available.
Upon port reset the paths should be put back in service as the
affected hardware might have been replaced.
This patch set adds a new controller flag 'NVME_CTRL_MARGINAL'
which is checked during multipath path selection, causing the
path to be skipped when looking for 'optimized' paths. If no
optimized paths are available, the 'marginal' paths are considered
for path selection alongside the 'non-optimized' paths.
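To illustrate that preference order, here is a minimal, self-contained
user-space sketch. The struct layout, the 'marginal' field, and
find_best_path() are hypothetical stand-ins for illustration only;
they are not the kernel's nvme-multipath code.

/*
 * Stand-alone model of the path selection order described above.
 * Compile with: cc -o pathsel pathsel.c
 */
#include <stdbool.h>
#include <stdio.h>

enum ana_state { ANA_OPTIMIZED, ANA_NONOPTIMIZED };

struct path {
	const char *name;
	enum ana_state ana;
	bool live;
	bool marginal;		/* set when an FPIN LI names this port */
};

/*
 * Preference order modeled here:
 *   1. live, optimized, and not marginal
 *   2. otherwise, any live path; marginal and non-optimized paths
 *      compete as equals in this fallback step
 */
static const struct path *find_best_path(const struct path *p, int n)
{
	const struct path *fallback = NULL;

	for (int i = 0; i < n; i++) {
		if (!p[i].live)
			continue;
		if (p[i].ana == ANA_OPTIMIZED && !p[i].marginal)
			return &p[i];	/* healthy optimized path wins */
		if (!fallback)
			fallback = &p[i];
	}
	return fallback;
}

int main(void)
{
	struct path paths[] = {
		{ "nvme8", ANA_OPTIMIZED,    true, true  },	/* marginal */
		{ "nvme5", ANA_NONOPTIMIZED, true, false },
		{ "nvme4", ANA_OPTIMIZED,    true, false },
	};
	const struct path *best = find_best_path(paths, 3);

	printf("selected: %s\n", best ? best->name : "none");
	return 0;
}

With the example set above, the healthy optimized path (nvme4) is
selected even though a marginal optimized path (nvme8) is listed
first; removing nvme4 would make nvme8 and nvme5 compete in the
fallback step.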
It also introduces a new nvme-fc callback 'nvme_fc_fpin_rcv()' to
evaluate the FPIN LI TLV payload and set the 'marginal' state on
all affected rports.
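For reference, here is a similar stand-alone sketch of how a receiver
might walk an FPIN TLV area and flag the ports named in Link Integrity
descriptors. The struct layout and mark_rport_marginal() are simplified,
hypothetical stand-ins (the real descriptor definitions live in
include/uapi/scsi/fc/fc_els.h), and byte order handling of the WWPN is
elided.

/*
 * Stand-alone sketch of walking an FPIN payload for Link Integrity
 * (LI) descriptors.  Compile with: cc -o fpin fpin.c
 */
#include <stdint.h>
#include <stdio.h>
#include <arpa/inet.h>		/* htonl()/ntohl() */

#define DTAG_LNK_INTEGRITY	0x00020001u	/* LI descriptor tag */

struct li_desc {		/* simplified LI TLV descriptor */
	uint32_t desc_tag;	/* big-endian on the wire */
	uint32_t desc_len;	/* payload length after this header */
	uint64_t attached_wwpn;	/* port that observed the errors */
	/* event type/count and the list of affected WWPNs follow */
};

static void mark_rport_marginal(uint64_t wwpn)
{
	/* stand-in for setting the 'marginal' state on matching rports */
	printf("marking rport %#llx marginal\n", (unsigned long long)wwpn);
}

/* Walk the TLV area of an FPIN frame and act on each LI descriptor. */
static void fpin_rcv(const uint8_t *tlv, uint32_t tlv_len)
{
	while (tlv_len >= sizeof(struct li_desc)) {
		const struct li_desc *d = (const void *)tlv;
		uint32_t len = 8 + ntohl(d->desc_len);	/* header + payload */

		if (ntohl(d->desc_tag) == DTAG_LNK_INTEGRITY)
			mark_rport_marginal(d->attached_wwpn);

		if (len > tlv_len)
			break;			/* malformed, stop walking */
		tlv += len;
		tlv_len -= len;
	}
}

int main(void)
{
	/* one hand-built LI descriptor with an example (fake) WWPN */
	struct li_desc d = {
		.desc_tag = htonl(DTAG_LNK_INTEGRITY),
		.desc_len = htonl(sizeof(d) - 8),
		.attached_wwpn = 0x200000109b1234ffULL,
	};

	fpin_rcv((const uint8_t *)&d, sizeof(d));
	return 0;
}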
The testing for this patch set was performed by Bryan Gurney, using the
process outlined in John Meneghini's presentation at LSFMM 2024, where
the Fibre Channel switch sends an FPIN notification on a specific switch
port, and the following is checked on the initiator:
1. The controllers corresponding to the paths on the port that received
the notification show the NVME_CTRL_MARGINAL flag set.
\
+- nvme4 fc traddr=c,host_traddr=e live optimized
+- nvme5 fc traddr=8,host_traddr=e live non-optimized
+- nvme8 fc traddr=e,host_traddr=f marginal optimized
+- nvme9 fc traddr=a,host_traddr=f marginal non-optimized
2. The I/O statistics of the test namespace show no I/O activity on the
controllers with NVME_CTRL_MARGINAL set.
Device              tps    MB_read/s    MB_wrtn/s    MB_dscd/s
nvme4c4n1          0.00         0.00         0.00         0.00
nvme4c5n1      25001.00         0.00        97.66         0.00
nvme4c9n1      25000.00         0.00        97.66         0.00
nvme4n1        50011.00         0.00       195.36         0.00

Device              tps    MB_read/s    MB_wrtn/s    MB_dscd/s
nvme4c4n1          0.00         0.00         0.00         0.00
nvme4c5n1      48360.00         0.00       188.91         0.00
nvme4c9n1       1642.00         0.00         6.41         0.00
nvme4n1        49981.00         0.00       195.24         0.00

Device              tps    MB_read/s    MB_wrtn/s    MB_dscd/s
nvme4c4n1          0.00         0.00         0.00         0.00
nvme4c5n1      50001.00         0.00       195.32         0.00
nvme4c9n1          0.00         0.00         0.00         0.00
nvme4n1        50016.00         0.00       195.38         0.00
Link: https://people.redhat.com/jmeneghi/LSFMM_2024/LSFMM_2024_NVMe_Cancel_and_FPIN.pdf
Testing has been performed by sending all FPIN LI ELS messages from the
switch to the host and verifying that the proper NVMe multipathing
behavior occurs for each of the eight different FPIN link integrity
events. Results were verified with iostat and with the nvme list-subsys
command.
These tests were run across all scenarios, including those where only
non-optimized paths were available and where all paths were
marginal/degraded. All multipath io-policies were tested: numa,
round-robin, and queue-depth. When all paths on the host are
marginal/degraded, I/O continues on the optimized path that was most
recently non-marginal. If both of the optimized paths are down, I/O
properly continues on one of the marginal/degraded non-optimized paths.
Testing has been completed with both Broadcom (lpfc) and Marvell
(qla2xxx) 32Gb HBAs. Both HBAs successfully complete all tests.
For a complete description of the tests that were run, please see
bugzilla 220329.
Closes: https://bugzilla.kernel.org/show_bug.cgi?id=220329
Changes to the original submission:
- Changed flag name to 'marginal'
- Do not block marginal path; influence path selection instead
to de-prioritize marginal paths
Changes to v2:
- Split off driver-specific modifications
- Introduce 'union fc_tlv_desc' to avoid casts
Changes to v3:
- Include reviews from Justin Tee
- Split marginal path handling patch
Changes to v4:
- Change 'u8' to '__u8' on fc_tlv_desc to fix a failure to build
- Print 'marginal' instead of 'live' in the state of controllers
when they are marginal
Changes to v5:
- Minor spelling corrections to patch descriptions
Changes to v6:
- No code changes; added note about additional testing
Changes to v7:
- Split nvme core marginal flag addition into its own patch
- Add patch for queue_depth marginal path support
Changes to v8:
- Rebased patch series to nvme-6.17.
- Added patch from Gustavo Silva, "scsi: qla2xxx: Fix memcpy field-spanning
write issue", which resolves the field-spanning write in qla2xxx
- We decided to leave the "marginal" state as is, because the transport
driver uses the term "marginal".
This patch series is based upon nvme-6.17.
Bryan Gurney (2):
nvme: add NVME_CTRL_MARGINAL flag
nvme: sysfs: emit the marginal path state in show_state()
Gustavo A. R. Silva (1):
scsi: qla2xxx: Fix memcpy field-spanning write issue
Hannes Reinecke (5):
fc_els: use 'union fc_tlv_desc'
nvme-fc: marginal path handling
nvme-fc: nvme_fc_fpin_rcv() callback
lpfc: enable FPIN notification for NVMe
qla2xxx: enable FPIN notification for NVMe
John Meneghini (1):
nvme-multipath: queue-depth support for marginal paths
drivers/nvme/host/core.c | 1 +
drivers/nvme/host/fc.c | 99 +++++++++++++++++++
drivers/nvme/host/multipath.c | 24 +++--
drivers/nvme/host/nvme.h | 6 ++
drivers/nvme/host/sysfs.c | 4 +-
drivers/scsi/lpfc/lpfc_els.c | 84 ++++++++--------
drivers/scsi/qla2xxx/qla_def.h | 10 +-
drivers/scsi/qla2xxx/qla_isr.c | 20 ++--
drivers/scsi/qla2xxx/qla_nvme.c | 2 +-
drivers/scsi/qla2xxx/qla_os.c | 5 +-
drivers/scsi/scsi_transport_fc.c | 27 +++--
include/linux/nvme-fc-driver.h | 3 +
include/uapi/scsi/fc/fc_els.h | 165 +++++++++++++++++--------------
13 files changed, 293 insertions(+), 157 deletions(-)
--
2.50.1