Message-ID: <97526d45-ec7d-48a0-bdc6-659f75839f53@embeddedor.com>
Date: Thu, 25 Sep 2025 16:30:20 +0200
From: "Gustavo A. R. Silva" <gustavo@...eddedor.com>
To: John Meneghini <jmeneghi@...hat.com>, martin.petersen@...cle.com
Cc: axboe@...nel.dk, bgurney@...hat.com, emilne@...hat.com,
gustavoars@...nel.org, hare@...e.de, hch@....de, james.smart@...adcom.com,
kbusch@...nel.org, kees@...nel.org, linux-hardening@...r.kernel.org,
linux-nvme@...ts.infradead.org, linux-scsi@...r.kernel.org,
njavali@...vell.com, sagi@...mberg.me
Subject: Re: [PATCH] Revert "scsi: qla2xxx: Fix memcpy() field-spanning write
issue"
On 9/25/25 16:18, John Meneghini wrote:
> On 9/25/25 9:38 AM, Gustavo A. R. Silva wrote:
>> On 9/25/25 15:07, John Meneghini wrote:
>>> This reverts commit 6f4b10226b6b1e7d1ff3cdb006cf0f6da6eed71e.
>>>
>>> We've been testing this patch and it turns out there is a significant
>>> bug here. This leaks memory and causes a driver hang.
>>>
>>> Link:
>>> https://lore.kernel.org/linux-scsi/yq1zfajqpec.fsf@ca-mkp.ca.oracle.com/
>>
>> Thanks for the report. I wonder if you have any logs or something I could
>> look at to figure out what's going on.
>
>
> We have a fix already. Chris and Bryan figured it out.
>
>> Bryan,
>>
>> Could you please share how this patch[1] was tested?
>
> Bryan, please reply with the bug fix patch you emailed me yesterday as an RFC patch.
>
> Gustavo, this patch is being tested as a part of our FPIN LI changes. To run this code you need a Brocade switch and a whole lot of hardware.
>
> You can see an example test plan here: https://bugzilla.kernel.org/attachment.cgi?id=308368&action=view
>
> I am about to submit a version 10 patch series for these changes and I will include a new/fixed version of your patch in that series.
Awesome, thank you!
I was in the process of writing the following (draft) patch, which is much
less intrusive than the other one:
diff --git a/drivers/scsi/qla2xxx/qla_def.h b/drivers/scsi/qla2xxx/qla_def.h
index cb95b7b12051..1b000709ccd8 100644
--- a/drivers/scsi/qla2xxx/qla_def.h
+++ b/drivers/scsi/qla2xxx/qla_def.h
@@ -4890,9 +4890,10 @@ struct purex_item {
 			     struct purex_item *pkt);
 	atomic_t in_use;
 	uint16_t size;
-	struct {
-		uint8_t iocb[64];
-	} iocb;
+	union {
+		uint8_t min_iocb[QLA_DEFAULT_PAYLOAD_SIZE];
+		DECLARE_FLEX_ARRAY(uint8_t, iocb);
+	};
 };
 
 #include "qla_edif.h"
@@ -5101,7 +5102,6 @@
 		struct list_head head;
 		spinlock_t lock;
 	} purex_list;
-	struct purex_item default_item;
 
 	struct name_list_extended gnl;
 	/* Count of active session/fcport */
@@ -5130,6 +5130,9 @@
 #define DPORT_DIAG_IN_PROGRESS		BIT_0
 #define DPORT_DIAG_CHIP_RESET_IN_PROGRESS	BIT_1
 	uint16_t dport_status;
+
+	/* Must be last --ends in a flexible-array member. */
+	struct purex_item default_item;
 } scsi_qla_host_t;
 
 struct qla27xx_image_status {
diff --git a/drivers/scsi/qla2xxx/qla_isr.c b/drivers/scsi/qla2xxx/qla_isr.c
index c4c6b5c6658c..a342e137a53a 100644
--- a/drivers/scsi/qla2xxx/qla_isr.c
+++ b/drivers/scsi/qla2xxx/qla_isr.c
@@ -1137,7 +1137,7 @@ static struct purex_item
 	if (!item)
 		return item;
 
-	memcpy(&item->iocb, pkt, sizeof(item->iocb));
+	memcpy(&item->iocb, pkt, QLA_DEFAULT_PAYLOAD_SIZE);
 	return item;
 }
 
diff --git a/drivers/scsi/qla2xxx/qla_nvme.c b/drivers/scsi/qla2xxx/qla_nvme.c
index 316594aa40cc..065f9bcca26f 100644
--- a/drivers/scsi/qla2xxx/qla_nvme.c
+++ b/drivers/scsi/qla2xxx/qla_nvme.c
@@ -1308,7 +1308,7 @@ void qla2xxx_process_purls_iocb(void **pkt, struct rsp_que **rsp)
 
 	ql_dbg(ql_dbg_unsol, vha, 0x2121,
 	       "PURLS OP[%01x] size %d xchg addr 0x%x portid %06x\n",
-	       item->iocb.iocb[3], item->size, uctx->exchange_address,
+	       item->iocb[3], item->size, uctx->exchange_address,
 	       fcport->d_id.b24);
 	/* +48 0 1 2 3 4 5 6 7 8 9 A B C D E F
 	 * ----- -----------------------------------------------
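
To make the intent of that union a bit more concrete, here is a tiny stand-alone userspace sketch. Everything in it (demo_item, demo_alloc_item, the hardcoded 64) is made up purely for illustration and is not qla2xxx code. The idea is that the embedded default item keeps its fixed QLA_DEFAULT_PAYLOAD_SIZE storage through min_iocb, while oversized packets are heap-allocated with the payload appended and copied through the flexible iocb[] view, so there is no fixed-size destination field for a fortified memcpy() to complain about:

/*
 * Stand-alone illustration only; not driver code. demo_item mimics the
 * union-with-flexible-array layout from the draft patch above.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define QLA_DEFAULT_PAYLOAD_SIZE 64	/* assumed to match the driver's define */

struct demo_item {
	uint16_t size;
	union {
		/* fixed storage used by the statically embedded default item */
		uint8_t min_iocb[QLA_DEFAULT_PAYLOAD_SIZE];
		/* open-coded equivalent of DECLARE_FLEX_ARRAY(uint8_t, iocb); GNU C */
		struct {
			struct { } __empty_iocb;
			uint8_t iocb[];
		};
	};
};

/* Heap-allocate an item whose payload may exceed QLA_DEFAULT_PAYLOAD_SIZE. */
static struct demo_item *demo_alloc_item(const uint8_t *pkt, uint16_t len)
{
	struct demo_item *item;

	item = malloc(offsetof(struct demo_item, iocb) + len);
	if (!item)
		return NULL;

	item->size = len;
	/* Copying through the flexible view: no 64-byte field to span. */
	memcpy(item->iocb, pkt, len);
	return item;
}

int main(void)
{
	uint8_t pkt[128];
	struct demo_item default_item;	/* embedded: storage comes from min_iocb */
	struct demo_item *big;

	memset(pkt, 0xab, sizeof(pkt));

	/* The default item only ever holds the fixed default payload size. */
	default_item.size = QLA_DEFAULT_PAYLOAD_SIZE;
	memcpy(default_item.min_iocb, pkt, QLA_DEFAULT_PAYLOAD_SIZE);

	/* Oversized packet: the extra payload lives past the struct. */
	big = demo_alloc_item(pkt, sizeof(pkt));
	if (!big)
		return 1;

	printf("sizeof(struct demo_item) = %zu, big item payload = %u bytes\n",
	       sizeof(struct demo_item), (unsigned int)big->size);

	free(big);
	return 0;
}

With that layout, the remaining subtlety is the one already noted in the hunk above: anything embedding such an item (like default_item in scsi_qla_host_t) has to place it last, because the type now ends in a flexible-array member.
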
But if you already figured it out, that's great. :)
Thanks
-Gustavo
> /John
>
>> Thanks
>> -Gustavo
>>
>> [1] https://lore.kernel.org/linux-scsi/20250813200744.17975-10-bgurney@redhat.com/
>>
>