Message-Id: <20241031173327.663-1-grurikov@gmail.com>
Date: Thu, 31 Oct 2024 20:33:27 +0300
From: George Rurikov <grurikov@...il.com>
To: Christoph Hellwig <hch@....de>
Cc: MrRurikov <grurikov@...l.com>,
Sagi Grimberg <sagi@...mberg.me>,
Chaitanya Kulkarni <kch@...dia.com>,
Keith Busch <kbusch@...nel.org>,
Israel Rukshin <israelr@...lanox.com>,
Max Gurtovoy <maxg@...lanox.com>,
Jens Axboe <axboe@...nel.dk>,
linux-nvme@...ts.infradead.org,
linux-kernel@...r.kernel.org,
stable@...r.kernel.org,
George Rurikov <g.ryurikov@...uritycode.ru>
Subject: [PATCH] nvmet-rdma: add NULL check for queue in nvmet_rdma_cm_handler()
From: MrRurikov <grurikov@...l.com>
After being assigned NULL at rdma.c:1758, the pointer 'queue' is passed
as the first parameter to nvmet_rdma_queue_established() at rdma.c:1773
and to nvmet_rdma_queue_disconnect() at rdma.c:1787, and as the second
parameter to nvmet_rdma_queue_connect_fail() at rdma.c:1800, where it is
dereferenced.
I understand that the driver assumes the RDMA_CM_EVENT_CONNECT_REQUEST
event arrives first and performs the initialization, but maliciously
prepared hardware could emit events in violation of the protocol;
nothing guarantees that the sequence of events starts with
RDMA_CM_EVENT_CONNECT_REQUEST.

Found by Linux Verification Center (linuxtesting.org) with SVACE.

Fixes: e1a2ee249b19 ("nvmet-rdma: Fix use after free in nvmet_rdma_cm_handler()")
Cc: stable@...r.kernel.org
Signed-off-by: George Rurikov <g.ryurikov@...uritycode.ru>
---
drivers/nvme/target/rdma.c | 18 ++++++++++++------
1 file changed, 12 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/target/rdma.c b/drivers/nvme/target/rdma.c
index 1b6264fa5803..becebc95f349 100644
--- a/drivers/nvme/target/rdma.c
+++ b/drivers/nvme/target/rdma.c
@@ -1770,8 +1770,10 @@ static int nvmet_rdma_cm_handler(struct rdma_cm_id *cm_id,
ret = nvmet_rdma_queue_connect(cm_id, event);
break;
case RDMA_CM_EVENT_ESTABLISHED:
- nvmet_rdma_queue_established(queue);
- break;
+ if (queue) {
+ nvmet_rdma_queue_established(queue);
+ break;
+ }
case RDMA_CM_EVENT_ADDR_CHANGE:
if (!queue) {
struct nvmet_rdma_port *port = cm_id->context;
@@ -1782,8 +1784,10 @@ static int nvmet_rdma_cm_handler(struct rdma_cm_id *cm_id,
fallthrough;
case RDMA_CM_EVENT_DISCONNECTED:
case RDMA_CM_EVENT_TIMEWAIT_EXIT:
- nvmet_rdma_queue_disconnect(queue);
- break;
+ if (queue) {
+ nvmet_rdma_queue_disconnect(queue);
+ break;
+ }
case RDMA_CM_EVENT_DEVICE_REMOVAL:
ret = nvmet_rdma_device_removal(cm_id, queue);
break;
@@ -1793,8 +1797,10 @@ static int nvmet_rdma_cm_handler(struct rdma_cm_id *cm_id,
fallthrough;
case RDMA_CM_EVENT_UNREACHABLE:
case RDMA_CM_EVENT_CONNECT_ERROR:
- nvmet_rdma_queue_connect_fail(cm_id, queue);
- break;
+ if (queue) {
+ nvmet_rdma_queue_connect_fail(cm_id, queue);
+ break;
+ }
default:
pr_err("received unrecognized RDMA CM event %d\n",
event->event);
--
2.34.1