Message-Id: <20200418144047.9013-15-sashal@kernel.org>
Date: Sat, 18 Apr 2020 10:39:44 -0400
From: Sasha Levin <sashal@...nel.org>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org
Cc: Sagi Grimberg <sagi@...mberg.me>,
Tony Asleson <tasleson@...hat.com>,
Chaitanya Kulkarni <Chaitanya.Kulkarni@....com>,
Keith Busch <kbusch@...nel.org>,
Christoph Hellwig <hch@....de>,
Sasha Levin <sashal@...nel.org>, linux-nvme@...ts.infradead.org
Subject: [PATCH AUTOSEL 5.4 15/78] nvme-tcp: fix possible crash in write_zeroes processing
From: Sagi Grimberg <sagi@...mberg.me>

[ Upstream commit 25e5cb780e62bde432b401f312bb847edc78b432 ]

We cannot look at blk_rq_payload_bytes() without first checking that
the request has mappable physical segments (i.e.
blk_rq_nr_phys_segments(rq) != 0), and only then take the request
payload bytes. Skipping that check caused us to send a wrong SGL to
the target, or even to dereference a non-existent buffer if we
actually reached the data send sequence (when the data was
in-capsule).
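
As an illustration of the guarded pattern the patch applies, here is a
minimal sketch (the helper name nvme_tcp_rq_payload is hypothetical and
not part of the driver): blk_rq_payload_bytes() is consulted only when
the request has mappable segments, so a write-zeroes request, which
reports payload bytes but carries no data buffer, maps to a zero-length
transfer:

	/* Sketch only; hypothetical helper, not in this patch. */
	#include <linux/blkdev.h>

	static unsigned int nvme_tcp_rq_payload(struct request *rq)
	{
		/* write-zeroes: payload bytes reported, no phys segments */
		if (!blk_rq_nr_phys_segments(rq))
			return 0;
		return blk_rq_payload_bytes(rq);
	}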
Reported-by: Tony Asleson <tasleson@...hat.com>
Suggested-by: Chaitanya Kulkarni <Chaitanya.Kulkarni@....com>
Signed-off-by: Sagi Grimberg <sagi@...mberg.me>
Signed-off-by: Keith Busch <kbusch@...nel.org>
Signed-off-by: Christoph Hellwig <hch@....de>
Signed-off-by: Sasha Levin <sashal@...nel.org>
---
drivers/nvme/host/tcp.c | 13 +++++++------
1 file changed, 7 insertions(+), 6 deletions(-)
diff --git a/drivers/nvme/host/tcp.c b/drivers/nvme/host/tcp.c
index 244984420b41b..11e84ed4de361 100644
--- a/drivers/nvme/host/tcp.c
+++ b/drivers/nvme/host/tcp.c
@@ -164,16 +164,14 @@ static inline bool nvme_tcp_async_req(struct nvme_tcp_request *req)
 static inline bool nvme_tcp_has_inline_data(struct nvme_tcp_request *req)
 {
 	struct request *rq;
-	unsigned int bytes;
 
 	if (unlikely(nvme_tcp_async_req(req)))
 		return false; /* async events don't have a request */
 
 	rq = blk_mq_rq_from_pdu(req);
-	bytes = blk_rq_payload_bytes(rq);
 
-	return rq_data_dir(rq) == WRITE && bytes &&
-		bytes <= nvme_tcp_inline_data_size(req->queue);
+	return rq_data_dir(rq) == WRITE && req->data_len &&
+		req->data_len <= nvme_tcp_inline_data_size(req->queue);
 }
 
 static inline struct page *nvme_tcp_req_cur_page(struct nvme_tcp_request *req)
@@ -2090,7 +2088,9 @@ static blk_status_t nvme_tcp_map_data(struct nvme_tcp_queue *queue,
 
 	c->common.flags |= NVME_CMD_SGL_METABUF;
 
-	if (rq_data_dir(rq) == WRITE && req->data_len &&
+	if (!blk_rq_nr_phys_segments(rq))
+		nvme_tcp_set_sg_null(c);
+	else if (rq_data_dir(rq) == WRITE &&
 	    req->data_len <= nvme_tcp_inline_data_size(queue))
 		nvme_tcp_set_sg_inline(queue, c, req->data_len);
 	else
@@ -2117,7 +2117,8 @@ static blk_status_t nvme_tcp_setup_cmd_pdu(struct nvme_ns *ns,
 	req->data_sent = 0;
 	req->pdu_len = 0;
 	req->pdu_sent = 0;
-	req->data_len = blk_rq_payload_bytes(rq);
+	req->data_len = blk_rq_nr_phys_segments(rq) ?
+				blk_rq_payload_bytes(rq) : 0;
 	req->curr_bio = rq->bio;
 
 	if (rq_data_dir(rq) == WRITE &&
--
2.20.1