Message-Id: <20230317202749.419094-3-eblake@redhat.com>
Date:   Fri, 17 Mar 2023 15:27:46 -0500
From:   Eric Blake <eblake@redhat.com>
To:     josef@toxicpanda.com, linux-block@vger.kernel.org,
        nbd@other.debian.org
Cc:     philipp.reisner@linbit.com, lars.ellenberg@linbit.com,
        christoph.boehmwalder@linbit.com, corbet@lwn.net,
        linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/5] block nbd: send handle in network order

The NBD spec says the client handle (or cookie) is opaque to the
server, so in theory it does not matter which endianness we use;
to date, the use of memcpy() between a u64 and a char[8] has exposed
native endianness whenever the handle is treated as a 64-bit number.
However, since the NBD protocol documents that everything else is in
network order, and since tools like Wireshark dump even the contents
of the handle as seen over the network, it is worth using a
consistent ordering regardless of the host's native endianness.
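
For illustration only (not part of this patch), a minimal userspace
sketch of the problem; htobe64() is the userspace analogue of the
kernel's cpu_to_be64():

  #include <stdio.h>
  #include <string.h>
  #include <stdint.h>
  #include <endian.h>   /* htobe64(); kernel code uses cpu_to_be64() */

  int main(void)
  {
          uint64_t handle = 0x0102030405060708ULL;
          uint64_t be = htobe64(handle);
          unsigned char raw[8], wire[8];
          int i;

          memcpy(raw, &handle, sizeof(handle)); /* host order: varies */
          memcpy(wire, &be, sizeof(be));        /* network order: fixed */

          for (i = 0; i < 8; i++)
                  printf("%02x ", raw[i]);
          printf("\n");  /* "08 07 06 ... 01" on x86; reversed on s390 */
          for (i = 0; i < 8; i++)
                  printf("%02x ", wire[i]);
          printf("\n");  /* "01 02 03 ... 08" on any host */
          return 0;
  }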

Plus, using a consistent endianness now allows an upcoming patch to
simplify this code to a direct integer assignment instead of a
memcpy().
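
A rough sketch of where that is headed (the __be64 "cookie" field
name below is hypothetical, for illustration only): if the wire
structs declared the handle as __be64 instead of char[8], each
memcpy() pair would collapse to a plain assignment:

  request.cookie = cpu_to_be64(handle);   /* nbd_send_cmd() side */
  handle = be64_to_cpu(reply->cookie);    /* nbd_handle_reply() side */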

Signed-off-by: Eric Blake <eblake@redhat.com>

---
v2: new patch
---
 drivers/block/nbd.c | 10 +++++++---
 1 file changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 592cfa8b765a..8a9487e79f1c 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -560,6 +560,7 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
 	unsigned long size = blk_rq_bytes(req);
 	struct bio *bio;
 	u64 handle;
+	__be64 tmp;
 	u32 type;
 	u32 nbd_cmd_flags = 0;
 	int sent = nsock->sent, skip = 0;
@@ -606,7 +607,8 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
 		request.len = htonl(size);
 	}
 	handle = nbd_cmd_handle(cmd);
-	memcpy(request.handle, &handle, sizeof(handle));
+	tmp = cpu_to_be64(handle);
+	memcpy(request.handle, &tmp, sizeof(tmp));

 	trace_nbd_send_request(&request, nbd->index, blk_mq_rq_from_pdu(cmd));

@@ -618,7 +620,7 @@ static int nbd_send_cmd(struct nbd_device *nbd, struct nbd_cmd *cmd, int index)
 	trace_nbd_header_sent(req, handle);
 	if (result < 0) {
 		if (was_interrupted(result)) {
-			/* If we havne't sent anything we can just return BUSY,
+			/* If we haven't sent anything we can just return BUSY,
 			 * however if we have sent something we need to make
 			 * sure we only allow this req to be sent until we are
 			 * completely done.
@@ -727,12 +729,14 @@ static struct nbd_cmd *nbd_handle_reply(struct nbd_device *nbd, int index,
 	int result;
 	struct nbd_cmd *cmd;
 	struct request *req = NULL;
+	__be64 tmp;
 	u64 handle;
 	u16 hwq;
 	u32 tag;
 	int ret = 0;

-	memcpy(&handle, reply->handle, sizeof(handle));
+	memcpy(&tmp, reply->handle, sizeof(tmp));
+	handle = be64_to_cpu(tmp);
 	tag = nbd_handle_to_tag(handle);
 	hwq = blk_mq_unique_tag_to_hwq(tag);
 	if (hwq < nbd->tag_set.nr_hw_queues)
-- 
2.39.2
