Message-ID: <20231117211544.1740466-36-dhowells@redhat.com>
Date: Fri, 17 Nov 2023 21:15:27 +0000
From: David Howells <dhowells@...hat.com>
To: Jeff Layton <jlayton@...nel.org>,
Steve French <smfrench@...il.com>
Cc: David Howells <dhowells@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Marc Dionne <marc.dionne@...istor.com>,
Paulo Alcantara <pc@...guebit.com>,
Shyam Prasad N <sprasad@...rosoft.com>,
Tom Talpey <tom@...pey.com>,
Dominique Martinet <asmadeus@...ewreck.org>,
Ilya Dryomov <idryomov@...il.com>,
Christian Brauner <christian@...uner.io>,
linux-cachefs@...hat.com,
linux-afs@...ts.infradead.org,
linux-cifs@...r.kernel.org,
linux-nfs@...r.kernel.org,
ceph-devel@...r.kernel.org,
v9fs@...ts.linux.dev,
linux-fsdevel@...r.kernel.org,
linux-mm@...ck.org,
netdev@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: [PATCH v2 35/51] netfs: Support decryption on unbuffered/DIO read

Support unbuffered and direct I/O reads from an encrypted file. This may
require making a larger read than was requested into a bounce buffer and
then copying out just the requested bytes. We don't decrypt in-place in
the user buffer lest userspace interfere and muck up the decryption.
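
For illustration only - none of the code below is in the patch and the
numbers are made up - the arithmetic is roughly as follows: expand the
request to crypto-block bounds, read and decrypt in the bounce buffer,
then copy out the requested slice:

#include <stdio.h>

/* Standalone sketch: expand a 100-byte read at file position 5000 to
 * crypto-block bounds (assuming 4KiB crypto blocks), as would be done
 * before reading into the bounce buffer.
 */
int main(void)
{
	unsigned long long bsize = 1ULL << 12;	/* 1 << min_bshift, say 4KiB */
	unsigned long long pos = 5000, count = 100;

	unsigned long long start = pos & ~(bsize - 1);	/* round down */
	unsigned long long end = (pos + count + bsize - 1) & ~(bsize - 1);

	/* Read [start, end) from the server into a bounce buffer, decrypt
	 * there - never in the user buffer - then copy out only the
	 * caller's [pos, pos + count) slice.
	 */
	printf("server read %llu bytes @%llu; copy %llu bytes from bounce offset %llu\n",
	       end - start, start, count, pos - start);
	return 0;
}
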
Signed-off-by: David Howells <dhowells@...hat.com>
cc: Jeff Layton <jlayton@...nel.org>
cc: linux-cachefs@...hat.com
cc: linux-fsdevel@...r.kernel.org
cc: linux-mm@...ck.org
---
 fs/netfs/direct_read.c | 10 ++++++++++
 fs/netfs/internal.h    | 17 +++++++++++++++++
 2 files changed, 27 insertions(+)

diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 52ad8fa66dd5..158719b56900 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -181,6 +181,16 @@ static ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_
 		iov_iter_advance(iter, orig_count);
 	}
 
+	/* If we're going to do decryption or decompression, we're going to
+	 * need a bounce buffer - and if the data is misaligned for the crypto
+	 * algorithm, we decrypt in place in the bounce buffer, then copy out.
+	 */
+	if (test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags)) {
+		if (!netfs_is_crypto_aligned(rreq, iter))
+			__set_bit(NETFS_RREQ_CRYPT_IN_PLACE, &rreq->flags);
+		__set_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags);
+	}
+
 	/* If we're going to use a bounce buffer, we need to set it up. We
 	 * will then need to pad the request out to the minimum block size.
 	 */
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index fbecfd9b3174..447a67301329 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -193,6 +193,23 @@ static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
 		netfs_group->free(netfs_group);
 }
 
+/*
+ * Check to see if a buffer aligns with the crypto unit block size.  If it
+ * doesn't, the crypto layer is going to copy all the data - in which case
+ * relying on the crypto op for a free copy is pointless.
+ */
+static inline bool netfs_is_crypto_aligned(struct netfs_io_request *rreq,
+					   struct iov_iter *iter)
+{
+	struct netfs_inode *ctx = netfs_inode(rreq->inode);
+	unsigned long align, mask = (1UL << ctx->min_bshift) - 1;
+
+	if (!ctx->min_bshift)
+		return true;
+	align = iov_iter_alignment(iter);
+	return (align & mask) == 0;
+}
+
 /*****************************************************************************/
 /*
  * debug tracing
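
To see what the new helper treats as aligned: iov_iter_alignment() ORs
together the base addresses and lengths of all the iterator's segments,
so any bit set below min_bshift indicates a block-misaligned buffer.  A
standalone sketch of the same test (not kernel code; the sample values
are made up):

#include <stdbool.h>
#include <stdio.h>

static bool is_crypto_aligned(unsigned long align, unsigned int min_bshift)
{
	unsigned long mask = (1UL << min_bshift) - 1;

	if (!min_bshift)	/* no crypto block size constraint */
		return true;
	return (align & mask) == 0;
}

int main(void)
{
	/* A segment base of 0x201000 and length 0x2000 with 4KiB blocks. */
	printf("%d\n", is_crypto_aligned(0x201000UL | 0x2000UL, 12)); /* 1 */
	/* A misaligned base of 0x201234 trips the mask. */
	printf("%d\n", is_crypto_aligned(0x201234UL | 0x2000UL, 12)); /* 0 */
	return 0;
}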