Message-ID: <20231207212206.1379128-40-dhowells@redhat.com>
Date: Thu, 7 Dec 2023 21:21:46 +0000
From: David Howells <dhowells@...hat.com>
To: Jeff Layton <jlayton@...nel.org>, Steve French <smfrench@...il.com>
Cc: David Howells <dhowells@...hat.com>,
Matthew Wilcox <willy@...radead.org>,
Marc Dionne <marc.dionne@...istor.com>,
Paulo Alcantara <pc@...guebit.com>,
Shyam Prasad N <sprasad@...rosoft.com>,
Tom Talpey <tom@...pey.com>,
Dominique Martinet <asmadeus@...ewreck.org>,
Eric Van Hensbergen <ericvh@...nel.org>,
Ilya Dryomov <idryomov@...il.com>,
Christian Brauner <christian@...uner.io>,
linux-cachefs@...hat.com, linux-afs@...ts.infradead.org,
linux-cifs@...r.kernel.org, linux-nfs@...r.kernel.org,
ceph-devel@...r.kernel.org, v9fs@...ts.linux.dev,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v3 39/59] netfs: Support decryption on unbuffered/DIO read

Support unbuffered and direct I/O reads from an encrypted file.  This may
require making a larger read than requested into a bounce buffer and then
copying out just the required bits.  We don't decrypt in-place in the user
buffer lest userspace interfere and muck up the decryption.

Signed-off-by: David Howells <dhowells@...hat.com>
cc: Jeff Layton <jlayton@...nel.org>
cc: linux-cachefs@...hat.com
cc: linux-fsdevel@...r.kernel.org
cc: linux-mm@...ck.org
---
 fs/netfs/direct_read.c | 10 ++++++++++
 fs/netfs/internal.h    | 17 +++++++++++++++++
 2 files changed, 27 insertions(+)
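As a rough illustration of the bounce-buffer idea described in the commit
message above (not part of the patch): the sketch below expands a misaligned
read to whole crypto blocks and shows which span would be copied back to the
caller.  The 4KiB block size, the helper name and the standalone userspace
form are assumptions for the example; in netfs the block size comes from
netfs_inode::min_bshift and the copy-out is driven by an iov_iter.

#include <stdio.h>

#define CRYPTO_BLOCK_SHIFT 12			/* assumption: 4KiB crypto blocks */
#define CRYPTO_BLOCK_SIZE  (1UL << CRYPTO_BLOCK_SHIFT)

/* Round a byte range outwards to whole crypto blocks (hypothetical helper). */
static void round_to_crypto_blocks(unsigned long start, unsigned long len,
				   unsigned long *bstart, unsigned long *blen)
{
	unsigned long end = start + len;

	*bstart = start & ~(CRYPTO_BLOCK_SIZE - 1);
	*blen = ((end + CRYPTO_BLOCK_SIZE - 1) & ~(CRYPTO_BLOCK_SIZE - 1)) - *bstart;
}

int main(void)
{
	unsigned long start = 5000, len = 300;	/* a misaligned DIO read */
	unsigned long bstart, blen;

	round_to_crypto_blocks(start, len, &bstart, &blen);

	/* The read and decryption cover [bstart, bstart + blen); only
	 * [start, start + len) is then copied to the user buffer.
	 */
	printf("user asked for bytes %lu..%lu\n", start, start + len);
	printf("bounce buffer covers %lu..%lu (%lu bytes)\n",
	       bstart, bstart + blen, blen);
	return 0;
}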
diff --git a/fs/netfs/direct_read.c b/fs/netfs/direct_read.c
index 52ad8fa66dd5..158719b56900 100644
--- a/fs/netfs/direct_read.c
+++ b/fs/netfs/direct_read.c
@@ -181,6 +181,16 @@ static ssize_t netfs_unbuffered_read_iter_locked(struct kiocb *iocb, struct iov_
 		iov_iter_advance(iter, orig_count);
 	}
 
+	/* If we're going to do decryption or decompression, we're going to
+	 * need a bounce buffer - and if the data is misaligned for the crypto
+	 * algorithm, we decrypt in place and then copy.
+	 */
+	if (test_bit(NETFS_RREQ_CONTENT_ENCRYPTION, &rreq->flags)) {
+		if (!netfs_is_crypto_aligned(rreq, iter))
+			__set_bit(NETFS_RREQ_CRYPT_IN_PLACE, &rreq->flags);
+		__set_bit(NETFS_RREQ_USE_BOUNCE_BUFFER, &rreq->flags);
+	}
+
 	/* If we're going to use a bounce buffer, we need to set it up. We
 	 * will then need to pad the request out to the minimum block size.
 	 */
diff --git a/fs/netfs/internal.h b/fs/netfs/internal.h
index b6c142ef996a..7180e2931189 100644
--- a/fs/netfs/internal.h
+++ b/fs/netfs/internal.h
@@ -198,6 +198,23 @@ static inline void netfs_put_group_many(struct netfs_group *netfs_group, int nr)
 		netfs_group->free(netfs_group);
 }
 
+/*
+ * Check to see if a buffer aligns with the crypto unit block size.  If it
+ * doesn't, the crypto layer is going to copy all the data - in which case
+ * relying on the crypto op for a free copy is pointless.
+ */
+static inline bool netfs_is_crypto_aligned(struct netfs_io_request *rreq,
+					   struct iov_iter *iter)
+{
+	struct netfs_inode *ctx = netfs_inode(rreq->inode);
+	unsigned long align, mask = (1UL << ctx->min_bshift) - 1;
+
+	if (!ctx->min_bshift)
+		return true;
+	align = iov_iter_alignment(iter);
+	return (align & mask) == 0;
+}
+
 /*
  * fscache-cache.c
  */
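As a rough userspace illustration of the alignment test in
netfs_is_crypto_aligned() above (not part of the patch): the kernel's
iov_iter_alignment() roughly ORs together the addresses and lengths of the
buffer segments, so any bit set below the crypto block size means at least
one segment is misaligned.  The struct name, the segment values and the
512-byte block size below are assumptions for the example.

#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct seg {
	uintptr_t base;		/* segment start address */
	size_t len;		/* segment length */
};

/* OR together bases and lengths, roughly what iov_iter_alignment() does. */
static unsigned long segs_alignment(const struct seg *segs, int n)
{
	unsigned long align = 0;
	int i;

	for (i = 0; i < n; i++)
		align |= segs[i].base | segs[i].len;
	return align;
}

int main(void)
{
	unsigned int min_bshift = 9;		/* assumption: 512-byte blocks */
	unsigned long mask = (1UL << min_bshift) - 1;

	struct seg ok[]  = { { 0x10000, 4096 }, { 0x20000, 512 } };
	struct seg bad[] = { { 0x10100, 4096 } };	/* base not 512-aligned */

	printf("aligned buffer:    %s\n",
	       (segs_alignment(ok, 2) & mask) == 0 ?
	       "crypto output can go straight to it" :
	       "decrypt in place in the bounce buffer, then copy");
	printf("misaligned buffer: %s\n",
	       (segs_alignment(bad, 1) & mask) == 0 ?
	       "crypto output can go straight to it" :
	       "decrypt in place in the bounce buffer, then copy");
	return 0;
}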