Message-ID: <458de992be8760c387f7a4e55a1e42a021090a02.camel@ibm.com>
Date: Wed, 12 Mar 2025 18:43:08 +0000
From: Viacheslav Dubeyko <Slava.Dubeyko@....com>
To: "slava@...eyko.com" <slava@...eyko.com>,
David Howells
<dhowells@...hat.com>
CC: "ceph-devel@...r.kernel.org" <ceph-devel@...r.kernel.org>,
Alex Markuze
<amarkuze@...hat.com>, Xiubo Li <xiubli@...hat.com>,
"brauner@...nel.org"
<brauner@...nel.org>,
"idryomov@...il.com" <idryomov@...il.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] ceph: Fix incorrect flush end position calculation
On Wed, 2025-03-12 at 10:47 +0000, David Howells wrote:
> In ceph, in fill_fscrypt_truncate(), the end flush position is calculated
> by:
>
> loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1;
>
> but that's using the block shift, not the block size.
>
> Fix this to use the block size instead.
>
> Fixes: 5c64737d2536 ("ceph: add truncate size handling support for fscrypt")
> Signed-off-by: David Howells <dhowells@...hat.com>
> cc: Viacheslav Dubeyko <slava@...eyko.com>
> cc: Alex Markuze <amarkuze@...hat.com>
> cc: Xiubo Li <xiubli@...hat.com>
> cc: Ilya Dryomov <idryomov@...il.com>
> cc: ceph-devel@...r.kernel.org
> cc: linux-fsdevel@...r.kernel.org
> ---
> fs/ceph/inode.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/fs/ceph/inode.c b/fs/ceph/inode.c
> index ab63c7ebce5b..ec9b80fec7be 100644
> --- a/fs/ceph/inode.c
> +++ b/fs/ceph/inode.c
> @@ -2363,7 +2363,7 @@ static int fill_fscrypt_truncate(struct inode *inode,
> 
>  	/* Try to writeback the dirty pagecaches */
>  	if (issued & (CEPH_CAP_FILE_BUFFER)) {
> -		loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1;
> +		loff_t lend = orig_pos + CEPH_FSCRYPT_BLOCK_SIZE - 1;
> 
>  		ret = filemap_write_and_wait_range(inode->i_mapping,
>  						   orig_pos, lend);
>
>
Looks good.
Reviewed-by: Viacheslav Dubeyko <Slava.Dubeyko@....com>
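Just to spell out the impact for the record, here is a minimal userspace
sketch of the difference (assuming the usual definitions from
fs/ceph/crypto.h, i.e. CEPH_FSCRYPT_BLOCK_SHIFT == 12 and
CEPH_FSCRYPT_BLOCK_SIZE == 4096; orig_pos is a made-up value here):

#include <stdio.h>

#define CEPH_FSCRYPT_BLOCK_SHIFT 12
#define CEPH_FSCRYPT_BLOCK_SIZE  (1UL << CEPH_FSCRYPT_BLOCK_SHIFT)

int main(void)
{
	long long orig_pos = 8192;	/* hypothetical truncate position */

	/* Old code: the flush window ends SHIFT - 1 = 11 bytes past orig_pos. */
	long long lend_shift = orig_pos + CEPH_FSCRYPT_BLOCK_SHIFT - 1;

	/* Fixed code: the flush window covers the whole fscrypt block. */
	long long lend_size = orig_pos + CEPH_FSCRYPT_BLOCK_SIZE - 1;

	printf("with SHIFT: flush [%lld, %lld], %lld bytes\n",
	       orig_pos, lend_shift, lend_shift - orig_pos + 1);
	printf("with SIZE:  flush [%lld, %lld], %lld bytes\n",
	       orig_pos, lend_size, lend_size - orig_pos + 1);
	return 0;
}

So the old code asked filemap_write_and_wait_range() for a 12-byte range
instead of the full 4096-byte fscrypt block.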
Do we know an easy way to reproduce the issue?
Thanks,
Slava.