Message-ID: <87mskyxf3l.fsf@linux.dev>
Date: Tue, 27 Aug 2024 14:36:14 +0100
From: Luis Henriques <luis.henriques@...ux.dev>
To: Xiubo Li <xiubli@...hat.com>
Cc: Ilya Dryomov <idryomov@...il.com>, ceph-devel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] ceph: fix out-of-bound array access when doing a
file read
On Thu, Aug 22 2024, Luis Henriques (SUSE) wrote:
> If, while doing a read, the inode is updated and its size is set to zero,
> __ceph_sync_read() may not be able to handle it. It is thus easy to hit a
> NULL pointer dereference by continuously reading a file while, on another
> client, we keep truncating it and writing new data into it.
>
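The scenario is roughly the following (this is only a sketch of my own, with a
made-up path and sizes -- the actual testcase is in the tracker item below):
run "./repro write" on one client and "./repro read" on another, both against
the same CephFS file, and wait for the reading client to crash.

/* rough reproducer sketch only -- not the tracker testcase */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	char buf[65536];
	int fd, reader = (argc > 1 && !strcmp(argv[1], "read"));

	memset(buf, 'x', sizeof(buf));
	for (;;) {
		fd = open("/mnt/cephfs/testfile", O_RDWR | O_CREAT, 0644);
		if (fd < 0)
			return 1;
		if (reader) {
			/* with the other client constantly updating the
			 * inode, reads end up in the sync read path */
			while (read(fd, buf, sizeof(buf)) > 0)
				;
		} else {
			/* shrink the file to zero, then refill it */
			ftruncate(fd, 0);
			write(fd, buf, sizeof(buf));
		}
		close(fd);
	}
}
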
> This patch fixes the issue by adding extra checks to avoid the unsigned
> integer wrap-around that can occur when the inode size is zero. This will
> prevent the page-copy loop from running and thus from accessing the pages[]
> array beyond num_pages.
>
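To make the failure mode a bit more explicit: if i_size drops to zero (or,
more generally, below 'off') while 'ret' is still positive, the old
"left = i_size - off" wraps around to a huge value and the copy loop then
walks way past num_pages. A quick userspace illustration (the types here are
simplified, not the exact ones used in __ceph_sync_read()):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t i_size = 0;	/* the other client truncated the file    */
	uint64_t off = 4096;	/* ...while we were reading past offset 0 */
	int64_t ret = 65536;	/* but the OSD reply still carried data   */
	uint64_t left;

	if (ret <= 0)			/* the old checks only */
		left = 0;
	else if (off + ret > i_size)
		left = i_size - off;	/* 0 - 4096: wraps around */
	else
		left = ret;

	/* left is now ~2^64, so the copy loop runs off the end of pages[] */
	printf("left = %llu\n", (unsigned long long)left);
	return 0;
}
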
> Link: https://tracker.ceph.com/issues/67524
> Signed-off-by: Luis Henriques (SUSE) <luis.henriques@...ux.dev>
> ---
> Hi!
>
> Please note that this patch is only lightly tested and, to be honest, I'm
> not sure if this is the correct way to fix this bug. For example, if the
> inode size is 0, then maybe ceph_osdc_wait_request() should have returned
> 0 and the problem would be solved. However, it seems to be returning the
> size of the reply message and that's not something easy to change. Or maybe
> I'm just reading it wrong. Anyway, this is just an RFC to see if there are
> other ideas.
>
> Also, the tracker contains a simple testcase for crashing the client.
Just for the record, I've done a quick bisect as this bug is easily
reproducible. The issue was introduced in v6.9-rc1, with commit
1065da21e5df ("ceph: stop copying to iter at EOF on sync reads").
Reverting it makes the crash go away.
Cheers,
--
Luís
> fs/ceph/file.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
>
> diff --git a/fs/ceph/file.c b/fs/ceph/file.c
> index 4b8d59ebda00..dc23d5e5b11e 100644
> --- a/fs/ceph/file.c
> +++ b/fs/ceph/file.c
> @@ -1200,9 +1200,9 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
> }
>
> idx = 0;
> - if (ret <= 0)
> + if ((ret <= 0) || (i_size == 0))
> left = 0;
> - else if (off + ret > i_size)
> + else if ((i_size >= off) && (off + ret > i_size))
> left = i_size - off;
> else
> left = ret;
> @@ -1210,6 +1210,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
> size_t plen, copied;
>
> plen = min_t(size_t, left, PAGE_SIZE - page_off);
> + WARN_ON_ONCE(idx >= num_pages);
> SetPageUptodate(pages[idx]);
> copied = copy_page_to_iter(pages[idx++],
> page_off, plen, to);
> @@ -1234,7 +1235,7 @@ ssize_t __ceph_sync_read(struct inode *inode, loff_t *ki_pos,
> }
>
> if (ret > 0) {
> - if (off >= i_size) {
> + if ((i_size >= *ki_pos) && (off >= i_size)) {
> *retry_op = CHECK_EOF;
> ret = i_size - *ki_pos;
> *ki_pos = i_size;
>