Message-ID: <5d44ae23-4a68-446a-9ae8-f5b809437b32@redhat.com>
Date: Wed, 28 Aug 2024 13:47:10 +0800
From: Xiubo Li <xiubli@...hat.com>
To: Luis Henriques <luis.henriques@...ux.dev>
Cc: Ilya Dryomov <idryomov@...il.com>, ceph-devel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH] ceph: fix out-of-bound array access when doing a file
read
On 8/27/24 21:36, Luis Henriques wrote:
> On Thu, Aug 22 2024, Luis Henriques (SUSE) wrote:
>
>> If the inode is updated and its size set to zero while a read is in
>> progress, __ceph_sync_read() may not be able to handle it. It is thus easy
>> to hit a NULL pointer dereference by continuously reading a file while, on
>> another client, we keep truncating it and writing new data into it.
>>
>> This patch fixes the issue by adding extra checks to avoid integer
>> overflows when the inode size is zero. This prevents the page-copy loop
>> from running and thus from accessing the pages[] array beyond num_pages.
>>
>> Link: https://tracker.ceph.com/issues/67524
>> Signed-off-by: Luis Henriques (SUSE) <luis.henriques@...ux.dev>
>> ---
>> Hi!
>>
>> Please note that this patch is only lightly tested and, to be honest, I'm
>> not sure if this is the correct way to fix this bug. For example, if the
>> inode size is 0, then maybe ceph_osdc_wait_request() should have returned
>> 0 and the problem would be solved. However, it seems to be returning the
>> size of the reply message and that's not something easy to change. Or maybe
>> I'm just reading it wrong. Anyway, this is just an RFC to see if there are
>> other ideas.
>>
>> Also, the tracker contains a simple testcase for crashing the client.
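
For reference, here is a minimal userspace sketch of the failure mode
described in the quoted commit message. This is not the patch itself, and
the names (i_size, off, ret, num_pages) only loosely mirror
__ceph_sync_read(); it just shows how an unsigned "bytes left" computation
wraps around once the other client has truncated the inode to zero, which
is what lets the page-copy loop run past num_pages:

  #include <stdio.h>
  #include <stdint.h>

  #define PAGE_SIZE 4096ULL

  /* Naive EOF clamp: wraps around when i_size < off. */
  static uint64_t copy_len(uint64_t i_size, uint64_t off, uint64_t ret)
  {
          uint64_t left = ret;

          if (off + left > i_size)
                  left = i_size - off;
          return left;
  }

  /* Guarded version: nothing left to copy once off is at or past EOF. */
  static uint64_t copy_len_checked(uint64_t i_size, uint64_t off, uint64_t ret)
  {
          if (i_size == 0 || off >= i_size)
                  return 0;
          return (ret < i_size - off) ? ret : i_size - off;
  }

  int main(void)
  {
          uint64_t off = 65536, ret = 65536, num_pages = ret / PAGE_SIZE;

          /* Another client truncated the file: i_size is now 0. */
          uint64_t bad = copy_len(0, off, ret);
          uint64_t good = copy_len_checked(0, off, ret);

          printf("naive: left=%llu (%llu pages, num_pages=%llu)\n",
                 (unsigned long long)bad,
                 (unsigned long long)((bad + PAGE_SIZE - 1) / PAGE_SIZE),
                 (unsigned long long)num_pages);
          printf("checked: left=%llu\n", (unsigned long long)good);
          return 0;
  }
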
> Just for the record, I've done a quick bisect as this bug is easily
> reproducible. The issue was introduced in v6.9-rc1, with commit
> 1065da21e5df ("ceph: stop copying to iter at EOF on sync reads").
> Reverting it makes the crash go away.
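
For anyone who wants to poke at it, here is a rough sketch of a reproducer
in the spirit of the description above. The tracker's testcase may differ;
in the reported scenario the reader and the writer access the same file
through two different cephfs mounts (clients), so that the conflicting
access pushes the reads down the synchronous, uncached path:

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
          char buf[8192];

          if (argc != 2) {
                  fprintf(stderr, "usage: %s <file-on-cephfs>\n", argv[0]);
                  return 1;
          }

          if (fork() == 0) {
                  /* Writer: keep truncating to zero and writing fresh data. */
                  memset(buf, 'x', sizeof(buf));
                  for (;;) {
                          int fd = open(argv[1], O_WRONLY | O_CREAT, 0644);
                          if (fd < 0)
                                  exit(1);
                          if (ftruncate(fd, 0) == 0 &&
                              write(fd, buf, sizeof(buf)) < 0)
                                  perror("write");
                          close(fd);
                  }
          }

          /* Reader: keep issuing reads from the start of the file. */
          for (;;) {
                  int fd = open(argv[1], O_RDONLY);
                  if (fd < 0)
                          continue;
                  while (read(fd, buf, sizeof(buf)) > 0)
                          ;
                  close(fd);
          }
  }
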
Thanks very much, Luis.
So let's try to find the root cause and then improve the patch.
Thanks
- Xiubo
> Cheers,