Message-ID: <d6ee4571-64d6-ebd2-4adb-83f33e5e608d@vivo.com>
Date: Mon, 10 Jul 2023 11:32:43 +0800
From: Chunhai Guo <guochunhai@...o.com>
To: Gao Xiang <hsiangkao@...ux.alibaba.com>,
"xiang@...nel.org" <xiang@...nel.org>,
"chao@...nel.org" <chao@...nel.org>
Cc: "huyue2@...lpad.com" <huyue2@...lpad.com>,
"jefflexu@...ux.alibaba.com" <jefflexu@...ux.alibaba.com>,
"linux-erofs@...ts.ozlabs.org" <linux-erofs@...ts.ozlabs.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] erofs: fix two loop issues when read page beyond EOF
Hi Xiang,
On 2023/7/8 17:00, Gao Xiang wrote:
> Hi Chunhai,
>
> On 2023/7/8 14:24, Chunhai Guo wrote:
>> When z_erofs_read_folio() reads a page with an offset far beyond EOF, two
>> issues may occur:
>> - z_erofs_pcluster_readmore() may loop for a long time when the offset
>> is large enough, which is unnecessary.
>> - For example, it loops 4691368 times and takes about 27 seconds
>> in the following case.
>> - offset = 19217289215
>> - inode_size = 1442672
>> - z_erofs_do_read_page() may loop infinitely due to the inappropriate
>> truncation in the statement below. Since the offset is 64 bits but
>> min_t() truncates the result to 32 bits, the computation can wrap. The
>> solution is to replace unsigned int with a 64-bit type, such as erofs_off_t.
>> cur = end - min_t(unsigned int, offset + end - map->m_la, end);
>> - For example:
>> - offset = 0x400160000
>> - end = 0x370
>> - map->m_la = 0x160370
>> - offset + end - map->m_la = 0x400000000
>> - offset + end - map->m_la = 0x00000000 (truncated as unsigned int)
>
> Thanks for the catch!
>
> Could you split these two into two patches?
>
> how about using:
> cur = end - min_t(erofs_off_t, offend + end - map->m_la, end)
> for this?
>
> since cur and end are all [0, PAGE_SIZE - 1] for now, and
> folio_size() later.
OK. I will split the patch.
Sorry, I cannot understand what 'offend' refers to or what you mean.
Could you please describe it more clearly?
>> - Expected result:
>> - cur = 0
>> - Actual result:
>> - cur = 0x370
>>
>> Signed-off-by: Chunhai Guo <guochunhai@...o.com>
>> ---
>> fs/erofs/zdata.c | 13 ++++++++++---
>> 1 file changed, 10 insertions(+), 3 deletions(-)
>>
>> diff --git a/fs/erofs/zdata.c b/fs/erofs/zdata.c
>> index 5f1890e309c6..6abbd4510076 100644
>> --- a/fs/erofs/zdata.c
>> +++ b/fs/erofs/zdata.c
>> @@ -972,7 +972,8 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
>> struct erofs_map_blocks *const map = &fe->map;
>> const loff_t offset = page_offset(page);
>> bool tight = true, exclusive;
>> - unsigned int cur, end, spiltted;
>> + erofs_off_t cur, end;
>> + unsigned int spiltted;
>> int err = 0;
>>
>> /* register locked file pages as online pages in pack */
>> @@ -1035,7 +1036,7 @@ static int z_erofs_do_read_page(struct z_erofs_decompress_frontend *fe,
>> */
>> tight &= (fe->mode > Z_EROFS_PCLUSTER_FOLLOWED_NOINPLACE);
>>
>> - cur = end - min_t(unsigned int, offset + end - map->m_la, end);
>> + cur = end - min_t(erofs_off_t, offset + end - map->m_la, end);
>> if (!(map->m_flags & EROFS_MAP_MAPPED)) {
>> zero_user_segment(page, cur, end);
>> goto next_part;
>> @@ -1841,7 +1842,7 @@ static void z_erofs_pcluster_readmore(struct z_erofs_decompress_frontend *f,
>> }
>>
>> cur = map->m_la + map->m_llen - 1;
>> - while (cur >= end) {
>> + while ((cur >= end) && (cur < i_size_read(inode))) {
>> pgoff_t index = cur >> PAGE_SHIFT;
>> struct page *page;
>>
>> @@ -1876,6 +1877,12 @@ static int z_erofs_read_folio(struct file *file, struct folio *folio)
>> trace_erofs_readpage(page, false);
>> f.headoffset = (erofs_off_t)page->index << PAGE_SHIFT;
>>
>> + /* when trying to read beyond EOF, return zero page directly */
>> + if (f.headoffset >= i_size_read(inode)) {
>> + zero_user_segment(page, 0, PAGE_SIZE);
>> + return 0;
>> + }
> Do we really need to optimize this rare case?
> I guess the following readmore fix is enough, thoughts?
> Thanks,
> Gao Xiang
Since the code is constantly being updated and someone may hit this
bug again, I think we had better fix it if possible.
Thanks.
Guo Chunhai