Message-ID: <40f2f891-bd09-4600-9540-b6e6fa977958@huawei.com>
Date: Wed, 7 May 2025 14:15:10 +0800
From: Hongbo Li <lihongbo22@...wei.com>
To: Gao Xiang <hsiangkao@...ux.alibaba.com>, <xiang@...nel.org>,
<chao@...nel.org>, <zbestahu@...il.com>, <jefflexu@...ux.alibaba.com>
CC: <dhavale@...gle.com>, <linux-erofs@...ts.ozlabs.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3] erofs: fix file handle encoding for 64-bit NIDs
On 2025/5/7 10:42, Gao Xiang wrote:
>
>
> On 2025/5/7 09:53, Hongbo Li wrote:
>>
>>
>> On 2025/5/6 23:10, Gao Xiang wrote:
>>> Hi Hongbo,
>>>
>>> On 2025/4/29 21:42, Hongbo Li wrote:
>>>> In erofs, the inode number carries the location information of
>>>> files. The default encode_fh uses ino32, which loses part of that
>>>> information when the file is too big. So we need internal helpers
>>>> to encode the file handle.
>>>
>>> EROFS uses NID to indicate the on-disk inode offset, which can
>>> exceed 32 bits. However, the default encode_fh uses ino32, so it
>>> doesn't work if the image is larger than 128GiB.
>>>
>> Thanks for helping me correct my description.
>>
>> Here, an image larger than 128GiB won't necessarily make a NID
>> exceed 32 bits. It requires a 128GiB file inside, and then the NID
>> of the second file may exceed U32 during formatting. So can we
>> change it to "However, the default encode_fh uses the ino32, thus
>> it may not work if there is a file larger than 128GiB." ?
>
> Why? Currently EROFS doesn't arrange inode metadata
> together, but places it close to its data (or its directory)
> when possible, for data locality.
>
> So NIDs can exceed 32 bits for images larger than
> 128 GiB.
>
Ok, I see your point, and you are right. It doesn't have to be a 128GiB
file, but it is easy to construct this kind of EROFS image with a large
file. For example:

  mkfs.erofs -d7 --tar=f --clean=data foo.erofs 128g-file.tar
      # the nid of 128g-file is 39
  mkfs.erofs -d7 --tar=f --incremental=data foo.erofs 1b-file.tar
      # the nid of 1b-file is 4294967425
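
For reference, the second NID already exceeds U32: 4294967425 = 2^32 + 129,
so the default 32-bit inode-number encoding would silently truncate it. A
tiny user-space check (illustrative only, not part of the patch):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t nid = 4294967425ULL;   /* nid of 1b-file above */
          uint32_t ino32 = (uint32_t)nid; /* what an ino32 handle keeps */

          /* 4294967425 = 2^32 + 129, so the low 32 bits collapse to 129 */
          printf("nid=%llu truncated to ino32=%u\n",
                 (unsigned long long)nid, ino32);
          return 0;
  }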
Thank you again for your review; I will send the next version of the
patch later.
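
For anyone following along, a rough sketch of the direction (illustrative
only, not the actual patch; the fid layout and the type value below are my
assumptions) is to pack the 64-bit NID plus the inode generation into the
file handle instead of relying on the default 32-bit inode number:

  /* assumes <linux/exportfs.h> and erofs internal.h (EROFS_I, nid) */
  static int erofs_encode_fh(struct inode *inode, u32 *fh, int *max_len,
                             struct inode *parent)
  {
          const int len = 3;      /* u64 nid + u32 generation, in u32 words */

          /* parent encoding is omitted in this sketch */
          if (parent)
                  return FILEID_INVALID;

          if (*max_len < len) {
                  *max_len = len;
                  return FILEID_INVALID;
          }

          fh[0] = upper_32_bits(EROFS_I(inode)->nid);
          fh[1] = lower_32_bits(EROFS_I(inode)->nid);
          fh[2] = inode->i_generation;
          *max_len = len;
          return 0x81;            /* hypothetical fid type for a 64-bit nid */
  }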
Thanks,
Hongbo
> Thanks,
> Gao Xiang
>
>>
>> Thanks,
>> Hongbo
>>
>