Message-ID: <8D79B547-D606-4975-A79A-AEA65684F3A0@tuxera.com>
Date: Wed, 17 Nov 2021 23:29:47 +0000
From: Anton Altaparmakov <anton@...era.com>
To: Geert Uytterhoeven <geert@...ux-m68k.org>
CC: Michael Ellerman <mpe@...erman.id.au>,
Guenter Roeck <linux@...ck-us.net>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
"linux-ntfs-dev@...ts.sourceforge.net"
<linux-ntfs-dev@...ts.sourceforge.net>
Subject: Re: Linux 5.16-rc1
Hi Geert,
> On 17 Nov 2021, at 20:18, Geert Uytterhoeven <geert@...ux-m68k.org> wrote:
> Hi Michael,
> On Tue, Nov 16, 2021 at 12:39 PM Michael Ellerman <mpe@...erman.id.au> wrote:
>> Guenter Roeck <linux@...ck-us.net> writes:
>>> fs/ntfs/aops.c: In function 'ntfs_write_mst_block':
>>> fs/ntfs/aops.c:1311:1: error: the frame size of 2240 bytes is larger than 2048 bytes
>>>
>>> Bisect points to commit f22969a6604 ("powerpc/64s: Default to 64K pages for
>>> 64 bit book3s"), and reverting that commit does fix the problem.
>>> The problem is
>>> ntfs_inode *locked_nis[PAGE_SIZE / NTFS_BLOCK_SIZE];
>>>
>>> I don't see the problem in next-20211115, but I don't immediately see how it was fixed there.
>>
>> I still see it in next.
>>
>> I don't know what to do about it though. The NTFS folks presumably don't
>> want to rewrite their code to avoid a warning on powerpc, we have no
>> real interest in NTFS, and definitely no expertise in the NTFS code. We
>> don't want to revert the 64K change, and even if we did the warning
>> would still be there for other 64K page configs.
>
> Do you have a pointer to that discussion? I couldn't find it.
>
> Why does the ntfs code have a need to allocate an array
> (regardless whether it's on the stack or not) with a size related to
> PAGE_SIZE? Shouldn't the array's size be related to a parameter of
> the file system on the storage device, instead of a parameter of the
> system it is running on?
Yes and no. (-: As always the devil is in the details.
What you would know of as the inode table in a traditional Linux file system is, in NTFS, an actual file on disk, and we access it via the page cache. What we need here is an array to store pointers to the in-memory inodes that correspond to the on-disk inodes in the inode table page being written out.

We used to size this array using the per-filesystem parameters, but because those are variable (e.g. an inode can be 1024 bytes or it can be 4096 bytes, depending on your sector size and on how you format the volume in Windows), the array was dynamically sized. Variable-length arrays on the stack were at some point frowned upon, so a commit was made to change the dynamically sized array to a statically/constant sized one. To do that, the maximum possible size had to be used, which means using NTFS_BLOCK_SIZE (a constant 512 bytes) as the inode size. The number of inodes that can be present in a page is then at most PAGE_SIZE / NTFS_BLOCK_SIZE, and that is the size of the array.

The correct array size would be PAGE_SIZE / vol->mft_record_size. With the MFT record size normally being 1024 bytes, or usually 4096 bytes on 4k-sector disks, the array we actually need is normally either half or an eighth of the size we allocate with PAGE_SIZE / NTFS_BLOCK_SIZE, but there is no helping that if we want a statically/constant sized array.
Unless that particular kernel configuration genuinely cannot afford slightly larger stack frames, I think the solution would be simply to raise the frame-size warning threshold, perhaps to 3 KiB instead of 2 KiB or thereabouts, and the warning would not happen any more.
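For reference, that threshold is the kernel's CONFIG_FRAME_WARN option (defined in lib/Kconfig.debug, which feeds GCC's -Wframe-larger-than=); raising it as suggested would be a one-line config fragment like the following (the 3072 value is just the "maybe 3kiB" from above, not a tested recommendation):

```text
CONFIG_FRAME_WARN=3072
```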
Best regards,
Anton
--
Anton Altaparmakov <anton at tuxera.com> (replace at with @)
Lead in File System Development, Tuxera Inc., http://www.tuxera.com/
Linux NTFS maintainer