Message-ID: <33d9e4b3-4455-4431-81dc-e621cf383c22@redhat.com>
Date: Tue, 25 Jun 2024 20:51:13 +0200
From: David Hildenbrand <david@...hat.com>
To: Andrew Morton <akpm@...ux-foundation.org>, Gavin Shan <gshan@...hat.com>
Cc: linux-mm@...ck.org, linux-fsdevel@...r.kernel.org,
 linux-kernel@...r.kernel.org, djwong@...nel.org, willy@...radead.org,
 hughd@...gle.com, torvalds@...ux-foundation.org, zhenyzha@...hat.com,
 shan.gavin@...il.com
Subject: Re: [PATCH 0/4] mm/filemap: Limit page cache size to that supported
 by xarray

On 25.06.24 20:37, Andrew Morton wrote:
> On Tue, 25 Jun 2024 19:06:42 +1000 Gavin Shan <gshan@...hat.com> wrote:
> 
>> Currently, the xarray can't support arbitrary page cache sizes. More
>> details can be found in the WARN_ON() statement in xas_split_alloc().
>> In our test, whose code is attached below, we hit the WARN_ON() on an
>> ARM64 system where the base page size is 64KB and the huge page size
>> is 512MB. The issue was reported a long time ago, and some discussion
>> of it can be found in [1].
>>
>> [1] https://www.spinics.net/lists/linux-xfs/msg75404.html
>>
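A quick sanity check of the numbers above, as a minimal standalone sketch
(not kernel code): it assumes the default xarray fan-out of XA_CHUNK_SHIFT
== 6 (i.e. !CONFIG_BASE_SMALL) and that xas_split_alloc() refuses to split
entries beyond roughly two xarray levels; the macro names are made up for
illustration only.

/* Standalone sketch: why a 512MB PMD-sized folio on a 64KB-page ARM64
 * kernel exceeds what xas_split_alloc() is able to split. */
#include <stdio.h>

#define XA_CHUNK_SHIFT_DFL   6                        /* 2^6 = 64 slots per xarray node */
#define XAS_SPLIT_LIMIT      (2 * XA_CHUNK_SHIFT_DFL) /* ~order 12: rough split limit */

#define PAGE_SHIFT_64K       16                       /* 64KB base page */
#define PMD_SHIFT_64K        29                       /* 512MB PMD huge page */
#define PMD_ORDER_64K        (PMD_SHIFT_64K - PAGE_SHIFT_64K)  /* order 13 */

int main(void)
{
	printf("PMD order with 64KB pages: %d, xarray split limit: ~%d\n",
	       PMD_ORDER_64K, XAS_SPLIT_LIMIT);         /* 13 > 12 -> WARN_ON() fires */
	return 0;
}
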
>> To fix the issue, we need to adjust MAX_PAGECACHE_ORDER to an order
>> supported by xarray and avoid PMD-sized page cache where needed. The
>> code changes were suggested by David Hildenbrand.
>>
>> PATCH[1] adjusts MAX_PAGECACHE_ORDER to the maximum order supported by
>>          xarray (a sketch of this clamp follows below)
>> PATCH[2-3] avoid PMD-sized page cache in the synchronous readahead path
>> PATCH[4] avoids PMD-sized page cache for shmem files if needed
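One way the clamp in PATCH[1] could look, as a rough sketch rather than
the series' actual diff: MAX_XAS_ORDER and PREFERRED_MAX_PAGECACHE_ORDER
are names assumed here for illustration, and the precise limit is whatever
xas_split_alloc() actually enforces.

/* Sketch only: cap the page cache order at what xas_split_alloc() can
 * split, on top of the usual THP-derived preference. */
#ifdef CONFIG_TRANSPARENT_HUGEPAGE
#define PREFERRED_MAX_PAGECACHE_ORDER	HPAGE_PMD_ORDER
#else
#define PREFERRED_MAX_PAGECACHE_ORDER	8
#endif

/* xas_split_alloc() can't split entries spanning more than ~two levels */
#define MAX_XAS_ORDER		(XA_CHUNK_SHIFT * 2 - 1)
#define MAX_PAGECACHE_ORDER	min(MAX_XAS_ORDER, PREFERRED_MAX_PAGECACHE_ORDER)
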
> 
> Questions on the timing of these.
> 
> 1&2 are cc:stable whereas 3&4 are not.
> 
> I could split them and feed 1&2 into 6.10-rcX and 3&4 into 6.11-rc1.  A
> problem with this approach is that we're putting a basically untested
> combination into -stable: 1&2 might have bugs which were accidentally
> fixed in 3&4.  A way to avoid this is to add cc:stable to all four
> patches.
> 
> What are your thoughts on this matter?

Especially patch 4 should also be Cc'ed to stable, so we should likely
just do it for all of them.

-- 
Cheers,

David / dhildenb

