Message-ID: <ZnyAD24AQFzlKAhD@casper.infradead.org>
Date: Wed, 26 Jun 2024 21:54:39 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Gavin Shan <gshan@...hat.com>
Cc: David Hildenbrand <david@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-fsdevel@...r.kernel.org, linux-kernel@...r.kernel.org,
	djwong@...nel.org, hughd@...gle.com, torvalds@...ux-foundation.org,
	zhenyzha@...hat.com, shan.gavin@...il.com
Subject: Re: [PATCH 0/4] mm/filemap: Limit page cache size to that supported
 by xarray

On Wed, Jun 26, 2024 at 10:37:00AM +1000, Gavin Shan wrote:
> On 6/26/24 5:05 AM, David Hildenbrand wrote:
> > On 25.06.24 20:58, Andrew Morton wrote:
> > > On Tue, 25 Jun 2024 20:51:13 +0200 David Hildenbrand <david@...hat.com> wrote:
> > > 
> > > > > I could split them and feed 1&2 into 6.10-rcX and 3&4 into 6.11-rc1.  A
> > > > > problem with this approach is that we're putting a basically untested
> > > > > combination into -stable: 1&2 might have bugs which were accidentally
> > > > > fixed in 3&4.  A way to avoid this is to add cc:stable to all four
> > > > > patches.
> > > > > 
> > > > > What are your thoughts on this matter?
> > > > 
> > > > Especially 4 should also be CC stable, so likely we should just do it
> > > > for all of them.
> > > 
> > > Fine.  A Fixes: for 3 & 4 would be good.  Otherwise we're potentially
> > > asking for those to be backported further than 1 & 2, which seems
> > > wrong.
> > 
> > 4 is a shmem fix, which likely dates back a bit longer.
> > 
> > > 
> > > Then again, by having different Fixes: in the various patches we're
> > > suggesting that people split the patch series apart as they slot things
> > > into the indicated places.  In other words, it's not a patch series at
> > > all - it's a sprinkle of independent fixes.  Are we OK thinking of it
> > > in that fashion?
> > 
> > The common theme is "pagecache cannot handle > order-11": #1-3 tackle "ordinary" file THP, #4 tackles shmem THP.
> > 
> > So I'm not sure we should be splitting it apart. It's just that shmem THP arrived before file THP :)
> > 
> 
> I rechecked the history; it's a bit hard to pick a precise Fixes: tag for PATCH[4].
> Please let me know if you have a better one.
> 
> #4
>   Fixes: 800d8c63b2e9 ("shmem: add huge pages support")
>   Cc: stable@...nel.org # v4.10+
>   Fixes: 552446a41661 ("shmem: Convert shmem_add_to_page_cache to XArray")
>   Cc: stable@...nel.org # v4.20+
> #3
>   Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
>   Cc: stable@...nel.org # v5.18+
> #2
>   Fixes: 4687fdbb805a ("mm/filemap: Support VM_HUGEPAGE for file mappings")
>   Cc: stable@...nel.org # v5.18+
> #1
>   Fixes: 793917d997df ("mm/readahead: Add large folio readahead")
>   Cc: stable@...nel.org # v5.18+

I actually think it's this:

commit 6b24ca4a1a8d
Author: Matthew Wilcox (Oracle) <willy@...radead.org>
Date:   Sat Jun 27 22:19:08 2020 -0400

    mm: Use multi-index entries in the page cache

    We currently store large folios as 2^N consecutive entries.  While this
    consumes rather more memory than necessary, it also turns out to be buggy.
    A writeback operation which starts within a tail page of a dirty folio will
    not write back the folio as the xarray's dirty bit is only set on the
    head index.  With multi-index entries, the dirty bit will be found no
    matter where in the folio the operation starts.

    This does end up simplifying the page cache slightly, although not as
    much as I had hoped.

    Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
    Reviewed-by: William Kucharski <william.kucharski@...cle.com>

Before this, we could split an arbitrary-size folio to order 0.  After
it, we're limited to whatever the xarray allows us to split.
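
For illustration, here is a rough, untested sketch of the two xarray
operations at issue: storing a large folio as a single multi-index
entry, and splitting that entry back down to order 0.  The helper names
(store_large_folio(), split_to_order0()) are invented for this sketch;
the xarray calls themselves (xas_set_order(), xas_store(),
xas_split_alloc(), xas_split()) are the upstream API being discussed,
and the order limit in the comment is the "> order-11" constraint
mentioned earlier in the thread.

#include <linux/pagemap.h>
#include <linux/xarray.h>

/* Invented helper: insert one folio as a single multi-index entry. */
static int store_large_folio(struct address_space *mapping,
			     struct folio *folio, pgoff_t index)
{
	XA_STATE(xas, &mapping->i_pages, index);

	/*
	 * Cover all 2^order slots with one entry, so marks (e.g. the
	 * dirty bit) are found no matter which sub-index is probed.
	 */
	xas_set_order(&xas, index, folio_order(folio));
	xas_lock_irq(&xas);
	xas_store(&xas, folio);
	xas_unlock_irq(&xas);
	return xas_error(&xas);
}

/* Invented helper: split a multi-index entry into order-0 entries. */
static int split_to_order0(struct address_space *mapping,
			   struct folio *folio, pgoff_t index)
{
	XA_STATE(xas, &mapping->i_pages, index);
	unsigned int order = folio_order(folio);

	/*
	 * Preallocate the nodes the split will need; this is the step
	 * that cannot handle arbitrarily large orders (with the default
	 * XA_CHUNK_SHIFT of 6, anything above order-11 is out).
	 */
	xas_split_alloc(&xas, folio, order, GFP_KERNEL);
	if (xas_error(&xas))
		return xas_error(&xas);

	xas_lock_irq(&xas);
	xas_split(&xas, folio, order);
	xas_unlock_irq(&xas);
	return 0;
}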
