Message-ID: <CAHbLzkonVj63+up4-BCPm29yjaf_29asMFJHpXiZp96UjGGNSg@mail.gmail.com>
Date:   Fri, 18 Mar 2022 11:04:29 -0700
From:   Yang Shi <shy828301@...il.com>
To:     Matthew Wilcox <willy@...radead.org>
Cc:     Dave Chinner <david@...morbit.com>, vbabka@...e.cz,
        kirill.shutemov@...ux.intel.com, linmiaohe@...wei.com,
        songliubraving@...com, riel@...riel.com, ziy@...dia.com,
        akpm@...ux-foundation.org, tytso@....edu, adilger.kernel@...ger.ca,
        darrick.wong@...cle.com, linux-mm@...ck.org,
        linux-fsdevel@...r.kernel.org, linux-ext4@...r.kernel.org,
        linux-xfs@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [v2 PATCH 0/8] Make khugepaged collapse readonly FS THP more consistent

On Thu, Mar 17, 2022 at 8:38 PM Matthew Wilcox <willy@...radead.org> wrote:
>
> On Fri, Mar 18, 2022 at 12:29:48PM +1100, Dave Chinner wrote:
> > On Thu, Mar 17, 2022 at 04:48:19PM -0700, Yang Shi wrote:
> > >
> > > Changelog
> > > v2: * Collected reviewed-by tags from Miaohe Lin.
> > >     * Fixed build error for patch 4/8.
> > >
> > > The readonly FS THP relies on khugepaged to collapse THP for suitable
> > > vmas.  But it is kind of "random luck" for khugepaged to see the
> > > readonly FS vmas (see report:
> > > https://lore.kernel.org/linux-mm/00f195d4-d039-3cf2-d3a1-a2c88de397a0@suse.cz/)
> > > since currently the vmas are registered to khugepaged when:
> > >   - Anon huge pmd page fault
> > >   - VMA merge
> > >   - MADV_HUGEPAGE
> > >   - Shmem mmap
> > >
> > > If the above conditions are not met, then even though khugepaged is
> > > enabled, it won't see readonly FS vmas at all.  MADV_HUGEPAGE could
> > > be specified
> > > explicitly to tell khugepaged to collapse this area, but when khugepaged
> > > mode is "always" it should scan suitable vmas as long as VM_NOHUGEPAGE
> > > is not set.
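
For concreteness, here is a minimal userspace illustration of the
MADV_HUGEPAGE escape hatch described above (the file path is
hypothetical and error handling is omitted for brevity):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/*
 * Map a read-only executable file and hint khugepaged explicitly with
 * MADV_HUGEPAGE.  Without this hint, registration currently depends on
 * one of the events listed above happening to the vma.
 */
int main(void)
{
	struct stat st;
	int fd = open("/usr/bin/cat", O_RDONLY);	/* hypothetical file */

	fstat(fd, &st);
	void *p = mmap(NULL, st.st_size, PROT_READ | PROT_EXEC,
		       MAP_PRIVATE, fd, 0);
	/* Explicitly register this range with khugepaged. */
	madvise(p, st.st_size, MADV_HUGEPAGE);

	/* ... run the workload; khugepaged may later collapse any
	 * suitably aligned 2MB subranges ... */
	munmap(p, st.st_size);
	close(fd);
	return 0;
}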
> > >
> > > So make sure readonly FS vmas are registered to khugepaged to make the
> > > behavior more consistent.
> > >
> > > Registering the vmas in the mmap path seems preferable from a
> > > performance point of view, since the page fault path is definitely a
> > > hot path.
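
As a rough sketch of that idea (not the actual patch:
khugepaged_enabled(), hugepage_vma_check() and khugepaged_enter() are
real helpers of this era, but the wrapper below, its placement in the
mmap path, and the helpers' visibility are all illustrative):

#include <linux/mm.h>
#include <linux/khugepaged.h>

/*
 * Illustrative only: register a file-backed vma with khugepaged when
 * it is mapped, instead of waiting for a page fault or an madvise()
 * call to do it.
 */
static void khugepaged_register_file_vma(struct vm_area_struct *vma)
{
	if (!khugepaged_enabled())
		return;
	if (!hugepage_vma_check(vma, vma->vm_flags))
		return;
	/* Queue vma->vm_mm so a later khugepaged scan can collapse it. */
	khugepaged_enter(vma, vma->vm_flags);
}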
> > >
> > >
> > > Patches 1 ~ 7 are minor bug fixes, cleanups and preparation patches.
> > > Patch 8 converts ext4 and xfs.  We may need to convert more
> > > filesystems, but I'd like to hear some comments before doing that.
> >
> > After reading through the patchset, I have no idea what this is even
> > doing or enabling. I can't comment on the last patch and its effect
> > on XFS because there's no high level explanation of the
> > functionality or feature to provide me with the context in which I
> > should be reviewing this patchset.
> >
> > I understand this has something to do with hugepages, but there's no
> > explanation of exactly where huge pages are going to be used in the
> > filesystem, what the problems with khugepaged and filesystems are
> > that this apparently solves, what constraints it places on
> > filesystems to enable huge pages to be used, etc.
> >
> > I'm guessing that the result is that we'll suddenly see huge pages
> > in the page cache for some undefined set of files in some undefined
> > set of workloads. But that doesn't help me understand any of the
> >   impacts it may have, e.g.:
> >
> > - how does this relate to the folio conversion and use of large
> >   pages in the page cache?
> > - why do we want two completely separate large page mechanisms in
> >   the page cache?
> > - why is this limited to "read only VMAs" and how does the
> >   filesystem actually ensure that the VMAs are read only?
> > - what happens if we have a file that has huge pages mapped into the
> >   page cache via read-only VMAs and then has write() called on it via
> >   a different file descriptor, so that we need to dirty the page cache
> >   that has huge pages in it?
> >
> > I've got a lot more questions, but to save me having to ask them,
> > how about you explain what this new functionality actually does, why
> > we need to support it, and why it is better than the fully writeable
> > huge page support via folios that we already have in the works...
>
> Back in Puerto Rico when we set up the THP Cabal, we had two competing
> approaches for using larger pages in the page cache; mine (which turned
> into folios after I realised that THPs were the wrong model) and Song
> Liu's CONFIG_READ_ONLY_THP_FOR_FS.  Song's patches were ready earlier
> (2019) and were helpful in unveiling some of the problems which needed
> to be fixed.  The filesystem never sees the large pages because they're
> only used for read-only files, and the pages are already Uptodate at
> the point they're collapsed into a THP.  So there are no changes
> needed to the filesystem.
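
To make the "read-only" part concrete: khugepaged itself enforces it
in its vma eligibility check.  The snippet below paraphrases
mm/khugepaged.c from this era and should not be read as verbatim:

/*
 * A file-backed vma is only eligible for ROTHP collapse when the
 * feature is compiled in, the mapping is executable, and nobody has
 * the inode open for write -- which is how "read-only" is enforced.
 */
if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) && vma->vm_file &&
    !inode_is_open_for_write(vma->vm_file->f_inode) &&
    (vm_flags & VM_EXEC))
	return true;	/* khugepaged may collapse this mapping */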
>
> This collection of patches I'm agnostic about.  As far as I can
> tell, they're a way to improve how often the ROTHP feature gets used.
> That doesn't really interest me since we're so close to having proper
> support for large pages/folios in filesystems.  So I'm not particularly
> interested in improving a feature that we're about to delete.  But I also
> don't like it that the filesystem now has to do something; the ROTHP
> feature is supposed to be completely transparent from the point of view
> of the filesystem.

I agree that once page cache huge pages are fully supported,
READ_ONLY_THP_FOR_FS could be deprecated. But this patchset actually
makes khugepaged collapse file THP more consistently: it guarantees
the THP will be collapsed as long as file THP is supported and
configured properly and there are suitable file vmas, which is not
guaranteed by the current code. So IMHO it should still be useful even
after READ_ONLY_THP_FOR_FS is gone.

