Message-ID: <ZNrh6w9ICu4rMrhV@casper.infradead.org>
Date: Tue, 15 Aug 2023 03:24:43 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Zach O'Keefe <zokeefe@...gle.com>
Cc: Saurabh Singh Sengar <ssengar@...rosoft.com>,
Dan Williams <dan.j.williams@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Yang Shi <shy828301@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [EXTERNAL] [PATCH] mm/thp: fix "mm: thp: kill
__transhuge_page_enabled()"
On Mon, Aug 14, 2023 at 05:04:47PM -0700, Zach O'Keefe wrote:
> > From a large folios perspective, filesystems do not implement a special
> > handler. They call filemap_fault() (directly or indirectly) from their
> > ->fault handler. If there is already a folio in the page cache which
> > satisfies this fault, we insert it into the page tables (no matter what
> > size it is). If there is no folio, we call readahead to populate that
> > index in the page cache, and probably some other indices around it.
> > That's do_sync_mmap_readahead().
> >
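Concretely, a non-DAX filesystem typically wires this up like so
("myfs" is just a placeholder here, not any real filesystem's code):

	static vm_fault_t myfs_fault(struct vm_fault *vmf)
	{
		/* No special large-folio handler: filemap_fault() finds
		 * (or reads in) a folio of whatever size, and the generic
		 * fault path takes care of mapping it. */
		return filemap_fault(vmf);
	}

	static const struct vm_operations_struct myfs_vm_ops = {
		.fault		= myfs_fault,
		.map_pages	= filemap_map_pages,
		.page_mkwrite	= filemap_page_mkwrite,
	};
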
> > If you look at that, you'll see that we check the VM_HUGEPAGE flag, and
> > if set we align to a PMD boundary and read two PMD-size pages (so that we
> > can do async readahead for the second page, if we're doing a linear scan).
> > If the VM_HUGEPAGE flag isn't set, we'll use the readahead algorithm to
> > decide how large the folio should be that we're reading into; if it's a
> > random read workload, we'll stick to order-0 pages, but if we're getting
> > good hit rate from the linear scan, we'll increase the size (although
> > we won't go past PMD size).
> >
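Schematically, the VM_HUGEPAGE branch in do_sync_mmap_readahead() does
something like this (simplified; see mm/filemap.c for the real thing):

	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
		/* align the readahead window to a PMD boundary ... */
		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
		ra->size = HPAGE_PMD_NR;
		/* ... and fetch a second PMD-sized folio so there is
		 * something to kick off async readahead from later */
		if (!(vmf->vma->vm_flags & VM_RAND_READ))
			ra->size *= 2;
		ra->async_size = HPAGE_PMD_NR;
		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
		return fpin;
	}
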
> > There's also the ->map_pages() optimisation which handles page faults
> > locklessly, and will fail back to ->fault() if there's even a light
> > breeze. I don't think that's of any particular use in answering your
> > question, so I'm not going into details about it.
> >
> > I'm not sure I understand the code that's being modified well enough to
> > be able to give you a straight answer to your question, but hopefully
> > this is helpful to you.
>
> Thank you, this was great info. I had thought, incorrectly, that large
> folio work would eventually tie into that ->huge_fault() handler
> (should that be dax_huge_fault()?)
>
> If that's the case, then faulting file-backed, non-DAX memory as
> (pmd-mapped-)THPs isn't supported at all, and no fault lies with the
> aforementioned patches.
Ah, wait, hang on. You absolutely can get a PMD mapping by calling into
->fault. Look at how finish_fault() works:
	if (pmd_none(*vmf->pmd)) {
		if (PageTransCompound(page)) {
			ret = do_set_pmd(vmf, page);
			if (ret != VM_FAULT_FALLBACK)
				return ret;
		}

		if (vmf->prealloc_pte)
			pmd_install(vma->vm_mm, vmf->pmd, &vmf->prealloc_pte);

So if we find a large folio that is PMD mappable, and there's nothing
at vmf->pmd, we install a PMD-sized mapping at that spot. If that
fails, we install the preallocated PTE table at vmf->pmd and continue by
trying to set one or more PTEs to satisfy this page fault.
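do_set_pmd() itself is picky: it returns VM_FAULT_FALLBACK unless both
the VMA and the folio line up.  Roughly (a sketch; the real thing also
does the locking, the accounting and the actual entry construction):

	vm_fault_t do_set_pmd(struct vm_fault *vmf, struct page *page)
	{
		unsigned long haddr = vmf->address & HPAGE_PMD_MASK;

		/* the VMA must span a whole, suitably aligned PMD here */
		if (!transhuge_vma_suitable(vmf->vma, haddr))
			return VM_FAULT_FALLBACK;
		/* ... and the folio must be exactly PMD-sized */
		if (compound_order(compound_head(page)) != HPAGE_PMD_ORDER)
			return VM_FAULT_FALLBACK;

		/* otherwise build the pmd_t, deposit the preallocated
		 * page table and install the huge mapping (elided) */
		return 0;
	}
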
So why, you may be asking, do we have ->huge_fault?  Well, you should
ask the clown who did commit b96375f74a6d ... in fairness to me,
finish_fault() did not exist at the time, and the ability to return
a PMD-sized page was added later.