Message-ID: <PUZP153MB0635DBD4E63A1A90C25F67ADBE15A@PUZP153MB0635.APCP153.PROD.OUTLOOK.COM>
Date: Wed, 16 Aug 2023 16:49:57 +0000
From: Saurabh Singh Sengar <ssengar@...rosoft.com>
To: Zach O'Keefe <zokeefe@...gle.com>,
Matthew Wilcox <willy@...radead.org>
CC: Dan Williams <dan.j.williams@...el.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Yang Shi <shy828301@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: RE: [EXTERNAL] [PATCH] mm/thp: fix "mm: thp: kill
__transhuge_page_enabled()"
> -----Original Message-----
> From: Zach O'Keefe <zokeefe@...gle.com>
> Sent: Tuesday, August 15, 2023 5:35 AM
> To: Matthew Wilcox <willy@...radead.org>
> Cc: Saurabh Singh Sengar <ssengar@...rosoft.com>; Dan Williams
> <dan.j.williams@...el.com>; linux-mm@...ck.org; Yang Shi
> <shy828301@...il.com>; linux-kernel@...r.kernel.org
> Subject: Re: [EXTERNAL] [PATCH] mm/thp: fix "mm: thp: kill
> __transhuge_page_enabled()"
>
> On Mon, Aug 14, 2023 at 12:06 PM Matthew Wilcox <willy@...radead.org>
> wrote:
> >
> > On Mon, Aug 14, 2023 at 11:47:50AM -0700, Zach O'Keefe wrote:
> > > Willy -- I'm not up-to-date on what is happening on the THP-fs front.
> > > Should we be checking for a ->huge_fault handler here?
> >
> > Oh, thank goodness, I thought you were cc'ing me to ask a DAX question ...
>
> :)
>
> > From a large folios perspective, filesystems do not implement a
> > special handler. They call filemap_fault() (directly or indirectly)
> > from their
> > ->fault handler. If there is already a folio in the page cache which
> > satisfies this fault, we insert it into the page tables (no matter
> > what size it is). If there is no folio, we call readahead to populate
> > that index in the page cache, and probably some other indices around it.
> > That's do_sync_mmap_readahead().
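
For reference, a minimal sketch of the wiring described above, modeled on
how filesystems such as ext4 hook this up (the "myfs_*" names are
placeholders; filemap_fault() and filemap_map_pages() are the real kernel
symbols):

#include <linux/fs.h>
#include <linux/mm.h>

static vm_fault_t myfs_fault(struct vm_fault *vmf)
{
	/*
	 * filemap_fault() looks up the folio covering vmf->pgoff in the
	 * page cache; on a miss it triggers do_sync_mmap_readahead() to
	 * populate that index (and neighbouring ones) before mapping it.
	 */
	return filemap_fault(vmf);
}

static const struct vm_operations_struct myfs_file_vm_ops = {
	.fault		= myfs_fault,
	.map_pages	= filemap_map_pages,	/* lockless fault-around */
};

static int myfs_file_mmap(struct file *file, struct vm_area_struct *vma)
{
	vma->vm_ops = &myfs_file_vm_ops;
	return 0;
}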
> >
> > If you look at that, you'll see that we check the VM_HUGEPAGE flag,
> > and if set we align to a PMD boundary and read two PMD-size pages (so
> > that we can do async readahead for the second page, if we're doing a
> > linear scan).
> > If the VM_HUGEPAGE flag isn't set, we'll use the readahead algorithm
> > to decide how large the folio should be that we're reading into; if
> > it's a random read workload, we'll stick to order-0 pages, but if
> > we're getting good hit rate from the linear scan, we'll increase the
> > size (although we won't go past PMD size).
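
A condensed paraphrase of that VM_HUGEPAGE branch in
do_sync_mmap_readahead() (mm/filemap.c); the real source is authoritative,
this only sketches the sizing logic:

	/* inside do_sync_mmap_readahead(struct vm_fault *vmf): */
	if (vmf->vma->vm_flags & VM_HUGEPAGE) {
		/* Align the faulting index down to a PMD boundary */
		ractl._index &= ~((unsigned long)HPAGE_PMD_NR - 1);
		/* Two PMD-sized folios: the second feeds async readahead */
		ra->size = HPAGE_PMD_NR * 2;
		ra->async_size = HPAGE_PMD_NR;
		page_cache_ra_order(&ractl, ra, HPAGE_PMD_ORDER);
		return fpin;
	}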
> >
> > There's also the ->map_pages() optimisation which handles page faults
> > locklessly, and will fall back to ->fault() if there's even a light
> > breeze. I don't think that's of any particular use in answering your
> > question, so I'm not going into details about it.
> >
> > I'm not sure I understand the code that's being modified well enough
> > to be able to give you a straight answer to your question, but
> > hopefully this is helpful to you.
>
> Thank you, this was great info. I had thought, incorrectly, that large folio
> work would eventually tie into that ->huge_fault() handler (which should
> perhaps be renamed dax_huge_fault()?)
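
For context, DAX is where ->huge_fault is actually wired up today; a
condensed sketch modeled on ext4/xfs around v6.1 (the "myfs_*" names and
myfs_iomap_ops are placeholders; dax_iomap_fault() and the vm_ops hooks
are real):

#include <linux/dax.h>
#include <linux/iomap.h>
#include <linux/mm.h>
#include <linux/pfn_t.h>

extern const struct iomap_ops myfs_iomap_ops;	/* placeholder */

static vm_fault_t myfs_dax_huge_fault(struct vm_fault *vmf,
				      enum page_entry_size pe_size)
{
	pfn_t pfn;

	/* Map the backing pfn directly, at PTE/PMD/PUD granularity */
	return dax_iomap_fault(vmf, pe_size, &pfn, NULL, &myfs_iomap_ops);
}

static vm_fault_t myfs_dax_fault(struct vm_fault *vmf)
{
	return myfs_dax_huge_fault(vmf, PE_SIZE_PTE);
}

static const struct vm_operations_struct myfs_dax_vm_ops = {
	.fault		= myfs_dax_fault,
	.huge_fault	= myfs_dax_huge_fault,
};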
>
> If that's the case, then faulting file-backed, non-DAX memory as
> (pmd-mapped) THPs isn't supported at all, and no fault lies with the
> aforementioned patches.
>
> Saurabh, perhaps you can elaborate on your use case a bit more, and how
> that anonymous check broke you?
Zach,
We have an out-of-tree driver that maps huge pages through a file handle
and relies on ->huge_fault. This worked on 5.19 kernels, but 6.1 changed
the behaviour.
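
For illustration only, the general shape of such a handler, with
hypothetical "mydrv_*" names (vmf_insert_pfn_pmd() and the ->huge_fault
hook are the real kernel interfaces):

#include <linux/huge_mm.h>
#include <linux/mm.h>
#include <linux/pfn_t.h>

static vm_fault_t mydrv_huge_fault(struct vm_fault *vmf,
				   enum page_entry_size pe_size)
{
	unsigned long pfn;

	if (pe_size != PE_SIZE_PMD)
		return VM_FAULT_FALLBACK;

	/* mydrv_pfn_for() is a stand-in for the driver's own lookup */
	pfn = mydrv_pfn_for(vmf->vma, vmf->address);

	/* Install a PMD-sized mapping straight from the fault handler */
	return vmf_insert_pfn_pmd(vmf, pfn_to_pfn_t(pfn),
				  vmf->flags & FAULT_FLAG_WRITE);
}

static const struct vm_operations_struct mydrv_vm_ops = {
	.huge_fault	= mydrv_huge_fault,
};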
I don't think restoring the earlier fault-path behaviour for huge pages
should impact the kernel negatively.
- Saurabh
>
> Best,
> Zach