Message-ID: <ZnrO4clYoEH_67Ur@casper.infradead.org>
Date: Tue, 25 Jun 2024 15:06:25 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Ryan Roberts <ryan.roberts@....com>
Cc: Baolin Wang <baolin.wang@...ux.alibaba.com>,
Kefeng Wang <wangkefeng.wang@...wei.com>,
Catalin Marinas <catalin.marinas@....com>,
Will Deacon <will@...nel.org>, Ard Biesheuvel <ardb@...nel.org>,
Marc Zyngier <maz@...nel.org>, James Morse <james.morse@....com>,
Andrey Ryabinin <ryabinin.a.a@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mark Rutland <mark.rutland@....com>,
David Hildenbrand <david@...hat.com>,
John Hubbard <jhubbard@...dia.com>, Zi Yan <ziy@...dia.com>,
Barry Song <21cnbao@...il.com>,
Alistair Popple <apopple@...dia.com>,
Yang Shi <shy828301@...il.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
"Yin, Fengwei" <fengwei.yin@...el.com>,
linux-arm-kernel@...ts.infradead.org, x86@...nel.org,
linuxppc-dev@...ts.ozlabs.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v6 18/18] arm64/mm: Automatically fold contpte mappings
On Tue, Jun 25, 2024 at 02:41:18PM +0100, Ryan Roberts wrote:
> On 25/06/2024 14:06, Matthew Wilcox wrote:
> > On Tue, Jun 25, 2024 at 01:41:02PM +0100, Ryan Roberts wrote:
> >> On 25/06/2024 13:37, Baolin Wang wrote:
> >>
> >> [...]
> >>
> >>>>> For other filesystems, like ext4, I did not find the logic that determines
> >>>>> what size of folio to allocate in the writable mmap() path
> >>>>
> >>>> Yes, I'd be keen to understand this too. When I was doing contpte, the page
> >>>> cache would only allocate large folios for readahead. So that's why I wouldn't have
> >>>
> >>> You mean non-large folios, right?
> >>
> >> No, I mean that at the time I wrote contpte, the policy was to allocate an
> >> order-0 folio for any write that missed in the page cache, and to allocate
> >> large folios only when doing readahead from storage into the page cache. The
> >> test that is regressing is doing writes.
> >
> > mmap() faults also use readahead.
> >
> > filemap_fault():
> >
> > 	folio = filemap_get_folio(mapping, index);
> > 	if (likely(!IS_ERR(folio))) {
> > 		if (!(vmf->flags & FAULT_FLAG_TRIED))
> > 			fpin = do_async_mmap_readahead(vmf, folio);
> >
> > which does:
> >
> > 	if (folio_test_readahead(folio)) {
> > 		fpin = maybe_unlock_mmap_for_io(vmf, fpin);
> > 		page_cache_async_ra(&ractl, folio, ra->ra_pages);
> >
> > which has been there in one form or another since 2007 (3ea89ee86a82).
>
> OK, sounds like I'm probably misremembering something I read on LWN... You're
> saying that it's been the case for a while that if we take a write fault for a
> portion of a file, then we will still end up taking the readahead path and
> allocating a large folio (filesystem permitting)? Does that apply in the case
> where the file has never been touched but only ftruncate'd, as is happening in
> this test? There is obviously no need for IO in that case, but have we always
> taken a path where a large folio may be allocated for it? I thought that bit was
> newer for some reason.
The pagecache doesn't know whether the file contains data or holes.
It allocates folios and then invites the filesystem to fill them; the
filesystem checks its data structures and then either issues reads if
there's data on media or calls memset if the records indicate there's
a hole.
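
To make that concrete, here's a rough sketch of the fill step, assuming a
hypothetical filesystem: lookup_extent(), submit_folio_read() and struct
extent are made-up stand-ins for the real extent lookup and I/O paths,
not actual kernel code.

	/* Sketch only: fill a pagecache folio based on the fs's records. */
	static void fs_fill_folio(struct inode *inode, struct folio *folio)
	{
		loff_t pos = folio_pos(folio);
		size_t len = folio_size(folio);
		struct extent ext = lookup_extent(inode, pos, len); /* hypothetical */

		if (ext.type == EXTENT_HOLE) {
			/* Nothing on media: zero-fill and mark uptodate. */
			folio_zero_range(folio, 0, len);
			folio_mark_uptodate(folio);
		} else {
			/* Data on media: read in the backing blocks. */
			submit_folio_read(inode, folio, &ext); /* hypothetical */
		}
	}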
Whether it chooses to allocate large folios or not is going to depend
on the access pattern; a sequential write pattern will use large folios
and a random write pattern won't.
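
In sketch form, the decision looks something like this; it's a hypothetical
helper, not the real ondemand_readahead() logic, and prev_order is an
assumed parameter rather than a field the kernel actually tracks this way.

	/*
	 * Sketch: grow the folio order when a fault lands right after the
	 * previous readahead window (sequential), else use order-0 (random).
	 */
	static unsigned int sketch_folio_order(struct file_ra_state *ra,
					       pgoff_t index,
					       unsigned int prev_order)
	{
		if (index == ra->start + ra->size)	/* sequential */
			return min_t(unsigned int, prev_order + 2,
				     MAX_PAGECACHE_ORDER);
		return 0;				/* random */
	}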
Now, I've oversimplified things a bit by talking about filemap_fault().
Before we call filemap_fault(), we call filemap_map_pages(), which looks
for any suitable folios in the page cache between start and end, and
maps those.
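
The shape of it is roughly this (a heavily simplified sketch, not the real
filemap_map_pages(), which also handles refcounts, locking, readahead
marks and PMD-sized mappings):

	static void sketch_map_pages(struct vm_fault *vmf,
				     pgoff_t start, pgoff_t end)
	{
		struct address_space *mapping = vmf->vma->vm_file->f_mapping;
		XA_STATE(xas, &mapping->i_pages, start);
		struct folio *folio;

		rcu_read_lock();
		xas_for_each(&xas, folio, end) {
			/* Only map folios whose contents are already valid. */
			if (xa_is_value(folio) || !folio_test_uptodate(folio))
				continue;
			/* ...take a reference, then install a PTE for each
			 * page of the folio that falls in [start, end]... */
		}
		rcu_read_unlock();
	}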