Message-ID: <20240610072657.erdzkedvbzj3gohu@quentin>
Date: Mon, 10 Jun 2024 07:26:57 +0000
From: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>
To: Zi Yan <ziy@...dia.com>
Cc: willy@...radead.org, david@...morbit.com, djwong@...nel.org,
chandan.babu@...cle.com, brauner@...nel.org,
akpm@...ux-foundation.org, mcgrof@...nel.org, linux-mm@...ck.org,
hare@...e.de, linux-kernel@...r.kernel.org,
yang@...amperecomputing.com, linux-xfs@...r.kernel.org,
p.raghav@...sung.com, linux-fsdevel@...r.kernel.org, hch@....de,
gost.dev@...sung.com, cl@...amperecomputing.com,
john.g.garry@...cle.com
Subject: Re: [PATCH v7 05/11] mm: split a folio in minimum folio order chunks
On Fri, Jun 07, 2024 at 04:51:04PM -0400, Zi Yan wrote:
> On 7 Jun 2024, at 16:30, Pankaj Raghav (Samsung) wrote:
> >>> +	if (!folio->mapping) {
> >>> +		count_vm_event(THP_SPLIT_PAGE_FAILED);
> >>
> >> You should only increase this counter when the input folio is a THP, namely
> >> folio_test_pmd_mappable(folio) is true. For other large folios, we will
> >> need a separate counter. Something like MTHP_STAT_FILE_SPLIT_FAILED.
> >> See enum mthp_stat_item in include/linux/huge_mm.h.
> >>
> > Hmm, but we don't have mTHP support for non-anonymous memory, right?
> > In that case it won't be applicable to file-backed memory?
>
> Large folio support in the page cache precedes mTHP (large anonymous
> folios), thanks to willy's work. mTHP is more like a subset of large
> folios. There are no specific counters for page cache large folios.
> If you think it is worth tracking folios with orders between 0 and 9
> (exclusive), you can add counters. Matthew, what is your take on this?
Got it. I think this is out of scope for this series, but it is
something we could consider as a future enhancement. For now I will
guard the existing counter as you suggested; a sketch of what I had in
mind is below.
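A minimal sketch, assuming we keep THP_SPLIT_PAGE_FAILED for
PMD-mappable folios only, as you suggested (the -EBUSY/goto lines are
just my reading of the surrounding context, not part of the change):

	/* Truncated? Only count the failure for actual THPs. */
	if (!folio->mapping) {
		if (folio_test_pmd_mappable(folio))
			count_vm_event(THP_SPLIT_PAGE_FAILED);
		ret = -EBUSY;
		goto out;
	}

Smaller large folios would then get a separate counter (e.g. the
MTHP_STAT_FILE_SPLIT_FAILED you mentioned) if and when we decide to
track them.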
In any case, we need to decide whether truncation should be counted as
a VM event at all.