Message-ID: <45567EBA-5856-4BBC-8C02-EAE03A676B94@nvidia.com>
Date: Fri, 07 Jun 2024 16:51:04 -0400
From: Zi Yan <ziy@...dia.com>
To: "Pankaj Raghav (Samsung)" <kernel@...kajraghav.com>, willy@...radead.org
Cc: david@...morbit.com, djwong@...nel.org, chandan.babu@...cle.com,
brauner@...nel.org, akpm@...ux-foundation.org, mcgrof@...nel.org,
linux-mm@...ck.org, hare@...e.de, linux-kernel@...r.kernel.org,
yang@...amperecomputing.com, linux-xfs@...r.kernel.org, p.raghav@...sung.com,
linux-fsdevel@...r.kernel.org, hch@....de, gost.dev@...sung.com,
cl@...amperecomputing.com, john.g.garry@...cle.com
Subject: Re: [PATCH v7 05/11] mm: split a folio in minimum folio order chunks
On 7 Jun 2024, at 16:30, Pankaj Raghav (Samsung) wrote:
> On Fri, Jun 07, 2024 at 12:58:33PM -0400, Zi Yan wrote:
>> Hi Pankaj,
>>
>> Can you use ziy@...dia.com instead of zi.yan@...t.com? I only use the
>> latter to send patches. Thanks.
>
> Got it!
>
>>
>> On 7 Jun 2024, at 10:58, Pankaj Raghav (Samsung) wrote:
>>
>>> From: Luis Chamberlain <mcgrof@...nel.org>
>>>
>>> split_folio() and split_folio_to_list() assume order 0. To support
>>> minorder for non-anonymous folios, we must expand these to check the
>>> folio mapping order and use that.
>>>
>>> Set new_order to be at least minimum folio order if it is set in
>>> split_huge_page_to_list() so that we can maintain minimum folio order
>>> requirement in the page cache.
>>>
>>> Update the debugfs write files used for testing to ensure the order
>>> is respected as well. We simply enforce the min order when a file
>>> mapping is used.
>>>
>>> Signed-off-by: Luis Chamberlain <mcgrof@...nel.org>
>>> Signed-off-by: Pankaj Raghav <p.raghav@...sung.com>
>>> ---
>>>  include/linux/huge_mm.h | 14 ++++++++---
>>>  mm/huge_memory.c        | 55 ++++++++++++++++++++++++++++++++++++++---
>>>  2 files changed, 61 insertions(+), 8 deletions(-)
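
For context, the clamping described in the commit message presumably
reduces to something like the sketch below (an illustration only, not
the actual hunk; mapping_min_folio_order() is assumed to be introduced
earlier in this series):

	/*
	 * Sketch: clamp the requested split order so page cache folios
	 * are never split below the mapping's minimum folio order.
	 */
	if (!folio_test_anon(folio) && folio->mapping) {
		unsigned int min_order =
			mapping_min_folio_order(folio->mapping);

		new_order = max(new_order, min_order);
	}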
>>>
>>
>> <snip>
>>
>>>
>>> +int split_folio_to_list(struct folio *folio, struct list_head *list)
>>> +{
>>> +	unsigned int min_order = 0;
>>> +
>>> +	if (!folio_test_anon(folio)) {
>>> +		if (!folio->mapping) {
>>> +			count_vm_event(THP_SPLIT_PAGE_FAILED);
>>
>> You should only increase this counter when the input folio is a THP, namely
>> folio_test_pmd_mappable(folio) is true. For other large folios, we will
>> need a separate counter. Something like MTHP_STAT_FILE_SPLIT_FAILED.
>> See enum mthp_stat_item in include/linux/huge_mm.h.
>>
> Hmm, but we don't have mTHP support for non-anonymous memory, right? In
> that case, it won't be applicable to file-backed memory?
Large folio support in the page cache predates mTHP (large anonymous
folios), thanks to willy's work; mTHP is more like a subset of large
folios. There are no specific counters for page cache large folios. If
you think it is worth tracking folios with orders between 0 and 9
(exclusive), you can add counters. Matthew, what is your take on this?
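
To make the counter suggestion concrete, the failure path could look
roughly like the sketch below. This is only an illustration:
MTHP_STAT_FILE_SPLIT_FAILED would be a new entry in enum mthp_stat_item,
and the -EBUSY return is assumed from the original error path.

	if (!folio->mapping) {
		/*
		 * THP_SPLIT_PAGE_FAILED only makes sense for PMD-sized
		 * THPs; smaller large folios would bump a new per-order
		 * mTHP counter instead.
		 */
		if (folio_test_pmd_mappable(folio))
			count_vm_event(THP_SPLIT_PAGE_FAILED);
		else
			count_mthp_stat(folio_order(folio),
					MTHP_STAT_FILE_SPLIT_FAILED);
		return -EBUSY;
	}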
--
Best Regards,
Yan, Zi