Message-ID: <22309FE2-8F8F-44B3-BABF-0227624F38C4@nvidia.com>
Date: Tue, 12 Mar 2024 09:49:12 -0400
From: Zi Yan <ziy@...dia.com>
To: Baolin Wang <baolin.wang@...ux.alibaba.com>
Cc: linux-mm@...ck.org, Andrew Morton <akpm@...ux-foundation.org>,
"\"Matthew Wilcox (Oracle)\"" <willy@...radead.org>,
Yang Shi <shy828301@...il.com>, Huang Ying <ying.huang@...el.com>,
"\"Kirill A . Shutemov\"" <kirill.shutemov@...ux.intel.com>,
Ryan Roberts <ryan.roberts@....com>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] mm/migrate: put dest folio on deferred split list if
source was there.
On 12 Mar 2024, at 3:27, Baolin Wang wrote:
> On 2024/3/12 03:58, Zi Yan wrote:
>> From: Zi Yan <ziy@...dia.com>
>>
>> Commit 616b8371539a6 ("mm: thp: enable thp migration in generic path")
>> did not check if a THP is on deferred split list before migration, thus,
>> the destination THP is never put on deferred split list even if the source
>> THP might be. The opportunity of reclaiming free pages in a partially
>> mapped THP during deferred list scanning is lost, but no other harmful
>> consequence is present[1]. Checking source folio deferred split list
>> status before page unmapped and add destination folio to the list if
>> source was after migration.
>>
>> [1]: https://lore.kernel.org/linux-mm/03CE3A00-917C-48CC-8E1C-6A98713C817C@nvidia.com/
>>
>> From v1:
>> 1. Used dst to get correct deferred split list after migration
>> (per Ryan Roberts).
>>
>> Fixes: 616b8371539a ("mm: thp: enable thp migration in generic path")
>> Signed-off-by: Zi Yan <ziy@...dia.com>
>> ---
>> mm/huge_memory.c | 22 ----------------------
>> mm/internal.h | 23 +++++++++++++++++++++++
>> mm/migrate.c | 26 +++++++++++++++++++++++++-
>> 3 files changed, 48 insertions(+), 23 deletions(-)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 9859aa4f7553..c6d4d0cdf4b3 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -766,28 +766,6 @@ pmd_t maybe_pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma)
>> return pmd;
>> }
>> -#ifdef CONFIG_MEMCG
>> -static inline
>> -struct deferred_split *get_deferred_split_queue(struct folio *folio)
>> -{
>> - struct mem_cgroup *memcg = folio_memcg(folio);
>> - struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
>> -
>> - if (memcg)
>> - return &memcg->deferred_split_queue;
>> - else
>> - return &pgdat->deferred_split_queue;
>> -}
>> -#else
>> -static inline
>> -struct deferred_split *get_deferred_split_queue(struct folio *folio)
>> -{
>> - struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
>> -
>> - return &pgdat->deferred_split_queue;
>> -}
>> -#endif
>> -
>> void folio_prep_large_rmappable(struct folio *folio)
>> {
>> if (!folio || !folio_test_large(folio))
>> diff --git a/mm/internal.h b/mm/internal.h
>> index d1c69119b24f..8fa36e84463a 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -1107,6 +1107,29 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
>> unsigned long addr, pmd_t *pmd,
>> unsigned int flags);
>> +#ifdef CONFIG_MEMCG
>> +static inline
>> +struct deferred_split *get_deferred_split_queue(struct folio *folio)
>> +{
>> + struct mem_cgroup *memcg = folio_memcg(folio);
>> + struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
>> +
>> + if (memcg)
>> + return &memcg->deferred_split_queue;
>> + else
>> + return &pgdat->deferred_split_queue;
>> +}
>> +#else
>> +static inline
>> +struct deferred_split *get_deferred_split_queue(struct folio *folio)
>> +{
>> + struct pglist_data *pgdat = NODE_DATA(folio_nid(folio));
>> +
>> + return &pgdat->deferred_split_queue;
>> +}
>> +#endif
>> +
>> +
>> /*
>> * mm/mmap.c
>> */
>> diff --git a/mm/migrate.c b/mm/migrate.c
>> index 73a052a382f1..591e65658535 100644
>> --- a/mm/migrate.c
>> +++ b/mm/migrate.c
>> @@ -20,6 +20,7 @@
>> #include <linux/pagemap.h>
>> #include <linux/buffer_head.h>
>> #include <linux/mm_inline.h>
>> +#include <linux/mmzone.h>
>> #include <linux/nsproxy.h>
>> #include <linux/ksm.h>
>> #include <linux/rmap.h>
>> @@ -1037,7 +1038,10 @@ static int move_to_new_folio(struct folio *dst, struct folio *src,
>> enum {
>> PAGE_WAS_MAPPED = BIT(0),
>> PAGE_WAS_MLOCKED = BIT(1),
>> - PAGE_OLD_STATES = PAGE_WAS_MAPPED | PAGE_WAS_MLOCKED,
>> + PAGE_WAS_ON_DEFERRED_LIST = BIT(2),
>> + PAGE_OLD_STATES = PAGE_WAS_MAPPED |
>> + PAGE_WAS_MLOCKED |
>> + PAGE_WAS_ON_DEFERRED_LIST,
>> };
>> static void __migrate_folio_record(struct folio *dst,
>> @@ -1168,6 +1172,17 @@ static int migrate_folio_unmap(new_folio_t get_new_folio,
>> folio_lock(src);
>> }
>> locked = true;
>> + if (folio_test_large_rmappable(src) &&
>
> IMO, you should check folio_test_large() before calling folio_test_large_rmappable(), since the PG_large_rmappable flag is stored in the first tail page.
You are right. Ryan also pointed this out in another email. Will fix. Thanks.
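For illustration, the reordered check could look roughly like the untested
sketch below (the body of the conditional is elided here; the deferred split
list handling itself stays as in the patch):

	/*
	 * Untested sketch: folio_test_large() needs to come first, because
	 * PG_large_rmappable is stored in the first tail page and is only
	 * meaningful for large folios.
	 */
	if (folio_test_large(src) && folio_test_large_rmappable(src)) {
		/* ... take src off its deferred split list as in the patch ... */
	}
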
--
Best Regards,
Yan, Zi