Message-ID: <6622b7b4-e558-4d14-bc72-33e8008b06ec@gmail.com>
Date: Sun, 4 Aug 2024 20:02:50 +0100
From: Usama Arif <usamaarif642@...il.com>
To: David Hildenbrand <david@...hat.com>, akpm@...ux-foundation.org,
linux-mm@...ck.org
Cc: hannes@...xchg.org, riel@...riel.com, shakeel.butt@...ux.dev,
roman.gushchin@...ux.dev, yuzhao@...gle.com, baohua@...nel.org,
ryan.roberts@....com, rppt@...nel.org, willy@...radead.org,
cerasuolodomenico@...il.com, corbet@....net, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, kernel-team@...a.com,
Shuang Zhai <zhais@...gle.com>
Subject: Re: [PATCH 3/6] mm: free zapped tail pages when splitting isolated
thp
On 30/07/2024 16:14, David Hildenbrand wrote:
> On 30.07.24 14:46, Usama Arif wrote:
>> From: Yu Zhao <yuzhao@...gle.com>
>>
>> If a tail page has only two references left, one inherited from the
>> isolation of its head and the other from lru_add_page_tail() which we
>> are about to drop, it means this tail page was concurrently zapped.
>> Then we can safely free it and save page reclaim or migration the
>> trouble of trying it.
>>
>> Signed-off-by: Yu Zhao <yuzhao@...gle.com>
>> Tested-by: Shuang Zhai <zhais@...gle.com>
>> Signed-off-by: Usama Arif <usamaarif642@...il.com>
>> ---
>> mm/huge_memory.c | 26 ++++++++++++++++++++++++++
>> 1 file changed, 26 insertions(+)
>>
>> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>> index 0167dc27e365..76a3b6a2b796 100644
>> --- a/mm/huge_memory.c
>> +++ b/mm/huge_memory.c
>> @@ -2923,6 +2923,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>> unsigned int new_nr = 1 << new_order;
>> int order = folio_order(folio);
>> unsigned int nr = 1 << order;
>> + LIST_HEAD(pages_to_free);
>> + int nr_pages_to_free = 0;
>> /* complete memcg works before add pages to LRU */
>> split_page_memcg(head, order, new_order);
>> @@ -3007,6 +3009,24 @@ static void __split_huge_page(struct page *page, struct list_head *list,
>> if (subpage == page)
>> continue;
>> folio_unlock(new_folio);
>> + /*
>> + * If a tail page has only two references left, one inherited
>> + * from the isolation of its head and the other from
>> + * lru_add_page_tail() which we are about to drop, it means this
>> + * tail page was concurrently zapped. Then we can safely free it
>> + * and save page reclaim or migration the trouble of trying it.
>> + */
>> + if (list && page_ref_freeze(subpage, 2)) {
>> + VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
>> + VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
>> + VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
>> +
>
> No VM_BUG_*, VM_WARN is good enough.
>
>> + ClearPageActive(subpage);
>> + ClearPageUnevictable(subpage);
>> + list_move(&subpage->lru, &pages_to_free);
>
> Most checks here should operate on new_folio instead of subpage.
>
>
Do you mean that instead of doing the PageLRU, PageCompound and page_mapped checks on subpage, the checks should be done on new_folio?
If new_folio is a large folio, couldn't it be that only some of its subpages were zapped?
If checking subpage makes sense, I could do the below:
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 3305e6d0b90e..abfcd4b7cbba 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3041,9 +3041,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		 * and save page reclaim or migration the trouble of trying it.
 		 */
 		if (list && page_ref_freeze(subpage, 2)) {
-			VM_BUG_ON_PAGE(PageLRU(subpage), subpage);
-			VM_BUG_ON_PAGE(PageCompound(subpage), subpage);
-			VM_BUG_ON_PAGE(page_mapped(subpage), subpage);
+			VM_WARN_ON_ONCE_PAGE(PageLRU(subpage), subpage);
+			VM_WARN_ON_ONCE_PAGE(PageCompound(subpage), subpage);
+			VM_WARN_ON_ONCE_PAGE(page_mapped(subpage), subpage);
 
 			ClearPageActive(subpage);
 			ClearPageUnevictable(subpage);
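
Alternatively, if you meant converting the whole block to operate on new_folio, here is a rough (untested) sketch of what I understand that would look like, assuming new_order == 0 so each new_folio covers a single page:

	if (list && folio_ref_freeze(new_folio, 2)) {
		VM_WARN_ON_ONCE_FOLIO(folio_test_lru(new_folio), new_folio);
		VM_WARN_ON_ONCE_FOLIO(folio_test_large(new_folio), new_folio);
		VM_WARN_ON_ONCE_FOLIO(folio_mapped(new_folio), new_folio);

		folio_clear_active(new_folio);
		folio_clear_unevictable(new_folio);
		list_move(&new_folio->lru, &pages_to_free);
		/* assuming the counter is bumped here, as in the subpage version */
		nr_pages_to_free++;
	}

If new_order > 0 though, I don't think freezing new_folio at a refcount of 2 would be right, per the question above about only some of its subpages being zapped.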