Message-ID: <c75d1c6c-8ea6-424f-853c-1ccda6c77ba2@redhat.com>
Date: Tue, 6 Aug 2024 12:24:43 +0200
From: David Hildenbrand <david@...hat.com>
To: Ryan Roberts <ryan.roberts@....com>, linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, linux-doc@...r.kernel.org, kvm@...r.kernel.org,
linux-s390@...r.kernel.org, linux-fsdevel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Jonathan Corbet <corbet@....net>,
Christian Borntraeger <borntraeger@...ux.ibm.com>,
Janosch Frank <frankja@...ux.ibm.com>,
Claudio Imbrenda <imbrenda@...ux.ibm.com>, Heiko Carstens
<hca@...ux.ibm.com>, Vasily Gorbik <gor@...ux.ibm.com>,
Alexander Gordeev <agordeev@...ux.ibm.com>,
Sven Schnelle <svens@...ux.ibm.com>,
Gerald Schaefer <gerald.schaefer@...ux.ibm.com>,
Mark Brown <broonie@...nel.org>
Subject: Re: [PATCH v1 07/11] mm/huge_memory: convert split_huge_pages_pid()
from follow_page() to folio_walk
On 06.08.24 12:03, David Hildenbrand wrote:
> On 06.08.24 11:56, David Hildenbrand wrote:
>> On 06.08.24 11:46, Ryan Roberts wrote:
>>> On 02/08/2024 16:55, David Hildenbrand wrote:
>>>> Let's remove yet another follow_page() user. Note that we have to do the
>>>> split without holding the PTL, after folio_walk_end(). We don't care
>>>> about losing the secretmem check in follow_page().
>>>
>>> Hi David,
>>>
>>> Our (arm64) CI is showing a regression in split_huge_page_test from mm selftests from next-20240805 onwards. Navigating around a couple of other lurking bugs, I was able to bisect to this change (which smells about right).
>>>
>>> Newly failing test:
>>>
>>> # # ------------------------------
>>> # # running ./split_huge_page_test
>>> # # ------------------------------
>>> # # TAP version 13
>>> # # 1..12
>>> # # Bail out! Still AnonHugePages not split
>>> # # # Planned tests != run tests (12 != 0)
>>> # # # Totals: pass:0 fail:0 xfail:0 xpass:0 skip:0 error:0
>>> # # [FAIL]
>>> # not ok 52 split_huge_page_test # exit=1
>>>
>>> It's trying to split some pmd-mapped THPs then checking and finding that they are not split. The split is requested via /sys/kernel/debug/split_huge_pages, which I believe ends up in this function you are modifying here. Although I'll admit that looking at the change, there is nothing obviously wrong! Any ideas?
>>
>> Nothing jumps out at me either. Let me fire up the debugger :)
>
> Ah, very likely the can_split_folio() check expects a raised refcount
> already.
Indeed, the following does the trick! Thanks Ryan, I could have sworn
I ran that selftest as well.
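(For reference, the selftest exercises exactly this path: it writes
"<pid>,<vaddr_start>,<vaddr_end>" to /sys/kernel/debug/split_huge_pages,
which ends up in split_huge_pages_pid(). The values here are placeholders;
the usage comment in mm/huge_memory.c documents the exact input format.)
With the fixup applied: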
TAP version 13
1..12
ok 1 Split huge pages successful
ok 2 Split PTE-mapped huge pages successful
# Please enable pr_debug in split_huge_pages_in_file() for more info.
# Please check dmesg for more information
ok 3 File-backed THP split test done
...
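For the record, the arithmetic behind the failure, with illustrative
numbers for an anon THP that is not in the swapcache, is mapped exactly
once and has no extra pins (so extra_pins == 0 and folio_ref_count() ==
folio_mapcount()):

  Old check, reached from split_huge_pages_pid() without follow_page()'s
  reference:
    folio_mapcount() == folio_ref_count() - extra_pins - 1
    1 == 1 - 0 - 1  -> false, so the folio was silently skipped

  New check with caller_pins == 0:
    1 == 1 - 0 - 0  -> true, so the split proceeds

split_huge_page_to_list_to_order() and shrink_folio_list() keep passing
caller_pins == 1, because they do hold a folio reference of their own.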
@Andrew, can you squash the following?
From e5ea585de3e089ea89bf43d8447ff9fc9b371286 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@...hat.com>
Date: Tue, 6 Aug 2024 12:08:17 +0200
Subject: [PATCH] fixup: mm/huge_memory: convert split_huge_pages_pid() from
follow_page() to folio_walk
We have to teach can_split_folio() whether the caller holds an additional
folio reference: split_huge_pages_pid() no longer does, because folio_walk,
unlike follow_page(), does not take a reference on the folio.
Signed-off-by: David Hildenbrand <david@...hat.com>
---
include/linux/huge_mm.h | 4 ++--
mm/huge_memory.c | 8 ++++----
mm/vmscan.c | 2 +-
3 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index e25d9ebfdf89..ce44caa40eed 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -314,7 +314,7 @@ unsigned long thp_get_unmapped_area_vmflags(struct file *filp, unsigned long add
unsigned long len, unsigned long pgoff, unsigned long flags,
vm_flags_t vm_flags);
-bool can_split_folio(struct folio *folio, int *pextra_pins);
+bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins);
int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
unsigned int new_order);
static inline int split_huge_page(struct page *page)
@@ -470,7 +470,7 @@ thp_get_unmapped_area_vmflags(struct file *filp, unsigned long addr,
}
static inline bool
-can_split_folio(struct folio *folio, int *pextra_pins)
+can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
{
return false;
}
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 697fcf89f975..c40b0dcc205b 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3021,7 +3021,7 @@ static void __split_huge_page(struct page *page, struct list_head *list,
}
/* Racy check whether the huge page can be split */
-bool can_split_folio(struct folio *folio, int *pextra_pins)
+bool can_split_folio(struct folio *folio, int caller_pins, int *pextra_pins)
{
int extra_pins;
@@ -3033,7 +3033,7 @@ bool can_split_folio(struct folio *folio, int *pextra_pins)
extra_pins = folio_nr_pages(folio);
if (pextra_pins)
*pextra_pins = extra_pins;
- return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - 1;
+ return folio_mapcount(folio) == folio_ref_count(folio) - extra_pins - caller_pins;
}
/*
@@ -3201,7 +3201,7 @@ int split_huge_page_to_list_to_order(struct page *page, struct list_head *list,
* Racy check if we can split the page, before unmap_folio() will
* split PMDs
*/
- if (!can_split_folio(folio, &extra_pins)) {
+ if (!can_split_folio(folio, 1, &extra_pins)) {
ret = -EAGAIN;
goto out_unlock;
}
@@ -3537,7 +3537,7 @@ static int split_huge_pages_pid(int pid, unsigned long vaddr_start,
* can be split or not. So skip the check here.
*/
if (!folio_test_private(folio) &&
- !can_split_folio(folio, NULL))
+ !can_split_folio(folio, 0, NULL))
goto next;
if (!folio_trylock(folio))
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 31d13462571e..a332cb80e928 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1227,7 +1227,7 @@ static unsigned int shrink_folio_list(struct list_head *folio_list,
goto keep_locked;
if (folio_test_large(folio)) {
/* cannot split folio, skip it */
- if (!can_split_folio(folio, NULL))
+ if (!can_split_folio(folio, 1, NULL))
goto activate_locked;
/*
* Split partially mapped folios right away.
--
2.45.2
--
Cheers,
David / dhildenb