Message-ID: <87zhl59w2t.fsf@yhuang-dev.intel.com>
Date: Tue, 23 Jul 2019 13:08:42 +0800
From: "Huang\, Ying" <ying.huang@...el.com>
To: Mikhail Gavrilov <mikhail.v.gavrilov@...il.com>
Cc: huang ying <huang.ying.caritas@...il.com>,
Linux List Kernel Mailing <linux-kernel@...r.kernel.org>,
<linux-mm@...ck.org>
Subject: Re: kernel BUG at mm/swap_state.c:170!

Mikhail Gavrilov <mikhail.v.gavrilov@...il.com> writes:
> On Mon, 22 Jul 2019 at 12:53, Huang, Ying <ying.huang@...el.com> wrote:
>>
>> Yes. This is quite complex. Is transparent huge page enabled on
>> your system? You can check the output of
>>
>> $ cat /sys/kernel/mm/transparent_hugepage/enabled
>
> always [madvise] never
>
>> And is the swap device you use an SSD or an NVMe disk (not an HDD)?
>
> NVMe INTEL Optane 905P SSDPE21D480GAM3

Thanks! I have found another (easier) way to reproduce the panic.
Could you try the patch below on top of v5.2-rc2? It fixes the panic
for me.
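
In case it helps, applying and building could look like the following
(the patch file name is just an example; save this mail from the
"From " line down into it, or feed the whole mail to git am):

$ git checkout v5.2-rc2
$ git am swap-cache-split.patch
$ make olddefconfig && make -j$(nproc)
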
Best Regards,
Huang, Ying
-----------------------------------8<----------------------------------
From 5e519c2de54b9fd4b32b7a59e47ce7f94beb8845 Mon Sep 17 00:00:00 2001
From: Huang Ying <ying.huang@...el.com>
Date: Tue, 23 Jul 2019 08:49:57 +0800
Subject: [PATCH] dbg xa head
---
mm/huge_memory.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 9f8bce9a6b32..c6ca1c7157ed 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2482,6 +2482,8 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	struct page *head = compound_head(page);
 	pg_data_t *pgdat = page_pgdat(head);
 	struct lruvec *lruvec;
+	struct address_space *swap_cache = NULL;
+	unsigned long offset;
 	int i;
 
 	lruvec = mem_cgroup_page_lruvec(head, pgdat);
@@ -2489,6 +2491,14 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* complete memcg works before add pages to LRU */
 	mem_cgroup_split_huge_fixup(head);
 
+	if (PageAnon(head) && PageSwapCache(head)) {
+		swp_entry_t entry = { .val = page_private(head) };
+
+		offset = swp_offset(entry);
+		swap_cache = swap_address_space(entry);
+		xa_lock(&swap_cache->i_pages);
+	}
+
 	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
 		__split_huge_page_tail(head, i, lruvec, list);
 		/* Some pages can be beyond i_size: drop them from page cache */
@@ -2501,6 +2511,9 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 		} else if (!PageAnon(page)) {
 			__xa_store(&head->mapping->i_pages, head[i].index,
 					head + i, 0);
+		} else if (swap_cache) {
+			__xa_store(&swap_cache->i_pages, offset + i,
+					head + i, 0);
 		}
 	}
 
@@ -2508,9 +2521,10 @@ static void __split_huge_page(struct page *page, struct list_head *list,
 	/* See comment in __split_huge_page_tail() */
 	if (PageAnon(head)) {
 		/* Additional pin to swap cache */
-		if (PageSwapCache(head))
+		if (PageSwapCache(head)) {
 			page_ref_add(head, 2);
-		else
+			xa_unlock(&swap_cache->i_pages);
+		} else
 			page_ref_inc(head);
 	} else {
 		/* Additional pin to page cache */
--
2.20.1
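
For anyone following along, the gist of the change, restated with
comments (this only annotates lines that are already in the patch
above, it is not extra code):

	/* A THP in the swap cache occupies HPAGE_PMD_NR consecutive
	 * slots in the swap cache xarray; page_private(head) holds the
	 * swap entry of the head page. */
	if (PageAnon(head) && PageSwapCache(head)) {
		swp_entry_t entry = { .val = page_private(head) };

		offset = swp_offset(entry);
		swap_cache = swap_address_space(entry);
		xa_lock(&swap_cache->i_pages);
	}

	/* In the split loop, tail page i lives at slot offset + i, so
	 * rewrite that slot to point at the tail page instead of the
	 * head. */
	__xa_store(&swap_cache->i_pages, offset + i, head + i, 0);

The xa_lock() is dropped (in the last hunk) only after every tail
slot has been rewritten, which keeps the swap cache entries
consistent with the now-split pages.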