Message-ID: <20190729054335.3241150-3-songliubraving@fb.com>
Date: Sun, 28 Jul 2019 22:43:35 -0700
From: Song Liu <songliubraving@...com>
To: <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
<akpm@...ux-foundation.org>
CC: <matthew.wilcox@...cle.com>, <kirill.shutemov@...ux.intel.com>,
<oleg@...hat.com>, <kernel-team@...com>,
<william.kucharski@...cle.com>, <srikar@...ux.vnet.ibm.com>,
Song Liu <songliubraving@...com>
Subject: [PATCH 2/2] uprobe: collapse THP pmd after removing all uprobes

After all uprobes have been removed from a huge page that is currently
mapped through a PTE page table, it is possible to collapse the PMD and
benefit from THP again. This patch triggers the collapse by calling
khugepaged_add_pte_mapped_thp().

Signed-off-by: Song Liu <songliubraving@...com>
---
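[Note, not part of the patch itself: below is a minimal user-space sketch of
the address arithmetic the last hunk relies on, i.e. rounding the probed
virtual address down with HPAGE_PMD_MASK to obtain the start of the
PMD-sized region that khugepaged_add_pte_mapped_thp() is asked to collapse.
It assumes the common x86_64 configuration where a PMD maps 2MB; the
HPAGE_PMD_* constants are re-defined here purely for illustration and are
not the kernel's definitions.]

/*
 * Illustrative sketch only (not kernel code): shows the
 * vaddr & HPAGE_PMD_MASK rounding, assuming 2MB huge pages.
 */
#include <stdio.h>

#define HPAGE_PMD_SHIFT	21			/* assumed: 2MB PMD */
#define HPAGE_PMD_SIZE	(1UL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

int main(void)
{
	unsigned long vaddr = 0x7f1234567890UL;	/* arbitrary probed address */

	/* start of the PMD-sized region covering vaddr */
	printf("0x%lx -> 0x%lx\n", vaddr, vaddr & HPAGE_PMD_MASK);
	return 0;
}

Running this prints "0x7f1234567890 -> 0x7f1234400000", i.e. the
2MB-aligned address that would be handed to khugepaged for collapse.
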
 kernel/events/uprobes.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
index 58ab7fc7272a..cc53789fefc6 100644
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -26,6 +26,7 @@
 #include <linux/percpu-rwsem.h>
 #include <linux/task_work.h>
 #include <linux/shmem_fs.h>
+#include <linux/khugepaged.h>
 
 #include <linux/uprobes.h>
@@ -470,6 +471,7 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	struct page *old_page, *new_page;
 	struct vm_area_struct *vma;
 	int ret, is_register, ref_ctr_updated = 0;
+	bool orig_page_huge = false;
 
 	is_register = is_swbp_insn(&opcode);
 	uprobe = container_of(auprobe, struct uprobe, arch);
@@ -525,6 +527,9 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 			/* dec_mm_counter for old_page */
 			dec_mm_counter(mm, MM_ANONPAGES);
+
+			if (PageCompound(orig_page))
+				orig_page_huge = true;
 		}
 		put_page(orig_page);
 	}
@@ -543,6 +548,10 @@ int uprobe_write_opcode(struct arch_uprobe *auprobe, struct mm_struct *mm,
 	if (ret && is_register && ref_ctr_updated)
 		update_ref_ctr(uprobe, mm, -1);
 
+	/* try collapse pmd for compound page */
+	if (!ret && orig_page_huge)
+		khugepaged_add_pte_mapped_thp(mm, vaddr & HPAGE_PMD_MASK);
+
 	return ret;
 }
--
2.17.1