Message-Id: <20230413055038.180952-1-yang.yang29@zte.com.cn>
Date: Thu, 13 Apr 2023 13:50:38 +0800
From: Yang Yang <yang.yang29@....com.cn>
To: akpm@...ux-foundation.org, david@...hat.com
Cc: yang.yang29@....com.cn, imbrenda@...ux.ibm.com,
linux-kernel@...r.kernel.org, linux-mm@...ck.org,
ran.xiaokai@....com.cn, xu.xin.sc@...il.com, xu.xin16@....com.cn,
Xuexin Jiang <jiang.xuexin@....com.cn>
Subject: [PATCH v7 1/6] ksm: support unsharing KSM-placed zero pages
From: xu xin <xu.xin16@....com.cn>
When use_zero_pages of KSM is enabled, madvise(addr, len, MADV_UNMERGEABLE)
and other unsharing triggers (like writing 2 to /sys/kernel/mm/ksm/run)
will *not* actually unshare the shared zeropages placed by KSM
(which is against the MADV_UNMERGEABLE documentation). As these KSM-placed
zero pages are out of the control of KSM, the related counts of KSM pages
don't reveal how many zero pages are placed by KSM (these special zero
pages are different from those initially mapped zero pages, because the
zero pages mapped to MADV_UNMERGEABLE areas are expected to become complete
and unshared pages).
To avoid blindly unsharing all shared zero pages in applicable VMAs, this
patch uses pte_mkdirty (which is architecture-dependent) to mark KSM-placed
zero pages. Thus, MADV_UNMERGEABLE will only unshare those KSM-placed zero
pages.
The architecture must guarantee that pte_mkdirty won't make the pte
writable. Otherwise, it would break the write-protected state of KSM pages
and affect KSM functionality. For safety, we restrict this feature only to
the tested and known-working architectures for now.
This patch will not degrade the performance of use_zero_pages, as it
doesn't change the way empty pages are merged by that feature.
Signed-off-by: xu xin <xu.xin16@....com.cn>
Suggested-by: David Hildenbrand <david@...hat.com>
Cc: Claudio Imbrenda <imbrenda@...ux.ibm.com>
Cc: Xuexin Jiang <jiang.xuexin@....com.cn>
Reviewed-by: Xiaokai Ran <ran.xiaokai@....com.cn>
Reviewed-by: Yang Yang <yang.yang29@....com.cn>
---
include/linux/ksm.h | 9 +++++++++
mm/Kconfig | 24 +++++++++++++++++++++++-
mm/ksm.c | 5 +++--
3 files changed, 35 insertions(+), 3 deletions(-)
diff --git a/include/linux/ksm.h b/include/linux/ksm.h
index d5f69f18ee5a..f0cc085be42a 100644
--- a/include/linux/ksm.h
+++ b/include/linux/ksm.h
@@ -95,4 +95,13 @@ static inline void folio_migrate_ksm(struct folio *newfolio, struct folio *old)
#endif /* CONFIG_MMU */
#endif /* !CONFIG_KSM */
+#ifdef CONFIG_KSM_ZERO_PAGES_TRACK
+/* use pte_mkdirty to track a KSM-placed zero page */
+#define set_pte_ksm_zero(pte) pte_mkdirty(pte_mkspecial(pte))
+#define is_ksm_zero_pte(pte) (is_zero_pfn(pte_pfn(pte)) && pte_dirty(pte))
+#else /* !CONFIG_KSM_ZERO_PAGES_TRACK */
+#define set_pte_ksm_zero(pte) pte_mkspecial(pte)
+#define is_ksm_zero_pte(pte) 0
+#endif /* CONFIG_KSM_ZERO_PAGES_TRACK */
+
#endif /* __LINUX_KSM_H */
diff --git a/mm/Kconfig b/mm/Kconfig
index 3894a6309c41..42f69f421a03 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -666,7 +666,7 @@ config MMU_NOTIFIER
bool
select INTERVAL_TREE
-config KSM
+menuconfig KSM
bool "Enable KSM for page merging"
depends on MMU
select XXHASH
@@ -681,6 +681,28 @@ config KSM
until a program has madvised that an area is MADV_MERGEABLE, and
root has set /sys/kernel/mm/ksm/run to 1 (if CONFIG_SYSFS is set).
+if KSM
+
+config KSM_ZERO_PAGES_TRACK
+ bool "support tracking KSM-placed zero pages"
+ depends on KSM
+ depends on ARM || ARM64 || X86
+ default y
+ help
+ This allows KSM to track KSM-placed zero pages, including supporting
+ unsharing and counting them. If N is chosen, then
+ madvise(,,UNMERGEABLE) can't unshare the KSM-placed zero pages, and
+ users can't know how many zero pages are placed by KSM. This feature
+ depends on pte_mkdirty (which is architecture-dependent) to mark
+ KSM-placed zero pages.
+
+ The architecture must guarantee that pte_mkdirty won't make the pte
+ writable. Otherwise, it would break the write-protected state of KSM
+ pages and affect KSM functionality. For safety, we restrict this
+ feature only to the tested and known-working architectures.
+
+endif # KSM
+
config DEFAULT_MMAP_MIN_ADDR
int "Low address space to protect from user allocation"
depends on MMU
diff --git a/mm/ksm.c b/mm/ksm.c
index 7cd7e12cd3df..1d1771a6b3fe 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -447,7 +447,8 @@ static int break_ksm_pmd_entry(pmd_t *pmd, unsigned long addr, unsigned long nex
if (is_migration_entry(entry))
page = pfn_swap_entry_to_page(entry);
}
- ret = page && PageKsm(page);
+ /* return 1 if the page is a normal KSM page or a KSM-placed zero page */
+ ret = (page && PageKsm(page)) || is_ksm_zero_pte(*pte);
pte_unmap_unlock(pte, ptl);
return ret;
}
@@ -1240,7 +1241,7 @@ static int replace_page(struct vm_area_struct *vma, struct page *page,
page_add_anon_rmap(kpage, vma, addr, RMAP_NONE);
newpte = mk_pte(kpage, vma->vm_page_prot);
} else {
- newpte = pte_mkspecial(pfn_pte(page_to_pfn(kpage),
+ newpte = set_pte_ksm_zero(pfn_pte(page_to_pfn(kpage),
vma->vm_page_prot));
/*
* We're replacing an anonymous page with a zero page, which is
--
2.15.2