Date: Mon, 11 Oct 2021 15:36:49 +0800
From: Liangcai Fan <liangcaifan19@...il.com>
To: liangcai.fan@...soc.com, akpm@...ux-foundation.org
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
	Liangcai Fan <liangcaifan19@...il.com>,
	Chunyan Zhang <zhang.lyra@...il.com>
Subject: [PATCH] mm: khugepaged: Recalculate min_free_kbytes after stopping khugepaged

When initializing transparent huge pages, min_free_kbytes is raised to
the value khugepaged recommends. So when transparent huge pages are
disabled, min_free_kbytes should be recalculated instead of being left
at the higher value set by khugepaged.

Signed-off-by: Liangcai Fan <liangcaifan19@...il.com>
Signed-off-by: Chunyan Zhang <zhang.lyra@...il.com>
---
 include/linux/mm.h |  1 +
 mm/khugepaged.c    | 10 ++++++++--
 mm/page_alloc.c    |  7 ++++++-
 3 files changed, 15 insertions(+), 3 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 73a52ab..4ef07e8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2475,6 +2475,7 @@ extern void memmap_init_range(unsigned long, int, unsigned long,
 		unsigned long, unsigned long, enum meminit_context,
 		struct vmem_altmap *, int migratetype);
 extern void setup_per_zone_wmarks(void);
+extern void calculate_min_free_kbytes(void);
 extern int __meminit init_per_zone_wmark_min(void);
 extern void mem_init(void);
 extern void __init mmap_init(void);
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 045cc57..682130f 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -2291,6 +2291,11 @@ static void set_recommended_min_free_kbytes(void)
 	int nr_zones = 0;
 	unsigned long recommended_min;
 
+	if (!khugepaged_enabled()) {
+		calculate_min_free_kbytes();
+		goto update_wmarks;
+	}
+
 	for_each_populated_zone(zone) {
 		/*
 		 * We don't need to worry about fragmentation of
@@ -2326,6 +2331,8 @@ static void set_recommended_min_free_kbytes(void)
 
 		min_free_kbytes = recommended_min;
 	}
+
+update_wmarks:
 	setup_per_zone_wmarks();
 }
 
@@ -2347,12 +2354,11 @@ int start_stop_khugepaged(void)
 
 		if (!list_empty(&khugepaged_scan.mm_head))
 			wake_up_interruptible(&khugepaged_wait);
-
-		set_recommended_min_free_kbytes();
 	} else if (khugepaged_thread) {
 		kthread_stop(khugepaged_thread);
 		khugepaged_thread = NULL;
 	}
+	set_recommended_min_free_kbytes();
 fail:
 	mutex_unlock(&khugepaged_mutex);
 	return err;
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b37435c..1d44386 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8452,7 +8452,7 @@ void setup_per_zone_wmarks(void)
  *	8192MB:		11584k
  *	16384MB:	16384k
  */
-int __meminit init_per_zone_wmark_min(void)
+void calculate_min_free_kbytes(void)
 {
 	unsigned long lowmem_kbytes;
 	int new_min_free_kbytes;
@@ -8470,6 +8470,11 @@ int __meminit init_per_zone_wmark_min(void)
 		pr_warn("min_free_kbytes is not updated to %d because user defined value %d is preferred\n",
 			new_min_free_kbytes, user_min_free_kbytes);
 	}
+}
+
+int __meminit init_per_zone_wmark_min(void)
+{
+	calculate_min_free_kbytes();
 	setup_per_zone_wmarks();
 	refresh_zone_stat_thresholds();
 	setup_per_zone_lowmem_reserve();
-- 
1.9.1
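
[Editor's note: to make the effect of this patch concrete, below is a rough
standalone userspace sketch (not kernel code) of the two calculations
involved. The zone count, pageblock size, and lowmem size are assumed values
chosen for illustration; the point is only to show why min_free_kbytes should
fall back to the sqrt-based default once khugepaged stops.]

/*
 * Userspace sketch approximating the two min_free_kbytes calculations
 * touched by this patch.  All constants below (nr_zones, pageblock size,
 * lowmem size) are assumptions for illustration, not values read from a
 * real system.  Build with: cc sketch.c -lm
 */
#include <math.h>
#include <stdio.h>

/* Approximates calculate_min_free_kbytes(): sqrt(lowmem_kbytes * 16),
 * clamped to the [128, 262144] kB range used by the kernel. */
static long base_min_free_kbytes(long lowmem_kbytes)
{
	long v = (long)sqrt((double)lowmem_kbytes * 16.0);

	if (v < 128)
		v = 128;
	if (v > 262144)
		v = 262144;
	return v;
}

/* Rough shape of set_recommended_min_free_kbytes(): reserve two
 * pageblocks per zone plus MIGRATE_PCPTYPES^2 pageblocks per zone,
 * capped at 5% of lowmem. */
static long khugepaged_min_free_kbytes(long lowmem_kbytes)
{
	const long pageblock_kbytes = 2048;	/* assume 2 MiB pageblocks */
	const long nr_zones = 2;		/* assumed populated zones */
	const long migrate_pcptypes = 3;
	long recommended = pageblock_kbytes * nr_zones * 2;

	recommended += pageblock_kbytes * nr_zones *
		       migrate_pcptypes * migrate_pcptypes;
	if (recommended > lowmem_kbytes / 20)
		recommended = lowmem_kbytes / 20;
	return recommended;
}

int main(void)
{
	long lowmem_kbytes = 8L * 1024 * 1024;	/* assume 8 GiB lowmem */
	long base = base_min_free_kbytes(lowmem_kbytes);
	long thp = khugepaged_min_free_kbytes(lowmem_kbytes);

	printf("base formula:              %ld kB\n", base);
	printf("while khugepaged runs:     %ld kB\n", thp > base ? thp : base);
	printf("after stopping khugepaged: %ld kB (recalculated)\n", base);
	return 0;
}

With these assumed numbers, khugepaged raises min_free_kbytes from roughly
11585 kB to 45056 kB; without the patch the higher value would persist after
khugepaged is stopped, since nothing recomputed the base formula.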