Date: Tue, 21 Nov 2023 15:51:29 +0800
From: Zhiguo Jiang <justinjiang@...o.com>
To: Andrew Morton <akpm@...ux-foundation.org>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Cc: Matthew Wilcox <willy@...radead.org>, Johannes Weiner <hannes@...xchg.org>,
	opensource.kernel@...o.com, Zhiguo Jiang <justinjiang@...o.com>
Subject: [PATCH v2] mm: ALLOC_HIGHATOMIC flag allocation issue

Update comments and rename the variable highatomc_allocation to
highatomic.

Signed-off-by: Zhiguo Jiang <justinjiang@...o.com>
---
Changelog:
v1:
If alloc_flags contains ALLOC_HIGHATOMIC and the allocation order is
1, 2, 3 or 10 in rmqueue(), and the pages are allocated successfully
from the pcplist, a free pageblock is also moved from the allocated
migratetype freelist to the MIGRATE_HIGHATOMIC freelist, instead of
the request being served from the MIGRATE_HIGHATOMIC freelist first.
This results in a growing number of pages on the MIGRATE_HIGHATOMIC
freelist, while the other migratetype freelists shrink and their
allocations become more likely to fail.

Currently the ALLOC_HIGHATOMIC allocation sequence is:
  pcplist --> rmqueue_bulk() --> rmqueue_buddy() MIGRATE_HIGHATOMIC
  --> rmqueue_buddy() allocation migratetype

Since requesting pages from the pcplist is faster than from buddy,
the modified ALLOC_HIGHATOMIC allocation sequence is (see the toy
sketch after the patch):
  pcplist --> rmqueue_buddy() MIGRATE_HIGHATOMIC
  --> rmqueue_buddy() allocation migratetype

This patch solves the failure of allocating pages of other
migratetypes caused by excessive MIGRATE_HIGHATOMIC freelist
reservations.

In comparative testing with cat /proc/pagetypeinfo, the HighAtomic
freelist sizes (free-page counts at orders 0 through 10) are:

Without this patch:
Node 0, zone Normal, type HighAtomic 2369 771 138 15 0 0 0 0 0 0 0

With this patch:
Node 0, zone Normal, type HighAtomic  206  82   4  2 1 0 0 0 0 0 0

 mm/page_alloc.c | 31 +++++++++++++++----------------
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 49890d00cc3c..8e192c21e199 100755
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2851,9 +2851,9 @@ struct page *__rmqueue_pcplist(struct zone *zone, unsigned int order,
 			int alloced;
 
 			/*
-			 * If pcplist is empty and alloc_flags is with ALLOC_HIGHATOMIC,
-			 * it should alloc from buddy highatomic migrate freelist firstly
-			 * to ensure quick and successful allocation.
+			 * If pcplist is empty and alloc_flags contains
+			 * ALLOC_HIGHATOMIC, alloc from buddy highatomic
+			 * freelist first.
 			 */
 			if (alloc_flags & ALLOC_HIGHATOMIC)
 				goto out;
@@ -2927,7 +2927,7 @@ static inline
 struct page *rmqueue(struct zone *preferred_zone,
 			struct zone *zone, unsigned int order,
 			gfp_t gfp_flags, unsigned int alloc_flags,
-			int migratetype, bool *highatomc_allocation)
+			int migratetype, bool *highatomic)
 {
 	struct page *page;
 
@@ -2950,19 +2950,18 @@ struct page *rmqueue(struct zone *preferred_zone,
 	/*
 	 * The high-order atomic allocation pageblock reserved conditions:
 	 *
-	 * If the high-order atomic allocation page is alloced from pcplist,
+	 * If the high-order atomic allocation page is allocated from pcplist,
 	 * the highatomic pageblock does not need to be reserved, which can
-	 * void to migrate an increasing number of pages into buddy
-	 * MIGRATE_HIGHATOMIC freelist and lead to an increasing risk of
-	 * allocation failure on other buddy migrate freelists.
+	 * avoid migrating an increasing number of pages into buddy highatomic
+	 * freelist and leading to an increased risk of allocation failure on
+	 * other migrate freelists in buddy.
 	 *
-	 * If the high-order atomic allocation page is alloced from buddy
-	 * highatomic migrate freelist, regardless of whether the allocation
-	 * is successful or not, the highatomic pageblock can try to be
-	 * reserved.
+	 * If the high-order atomic allocation page is allocated from buddy
+	 * highatomic freelist, regardless of whether the allocation is
+	 * successful or not, the highatomic pageblock can try to be reserved.
 	 */
 	if (unlikely(alloc_flags & ALLOC_HIGHATOMIC))
-		*highatomc_allocation = true;
+		*highatomic = true;
 
 out:
 	/* Separate test+clear to avoid unnecessary atomics */
@@ -3234,7 +3233,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 	struct pglist_data *last_pgdat = NULL;
 	bool last_pgdat_dirty_ok = false;
 	bool no_fallback;
-	bool highatomc_allocation = false;
+	bool highatomic = false;
 
 retry:
 	/*
@@ -3366,7 +3365,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 
 try_this_zone:
 		page = rmqueue(ac->preferred_zoneref->zone, zone, order,
-				gfp_mask, alloc_flags, ac->migratetype, &highatomc_allocation);
+				gfp_mask, alloc_flags, ac->migratetype, &highatomic);
 		if (page) {
 			prep_new_page(page, order, gfp_mask, alloc_flags);
 
@@ -3374,7 +3373,7 @@ get_page_from_freelist(gfp_t gfp_mask, unsigned int order, int alloc_flags,
 			/*
 			 * If this is a high-order atomic allocation then check
 			 * if the pageblock should be reserved for the future
 			 */
-			if (unlikely(highatomc_allocation))
+			if (unlikely(highatomic))
 				reserve_highatomic_pageblock(page, zone);
 
 			return page;
-- 
2.39.0
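
For readers skimming the control flow, here is a minimal userspace
sketch of the ordering and reservation rule the patch encodes. This is
not kernel code: all names (toy_zone, toy_rmqueue_highatomic, SRC_*)
are invented for illustration, and the real logic lives in
__rmqueue_pcplist()/rmqueue() above. The point it models: the pcplist
is tried first and never triggers a reservation; the highatomic flag is
raised only once the request falls through to the buddy path, whether
or not that buddy allocation succeeds.

#include <stdbool.h>
#include <stdio.h>

/* Invented stand-ins for the sources a page can come from. */
enum source {
	SRC_NONE,
	SRC_PCPLIST,
	SRC_BUDDY_HIGHATOMIC,
	SRC_BUDDY_MIGRATETYPE,
};

struct toy_zone {
	int pcplist;           /* pages on the per-cpu list */
	int buddy_highatomic;  /* pages on the MIGRATE_HIGHATOMIC freelist */
	int buddy_migratetype; /* pages on the requested migratetype freelist */
};

static enum source toy_rmqueue_highatomic(struct toy_zone *z, bool *reserve)
{
	*reserve = false;
	if (z->pcplist > 0) {
		/* pcplist hit: no bulk refill, no pageblock reservation */
		z->pcplist--;
		return SRC_PCPLIST;
	}
	/*
	 * Buddy path: a highatomic pageblock reservation may be attempted,
	 * regardless of whether the allocation below succeeds.
	 */
	*reserve = true;
	if (z->buddy_highatomic > 0) {
		z->buddy_highatomic--;
		return SRC_BUDDY_HIGHATOMIC;
	}
	if (z->buddy_migratetype > 0) {
		z->buddy_migratetype--;
		return SRC_BUDDY_MIGRATETYPE;
	}
	return SRC_NONE;
}

int main(void)
{
	struct toy_zone z = { .pcplist = 1, .buddy_highatomic = 1,
			      .buddy_migratetype = 4 };
	bool reserve;

	toy_rmqueue_highatomic(&z, &reserve);
	printf("1st request: reserve=%d (served from pcplist)\n", reserve);

	/* pcplist now empty: buddy highatomic is tried next. */
	toy_rmqueue_highatomic(&z, &reserve);
	printf("2nd request: reserve=%d (served from buddy highatomic)\n",
	       reserve);
	return 0;
}

Built with any C compiler (e.g. cc toy.c && ./a.out), this prints
reserve=0 for the pcplist hit and reserve=1 once the request reaches
buddy, mirroring how *highatomic gates reserve_highatomic_pageblock()
in get_page_from_freelist().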