Message-ID: <20230830111332.7599-1-lecopzer.chen@mediatek.com>
Date:   Wed, 30 Aug 2023 19:13:33 +0800
From:   Lecopzer Chen <lecopzer.chen@...iatek.com>
To:     <akpm@...ux-foundation.org>, <mgorman@...hsingularity.net>
CC:     <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>,
        <vbabka@...e.cz>, <nsaenzju@...hat.com>, <yj.chiang@...iatek.com>,
        Lecopzer Chen <lecopzer.chen@...iatek.com>,
        Mark-pk Tsai <mark-pk.tsai@...iatek.com>,
        Joe Liu <joe.liu@...iatek.com>
Subject: [PATCH] mm: page_alloc: fix CMA pageblock being stolen in rmqueue fallback

commit 4b23a68f9536 ("mm/page_alloc: protect PCP lists with a
spinlock") falls back to freeing the page via free_one_page() when the
PCP trylock fails. This lets a MIGRATE_CMA page end up on another
migratetype's freelist, so the whole CMA pageblock can be stolen by
MIGRATE_UNMOVABLE during page allocation.

The PCP free path is fine because free_pcppages_bulk() always looks up
the migratetype again before freeing the page; the problem only occurs
when a CMA page is put onto another migratetype's freelist.
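
To make the failure mode concrete, below is a small stand-alone C model
of the path in question. This is not kernel code: the migratetype names
mirror the kernel's enum, the "remap anything at or above
MIGRATE_PCPTYPES to MIGRATE_MOVABLE" step is paraphrased from the
6.1-era free_unref_page(), and pages and freelists are reduced to plain
enums, so treat it as an illustration of the idea rather than the
actual implementation.

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy model only. The names mirror the kernel's migratetype enum, but
 * the real values and ordering in mm/ are richer than this.
 */
enum migratetype {
	MIGRATE_UNMOVABLE,
	MIGRATE_MOVABLE,
	MIGRATE_RECLAIMABLE,
	MIGRATE_PCPTYPES,	/* only types below this live on PCP lists */
	MIGRATE_CMA,		/* assumes CONFIG_CMA=y */
};

static const char *mt_name(enum migratetype mt)
{
	switch (mt) {
	case MIGRATE_UNMOVABLE:   return "UNMOVABLE";
	case MIGRATE_MOVABLE:     return "MOVABLE";
	case MIGRATE_RECLAIMABLE: return "RECLAIMABLE";
	case MIGRATE_CMA:         return "CMA";
	default:                  return "?";
	}
}

/* Which freelist the page lands on when the PCP trylock fails. */
static enum migratetype freelist_on_trylock_failure(enum migratetype page_mt,
						    bool patched)
{
	enum migratetype mt = page_mt;

	/* PCP lists only track the first MIGRATE_PCPTYPES types. */
	if (mt >= MIGRATE_PCPTYPES)
		mt = MIGRATE_MOVABLE;

	/* The fix below: restore MIGRATE_CMA before free_one_page(). */
	if (patched && page_mt == MIGRATE_CMA)
		mt = MIGRATE_CMA;

	return mt;
}

int main(void)
{
	printf("pre-patch:  CMA page freed to the %s freelist\n",
	       mt_name(freelist_on_trylock_failure(MIGRATE_CMA, false)));
	printf("post-patch: CMA page freed to the %s freelist\n",
	       mt_name(freelist_on_trylock_failure(MIGRATE_CMA, true)));
	return 0;
}

Once a CMA page sits on the MOVABLE freelist, the rmqueue fallback path
can treat its pageblock like any other and convert it to
MIGRATE_UNMOVABLE, which is the theft described above.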

Fixes: 4b23a68f9536 ("mm/page_alloc: protect PCP lists with a spinlock")
Reported-by: Joe Liu <joe.liu@...iatek.com>
Signed-off-by: Lecopzer Chen <lecopzer.chen@...iatek.com>
Cc: Mark-pk Tsai <mark-pk.tsai@...iatek.com>
Cc: Joe Liu <joe.liu@...iatek.com>
---
 mm/page_alloc.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 452459836b71..0ea88c031838 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2428,6 +2428,14 @@ void free_unref_page(struct page *page, unsigned int order)
 		free_unref_page_commit(zone, pcp, page, migratetype, order);
 		pcp_spin_unlock(pcp);
 	} else {
+#ifdef CONFIG_CMA
+		/*
+		 * CMA pages must go back to the CMA freelist, otherwise the
+		 * pageblock may be stolen by the rmqueue fallback flow.
+		 */
+		if (get_pcppage_migratetype(page) == MIGRATE_CMA)
+			migratetype = MIGRATE_CMA;
+#endif
 		free_one_page(zone, page, pfn, order, migratetype, FPI_NONE);
 	}
 	pcp_trylock_finish(UP_flags);
-- 
2.18.0
