Message-Id: <20250331002809.94758-1-alexei.starovoitov@gmail.com>
Date: Sun, 30 Mar 2025 17:28:09 -0700
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: bpf@...r.kernel.org,
	daniel@...earbox.net,
	andrii@...nel.org,
	martin.lau@...nel.org,
	akpm@...ux-foundation.org,
	peterz@...radead.org,
	vbabka@...e.cz,
	bigeasy@...utronix.de,
	rostedt@...dmis.org,
	shakeel.butt@...ux.dev,
	mhocko@...e.com,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH mm] mm/page_alloc: Avoid second trylock of zone->lock

From: Alexei Starovoitov <ast@...nel.org>

A failed spin_trylock followed by spin_lock causes an extra write
access to the lock's cache line. If the lock is contended, this may
cause unnecessary cache line bouncing, and it also executes a
redundant irq restore/save pair. Therefore, check alloc/fpi_flags
first and call either spin_trylock or spin_lock, but never both.
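
For reference, here is the before/after shape of the free_one_page()
case, as a condensed sketch (rmqueue_bulk() and rmqueue_buddy() follow
the same pattern with their own bail-out paths):

	/* Before: every caller pays for the trylock attempt. When it
	 * fails, the irq flags have already been saved and restored
	 * once, and spin_lock_irqsave() then touches zone->lock's
	 * cache line a second time.
	 */
	if (!spin_trylock_irqsave(&zone->lock, flags)) {
		if (unlikely(fpi_flags & FPI_TRYLOCK)) {
			add_page_to_zone_llist(zone, page, order);
			return;
		}
		spin_lock_irqsave(&zone->lock, flags);
	}

	/* After: branch on fpi_flags first, so each path performs
	 * exactly one lock operation on zone->lock.
	 */
	if (unlikely(fpi_flags & FPI_TRYLOCK)) {
		if (!spin_trylock_irqsave(&zone->lock, flags)) {
			add_page_to_zone_llist(zone, page, order);
			return;
		}
	} else {
		spin_lock_irqsave(&zone->lock, flags);
	}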

Suggested-by: Linus Torvalds <torvalds@...ux-foundation.org>
Fixes: 97769a53f117 ("mm, bpf: Introduce try_alloc_pages() for opportunistic page allocation")
Signed-off-by: Alexei Starovoitov <ast@...nel.org>
---
 mm/page_alloc.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e3ea5bf5c459..ffbb5678bc2f 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1268,11 +1268,12 @@ static void free_one_page(struct zone *zone, struct page *page,
 	struct llist_head *llhead;
 	unsigned long flags;
 
-	if (!spin_trylock_irqsave(&zone->lock, flags)) {
-		if (unlikely(fpi_flags & FPI_TRYLOCK)) {
+	if (unlikely(fpi_flags & FPI_TRYLOCK)) {
+		if (!spin_trylock_irqsave(&zone->lock, flags)) {
 			add_page_to_zone_llist(zone, page, order);
 			return;
 		}
+	} else {
 		spin_lock_irqsave(&zone->lock, flags);
 	}
 
@@ -2341,9 +2342,10 @@ static int rmqueue_bulk(struct zone *zone, unsigned int order,
 	unsigned long flags;
 	int i;
 
-	if (!spin_trylock_irqsave(&zone->lock, flags)) {
-		if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+	if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
+		if (!spin_trylock_irqsave(&zone->lock, flags))
 			return 0;
+	} else {
 		spin_lock_irqsave(&zone->lock, flags);
 	}
 	for (i = 0; i < count; ++i) {
@@ -2964,9 +2966,10 @@ struct page *rmqueue_buddy(struct zone *preferred_zone, struct zone *zone,
 
 	do {
 		page = NULL;
-		if (!spin_trylock_irqsave(&zone->lock, flags)) {
-			if (unlikely(alloc_flags & ALLOC_TRYLOCK))
+		if (unlikely(alloc_flags & ALLOC_TRYLOCK)) {
+			if (!spin_trylock_irqsave(&zone->lock, flags))
 				return NULL;
+		} else {
 			spin_lock_irqsave(&zone->lock, flags);
 		}
 		if (alloc_flags & ALLOC_HIGHATOMIC)
-- 
2.47.1

