Message-ID: <20220708144406.GJ27531@techsingularity.net>
Date: Fri, 8 Jul 2022 15:44:06 +0100
From: Mel Gorman <mgorman@...hsingularity.net>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Nicolas Saenz Julienne <nsaenzju@...hat.com>,
Marcelo Tosatti <mtosatti@...hat.com>,
Vlastimil Babka <vbabka@...e.cz>,
Michal Hocko <mhocko@...nel.org>,
Hugh Dickins <hughd@...gle.com>, Yu Zhao <yuzhao@...gle.com>,
Marek Szyprowski <m.szyprowski@...sung.com>,
LKML <linux-kernel@...r.kernel.org>,
Linux-MM <linux-mm@...ck.org>
Subject: [PATCH] mm/page_alloc: replace local_lock with normal spinlock -fix -fix

pcpu_spin_unlock and pcpu_spin_unlock_irqrestore both unlock
pcp->lock and only then re-enable preemption. This lacks symmetry
with the pcpu_spin lock helpers and differs from how local_unlock_*
is implemented. While this is harmless, it is unnecessary, and it is
generally better to unwind lock and preemption state in the reverse
of the order in which they were acquired.

This is a fix on top of the mm-unstable patch
mm-page_alloc-replace-local_lock-with-normal-spinlock-fix.patch
Signed-off-by: Mel Gorman <mgorman@...hsingularity.net>
---
mm/page_alloc.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 934d1b5a5449..d0141e51e613 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -192,14 +192,14 @@ static DEFINE_MUTEX(pcp_batch_high_lock);
 
 #define pcpu_spin_unlock(member, ptr)					\
 ({									\
-	spin_unlock(&ptr->member);					\
 	pcpu_task_unpin();						\
+	spin_unlock(&ptr->member);					\
 })
 
 #define pcpu_spin_unlock_irqrestore(member, ptr, flags)		\
 ({									\
-	spin_unlock_irqrestore(&ptr->member, flags);			\
 	pcpu_task_unpin();						\
+	spin_unlock_irqrestore(&ptr->member, flags);			\
 })
 
 /* struct per_cpu_pages specific helpers. */