Message-Id: <20250415024532.26632-26-songmuchun@bytedance.com>
Date: Tue, 15 Apr 2025 10:45:29 +0800
From: Muchun Song <songmuchun@...edance.com>
To: hannes@...xchg.org,
	mhocko@...nel.org,
	roman.gushchin@...ux.dev,
	shakeel.butt@...ux.dev,
	muchun.song@...ux.dev,
	akpm@...ux-foundation.org,
	david@...morbit.com,
	zhengqi.arch@...edance.com,
	yosry.ahmed@...ux.dev,
	nphamcs@...il.com,
	chengming.zhou@...ux.dev
Cc: linux-kernel@...r.kernel.org,
	cgroups@...r.kernel.org,
	linux-mm@...ck.org,
	hamzamahfooz@...ux.microsoft.com,
	apais@...ux.microsoft.com,
	Muchun Song <songmuchun@...edance.com>
Subject: [PATCH RFC 25/28] mm: thp: prepare for reparenting LRU pages for split queue lock

Analogous to the lruvec lock, adopt the same strategy to keep the split
queue lock safe while LRU folios are reparented: enter an RCU read-side
critical section before resolving the folio's split queue, and after
taking the lock recheck that the queue's memcg still matches the
folio's, retrying the lookup if the folio was reparented in the
meantime.
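
For context, the locking helpers after this patch follow the same shape
as the lruvec-lock variant from earlier in this series. A simplified
sketch of that pattern (illustrative only, not the literal code from
the lruvec patch):

    /*
     * RCU-protected lock-then-recheck pattern. The RCU read lock keeps
     * the memcg (and the structure hanging off it) alive across the
     * lookup; the recheck under the spinlock closes the window where
     * the folio is reparented between the lookup and spin_lock().
     */
    struct lruvec *folio_lruvec_lock(struct folio *folio)
    {
    	struct lruvec *lruvec;

    	rcu_read_lock();
    retry:
    	lruvec = folio_lruvec(folio);
    	spin_lock(&lruvec->lru_lock);
    	if (unlikely(lruvec_memcg(lruvec) != folio_memcg(folio))) {
    		spin_unlock(&lruvec->lru_lock);
    		goto retry;
    	}

    	return lruvec;
    }

The matching unlock helper is what drops the RCU read lock, so the
folio's memcg binding stays stable for as long as the lock is held.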

Signed-off-by: Muchun Song <songmuchun@...edance.com>
---
 mm/huge_memory.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d2bc943a40e8..813334994f84 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1100,8 +1100,14 @@ static struct deferred_split *folio_split_queue_lock(struct folio *folio)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock(&queue->split_queue_lock);
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock(&queue->split_queue_lock);
+		goto retry;
+	}
 
 	return queue;
 }
@@ -1111,8 +1117,14 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
 {
 	struct deferred_split *queue;
 
+	rcu_read_lock();
+retry:
 	queue = folio_split_queue(folio);
 	spin_lock_irqsave(&queue->split_queue_lock, *flags);
+	if (unlikely(folio_split_queue_memcg(folio, queue) != folio_memcg(folio))) {
+		spin_unlock_irqrestore(&queue->split_queue_lock, *flags);
+		goto retry;
+	}
 
 	return queue;
 }
@@ -1120,12 +1132,14 @@ folio_split_queue_lock_irqsave(struct folio *folio, unsigned long *flags)
 static inline void split_queue_unlock(struct deferred_split *queue)
 {
 	spin_unlock(&queue->split_queue_lock);
+	rcu_read_unlock();
 }
 
 static inline void split_queue_unlock_irqrestore(struct deferred_split *queue,
 						 unsigned long flags)
 {
 	spin_unlock_irqrestore(&queue->split_queue_lock, flags);
+	rcu_read_unlock();
 }
 
 static inline bool is_transparent_hugepage(const struct folio *folio)
-- 
2.20.1

