lists.openwall.net - Open Source and information security mailing list archives
Message-Id: <20250504061923.66914-1-00107082@163.com>
Date: Sun,  4 May 2025 14:19:23 +0800
From: David Wang <00107082@....com>
To: akpm@...ux-foundation.org,
	vbabka@...e.cz,
	surenb@...gle.com,
	mhocko@...e.com,
	jackmanb@...gle.com,
	hannes@...xchg.org,
	ziy@...dia.com
Cc: linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	David Wang <00107082@....com>
Subject: [PATCH] mm/codetag: sub in advance when freeing non-compound high-order pages

When a page is non-compound, page[0] could be released by another
thread right after put_page_testzero() fails in the current thread;
pgalloc_tag_sub_pages() would then manipulate an invalid page while
accounting for the remaining pages:

[timeline]   [thread1]                     [thread2]
  |          alloc_page non-compound
  V
  |                                        get_page, ref counter inc
  V
  |          in ___free_pages
  |          put_page_testzero fails
  V
  |                                        put_page, page released
  V
  |          in ___free_pages,
  |          pgalloc_tag_sub_pages
  |          manipulates an invalid page
  V

Move the tag page accounting ahead, and only account for the remaining
pages of non-compound allocations with non-zero order.

Signed-off-by: David Wang <00107082@....com>
---
 mm/page_alloc.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5669baf2a6fe..c42e41ed35fe 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1163,12 +1163,25 @@ static inline void pgalloc_tag_sub_pages(struct page *page, unsigned int nr)
 		this_cpu_sub(tag->counters->bytes, PAGE_SIZE * nr);
 }
 
+static inline void pgalloc_tag_add_pages(struct page *page, unsigned int nr)
+{
+	struct alloc_tag *tag;
+
+	if (!mem_alloc_profiling_enabled())
+		return;
+
+	tag = __pgalloc_tag_get(page);
+	if (tag)
+		this_cpu_add(tag->counters->bytes, PAGE_SIZE * nr);
+}
+
 #else /* CONFIG_MEM_ALLOC_PROFILING */
 
 static inline void pgalloc_tag_add(struct page *page, struct task_struct *task,
 				   unsigned int nr) {}
 static inline void pgalloc_tag_sub(struct page *page, unsigned int nr) {}
 static inline void pgalloc_tag_sub_pages(struct page *page, unsigned int nr) {}
+static inline void pgalloc_tag_add_pages(struct page *page, unsigned int nr) {}
 
 #endif /* CONFIG_MEM_ALLOC_PROFILING */
 
@@ -5065,11 +5078,28 @@ static void ___free_pages(struct page *page, unsigned int order,
 {
 	/* get PageHead before we drop reference */
 	int head = PageHead(page);
+	/*
+	 * For the remaining pages other than the first page of
+	 * a non-compound allocation, decrease the tag page
+	 * count in advance, in case the first page is released
+	 * by another thread between our put_page_testzero and
+	 * any accounting afterwards.
+	 */
+	unsigned int remaining_tag_pages = 0;
 
-	if (put_page_testzero(page))
+	if (order > 0 && !head) {
+		if (unlikely(page_ref_count(page) > 1)) {
+			remaining_tag_pages = (1 << order) - 1;
+			pgalloc_tag_sub_pages(page, remaining_tag_pages);
+		}
+	}
+
+	if (put_page_testzero(page)) {
+		/* remaining pages need no special treatment; add them back */
+		if (unlikely(remaining_tag_pages > 0))
+			pgalloc_tag_add_pages(page, remaining_tag_pages);
 		__free_frozen_pages(page, order, fpi_flags);
-	else if (!head) {
-		pgalloc_tag_sub_pages(page, (1 << order) - 1);
+	} else if (!head) {
 		while (order-- > 0)
 			__free_frozen_pages(page + (1 << order), order,
 					    fpi_flags);
-- 
2.39.2

