Message-ID: <20250617091044.1062-1-justinjiang@vivo.com>
Date: Tue, 17 Jun 2025 17:10:44 +0800
From: Zhiguo Jiang <justinjiang@...o.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
	linux-mm@...ck.org,
	linux-kernel@...r.kernel.org
Cc: opensource.kernel@...o.com,
	Zhiguo Jiang <justinjiang@...o.com>
Subject: [PATCH] mm: rt-threads retry mempool allocation without delay

Real-time (rt) threads can be delayed for up to 5 seconds in
mempool_alloc(), which seriously hurts the responsiveness of
foreground applications and causes user-visible lag.

Real-time (rt) threads should instead retry the mempool allocation
without delay, so that they obtain the required memory resources as
soon as possible.

The following example shows the real-time QoSCoreThread (prio=98)
blocking for over 5 seconds in mempool_alloc, seriously affecting the
user experience.

Running process:	system_server (pid 2245)
Running thread:	QoSCoreThread 2529
State:	Uninterruptible Sleep - Block I/O
Start:	12,859.616 ms
Systrace Time:	100,063.057104
Duration:	5,152.591 ms
On CPU:
Running instead:	kswapd0
Args:	{kernel callsite when blocked:: "mempool_alloc+0x130/0x1e8"}

   QoSCoreThread-2529  (   2245) [000] d..2. 100063.057104: sched_switch:
   prev_comm=QoSCoreThread prev_pid=2529 prev_prio=000255001000098
   prev_state=D ==> next_comm=kswapd0 next_pid=107
   next_prio=000063310000120
 [GT]ColdPool#14-23937 (  23854) [000] dNs2. 100068.209675: sched_waking:
 comm=QoSCoreThread pid=2529 prio=98 target_cpu=000
 [GT]ColdPool#14-23937 (  23854) [000] dNs2. 100068.209676:
 sched_blocked_reason: pid=2529 iowait=1 caller=mempool_alloc+0x130/0x1e8
 [GT]ColdPool#14-23937 (  23854) [000] dNs3. 100068.209695: sched_wakeup:
 comm=QoSCoreThread pid=2529 prio=98 target_cpu=000
 [GT]ColdPool#14-23937 (  23854) [000] d..2. 100068.209732: sched_switch:
 prev_comm=[GT]ColdPool#14 prev_pid=23937 prev_prio=000003010342130
 prev_state=R ==> next_comm=QoSCoreThread next_pid=2529
 next_prio=000255131000098

Signed-off-by: Zhiguo Jiang <justinjiang@...o.com>
---
 mm/mempool.c | 13 ++++++++++++-
 1 file changed, 12 insertions(+), 1 deletion(-)

diff --git a/mm/mempool.c b/mm/mempool.c
--- a/mm/mempool.c
+++ b/mm/mempool.c
@@ -18,6 +18,7 @@
 #include <linux/export.h>
 #include <linux/mempool.h>
 #include <linux/writeback.h>
+#include <linux/sched/prio.h>
 #include "slab.h"
 
 #ifdef CONFIG_SLUB_DEBUG_ON
@@ -386,7 +387,7 @@ void *mempool_alloc_noprof(mempool_t *pool, gfp_t gfp_mask)
 	void *element;
 	unsigned long flags;
 	wait_queue_entry_t wait;
-	gfp_t gfp_temp;
+	gfp_t gfp_temp, gfp_src = gfp_mask;
 
 	VM_WARN_ON_ONCE(gfp_mask & __GFP_ZERO);
 	might_alloc(gfp_mask);
@@ -433,6 +434,16 @@ void *mempool_alloc_noprof(mempool_t *pool, gfp_t gfp_mask)
 		return NULL;
 	}
 
+	/*
+	 * Let an rt-thread retry the allocation (including direct
+	 * reclaim) immediately, unless the caller set __GFP_NORETRY.
+	 */
+	if (!(gfp_src & __GFP_NORETRY) && current->prio < MAX_RT_PRIO) {
+		spin_unlock_irqrestore(&pool->lock, flags);
+		gfp_temp = gfp_src;
+		goto repeat_alloc;
+	}
+
 	/* Let's wait for someone else to return an element to @pool */
 	init_wait(&wait);
 	prepare_to_wait(&pool->wait, &wait, TASK_UNINTERRUPTIBLE);
-- 
2.48.1
