Message-Id: <1452473001-10518-1-git-send-email-l@dorileo.org>
Date: Sun, 10 Jan 2016 22:43:21 -0200
From: l@...ileo.org
To: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: LKML <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>, rostedt@...dmis.org,
John Kacur <jkacur@...hat.com>, linux-mm@...ck.org,
Leandro Dorileo <leandro.maciel.dorileo@...el.com>
Subject: [RFC][4.1.15-rt17 PATCH] mm: swap: don't use workqueue for lru drain with PREEMPT_RT_FULL

From: Leandro Dorileo <leandro.maciel.dorileo@...el.com>

Running an SMP system with an -rt kernel (CONFIG_PREEMPT_RT_FULL) under
heavy CPU load, a process that calls mlockall() with the MCL_CURRENT
flag will block indefinitely - until the process generating the heavy
CPU load finishes (that process runs with a sched priority > 0).
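
The problem can be reproduced with something like the sketch below.
This is illustrative only, not the exact test case used here: the CPU
number and priority are arbitrary, it needs root, and it assumes RT
throttling is disabled so the busy loop really monopolises its CPU.

/*
 * Illustrative reproducer sketch (not the original test case): a
 * SCHED_FIFO busy loop pinned to CPU 1, plus an ordinary task that
 * calls mlockall(MCL_CURRENT).
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	if (fork() == 0) {
		cpu_set_t set;
		struct sched_param sp = { .sched_priority = 1 };

		CPU_ZERO(&set);
		CPU_SET(1, &set);
		sched_setaffinity(0, sizeof(set), &set);
		sched_setscheduler(0, SCHED_FIFO, &sp);
		for (;;)		/* rt cpu hog, never yields */
			;
	}

	sleep(1);
	/* blocks in lru_add_drain_all() until the hog exits */
	if (mlockall(MCL_CURRENT))
		perror("mlockall");
	return 0;
}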

Since the MCL_CURRENT flag is passed to mlockall(), the kernel tries to
drain the lru pagevecs on all CPUs. lru_add_drain_all() schedules a
work item to drain the lru on each online CPU and then flushes each
work item, i.e. it waits until the work has finished.
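
For reference, the existing (non-RT) path looks roughly like the
simplified sketch below; the real 4.1-era code also checks the per-cpu
pagevecs and only queues work on CPUs that actually have something to
drain.

/* Simplified sketch of the existing workqueue-based implementation. */
void lru_add_drain_all(void)
{
	static DEFINE_MUTEX(lock);
	int cpu;

	mutex_lock(&lock);
	get_online_cpus();

	for_each_online_cpu(cpu) {
		struct work_struct *work = &per_cpu(lru_add_drain_work, cpu);

		INIT_WORK(work, lru_add_drain_per_cpu);
		/* queued on the target cpu's kworker ... */
		schedule_work_on(cpu, work);
	}

	for_each_online_cpu(cpu)
		/*
		 * ... and waited for here; this flush_work() is what
		 * never completes when an rt task monopolises that cpu.
		 */
		flush_work(&per_cpu(lru_add_drain_work, cpu));

	put_online_cpus();
	mutex_unlock(&lock);
}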

The drain on the heavily loaded CPU will never finish - as mentioned
above - until the process generating the heavy CPU load exits: the
drain work never gets scheduled on that CPU, even though the calling
process does.

This patch adds a separate lru_add_drain_all() implementation for this
configuration, which performs the lru drain synchronously on behalf of
the calling process.

Signed-off-by: Leandro Dorileo <leandro.maciel.dorileo@...el.com>
---
 mm/swap.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/mm/swap.c b/mm/swap.c
index 1785ac6..df807b4 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -864,6 +864,23 @@ void lru_add_drain(void)
 	local_unlock_cpu(swapvec_lock);
 }
 
+#ifdef CONFIG_PREEMPT_RT_FULL
+void lru_add_drain_all(void)
+{
+	static DEFINE_MUTEX(lock);
+	int cpu;
+
+	mutex_lock(&lock);
+	get_online_cpus();
+
+	for_each_online_cpu(cpu) {
+		smp_call_function_single(cpu, lru_add_drain, NULL, 1);
+	}
+
+	put_online_cpus();
+	mutex_unlock(&lock);
+}
+#else
 static void lru_add_drain_per_cpu(struct work_struct *dummy)
 {
 	lru_add_drain();
@@ -900,6 +917,7 @@ void lru_add_drain_all(void)
 	put_online_cpus();
 	mutex_unlock(&lock);
 }
+#endif
 
 /**
  * release_pages - batched page_cache_release()
--
2.7.0