Message-Id: <20211208181714.880312-1-jsavitz@redhat.com>
Date: Wed, 8 Dec 2021 13:17:14 -0500
From: Joel Savitz <jsavitz@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: Joel Savitz <jsavitz@...hat.com>, Waiman Long <longman@...hat.com>,
linux-mm@...ck.org, Nico Pache <npache@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
Darren Hart <dvhart@...radead.org>,
Davidlohr Bueso <dave@...olabs.net>,
André Almeida <andrealmeid@...labora.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Michal Hocko <mhocko@...e.com>
Subject: [PATCH v2] mm/oom_kill: wake futex waiters before annihilating victim shared mutex

In the case that two or more processes share a futex located within
a shared mmapped region, such as a process that shares a lock with a
number of its child processes, we have observed that when the process
holding the lock is killed by the OOM killer, at least one waiter is
never woken up and simply continues to wait indefinitely.

With pthreads, this can be observed by inspecting the __owner field
of the pthread_mutex_t structure within a waiting process, for
example with gdb.
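
For reference, inspecting the owner from gdb looks roughly like the
following (the __data.__owner path reflects glibc's internal layout
and may differ between glibc versions; "lock" is a hypothetical
pointer to the shared pthread_mutex_t):

  (gdb) print lock->__data.__owner
  $1 = <TID of the task that last acquired the lock>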

We confirm reproduction of this issue by attaching to a waiting
process of a test program, inspecting the contents of its
pthread_mutex_t and noting the value of the __owner field, and then
checking dmesg to verify that the owner has already been oom-killed.
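
For illustration only, a minimal reproducer along these lines might
look like the sketch below. This is not the exact test program
referred to above; it assumes a process-shared *robust* mutex, so
that the kernel is expected to wake waiters (returning EOWNERDEAD)
when the owner dies, and it assumes the OOM killer will select the
allocating child. Build with something like "gcc -pthread repro.c".

/* Hypothetical reproducer sketch; not part of this patch. */
#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutex_t *lock;
    pid_t pid;
    int ret;

    /* Place the mutex in memory shared between parent and child. */
    lock = mmap(NULL, sizeof(*lock), PROT_READ | PROT_WRITE,
                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (lock == MAP_FAILED)
        return 1;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
    pthread_mutex_init(lock, &attr);

    pid = fork();
    if (pid < 0)
        return 1;
    if (pid == 0) {
        /* Child: take the lock, then allocate and touch memory until
         * the OOM killer (hopefully) selects this task. */
        pthread_mutex_lock(lock);
        for (;;) {
            char *p = malloc(1 << 20);
            if (!p)
                break;
            memset(p, 1, 1 << 20);
        }
        pause();
    }

    sleep(1);   /* crude: give the child time to take the lock */

    /*
     * Expected: once the child has been oom-killed, this wakes up and
     * returns EOWNERDEAD.  With the bug described above, the parent
     * may instead remain blocked here indefinitely.
     */
    ret = pthread_mutex_lock(lock);
    if (ret == EOWNERDEAD) {
        printf("owner died, recovering lock\n");
        pthread_mutex_consistent(lock);
    }
    pthread_mutex_unlock(lock);
    waitpid(pid, NULL, 0);
    return 0;
}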

This issue can be tricky to reproduce, but with the modifications in
this small patch I have been unable to reproduce it at all. There may
be additional considerations that I have not taken into account in
this patch, and I welcome any comments and criticism.

Changes from v1:
- add comments before calls to futex_exit_release()
Co-developed-by: Nico Pache <npache@...hat.com>
Signed-off-by: Nico Pache <npache@...hat.com>
Signed-off-by: Joel Savitz <jsavitz@...hat.com>
---
mm/oom_kill.c | 12 ++++++++++++
1 file changed, 12 insertions(+)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 1ddabefcfb5a..884a5f15fd06 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -44,6 +44,7 @@
#include <linux/kthread.h>
#include <linux/init.h>
#include <linux/mmu_notifier.h>
+#include <linux/futex.h>
#include <asm/tlb.h>
#include "internal.h"
@@ -885,6 +886,11 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
count_vm_event(OOM_KILL);
memcg_memory_event_mm(mm, MEMCG_OOM_KILL);
+ /*
+ * We call futex_exit_release() on the victim task to ensure any waiters on any
+ * process-shared futexes held by the victim task are woken up.
+ */
+ futex_exit_release(victim);
/*
* We should send SIGKILL before granting access to memory reserves
* in order to prevent the OOM victim from depleting the memory
@@ -930,6 +936,12 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
*/
if (unlikely(p->flags & PF_KTHREAD))
continue;
+ /*
+ * We call futex_exit_release() on any task p sharing the
+ * victim->mm to ensure any waiters on any
+ * process-shared futexes held by task p are woken up.
+ */
+ futex_exit_release(p);
do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
}
rcu_read_unlock();
--
2.27.0