Message-ID: <ce63e509-dedf-ce00-cd12-2c67a3e650ba@redhat.com>
Date:   Tue, 7 Dec 2021 19:46:57 -0500
From:   Nico Pache <npache@...hat.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        Joel Savitz <jsavitz@...hat.com>
Cc:     linux-kernel@...r.kernel.org, Waiman Long <longman@...hat.com>,
        linux-mm@...ck.org, Peter Zijlstra <peterz@...radead.org>,
        Michal Hocko <mhocko@...e.com>
Subject: Re: [PATCH] mm/oom_kill: wake futex waiters before annihilating
 victim shared mutex



On 12/7/21 18:47, Andrew Morton wrote:
> (cc's added)
> 
> On Tue,  7 Dec 2021 16:49:02 -0500 Joel Savitz <jsavitz@...hat.com> wrote:
> 
>> In the case that two or more processes share a futex located within
>> a shared mmapped region, such as a process that shares a lock between
>> itself and a number of child processes, we have observed that when
>> a process holding the lock is oom killed, at least one waiter is never
>> alerted to this new development and simply continues to wait.
> 
> Well dang.  Is there any way of killing off that waiting process, or do
> we have a resource leak here?

If I understood your question correctly, there is a way to recover the system by
killing the process that is utilizing the futex; however, the purpose of robust
futexes is to avoid having to do this.
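
For anyone following along, the behaviour we expect from robust futexes looks
roughly like this from userspace (a minimal sketch under glibc, not the exact
code from our reproducer; error handling trimmed):

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Put a robust, process-shared mutex in anonymous shared memory so that a
 * parent and its children all operate on the same futex word. */
static pthread_mutex_t *make_robust_shared_mutex(void)
{
	pthread_mutex_t *m = mmap(NULL, sizeof(*m), PROT_READ | PROT_WRITE,
				  MAP_SHARED | MAP_ANONYMOUS, -1, 0);
	if (m == MAP_FAILED) {
		perror("mmap");
		exit(1);
	}

	pthread_mutexattr_t attr;
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
	pthread_mutexattr_setrobust(&attr, PTHREAD_MUTEX_ROBUST);
	pthread_mutex_init(m, &attr);
	pthread_mutexattr_destroy(&attr);
	return m;
}

/* A waiter that can recover when the lock holder dies. */
static void lock_with_recovery(pthread_mutex_t *m)
{
	int rc = pthread_mutex_lock(m);

	if (rc == EOWNERDEAD) {
		/* The previous owner died while holding the lock and the
		 * kernel's robust-futex exit path handed it to us.  Repair
		 * any protected state, then mark the mutex usable again. */
		pthread_mutex_consistent(m);
	}
	/* ... critical section ... */
	pthread_mutex_unlock(m);
}

When that EOWNERDEAD hand-off happens, nobody has to hunt down and kill the
surviving waiters; that is exactly the recovery path that is not happening in
the case we hit.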

From my work with Joel on this, it seems like a race is occurring between the
oom_reaper and the exit signal sent to the OOM-killed process. By calling
futex_exit_release() before these signals are sent, we avoid this.
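
The scenario itself is easy to sketch, reusing make_robust_shared_mutex()
from above (again just an illustration of the setup, not Joel's actual test
program; raise(SIGKILL) only stands in for the OOM kill, since the hang needs
the real OOM killer and oom_reaper to be involved):

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

pthread_mutex_t *make_robust_shared_mutex(void);	/* previous sketch */

int main(void)
{
	pthread_mutex_t *m = make_robust_shared_mutex();
	pid_t pid = fork();

	if (pid == 0) {
		/* Child: take the lock, then die abruptly while holding it,
		 * standing in for the task the OOM killer picks as victim. */
		pthread_mutex_lock(m);
		raise(SIGKILL);
	}

	sleep(1);	/* crude: give the child time to grab the lock */

	/* Parent: when the robust-futex exit path runs as intended, this
	 * returns EOWNERDEAD once the child is gone.  In the race described
	 * above, the waiter is never woken and this call blocks forever. */
	int rc = pthread_mutex_lock(m);
	if (rc == EOWNERDEAD) {
		pthread_mutex_consistent(m);
		printf("owner died, lock recovered\n");
	}
	pthread_mutex_unlock(m);
	waitpid(pid, NULL, 0);
	return 0;
}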

> 
>> This is visible via pthreads by checking the __owner field of the
>> pthread_mutex_t structure within a waiting process, perhaps with gdb.
>>
>> We identify reproduction of this issue by checking a waiting process of
>> a test program and viewing the contents of the pthread_mutex_t, taking note
>> of the value in the owner field, and then checking dmesg to see if the
>> owner has already been killed.
>>
>> This issue can be tricky to reproduce, but with the modifications of
>> this small patch, I have found it to be impossible to reproduce. There
>> may be additional considerations that I have not taken into account in
>> this patch and I welcome any comments and criticism.
> 
>> Co-developed-by: Nico Pache <npache@...hat.com>
>> Signed-off-by: Nico Pache <npache@...hat.com>
>> Signed-off-by: Joel Savitz <jsavitz@...hat.com>
>> ---
>>  mm/oom_kill.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
>> index 1ddabefcfb5a..fa58bd10a0df 100644
>> --- a/mm/oom_kill.c
>> +++ b/mm/oom_kill.c
>> @@ -44,6 +44,7 @@
>>  #include <linux/kthread.h>
>>  #include <linux/init.h>
>>  #include <linux/mmu_notifier.h>
>> +#include <linux/futex.h>
>>  
>>  #include <asm/tlb.h>
>>  #include "internal.h"
>> @@ -890,6 +891,7 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
>>  	 * in order to prevent the OOM victim from depleting the memory
>>  	 * reserves from the user space under its control.
>>  	 */
>> +	futex_exit_release(victim);
>>  	do_send_sig_info(SIGKILL, SEND_SIG_PRIV, victim, PIDTYPE_TGID);
>>  	mark_oom_victim(victim);
>>  	pr_err("%s: Killed process %d (%s) total-vm:%lukB, anon-rss:%lukB, file-rss:%lukB, shmem-rss:%lukB, UID:%u pgtables:%lukB oom_score_adj:%hd\n",
>> @@ -930,6 +932,7 @@ static void __oom_kill_process(struct task_struct *victim, const char *message)
>>  		 */
>>  		if (unlikely(p->flags & PF_KTHREAD))
>>  			continue;
>> +		futex_exit_release(p);
>>  		do_send_sig_info(SIGKILL, SEND_SIG_PRIV, p, PIDTYPE_TGID);
>>  	}
>>  	rcu_read_unlock();
>> -- 
>> 2.33.1
> 
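
As for the __owner check mentioned in the changelog: besides gdb, any process
that maps the same region can peek at it directly.  This relies on glibc's
internal pthread_mutex_t layout, so it is strictly a debugging aid, and the
kill(tid, 0) liveness probe below only works here because each lock owner in
our case is a separate process, i.e. its TID equals its PID:

#include <errno.h>
#include <pthread.h>
#include <signal.h>
#include <stdio.h>

/* Debug-only: report which TID glibc recorded as the owner and whether that
 * task still exists.  __data.__owner is glibc-internal, not a stable ABI. */
static void dump_owner(pthread_mutex_t *m)
{
	int owner = m->__data.__owner;

	if (owner == 0) {
		printf("mutex is unlocked\n");
	} else if (kill(owner, 0) == 0 || errno == EPERM) {
		printf("owner tid %d still exists\n", owner);
	} else {
		/* Owner is gone (e.g. already OOM-killed).  If a waiter is
		 * still blocked on the mutex at this point, that matches the
		 * hang described above; cross-check dmesg for the matching
		 * "Killed process" line. */
		printf("owner tid %d no longer exists\n", owner);
	}
}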
