[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <YeUuWcNArnDhOjFY@dhcp22.suse.cz>
Date: Mon, 17 Jan 2022 09:52:41 +0100
From: Michal Hocko <mhocko@...e.com>
To: Nico Pache <npache@...hat.com>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org,
akpm@...ux-foundation.org, jsavitz@...hat.com,
peterz@...radead.org, tglx@...utronix.de, mingo@...hat.com,
dvhart@...radead.org, dave@...olabs.net, andrealmeid@...labora.com,
longman@...hat.com
Subject: Re: [PATCH v3] mm/oom: do not oom reap task with an unresolved
robust futex
On Fri 14-01-22 13:01:35, Nico Pache wrote:
> In the case that two or more processes share a futex located within
> a shared mmaped region, such as a process that shares a lock between
> itself and child processes, we have observed that when a process holding
> the lock is oom killed, at least one waiter is never alerted to this new
> development and simply continues to wait.
>
> This is visible via pthreads by checking the __owner field of the
> pthread_mutex_t structure within a waiting process, perhaps with gdb.
>
> We identify reproduction of this issue by checking a waiting process of
> a test program and viewing the contents of the pthread_mutex_t, taking note
> of the value in the owner field, and then checking dmesg to see if the
> owner has already been killed.
I believe we really need to find out why the original holder of the
futex is not woken up to release the lock when exiting.
> As mentioned by Michal in his patchset introducing the oom reaper,
> commit aac4536355496 ("mm, oom: introduce oom reaper"), the purpose of the
> oom reaper is to try and free memory more quickly; however, In the case
> that a robust futex is being used, we want to avoid utilizing the
> concurrent oom reaper. This is due to a race that can occur between the
> SIGKILL handling the robust futex, and the oom reaper freeing the memory
> needed to maintain the robust list.
OOM reaper is only unmapping private memory. It doesn't touch a shared
mappings. So how could it interfere?
> In the case that the oom victim is utilizing a robust futex, and the
> SIGKILL has not yet handled the futex death, the tsk->robust_list should
> be non-NULL. This issue can be tricky to reproduce, but with the
> modifications of this patch, we have found it to be impossible to
> reproduce.
We really need a deeper analysis of the udnerlying problem because right
now I do not really see why the oom reaper should interfere with shared
futex.
> Add a check for tsk->robust_list is non-NULL in wake_oom_reaper() to return
> early and prevent waking the oom reaper.
>
> Reproducer: https://gitlab.com/jsavitz/oom_futex_reproducer
>
> Co-developed-by: Joel Savitz <jsavitz@...hat.com>
> Signed-off-by: Joel Savitz <jsavitz@...hat.com>
> Signed-off-by: Nico Pache <npache@...hat.com>
> ---
> mm/oom_kill.c | 15 +++++++++++++++
> 1 file changed, 15 insertions(+)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 1ddabefcfb5a..3cdaac9c7de5 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -667,6 +667,21 @@ static void wake_oom_reaper(struct task_struct *tsk)
> if (test_and_set_bit(MMF_OOM_REAP_QUEUED, &tsk->signal->oom_mm->flags))
> return;
>
> +#ifdef CONFIG_FUTEX
> + /*
> + * If the ooming task's SIGKILL has not finished handling the
> + * robust futex it is not correct to reap the mm concurrently.
> + * Do not wake the oom reaper when the task still contains a
> + * robust list.
> + */
> + if (tsk->robust_list)
> + return;
> +#ifdef CONFIG_COMPAT
> + if (tsk->compat_robust_list)
> + return;
> +#endif
> +#endif
If this turns out to be really needed, which I do not really see at the
moment, then this is not the right way to handle this situation. The oom
victim could get stuck and the oom killer wouldn't be able to move
forward. If anything the victim would need to get MMF_OOM_SKIP set.
> +
> get_task_struct(tsk);
>
> spin_lock(&oom_reaper_lock);
> --
> 2.33.1
--
Michal Hocko
SUSE Labs
Powered by blists - more mailing lists