Message-ID: <YXJwUUPjfg9wV6MQ@dhcp22.suse.cz>
Date: Fri, 22 Oct 2021 10:03:29 +0200
From: Michal Hocko <mhocko@...e.com>
To: Suren Baghdasaryan <surenb@...gle.com>
Cc: akpm@...ux-foundation.org, rientjes@...gle.com,
willy@...radead.org, hannes@...xchg.org, guro@...com,
riel@...riel.com, minchan@...nel.org, christian@...uner.io,
hch@...radead.org, oleg@...hat.com, david@...hat.com,
jannh@...gle.com, shakeelb@...gle.com, luto@...nel.org,
christian.brauner@...ntu.com, fweimer@...hat.com, jengelh@...i.de,
linux-api@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH 1/1] mm: prevent a race between process_mrelease and
exit_mmap
On Thu 21-10-21 18:46:58, Suren Baghdasaryan wrote:
> A race between process_mrelease and exit_mmap, where free_pgtables is
> called while __oom_reap_task_mm is in progress, leads to a kernel crash
> during the pte_offset_map_lock call. The oom-reaper avoids this race by
> setting the MMF_OOM_VICTIM flag, which causes exit_mmap to take and
> release mmap_write_lock, blocking it until the oom-reaper releases
> mmap_read_lock. Reusing MMF_OOM_VICTIM for process_mrelease would be
> the simplest way to fix this race, however it would be considered a
> hack. Fix the race instead by elevating mm->mm_users, which prevents
> exit_mmap from executing until process_mrelease is finished. The patch
> slightly refactors the code to accommodate a possible mmget_not_zero
> failure.
> This fix has a considerable negative impact on process_mrelease
> performance and will likely need later optimization.
I am not sure there is any promise that process_mrelease will run in
parallel with the exiting process. In fact the primary purpose of this
syscall is to provide a reliable way to oom kill from user space. If you
want to optimize process exit, or more specifically its exit_mmap part,
you should be using other means. So I would be careful about calling
this a regression.
I do agree that taking the reference count is the right approach here. I
was wrong previously [1] when I said that pinning the mm struct is
sufficient. I had completely forgotten about the subtle synchronization
in exit_mmap. One way to approach that would be to take the exclusive
mmap_sem throughout exit_mmap unconditionally. There was pushback
against that, though, so the arguments would have to be re-evaluated.
[1] http://lkml.kernel.org/r/YQzZqFwDP7eUxwcn@dhcp22.suse.cz
That being said
Acked-by: Michal Hocko <mhocko@...e.com>
Thanks!
> Fixes: 884a7e5964e0 ("mm: introduce process_mrelease system call")
> Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> ---
> mm/oom_kill.c | 23 ++++++++++++-----------
> 1 file changed, 12 insertions(+), 11 deletions(-)
>
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> index 831340e7ad8b..989f35a2bbb1 100644
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -1150,7 +1150,7 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
>  	struct task_struct *task;
>  	struct task_struct *p;
>  	unsigned int f_flags;
> -	bool reap = true;
> +	bool reap = false;
>  	struct pid *pid;
>  	long ret = 0;
>  
> @@ -1177,15 +1177,15 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
>  		goto put_task;
>  	}
>  
> -	mm = p->mm;
> -	mmgrab(mm);
> -
> -	/* If the work has been done already, just exit with success */
> -	if (test_bit(MMF_OOM_SKIP, &mm->flags))
> -		reap = false;
> -	else if (!task_will_free_mem(p)) {
> -		reap = false;
> -		ret = -EINVAL;
> +	if (mmget_not_zero(p->mm)) {
> +		mm = p->mm;
> +		if (task_will_free_mem(p))
> +			reap = true;
> +		else {
> +			/* Error only if the work has not been done already */
> +			if (!test_bit(MMF_OOM_SKIP, &mm->flags))
> +				ret = -EINVAL;
> +		}
>  	}
>  	task_unlock(p);
>  
> @@ -1201,7 +1201,8 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
>  		mmap_read_unlock(mm);
>  
>  drop_mm:
> -	mmdrop(mm);
> +	if (mm)
> +		mmput(mm);
>  put_task:
>  	put_task_struct(task);
>  put_pid:
> --
> 2.33.0.1079.g6e70778dc9-goog
--
Michal Hocko
SUSE Labs