Message-ID: <CAJuCfpFccBJHHqfOKixJvLr7Xta_ojkdHGfGomwTDNKffzziRQ@mail.gmail.com>
Date: Wed, 27 Oct 2021 09:08:21 -0700
From: Suren Baghdasaryan <surenb@...gle.com>
To: Michal Hocko <mhocko@...e.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Johannes Weiner <hannes@...xchg.org>,
Roman Gushchin <guro@...com>, Rik van Riel <riel@...riel.com>,
Minchan Kim <minchan@...nel.org>,
Christian Brauner <christian@...uner.io>,
Christoph Hellwig <hch@...radead.org>,
Oleg Nesterov <oleg@...hat.com>,
David Hildenbrand <david@...hat.com>,
Jann Horn <jannh@...gle.com>,
Shakeel Butt <shakeelb@...gle.com>,
Andy Lutomirski <luto@...nel.org>,
Christian Brauner <christian.brauner@...ntu.com>,
Florian Weimer <fweimer@...hat.com>,
Jan Engelhardt <jengelh@...i.de>,
Linux API <linux-api@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
kernel-team <kernel-team@...roid.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [PATCH 1/1] mm: prevent a race between process_mrelease and exit_mmap
On Fri, Oct 22, 2021 at 10:38 AM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Fri, Oct 22, 2021 at 1:03 AM Michal Hocko <mhocko@...e.com> wrote:
> >
> > On Thu 21-10-21 18:46:58, Suren Baghdasaryan wrote:
> > > Race between process_mrelease and exit_mmap, where free_pgtables is
> > > called while __oom_reap_task_mm is in progress, leads to kernel crash
> > > during pte_offset_map_lock call. oom-reaper avoids this race by setting
> > > MMF_OOM_VICTIM flag and causing exit_mmap to take and release
> > > mmap_write_lock, blocking it until oom-reaper releases mmap_read_lock.
> > > Reusing MMF_OOM_VICTIM for process_mrelease would be the simplest way to
> > > fix this race, however that would be considered a hack. Fix this race
> > > by elevating mm->mm_users and preventing exit_mmap from executing until
> > > process_mrelease is finished. The patch slightly refactors the code to
> > > handle a possible mmget_not_zero failure.
> > > This fix has a considerable negative impact on process_mrelease performance
> > > and will likely need later optimization.
> >
> > I am not sure there is any promise that process_mrelease will run in
> > parallel with the exiting process. In fact the primary purpose of this
> > syscall is to provide a reliable way to oom kill from user space. If you
> > want to optimize process exit, or rather its exit_mmap part, then you should
> > be using other means. So I would be careful calling this a regression.
> >
> > I do agree that taking the reference count is the right approach here. I
> > was wrong previously [1] when saying that pinning the mm struct is
> > sufficient. I had completely forgotten about the subtle sync in exit_mmap.
> > One way we could approach this would be to take the exclusive mmap_sem
> > throughout exit_mmap unconditionally.
>
> I agree, that would probably be the cleanest way.
>
> > There was a push back against
> > that though so arguments would have to be re-evaluated.
>
> I'll review that discussion to better understand the reasons for the
> push back. Thanks for the link.
Adding Kirill and Andrea.
I had some time to dig some more. The latency increase is definitely
coming from process_mrelease calling the last mmput, and exit_aio is
especially problematic. So, currently process_mrelease not only
releases memory but does more, including waiting for I/O to finish.
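To spell out why that last mmput is expensive, here is a rough,
non-runnable sketch of the teardown path it triggers (simplified from
kernel/fork.c around v5.15; the exact contents vary by version and the
elisions are mine):

```c
/*
 * Non-runnable sketch, simplified from kernel/fork.c (~v5.15).
 * When process_mrelease holds the last mm_users reference, its
 * mmput() ends up doing the whole teardown below, not just the
 * memory release.
 */
void mmput(struct mm_struct *mm)
{
	if (atomic_dec_and_test(&mm->mm_users))
		__mmput(mm);		/* last user: full teardown runs here */
}

static void __mmput(struct mm_struct *mm)
{
	uprobe_clear_state(mm);
	exit_aio(mm);			/* may sleep waiting for in-flight AIO */
	ksm_exit(mm);
	khugepaged_exit(mm);		/* must run before exit_mmap */
	exit_mmap(mm);			/* unmap VMAs, free page tables */
	/* ... remaining teardown elided ... */
	mmdrop(mm);
}
```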
An unconditional mmap_write_lock around free_pgtables in exit_mmap seems
to me the most semantically correct way forward, and the pushback is on
the basis that it regresses performance of the exit path. I would like to
measure that regression to confirm it is real. I don't have access to a
big machine, but I will ask someone in another Google team to run the test
Michal wrote here
https://lore.kernel.org/all/20170725142626.GJ26723@dhcp22.suse.cz/ on
a server with and without such a patch.
If the regression is real, then I think we could keep the "if
(unlikely(mm_is_oom_victim(mm)))" condition but wrap free_pgtables in a
conditional mmap_write_lock. To me this is cleaner because it
clearly shows that we are trying to prevent free_pgtables from racing
with any mm readers (the current mmap_write_lock(); mmap_write_unlock()
sequence needs a comment to explain why it is needed). In
that case I would need to reuse MMF_OOM_VICTIM in process_mrelease to
avoid postponing exit_mmap, like oom-reaper does. Maybe we could
rename MMF_OOM_VICTIM / MMF_OOM_SKIP to something like MMF_RELEASING /
MMF_RELEASED to make them more generic and allow their use outside of
the oom-killer? Again, this is a fallback plan in case an unconditional
mmap_write_lock indeed regresses the exit path.
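For concreteness, that fallback would look roughly like this in
exit_mmap() (untested sketch, not a patch; the surrounding code is
elided, locals such as tlb/vma come from the real function, and the
MMF_RELEASING naming above is only a proposal):

```c
/* Untested sketch of the fallback idea for exit_mmap() in mm/mmap.c. */
void exit_mmap(struct mm_struct *mm)
{
	/* ... mmu notifier release, unmap_vmas(), etc. elided ... */

	/*
	 * Take the write lock only while a reaper (oom-reaper or
	 * process_mrelease) may still be operating on this mm, so that
	 * free_pgtables() cannot race with pte_offset_map_lock() in
	 * __oom_reap_task_mm().  This replaces the current empty
	 * mmap_write_lock(); mmap_write_unlock() synchronization.
	 */
	if (unlikely(mm_is_oom_victim(mm))) {
		mmap_write_lock(mm);
		free_pgtables(&tlb, vma, FIRST_USER_ADDRESS,
			      USER_PGTABLES_CEILING);
		mmap_write_unlock(mm);
	} else {
		free_pgtables(&tlb, vma, FIRST_USER_ADDRESS,
			      USER_PGTABLES_CEILING);
	}

	/* ... tlb_finish_mmu(), remove_vma() loop, etc. elided ... */
}
```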
Any comments/suggestions?
>
> >
> > [1] http://lkml.kernel.org/r/YQzZqFwDP7eUxwcn@dhcp22.suse.cz
> >
> > That being said
> > Acked-by: Michal Hocko <mhocko@...e.com>
>
> Thanks!
>
> >
> > Thanks!
> >
> > > Fixes: 884a7e5964e0 ("mm: introduce process_mrelease system call")
> > > Signed-off-by: Suren Baghdasaryan <surenb@...gle.com>
> > > ---
> > > mm/oom_kill.c | 23 ++++++++++++-----------
> > > 1 file changed, 12 insertions(+), 11 deletions(-)
> > >
> > > diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> > > index 831340e7ad8b..989f35a2bbb1 100644
> > > --- a/mm/oom_kill.c
> > > +++ b/mm/oom_kill.c
> > > @@ -1150,7 +1150,7 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
> > > struct task_struct *task;
> > > struct task_struct *p;
> > > unsigned int f_flags;
> > > - bool reap = true;
> > > + bool reap = false;
> > > struct pid *pid;
> > > long ret = 0;
> > >
> > > @@ -1177,15 +1177,15 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
> > > goto put_task;
> > > }
> > >
> > > - mm = p->mm;
> > > - mmgrab(mm);
> > > -
> > > - /* If the work has been done already, just exit with success */
> > > - if (test_bit(MMF_OOM_SKIP, &mm->flags))
> > > - reap = false;
> > > - else if (!task_will_free_mem(p)) {
> > > - reap = false;
> > > - ret = -EINVAL;
> > > + if (mmget_not_zero(p->mm)) {
> > > + mm = p->mm;
> > > + if (task_will_free_mem(p))
> > > + reap = true;
> > > + else {
> > > + /* Error only if the work has not been done already */
> > > + if (!test_bit(MMF_OOM_SKIP, &mm->flags))
> > > + ret = -EINVAL;
> > > + }
> > > }
> > > task_unlock(p);
> > >
> > > @@ -1201,7 +1201,8 @@ SYSCALL_DEFINE2(process_mrelease, int, pidfd, unsigned int, flags)
> > > mmap_read_unlock(mm);
> > >
> > > drop_mm:
> > > - mmdrop(mm);
> > > + if (mm)
> > > + mmput(mm);
> > > put_task:
> > > put_task_struct(task);
> > > put_pid:
> > > --
> > > 2.33.0.1079.g6e70778dc9-goog
> >
> > --
> > Michal Hocko
> > SUSE Labs