Message-ID: <20160531074624.GE26128@dhcp22.suse.cz>
Date: Tue, 31 May 2016 09:46:24 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Oleg Nesterov <oleg@...hat.com>
Cc: linux-mm@...ck.org,
Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>,
David Rientjes <rientjes@...gle.com>,
Vladimir Davydov <vdavydov@...allels.com>,
Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/6] mm, oom: fortify task_will_free_mem
On Mon 30-05-16 19:35:05, Oleg Nesterov wrote:
> On 05/30, Michal Hocko wrote:
> >
> > task_will_free_mem is rather weak.
>
> I was thinking about a similar change because I noticed that try_oom_reaper()
> is very, very wrong.
>
> To the point that I think we need another change for stable which simply removes
> spin_lock_irq(sighand->siglock) from try_oom_reaper(). It buys nothing; we can
> check signal_group_exit() (which is wrong too ;) locklessly, and at the same time
> the kernel can crash because ->sighand can already be NULL.
OK, I have sent a separate patch
http://lkml.kernel.org/r/1464679423-30218-1-git-send-email-mhocko@kernel.org
and rebased the series on top. This would be 4.7 material. Thanks for
catching that!
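For the record, the essence of that stable fix is just dropping the lock
around the check; a sketch against the 4.6-era try_oom_reaper() loop, not
the exact patch (see the link above). p->sighand can already be NULL at
that point, so taking the lock can crash, while the signal_group_exit()
check is inherently racy with or without it:

-		spin_lock_irq(&p->sighand->siglock);
-		exiting = signal_group_exit(p->signal);
-		spin_unlock_irq(&p->sighand->siglock);
+		exiting = signal_group_exit(p->signal);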
> So I do think this change is good in general.
>
> I think that task_will_free_mem() should be un-inlined, and __task_will_free_mem()
> should go into mm/oom_kill.c... but this is minor.
I was thinking about it as well, but then thought it would be harder to
review that way. But OK, I will do that.
> > -static inline bool task_will_free_mem(struct task_struct *task)
> > +static inline bool __task_will_free_mem(struct task_struct *task)
> >  {
> >  	struct signal_struct *sig = task->signal;
> >
> > @@ -119,16 +119,69 @@ static inline bool task_will_free_mem(struct task_struct *task)
> >  	if (sig->flags & SIGNAL_GROUP_COREDUMP)
> >  		return false;
> >
> > -	if (!(task->flags & PF_EXITING))
> > +	if (!(task->flags & PF_EXITING || fatal_signal_pending(task)))
> >  		return false;
> >
> >  	/* Make sure that the whole thread group is going down */
> > -	if (!thread_group_empty(task) && !(sig->flags & SIGNAL_GROUP_EXIT))
> > +	if (!thread_group_empty(task) &&
> > +	    !(sig->flags & SIGNAL_GROUP_EXIT || fatal_signal_pending(task)))
> >  		return false;
> >
> >  	return true;
> >  }
>
> Well, let me suggest this again. I think it should do
>
>
> 	if (SIGNAL_GROUP_COREDUMP)
> 		return false;
>
> 	if (SIGNAL_GROUP_EXIT)
> 		return true;
>
> 	if (thread_group_empty() && PF_EXITING)
> 		return true;
>
> 	return false;
>
> we do not need fatal_signal_pending(); in this case SIGNAL_GROUP_EXIT should
> be set (ignoring some bugs with sub-namespaces which we need to fix anyway).
OK, so we shouldn't care about the race window between a fatal signal
being set on the task and the task reaching do_group_exit()?
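For reference, I assume the path you rely on is the group-fatal branch in
complete_signal(), trimmed here from the 4.6-era kernel/signal.c, which
marks the whole group at signal delivery time rather than at
do_group_exit() time:

	if (sig_fatal(p, sig) &&
	    !(signal->flags & (SIGNAL_UNKILLABLE | SIGNAL_GROUP_EXIT)) &&
	    !sigismember(&t->real_blocked, sig) &&
	    (sig == SIGKILL || !t->ptrace)) {
		/*
		 * This signal will be fatal to the whole group.
		 */
		if (!sig_kernel_coredump(sig)) {
			/*
			 * Start a group exit and wake everybody up.
			 */
			signal->flags = SIGNAL_GROUP_EXIT;
			signal->group_exit_code = sig;
			signal->group_stop_count = 0;
			...
		}
	}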
> At the same time, we do not want to return false when PF_EXITING is not set
> but SIGNAL_GROUP_EXIT is.
Makes sense.
> > +static inline bool task_will_free_mem(struct task_struct *task)
> > +{
> > +	struct mm_struct *mm = NULL;
> > +	struct task_struct *p;
> > +	bool ret;
> > +
> > +	/*
> > +	 * If the process has passed exit_mm we have to skip it because
> > +	 * we have lost a link to other tasks sharing this mm, we do not
> > +	 * have anything to reap and the task might then get stuck waiting
> > +	 * for parent as zombie and we do not want it to hold TIF_MEMDIE
> > +	 */
> > +	p = find_lock_task_mm(task);
> > +	if (!p)
> > +		return false;
> > +
> > +	if (!__task_will_free_mem(p)) {
> > +		task_unlock(p);
> > +		return false;
> > +	}
> > +
> > +	mm = p->mm;
> > +	if (atomic_read(&mm->mm_users) <= 1) {
>
> this is sub-optimal; we should probably take signal->live or ->nr_threads
> into account... but OK, we can do this later.
Yes, I would prefer to add more complex checks later. We will want
mm_has_external_refs for other purposes as well.
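Something along these lines, as an untested sketch; the name and the
exact check are tentative, and the caller would have to hold task_lock()
or otherwise pin p->mm:

/*
 * Tentative helper, not part of this series. Every live thread of the
 * group holds one mm_users reference, so a count above signal->live
 * suggests an external user: another process sharing the mm,
 * get_task_mm() callers etc. Racy, but so is the whole heuristic.
 */
static bool mm_has_external_refs(struct task_struct *p)
{
	return atomic_read(&p->mm->mm_users) >
	       atomic_read(&p->signal->live);
}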
> > +	rcu_read_lock();
> > +	for_each_process(p) {
> > +		ret = __task_will_free_mem(p);
> > +		if (!ret)
> > +			break;
> > +	}
> > +	rcu_read_unlock();
>
> Yes, I agree very much.
>
> But it seems you forgot to add the process_shares_mm() check into this loop?
Yes. I don't know where it got lost, but it surely wasn't in the previous
version either. I definitely screwed up somewhere...
> and perhaps it also makes sense to add
>
> 	if (same_thread_group(tsk, p))
> 		continue;
>
> This should not really matter; we know that __task_will_free_mem(p) should return
> true. Just to make it clearer.
OK.
> And. I think this needs smp_rmb() at the end of the loop (assuming we have the
> process_shares_mm() check here). We need it to ensure that we read p->mm before
> we read next_task(), to avoid the race with exit() + clone(CLONE_VM).
Why don't we need the same barrier in oom_kill_process()? Which barrier
would it pair with? Anyway, I think this deserves its own patch.
Barriers are always tricky, and it is better to have them in a small
patch with a full explanation.
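Just so we are looking at the same thing, the placement you suggest would
be something like this; a sketch only, not folded into the diff below:

	rcu_read_lock();
	for_each_process(p) {
		if (process_shares_mm(p, mm) &&
		    !same_thread_group(task, p)) {
			ret = __task_will_free_mem(p);
			if (!ret)
				break;
		}
		/*
		 * Make sure the p->mm load in process_shares_mm()
		 * happens before the next_task() load done by
		 * for_each_process(), so that a racing exit() +
		 * clone(CLONE_VM) cannot be missed. Which barrier
		 * this pairs with is the open question above.
		 */
		smp_rmb();
	}
	rcu_read_unlock();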
Thanks for your review. It was really helpful!
The whole pile is currently in my k.org git tree in the
attempts/process-share-mm-oom-sanitization branch if somebody wants to
see the full series.
My current diff on top of the patch:
---
From eb2755127e53f9f3cbc3cab757fb46bfb61c2a10 Mon Sep 17 00:00:00 2001
From: Michal Hocko <mhocko@...e.com>
Date: Tue, 31 May 2016 07:33:06 +0200
Subject: [PATCH] fold me "mm, oom: fortify task_will_free_mem"
As per Oleg:
- uninline task_will_free_mem
- reorganize checks and simplify __task_will_free_mem
- add missing process_shares_mm in task_will_free_mem
- add same_thread_group to task_will_free_mem for clarity

Also initialize ret so that it is defined even when no other process
shares the mm.
Signed-off-by: Michal Hocko <mhocko@...e.com>
---
include/linux/oom.h | 64 +++++------------------------------------------------
mm/oom_kill.c | 56 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 62 insertions(+), 58 deletions(-)
diff --git a/include/linux/oom.h b/include/linux/oom.h
index c4cc0591d959..f3ac9d088645 100644
--- a/include/linux/oom.h
+++ b/include/linux/oom.h
@@ -119,69 +119,17 @@ static inline bool __task_will_free_mem(struct task_struct *task)
 	if (sig->flags & SIGNAL_GROUP_COREDUMP)
 		return false;
 
-	if (!(task->flags & PF_EXITING || fatal_signal_pending(task)))
-		return false;
-
-	/* Make sure that the whole thread group is going down */
-	if (!thread_group_empty(task) &&
-	    !(sig->flags & SIGNAL_GROUP_EXIT || fatal_signal_pending(task)))
-		return false;
-
-	return true;
-}
-
-/*
- * Checks whether the given task is dying or exiting and likely to
- * release its address space. This means that all threads and processes
- * sharing the same mm have to be killed or exiting.
- */
-static inline bool task_will_free_mem(struct task_struct *task)
-{
-	struct mm_struct *mm = NULL;
-	struct task_struct *p;
-	bool ret;
-
-	/*
-	 * If the process has passed exit_mm we have to skip it because
-	 * we have lost a link to other tasks sharing this mm, we do not
-	 * have anything to reap and the task might then get stuck waiting
-	 * for parent as zombie and we do not want it to hold TIF_MEMDIE
-	 */
-	p = find_lock_task_mm(task);
-	if (!p)
-		return false;
-
-	if (!__task_will_free_mem(p)) {
-		task_unlock(p);
-		return false;
-	}
-
-	mm = p->mm;
-	if (atomic_read(&mm->mm_users) <= 1) {
-		task_unlock(p);
+	if (sig->flags & SIGNAL_GROUP_EXIT)
 		return true;
-	}
-	/* pin the mm to not get freed and reused */
-	atomic_inc(&mm->mm_count);
-	task_unlock(p);
+	if (thread_group_empty(task) && (task->flags & PF_EXITING))
+		return true;
 
-	/*
-	 * This is really pessimistic but we do not have any reliable way
-	 * to check that external processes share with our mm
-	 */
-	rcu_read_lock();
-	for_each_process(p) {
-		ret = __task_will_free_mem(p);
-		if (!ret)
-			break;
-	}
-	rcu_read_unlock();
-	mmdrop(mm);
-
-	return ret;
+	return false;
 }
 
+bool task_will_free_mem(struct task_struct *task);
+
 /* sysctls */
 extern int sysctl_oom_dump_tasks;
 extern int sysctl_oom_kill_allocating_task;
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 0b7c02869bc0..aa28315ac310 100644
--- a/mm/oom_kill.c
+++ b/mm/oom_kill.c
@@ -697,6 +697,62 @@ void oom_killer_enable(void)
 }
 
 /*
+ * Checks whether the given task is dying or exiting and likely to
+ * release its address space. This means that all threads and processes
+ * sharing the same mm have to be killed or exiting.
+ */
+bool task_will_free_mem(struct task_struct *task)
+{
+	struct mm_struct *mm = NULL;
+	struct task_struct *p;
+	bool ret = true;
+
+	/*
+	 * If the process has passed exit_mm we have to skip it because
+	 * we have lost a link to other tasks sharing this mm, we do not
+	 * have anything to reap and the task might then get stuck waiting
+	 * for parent as zombie and we do not want it to hold TIF_MEMDIE
+	 */
+	p = find_lock_task_mm(task);
+	if (!p)
+		return false;
+
+	if (!__task_will_free_mem(p)) {
+		task_unlock(p);
+		return false;
+	}
+
+	mm = p->mm;
+	if (atomic_read(&mm->mm_users) <= 1) {
+		task_unlock(p);
+		return true;
+	}
+
+	/* pin the mm to not get freed and reused */
+	atomic_inc(&mm->mm_count);
+	task_unlock(p);
+
+	/*
+	 * This is really pessimistic but we do not have any reliable way
+	 * to check that external processes share with our mm
+	 */
+	rcu_read_lock();
+	for_each_process(p) {
+		if (!process_shares_mm(p, mm))
+			continue;
+		if (same_thread_group(task, p))
+			continue;
+		ret = __task_will_free_mem(p);
+		if (!ret)
+			break;
+	}
+	rcu_read_unlock();
+	mmdrop(mm);
+
+	return ret;
+}
+
+/*
  * Must be called while holding a reference to p, which will be released upon
  * returning.
  */
--
2.8.1
--
Michal Hocko
SUSE Labs