Message-Id: <201605262341.GFE48463.OOtLFFMQSVFHOJ@I-love.SAKURA.ne.jp>
Date: Thu, 26 May 2016 23:41:54 +0900
From: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
To: mhocko@...nel.org, linux-mm@...ck.org
Cc: rientjes@...gle.com, oleg@...hat.com, vdavydov@...allels.com,
akpm@...ux-foundation.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 6/6] mm, oom: fortify task_will_free_mem
Michal Hocko wrote:
> +/*
> + * Checks whether the given task is dying or exiting and likely to
> + * release its address space. This means that all threads and processes
> + * sharing the same mm have to be killed or exiting.
> + */
> +static inline bool task_will_free_mem(struct task_struct *task)
> +{
> + struct mm_struct *mm = NULL;
> + struct task_struct *p;
> + bool ret = false;
If atomic_read(&p->mm->mm_users) <= get_nr_threads(p), mm stays NULL below and this function returns "false".
According to the previous version, I think this should be "bool ret = true;" (see the simplified sketch after the quoted function).
> +
> + /*
> + * If the process has passed exit_mm we have to skip it because
> + * we have lost a link to other tasks sharing this mm, we do not
> + * have anything to reap and the task might then get stuck waiting
> + * for parent as zombie and we do not want it to hold TIF_MEMDIE
> + */
> + p = find_lock_task_mm(task);
> + if (!p)
> + return false;
> +
> + if (!__task_will_free_mem(p)) {
> + task_unlock(p);
> + return false;
> + }
> +
> + /*
> + * Check whether there are other processes sharing the mm - they all have
> + * to be killed or exiting.
> + */
> + if (atomic_read(&p->mm->mm_users) > get_nr_threads(p)) {
> + mm = p->mm;
> + /* pin the mm to not get freed and reused */
> + atomic_inc(&mm->mm_count);
> + }
> + task_unlock(p);
> +
> + if (mm) {
> + rcu_read_lock();
> + for_each_process(p) {
> + bool vfork;
> +
> + /*
> + * skip over vforked tasks because they are mostly
> + * independent and will drop the mm soon
> + */
> + task_lock(p);
> + vfork = p->vfork_done;
> + task_unlock(p);
> + if (vfork)
> + continue;
> +
> + ret = __task_will_free_mem(p);
> + if (!ret)
> + break;
> + }
> + rcu_read_unlock();
> + mmdrop(mm);
> + }
> +
> + return ret;
> +}
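
For illustration only (not part of the patch): below is a simplified user-space model of the control flow in question. The names are hypothetical stand-ins for the kernel state, not the real helpers, but compiling and running it shows the point above: with ret initialized to false, a dying task whose mm is used only by its own threads (mm_users <= nr_threads) is reported as not freeing memory, while initializing ret to true gives the expected result.

/*
 * Simplified model of the patch's control flow; names mirror the patch
 * but nothing here is the actual kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

/* Assumed stand-ins for the state the patch inspects. */
struct model_task {
	int mm_users;		/* users of the task's mm */
	int nr_threads;		/* threads in the thread group */
	bool will_free;		/* result of __task_will_free_mem(p) */
};

static bool model_task_will_free_mem(const struct model_task *p, bool init)
{
	bool ret = init;	/* the patch uses false; true is suggested above */

	if (!p->will_free)
		return false;

	/* Other processes share the mm: check them all (modelled as "all ok"). */
	if (p->mm_users > p->nr_threads)
		ret = true;	/* stands in for the for_each_process() loop */

	return ret;
}

int main(void)
{
	/* A dying task whose mm is used only by its own threads. */
	struct model_task t = { .mm_users = 2, .nr_threads = 2, .will_free = true };

	printf("init=false -> %d (unexpected)\n",
	       model_task_will_free_mem(&t, false));	/* prints 0 */
	printf("init=true  -> %d (expected)\n",
	       model_task_will_free_mem(&t, true));	/* prints 1 */
	return 0;
}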