Message-ID: <20160206083757.GB25220@dhcp22.suse.cz>
Date: Sat, 6 Feb 2016 09:37:58 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Tetsuo Handa <penguin-kernel@...ove.SAKURA.ne.jp>
Cc: akpm@...ux-foundation.org, rientjes@...gle.com, mgorman@...e.de,
oleg@...hat.com, torvalds@...ux-foundation.org, hughd@...gle.com,
andrea@...nel.org, riel@...hat.com, linux-mm@...ck.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 5/5] mm, oom_reaper: implement OOM victims queuing
On Sat 06-02-16 14:54:24, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > > But if we consider non-system-wide OOM events, it is not that unlikely to
> > > hit this race. This queue is useful for situations where memcg1 and memcg2
> > > hit a memcg OOM at the same time and victim1 in memcg1 cannot terminate
> > > immediately.
> >
> > This can happen of course, but the likelihood is _much_ smaller without
> > the global OOM because the memcg OOM killer is invoked from a lockless
> > context, so the oom context cannot block the victim from proceeding.
>
> Suppose mem_cgroup_out_of_memory() is called from a lockless context via
> mem_cgroup_oom_synchronize() called from pagefault_out_of_memory(); that
> "lockless" refers only to the current thread, doesn't it?
Yes, and you need the OOM context to sit on the same lock as the victim
to form a deadlock. So while the victim might be blocked somewhere, it is
much less likely that it would be deadlocked.
> Since oom_kill_process() sets TIF_MEMDIE on the first mm!=NULL thread of a
> victim process, it is possible that a non-first mm!=NULL thread triggers
> pagefault_out_of_memory() and the first mm!=NULL thread gets TIF_MEMDIE,
> isn't it?
I got lost here completely. Maybe it is your usage of thread terminology
again.
> Then, where is the guarantee that victim1 (the first mm!=NULL thread in
> memcg1 which got TIF_MEMDIE) is not waiting at
> down_read(&victim2->mm->mmap_sem) when victim2 (the first mm!=NULL thread
> in memcg2 which got TIF_MEMDIE) is waiting at
> down_write(&victim2->mm->mmap_sem)
All threads/processes sharing the same mm are in fact in the same memory
cgroup. That is the reason the mm has an owner task (mm->owner); see the
short sketch below.
> or both victim1 and victim2
> are waiting on a lock somewhere in the memory reclaim path (e.g.
> mutex_lock(&inode->i_mutex))?
Such waiting has to make forward progress at some point because the lock
itself cannot be deadlocked by the memcg OOM context.
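
To illustrate the owner point above, here is a minimal sketch of how the
owning memcg is derived from an mm (simplified from my reading of
mm/memcontrol.c; the css reference counting and the root-memcg fallback
are left out):

	struct mem_cgroup *memcg;

	/*
	 * Every mm has a single owner task, so all threads and processes
	 * sharing that mm are charged against the memcg of that one owner
	 * and therefore belong to the same memcg for OOM purposes.
	 */
	rcu_read_lock();
	memcg = mem_cgroup_from_task(rcu_dereference(mm->owner));
	rcu_read_unlock();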
--
Michal Hocko
SUSE Labs