Message-ID: <20160229134139.GB16930@dhcp22.suse.cz>
Date: Mon, 29 Feb 2016 14:41:39 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Hugh Dickins <hughd@...gle.com>
Cc: David Rientjes <rientjes@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...e.de>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>,
Oleg Nesterov <oleg@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Andrea Arcangeli <andrea@...nel.org>,
Rik van Riel <riel@...hat.com>, linux-mm@...ck.org,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/5] oom reaper: handle mlocked pages
On Sun 28-02-16 19:19:11, Hugh Dickins wrote:
> On Tue, 23 Feb 2016, Michal Hocko wrote:
> > On Mon 22-02-16 17:36:07, David Rientjes wrote:
> > >
> > > Are we concerned about munlock_vma_pages_all() taking lock_page() and
> > > perhaps stalling forever, the same way it would stall in exit_mmap() for
> > > VM_LOCKED vmas, if another thread has locked the same page and is doing an
> > > allocation?
> >
> > This is a good question. I have checked that particular case
> > previously and managed to convince myself that this is OK(ish).
> > munlock_vma_pages_range() locks only THP pages, to prevent a
> > parallel split-up, AFAICS.
>
> I think you're mistaken on that: there is also the lock_page()
> on every page in Phase 2 of __munlock_pagevec().
Ohh, I have missed that one. Thanks for pointing it out!
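For reference, this is the Phase 2 loop in __munlock_pagevec()
(mm/mlock.c, trimmed down a bit) which takes the page lock on _every_
page, not just THP:

	/* Phase 2: page munlock */
	for (i = 0; i < nr; i++) {
		struct page *page = pvec->pages[i];

		if (page) {
			lock_page(page);
			if (!__putback_lru_fast_prepare(page, &pvec_putback,
					&pgrescued)) {
				/*
				 * Slow path. We don't want to lose the
				 * last pin before unlock_page()
				 */
				get_page(page);	/* for putback_lru_page() */
				__munlock_isolated_page(page);
				unlock_page(page);
				put_page(page);	/* from follow_page_mask() */
			}
		}
	}

So yes, a page locked by a thread sitting in the allocator would block
the oom reaper here as well.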
[...]
> > Just for the reference this is what I came up with (just compile tested).
>
> I tried something similar internally (on an earlier kernel). Like
> you, I've set that work aside for now; there were quicker ways to fix
> the issue at hand. But it does continue to offend me that munlock
> demands all those page locks: so if you don't get back to it before me,
> I shall eventually.
>
> I didn't understand why you complicated yours with the "enforce"
> arg to munlock_vma_pages_range(): why not just trylock in all cases?
Well, I have to confess that I am not really sure I understand all the
consequences of the locking here. It has always been subtle, with weird
issues popping up from time to time, so I wanted to keep the change
limited to the oom_reaper. I would really appreciate it if somebody
more knowledgeable had a look. We can drop the mlock patch for now.
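Just to illustrate the direction you suggest (a completely untested
sketch - I am especially not sure the skip path is safe), the Phase 2
loop above would become something like:

	/* Phase 2: page munlock - trylock variant (untested sketch) */
	for (i = 0; i < nr; i++) {
		struct page *page = pvec->pages[i];

		if (!page)
			continue;
		if (!trylock_page(page)) {
			/*
			 * Somebody else holds the page lock, possibly a
			 * thread blocked in the allocator. Do not wait;
			 * put the page back on the LRU and let vmscan
			 * sort it out later. This also drops the
			 * follow_page pin.
			 */
			putback_lru_page(page);
			continue;
		}
		if (!__putback_lru_fast_prepare(page, &pvec_putback,
				&pgrescued)) {
			get_page(page);	/* for putback_lru_page() */
			__munlock_isolated_page(page);
			unlock_page(page);
			put_page(page);	/* from follow_page_mask() */
		}
	}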
Thanks for looking into this, Hugh!
--
Michal Hocko
SUSE Labs