Message-ID: <20150920093332.GA20562@dhcp22.suse.cz>
Date: Sun, 20 Sep 2015 11:33:33 +0200
From: Michal Hocko <mhocko@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Oleg Nesterov <oleg@...hat.com>, Kyle Walker <kwalker@...hat.com>,
Christoph Lameter <cl@...ux.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Johannes Weiner <hannes@...xchg.org>,
Vladimir Davydov <vdavydov@...allels.com>,
linux-mm <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Stanislav Kozina <skozina@...hat.com>,
Tetsuo Handa <penguin-kernel@...ove.sakura.ne.jp>
Subject: Re: can't oom-kill zap the victim's memory?
On Sat 19-09-15 15:24:02, Linus Torvalds wrote:
> On Sat, Sep 19, 2015 at 8:03 AM, Oleg Nesterov <oleg@...hat.com> wrote:
> > +
> > +static void oom_unmap_func(struct work_struct *work)
> > +{
> > +        struct mm_struct *mm = xchg(&oom_unmap_mm, NULL);
> > +
> > +        if (!atomic_inc_not_zero(&mm->mm_users))
> > +                return;
> > +
> > +        // If this is not safe we can do use_mm() + unuse_mm()
> > +        down_read(&mm->mmap_sem);
>
> I don't think this is safe.
>
> What makes you sure that we might not deadlock on the mmap_sem here?
> For all we know, the process that is going out of memory is in the
> middle of a mmap(), and already holds the mmap_sem for writing. No?
>
> So at the very least that needs to be a trylock, I think.
Agreed.
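Something along these lines should do it (completely untested sketch, just
to illustrate the back-off; oom_unmap_mm is from Oleg's snippet above):

static void oom_unmap_func(struct work_struct *work)
{
        struct mm_struct *mm = xchg(&oom_unmap_mm, NULL);

        if (!mm || !atomic_inc_not_zero(&mm->mm_users))
                return;

        /*
         * The victim might be blocked with mmap_sem held for writing,
         * e.g. in the middle of mmap(), so do not wait for the lock.
         * Just back off and leave the mm alone for now.
         */
        if (!down_read_trylock(&mm->mmap_sem)) {
                mmput(mm);
                return;
        }

        /* ... zap the address space here ... */

        up_read(&mm->mmap_sem);
        mmput(mm);
}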
> And I'm not
> sure zap_page_range() is ok with the mmap_sem only held for reading.
> Normally our rule is that you can *populate* the page tables
> concurrently, but you can't tear them down.
Actually mmap_sem held for reading should be sufficient because we do not
alter the VMA layout here, we only unmap pages. Both MADV_DONTNEED and
MADV_FREE run with mmap_sem held for reading, for example.
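E.g. something like the following under the read lock should be safe (again
an untested sketch which mirrors what madvise_dontneed() does; the exact VMA
filtering is only illustrative):

        struct vm_area_struct *vma;

        for (vma = mm->mmap; vma; vma = vma->vm_next) {
                /* Stay within what MADV_DONTNEED can handle. */
                if (vma->vm_flags & (VM_LOCKED|VM_HUGETLB|VM_PFNMAP))
                        continue;
                /* Zapping shared mappings wouldn't free the backing pages. */
                if (vma->vm_flags & VM_SHARED)
                        continue;
                zap_page_range(vma, vma->vm_start,
                               vma->vm_end - vma->vm_start, NULL);
        }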
--
Michal Hocko
SUSE Labs