Message-ID: <20171116091941.elzfpt72mgxofux4@dhcp22.suse.cz>
Date: Thu, 16 Nov 2017 10:19:41 +0100
From: Michal Hocko <mhocko@...nel.org>
To: Minchan Kim <minchan@...nel.org>
Cc: Wang Nan <wangnan0@...wei.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, will.deacon@....com,
Bob Liu <liubo95@...wei.com>,
Andrew Morton <akpm@...ux-foundation.org>,
David Rientjes <rientjes@...gle.com>,
Ingo Molnar <mingo@...nel.org>, Roman Gushchin <guro@...com>,
Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
Andrea Arcangeli <aarcange@...hat.com>
Subject: Re: [PATCH] arch, mm: introduce arch_tlb_gather_mmu_lazy (was: Re:
[RESEND PATCH] mm, oom_reaper: gather each vma to prevent leaking TLB entry)

On Thu 16-11-17 09:44:57, Minchan Kim wrote:
> On Wed, Nov 15, 2017 at 09:14:52AM +0100, Michal Hocko wrote:
> > On Mon 13-11-17 09:28:33, Minchan Kim wrote:
> > [...]
> > > void arch_tlb_gather_mmu(...)
> > >
> > > tlb->fullmm = !(start | (end + 1)) && atomic_read(&mm->mm_users) == 0;
> >
> > Sorry, I should have realized sooner but this will not work for the oom
> > reaper. It _can_ race with the final exit_mmap and run with mm_users == 0
>
> If someone see mm_users is zero, it means there is no user to access
> address space by stale TLB. Am I missing something?
You are probably right, but changing the flushing policy in the middle of
the address space teardown makes me nervous. While this might work
right now, it is kind of tricky and has some potential to kick us
back in the future. Just note how the problem with the current arm64
optimization went unnoticed, because the oom reaper is such a rare event
that nobody has actually hit it. And I suspect the likelihood of a
failure is low enough that nobody would notice it in real life even when
it does trigger. So I would very much like to make the behavior really
explicit, so that everybody can see what is going on there.
--
Michal Hocko
SUSE Labs