Message-ID: <20171129072211.vbjauoqyaj7hcfel@dhcp22.suse.cz>
Date:   Wed, 29 Nov 2017 08:22:11 +0100
From:   Michal Hocko <mhocko@...nel.org>
To:     Will Deacon <will.deacon@....com>
Cc:     linux-mm@...ck.org, Minchan Kim <minchan@...nel.org>,
        Andrea Arcangeli <andrea@...nel.org>,
        Ingo Molnar <mingo@...hat.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        LKML <linux-kernel@...r.kernel.org>, linux-arch@...r.kernel.org
Subject: Re: [RFC PATCH] arch, mm: introduce arch_tlb_gather_mmu_exit

On Tue 28-11-17 19:00:01, Will Deacon wrote:
> On Thu, Nov 23, 2017 at 10:02:36AM +0100, Michal Hocko wrote:
> > From: Michal Hocko <mhocko@...e.com>
> > 
> > Commit 5a7862e83000 ("arm64: tlbflush: avoid flushing when fullmm == 1")
> > introduced an optimization to skip the TLB flush when tearing down the
> > whole address space. Will explains:
> > 
> > : Basically, we tag each address space with an ASID (PCID on x86) which
> > : is resident in the TLB. This means we can elide TLB invalidation when
> > : pulling down a full mm because we won't ever assign that ASID to
> > : another mm without doing TLB invalidation elsewhere (which actually
> > : just nukes the whole TLB).
> > 
> > This is all nice, but tlb_gather users are not aware of the optimization,
> > and it can cause real problems. E.g. the oom_reaper tries to reap the
> > whole address space, but it might race with threads still accessing the
> > memory [1]. Soft-dirty handling might suffer from the same problem [2]
> > as soon as it starts supporting the feature.
> > 
> > Introduce an explicit exit variant, tlb_gather_mmu_exit, which allows the
> > behavior arm64 implements for the fullmm case, driven by an explicit exit
> > flag in the mmu_gather structure. The exit_mmap path is then converted to
> > the explicit exit variant. Other architectures simply ignore the flag.
> > 
> > [1] http://lkml.kernel.org/r/20171106033651.172368-1-wangnan0@huawei.com
> > [2] http://lkml.kernel.org/r/20171110001933.GA12421@bbox
> > Signed-off-by: Michal Hocko <mhocko@...e.com>
> > ---
> > Hi,
> > I am sending this as an RFC because I am not fully familiar with the tlb
> > gather arch implications, especially the semantics of fullmm, so I might
> > be duplicating some of its functionality. I hope people on the CC list
> > will help me sort this out.
> > 
> > Comments? Objections?
> 
> I can't think of a case where we'd have exit set but not be doing a fullmm
> teardown, in which case I'd be inclined to remove the last two parameters
> from tlb_gather_mmu_exit.

Makes sense. Will do!
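
Just so we are on the same page, with the last two parameters dropped I am
thinking of something along these lines (untested sketch only; the exact
placement and naming of the exit flag might still change):

/*
 * Untested sketch. The exit variant always covers the whole address
 * space, so it can pass the canonical 0/-1 range to tlb_gather_mmu
 * itself and just set the new exit flag in struct mmu_gather.
 */
static inline void tlb_gather_mmu_exit(struct mmu_gather *tlb,
				       struct mm_struct *mm)
{
	tlb_gather_mmu(tlb, mm, 0, -1);
	tlb->exit = 1;
}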
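
For anybody following along, the arm64 behavior this builds on looks roughly
like the below (quoting from memory, the exact code in the tree may differ
slightly):

/* arch/arm64/include/asm/tlb.h, roughly, after 5a7862e83000 */
static inline void tlb_flush(struct mmu_gather *tlb)
{
	struct vm_area_struct vma = { .vm_mm = tlb->mm, };

	/*
	 * The ASID allocator will either invalidate the ASID or mark
	 * it as used, so skipping the flush for the fullmm (whole
	 * address space) teardown is safe.
	 */
	if (tlb->fullmm)
		return;

	flush_tlb_range(&vma, tlb->start, tlb->end);
}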

-- 
Michal Hocko
SUSE Labs
