Message-ID: <20141106135740.GC19702@e104818-lin.cambridge.arm.com>
Date:	Thu, 6 Nov 2014 13:57:40 +0000
From:	Catalin Marinas <catalin.marinas@....com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Will Deacon <Will.Deacon@....com>,
	Peter Zijlstra <peterz@...radead.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Russell King - ARM Linux <linux@....linux.org.uk>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: [RFC PATCH 1/2] zap_pte_range: update addr when forcing flush
 after TLB batching failure

On Tue, Nov 04, 2014 at 04:08:27PM +0000, Linus Torvalds wrote:
> On Tue, Nov 4, 2014 at 6:29 AM, Catalin Marinas <catalin.marinas@....com> wrote:
> >
> > This would work on arm64 but is the PAGE_SIZE range enough for all
> > architectures even when we flush a huge page or a pmd/pud table entry?
> 
> It pretty much had *better* be.

Thanks for confirming.

> For things like page table caches (ie caching addresses "inside" the
> page tables, like x86 does), for legacy reasons, flushing an
> individual page had better flush the page table caches behind it. This
> is definitely how x86 works, for example. And if you have an
> architected non-legacy page table cache (which I'm not aware of
> anybody actually doing), you're going to have some architecturally
> explicit flushing for that, likely *separate* from a regular TLB entry
> flush, and thus you'd need more than just some range expansion..

On arm64 we have two types of TLB invalidation instructions: the
standard one, which flushes a pte entry together with the corresponding
upper-level page table cache entries, and a "leaf" operation, which
flushes only the pte. We don't use the latter in Linux (yet) but in
theory it's more efficient.
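
For illustration, a minimal sketch of the two forms (helper names made
up, ASID handling and barriers omitted; the TLBI operand takes the VA
shifted right by 12):

	/* Standard invalidation: pte plus the upper-level walk caches. */
	static inline void flush_page_all_levels(unsigned long addr)
	{
		asm volatile("tlbi vae1is, %0" : : "r" (addr >> 12));
	}

	/* "Leaf" invalidation: the last-level (pte) entry only. */
	static inline void flush_page_leaf(unsigned long addr)
	{
		asm volatile("tlbi vale1is, %0" : : "r" (addr >> 12));
	}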

Anyway, even without special "leaf" operations, it would be useful to
distinguish between unmap_vmas() and free_pgtables() with regard to the
ranges tracked by mmu_gather. For the former, tlb_flush() needs to flush
the range in PAGE_SIZE increments (assuming a mix of small and huge
pages). For the latter, PMD_SIZE increments would be enough.
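
Roughly like this (sketch only; flush_range() is a hypothetical helper
and the barriers are reduced to the trailing DSB):

	static void flush_range(unsigned long start, unsigned long end,
				unsigned long stride)
	{
		unsigned long addr;

		/*
		 * One TLBI per stride step: PAGE_SIZE when called for
		 * unmap_vmas(), PMD_SIZE would do for free_pgtables().
		 */
		for (addr = start; addr < end; addr += stride)
			asm volatile("tlbi vae1is, %0" : : "r" (addr >> 12));
		asm volatile("dsb ish");
	}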

With RCU_TABLE_FREE, I think checking tlb->local.next would do the
trick, but on x86 we can keep mmu_gather.need_flush only for pte
clearing and remove the need_flush = 1 setting from the p*_free_tlb()
functions. The arch-specific tlb_flush() can then take need_flush into
account to change the range-flushing increment, or even ignore the
second tlb_flush() triggered by tlb_finish_mmu() (after free_pgtables(),
the ptes have already been flushed via tlb_end_vma()).
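
As a rough sketch of the idea (not an actual patch; it assumes
p*_free_tlb() no longer sets need_flush, so need_flush now means "ptes
were cleared", and it reuses the hypothetical flush_range() above):

	static inline void tlb_flush(struct mmu_gather *tlb)
	{
		if (tlb->need_flush)
			/* ptes cleared: be conservative, PAGE_SIZE steps */
			flush_range(tlb->start, tlb->end, PAGE_SIZE);
		else
			/*
			 * Only page tables freed: PMD_SIZE steps suffice,
			 * and the second flush from tlb_finish_mmu() after
			 * free_pgtables() could even be skipped, since the
			 * ptes were already flushed via tlb_end_vma().
			 */
			flush_range(tlb->start, tlb->end, PMD_SIZE);
	}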

-- 
Catalin
