Message-ID: <20170830174857.GC2386@redhat.com>
Date:   Wed, 30 Aug 2017 13:48:58 -0400
From:   Jerome Glisse <jglisse@...hat.com>
To:     Andrea Arcangeli <aarcange@...hat.com>
Cc:     linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        Dan Williams <dan.j.williams@...el.com>,
        Ross Zwisler <ross.zwisler@...ux.intel.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Bernhard Held <berny156@....de>,
        Adam Borowski <kilobyte@...band.pl>,
        Radim Krčmář <rkrcmar@...hat.com>,
        Wanpeng Li <kernellwp@...il.com>,
        Paolo Bonzini <pbonzini@...hat.com>,
        Takashi Iwai <tiwai@...e.de>,
        Nadav Amit <nadav.amit@...il.com>,
        Mike Galbraith <efault@....de>,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        axie <axie@....com>, Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH 02/13] mm/rmap: update to new mmu_notifier semantic

On Wed, Aug 30, 2017 at 06:52:50PM +0200, Andrea Arcangeli wrote:
> Hello Jerome,
> 
> On Tue, Aug 29, 2017 at 07:54:36PM -0400, Jerome Glisse wrote:
> > Replacing all mmu_notifier_invalidate_page() calls with mmu_notifier_invalidate_range()
> > and making sure each is bracketed by calls to mmu_notifier_invalidate_range_start/
> > end.
> > 
> > Note that because we cannot presume the pmd value or pte value, we have to
> > assume the worst and unconditionally report an invalidation as happening.
> 
> I pointed out in an earlier email that ->invalidate_range can only be
> implemented (as a mutually exclusive alternative to
> ->invalidate_range_start/end) by secondary MMUs that share the very
> same pagetables with the core Linux VM of the primary MMU, and those
> invalidate_range callbacks are already called by
> __mmu_notifier_invalidate_range_end. The other bit is done by the MMU
> gather (because mmu_notifier_invalidate_range_start is a noop for
> drivers that implement ->invalidate_range).
> 
> Sharing the same pagetables is what allows ->invalidate_range to
> work: when the Linux MM changes the primary MMU pagetables, it also
> automatically updates the secondary MMU at the same time (because of
> the pagetable sharing between primary and secondary MMUs). So then
> all that is left to do is an invalidate_range to flush the secondary
> MMU TLBs.
> 
> There's no need for action in mmu_notifier_invalidate_range_start for
> those pagetable-sharing drivers because there's no risk of a secondary
> MMU shadow pagetable layer being re-created between
> mmu_notifier_invalidate_range_start and the actual pagetable
> invalidate, because again the pagetables are shared.

Yes, but we still need to call invalidate_range() while under the page
table spinlock, as the hardware sharing the CPU page table has its own
TLB (not even talking about the TLBs of the individual devices that
use PASID/ATS).
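
Roughly, the ordering I have in mind looks like this (just a sketch of
the pattern, not an actual call site; the function names are the usual
kernel ones, but the fragment is illustrative only and assumes the
surrounding declarations):

	/*
	 * Clear the PTE and flush the secondary TLBs while still
	 * holding the page table lock, so nothing new can be mapped
	 * in the window.
	 */
	pte = pte_offset_map_lock(mm, pmd, address, &ptl);
	pteval = ptep_clear_flush(vma, address, pte);
	/* flush device/IOMMU TLBs before dropping the ptl */
	mmu_notifier_invalidate_range(mm, address, address + PAGE_SIZE);
	pte_unmap_unlock(pte, ptl);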

If we do not call it under the spinlock then there is a chance that
something else gets mapped between the time we update the CPU page
table and the time we call mmu_notifier_invalidate_range_end(), which
itself calls invalidate_range().

Actually I would remove the call to invalidate_range() from range_end()
and audit all places where we call range_end() to assert that there is
a call to invalidate_range() under the page table spinlock that
precedes it.
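
With that change, __mmu_notifier_invalidate_range_end() (quoted below)
would reduce to something like the following. This is only a sketch
derived from the current code, assuming every caller has been audited
as described:

void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
				  unsigned long start, unsigned long end)
{
	struct mmu_notifier *mn;
	int id;

	id = srcu_read_lock(&srcu);
	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
		/* invalidate_range() already ran under the ptl */
		if (mn->ops->invalidate_range_end)
			mn->ops->invalidate_range_end(mn, mm, start, end);
	}
	srcu_read_unlock(&srcu, id);
}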

> 
> void __mmu_notifier_invalidate_range_end(struct mm_struct *mm,
> 				  unsigned long start, unsigned long end)
> {
> 	struct mmu_notifier *mn;
> 	int id;
> 
> 	id = srcu_read_lock(&srcu);
> 	hlist_for_each_entry_rcu(mn, &mm->mmu_notifier_mm->list, hlist) {
> 		/*
> 		 * Call invalidate_range here too to avoid the need for the
> 		 * subsystem of having to register an invalidate_range_end
> 		 * call-back when there is invalidate_range already. Usually a
> 		 * subsystem registers either invalidate_range_start()/end() or
> 		 * invalidate_range(), so this will be no additional overhead
> 		 * (besides the pointer check).
> 		 */
> 		if (mn->ops->invalidate_range)
> 			mn->ops->invalidate_range(mn, mm, start, end);
> 			^^^^^^^^^^^^^^^^^^^^^^^^^
> 		if (mn->ops->invalidate_range_end)
> 			mn->ops->invalidate_range_end(mn, mm, start, end);
> 	}
> 	srcu_read_unlock(&srcu, id);
> }
> 
> So this conversion from invalidate_page to invalidate_range looks
> superfluous and the final mmu_notifier_invalidate_range_end should be
> enough.

See above.

> AFAIK only amd_iommu_v2 and intel-svm (svm as in shared virtual
> memory) use it.

powerpc has something similar too, but I don't know its status.

> My suggestion is to remove all the mmu_notifier_invalidate_range
> calls from below, keep only the mmu_notifier_invalidate_range_end,
> and test both amd_iommu_v2 and intel-svm with it under heavy
> swapping.
> 
> The only critical constraint to keep invalidate_range safe with a
> single call of mmu_notifier_invalidate_range_end after put_page is
> that the put_page cannot be the last put_page. That only applies to
> the case where the page isn't freed through MMU gather (MMU gather
> calls mmu_notifier_invalidate_range of its own before freeing the
> page; by contrast, MMU gather does nothing for drivers using
> invalidate_range_start/end because invalidate_range_start acted as a
> barrier to avoid establishing mappings on the secondary MMUs for
> those).
> 
> Not strictly required but I think it would be safer and more efficient
> to replace the put_page with something like:
> 
> static inline void put_page_not_freeing(struct page *page)
> {
> 	page = compound_head(page);
> 
> 	/* The caller must not hold the last reference: warn if this
> 	 * put would have freed the page. */
> 	if (put_page_testzero(page))
> 		VM_WARN_ON_PAGE(1, page);
> }

Yes, adding such a check makes sense.
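
For instance, a call site in the unmap path would then look something
like this (hypothetical fragment, only to illustrate where the helper
slots in):

	/*
	 * Hypothetical: release our reference without ever freeing
	 * the page here; the real last put must happen only after the
	 * caller's mmu_notifier_invalidate_range_end() has run.
	 */
	page_remove_rmap(page, false);
	put_page_not_freeing(page);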

Thanks for looking into this too.

Jérôme
