Date:	Fri, 29 Apr 2016 01:21:27 +0200
From:	Andrea Arcangeli <aarcange@...hat.com>
To:	Alex Williamson <alex.williamson@...hat.com>
Cc:	"Kirill A. Shutemov" <kirill@...temov.name>,
	kirill.shutemov@...ux.intel.com, linux-kernel@...r.kernel.org,
	"linux-mm@...ck.org" <linux-mm@...ck.org>
Subject: Re: [BUG] vfio device assignment regression with THP ref counting
 redesign

Hello Alex and Kirill,

On Thu, Apr 28, 2016 at 12:58:08PM -0600, Alex Williamson wrote:
> > > specific fix to this code is not applicable.  It also still occurs on
> > > kernels as recent as v4.6-rc5, so the issue hasn't been silently fixed
> > > yet.  I'm able to reproduce this fairly quickly with the above test,
> > > but it's not hard to imagine a test w/o any iommu dependencies which
> > > simply does a user directed get_user_pages_fast() on a set of userspace
> > > addresses, retains the reference, and at some point later rechecks that
> > > a new get_user_pages_fast() results in the same page address.  It
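For illustration, a rough sketch of such an iommu-free reproducer (not code from this thread: the function name and error handling are made up, and it assumes the v4.6 get_user_pages_fast() signature), meant to be called from a small test module on a THP-backed userspace range:

/*
 * Sketch only, illustrating the check described above: pin the range
 * with get_user_pages_fast(), keep the references, then pin it again
 * later and verify the same struct pages come back.
 *
 * Assumed v4.6 signature:
 *	int get_user_pages_fast(unsigned long start, int nr_pages,
 *				int write, struct page **pages);
 */
#include <linux/mm.h>
#include <linux/slab.h>

static int thp_pin_stability_check(unsigned long start, int nr_pages)
{
	struct page **first, **second;
	int i, got, ret = 0;

	first = kcalloc(nr_pages, sizeof(*first), GFP_KERNEL);
	second = kcalloc(nr_pages, sizeof(*second), GFP_KERNEL);
	if (!first || !second) {
		ret = -ENOMEM;
		goto out_free;
	}

	/* First pin: hold the references, like the vfio mapping does. */
	got = get_user_pages_fast(start, nr_pages, 1, first);
	if (got != nr_pages) {
		ret = got < 0 ? got : -EFAULT;
		goto out_put_first;
	}

	/*
	 * Second pin, "at some point later": with the regression a false
	 * positive COW in the meantime replaces the THP, so the struct
	 * pages no longer match the ones we are still holding.
	 */
	got = get_user_pages_fast(start, nr_pages, 1, second);
	if (got < 0)
		ret = got;
	for (i = 0; i < got; i++) {
		if (second[i] != first[i])
			ret = -EFAULT;	/* page changed under the pin */
		put_page(second[i]);
	}

out_put_first:
	for (i = 0; i < nr_pages && first[i]; i++)
		put_page(first[i]);
out_free:
	kfree(first);
	kfree(second);
	return ret;
}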

Can you try to "git revert 1f25fe20a76af0d960172fb104d4b13697cafa84"
and then apply the below patch on top of the revert?

Totally untested... if I missed something and it isn't correct, I hope
this brings us in the right direction faster at least.

Overall I think the problem is that we need to restore full accuracy:
we can't live with false positive COWs (which aren't entirely cheap
either... reading 512 cachelines should be much faster than copying
2MB and churning through 4MB of CPU cache, i.e. 32k vs 4MB). The
downside is that when we really do need a COW we waste an additional
32k on the mapcount scan, but that hardly matters since in that case
we're forced to move 4MB through the cache anyway. There's room for
optimization, but even the simple patch below should be ok for now.
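
Spelling out those numbers (my arithmetic, assuming 4k subpages and one
64-byte struct page cacheline touched per _mapcount read):

	mapcount scan:       512 subpages * 64 bytes  = 32k of cache traffic
	false positive COW:  2MB read + 2MB written   = 4MB of cache traffic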

From 09e3d1ff10b49fb9c3ab77f0b96a862848e30067 Mon Sep 17 00:00:00 2001
From: Andrea Arcangeli <aarcange@...hat.com>
Date: Fri, 29 Apr 2016 01:05:06 +0200
Subject: [PATCH 1/1] mm: thp: calculate page_mapcount() correctly for THP
 pages

This allows reverting commit 1f25fe20a76af0d960172fb104d4b13697cafa84
and restores full accuracy for wrprotect faults, so page pinning will
stop causing false positive copy-on-writes.

Signed-off-by: Andrea Arcangeli <aarcange@...hat.com>
---
 mm/util.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/mm/util.c b/mm/util.c
index 6cc81e7..a0b9f63 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -383,9 +383,10 @@ struct address_space *page_mapping(struct page *page)
 /* Slow path of page_mapcount() for compound pages */
 int __page_mapcount(struct page *page)
 {
-	int ret;
+	int ret = 0, i;
 
-	ret = atomic_read(&page->_mapcount) + 1;
 	page = compound_head(page);
+	for (i = 0; i < HPAGE_PMD_NR; i++)
+		ret = max(ret, atomic_read(&page[i]._mapcount) + 1);
 	ret += atomic_read(compound_mapcount_ptr(page)) + 1;
 	if (PageDoubleMap(page))
