Date:   Fri, 26 May 2017 13:59:32 -0600
From:   Ross Zwisler <ross.zwisler@...ux.intel.com>
To:     Andrew Morton <akpm@...ux-foundation.org>,
        linux-kernel@...r.kernel.org
Cc:     Ross Zwisler <ross.zwisler@...ux.intel.com>,
        "Darrick J. Wong" <darrick.wong@...cle.com>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Christoph Hellwig <hch@....de>,
        Dan Williams <dan.j.williams@...el.com>,
        Dave Hansen <dave.hansen@...el.com>, Jan Kara <jack@...e.cz>,
        Matthew Wilcox <mawilcox@...rosoft.com>,
        linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
        linux-nvdimm@...ts.01.org,
        "Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
        Pawel Lebioda <pawel.lebioda@...el.com>,
        Dave Jiang <dave.jiang@...el.com>,
        Xiong Zhou <xzhou@...hat.com>, Eryu Guan <eguan@...hat.com>,
        stable@...r.kernel.org
Subject: [PATCH] dax: improve fix for colliding PMD & PTE entries

This commit, which has not yet made it upstream but is in the -mm tree:

    dax: Fix race between colliding PMD & PTE entries

fixed a pair of race conditions in which racing DAX PTE and PMD faults
could corrupt page tables.  That fix had two shortcomings, which this
patch addresses:

1) In the PTE fault handler we checked for a collision using only
pmd_devmap().  The pmd_devmap() check triggers when we have raced with a
PMD that has real DAX storage, but to account for the case where we
collide with a huge zero page entry, we also need to check for
pmd_trans_huge().

2) In the PMD fault handler we continued with the fault only if no PMD at
all was present (pmd_none()).  This is the case when we are faulting in a
PMD for the first time, but there are two other cases to consider.  The
first is that we are servicing a write fault over a PMD huge zero page,
which we detect with pmd_trans_huge().  The second is that we are servicing
a write fault over a DAX PMD with real storage, which we detect with
pmd_devmap().
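
For reference, the collision that both shortcomings stem from is easiest
to provoke with mixed reads & writes to a private mapping: read faults can
install huge entries while copy-on-write write faults install PTEs over
the same 2 MiB region.  Below is a userspace sketch of that access pattern
(illustrative only; the mount point, file name, and loop counts are
assumptions, not taken from the original report):

    /*
     * collide.c -- illustration only.  Hammer a MAP_PRIVATE mapping of
     * a DAX file with concurrent reads and writes so that PMD and PTE
     * faults race over the same 2 MiB region.  The mount point and
     * file name are assumptions; build with: gcc -pthread collide.c
     */
    #include <fcntl.h>
    #include <pthread.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define SZ (2UL << 20)          /* one PMD-sized (2 MiB) region */

    static volatile char *map;

    static void *reader(void *arg)
    {
        for (long i = 0; i < 1000000; i++)
            (void)map[i % SZ];      /* read faults take the PMD path */
        return NULL;
    }

    static void *writer(void *arg)
    {
        for (long i = 0; i < 1000000; i++)
            map[i % SZ] = 1;        /* CoW write faults take the PTE path */
        return NULL;
    }

    int main(void)
    {
        pthread_t r, w;
        int fd = open("/mnt/dax/testfile", O_RDWR | O_CREAT, 0644);

        if (fd < 0 || ftruncate(fd, SZ) < 0)
            return 1;
        map = mmap(NULL, SZ, PROT_READ | PROT_WRITE, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED)
            return 1;
        pthread_create(&r, NULL, reader, NULL);
        pthread_create(&w, NULL, writer, NULL);
        pthread_join(r, NULL);
        pthread_join(w, NULL);
        return 0;
    }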

Fix both of these, and instead of manually triggering a fallback in the
PMD collision case, be consistent with the other collision detection code
in the fault handlers and just retry.
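
Restated, after this patch the PMD fault handler proceeds in exactly three
PMD states and backs off in the fourth.  A sketch of that classification
(a hypothetical helper for illustration only; dax_iomap_pmd_fault() in the
hunk below open-codes the equivalent inverse test):

    /*
     * Hypothetical helper, not part of the patch: restates the logic
     * that dax_iomap_pmd_fault() open-codes as a single inverse test.
     */
    static bool dax_pmd_fault_may_proceed(pmd_t pmd)
    {
        if (pmd_none(pmd))          /* empty slot: install our PMD */
            return true;
        if (pmd_trans_huge(pmd))    /* huge zero page: write over it */
            return true;
        if (pmd_devmap(pmd))        /* DAX PMD with real storage: write over it */
            return true;
        /*
         * Otherwise a PTE page table raced in underneath us; returning
         * 0 from the fault handler lets the access retry and find
         * whichever entry won the race.
         */
        return false;
    }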

Signed-off-by: Ross Zwisler <ross.zwisler@...ux.intel.com>
Cc: stable@...r.kernel.org
---

For both the -mm tree and stable, feel free to squash this into the
original commit if you think that is appropriate.

This has passed targeted testing and an xfstests run.
---
 fs/dax.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/fs/dax.c b/fs/dax.c
index fc62f36..2a6889b 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -1160,7 +1160,7 @@ static int dax_iomap_pte_fault(struct vm_fault *vmf,
 	 * the PTE we need to set up.  If so just return and the fault will be
 	 * retried.
 	 */
-	if (pmd_devmap(*vmf->pmd)) {
+	if (pmd_trans_huge(*vmf->pmd) || pmd_devmap(*vmf->pmd)) {
 		vmf_ret = VM_FAULT_NOPAGE;
 		goto unlock_entry;
 	}
@@ -1411,11 +1411,14 @@ static int dax_iomap_pmd_fault(struct vm_fault *vmf,
 	/*
 	 * It is possible, particularly with mixed reads & writes to private
 	 * mappings, that we have raced with a PTE fault that overlaps with
-	 * the PMD we need to set up.  If so we just fall back to a PTE fault
-	 * ourselves.
+	 * the PMD we need to set up.  If so just return and the fault will be
+	 * retried.
 	 */
-	if (!pmd_none(*vmf->pmd))
+	if (!pmd_none(*vmf->pmd) && !pmd_trans_huge(*vmf->pmd) &&
+			!pmd_devmap(*vmf->pmd)) {
+		result = 0;
 		goto unlock_entry;
+	}
 
 	/*
 	 * Note that we don't use iomap_apply here.  We aren't doing I/O, only
-- 
2.9.4
