Message-ID: <20210828010441.3702-1-lizhijian@cn.fujitsu.com>
Date:   Sat, 28 Aug 2021 09:04:41 +0800
From:   Li Zhijian <lizhijian@...fujitsu.com>
To:     <linux-mm@...ck.org>, <linux-rdma@...r.kernel.org>,
        <akpm@...ux-foundation.org>, <jglisse@...hat.com>, <jgg@...pe.ca>,
        <hch@...radead.org>
CC:     <yishaih@...dia.com>, <linux-kernel@...r.kernel.org>,
        Li Zhijian <lizhijian@...fujitsu.com>, <stable@...r.kernel.org>
Subject: [PATCH v2] mm/hmm: bypass devmap pte when all pfn requested flags are fulfilled

Previously, we noticed that an rpma example, which uses the ODP feature to do
RDMA WRITE between fsdax files, has been failing[1] since commit 36f30e486d.

After digging into the code, we found that hmm_vma_handle_pte() still returns
-EFAULT even though all of the requested pfn flags (pfn_req_flags) have been
fulfilled. That is because a DAX pte is marked as
(_PAGE_SPECIAL | _PAGE_DEVMAP) by pte_mkdevmap(), so the pte_special() path
treats it as an unhandled special mapping and aborts the walk.

[1]: https://github.com/pmem/rpma/issues/1142
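
For reference (not part of the patch), the pre-patch branch in
hmm_vma_handle_pte() looks roughly like the sketch below. The identifiers
match the hunk further down; the remark about cpu_flags being passed as 0 is
our reading of why the walk fails, and the flag bits are the x86 ones.

	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
		/*
		 * A fsdax pte is both special and devmap, because
		 * pte_mkdevmap() sets _PAGE_SPECIAL | _PAGE_DEVMAP (on x86),
		 * so it falls into this branch.  hmm_pte_need_fault() is then
		 * called with cpu_flags == 0, i.e. the pte is treated as if
		 * it had no valid backing, and the walk aborts with -EFAULT
		 * even though pfn_req_flags are already satisfied.
		 */
		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
			pte_unmap(ptep);
			return -EFAULT;
		}
		*hmm_pfn = HMM_PFN_ERROR;
		return 0;
	}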

CC: stable@...r.kernel.org
Fixes: 405506274922 ("mm/hmm: add missing call to hmm_pte_need_fault in HMM_PFN_SPECIAL handling")
Signed-off-by: Li Zhijian <lizhijian@...fujitsu.com>
---
 mm/hmm.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/mm/hmm.c b/mm/hmm.c
index fad6be2bf072..d324fb1a5352 100644
--- a/mm/hmm.c
+++ b/mm/hmm.c
@@ -295,10 +295,13 @@ static int hmm_vma_handle_pte(struct mm_walk *walk, unsigned long addr,
 		goto fault;
 
 	/*
+	 * Bypass devmap ptes (such as DAX pages) when all requested pfn
+	 * flags (pfn_req_flags) are fulfilled.
 	 * Since each architecture defines a struct page for the zero page, just
 	 * fall through and treat it like a normal page.
 	 */
-	if (pte_special(pte) && !is_zero_pfn(pte_pfn(pte))) {
+	if (!pte_devmap(pte) && pte_special(pte) &&
+	    !is_zero_pfn(pte_pfn(pte))) {
 		if (hmm_pte_need_fault(hmm_vma_walk, pfn_req_flags, 0)) {
 			pte_unmap(ptep);
 			return -EFAULT;
-- 
2.31.1


