Message-ID: <a190e78f-d833-780b-6fbe-b129c2505deb@redhat.com>
Date:   Mon, 11 Apr 2022 13:41:58 +0200
From:   David Hildenbrand <david@...hat.com>
To:     Miaohe Lin <linmiaohe@...wei.com>, akpm@...ux-foundation.org
Cc:     mike.kravetz@...cle.com, shy828301@...il.com, willy@...radead.org,
        ying.huang@...el.com, ziy@...dia.com, minchan@...nel.org,
        apopple@...dia.com, dave.hansen@...ux.intel.com,
        o451686892@...il.com, jhubbard@...dia.com, peterx@...hat.com,
        naoya.horiguchi@....com, mhocko@...e.com, riel@...hat.com,
        osalvador@...e.de, sfr@...b.auug.org.au, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 4/4] mm/migration: fix potential pte_unmap on an not
 mapped pte

On 09.04.22 09:38, Miaohe Lin wrote:
> __migration_entry_wait and migration_entry_wait_on_locked assume the pte is
> always mapped by the caller. But this is not the case when they're called
> from migration_entry_wait_huge and follow_huge_pmd. Add a parameter unmap
> to indicate whether the pte needs to be unmapped to fix this issue.

Hm.


migration_entry_wait_on_locked documents

"@ptep: mapped pte pointer. Will return with the ptep unmapped. Only
required for pte entries, pass NULL for pmd entries."

Setting ptep implies that we have a *mapped pte* pointer that requires an unmap.
If some code passes a ptep even though that's not guaranteed, that calling code
is wrong and needs to be fixed to not pass a ptep.
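
For comparison, this is roughly the contract on the ordinary (non-hugetlb)
path -- a simplified sketch of migration_entry_wait(), not the exact code:
the caller maps the pte, and whoever consumes the ptep is responsible for
the matching pte_unmap().

void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
			  unsigned long address)
{
	spinlock_t *ptl = pte_lockptr(mm, pmd);
	/* Maps the pte (a kmap on highmem); must be paired with pte_unmap(). */
	pte_t *ptep = pte_offset_map(pmd, address);

	/*
	 * Either unmaps+unlocks itself on the bail-out paths, or hands the
	 * mapped ptep to migration_entry_wait_on_locked(), which unmaps it.
	 */
	__migration_entry_wait(mm, ptep, ptl);
}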


hugetlbfs never requires a map/unmap. I really don't see why there is a need to
adjust migration_entry_wait_on_locked(): just don't pass a ptep, as documented.

What's really nasty here is that hugetlbfs actually mostly works on PMDs/PUDs,
but we call them PTEs. One corner case might be CONT PTEs, but those are also
accessed without a map+unmap.
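
(To spell out what I mean by "we call them PTEs" -- a rough sketch of the
follow_huge_pmd() pattern, not the exact code: the pte_t * we hand around is
really the pmd entry itself, so there is no separately kmapped page-table
page behind it that could ever need a pte_unmap().)

	ptl = pmd_lockptr(mm, pmd);
	spin_lock(ptl);
	/* The "pte" here is just the pmd entry, reinterpreted as a pte. */
	pte = huge_ptep_get((pte_t *)pmd);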

Regarding __migration_entry_wait(), I think we should just stop using it for
hugetlbfs and have a proper hugetlbfs variant that calls
migration_entry_wait_on_locked() with ptep == NULL, and that knows that although
we're handling "ptes", we're usually not actually holding mapped ptes in our
hands that would need a map+unmap.


Something like this (including some cleanup of the mm parameter):


diff --git a/include/linux/swapops.h b/include/linux/swapops.h
index 32d517a28969..898c407ad8f7 100644
--- a/include/linux/swapops.h
+++ b/include/linux/swapops.h
@@ -234,8 +234,8 @@ extern void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 					spinlock_t *ptl);
 extern void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					unsigned long address);
-extern void migration_entry_wait_huge(struct vm_area_struct *vma,
-		struct mm_struct *mm, pte_t *pte);
+extern void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl);
+extern void migration_entry_wait_huge(struct vm_area_struct *vma, pte_t *pte);
 #else
 static inline swp_entry_t make_readable_migration_entry(pgoff_t offset)
 {
@@ -261,8 +261,9 @@ static inline void __migration_entry_wait(struct mm_struct *mm, pte_t *ptep,
 					spinlock_t *ptl) { }
 static inline void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 					 unsigned long address) { }
+static inline void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl) { }
 static inline void migration_entry_wait_huge(struct vm_area_struct *vma,
-		struct mm_struct *mm, pte_t *pte) { }
+					     pte_t *pte) { }
 static inline int is_writable_migration_entry(swp_entry_t entry)
 {
 	return 0;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 48740e6c3476..2b38eaaa2e60 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -5622,7 +5622,7 @@ vm_fault_t hugetlb_fault(struct mm_struct *mm, struct vm_area_struct *vma,
 		 */
 		entry = huge_ptep_get(ptep);
 		if (unlikely(is_hugetlb_entry_migration(entry))) {
-			migration_entry_wait_huge(vma, mm, ptep);
+			migration_entry_wait_huge(vma, ptep);
 			return 0;
 		} else if (unlikely(is_hugetlb_entry_hwpoisoned(entry)))
 			return VM_FAULT_HWPOISON_LARGE |
@@ -6770,7 +6770,7 @@ follow_huge_pmd(struct mm_struct *mm, unsigned long address,
 	} else {
 		if (is_hugetlb_entry_migration(pte)) {
 			spin_unlock(ptl);
-			__migration_entry_wait(mm, (pte_t *)pmd, ptl);
+			__migration_entry_wait_huge((pte_t *)pmd, ptl);
 			goto retry;
 		}
 		/*
diff --git a/mm/migrate.c b/mm/migrate.c
index 231907e89b93..84b685a235fe 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -315,11 +315,25 @@ void migration_entry_wait(struct mm_struct *mm, pmd_t *pmd,
 	__migration_entry_wait(mm, ptep, ptl);
 }
 
+void __migration_entry_wait_huge(pte_t *ptep, spinlock_t *ptl)
+{
+	pte_t pte;
+
+	spin_lock(ptl);
+	pte = huge_ptep_get(ptep);
+
+	if (unlikely(!is_hugetlb_entry_migration(pte)))
+		spin_unlock(ptl);
+	else
+		migration_entry_wait_on_locked(pte_to_swp_entry(pte), NULL, ptl);
+}
+
 void migration_entry_wait_huge(struct vm_area_struct *vma,
-		struct mm_struct *mm, pte_t *pte)
+			       pte_t *pte)
 {
-	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), mm, pte);
-	__migration_entry_wait(mm, pte, ptl);
+	spinlock_t *ptl = huge_pte_lockptr(hstate_vma(vma), vma->vm_mm, pte);
+
+	__migration_entry_wait_huge(pte, ptl);
 }
 
 #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION

-- 
Thanks,

David / dhildenb
