Message-Id: <20230717193008.122040-1-sj@kernel.org>
Date:   Mon, 17 Jul 2023 19:30:08 +0000
From:   SeongJae Park <sj@...nel.org>
To:     gregkh@...uxfoundation.org
Cc:     ryan.roberts@....com, akpm@...ux-foundation.org, hch@....de,
        kirill.shutemov@...ux.intel.com, lstoakes@...il.com,
        rppt@...nel.org, sj@...nel.org, stable@...r.kernel.org,
        urezki@...il.com, willy@...radead.org, yuzhao@...gle.com,
        ziy@...dia.com, damon@...ts.linux.dev, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: [PATCH 5.15.y] mm/damon/ops-common: atomically test and clear young on ptes and pmds

From: Ryan Roberts <ryan.roberts@....com>

commit c11d34fa139e4b0fb4249a30f37b178353533fa1 upstream.

It is racy to non-atomically read a pte, then clear the young bit, then
write it back, as this could discard dirty information.  Further, it is
bad practice to directly set a pte entry within a table.  Instead,
clearing young must go through the arch-provided helper,
ptep_test_and_clear_young(), to ensure it is modified atomically and to
give the arch code visibility, allowing it to check (and potentially
modify) the operation.
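
As an illustration (not part of the patch; names match the hunks
below), the difference between the racy pattern and the helper:

	/* Racy: a hardware update of the accessed/dirty bits between
	 * the read of *pte and the write-back below can be lost.
	 */
	if (pte_young(*pte))
		*pte = pte_mkold(*pte);

	/* Atomic: the arch helper tests and clears the young bit in a
	 * single atomic operation and lets the architecture see (and
	 * potentially adjust) the operation.
	 */
	if (ptep_test_and_clear_young(vma, addr, pte))
		referenced = true;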

Link: https://lkml.kernel.org/r/20230602092949.545577-3-ryan.roberts@arm.com
Fixes: 3f49584b262c ("mm/damon: implement primitives for the virtual memory address spaces")
Signed-off-by: Ryan Roberts <ryan.roberts@....com>
Reviewed-by: Zi Yan <ziy@...dia.com>
Reviewed-by: SeongJae Park <sj@...nel.org>
Reviewed-by: Mike Rapoport (IBM) <rppt@...nel.org>
Cc: Christoph Hellwig <hch@....de>
Cc: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: Lorenzo Stoakes <lstoakes@...il.com>
Cc: Matthew Wilcox (Oracle) <willy@...radead.org>
Cc: Uladzislau Rezki (Sony) <urezki@...il.com>
Cc: Yu Zhao <yuzhao@...gle.com>
Cc: <stable@...r.kernel.org>
Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
Signed-off-by: SeongJae Park <sj@...nel.org>
---
This is a manual backport of the commit onto 5.15.y, specifically
5.15.120, as it cannot be cleanly cherry-picked on 5.15.y[1].

[1] https://lore.kernel.org/stable/2023071613-reminder-relapse-b922@gregkh/
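
For reference (helper declarations as in include/linux/pgtable.h;
parameter names may vary by architecture), both helpers take the VMA
rather than the mm, which is why damon_ptep_mkold() and
damon_pmdp_mkold() below change their second parameter from
struct mm_struct * to struct vm_area_struct *:

	int ptep_test_and_clear_young(struct vm_area_struct *vma,
				      unsigned long addr, pte_t *ptep);
	int pmdp_test_and_clear_young(struct vm_area_struct *vma,
				      unsigned long addr, pmd_t *pmdp);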

 mm/damon/vaddr.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/mm/damon/vaddr.c b/mm/damon/vaddr.c
index 1945196fd743..6ad96da15081 100644
--- a/mm/damon/vaddr.c
+++ b/mm/damon/vaddr.c
@@ -393,7 +393,7 @@ static struct page *damon_get_page(unsigned long pfn)
 	return page;
 }
 
-static void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm,
+static void damon_ptep_mkold(pte_t *pte, struct vm_area_struct *vma,
 			     unsigned long addr)
 {
 	bool referenced = false;
@@ -402,13 +402,11 @@ static void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm,
 	if (!page)
 		return;
 
-	if (pte_young(*pte)) {
+	if (ptep_test_and_clear_young(vma, addr, pte))
 		referenced = true;
-		*pte = pte_mkold(*pte);
-	}
 
 #ifdef CONFIG_MMU_NOTIFIER
-	if (mmu_notifier_clear_young(mm, addr, addr + PAGE_SIZE))
+	if (mmu_notifier_clear_young(vma->vm_mm, addr, addr + PAGE_SIZE))
 		referenced = true;
 #endif /* CONFIG_MMU_NOTIFIER */
 
@@ -419,7 +417,7 @@ static void damon_ptep_mkold(pte_t *pte, struct mm_struct *mm,
 	put_page(page);
 }
 
-static void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm,
+static void damon_pmdp_mkold(pmd_t *pmd, struct vm_area_struct *vma,
 			     unsigned long addr)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
@@ -429,13 +427,11 @@ static void damon_pmdp_mkold(pmd_t *pmd, struct mm_struct *mm,
 	if (!page)
 		return;
 
-	if (pmd_young(*pmd)) {
+	if (pmdp_test_and_clear_young(vma, addr, pmd))
 		referenced = true;
-		*pmd = pmd_mkold(*pmd);
-	}
 
 #ifdef CONFIG_MMU_NOTIFIER
-	if (mmu_notifier_clear_young(mm, addr,
+	if (mmu_notifier_clear_young(vma->vm_mm, addr,
 				addr + ((1UL) << HPAGE_PMD_SHIFT)))
 		referenced = true;
 #endif /* CONFIG_MMU_NOTIFIER */
@@ -462,7 +458,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 		}
 
 		if (pmd_huge(*pmd)) {
-			damon_pmdp_mkold(pmd, walk->mm, addr);
+			damon_pmdp_mkold(pmd, walk->vma, addr);
 			spin_unlock(ptl);
 			return 0;
 		}
@@ -474,7 +470,7 @@ static int damon_mkold_pmd_entry(pmd_t *pmd, unsigned long addr,
 	pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl);
 	if (!pte_present(*pte))
 		goto out;
-	damon_ptep_mkold(pte, walk->mm, addr);
+	damon_ptep_mkold(pte, walk->vma, addr);
 out:
 	pte_unmap_unlock(pte, ptl);
 	return 0;
-- 
2.25.1
