Message-ID: <174051422675.10177.13226545170101706336.tip-bot2@tip-bot2>
Date: Tue, 25 Feb 2025 20:10:26 -0000
From: "tip-bot2 for Matthew Wilcox (Oracle)" <tip-bot2@...utronix.de>
To: linux-tip-commits@...r.kernel.org
Cc: kernel test robot <oliver.sang@...el.com>,
"Matthew Wilcox (Oracle)" <willy@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>, x86@...nel.org,
linux-kernel@...r.kernel.org
Subject: [tip: x86/mm] x86/mm: Clear _PAGE_DIRTY when we clear _PAGE_RW
The following commit has been merged into the x86/mm branch of tip:
Commit-ID: d75a256b6a64132fc7aab57ad4c96218e3ae383b
Gitweb: https://git.kernel.org/tip/d75a256b6a64132fc7aab57ad4c96218e3ae383b
Author: Matthew Wilcox (Oracle) <willy@...radead.org>
AuthorDate: Tue, 25 Feb 2025 19:37:32
Committer: Ingo Molnar <mingo@...nel.org>
CommitterDate: Tue, 25 Feb 2025 20:59:32 +01:00
x86/mm: Clear _PAGE_DIRTY when we clear _PAGE_RW
The bit pattern of _PAGE_DIRTY set and _PAGE_RW clear is used to
mark shadow stacks. This is currently checked for in mk_pte() but
not pfn_pte(). If we add the check to pfn_pte(), it catches vfree()
calling set_direct_map_invalid_noflush(), which calls __change_page_attr(),
which loads the old protection bits from the PTE, clears the specified
bits, and uses pfn_pte() to construct the new PTE.
We should, therefore, clear the _PAGE_DIRTY bit whenever we clear
_PAGE_RW. I opted to do it in the callers in case we want to use
__change_page_attr() to create shadow stacks inside the kernel at some
point in the future. Arguably, we might also want to clear _PAGE_ACCESSED
here.
Reported-by: kernel test robot <oliver.sang@...el.com>
Signed-off-by: Matthew Wilcox (Oracle) <willy@...radead.org>
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Closes: https://lore.kernel.org/oe-lkp/202502241646.719f4651-lkp@intel.com
---
arch/x86/mm/pat/set_memory.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 84d0bca..d174015 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -2628,7 +2628,7 @@ static int __set_pages_np(struct page *page, int numpages)
.pgd = NULL,
.numpages = numpages,
.mask_set = __pgprot(0),
- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
+ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
.flags = CPA_NO_CHECK_ALIAS };
/*
@@ -2715,7 +2715,7 @@ int __init kernel_map_pages_in_pgd(pgd_t *pgd, u64 pfn, unsigned long address,
.pgd = pgd,
.numpages = numpages,
.mask_set = __pgprot(0),
- .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW)),
+ .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW|_PAGE_DIRTY)),
.flags = CPA_NO_CHECK_ALIAS,
};
@@ -2758,7 +2758,7 @@ int __init kernel_unmap_pages_in_pgd(pgd_t *pgd, unsigned long address,
.pgd = pgd,
.numpages = numpages,
.mask_set = __pgprot(0),
- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
+ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
.flags = CPA_NO_CHECK_ALIAS,
};