Message-ID: <afd222473c7b18ea942e754a6c4df26ed74eedee.1664298261.git.thomas.lendacky@amd.com>
Date: Tue, 27 Sep 2022 12:04:16 -0500
From: Tom Lendacky <thomas.lendacky@....com>
To: <linux-kernel@...r.kernel.org>, <x86@...nel.org>
CC: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"Kirill A. Shutemov" <kirill@...temov.name>,
"H. Peter Anvin" <hpa@...or.com>,
Michael Roth <michael.roth@....com>,
Joerg Roedel <jroedel@...e.de>,
Andy Lutomirski <luto@...nel.org>,
Peter Zijlstra <peterz@...radead.org>
Subject: [PATCH v5 1/6] x86/sev: Fix calculation of end address based on number of pages

When calculating an end address based on an unsigned int number of pages,
the number of pages must be cast to an unsigned long before being shifted.
Otherwise the shift is performed in 32 bits, and a page count of 0x100000 or
more (4GB with 4K pages) wraps around, producing a truncated (possibly zero)
offset and an incorrect end address.
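
For illustration only, a minimal user-space sketch of the truncation (the
values are arbitrary; it assumes PAGE_SHIFT == 12 and a 64-bit unsigned
long, as on x86-64 with 4K pages):

  #include <stdio.h>

  #define PAGE_SHIFT 12

  int main(void)
  {
          unsigned int npages = 0x100000;        /* 4 GB worth of 4K pages */
          unsigned long base  = 0x100000000UL;   /* example start address */

          /* Shift is done as unsigned int: 0x100000 << 12 wraps to 0. */
          unsigned long bad_end  = base + (npages << PAGE_SHIFT);

          /* The cast widens the operand, so the shift is done in 64 bits. */
          unsigned long good_end = base + ((unsigned long)npages << PAGE_SHIFT);

          printf("without cast: 0x%lx\n", bad_end);    /* 0x100000000 */
          printf("with cast:    0x%lx\n", good_end);   /* 0x200000000 */
          return 0;
  }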
Fixes: 5e5ccff60a29 ("x86/sev: Add helper for validating pages in early enc attribute changes")
Signed-off-by: Tom Lendacky <thomas.lendacky@....com>
---
arch/x86/kernel/sev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kernel/sev.c b/arch/x86/kernel/sev.c
index c05f0124c410..cac56540929d 100644
--- a/arch/x86/kernel/sev.c
+++ b/arch/x86/kernel/sev.c
@@ -649,7 +649,7 @@ static void pvalidate_pages(unsigned long vaddr, unsigned int npages, bool validate)
 	int rc;
 
 	vaddr = vaddr & PAGE_MASK;
-	vaddr_end = vaddr + (npages << PAGE_SHIFT);
+	vaddr_end = vaddr + ((unsigned long)npages << PAGE_SHIFT);
 
 	while (vaddr < vaddr_end) {
 		rc = pvalidate(vaddr, RMP_PG_SIZE_4K, validate);
@@ -666,7 +666,7 @@ static void __init early_set_pages_state(unsigned long paddr, unsigned int npages)
 	u64 val;
 
 	paddr = paddr & PAGE_MASK;
-	paddr_end = paddr + (npages << PAGE_SHIFT);
+	paddr_end = paddr + ((unsigned long)npages << PAGE_SHIFT);
 
 	while (paddr < paddr_end) {
 		/*
--
2.37.3