lists.openwall.net - Open Source and information security mailing list archives
Date: Mon, 28 Nov 2022 14:43:28 +0000
From: "Michael Kelley (LINUX)" <mikelley@...rosoft.com>
To: Wei Liu <wei.liu@...nel.org>, "bp@...en8.de" <bp@...en8.de>
Cc: "hpa@...or.com" <hpa@...or.com>, KY Srinivasan <kys@...rosoft.com>,
	Haiyang Zhang <haiyangz@...rosoft.com>, Dexuan Cui <decui@...rosoft.com>,
	"luto@...nel.org" <luto@...nel.org>, "peterz@...radead.org" <peterz@...radead.org>,
	"davem@...emloft.net" <davem@...emloft.net>, "edumazet@...gle.com" <edumazet@...gle.com>,
	"kuba@...nel.org" <kuba@...nel.org>, "pabeni@...hat.com" <pabeni@...hat.com>,
	"lpieralisi@...nel.org" <lpieralisi@...nel.org>, "robh@...nel.org" <robh@...nel.org>,
	"kw@...ux.com" <kw@...ux.com>, "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
	"arnd@...db.de" <arnd@...db.de>, "hch@...radead.org" <hch@...radead.org>,
	"m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
	"robin.murphy@....com" <robin.murphy@....com>,
	"thomas.lendacky@....com" <thomas.lendacky@....com>,
	"brijesh.singh@....com" <brijesh.singh@....com>, "tglx@...utronix.de" <tglx@...utronix.de>,
	"mingo@...hat.com" <mingo@...hat.com>,
	"dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
	Tianyu Lan <Tianyu.Lan@...rosoft.com>,
	"kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
	"sathyanarayanan.kuppuswamy@...ux.intel.com" <sathyanarayanan.kuppuswamy@...ux.intel.com>,
	"ak@...ux.intel.com" <ak@...ux.intel.com>, "isaku.yamahata@...el.com" <isaku.yamahata@...el.com>,
	"Williams, Dan J" <dan.j.williams@...el.com>, "jane.chu@...cle.com" <jane.chu@...cle.com>,
	"seanjc@...gle.com" <seanjc@...gle.com>, "tony.luck@...el.com" <tony.luck@...el.com>,
	"x86@...nel.org" <x86@...nel.org>, "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>, "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
	"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>, "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>
Subject: RE: [PATCH v4 1/1] x86/ioremap: Fix page aligned size calculation in __ioremap_caller()

From: Wei Liu <wei.liu@...nel.org> Sent: Friday, November 25, 2022 7:20 AM
>
> On Tue, Nov 22, 2022 at 09:40:42AM -0800, Michael Kelley wrote:
> > Current code re-calculates the size after aligning the starting and
> > ending physical addresses on a page boundary. But the re-calculation
> > also embeds the masking of high order bits that exceed the size of
> > the physical address space (via PHYSICAL_PAGE_MASK). If the masking
> > removes any high order bits, the size calculation results in a huge
> > value that is likely to immediately fail.
> >
> > Fix this by re-calculating the page-aligned size first. Then mask any
> > high order bits using PHYSICAL_PAGE_MASK.
> >
> > Fixes: ffa71f33a820 ("x86, ioremap: Fix incorrect physical address handling in PAE mode")
> > Acked-by: Dave Hansen <dave.hansen@...ux.intel.com>
> > Signed-off-by: Michael Kelley <mikelley@...rosoft.com>
>
> Reviewed-by: Wei Liu <wei.liu@...nel.org>
>
> > ---
> >
> > This patch was previously Patch 1 of a larger series[1]. Breaking
> > it out separately per discussion with Dave Hansen and Boris Petkov.
> >
> > [1] https://lore.kernel.org/linux-hyperv/1668624097-14884-1-git-send-email-mikelley@microsoft.com/

Boris -- you were going to pick up this patch separately through urgent. Can you go ahead and do that?
https://lore.kernel.org/linux-hyperv/Y3vo5drAFPQSsrF4@zn.tnic/

Michael

> >  arch/x86/mm/ioremap.c | 8 +++++++-
> >  1 file changed, 7 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/mm/ioremap.c b/arch/x86/mm/ioremap.c
> > index 78c5bc6..6453fba 100644
> > --- a/arch/x86/mm/ioremap.c
> > +++ b/arch/x86/mm/ioremap.c
> > @@ -217,9 +217,15 @@ static void __ioremap_check_mem(resource_size_t addr, unsigned long size,
> >  	 * Mappings have to be page-aligned
> >  	 */
> >  	offset = phys_addr & ~PAGE_MASK;
> > -	phys_addr &= PHYSICAL_PAGE_MASK;
> > +	phys_addr &= PAGE_MASK;
> >  	size = PAGE_ALIGN(last_addr+1) - phys_addr;
> >
> > +	/*
> > +	 * Mask out any bits not part of the actual physical
> > +	 * address, like memory encryption bits.
> > +	 */
> > +	phys_addr &= PHYSICAL_PAGE_MASK;
> > +
> >  	retval = memtype_reserve(phys_addr, (u64)phys_addr + size,
> > 				  pcm, &new_pcm);
> >  	if (retval) {
> > --
> > 1.8.3.1