Message-ID: <BYAPR21MB1688430B2111541FE68D3569D7059@BYAPR21MB1688.namprd21.prod.outlook.com>
Date:   Mon, 14 Nov 2022 16:53:36 +0000
From:   "Michael Kelley (LINUX)" <mikelley@...rosoft.com>
To:     Dave Hansen <dave.hansen@...el.com>,
        "hpa@...or.com" <hpa@...or.com>, KY Srinivasan <kys@...rosoft.com>,
        Haiyang Zhang <haiyangz@...rosoft.com>,
        "wei.liu@...nel.org" <wei.liu@...nel.org>,
        Dexuan Cui <decui@...rosoft.com>,
        "luto@...nel.org" <luto@...nel.org>,
        "peterz@...radead.org" <peterz@...radead.org>,
        "davem@...emloft.net" <davem@...emloft.net>,
        "edumazet@...gle.com" <edumazet@...gle.com>,
        "kuba@...nel.org" <kuba@...nel.org>,
        "pabeni@...hat.com" <pabeni@...hat.com>,
        "lpieralisi@...nel.org" <lpieralisi@...nel.org>,
        "robh@...nel.org" <robh@...nel.org>, "kw@...ux.com" <kw@...ux.com>,
        "bhelgaas@...gle.com" <bhelgaas@...gle.com>,
        "arnd@...db.de" <arnd@...db.de>,
        "hch@...radead.org" <hch@...radead.org>,
        "m.szyprowski@...sung.com" <m.szyprowski@...sung.com>,
        "robin.murphy@....com" <robin.murphy@....com>,
        "thomas.lendacky@....com" <thomas.lendacky@....com>,
        "brijesh.singh@....com" <brijesh.singh@....com>,
        "tglx@...utronix.de" <tglx@...utronix.de>,
        "mingo@...hat.com" <mingo@...hat.com>,
        "bp@...en8.de" <bp@...en8.de>,
        "dave.hansen@...ux.intel.com" <dave.hansen@...ux.intel.com>,
        Tianyu Lan <Tianyu.Lan@...rosoft.com>,
        "kirill.shutemov@...ux.intel.com" <kirill.shutemov@...ux.intel.com>,
        "sathyanarayanan.kuppuswamy@...ux.intel.com" 
        <sathyanarayanan.kuppuswamy@...ux.intel.com>,
        "ak@...ux.intel.com" <ak@...ux.intel.com>,
        "isaku.yamahata@...el.com" <isaku.yamahata@...el.com>,
        "Williams, Dan J" <dan.j.williams@...el.com>,
        "jane.chu@...cle.com" <jane.chu@...cle.com>,
        "seanjc@...gle.com" <seanjc@...gle.com>,
        "tony.luck@...el.com" <tony.luck@...el.com>,
        "x86@...nel.org" <x86@...nel.org>,
        "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        "linux-hyperv@...r.kernel.org" <linux-hyperv@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-pci@...r.kernel.org" <linux-pci@...r.kernel.org>,
        "linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
        "iommu@...ts.linux.dev" <iommu@...ts.linux.dev>
Subject: RE: [PATCH v2 01/12] x86/ioremap: Fix page aligned size calculation
 in __ioremap_caller()

From: Dave Hansen <dave.hansen@...el.com> Sent: Monday, November 14, 2022 8:40 AM
> 
> On 11/10/22 22:21, Michael Kelley wrote:
> > --- a/arch/x86/mm/ioremap.c
> > +++ b/arch/x86/mm/ioremap.c
> > @@ -218,7 +218,7 @@ static void __ioremap_check_mem(resource_size_t addr,
> unsigned long size,
> >  	 */
> >  	offset = phys_addr & ~PAGE_MASK;
> >  	phys_addr &= PHYSICAL_PAGE_MASK;
> > -	size = PAGE_ALIGN(last_addr+1) - phys_addr;
> > +	size = (PAGE_ALIGN(last_addr+1) & PHYSICAL_PAGE_MASK) - phys_addr;
> 
> Michael, thanks for the explanation in your other reply.  First and
> foremost, I *totally* missed the reason for this patch.  I was thinking
> about issues that could pop up from the _lower_ bits being masked off.
> 
> Granted, your changelog _did_ say "upper bits", so shame on me.  But, it
> would be great to put some more background in the changelog to make it a
> bit harder for silly reviewers to miss such things.
> 
> I'd also like to propose something that I think is more straightforward:
> 
>         /*
>          * Mappings have to be page-aligned
>          */
>         offset = phys_addr & ~PAGE_MASK;
>         phys_addr &= PAGE_MASK;
>         size = PAGE_ALIGN(last_addr+1) - phys_addr;
> 
> 	/*
> 	 * Mask out any bits not part of the actual physical
> 	 * address, like memory encryption bits.
> 	 */
> 	phys_addr &= PHYSICAL_PAGE_MASK;
> 
> Because, first of all, that "Mappings have to be page-aligned" thing is
> (now) doing more than page-aligning things.  Second, the moment you mask
> out the metadata bits, the 'size' calculation gets harder.  Doing it in
> two phases (page alignment followed by metadata bit masking) breaks up
> the two logical operations.
> 

Works for me.  Will do this in v3.

Michael
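
P.S. For anyone following the thread, below is a quick userspace sketch of the
two-phase calculation being discussed.  The metadata bit position, the example
addresses, and the simplified PHYSICAL_PAGE_MASK (just PAGE_MASK with that one
bit cleared) are all made up for illustration and don't correspond to any real
platform; the point is only to show why the size math has to happen before the
metadata bits are stripped from the address.

/*
 * Illustrative sketch only: hypothetical metadata bit and addresses,
 * and a simplified PHYSICAL_PAGE_MASK that just clears that one bit.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT		12
#define PAGE_SIZE		(1ULL << PAGE_SHIFT)
#define PAGE_MASK		(~(PAGE_SIZE - 1))
#define PAGE_ALIGN(x)		(((x) + PAGE_SIZE - 1) & PAGE_MASK)

#define META_BIT		(1ULL << 47)	/* hypothetical encryption/metadata bit */
#define PHYSICAL_PAGE_MASK	(PAGE_MASK & ~META_BIT)

int main(void)
{
	uint64_t phys_addr = META_BIT | 0x1000123;	/* example request with metadata bit set */
	uint64_t size      = 0x2000;
	uint64_t last_addr = phys_addr + size - 1;
	uint64_t offset, new_size;

	/* Original ordering for comparison: stripping the metadata bit from
	 * phys_addr first makes the size huge, because PAGE_ALIGN(last_addr+1)
	 * still carries META_BIT and it no longer cancels out. */
	uint64_t bad_size = PAGE_ALIGN(last_addr + 1) - (phys_addr & PHYSICAL_PAGE_MASK);

	/* Phase 1: page alignment.  The size math stays simple because both
	 * ends still carry the metadata bit, so it cancels out. */
	offset     = phys_addr & ~PAGE_MASK;
	phys_addr &= PAGE_MASK;
	new_size   = PAGE_ALIGN(last_addr + 1) - phys_addr;

	/* Phase 2: strip metadata bits from the address only. */
	phys_addr &= PHYSICAL_PAGE_MASK;

	printf("offset    = %#llx\n", (unsigned long long)offset);
	printf("phys_addr = %#llx\n", (unsigned long long)phys_addr);
	printf("bad size  = %#llx  (metadata bit leaked into the length)\n",
	       (unsigned long long)bad_size);
	printf("size      = %#llx  (three pages, as expected)\n",
	       (unsigned long long)new_size);
	return 0;
}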
