Message-ID: <7d621ab2-6717-c6b6-5c3c-90af4c2f4afc@intel.com>
Date: Fri, 11 Nov 2022 16:11:32 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Michael Kelley <mikelley@...rosoft.com>, hpa@...or.com,
kys@...rosoft.com, haiyangz@...rosoft.com, wei.liu@...nel.org,
decui@...rosoft.com, luto@...nel.org, peterz@...radead.org,
davem@...emloft.net, edumazet@...gle.com, kuba@...nel.org,
pabeni@...hat.com, lpieralisi@...nel.org, robh@...nel.org,
kw@...ux.com, bhelgaas@...gle.com, arnd@...db.de,
hch@...radead.org, m.szyprowski@...sung.com, robin.murphy@....com,
thomas.lendacky@....com, brijesh.singh@....com, tglx@...utronix.de,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
Tianyu.Lan@...rosoft.com, kirill.shutemov@...ux.intel.com,
sathyanarayanan.kuppuswamy@...ux.intel.com, ak@...ux.intel.com,
isaku.yamahata@...el.com, dan.j.williams@...el.com,
jane.chu@...cle.com, seanjc@...gle.com, tony.luck@...el.com,
x86@...nel.org, linux-kernel@...r.kernel.org,
linux-hyperv@...r.kernel.org, netdev@...r.kernel.org,
linux-pci@...r.kernel.org, linux-arch@...r.kernel.org,
iommu@...ts.linux.dev
Subject: Re: [PATCH v2 01/12] x86/ioremap: Fix page aligned size calculation
in __ioremap_caller()

On 11/10/22 22:21, Michael Kelley wrote:
> If applying the PHYSICAL_PAGE_MASK to the phys_addr argument causes
> upper bits to be masked out, the re-calculation of size to account for
> page alignment is incorrect because the same bits are not masked out
> in last_addr.
>
> Fix this by masking the page aligned last_addr as well.

This makes sense at first glance.
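
For anyone not staring at the code, the calculation in question in
__ioremap_caller() is roughly this (quoting from memory, so the exact
lines may differ a bit):

	offset = phys_addr & ~PAGE_MASK;
	phys_addr &= PHYSICAL_PAGE_MASK;
	size = PAGE_ALIGN(last_addr + 1) - phys_addr;

where last_addr was computed from the unmasked phys_addr earlier in the
function, so any bits that PHYSICAL_PAGE_MASK strips out of phys_addr
are still present in last_addr when 'size' is recomputed.
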
How did you notice this? What is the impact to users? Did the bug
actually cause you some trouble or was it by inspection? Do you have a
sense of how many folks might be impacted? Any thoughts on how it
lasted for 14+ years?

For the functionality of the mapping, I guess 'size' doesn't really
matter because even a 1-byte 'size' will map a page. The other fallout
would be from memtype_reserve() reserving too little. But, that's
unlikely to matter for small mappings because even though:

	ioremap(0x1800, 0x800);

would end up just reserving 0x1000->0x1800, it still wouldn't allow

	ioremap(0x1000, 0x800);

to succeed because *both* of them would end up trying to reserve the
beginning of the page. Basically, the first caller effectively reserves
the whole page and any second user will fail.
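
Put differently, in interval terms (using the numbers above):

	first caller:  reserves 0x1000->0x1800
	second caller: wants    0x1000->0x1800

Both start right at the page boundary, so the second one collides no
matter how short the first reservation is.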

So the other place it would matter would be for mappings that span two
pages, say:

	ioremap(0x1fff, 0x2)
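
Spelling out the arithmetic for that one (assuming 4k pages):

	phys_addr = 0x1fff            -> rounds down to 0x1000
	last_addr = 0x1fff + 0x2 - 1   = 0x2000
	PAGE_ALIGN(last_addr + 1)      = PAGE_ALIGN(0x2001) = 0x3000
	size                           = 0x3000 - 0x1000    = 0x2000

so the correct page-aligned size is two pages.
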
But I guess those aren't very common. Most large ioremap() callers seem
to already have base and size page-aligned.

Anyway, sorry to make such a big deal out of a one-liner. But these
decade-old bugs really make me wonder how they stuck around for so
long.

I'd be curious whether you thought about this too while putting together
this fix.