Message-ID: <66b523ac448e2_c1448294ec@dwillia2-xfh.jf.intel.com.notmuch>
Date: Thu, 8 Aug 2024 12:59:40 -0700
From: Dan Williams <dan.j.williams@...el.com>
To: Thomas Gleixner <tglx@...utronix.de>, Dan Williams
<dan.j.williams@...el.com>, Max Ramanouski <max8rr8@...il.com>,
<dave.hansen@...ux.intel.com>, <luto@...nel.org>, <peterz@...radead.org>
CC: <max8rr8@...il.com>, <linux-kernel@...r.kernel.org>, <x86@...nel.org>, Dan
Williams <dan.j.williams@...el.com>, <jhubbard@...dia.com>,
<apopple@...dia.com>
Subject: Re: [PATCH 1/1] x86/ioremap: Use is_vmalloc_addr in iounmap

[ add Alistair and John ]
Thomas Gleixner wrote:
> On Thu, Aug 08 2024 at 09:39, Dan Williams wrote:
> > Dan Williams wrote:
> >> Apologies, I was trying to quickly reverse-engineer how private memory
> >> might be different from typical memremap_pages(), but it is indeed the
> >> same in this aspect.
> >>
> >> So the real difference is that the private memory case tries to
> >> allocate physical memory by searching for holes in the iomem_resource
> >> starting from U64_MAX. That might explain why only the private memory
> >> case is violating assumptions with respect to high_memory spilling into
> >> vmalloc space.
> >
> > Not U64_MAX, but it searches for free physical address space
> > starting at MAX_PHYSMEM_BITS; see gfr_start().
>
> Wait. MAX_PHYSMEM_BITS is either 46 (4-level) or 52 (5-level), which is
> fully covered by the identity map space.
>
> So even if the search starts from top of that space, how do we end up
> with high_memory > VMALLOC_START?
>
> That does not make any sense at all.
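
For reference, since I pointed at gfr_start() above, here is a rough
sketch of the relevant logic from kernel/resource.c (paraphrased from
memory, so check the tree for the exact body):

	static resource_size_t gfr_start(struct resource *base, unsigned long flags)
	{
		if (flags & GFR_DESCENDING) {
			resource_size_t end;

			/* Never hand out physical addresses above MAX_PHYSMEM_BITS */
			end = min_t(resource_size_t, base->end,
				    (1ULL << MAX_PHYSMEM_BITS) - 1);
			return end - GFR_DEFAULT_ALIGN + 1;
		}

		return ALIGN(base->start, GFR_DEFAULT_ALIGN);
	}

If that clamp holds, the descending search can never hand out a
physical address above MAX_PHYSMEM_BITS, so the resulting high_memory
should stay inside the direct map, which is exactly Thomas's point.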
Max or Alistair, can you provide more details on how private memory
spills over into the vmalloc space on these platforms?
I too would have thought that MAX_PHYSMEM_BITS protects against this?
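
For anyone jumping into the thread, the check that $SUBJECT changes is
the early return at the top of the x86 iounmap(). A rough before/after
(paraphrased, not the literal patch hunk):

	/* arch/x86/mm/ioremap.c, current code: depends on high_memory */
	if ((void __force *)addr <= high_memory)
		return;

	/* with the patch: test the vmalloc range directly */
	if (!is_vmalloc_addr((void __force *)addr))
		return;

is_vmalloc_addr() is just a VMALLOC_START/VMALLOC_END range check, so
it stays correct even if high_memory has been pushed past
VMALLOC_START; how we end up in that state is the open question.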