Message-ID: <Z-uqcSYvRD6ZPPQs@gmail.com>
Date: Tue, 1 Apr 2025 10:57:21 +0200
From: Ingo Molnar <mingo@...nel.org>
To: Balbir Singh <balbirs@...dia.com>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org,
Christian König <christian.koenig@....com>,
Kees Cook <kees@...nel.org>, Bjorn Helgaas <bhelgaas@...gle.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>,
Alex Deucher <alexander.deucher@....com>,
Bert Karwatzki <spasswolf@....de>,
Madhavan Srinivasan <maddy@...ux.ibm.com>,
Nicholas Piggin <npiggin@...il.com>
Subject: Re: [PATCH] arch/x86: memory_hotplug, do not bump up max_pfn for
device private pages

* Balbir Singh <balbirs@...dia.com> wrote:

> arch/x86/mm/init_64.c | 15 ++++++++++++---
> 1 file changed, 12 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index dce60767124f..cc60b57473a4 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -970,9 +970,18 @@ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
>         ret = __add_pages(nid, start_pfn, nr_pages, params);
>         WARN_ON_ONCE(ret);
>
> -       /* update max_pfn, max_low_pfn and high_memory */
> -       update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
> -                                 nr_pages << PAGE_SHIFT);
> +       /*
> +        * add_pages() is called by memremap_pages() for adding device
> +        * private pages. Do not bump up max_pfn in the device private
> +        * path, because max_pfn changes affect dma_addressing_limited():
> +        * if max_pfn ends up beyond the device's addressable memory,
> +        * dma_addressing_limited() can return true and force device
> +        * drivers to use bounce buffers, hurting their performance.
> +        */
> +       if (!params->pgmap)
> +               /* update max_pfn, max_low_pfn and high_memory */
> +               update_end_of_memory_vars(start_pfn << PAGE_SHIFT,
> +                                         nr_pages << PAGE_SHIFT);

So given that device private pages are not supposed to be mapped
directly, not including these PFNs in max_pfn absolutely sounds like
the correct fix to me.

But wouldn't the abnormally high max_pfn also cause us to create too
large a direct mapping to cover it, or does something save us there?
Such an overly large mapping would increase kernel page table size
rather substantially on non-gbpages systems, AFAICS.

Say we create a 16 TB mapping on a 16 GB system - 1024x larger: to map
16 TB with largepages requires 8,388,608 largepage mappings (!), which
with 8-byte page table entries takes up ~64 MB of unswappable RAM. (!!)

Is my math off, or am I misunderstanding something here?
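
For reference, here's a quick userspace sketch of that arithmetic - a
minimal illustration of the numbers above (not part of the patch),
assuming the usual x86-64 values of 2 MB largepages and 8-byte page
table entries:

#include <stdio.h>

int main(void)
{
        /* Assumed values: 2 MB largepages, 8 bytes per page table entry. */
        unsigned long long mapping_size   = 16ULL << 40; /* 16 TB direct mapping */
        unsigned long long largepage_size = 2ULL  << 20; /* 2 MB largepage */
        unsigned long long entry_size     = 8;           /* bytes per entry */

        unsigned long long nr_mappings = mapping_size / largepage_size;
        unsigned long long overhead    = nr_mappings * entry_size;

        /* Expect 8388608 mappings and 64 MB of page table overhead. */
        printf("largepage mappings:  %llu\n", nr_mappings);
        printf("page table overhead: %llu MB\n", overhead >> 20);

        return 0;
}

which prints 8388608 mappings and 64 MB, matching the back-of-the-
envelope numbers above.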
Anyway, I've applied your fix to tip:x86/urgent with a few edits to the
comments and the changelog, but I've also expanded the Cc: list of the
commit liberally, in hope of getting more reviews for this fix. :-)

Thanks,

        Ingo