Message-ID: <f609436a-dd4a-43ac-9259-384d3695e709@nokia.com>
Date: Wed, 15 Oct 2025 18:08:47 +0200
From: Stefan Wiehler <stefan.wiehler@...ia.com>
To: Arnd Bergmann <arnd@...db.de>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: Highmem on AXM and K2
Hi Arnd,
> Thanks a lot for getting back to me on this!
Thanks as well for your comprehensive feedback, and sorry for the late reply;
I've been on a long vacation.
> Ok, I think this one is fine for the highmem usage at least. According
> to our current discussions, we'd likely start by reducing the references
> to highmem but keep supporting it for the page cache. Even with the
> earlier timeline I posted in [1], 2029 would be covered by the LTS kernel
> that gets released in late 2026.
Ok, thanks for confirming that!
> The other problem on this platform is that upstream support has been
> abandoned by Intel a long time ago and as you know the drivers that
> did get merged are incomplete, so you are already on your own.
>
> Alexander Sverdlin at some point in the past considered upstreaming
> more of it, but that never happened during his time at Nokia. If you
> or someone else is interested in improving upstream support for the
> Axxia platform, that is still something that could be done. I know
> there are other users interested in it, and some of the original
> software team are now working for consulting companies that would
> likely offer their services.
Unfortunately, we finally abandoned our plans to upstream AXM last year.
Almost everybody with a deep understanding of AXM (like Alex) has left the
company in the meantime, some drivers are not in a state that would allow
upstreaming without major refactoring (to put it diplomatically ;-), and with
the EOL on the horizon, there is not much interest from the management side
anymore either…
> Can you share more details about how exactly the system uses its highmem
> today? In particular:
>
> - The physical memory layout, especially whether the memory
> that is assigned to Linux is physically contiguous, or if
> the memory owned by other components (the network processor
> or an FPGA) is taken from the middle. Note that at the
> moment, any memory that is too far away from the first
> page becomes highmem, even if the total RAM is under 800MB.
>
> - Is the memory mainly used for file backing (including tmpfs) or
> is it used as anonymous memory (like malloc()) in a few processes?
>
> - If most of the memory is mapped into a small number of
> processes, how close are you to reaching the available 3GB
> virtual address limit in those processes?
>
> - If possible, share the contents of /proc/meminfo, /proc/zoneinfo
> and the /proc/${PID}/maps for the processes with the largest vsize
> (according to ps)?
>
> If the largest process needs more than 2GB of virtual address
> space, then there is not much hope in changing to CONFIG_VMSPLIT_2G
> or CONFIG_VMSPLIT_1G. On the other hand if your workload does
> not rely on having all that memory mapped into a single address
> space, using VMSPLIT_1G would likely improve system performance
> noticeably and have no other downside.
For some of these questions, I'll need to consult with the higher-layer
application developers first. I'll do some investigation and get back to you
on this; below is a rough sketch of how I'd collect the /proc data.
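This is only a minimal sketch from my side, not something we run today: it
dumps /proc/meminfo, the per-zone counters from /proc/zoneinfo, and the full
/proc/<pid>/maps of the processes with the largest VmSize, and compares each
VmSize against the nominal 3G/2G/1G user address space sizes of the VMSPLIT
options (not the exact TASK_SIZE values). The top-5 cut-off and the zoneinfo
field filtering are arbitrary choices of mine.

#!/usr/bin/env python3
# Collect the data Arnd asked about: /proc/meminfo, /proc/zoneinfo and the
# maps of the processes with the largest virtual size, plus how close each
# of them is to the nominal 3G/2G/1G user VA limits of the VMSPLIT options.
import glob
import os

GIB = 1 << 30

def read_file(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return ""

def vmsize_kb(pid):
    # VmSize from /proc/<pid>/status; kernel threads have no VmSize line.
    for line in read_file(f"/proc/{pid}/status").splitlines():
        if line.startswith("VmSize:"):
            return int(line.split()[1])
    return 0

def main():
    print("==== /proc/meminfo ====")
    print(read_file("/proc/meminfo"))

    print("==== /proc/zoneinfo (per-zone counters only) ====")
    keys = ("pages free", "min", "low", "high", "spanned", "present", "managed")
    for line in read_file("/proc/zoneinfo").splitlines():
        if line.startswith("Node") or line.lstrip().startswith(keys):
            print(line)

    pids = [int(os.path.basename(p)) for p in glob.glob("/proc/[0-9]*")]
    for pid in sorted(pids, key=vmsize_kb, reverse=True)[:5]:
        kb = vmsize_kb(pid)
        comm = read_file(f"/proc/{pid}/comm").strip()
        vsz = kb * 1024
        # How much of the user address space of each VMSPLIT option is used?
        usage = ", ".join(f"{vsz / (n * GIB):.0%} of {n}G" for n in (3, 2, 1))
        print(f"==== pid {pid} ({comm}): VmSize {kb} kB -> {usage} ====")
        # The full map shows the anonymous vs. file-backed split per mapping.
        print(read_file(f"/proc/{pid}/maps"))

if __name__ == "__main__":
    main()

It needs to run as root so that the maps of all processes are readable.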
Kind regards,
Stefan