Message-Id: <9ac3eb75-efac-4f3c-a3e9-6953db4babf8@app.fastmail.com>
Date: Mon, 22 Sep 2025 15:20:23 +0200
From: "Arnd Bergmann" <arnd@...db.de>
To: "Stefan Wiehler" <stefan.wiehler@...ia.com>
Cc: linux-kernel@...r.kernel.org, linux-arm-kernel@...ts.infradead.org
Subject: Re: Highmem on AXM and K2
On Fri, Sep 19, 2025, at 15:24, Stefan Wiehler wrote:
> Hi Arnd,
>
> You've been calling out for users still needing highmem, and here we are ;-) We
> use both the Intel AXM 5516 and the TI KeyStone II with > 4 GB of RAM in our
> networking products.
Hi Stefan,
Thanks a lot for getting back to me on this!
> We use the AXM 5516 with 6 GB of RAM and highmem is therefore a must for us.
> The latest estimate from product management for EOL is June 2029. We would
> appreciate if this date (plus some buffer) could be kept in mind when choosing
> the last LTS kernel with highmem.
Ok, I think this one is fine for the highmem usage at least. According
to our current discussions, we'd likely start by reducing the references
to highmem but keep supporting it for the page cache. Even with the
earlier timeline I posted in [1], 2029 would be covered by the LTS kernel
that gets released in late 2026.
The other problem on this platform is that upstream support was
abandoned by Intel a long time ago, and as you know the drivers that
did get merged are incomplete, so you are already on your own.
Alexander Sverdlin at some point in the past considered upstreaming
more of it, but that never happened during his time at Nokia. If you
or someone else is interested in improving upstream support for the
Axxia platform, that is still something that could be done. I know
there are other users interested in it, and some of the original
software team are now working for consulting companies that would
likely offer their services.
> With the TI K2, the situation is more complicated: The EOL for the latest
> product hasn't even been defined yet, but most likely we need to support it
> until 2037; it really depends on when the last 2G/3G networks will be shut
> down. Obviously the community cannot wait that long with highmem removal. While
> we have 5 GB of RAM there, a little bit less than half is used by Linux.
Right, this is clearly the more worrying of the two platforms, and I had
not expected the timeline to extend that far into the future. The platform
has some nasty quirks with its memory layout (having to use lowmem at
physical address >0x100000000 to get coherent DMA, and needing ARM_LPAE for
that) and does not have a lot of activity upstream, so I had hoped that
it would not be as long-lived as some other platforms.
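For reference, something along these lines (only a rough sketch from my
side, assuming Python 3 and root access on the target, neither of which
has come up in this thread) can show whether the System RAM ranges in
/proc/iomem start above the 32-bit physical boundary, i.e. whether they
depend on ARM_LPAE at all:

#!/usr/bin/env python3
# Rough sketch: list "System RAM" ranges from /proc/iomem and flag the
# ones that start above 4GiB and therefore need ARM_LPAE to be mapped.
# Run as root; without privileges the addresses read back as zero.

GIB = 1 << 30

with open("/proc/iomem") as f:
    for line in f:
        entry = line.strip()
        if not entry.endswith("System RAM"):
            continue
        span, _ = entry.split(" : ", 1)
        start, end = (int(x, 16) for x in span.split("-"))
        size_mib = (end - start + 1) >> 20
        where = "above 4GiB (needs LPAE)" if start >= 4 * GIB else "below 4GiB"
        print(f"{start:#012x}-{end:#012x}  {size_mib:6d} MiB  {where}")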
> I see two options:
> 1. We'll need to evaluate if we could move away from current CONFIG_VMSPLIT_3G
> with our rather special memory configuration. My current understanding is that
> there hasn't been a lot of interest in getting CONFIG_VMSPLIT_4G_4G into
> mainline. As we cannot accept major performance degradation, I doubt that this
> will be a viable path.
> 2. We'll need to switch over to the last highmem-enabled SLTS kernel, earliest
> around 2028 (to keep some support buffer).
Right, I agree that these are the two possible options, and I think
we can make the timeline work for option 2, though option 1 is
likely better in the long run if we can come up with a solution
that works for your specific workload.
Can you share more details about how exactly the system uses its highmem
today? In particular:
- The physical memory layout, especially whether the memory
that is assigned to Linux is physically contiguous, or if
the memory owned by other components (the network processor
or an FPGA) is taken from the middle. Note that at the
moment, any memory that is too far away from the first
page becomes highmem, even if the total RAM is under 800MB.
- Is the memory mainly used for file backing (including tmpfs) or
is it used as anonymous memory (like malloc()) in a few processes?
- If most of the memory is mapped into a small number of
processes, how close are you to reaching the available 3GB
virtual address limit in those processes?
- If possible, share the contents of /proc/meminfo, /proc/zoneinfo
and /proc/${PID}/maps for the processes with the largest vsize
(according to ps); a rough collection script is sketched below this list.
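If it helps, a small script along these lines (just a sketch, assuming
Python 3 is available on the box; adjust paths and privileges as needed)
could collect all of the above in one go:

#!/usr/bin/env python3
# Rough sketch: dump /proc/meminfo, /proc/zoneinfo and the maps of the
# process with the largest virtual size (roughly what ps reports as VSZ).
# Run as root so that other processes' maps are readable.
import os

def vsize_kb(pid):
    # VmSize from /proc/<pid>/status, in kB; kernel threads have no such field.
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmSize:"):
                    return int(line.split()[1])
    except OSError:
        pass
    return 0

pids = [d for d in os.listdir("/proc") if d.isdigit()]
largest = max(pids, key=vsize_kb)

for path in ("/proc/meminfo", "/proc/zoneinfo", f"/proc/{largest}/maps"):
    print(f"===== {path} =====")
    with open(path) as f:
        print(f.read())
print(f"largest process: pid {largest}, VmSize {vsize_kb(largest)} kB")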
If the largest process needs more than 2GB of virtual address
space, then there is not much hope in changing to CONFIG_VMSPLIT_2G
or CONFIG_VMSPLIT_1G. On the other hand, if your workload does
not rely on having all that memory mapped into a single address
space, using VMSPLIT_1G would likely improve system performance
noticeably and have no other downside.
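To get an idea where you stand, a back-of-the-envelope check like the
one below (again just a sketch, not tested on your hardware) sums the
mappings of one process and compares them against the user address
space left by each of the VMSPLIT options:

#!/usr/bin/env python3
# Rough sketch: add up the mapped ranges in /proc/<pid>/maps and compare
# against the user address space available under CONFIG_VMSPLIT_3G/2G/1G.
import sys

pid = sys.argv[1] if len(sys.argv) > 1 else "self"

mapped = 0
with open(f"/proc/{pid}/maps") as f:
    for line in f:
        start, end = (int(x, 16) for x in line.split()[0].split("-"))
        mapped += end - start

GIB = 1 << 30
for name, user_va in (("VMSPLIT_3G", 3 * GIB), ("VMSPLIT_2G", 2 * GIB),
                      ("VMSPLIT_1G", 1 * GIB)):
    pct = 100.0 * mapped / user_va
    print(f"{name}: {mapped >> 20} MiB mapped of {user_va >> 20} MiB "
          f"user VA ({pct:.1f}%)")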
Arnd
[1] https://lore.kernel.org/lkml/4ff89b72-03ff-4447-9d21-dd6a5fe1550f@app.fastmail.com/