Message-ID: <CACRpkdZFQSN_t-Vx7xOXq0aF6Vf-XvsZKGF6yNMn7_dCeaZi_w@mail.gmail.com>
Date: Mon, 26 Aug 2024 11:05:48 +0200
From: Linus Walleij <linus.walleij@...aro.org>
To: Vincent Legoll <vincent.legoll@...il.com>
Cc: arnd@...db.de, aaro.koskinen@....fi, alexandre.torgue@...s.st.com,
Andrew Lunn <andrew@...n.ch>, broonie@...nel.org, dmitry.torokhov@...il.com,
gregory.clement@...tlin.com, jeremy@...emypeper.com, jmkrzyszt@...il.com,
kristoffer.ericson@...il.com, Krzysztof Kozlowski <krzk@...nel.org>,
Linux Kernel ML <linux-kernel@...r.kernel.org>,
Russell King - ARM Linux <linux@...linux.org.uk>, Nicolas Pitre <nico@...xnic.net>, nikita.shubin@...uefel.me,
ramanara@...dia.com, richard.earnshaw@....com, richard.sandiford@....com,
robert.jarzmik@...e.fr, sebastian.hesselbarth@...il.com, tony@...mide.com,
linux-mips@...r.kernel.org
Subject: Re: [RFC] arm architecture board/feature deprecation timeline

On Fri, Aug 23, 2024 at 10:47 PM Vincent Legoll
<vincent.legoll@...il.com> wrote:
> It looks like the highmem feature is slated for removal.
>
> I am investigating the loss of some available RAM on a GnuBee PC1 board.
>
> A highmem-enabled kernel can access a 64 MB chunk of RAM that a
> non-highmem kernel can't. The board has 512 MB.
>
> That's more than 10% on a RAM-poor NAS-oriented board, probably worth
> the hassle to get it back.
>
> I built & flashed a current OpenWRT snapshot, without any modifications,
> which gave the following output:
(...)
> The lost RAM is usable again.
>
> Is there an alternative to CONFIG_HIGHMEM for using that RAM chunk?
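
For context: with CONFIG_HIGHMEM, the pages above the lowmem limit sit
outside the kernel's permanent mapping, so kernel code reaches them
through short-lived mappings. A minimal sketch using the generic kmap
API (the helper name is made up, and nothing here is board-specific):

#include <linux/highmem.h>
#include <linux/mm.h>
#include <linux/string.h>

/* Zero an arbitrary page, whether it is lowmem or highmem. */
static void zero_one_page(struct page *page)
{
	/* Cheap for lowmem pages; creates a short-lived kernel mapping
	 * for a highmem page. */
	void *vaddr = kmap_local_page(page);

	memset(vaddr, 0, PAGE_SIZE);
	kunmap_local(vaddr);
}
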
Userspace can still use it, right?
The approach we are taking on ARM32 (even though it's really hard) is
to try to create separate address spaces for the kernel and userspace,
so that in kernel context the kernel can use 4GB of memory as it wants,
without the restriction of userspace taking up the low addresses.

This looks easy until you run into some kernel assumptions about memory
being in a linear map at all times, which I am wrestling with.
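
To make the "linear map" point concrete, here is a simplified sketch
(the constants are made-up examples; the real __va()/__pa() macros live
in per-architecture code):

/* A lot of kernel code assumes that any lowmem physical address can be
 * turned into a kernel virtual address with a constant offset, at any
 * time and in any context. */
#define EXAMPLE_PAGE_OFFSET	0xc0000000UL	/* made-up 3G/1G split  */
#define EXAMPLE_PHYS_OFFSET	0x80000000UL	/* made-up start of RAM */

static inline void *example_phys_to_virt(unsigned long phys)
{
	/* Only holds while all of lowmem stays mapped at that fixed
	 * offset in every address space the kernel can run in, which is
	 * exactly what stops being true once the kernel gets its own
	 * separate address space. */
	return (void *)(phys - EXAMPLE_PHYS_OFFSET + EXAMPLE_PAGE_OFFSET);
}
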
Yours,
Linus Walleij