Message-Id: <0028a425-140d-4baa-b482-d6f2d349f8c2@app.fastmail.com>
Date: Thu, 05 Feb 2026 15:38:15 +0100
From: "Arnd Bergmann" <arnd@...db.de>
To: "Linus Walleij" <linusw@...nel.org>,
"Jonathan Cameron" <jonathan.cameron@...wei.com>
Cc: "Yushan Wang" <wangyushan12@...wei.com>,
"Alexandre Belloni" <alexandre.belloni@...tlin.com>,
"Drew Fustini" <fustini@...nel.org>, "Krzysztof Kozlowski" <krzk@...nel.org>,
"Linus Walleij" <linus.walleij@...aro.org>, "Will Deacon" <will@...nel.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
fanghao11@...wei.com, linuxarm@...wei.com, liuyonglong@...wei.com,
prime.zeng@...ilicon.com, "Zhou Wang" <wangzhou1@...ilicon.com>,
"Wei Xu" <xuwei5@...ilicon.com>, linux-mm@...r.kernel.org,
"SeongJae Park" <sj@...nel.org>,
"Reinette Chatre" <reinette.chatre@...el.com>,
"James Morse" <james.morse@....com>, "Zeng Heng" <zengheng4@...wei.com>,
ben.horgan@....com, "Tony Luck" <tony.luck@...el.com>,
"Dave Martin" <Dave.Martin@....com>, "Babu Moger" <babu.moger@....com>
Subject: Re: [PATCH 1/3] soc cache: L3 cache driver for HiSilicon SoC
On Thu, Feb 5, 2026, at 14:47, Linus Walleij wrote:
> On Thu, Feb 5, 2026 at 11:18 AM Jonathan Cameron
> <jonathan.cameron@...wei.com> wrote:
>
>> Take the closest example to this which is resctrl (MPAM on arm).
>> This actually has a feature that smells a bit like this.
>> Pseudo-cache locking.
>>
>> https://docs.kernel.org/filesystems/resctrl.html#cache-pseudo-locking
>
> That was very interesting. And more than a little bit complex.
> IIUC MPAM is mostly about requesting bandwidth to/from the
> memory.
>
> But maybe cache lockdown can build on top?
The CoreLink interconnect blocks also have a way to lock
cache lines, e.g. for the CMN-600:
https://developer.arm.com/documentation/100180/0200/SLC-Memory-System/Software-configurable-memory-region-locking?lang=en
These are fairly common, but so far we have never had the need to
expose this in a low-level driver. If we add a driver interface
for HiSilicon chips that use a custom interconnect, we should
probably at least check that the user ABI would theoretically
also work on Arm interconnects if one wrote a driver for them.
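To make that concrete, here is a purely illustrative sketch of what
an interconnect-agnostic lock request could look like; none of these
structures or ioctl numbers exist today, they only show the kind of
parameters such an ABI would have to carry regardless of whether the
backend is a CMN SLC or the HiSilicon L3 controller:

/* hypothetical uapi header, not part of any existing driver */
#include <linux/ioctl.h>
#include <linux/types.h>

struct cache_lock_req {
	__u64 addr;		/* start of the region to lock */
	__u64 size;		/* length in bytes */
	__u32 cache_level;	/* e.g. 3 for an L3/system level cache */
	__u32 flags;		/* reserved, must be zero */
};

#define CACHE_LOCK_REGION	_IOW('c', 0x01, struct cache_lock_req)
#define CACHE_UNLOCK_REGION	_IOW('c', 0x02, struct cache_lock_req)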
Another similar technique is memory that has already been locked
down by firmware (or by hardware design, i.e. it is not a cache at
all), and there are a few examples I remember:
- drivers/misc/sram.c exports SRAM at fixed physical addresses to
  userspace. For a deeply embedded system with a known amount of
  locked-down L3 cache, the firmware could simply pre-lock the cache
  and expose it to the kernel as an SRAM (see the userspace sketch
  after this list).
- your own arch/arm/kernel/tcm.c, which does not currently have
  any upstream users. I don't remember if it ever did.
- arch/sh/mm/numa.c supports locked SRAM through numactl, originally
  added by Paul Mundt. The idea of using numactl to move page cache
  pages there is somewhat appealing, but it really messes with the
  assumptions behind the NUMA interfaces, and I think this is the
  only 32-bit target that exposes numactl. There are probably no
  applications left using this on modern kernels (see the mbind()
  sketch at the end of this list).
- arch/arc/ had an elaborate downstream patch for wireless network
  SoCs from (IIRC) Quantenna that would mark any performance-sensitive
  .text and .data parts of the network stack to be linked into
  on-chip SRAM, but it had no user interface.
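To illustrate the sram.c option: partitions marked for export end up
as sysfs binary attributes, so userspace access is a plain read or
write. A minimal sketch, assuming a partition exported by
drivers/misc/sram.c and guessing the sysfs path (it depends on the
platform device name and the partition base address):

/* build: cc -o sram-read sram-read.c */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	char buf[64];
	ssize_t n;
	/* path is an assumption, adjust for the actual device */
	int fd = open("/sys/devices/platform/20000000.sram/20000000.sram",
		      O_RDONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	n = read(fd, buf, sizeof(buf));
	if (n < 0)
		perror("read");
	else
		printf("read %zd bytes from the SRAM export\n", n);
	close(fd);
	return 0;
}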
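And for the numactl approach, the userspace side is just a memory
policy bound to whatever node the platform presents for its on-chip
SRAM. A sketch using mbind(), with node 1 as an arbitrary placeholder
for that node:

/* build: cc -o sram-bind sram-bind.c -lnuma */
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 1 << 20;
	unsigned long nodemask = 1UL << 1;	/* placeholder: SRAM is node 1 */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	if (mbind(p, len, MPOL_BIND, &nodemask, sizeof(nodemask) * 8, 0)) {
		perror("mbind");
		return 1;
	}
	memset(p, 0, len);	/* fault the pages in on the bound node */
	printf("allocated %zu bytes on node 1\n", len);
	return 0;
}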
Arnd