Message-ID: <20260204134447.00000afd@huawei.com>
Date: Wed, 4 Feb 2026 13:44:47 +0000
From: Jonathan Cameron <jonathan.cameron@...wei.com>
To: Linus Walleij <linusw@...nel.org>
CC: Yushan Wang <wangyushan12@...wei.com>, <alexandre.belloni@...tlin.com>,
	<arnd@...db.de>, <fustini@...nel.org>, <krzk@...nel.org>,
	<linus.walleij@...aro.org>, <will@...nel.org>,
	<linux-arm-kernel@...ts.infradead.org>, <linux-kernel@...r.kernel.org>,
	<fanghao11@...wei.com>, <linuxarm@...wei.com>, <liuyonglong@...wei.com>,
	<prime.zeng@...ilicon.com>, <wangzhou1@...ilicon.com>,
	<xuwei5@...ilicon.com>, SeongJae Park <sj@...nel.org>, <linux-mm@...ck.org>
Subject: Re: [PATCH 1/3] soc cache: L3 cache driver for HiSilicon SoC


Fixed linux-mm address that got added a few emails back.

On Wed, 4 Feb 2026 13:40:20 +0000
Jonathan Cameron <jonathan.cameron@...wei.com> wrote:

> On Wed, 4 Feb 2026 01:10:01 +0100
> Linus Walleij <linusw@...nel.org> wrote:
> 
> > Hi Yushan,
> > 
> > thanks for your patch!
> > 
> > On Tue, Feb 3, 2026 at 5:18 PM Yushan Wang <wangyushan12@...wei.com> wrote:  
> > >
> > > The driver will create a file, `/dev/hisi_l3c`, on init; mmap
> > > operations on it will allocate a memory region that is guaranteed
> > > to be placed in L3 cache.
> > >
> > > The driver also provides unmap() to deallocate the locked memory.
> > >
> > > The driver also provides an ioctl interface for userspace to query
> > > cache lock information, such as lock restrictions and locked sizes.
> > >
> > > Signed-off-by: Yushan Wang <wangyushan12@...wei.com>    
> > 
> > The commit message does not say *why* you are doing this.
> >   
> > > +config HISI_SOC_L3C
> > > +       bool "HiSilicon L3 Cache device driver"
> > > +       depends on ACPI
> > > +       depends on ARM64 || COMPILE_TEST
> > > +       help
> > > +         This driver provides the functions to lock L3 cache entries from
> > > +         being evicted for better performance.    
> > 
> > Here is the reason though.
> > 
> > Things like this need to be CC'd to linux-mm@...r.kernel.org.
> > 
> > I don't see why userspace would be so well informed as to make
> > decisions about what should and should not be locked in the L3 cache.
> > 
> > I see the memory hierarchy as any other hardware: a resource that is
> > allocated and arbitrated by the kernel.
> > 
> > The MM subsystem knows which memory is most cache hot.
> > Especially when you use DAMON DAMOS, which has the sole
> > purpose of executing actions like that. Here is a good YouTube talk:
> > https://www.youtube.com/watch?v=xKJO4kLTHOI  
> Hi Linus,
> 
> This typically isn't about cache-hot data.  If it were, the data would
> be in the cache without this.  It's about ensuring that something which
> would otherwise be unlikely to be there is in the cache.
> 
> Normally that's a latency-critical region.  In general the kernel
> has no chance of figuring out what those are ahead of time; only
> userspace can know (based on profiling etc.), and that is per workload.
> The first hit matters in these use cases, and it's not something
> the prefetchers can help with.
> 
> The only thing we could do if this were in the kernel would be to
> have userspace pass some hints and then let the kernel actually
> kick off the process.  That just boils down to using a different
> interface to do what this driver is doing (and that's the conversation
> this series is trying to get going).  It's a finite resource
> and you absolutely need userspace to be able to tell whether it
> got what it asked for or not.
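> 
> For concreteness, roughly how a userspace consumer might drive the
> proposed interface (a minimal sketch; the ioctl request name and the
> struct layout below are my assumptions, not the actual UAPI from
> this patch):
> 
> 	#include <fcntl.h>
> 	#include <stdio.h>
> 	#include <sys/ioctl.h>
> 	#include <sys/mman.h>
> 	#include <unistd.h>
> 
> 	/* Assumed query structure and ioctl; the real definitions
> 	 * would come from the driver's UAPI header. */
> 	struct l3c_lock_info {
> 		unsigned long max_lock_size;	/* lock restriction */
> 		unsigned long locked_size;	/* already locked */
> 	};
> 	#define HISI_L3C_GET_LOCK_INFO _IOR('L', 0, struct l3c_lock_info)
> 
> 	int main(void)
> 	{
> 		struct l3c_lock_info info;
> 		size_t len = 4096;
> 		void *buf;
> 		int fd;
> 
> 		fd = open("/dev/hisi_l3c", O_RDWR);
> 		if (fd < 0)
> 			return 1;
> 
> 		/* Check the finite resource before asking for some of it. */
> 		if (ioctl(fd, HISI_L3C_GET_LOCK_INFO, &info) == 0)
> 			printf("lockable: %lu locked: %lu\n",
> 			       info.max_lock_size, info.locked_size);
> 
> 		/* mmap() of the device allocates a region guaranteed to
> 		 * sit in L3; failure is how userspace learns it did not
> 		 * get what it asked for. */
> 		buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
> 			   MAP_SHARED, fd, 0);
> 		if (buf == MAP_FAILED)
> 			return 1;
> 
> 		/* ... place the latency-critical data here ... */
> 
> 		munmap(buf, len);	/* unlock and free the region */
> 		close(fd);
> 		return 0;
> 	}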
> 
> DAMON might be useful for that pre-analysis, though it can't do
> anything for the infrequent, extremely latency-sensitive accesses.
> Normally this is fleet-wide stuff based on intensive benchmarking
> of a few nodes.  Same sort of approach as the original warehouse-scale
> computing paper on tuning zswap capacity across a fleet.
> It's an extreme form of profile-guided optimization (and not
> currently automatic, I think?).  If we are putting code in this
> locked region, the program has been carefully recompiled / linked
> to group the critical parts so that we can use the minimum number
> of these locked regions, as sketched below.  Data is a little simpler.
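> 
> As an illustration of that grouping step (the section name and the
> function here are made up; the mechanism is just standard compiler
> section attributes plus a matching linker-script rule):
> 
> 	/* Tag the rare, latency-critical handlers so the linker can
> 	 * co-locate them.  A linker-script rule such as
> 	 *   .text.l3_locked : ALIGN(4096) { *(.text.l3_locked) }
> 	 * keeps the section contiguous and page-aligned, so a single
> 	 * locked L3 region can cover all of it. */
> 	#define L3_LOCKED __attribute__((section(".text.l3_locked")))
> 
> 	L3_LOCKED static void on_rare_critical_event(void)
> 	{
> 		/* First-hit latency matters here and the prefetchers
> 		 * cannot help, hence locking rather than warming. */
> 	}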
> 
> It's kind of similar to resctrl but at a sub-process granularity.
> 
> > 
> > Shouldn't the MM subsystem be in charge of determining, locking
> > down and freeing up hot regions in L3 cache?
> > 
> > This looks more like userspace is going to determine that, but
> > how exactly? By running DAMON? Then it's better to keep the
> > whole mechanism in the kernel where it belongs and let the
> > MM subsystem adapt locked L3 cache to the usage patterns.  
> 
> I haven't yet come up with any plausible scheme by which the MM
> subsystem could do this.
> 
> I think what we need here, Yushan, is more detail on end-to-end use
> cases for this.  Some examples etc. as clearer motivation.
> 
> Jonathan
> 
> > 
> > Yours,
> > Linus Walleij
> >   
> 

