Message-ID: <20230414124633.GB27611@willie-the-truck>
Date:   Fri, 14 Apr 2023 13:46:35 +0100
From:   Will Deacon <will@...nel.org>
To:     Sudeep Holla <sudeep.holla@....com>
Cc:     Radu Rendec <rrendec@...hat.com>, linux-kernel@...r.kernel.org,
        Catalin Marinas <catalin.marinas@....com>,
        Pierre Gondois <Pierre.Gondois@....com>,
        linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH v4 2/3] cacheinfo: Add arm64 early level initializer
 implementation

On Thu, Apr 13, 2023 at 04:05:05PM +0100, Sudeep Holla wrote:
> On Thu, Apr 13, 2023 at 03:45:22PM +0100, Will Deacon wrote:
> > On Thu, Apr 13, 2023 at 11:22:26AM +0100, Sudeep Holla wrote:
> > > Hi Will,
> > > 
> > > On Wed, Apr 12, 2023 at 02:57:58PM -0400, Radu Rendec wrote:
> > > > This patch adds an architecture-specific early cache level detection
> > > > handler for arm64. This is basically the CLIDR_EL1-based detection that
> > > > was previously done (only) in init_cache_level().
> > > > 
> > > > This is part of a patch series that attempts to further the work in
> > > > commit 5944ce092b97 ("arch_topology: Build cacheinfo from primary CPU").
> > > > Previously, in the absence of any DT/ACPI cache info, architecture
> > > > specific cache detection and info allocation for secondary CPUs would
> > > > happen in non-preemptible context during early CPU initialization and
> > > > trigger a "BUG: sleeping function called from invalid context" splat on
> > > > an RT kernel.
> > > > 
> > > > This patch does not solve the problem completely for RT kernels. It
> > > > relies on the assumption that on most systems, the CPUs are symmetrical
> > > > and therefore have the same number of cache leaves. The cacheinfo memory
> > > > is allocated early (on the primary CPU), relying on the new handler. If
> > > > later (when CLIDR_EL1 based detection runs again on the secondary CPU)
> > > > the initial assumption proves to be wrong and the CPU has in fact more
> > > > leaves, the cacheinfo memory is reallocated, and that still triggers a
> > > > splat on an RT kernel.
> > > > 
> > > > In other words, asymmetrical CPU systems *must* still provide cacheinfo
> > > > data in DT/ACPI to avoid the splat on RT kernels (unless secondary CPUs
> > > > happen to have fewer leaves than the primary CPU). But symmetrical CPU
> > > > systems (the majority) can now get away without the additional DT/ACPI
> > > > data and rely on CLIDR_EL1 based detection.
> > > > 
> > > 
> > > If you are okay with the change, can I have your Acked-by, so that I can
> > > route this via Greg's tree?
> > 
> > I really dislike the proliferation of __weak functions in this file,
> 
> You mean in the generic cacheinfo.c, right? Because the arm64 version must not
> have any, and that is the file in this patch.

Right, but we're providing implementations of both early_cache_level() and
init_cache_level(), which are weak symbols in the core code.

Will
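
A minimal sketch of the weak-symbol arrangement being discussed: the generic
drivers/base/cacheinfo.c provides a __weak stub and an architecture overrides
it with a strong definition of the same name. The function bodies and the
arm64_count_cache_leaves() helper below are illustrative assumptions, not the
actual patch.

	/* generic default in drivers/base/cacheinfo.c: no early info available */
	int __weak early_cache_level(unsigned int cpu)
	{
		return -ENOENT;
	}

	/* arm64 strong override (conceptually, arch/arm64/kernel/cacheinfo.c) */
	int early_cache_level(unsigned int cpu)
	{
		struct cpu_cacheinfo *this_cpu_ci = get_cpu_cacheinfo(cpu);

		/* size the hierarchy from CLIDR_EL1 before secondary bring-up,
		 * so cacheinfo memory can be allocated in preemptible context
		 * (hypothetical helper, see the second sketch below)
		 */
		arm64_count_cache_leaves(&this_cpu_ci->num_levels,
					 &this_cpu_ci->num_leaves);
		return 0;
	}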

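And a sketch of the CLIDR_EL1 walk the quoted commit message refers to. The
helper name and the SKETCH_* bit-field macros are made up here for
illustration: each Ctype<n> field of CLIDR_EL1 is 3 bits wide, a value of 0
means no cache at that level, and a "separate" type (0b011) contributes both
an instruction leaf and a data leaf.

	#include <asm/sysreg.h>		/* read_sysreg() */

	/* 3-bit Ctype<level> field of CLIDR_EL1, levels 1..7 (illustrative macros) */
	#define SKETCH_CTYPE_SHIFT(level)	(3 * ((level) - 1))
	#define SKETCH_CTYPE(clidr, level)	\
		(((clidr) >> SKETCH_CTYPE_SHIFT(level)) & 0x7)

	static void arm64_count_cache_leaves(unsigned int *levels,
					     unsigned int *leaves)
	{
		u64 clidr = read_sysreg(clidr_el1);
		unsigned int level;

		*levels = 0;
		*leaves = 0;
		for (level = 1; level <= 7; level++) {
			switch (SKETCH_CTYPE(clidr, level)) {
			case 0:		/* no cache at this level: stop walking */
				return;
			case 3:		/* separate I and D caches: two leaves */
				*leaves += 2;
				break;
			default:	/* I-only, D-only or unified: one leaf */
				*leaves += 1;
				break;
			}
			*levels = level;
		}
	}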