Message-ID: <5fab7650-7313-2c20-54eb-65078dd9c3d9@marcan.st>
Date:   Sun, 17 Oct 2021 18:16:29 +0900
From:   Hector Martin <marcan@...can.st>
To:     Stephen Boyd <sboyd@...nel.org>,
        linux-arm-kernel@...ts.infradead.org
Cc:     Alyssa Rosenzweig <alyssa@...enzweig.io>,
        Sven Peter <sven@...npeter.dev>, Marc Zyngier <maz@...nel.org>,
        Mark Kettenis <mark.kettenis@...all.nl>,
        Michael Turquette <mturquette@...libre.com>,
        Rob Herring <robh+dt@...nel.org>,
        Krzysztof Kozlowski <krzysztof.kozlowski@...onical.com>,
        Viresh Kumar <vireshk@...nel.org>, Nishanth Menon <nm@...com>,
        Catalin Marinas <catalin.marinas@....com>,
        "Rafael J. Wysocki" <rafael@...nel.org>,
        Kevin Hilman <khilman@...nel.org>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        linux-clk@...r.kernel.org, devicetree@...r.kernel.org,
        linux-pm@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 7/9] clk: apple: Add clk-apple-cluster driver to
 manage CPU p-states

On 15/10/2021 07.07, Stephen Boyd wrote:
> This looks bad from a locking perspective. How is lockdep holding up
> with this driver? We're underneath the prepare lock here and we're
> setting a couple level registers which is all good but now we're calling
> into genpd code and who knows what's going to happen locking wise.

It seems this is all going away, given that the other discussion threads 
point towards handling this directly via OPP in the cpufreq-dt driver. 
I'll run whatever I end up with for v2 through lockdep though, good call!

> I don't actually see anything in here that indicates this is supposed to
> be a clk provider. Is it being modeled as a clk so that it can use
> cpufreq-dt? If it was a clk provider I'd expect it to be looking at
> parent clk rates, and reading hardware to calculate frequencies based on
> dividers and multipliers, etc. None of that is happening here.
> 
> Why not write a cpufreq driver, similar to qcom-cpufreq-hw.c that looks
> through the OPP table and then writes the value into the pstate
> registers? The registers in here look awfully similar to the qcom
> hardware. I don't know what the DESIRED1 and DESIRED2 registers are for
> though. Maybe they're so that one or the other frequency can be used if
> available? Like a min/max?
> 
> Either way, writing this as a cpufreq driver avoids the clk framework
> entirely which is super great for me :) It also avoids locking headaches
> from the clk prepare lock, and it also lets you support lockless cpufreq
> transitions by implementing the fast_switch function. I don't see any
> downsides to the cpufreq driver approach.

I wasn't too sure about this approach; I thought using a clk provider 
would end up simplifying things since I could use the cpufreq-dt 
machinery to take care of all the OPP stuff, and a lot of SoCs seemed to 
be going that way, but it seems cpufreq might be a better approach for 
this SoC?

There can only be one cpufreq driver instance, while I used two clock 
controllers to model the two clusters. So in the cpufreq case, the 
driver itself would have to deal with all potential CPU cluster 
instances/combinations. Not sure how much more code that will be; 
hopefully not too much...

I see qcom-cpufreq-hw uses a qcom,freq-domain prop to link CPUs to the 
cpufreq domains. cpufreq-dt and vexpress-spc-cpufreq instead use 
dev_pm_opp_get_sharing_cpus to look for shared OPP tables. Is there a 
reason not to do it that way and avoid the vendor prop? I guess the prop 
is more explicit while the sharing approach would have an implicit order 
dependency (i.e. CPUs are always grouped by cluster and clusters are 
listed in /cpus in the same order as in the cpufreq node)...
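
For reference, a sketch of what I mean by the shared-OPP linkage: CPUs 
in a cluster point at the same operating-points-v2 table, and the 
opp-shared flag is what lets dev_pm_opp_get_sharing_cpus() discover the 
grouping without a vendor prop. Node names and frequencies here are 
made up, just to show the shape:

```
cpu0: cpu@0 {
	/* both CPUs of the cluster reference the same table */
	operating-points-v2 = <&cluster0_opp>;
};
cpu1: cpu@1 {
	operating-points-v2 = <&cluster0_opp>;
};

cluster0_opp: opp-table-0 {
	compatible = "operating-points-v2";
	/* marks all CPUs using this table as one sharing domain */
	opp-shared;
	opp-600000000 {
		opp-hz = /bits/ 64 <600000000>;
	};
};
```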

(Ack on the other comments, but if this becomes a cpufreq driver most of 
it is going to end up rewritten... :))

For the cpufreq case, do you have any suggestions as to how to relate it 
to the memory controller configuration tweaks? Ideally this would go 
through the OPP tables so it can be customized for future SoCs without 
stuff hardcoded in the driver... It seems the configuration affects 
power saving behavior / latencies, so it doesn't quite match the 
interconnect framework's bandwidth request stuff. I'm also not sure how 
this would affect fast_switch, since going through those frameworks 
might imply taking locks... We might even find ourselves, in the near 
future, with multiple cpufreq policies requesting memory controller 
latency reduction independently. I can come up with a way to do this 
locklessly using atomics, but I can't imagine that being workable with 
higher-level frameworks; it would have to be a vendor-specific 
mechanism at that point...
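
A minimal sketch of the lockless idea, assuming a plain request counter 
(all names here are made up, and this is userspace C11 atomics rather 
than the kernel's atomic_t, just to show the shape):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Shared counter of outstanding low-latency requests across all
 * cpufreq policies. The hardware knob is only toggled on the
 * 0 -> 1 and 1 -> 0 transitions, so the fast path takes no lock. */
static atomic_int mc_latency_requests = 0;

static void mc_set_low_latency(bool on)
{
	/* Placeholder for the actual memory controller register write. */
	printf("memory controller low-latency mode: %s\n", on ? "on" : "off");
}

void cluster_request_low_latency(void)
{
	/* First requester flips the hardware on. */
	if (atomic_fetch_add(&mc_latency_requests, 1) == 0)
		mc_set_low_latency(true);
}

void cluster_release_low_latency(void)
{
	/* Last requester flips it back off. */
	if (atomic_fetch_sub(&mc_latency_requests, 1) == 1)
		mc_set_low_latency(false);
}
```

Of course, in real code the 0->1 and 1->0 transitions can race each 
other, so the register write itself would need to be idempotent or 
otherwise ordered; this is only the shape of the idea, not something 
I'd expect a generic framework to accommodate.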

-- 
Hector Martin (marcan@...can.st)
Public Key: https://mrcn.st/pub
