Message-ID: <20201112204358.GA1027187@ulmo>
Date:   Thu, 12 Nov 2020 21:43:58 +0100
From:   Thierry Reding <thierry.reding@...il.com>
To:     Dmitry Osipenko <digetx@...il.com>
Cc:     Ulf Hansson <ulf.hansson@...aro.org>,
        Viresh Kumar <viresh.kumar@...aro.org>,
        Jonathan Hunter <jonathanh@...dia.com>,
        Alan Stern <stern@...land.harvard.edu>,
        Peter Chen <Peter.Chen@....com>,
        Mark Brown <broonie@...nel.org>,
        Liam Girdwood <lgirdwood@...il.com>,
        Adrian Hunter <adrian.hunter@...el.com>,
        Krzysztof Kozlowski <krzk@...nel.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        Lee Jones <lee.jones@...aro.org>,
        Uwe Kleine-König <u.kleine-koenig@...gutronix.de>,
        Mauro Carvalho Chehab <mchehab@...nel.org>,
        Rob Herring <robh+dt@...nel.org>,
        Marek Szyprowski <m.szyprowski@...sung.com>,
        Peter Geis <pgwipeout@...il.com>,
        Nicolas Chauvet <kwizart@...il.com>,
        linux-samsung-soc <linux-samsung-soc@...r.kernel.org>,
        driverdevel <devel@...verdev.osuosl.org>,
        Linux USB List <linux-usb@...r.kernel.org>,
        linux-pwm@...r.kernel.org,
        "linux-mmc@...r.kernel.org" <linux-mmc@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        DTML <devicetree@...r.kernel.org>,
        dri-devel <dri-devel@...ts.freedesktop.org>,
        Linux Media Mailing List <linux-media@...r.kernel.org>,
        linux-tegra <linux-tegra@...r.kernel.org>
Subject: Re: [PATCH v1 00/30] Introduce core voltage scaling for NVIDIA
 Tegra20/30 SoCs

On Thu, Nov 12, 2020 at 10:57:27PM +0300, Dmitry Osipenko wrote:
> 11.11.2020 14:38, Ulf Hansson пишет:
> > On Sun, 8 Nov 2020 at 13:19, Dmitry Osipenko <digetx@...il.com> wrote:
> >>
> >> 05.11.2020 18:22, Dmitry Osipenko wrote:
> >>> 05.11.2020 12:45, Ulf Hansson wrote:
> >>> ...
> >>>> I need some more time to review this, but just a quick check found a
> >>>> few potential issues...
> >>>
> >>> Thank you for starting the review! I'm pretty sure it will take a
> >>> couple of revisions until all the questions are resolved :)
> >>>
> >>>> The "core-supply" that you specify as a regulator for each
> >>>> controller's device node is not the way we describe power domains.
> >>>> Instead, it seems like you should register a power-domain provider
> >>>> (with the help of genpd) and implement the ->set_performance_state()
> >>>> callback for it. Each device node should then be hooked up to this
> >>>> power-domain, rather than to a "core-supply". For DT bindings, please
> >>>> have a look at Documentation/devicetree/bindings/power/power-domain.yaml
> >>>> and Documentation/devicetree/bindings/power/power_domain.txt.
> >>>>
> >>>> Regarding the "sync state" problem (preventing performance state
> >>>> changes until all consumers have been attached), this can then be
> >>>> managed by the genpd provider driver instead.
> >>>
> >>> I'll need to take a closer look at GENPD, thank you for the suggestion.
> >>>
> >>> Sounds like a software GENPD driver which manages clocks and voltages
> >>> could be a good idea, but it could also be unnecessary
> >>> over-engineering. Let's see...
> >>>
> >>
> >> Hello Ulf and all,
> >>
> >> I took a detailed look at GENPD and tried to implement it. Here is
> >> what I found:
> >>
> >> 1. The GENPD framework doesn't aggregate performance requests from the
> >> attached devices. This means that if deviceA requests performance state
> >> 10 and then deviceB requests state 3, the framework will set the domain's
> >> state to 3 instead of 10.
> >>
> >> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L376
> > 
> > As Viresh also stated, genpd does aggregate the votes. It even
> > performs the aggregation hierarchically (a genpd is allowed to have
> > parent(s) to model a topology).
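
A hypothetical illustration of that aggregation, assuming deviceA and
deviceB are attached to the same performance-state-capable genpd (names
invented, error handling omitted for brevity):

#include <linux/pm_domain.h>

static void example_performance_votes(struct device *deviceA,
                                      struct device *deviceB)
{
        dev_pm_genpd_set_performance_state(deviceA, 10);
        dev_pm_genpd_set_performance_state(deviceB, 3);

        /*
         * The provider's ->set_performance_state() now runs with state 10,
         * not 3: genpd keeps the maximum of all per-device votes (and of
         * any child domains) until the higher vote is lowered or dropped.
         */
}
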
> 
> Yes, I already found and fixed the bug which confused me previously and
> it's working well now.
> 
> >> 2. The GENPD framework has a sync() callback in the genpd.domain
> >> structure, but this callback can't be used by a GENPD provider
> >> implementation: the framework always overrides it for its own needs.
> >> Hence GENPD doesn't offer a nice way to solve the bootstrapping
> >> state-synchronization problem.
> >>
> >> https://elixir.bootlin.com/linux/v5.10-rc2/source/drivers/base/power/domain.c#L2606
> > 
> > That ->sync() callback isn't the callback you are looking for; it's a
> > PM domain specific callback and has other purposes.
> > 
> > To solve the problem you refer to, your genpd provider driver (a
> > platform driver) should assign its ->sync_state() callback. The
> > ->sync_state() callback will be invoked when all consumer devices
> > have been attached (and probed) to their corresponding provider.
> > 
> > You may have a look at drivers/cpuidle/cpuidle-psci-domain.c, to see
> > an example of how this works. If there is anything unclear, just tell
> > me and I will try to help.
> 
> Indeed, thank you for the clarification. This variant works well.
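
A cut-down, hypothetical sketch of a provider platform driver wiring up
->sync_state(), in the spirit of cpuidle-psci-domain.c (driver and function
names are invented; the policy actually applied in sync_state would be
SoC-specific):

#include <linux/module.h>
#include <linux/platform_device.h>

static int core_domain_probe(struct platform_device *pdev)
{
        /* Register the genpd provider here, as in the earlier sketch. */
        return 0;
}

static void core_domain_sync_state(struct device *dev)
{
        /*
         * Called once every consumer that references this power domain in
         * the device tree has been attached and probed. Only now does the
         * aggregated performance state cover all users, so a conservative
         * boot-time voltage constraint could safely be dropped here.
         */
        dev_info(dev, "all power-domain consumers have probed\n");
}

static struct platform_driver core_domain_driver = {
        .probe = core_domain_probe,
        .driver = {
                .name = "core-power-domain",
                .sync_state = core_domain_sync_state,
        },
};
module_platform_driver(core_domain_driver);
MODULE_LICENSE("GPL v2");
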
> 
> >> 3. Tegra doesn't have a dedicated hardware power-controller for the core
> >> domain; instead, there is only an external voltage regulator. Hence we
> >> will need to create a phony device-tree node for the virtual power
> >> domain, which is probably the wrong thing to do.
> > 
> > No, this is absolutely the correct thing to do.
> > 
> > This isn't a virtual power domain, it's a real power domain. You only
> > happen to model the control of it as a regulator, because that fits
> > nicely for *this* SoC. Don't get me wrong, that's fine as long as
> > the supply is specified only in the power-domain provider node.
> > 
> > On another SoC, you might have a different FW interface for the power
> > domain provider that doesn't fit well with the regulator. When that
> > happens, all you need to do is to implement a new power domain
> > provider and potentially re-define the power domain topology. More
> > importantly, you don't need to re-invent yet another slew of
> > device-specific bindings for each SoC.
> > 
> >>
> >> ===
> >>
> >> Perhaps it would be possible to create some hacks to work around
> >> points 2 and 3 in order to achieve what we need for DVFS on Tegra, but
> >> point 1 isn't solvable without changing how the GENPD core works.
> >>
> >> Altogether, GENPD in its current form is the wrong abstraction for
> >> system-wide DVFS in a case where multiple devices share a power domain
> >> and this domain is a voltage regulator. The regulator framework is the
> >> correct abstraction for this case today.
> > 
> > Well, I admit it's a bit complex. But it solves the problem in a
> > nicely abstracted way that should work for everybody, at least in my
> > opinion.
> 
> The OPP framework supports both a voltage regulator and a power domain,
> hiding the implementation details from drivers. This means that the OPP
> API usage will be the same regardless of which approach (regulator or
> power domain) is used for a particular SoC.
> 
> > Although, let's not exclude that there are pieces missing in genpd or
> > the opp layer, as this DVFS feature is rather new - but then we should
> > just extend/fix it.
> 
> It would be nice to have per-device GENPD performance stats.
> 
> Thierry, could you please let me know what you think about replacing the
> regulator with a power domain? Do you think it's a worthwhile change?
> 
> The difference in comparison to using the voltage regulator directly is
> minimal; basically, the core-supply phandle is replaced with a
> power-domain phandle in the device tree.

These new power-domain handles would have to be added to devices that
potentially already have a power-domain handle, right? Isn't that going
to cause issues? I vaguely recall that we already have multiple power
domains for the XUSB controller and we have to jump through extra hoops
to make that work.

> The only thing which makes me feel a bit uncomfortable is that there is
> no real hardware node for the power domain in the device tree.

Could we anchor the new power domain at the PMC for example? That would
allow us to avoid the "virtual" node. On the other hand, if we were to
use a regulator, we'd be adding a node for that, right? So isn't this
effectively going to be the same node if we use a power domain? Both
software constructs are using the same voltage regulator, so they should
be able to be described by the same device tree node, shouldn't they?

Thierry
