Message-ID: <1490963291.29083.69.camel@pengutronix.de>
Date: Fri, 31 Mar 2017 14:28:11 +0200
From: Lucas Stach <l.stach@...gutronix.de>
To: Dong Aisheng <dongas86@...il.com>
Cc: Andrey Smirnov <andrew.smirnov@...il.com>,
Shawn Guo <shawnguo@...nel.org>, yurovsky@...il.com,
Fabio Estevam <fabio.estevam@....com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
rjw@...ysocki.net
Subject: Re: [PATCH v7 2/2] soc/imx: Add GPCv2 power gating driver
Hi Dong,
On Saturday, 2017-04-01 at 12:10 +0800, Dong Aisheng wrote:
[...]
> > If we need the domains to be up before the consumers, the only
> > way to deal with that is to select CONFIG_PM and
> > CONFIG_PM_GENERIC_DOMAINS from the platform, so we have proper probe
> > defer handling for consumer devices of the power domains.
> >
>
> I'm a bit confused by these words...
If those options are selected we get proper -EPROBE_DEFER handling for
consumers of the power domain, so probe order doesn't matter.
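[For illustration, a Kconfig sketch of what "select from the platform" could look like. The symbol name and dependency lines below are modeled on the mainline i.MX options but the exact select chain is an assumption, not a quote from the tree:]

```kconfig
# Hypothetical sketch: the platform (or the GPCv2 driver's own Kconfig
# entry) selects the genpd infrastructure unconditionally, so consumers
# of the power domains always get -EPROBE_DEFER handling instead of a
# silently missing domain.
config IMX_GPCV2_PM_DOMAINS
	bool "i.MX GPCv2 PM domains"
	depends on ARCH_MXC || (COMPILE_TEST && OF)
	select PM
	select PM_GENERIC_DOMAINS
```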
> > > Personally i'd be more like Rockchip's power domain implementation.
> >
> > Why?
> >
> > > See:
> > > arch/arm/boot/dts/rk3288.dtsi
> > > drivers/soc/rockchip/pm_domains.c
> > > Documentation/devicetree/bindings/soc/rockchip/power_domain.txt
> > >
> > > How about refer to the Rockchip's way?
> >
> > Why? We just changed how this is done for GPCv1, after more than a
> > year of those patches being on the list. Why should we do it
> > differently for GPCv2?
> >
>
> Hmm?
>
> I just thought your GPCv1 change was only picked up a few weeks ago
> (Feb 17 2017) and there are no current users of the new binding in the
> kernel. That's why I came up with the idea of improving it before
> there are any users.
>
> Maybe I'm mistaken?
The patches to change this have been out for over 1 year. I'm less than
motivated to change the binding again, after it has gone through the DT
review and has finally been picked up.
> See:
> commit 721cabf6c6600dbe689ee2782bc087270e97e652
> Author: Lucas Stach <l.stach@...gutronix.de>
> Date: Fri Feb 17 20:02:44 2017 +0100
>
> soc: imx: move PGC handling to a new GPC driver
>
> This is an almost complete re-write of the previous GPC power gating control
> code found in the IMX architecture code. It supports both the old and the new
> DT binding, allowing more domains to be added later and generally makes the
> driver easier to extend, while keeping compatibility with existing DTBs.
>
> As the result, all functionality regarding the power gating controller
> gets removed from the IMX architecture GPC driver. It keeps only the
> IRQ controller code in the architecture, as this is closely coupled to
> the CPU idle implementation.
>
> Signed-off-by: Lucas Stach <l.stach@...gutronix.de>
> Signed-off-by: Shawn Guo <shawnguo@...nel.org>
>
> > >
> > > Then it could also address our issues and the binding would be
> > > still like:
> > > gpc: gpc@...a0000 {
> > > compatible = "fsl,imx7d-gpc";
> > > reg = <0x303a0000 0x1000>;
> > > interrupt-controller;
> > > interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>;
> > > #interrupt-cells = <3>;
> > > interrupt-parent = <&intc>;
> > >
> > > pgc {
> > > #address-cells = <1>;
> > > #size-cells = <0>;
> > >
> > > pgc_pcie_phy: power-domain@...7_POWER_DOMAIN_PCIE_PHY {
> > > reg = <IMX7_POWER_DOMAIN_PCIE_PHY>;
> > > power-supply = <&reg_1p0d>;
> > > clocks = <xxx>;
> > > };
> > >
> > > ....
> > > };
> > > };
> > >
> > > It also drops #power-domain-cells and registers the domains via
> > > one provider with multiple domains, which is more aligned with
> > > the HW.
> > >
> > > What do you think?
> >
> > How is this more aligned with the hardware? Both options are an
> > arbitrary abstraction chosen in the DT binding.
> >
>
> GPC is a power controller which controls multiple power domains for
> subsystems via different register bits, e.g. the PCIE/MIPI/USB HSIC
> PHYs:
> • 0xC00 ~ 0xC3F: PGC for MIPI PHY
> • 0xC40 ~ 0xC7F: PGC for PCIE_PHY
> • 0xD00 ~ 0xD3F: PGC for USB HSIC PHY
>
> So I thought it looks more like the GPC is a single power domain
> provider supporting multiple domains for different subsystems, from
> a HW point of view.
>
> Isn't that true?
Linux and hardware devices are not required to match 1:1. There is
nothing that would make subdividing a single hardware device into
multiple ones bad style.
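[For contrast, the binding as merged uses per-domain providers. The following is a sketch from memory of the fsl,imx-gpcv2 binding document; treat details such as the unit-address and node names as illustrative:]

```dts
gpc: gpc@303a0000 {
	compatible = "fsl,imx7d-gpc";
	reg = <0x303a0000 0x1000>;
	interrupt-controller;
	#interrupt-cells = <3>;
	interrupts = <GIC_SPI 87 IRQ_TYPE_LEVEL_HIGH>;

	pgc {
		#address-cells = <1>;
		#size-cells = <0>;

		/* each PGC domain node is its own provider ... */
		pgc_pcie_phy: power-domain@1 {
			#power-domain-cells = <0>;
			reg = <IMX7_POWER_DOMAIN_PCIE_PHY>;
			power-supply = <&reg_1p0d>;
		};
	};
};

/* ... and a consumer references it with an empty specifier */
&pcie {
	power-domains = <&pgc_pcie_phy>;
};
```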
>
> And there are also two other concerns:
>
> First, the current genpd debugfs output still does not include the
> provider, e.g.:
> root@...6qdlsolo:~# cat /sys/kernel/debug/pm_genpd/pm_genpd_summary
> domain                          status        slaves
>     /device                                   runtime status
> ----------------------------------------------------------------------
> PU                              off-0
>     /devices/soc0/soc/130000.gpu              suspended
>     /devices/soc0/soc/134000.gpu              suspended
>     /devices/soc0/soc/2204000.gpu             suspended
>     /devices/soc0/soc/2000000.aips-bus/2040000.vpu   suspended
> ARM                             off-0
>
> I worry it might become a bit of a mess once the provider is added,
> since each domain is registered as its own virtual provider.
The provider is a Linux device. Linux devices don't necessarily have to
correspond to hardware devices. There is no such rule.
>
> Second, it sacrifices a bit of performance during PM domain lookup in
> genpd_get_from_provider() if every domain is a provider.
>
The performance penalty of a list walk won't hurt us in the probe path,
where we are (re-)probing entire devices. There is a lot more going on
than a simple list walk.
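[To make the lookup cost being discussed concrete, here is a small userspace C model. This is not kernel code: the names get_from_provider() and struct provider only mimic genpd_get_from_provider() and its provider list. Registration prepends to a singly linked list, and lookup is a linear walk, so per-domain providers add a few extra pointer comparisons per lookup but nothing measurable against a full device (re-)probe:]

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Simplified model of a genpd provider registration list. The
 * of_node string stands in for the struct device_node pointer the
 * real code compares against. */
struct provider {
	const char *of_node;
	struct provider *next;
};

static struct provider *providers;

/* Register a provider by prepending it to the global list. */
static void add_provider(struct provider *p)
{
	p->next = providers;
	providers = p;
}

/* O(n) walk over all registered providers, analogous to the list
 * walk in genpd_get_from_provider(). Returns NULL if no provider
 * matches the requested node. */
static struct provider *get_from_provider(const char *of_node)
{
	for (struct provider *p = providers; p; p = p->next)
		if (strcmp(p->of_node, of_node) == 0)
			return p;
	return NULL;
}
```

With one provider per domain the list length equals the domain count (three on i.MX7), so the walk stays trivially short either way.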
> Though arguably only 3 domains are currently supported on MX7, who
> knows how many more there will be in the future.
>
> However, I did see many existing users in the kernel using the
> one-provider-per-domain approach. Maybe I'm over-worried and it's
> not a big deal.
I see that one provider with multiple domains is used more often, but
that doesn't mean it's necessarily better. At least I haven't heard a
convincing argument why the chosen implementation in the GPC driver
is worse than the alternative.
Regards,
Lucas