Message-ID: <20200210103110.GB19089@bogus>
Date: Mon, 10 Feb 2020 10:31:10 +0000
From: Sudeep Holla <sudeep.holla@....com>
To: Ulf Hansson <ulf.hansson@...aro.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
Lina Iyer <ilina@...eaurora.org>,
Maulik Shah <mkshah@...eaurora.org>,
Stephen Boyd <swboyd@...omium.org>,
Andy Gross <agross@...nel.org>,
David Brown <david.brown@...aro.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux PM <linux-pm@...r.kernel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Evan Green <evgreen@...omium.org>,
Doug Anderson <dianders@...omium.org>,
Rajendra Nayak <rnayak@...eaurora.org>, lsrao@...eaurora.org,
"Rafael J. Wysocki" <rjw@...ysocki.net>
Subject: Re: [PATCH v3 5/7] drivers: firmware: psci: Add hierarchical domain
idle states converter
On Sat, Feb 08, 2020 at 11:25:18AM +0100, Ulf Hansson wrote:
> On Fri, 7 Feb 2020 at 17:15, Sudeep Holla <sudeep.holla@....com> wrote:
> >
> > On Fri, Feb 07, 2020 at 04:52:52PM +0100, Ulf Hansson wrote:
> > > On Fri, 7 Feb 2020 at 15:48, Lorenzo Pieralisi
> > > <lorenzo.pieralisi@....com> wrote:
> > > >
> > > > On Fri, Feb 07, 2020 at 01:32:28PM +0100, Ulf Hansson wrote:
> > > > > [...]
> > > > >
> > > > > > > I understand the arguments for using PC vs OSI and agree with them. But
> > > > > > > what in PSCI is against Linux knowing when the last core is powering
> > > > > > > down when PSCI is configured to do only Platform Coordinated mode?
> > > > > >
> > > > > > Nothing :D. But knowing the evolution of and reasons for adding OSI to
> > > > > > the PSCI specification, and having argued about the benefits of OSI over
> > > > > > PC for years, now that we finally have it in mainline, this argument for
> > > > > > using PC for the exact reasons OSI evolved is something I can't
> > > > > > understand, and I am confused.
> > > > > >
> > > > > > > There should not be any objection to drivers knowing when all the cores
> > > > > > > are powered down, be it by reference counting CPU PM notifications or by
> > > > > > > using a cleaner approach like this, where the genpd framework does
> > > > > > > everything cleanly and gives a nice callback. The ARM architecture
> > > > > > > allows different aspects of CPU access to be handled at different
> > > > > > > levels. I see this as an extension of that approach.
> > > > > > >
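As an aside, here is a minimal sketch of the reference-counting approach
mentioned above, built on the kernel's CPU PM notifier API. The last-man
hook and its contents are hypothetical, and CPU hotplug is ignored for
brevity:

#include <linux/atomic.h>
#include <linux/cpu_pm.h>
#include <linux/cpumask.h>
#include <linux/init.h>
#include <linux/notifier.h>

static atomic_t awake_cpus;

/* Hypothetical hook: runs when the last CPU is about to power down. */
static void do_last_man_work(void)
{
	/* e.g. flush cached state down to the firmware */
}

static int last_man_notify(struct notifier_block *nb, unsigned long action,
			   void *data)
{
	switch (action) {
	case CPU_PM_ENTER:
		/* This CPU is about to enter a low-power state. */
		if (atomic_dec_return(&awake_cpus) == 0)
			do_last_man_work();
		break;
	case CPU_PM_ENTER_FAILED:
	case CPU_PM_EXIT:
		/* The CPU is running again; take its reference back. */
		atomic_inc(&awake_cpus);
		break;
	}
	return NOTIFY_OK;
}

static struct notifier_block last_man_nb = {
	.notifier_call = last_man_notify,
};

static int __init last_man_init(void)
{
	atomic_set(&awake_cpus, num_online_cpus());
	return cpu_pm_register_notifier(&last_man_nb);
}
core_initcall(last_man_init);

The genpd-based approach in this series gives the same "last man" point
as a proper callback instead of open-coding the counting.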
> > > > > >
> > > > > > One thing that was repeatedly pointed out during the OSI patch review was
> > > > > > that there must be no extra overhead in PC mode, where the firmware makes
> > > > > > the decisions. So, just use OSI now and let us be done with this
> > > > > > discussion of OSI vs PC. If PC is what you think you need for the future,
> > > > > > we can revert all the OSI changes and start discussing again :-)
> > > > >
> > > > > Just to make it clear, I fully agree with you with regard to the
> > > > > overhead for PC mode. This is especially critical for ARM SoCs with
> > > > > lots of cores, I assume.
> > > > >
> > > > > However, the overhead you refer to is *only* going to be present when
> > > > > the DTS has the hierarchical CPU topology description with
> > > > > "power-domains". Because that is *optional* to use, I am expecting
> > > > > only those SoCs/platforms that need to manage last-man activities to
> > > > > use this layout; the others will remain unaffected.
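For readers following along, the layout being referred to looks roughly
like the example in the PSCI DT binding. This is an abridged sketch with
placeholder suspend parameters and latency numbers, not taken from the
patch set:

cpus {
	#address-cells = <1>;
	#size-cells = <0>;

	CPU0: cpu@0 {
		device_type = "cpu";
		compatible = "arm,cortex-a53";
		reg = <0x0>;
		enable-method = "psci";
		power-domains = <&CPU_PD0>;
		power-domain-names = "psci";
	};

	/* ... remaining CPUs, each with its own CPU_PDx ... */

	idle-states {
		entry-method = "psci";

		CPU_SLEEP: cpu-sleep {
			compatible = "arm,idle-state";
			arm,psci-suspend-param = <0x0010000>;	/* placeholder */
			entry-latency-us = <100>;		/* placeholder */
			exit-latency-us = <250>;		/* placeholder */
			min-residency-us = <1000>;		/* placeholder */
		};

		CLUSTER_SLEEP: cluster-sleep {
			compatible = "domain-idle-state";
			arm,psci-suspend-param = <0x1000000>;	/* placeholder */
			entry-latency-us = <500>;		/* placeholder */
			exit-latency-us = <1000>;		/* placeholder */
			min-residency-us = <2500>;		/* placeholder */
		};
	};
};

psci {
	compatible = "arm,psci-1.0";
	method = "smc";

	CPU_PD0: power-domain-cpu0 {
		#power-domain-cells = <0>;
		power-domains = <&CLUSTER_PD>;
		domain-idle-states = <&CPU_SLEEP>;
	};

	CLUSTER_PD: power-domain-cluster {
		#power-domain-cells = <0>;
		domain-idle-states = <&CLUSTER_SLEEP>;
	};
};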
> > > >
> > > > In PC mode, not only is there no need to manage any last-man
> > > > activity in the kernel, it is wrong to do so. I wonder why we are
> > > > still talking about this, to be honest.
> > >
> > > I guess the discussion is here because there is a use case to consider now.
> > >
> >
> > If this is what Bjorn presented in his email, I have responded to that.
> > If it's any different, please let us know the complete details.
> >
> > > For sure, we agree on what the best solution is. But this is rather
> > > about what we can do to improve the current situation, if we should do
> > > anything at all.
> > >
> >
> > Sure, and I haven't found a reason to do that in OSPM yet (as part of
> > the discussion in this thread).
> >
> > > >
> > > > Code to handle PSCI platform coordinated mode has been/is in
> > > > the kernel today, and that's all that is needed according to the
> > > > PSCI specification.
> > >
> > > PSCI specifies CPU power management, not SoC power management. If
> > > these things were completely decoupled, I would agree with you, but
> > > that's not the case. Maybe SCMI, etc., will help with this in the future.
> > >
> >
> > Why does that not work even if they are not decoupled? The I/O devices
> > that share resources with the CPU vote from OSPM, and the CPU/cluster
> > is handled by PSCI in PC mode. There is no argument there; the
> > objection here is to why it needs to be done in OSPM.
>
> That implies the votes from I/O devices need to reach the FW
> immediately when each vote is cast. No caching or other optimizations
> can be done at OSPM.
>
> In principle, the FW needs to have an always up-to-date view of the
> votes, etc. That sounds highly inefficient, from both an energy and a
> latency point of view, at least in my opinion.
>
Sorry, but I need to reiterate: use OSI if you need all that fancy
caching and those other optimizations.
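To be clear about what OSI buys here: with the hierarchical layout, the
cluster genpd's ->power_off() callback only runs when the last CPU in the
domain enters idle, which is a natural place to batch and flush cached
votes. A minimal sketch, where flush_cached_votes() is a hypothetical
helper:

#include <linux/init.h>
#include <linux/pm_domain.h>

/* Hypothetical helper: pushes any cached I/O votes down to the FW. */
static void flush_cached_votes(void)
{
}

static int cluster_pd_power_off(struct generic_pm_domain *pd)
{
	/*
	 * genpd invokes this only once the last CPU in the cluster domain
	 * has voted for idle, so votes can be batched up until this point
	 * instead of being sent to the FW on every change.
	 */
	flush_cached_votes();
	return 0;
}

static struct generic_pm_domain cluster_pd = {
	.name      = "cluster-pd",
	.power_off = cluster_pd_power_off,
};

static int __init cluster_pd_setup(void)
{
	/* Governor and initial-state details elided for brevity. */
	return pm_genpd_init(&cluster_pd, NULL, false);
}
core_initcall(cluster_pd_setup);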
> >
> > > Anyway, my fear is that not many ARM vendors implement OSI support,
> > > but they still have "last-man activities" to deal with. This is not
> > > only QCOM.
> > >
> >
> > I am interested to hear from them. And the same question as above
> > applies to them too.
>
> I have been talking to some of them. But, yes, we need to hear more from them.
>
> >
> > > I guess an option would be to add OSI support to the public ARM
> > > Trusted Firmware; then we could more easily point to that, rather
> > > than trying to mitigate the problem on the kernel side.
> > >
> >
> > I would say go for it. But don't mix up the responsibilities of OSPM in
> > PC vs OSI mode. We have discussed this for years, and I hope this
> > discussion ends ASAP. I don't see any point in dragging it any further.
>
> Okay.
>
I keep saying that but still keep responding to the discussions. I must stop ;-)
--
Regards,
Sudeep