Message-ID: <CAPDyKFoZ+QQFdG3yQ5wGpg2Z5c9WksUhresGz02o3HVrGt1UhQ@mail.gmail.com>
Date: Fri, 7 Feb 2020 16:52:52 +0100
From: Ulf Hansson <ulf.hansson@...aro.org>
To: Lorenzo Pieralisi <lorenzo.pieralisi@....com>
Cc: Sudeep Holla <sudeep.holla@....com>,
Lina Iyer <ilina@...eaurora.org>,
Maulik Shah <mkshah@...eaurora.org>,
Stephen Boyd <swboyd@...omium.org>,
Andy Gross <agross@...nel.org>,
David Brown <david.brown@...aro.org>,
linux-arm-msm <linux-arm-msm@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Linux PM <linux-pm@...r.kernel.org>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
Bjorn Andersson <bjorn.andersson@...aro.org>,
Evan Green <evgreen@...omium.org>,
Doug Anderson <dianders@...omium.org>,
Rajendra Nayak <rnayak@...eaurora.org>, lsrao@...eaurora.org,
"Rafael J. Wysocki" <rjw@...ysocki.net>
Subject: Re: [PATCH v3 5/7] drivers: firmware: psci: Add hierarchical domain
idle states converter
On Fri, 7 Feb 2020 at 15:48, Lorenzo Pieralisi
<lorenzo.pieralisi@....com> wrote:
>
> On Fri, Feb 07, 2020 at 01:32:28PM +0100, Ulf Hansson wrote:
> > [...]
> >
> > > > I understand the arguments for using PC vs OSI and agree with them. But
> > > > what in PSCI is against Linux knowing when the last core is powering
> > > > down when PSCI is configured to do only Platform Coordinated mode?
> > >
> > > Nothing :D. But knowing the evolution and the reasons for adding OSI to
> > > the PSCI specification, and having argued about the benefits of OSI over
> > > PC for years and finally having it in mainline, this argument of using
> > > PC for the exact reasons why OSI evolved is something I can't understand,
> > > and I am confused.
> > >
> > > > There should not be any objection to drivers knowing when all the cores
> > > > are powered down, be it by reference counting CPU PM notifications or by
> > > > using a cleaner approach like this, where the genpd framework does
> > > > everything cleanly and gives a nice callback. The ARM architecture allows
> > > > different aspects of CPU access to be handled at different levels. I see
> > > > this as an extension of that approach.
> > > >
> > >
> > > One thing that was repeatedly pointed out during the OSI patch review was
> > > that there must be no extra overhead for PC mode, where the firmware can
> > > make the decisions. So, just use OSI now and let us be done with this
> > > discussion of OSI vs PC. If PC is what you think you need for the future,
> > > we can revert all the OSI changes and start discussing again :-)
> >
> > Just to make it clear, I fully agree with you with regard to the overhead
> > for PC mode. This is especially critical for ARM SoCs with lots of cores,
> > I assume.
> >
> > However, the overhead you refer to is *only* going to be present when
> > the DTS has the hierarchical CPU topology description with
> > "power-domains". Because that is *optional* to use, I am expecting
> > only those SoCs/platforms that need to manage last-man activities to
> > use this layout; the others will remain unaffected.
>
> In PC mode, not only is there no need to manage any last-man activity
> in the kernel, it is wrong to do so. I wonder why we are still talking
> about this, to be honest.
I guess the discussion is here because there is a use case to consider now.

For sure, we agree on what the best solution is. But this is rather
about what we can do, if anything, to improve the current situation.
>
> Code to handle PSCI platform coordinated mode has been/is in
> the kernel today, and that's all that is needed according to the PSCI
> specification.
PSCI specifies CPU power management, not SoC power management. If
these things were completely decoupled I would agree with you, but
that's not the case. Maybe SCMI, etc., will help with this in the future.

Anyway, my fear is that not many ARM vendors implement OSI support,
yet they still have "last-man activities" to deal with. This is not
only a QCOM problem.
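
To make it concrete, here is a rough, untested sketch (the names are
mine, not taken from the $subject series) of the kind of genpd
->power_off() hook the hierarchical CPU topology gives a platform.
genpd invokes it only when the last CPU in the PM domain has dropped
its reference, so this is where the last-man activities would live:

#include <linux/pm_domain.h>

/* Called by genpd only when the last CPU in the domain has gone down. */
static int soc_cpu_pd_power_off(struct generic_pm_domain *pd)
{
	/*
	 * All CPUs in this PM domain are idle; hand over or park whatever
	 * SoC resources must be dealt with before the cluster/SoC can
	 * enter its deep state.
	 */
	return 0;
}

static struct generic_pm_domain soc_cpu_pd = {
	.name = "soc-cpu-pd",
	.power_off = soc_cpu_pd_power_off,
	/* The callback runs from the CPU idle path, so it must be IRQ safe. */
	.flags = GENPD_FLAG_IRQ_SAFE,
};

That is the "nice callback" Lina refers to above.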
I guess an option would be to add OSI support to the public ARM
Trusted Firmware; then we could more easily point to that, rather
than trying to mitigate the problem on the kernel side.
Kind regards
Uffe