Date:	Wed, 9 Jul 2014 16:20:10 +0300
From:	Peter De Schrijver <pdeschrijver@...dia.com>
To:	Thierry Reding <thierry.reding@...il.com>
CC:	Stephen Warren <swarren@...dotorg.org>,
	Mikko Perttunen <mperttunen@...dia.com>,
	"tj@...nel.org" <tj@...nel.org>,
	"linux-arm-kernel@...ts.infradead.org" 
	<linux-arm-kernel@...ts.infradead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-tegra@...r.kernel.org" <linux-tegra@...r.kernel.org>,
	"linux-ide@...r.kernel.org" <linux-ide@...r.kernel.org>
Subject: Re: [PATCH 6/9] ARM: tegra: Export tegra_powergate_power_on

On Wed, Jul 09, 2014 at 02:56:14PM +0200, Thierry Reding wrote:
> 
> On Wed, Jul 09, 2014 at 03:43:44PM +0300, Peter De Schrijver wrote:
> > On Wed, Jul 09, 2014 at 02:04:02PM +0200, Thierry Reding wrote:
> > > > For those 2 domains we can find the necessary clocks and resets by parsing
> > > > the relevant existing DT nodes for PCIe and gr3d. For clocks, this isn't
> > > > even needed as we can always register some extra clkdev's to get them. There
> > > > is no equivalent for resets so we have to parse the gr3d and pcie DT nodes,
> > > > but that's not too bad I think.
> > > 
> > > Even if we could really do this, at this point I don't see an advantage.
> > > All it would be doing is moving to some subsystem that doesn't quite
> > > match what we need, just for the sake of moving to that subsystem. Having
> > > a Tegra-specific API doesn't sound so bad anymore.
> > > 
> > 
> > The advantage would be that we can use LP0/SC7 as a cpuidle state.
> 
> How is that going to work? And why does it need powergates to be

pm_runtime_get() and pm_runtime_put() hook into genpd, so genpd knows
when all devices in a domain are idle. It can then decide to turn the
domain off (based on the decision of a per-domain governor). Once all
domains are off (except for the non-powergateable one), genpd can enable
a special cpuidle state which initiates a transition to LP0 without
actually doing a full system suspend.
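
A rough sketch of the driver side (not code from this series, and the
foo_* names are made up; the point is just that the get/put refcount is
what genpd watches):

/*
 * Sketch only: a hypothetical driver using runtime PM so genpd can see
 * when it is idle. foo_* names are illustrative.
 */
#include <linux/platform_device.h>
#include <linux/pm_runtime.h>

static int foo_probe(struct platform_device *pdev)
{
	/* from here on genpd tracks this device's runtime PM refcount */
	pm_runtime_enable(&pdev->dev);
	return 0;
}

static int foo_do_work(struct device *dev)
{
	int err;

	/* powers the domain on if it was gated */
	err = pm_runtime_get_sync(dev);
	if (err < 0) {
		pm_runtime_put_noidle(dev);
		return err;
	}

	/* ... touch the hardware ... */

	/* refcount drops to zero -> genpd's governor may gate the domain */
	pm_runtime_put(dev);
	return 0;
}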

> implemented as power domains? If we switch off power gates, then we need
> to restore context in the drivers anyway, therefore I assume .suspend()
> and .resume() would need to be called, in which case powergate handling
> can surely be done at that stage, can't it?
> 

.suspend() and .resume() are not used for this. genpd uses separate
per-device callbacks, .save_state and .restore_state, which are invoked
when the domain is turned off and on, to save and restore the state. The
major difference from .suspend() and .resume() is that .suspend() has to
perform three tasks: prevent any new requests to the driver, finalize or
cancel all outstanding requests, and save the hardware context.
.save_state is only called when the device is idle (based on the refcount
controlled by pm_runtime_get() and pm_runtime_put()), so it only has to
save the hardware context.
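
And a rough sketch of the domain side, again not from this series and
with made-up foo_* names, just to show where .save_state/.restore_state
hook into struct gpd_dev_ops in current kernels:

/*
 * Sketch only: a hypothetical domain whose per-device callbacks genpd
 * runs around domain power off/on.
 */
#include <linux/pm_domain.h>

static int foo_save_state(struct device *dev)
{
	/* device is already runtime-idle here; only save the hw context */
	return 0;
}

static int foo_restore_state(struct device *dev)
{
	/* redo the hw context after the domain has been ungated */
	return 0;
}

static struct generic_pm_domain foo_domain = {
	.name = "foo",
	.dev_ops = {
		.save_state	= foo_save_state,
		.restore_state	= foo_restore_state,
	},
};

/* somewhere in the domain setup code: pm_genpd_init(&foo_domain, NULL, false); */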

> > Also system
> > resume from LP0 can be faster as we potentially don't have to resume all
> > domains at once.
> 
> I don't understand what that's got to do with anything. If we call into
> the PMC driver explicitly via tegra_powergate_*() functions from driver
> code, then we have full control over suspend/resume in the drivers, and
> therefore don't need to resume all at once either.

But then we would be duplicating all the bookkeeping genpd already does
for this. What's the point of that?

Cheers,

Peter.
