Date:   Thu, 21 Oct 2021 09:57:02 +0800
From:   Shawn Guo <shawn.guo@...aro.org>
To:     Maulik Shah <mkshah@...eaurora.org>
Cc:     swboyd@...omium.org, mka@...omium.org, evgreen@...omium.org,
        bjorn.andersson@...aro.org, linux-arm-msm@...r.kernel.org,
        linux-kernel@...r.kernel.org, agross@...nel.org,
        dianders@...omium.org, linux@...ck-us.net, rnayak@...eaurora.org,
        lsrao@...eaurora.org,
        Mahesh Sivasubramanian <msivasub@...eaurora.org>,
        Lina Iyer <ilina@...eaurora.org>
Subject: Re: [PATCH v12 2/5] soc: qcom: Add Sleep stats driver

On Tue, Oct 19, 2021 at 06:16:57PM +0530, Maulik Shah wrote:
> Hi Shawn,
> 
> On 10/19/2021 3:17 PM, Shawn Guo wrote:
> > On Mon, Oct 18, 2021 at 07:45:30PM +0530, Maulik Shah wrote:
> > > > > +static void qcom_create_soc_sleep_stat_files(struct dentry *root, void __iomem *reg,
> > > > > +					     struct stats_data *d,
> > > > > +					     const struct stats_config *config)
> > > > > +{
> > > > > +	char stat_type[sizeof(u32) + 1] = {0};
> > > > > +	size_t stats_offset = config->stats_offset;
> > > > > +	u32 offset = 0, type;
> > > > > +	int i, j;
> > > > > +
> > > > > +	/*
> > > > > +	 * On RPM targets, the stats offset location is dynamic and changes from
> > > > > +	 * target to target, and sometimes from build to build for the same target.
> > > > > +	 *
> > > > > +	 * In such cases the dynamic address is present at offset 0x14 from the base
> > > > > +	 * address in the devicetree. The last 16 bits indicate the stats_offset.
> > > > > +	 */
> > > > > +	if (config->dynamic_offset) {
> > > > > +		stats_offset = readl(reg + RPM_DYNAMIC_ADDR);
> > > > > +		stats_offset &= RPM_DYNAMIC_ADDR_MASK;
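> > > > > +		/*
> > > > > +		 * Hypothetical worked example: if readl() returned
> > > > > +		 * 0xb00190dc here, keeping only the last 16 bits would
> > > > > +		 * leave stats_offset = 0x90dc.
> > > > > +		 */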
> > > > > +	}
> > > > > +
> > > > > +	for (i = 0; i < config->num_records; i++) {
> > > > > +		d[i].base = reg + offset + stats_offset;
> > > > > +
> > > > > +		/*
> > > > > +		 * Read the low power mode name and create a debugfs file for it.
> > > > > +		 * The names read could be any of the below
> > > > > +		 * (and may change depending on the low power modes supported):
> > > > > +		 * for rpmh-sleep-stats: "aosd", "cxsd" and "ddr";
> > > > > +		 * for rpm-sleep-stats: "vmin" and "vlow".
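> > > > > +		 *
> > > > > +		 * (Presumably the name is just the four ASCII bytes of the
> > > > > +		 * u32 read from d[i].base, e.g. "vlow" stored little-endian
> > > > > +		 * as 0x776f6c76, with the {0} initializer of stat_type
> > > > > +		 * providing the terminating NUL.)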
> > > > 
> > > > It reports 'vmin' and 'xosd' on MSM8939, 'vmin' and 'vlow' on SDM660.
> > > > I know that 'vmin' is VDD Minimization mode, and 'xosd' is XO Shutdown
> > > > mode.  But I'm not sure about 'vlow' mode.  Could you share some
> > > > information regarding what this low power mode is, and how it differs
> > > > from 'vmin' and 'xosd'?
> > > 
> > > vlow and xosd are the same.
> > > vmin is xosd plus voltage minimization of the chip and memory rails.
> > 
> > Thanks much for the info, Maulik!
> > 
> > I'm running your driver on qcm2290 and trying to reach vlow mode.
> > 
> > # cat /sys/kernel/debug/qcom_sleep_stats/vlow
> > Count: 0
> > Last Entered At: 0
> > Last Exited At: 0
> > Accumulated Duration: 0
> > Client Votes: 0x81
> > # echo mem > /sys/power/state
> > [  551.446603] PM: suspend entry (s2idle)
> > [  551.450948] Filesystems sync: 0.000 seconds
> > [  551.462828] Freezing user space processes ... (elapsed 0.002 seconds) done.
> > [  551.472276] OOM killer disabled.
> > [  551.475556] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
> > [  551.484461] printk: Suspending console(s) (use no_console_suspend to debug)
> > [  551.561280] OOM killer enabled.
> > [  551.564461] Restarting tasks ... done.
> > [  551.569652] PM: suspend exit
> > # cat /sys/kernel/debug/qcom_sleep_stats/vlow
> > Count: 0
> > Last Entered At: 0
> > Last Exited At: 0
> > Accumulated Duration: 0
> > Client Votes: 0x818081
> > 
> > The count doesn't increase along with the suspend/resume cycle at the
> > moment.  But as you can see, the 'Client Votes' field changes.  If
> > possible, could you shed some light on what this means?
> 
> The count will increase only when all the subsystems (APSS, modem, etc.)
> are in power down mode and the RPM finally decides to turn off the XO
> clock.
> 
> > 
> > As a comparison, I'm also running the downstream 'rpm_master_stats'
> > driver in the same kernel, and the 'xo_count' field of APSS does
> > increase along with the suspend/resume cycle.  May I ask for advice on
> > what I'm possibly missing that leads to the different results between
> > the 'vlow' and 'rpm_master_stats' reports?
> 
> The vlow is a SoC-level state, whereas the rpm master stats indicate
> individual subsystem states.  Since you are running suspend-resume, the
> APSS is going to sleep, so you see xo_count incremented for it; but for
> MPSS I see it does not increase (the modem is not entering a low power
> mode).  Similarly, for ADSP/CDSP it does not increment.  If all of these
> subsystems go to power down and there is then sufficient sleep time for
> the SoC, you may see vlow/vmin incrementing.
> 
> Hope this clarifies.

Thanks Maulik!  It's very helpful.  I have a couple of further
questions, if you do not mind.

1. We can understand most of the vlow/vmin output.  But could you help
   decode 'Client Votes'?  It looks like the bits are shifting along
   with the suspend/resume cycle (see the bit dump after question 2).

2. In the rpm_master_stats output below, I know the masters (processors)
   APSS, MPSS, ADSP and CDSP, but I'm not really sure what TZ is.  If
   it's TrustZone, shouldn't it be covered by APSS?

Thanks for sharing your insights!

Shawn

> > # cat /sys/kernel/debug/rpm_master_stats
> > APSS
> >          shutdown_req:0x37EA3CC74
> >          wakeup_ind:0x0
> >          bringup_req:0x37F041958
> >          bringup_ack:0x37F042D54
> >          xo_last_entered_at:0x286FF36AC
> >          xo_last_exited_at:0x28AF94178
> >          xo_accumulated_duration:0x3EDD55B
> >          last_sleep_transition_duration:0x122f
> >          last_wake_transition_duration:0x11f8
> >          xo_count:0x1
> >          wakeup_reason:0x0
> >          numshutdowns:0x641
> >          active_cores:0x1
> >                  core0
> > MPSS
> >          shutdown_req:0x0
> >          wakeup_ind:0x0
> >          bringup_req:0x0
> >          bringup_ack:0x0
> >          xo_last_entered_at:0x0
> >          xo_last_exited_at:0x0
> >          xo_accumulated_duration:0x0
> >          last_sleep_transition_duration:0x0
> >          last_wake_transition_duration:0x0
> >          xo_count:0x0
> >          wakeup_reason:0x0
> >          numshutdowns:0x0
> >          active_cores:0x1
> >                  core0
> > ADSP
> >          shutdown_req:0x0
> >          wakeup_ind:0x0
> >          bringup_req:0x0
> >          bringup_ack:0x0
> >          xo_last_entered_at:0x0
> >          xo_last_exited_at:0x0
> >          xo_accumulated_duration:0x0
> >          last_sleep_transition_duration:0x0
> >          last_wake_transition_duration:0x0
> >          xo_count:0x0
> >          wakeup_reason:0x0
> >          numshutdowns:0x0
> >          active_cores:0x1
> >                  core0
> > CDSP
> >          shutdown_req:0x0
> >          wakeup_ind:0x0
> >          bringup_req:0x0
> >          bringup_ack:0x0
> >          xo_last_entered_at:0x0
> >          xo_last_exited_at:0x0
> >          xo_accumulated_duration:0x0
> >          last_sleep_transition_duration:0x0
> >          last_wake_transition_duration:0x0
> >          xo_count:0x0
> >          wakeup_reason:0x0
> >          numshutdowns:0x0
> >          active_cores:0x0
> > TZ
> >          shutdown_req:0x0
> >          wakeup_ind:0x0
> >          bringup_req:0x0
> >          bringup_ack:0x0
> >          xo_last_entered_at:0x0
> >          xo_last_exited_at:0x0
> >          xo_accumulated_duration:0x0
> >          last_sleep_transition_duration:0x0
> >          last_wake_transition_duration:0x0
> >          xo_count:0x0
> >          wakeup_reason:0x0
> >          numshutdowns:0x0
> >          active_cores:0x0
