Message-ID: <20211019094720.GD7231@dragon>
Date: Tue, 19 Oct 2021 17:47:21 +0800
From: Shawn Guo <shawn.guo@...aro.org>
To: Maulik Shah <mkshah@...eaurora.org>
Cc: swboyd@...omium.org, mka@...omium.org, evgreen@...omium.org,
bjorn.andersson@...aro.org, linux-arm-msm@...r.kernel.org,
linux-kernel@...r.kernel.org, agross@...nel.org,
dianders@...omium.org, linux@...ck-us.net, rnayak@...eaurora.org,
lsrao@...eaurora.org,
Mahesh Sivasubramanian <msivasub@...eaurora.org>,
Lina Iyer <ilina@...eaurora.org>
Subject: Re: [PATCH v12 2/5] soc: qcom: Add Sleep stats driver

On Mon, Oct 18, 2021 at 07:45:30PM +0530, Maulik Shah wrote:
> > > +static void qcom_create_soc_sleep_stat_files(struct dentry *root, void __iomem *reg,
> > > + struct stats_data *d,
> > > + const struct stats_config *config)
> > > +{
> > > + char stat_type[sizeof(u32) + 1] = {0};
> > > + size_t stats_offset = config->stats_offset;
> > > + u32 offset = 0, type;
> > > + int i, j;
> > > +
> > > + /*
> > > + * On RPM targets, the stats offset location is dynamic and changes from target
> > > + * to target, and sometimes from build to build for the same target.
> > > + *
> > > + * In such cases, the dynamic address is present at offset 0x14 from the base
> > > + * address in the devicetree. The last 16 bits indicate the stats_offset.
> > > + */
> > > + if (config->dynamic_offset) {
> > > + stats_offset = readl(reg + RPM_DYNAMIC_ADDR);
> > > + stats_offset &= RPM_DYNAMIC_ADDR_MASK;
> > > + }
> > > +
> > > + for (i = 0; i < config->num_records; i++) {
> > > + d[i].base = reg + offset + stats_offset;
> > > +
> > > + /*
> > > + * Read the low power mode name and create a debugfs file for it.
> > > + * The names read could be any of the below
> > > + * (and may change depending on the low power modes supported):
> > > + * For rpmh-sleep-stats: "aosd", "cxsd" and "ddr".
> > > + * For rpm-sleep-stats: "vmin" and "vlow".
> >
> > It reports 'vmin' and 'xosd' on MSM8939, 'vmin' and 'vlow' on SDM660.
> > I know that 'vmin' is VDD Minimization mode, and 'xosd' is XO Shutdown
> > mode. But I'm not sure about 'vlow' mode. Could you share some
> > information regarding what this low power mode is, and how it differs
> > from 'vmin' and 'xosd'?
>
> vlow and xosd are the same.
> vmin is xosd plus voltage minimization of the chip and memory rails.

Thanks much for the info, Maulik!
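
By the way, to double check my reading of the dynamic-offset handling
quoted above: the u32 at offset 0x14 is read and only its low 16 bits
are kept as the stats offset. Below is a minimal standalone sketch of
that decoding; the raw value is made up for illustration, and the two
macros simply mirror the quoted driver code.

#include <stdint.h>
#include <stdio.h>

#define RPM_DYNAMIC_ADDR	0x14	/* per the quoted comment */
#define RPM_DYNAMIC_ADDR_MASK	0xffff	/* "last 16 bits" */

int main(void)
{
	/* hypothetical value, as if read via readl(reg + RPM_DYNAMIC_ADDR) */
	uint32_t raw = 0x12340660;
	uint32_t stats_offset = raw & RPM_DYNAMIC_ADDR_MASK;

	printf("stats_offset = 0x%x\n", stats_offset);	/* prints 0x660 */
	return 0;
}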

I'm running your driver on qcm2290 and trying to reach vlow mode.

# cat /sys/kernel/debug/qcom_sleep_stats/vlow
Count: 0
Last Entered At: 0
Last Exited At: 0
Accumulated Duration: 0
Client Votes: 0x81
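
(For reference, this is the per-mode record layout as I understand it
from the driver; the struct and field names below are my own
userspace-view labels, and the client votes word appears to come from a
block appended after this record:)

#include <stdint.h>

struct sleep_stats_record {
	uint32_t stat_type;		/* 4-char mode name, e.g. "vlow" */
	uint32_t count;			/* "Count" */
	uint64_t last_entered_at;	/* "Last Entered At" */
	uint64_t last_exited_at;	/* "Last Exited At" */
	uint64_t accumulated;		/* "Accumulated Duration" */
};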

# echo mem > /sys/power/state
[ 551.446603] PM: suspend entry (s2idle)
[ 551.450948] Filesystems sync: 0.000 seconds
[ 551.462828] Freezing user space processes ... (elapsed 0.002 seconds) done.
[ 551.472276] OOM killer disabled.
[ 551.475556] Freezing remaining freezable tasks ... (elapsed 0.001 seconds) done.
[ 551.484461] printk: Suspending console(s) (use no_console_suspend to debug)
[ 551.561280] OOM killer enabled.
[ 551.564461] Restarting tasks ... done.
[ 551.569652] PM: suspend exit

# cat /sys/kernel/debug/qcom_sleep_stats/vlow
Count: 0
Last Entered At: 0
Last Exited At: 0
Accumulated Duration: 0
Client Votes: 0x818081

The count doesn't increase along with the suspend/resume cycle at the
moment. But as you can see, the 'Client Votes' field changes. If
possible, could you shed some light on what this means?
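
To make the change easier to read, here is a throwaway snippet that
diffs the two vote words I captured; bits 15, 16 and 23 flipped across
the cycle, though I don't know which client each bit represents.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t before = 0x81;			/* Client Votes before suspend */
	uint32_t after = 0x818081;		/* Client Votes after resume */
	uint32_t changed = before ^ after;	/* 0x818000 */

	for (int bit = 0; bit < 32; bit++)
		if (changed & (1u << bit))
			printf("bit %d flipped\n", bit);
	return 0;
}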

As a comparison, I'm also running the downstream 'rpm_master_stats'
driver in the same kernel, and the 'xo_count' field of APSS does
increase along with the suspend/resume cycle. May I ask for some advice
on what I'm possibly missing, and why I'm getting different results
between the 'vlow' and 'rpm_master_stats' reports?

# cat /sys/kernel/debug/rpm_master_stats
APSS
shutdown_req:0x37EA3CC74
wakeup_ind:0x0
bringup_req:0x37F041958
bringup_ack:0x37F042D54
xo_last_entered_at:0x286FF36AC
xo_last_exited_at:0x28AF94178
xo_accumulated_duration:0x3EDD55B
last_sleep_transition_duration:0x122f
last_wake_transition_duration:0x11f8
xo_count:0x1
wakeup_reason:0x0
numshutdowns:0x641
active_cores:0x1
core0
MPSS
shutdown_req:0x0
wakeup_ind:0x0
bringup_req:0x0
bringup_ack:0x0
xo_last_entered_at:0x0
xo_last_exited_at:0x0
xo_accumulated_duration:0x0
last_sleep_transition_duration:0x0
last_wake_transition_duration:0x0
xo_count:0x0
wakeup_reason:0x0
numshutdowns:0x0
active_cores:0x1
core0
ADSP
shutdown_req:0x0
wakeup_ind:0x0
bringup_req:0x0
bringup_ack:0x0
xo_last_entered_at:0x0
xo_last_exited_at:0x0
xo_accumulated_duration:0x0
last_sleep_transition_duration:0x0
last_wake_transition_duration:0x0
xo_count:0x0
wakeup_reason:0x0
numshutdowns:0x0
active_cores:0x1
core0
CDSP
shutdown_req:0x0
wakeup_ind:0x0
bringup_req:0x0
bringup_ack:0x0
xo_last_entered_at:0x0
xo_last_exited_at:0x0
xo_accumulated_duration:0x0
last_sleep_transition_duration:0x0
last_wake_transition_duration:0x0
xo_count:0x0
wakeup_reason:0x0
numshutdowns:0x0
active_cores:0x0
TZ
shutdown_req:0x0
wakeup_ind:0x0
bringup_req:0x0
bringup_ack:0x0
xo_last_entered_at:0x0
xo_last_exited_at:0x0
xo_accumulated_duration:0x0
last_sleep_transition_duration:0x0
last_wake_transition_duration:0x0
xo_count:0x0
wakeup_reason:0x0
numshutdowns:0x0
active_cores:0x0

Any comment or suggestion would be much appreciated!

Shawn