Message-ID: <BEC9F67575FA1E429CA7CF5AE9BE36343F154E@SHSMSX102.ccr.corp.intel.com>
Date: Mon, 4 Feb 2013 03:08:13 +0000
From: "Li, Fei" <fei.li@...el.com>
To: "Rafael J. Wysocki" <rjw@...k.pl>
CC: "akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
"Liu, Chuansheng" <chuansheng.liu@...el.com>
Subject: RE: [PATCH V4] suspend: enable freeze timeout configuration through
sys
>
> On Friday, February 01, 2013 04:56:03 PM fli24 wrote:
>
> The code looks OK, but I'm still having a more fundamental problem with this
> change, because ->
>
> > At present, the timeout for freezing is 20s, which is meaningless in
> > the case that one thread is frozen while holding a mutex and another
> > thread is trying to lock that mutex, as this attempt at freezing will
> > unavoidably fail.
>
> -> the situation described above shouldn't happen and if it does, then there
> is a bug that needs to be fixed.
Yes, we agree that this is a bug that needs to be fixed, and we have
already fixed it for some cases.
During that process, we realized that tuning the timeout down to a smaller
value helps to expose freezing failures in some cases.
> The change you're proposing seems to be targeted at hiding those bugs rather
> than at fixing them. I wonder why we should hide those bugs instead of fixing
> them?
>
> Rafael
We understand your concern about hiding bugs, and that is not our intention.
On the contrary, setting the timeout to a smaller value imposes a stricter
threshold, which exposes potential issues sooner and with a higher probability.
Considering the diversity of systems, we think it is suitable to expose this
interface so that different systems can use different configurations for
debugging and performance tuning.
Does that make sense to you?
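For illustration, a debugging session with the proposed interface might look
like this (the sysfs path is the one added by the patch; the values written
are examples only, and writing requires root):

```shell
# Read the current freeze timeout in milliseconds (defaults to 20000, i.e. 20s):
cat /sys/power/pm_freeze_timeout

# Tighten the threshold to 2s so that a freeze blocked on a held mutex
# is reported sooner while debugging (example value only):
echo 2000 > /sys/power/pm_freeze_timeout
```

A shorter timeout only shortens how long try_to_freeze_tasks() waits before
reporting failure; it does not change which tasks can be frozen.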
Best Regards,
Li Fei
>
> > And if no new wakeup event is registered, the system will waste up
> > to 20s on such a meaningless freezing attempt.
> >
> > With this patch, the timeout can be configured to a smaller value, so
> > such meaningless freezing attempts will be aborted earlier, the next
> > freezing attempt can be triggered earlier, and more power will be saved.
> > In the normal case on a mobile phone, freezing processes takes very
> > little time. On some platforms, it takes only about 20ms to freeze
> > user space processes and 10ms to freeze freezable kernel threads.
> >
> > Signed-off-by: Liu Chuansheng <chuansheng.liu@...el.com>
> > Signed-off-by: Li Fei <fei.li@...el.com>
> > ---
> > Documentation/power/freezing-of-tasks.txt | 5 +++++
> > include/linux/freezer.h | 5 +++++
> >  kernel/power/main.c                       |   27 +++++++++++++++++++++++++++
> > kernel/power/process.c | 4 ++--
> > 4 files changed, 39 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/power/freezing-of-tasks.txt b/Documentation/power/freezing-of-tasks.txt
> > index 6ec291e..85894d8 100644
> > --- a/Documentation/power/freezing-of-tasks.txt
> > +++ b/Documentation/power/freezing-of-tasks.txt
> > @@ -223,3 +223,8 @@ since they ask the freezer to skip freezing this task, since it is anyway
> > only after the entire suspend/hibernation sequence is complete.
> > So, to summarize, use [un]lock_system_sleep() instead of directly using
> > mutex_[un]lock(&pm_mutex). That would prevent freezing failures.
> > +
> > +V. Miscellaneous
> > +/sys/power/pm_freeze_timeout controls how long it will cost at most to freeze
> > +all user space processes or all freezable kernel threads, in unit of millisecond.
> > +The default value is 20000, with range of unsigned integer.
> > diff --git a/include/linux/freezer.h b/include/linux/freezer.h
> > index e4238ce..e70df40 100644
> > --- a/include/linux/freezer.h
> > +++ b/include/linux/freezer.h
> > @@ -13,6 +13,11 @@ extern bool pm_freezing;	/* PM freezing in effect */
> > extern bool pm_nosig_freezing; /* PM nosig freezing in effect */
> >
> > /*
> > + * Timeout for stopping processes
> > + */
> > +extern unsigned int freeze_timeout_msecs;
> > +
> > +/*
> > * Check if a process has been frozen
> > */
> > static inline bool frozen(struct task_struct *p)
> > diff --git a/kernel/power/main.c b/kernel/power/main.c
> > index 1c16f91..3e1c9da 100644
> > --- a/kernel/power/main.c
> > +++ b/kernel/power/main.c
> > @@ -553,6 +553,30 @@ power_attr(pm_trace_dev_match);
> >
> > #endif /* CONFIG_PM_TRACE */
> >
> > +#ifdef CONFIG_FREEZER
> > +static ssize_t pm_freeze_timeout_show(struct kobject *kobj,
> > + struct kobj_attribute *attr, char *buf)
> > +{
> > + return sprintf(buf, "%u\n", freeze_timeout_msecs);
> > +}
> > +
> > +static ssize_t pm_freeze_timeout_store(struct kobject *kobj,
> > + struct kobj_attribute *attr,
> > + const char *buf, size_t n)
> > +{
> > + unsigned long val;
> > +
> > + if (kstrtoul(buf, 10, &val))
> > + return -EINVAL;
> > +
> > + freeze_timeout_msecs = val;
> > + return n;
> > +}
> > +
> > +power_attr(pm_freeze_timeout);
> > +
> > +#endif /* CONFIG_FREEZER*/
> > +
> > static struct attribute * g[] = {
> > &state_attr.attr,
> > #ifdef CONFIG_PM_TRACE
> > @@ -576,6 +600,9 @@ static struct attribute * g[] = {
> > &pm_print_times_attr.attr,
> > #endif
> > #endif
> > +#ifdef CONFIG_FREEZER
> > + &pm_freeze_timeout_attr.attr,
> > +#endif
> > NULL,
> > };
> >
> > diff --git a/kernel/power/process.c b/kernel/power/process.c
> > index d5a258b..98088e0 100644
> > --- a/kernel/power/process.c
> > +++ b/kernel/power/process.c
> > @@ -21,7 +21,7 @@
> > /*
> > * Timeout for stopping processes
> > */
> > -#define TIMEOUT (20 * HZ)
> > +unsigned int __read_mostly freeze_timeout_msecs = 20 * MSEC_PER_SEC;
> >
> > static int try_to_freeze_tasks(bool user_only)
> > {
> > @@ -36,7 +36,7 @@ static int try_to_freeze_tasks(bool user_only)
> >
> > do_gettimeofday(&start);
> >
> > - end_time = jiffies + TIMEOUT;
> > + end_time = jiffies + msecs_to_jiffies(freeze_timeout_msecs);
> >
> > if (!user_only)
> > freeze_workqueues_begin();
> >
> --
> I speak only for myself.
> Rafael J. Wysocki, Intel Open Source Technology Center.