Message-ID: <20171213084612epcms5p7755822fff34c87907de2236923e82305@epcms5p7>
Date: Wed, 13 Dec 2017 08:46:12 +0000
From: Vikas Bansal <vikas.bansal@...sung.com>
To: "Rafael J. Wysocki" <rjw@...ysocki.net>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"len.brown@...el.com" <len.brown@...el.com>,
"pavel@....cz" <pavel@....cz>,
"linux-pm@...r.kernel.org" <linux-pm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH V3] PM: In kernel power management domain_pm created for
async schedules
Sender : Rafael J. Wysocki <rjw@...ysocki.net>
Date : 2017-12-06 19:48 (GMT+5:30)
> On Wednesday, December 6, 2017 3:12:38 PM CET gregkh@...uxfoundation.org wrote:
> > On Wed, Dec 06, 2017 at 12:07:14PM +0000, Vikas Bansal wrote:
> > > Description:
> >
> > Why is this here?
> >
> > >
> > > If there is a driver in system which starts creating async schedules
> > > just after resume (Same as our case, in which we faced issue).
> > > Then async_synchronize_full API in PM cores starts waiting for completion
> > > of async schedules created by that driver (Even though those are in a domain).
> > > Because of this kernel resume time is increased (We faces the same issue)
> > > and whole system is delayed.
> > > This problem can be solved by creating a domain for
> > > async schedules in PM core (As we solved in our case).
> > > Below patch is for solving this problem.
> >
> > Very odd formatting.
> >
> > >
> > > Changelog:
> > > 1. Created Async domain domain_pm.
> > > 2. Converted async_schedule to async_schedule_domain.
> > > 3. Converted async_synchronize_full to async_synchronize_full_domain
> >
> > I'm confused. Have you read kernel patch submissions? Look at how they
> > are formatted. The documentation in the kernel tree should help you out
> > a lot here.
> >
> > Also, this is not v1, it has changed from the previous version. Always
> > describe, in the correct way, the changes from previous submissions.
Setting the correct version and changing the formatting.
> >
> >
> > >
> > >
> > >
> > > Signed-off-by: Vikas Bansal <vikas.bansal@...sung.com>
> > > Signed-off-by: Anuj Gupta <anuj01.gupta@...sung.com>
> > > ---
> > > drivers/base/power/main.c | 27 +++++++++++++++------------
> > > 1 file changed, 15 insertions(+), 12 deletions(-)
> > >
> > > diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
> > > index db2f044..042b034 100644
> > > --- a/drivers/base/power/main.c
> > > +++ b/drivers/base/power/main.c
> > > @@ -39,6 +39,7 @@
> > > #include "power.h"
> > >
> > > typedef int (*pm_callback_t)(struct device *);
> > > +static ASYNC_DOMAIN(domain_pm);
> > >
> > > /*
> > > * The entries in the dpm_list list are in a depth first order, simply
> > > @@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
> > > reinit_completion(&dev->power.completion);
> > > if (is_async(dev)) {
> > > get_device(dev);
> > > - async_schedule(async_resume_noirq, dev);
> > > + async_schedule_domain(async_resume_noirq, dev,
> >
> > Always run your patches through scripts/checkpatch.pl so you do you not
> > get grumpy maintainers telling you to use scripts/checkpatch.pl
> >
> > Stop. Take some time. Redo the patch in another day or so, and then
> > resend it later, _AFTER_ you have addressed the issues. Don't rush,
> > there is no race here.
>
> Also it is not clear to me if this fixes a mainline kernel issue,
> because the changelog mentions a driver doing something odd, but it
> doesn't say which one it is and whether or not it is in the tree.
No, this driver is not part of mainline yet.
Changing the patch and changelog as suggested. I changed the name of the domain
from "domain_pm" to "async_pm", but kept domain_pm in the subject just to
avoid confusion.
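For reference, these are the async primitives the patch relies on, as I read
them in include/linux/async.h (abbreviated here; details may vary by kernel
version):

/*
 * A domain declared with ASYNC_DOMAIN() is "registered", so the global
 * async_synchronize_full() still waits on it; the *_domain() variants
 * below queue and wait on one specific domain only.
 */
#define ASYNC_DOMAIN(_name)	/* declares a registered struct async_domain */

async_cookie_t async_schedule_domain(async_func_t func, void *data,
				     struct async_domain *domain);
void async_synchronize_full_domain(struct async_domain *domain);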
>
> Thanks,
> Rafael

If there is a driver in the system that starts creating async schedules right
after resume (as in our case, where we hit this issue), then the
async_synchronize_full() API in the PM core starts waiting for completion of
the async schedules created by that driver, even though they are in a separate
domain. Because of this, kernel resume time increases and the whole system is
delayed.

To solve this, create an async domain async_pm in the PM core, convert the
async_schedule() calls to async_schedule_domain(), and convert
async_synchronize_full() to async_synchronize_full_domain(), so that the PM
core waits only for the async work it scheduled itself.

Signed-off-by: Vikas Bansal <vikas.bansal@...sung.com>
Signed-off-by: Anuj Gupta <anuj01.gupta@...sung.com>
---
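As an illustration of the driver-side pattern that triggered this, here is a
minimal hypothetical sketch (the my_drv_* names are made up and not part of
this patch): a driver queueing slow post-resume work in its own async domain.
The old async_synchronize_full() in the PM core still waited for such work,
since it synchronizes all registered domains, whereas
async_synchronize_full_domain(&async_pm) does not.

#include <linux/async.h>
#include <linux/device.h>

/* Hypothetical driver-private domain; illustrative only. */
static ASYNC_DOMAIN(my_drv_domain);

static void my_drv_slow_init(void *data, async_cookie_t cookie)
{
	struct device *dev = data;

	/* Slow post-resume work, e.g. re-loading firmware. */
	dev_info(dev, "async init done\n");
}

static int my_drv_resume(struct device *dev)
{
	/* Runs concurrently; the PM core no longer blocks on it. */
	async_schedule_domain(my_drv_slow_init, dev, &my_drv_domain);
	return 0;
}
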
drivers/base/power/main.c | 27 +++++++++++++++------------
1 file changed, 15 insertions(+), 12 deletions(-)

diff --git a/drivers/base/power/main.c b/drivers/base/power/main.c
index db2f044..03b71e3 100644
--- a/drivers/base/power/main.c
+++ b/drivers/base/power/main.c
@@ -39,6 +39,7 @@
#include "power.h"

typedef int (*pm_callback_t)(struct device *);
+static ASYNC_DOMAIN(async_pm);

/*
 * The entries in the dpm_list list are in a depth first order, simply
@@ -615,7 +616,8 @@ void dpm_noirq_resume_devices(pm_message_t state)
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
- async_schedule(async_resume_noirq, dev);
+ async_schedule_domain(async_resume_noirq, dev,
+ &async_pm);
}
}

@@ -641,7 +643,7 @@ void dpm_noirq_resume_devices(pm_message_t state)
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
- async_synchronize_full();
+ async_synchronize_full_domain(&async_pm);
dpm_show_time(starttime, state, 0, "noirq");
trace_suspend_resume(TPS("dpm_resume_noirq"), state.event, false);
}
@@ -755,7 +757,8 @@ void dpm_resume_early(pm_message_t state)
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
- async_schedule(async_resume_early, dev);
+ async_schedule_domain(async_resume_early, dev,
+ &async_pm);
}
}

@@ -780,7 +783,7 @@ void dpm_resume_early(pm_message_t state)
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
- async_synchronize_full();
+ async_synchronize_full_domain(&async_pm);
dpm_show_time(starttime, state, 0, "early");
trace_suspend_resume(TPS("dpm_resume_early"), state.event, false);
}
@@ -919,7 +922,7 @@ void dpm_resume(pm_message_t state)
reinit_completion(&dev->power.completion);
if (is_async(dev)) {
get_device(dev);
- async_schedule(async_resume, dev);
+ async_schedule_domain(async_resume, dev, &async_pm);
}
}

@@ -946,7 +949,7 @@ void dpm_resume(pm_message_t state)
put_device(dev);
}
mutex_unlock(&dpm_list_mtx);
- async_synchronize_full();
+ async_synchronize_full_domain(&async_pm);
dpm_show_time(starttime, state, 0, NULL);

cpufreq_resume();
@@ -1156,7 +1159,7 @@ static int device_suspend_noirq(struct device *dev)

if (is_async(dev)) {
get_device(dev);
- async_schedule(async_suspend_noirq, dev);
+ async_schedule_domain(async_suspend_noirq, dev, &async_pm);
return 0;
}
return __device_suspend_noirq(dev, pm_transition, false);
@@ -1202,7 +1205,7 @@ int dpm_noirq_suspend_devices(pm_message_t state)
break;
}
mutex_unlock(&dpm_list_mtx);
- async_synchronize_full();
+ async_synchronize_full_domain(&async_pm);
if (!error)
error = async_error;

@@ -1316,7 +1319,7 @@ static int device_suspend_late(struct device *dev)

if (is_async(dev)) {
get_device(dev);
- async_schedule(async_suspend_late, dev);
+ async_schedule_domain(async_suspend_late, dev, &async_pm);
return 0;
}

@@ -1361,7 +1364,7 @@ int dpm_suspend_late(pm_message_t state)
break;
}
mutex_unlock(&dpm_list_mtx);
- async_synchronize_full();
+ async_synchronize_full_domain(&async_pm);
if (!error)
error = async_error;
if (error) {
@@ -1576,7 +1579,7 @@ static int device_suspend(struct device *dev)

if (is_async(dev)) {
get_device(dev);
- async_schedule(async_suspend, dev);
+ async_schedule_domain(async_suspend, dev, &async_pm);
return 0;
}

@@ -1622,7 +1625,7 @@ int dpm_suspend(pm_message_t state)
break;
}
mutex_unlock(&dpm_list_mtx);
- async_synchronize_full();
+ async_synchronize_full_domain(&async_pm);
if (!error)
error = async_error;
if (error) {
--
1.7.9.5