Message-ID: <1477527.x1Afvfmi0d@vostro.rjw.lan>
Date: Sat, 08 Jun 2013 02:54:05 +0200
From: "Rafael J. Wysocki" <rjw@...k.pl>
To: yanmin_zhang@...ux.intel.com
Cc: shuox.liu@...el.com, linux-kernel@...r.kernel.org,
linux-pm@...r.kernel.org, pavel@....cz, len.brown@...el.com,
gregkh@...uxfoundation.org
Subject: Re: [PATCH 0/2] Run callback of device_prepare/complete consistently
On Saturday, June 08, 2013 08:42:12 AM Yanmin Zhang wrote:
> On Fri, 2013-06-07 at 12:36 +0200, Rafael J. Wysocki wrote:
> > On Friday, June 07, 2013 04:20:30 PM shuox.liu@...el.com wrote:
> > > dpm_run_callback is used in the other stages of power state changes.
> > > It provides a debug message and time measurement when calling these
> > > callbacks. We want ->prepare and ->complete to benefit as well.
> > >
> > > [PATCH 1/2] PM: use dpm_run_callback in device_prepare
> > > [PATCH 2/2] PM: add dpm_run_callback_void and use it in device_complete
> >
> > Is this an "Oh, why don't we do that?" series, or is it useful for anything
> > in practice?  I'm asking because we didn't add that stuff to start with,
> > since we didn't see why it would be useful to anyone.
> >
> > And while patch [1/2] reduces the code size (by one line), so I can see some
> > (tiny) benefit from applying it, patch [2/2] adds more code; is there any
> > practical reason for it?
> Sometimes the suspend-to-RAM path takes too much time (either suspending
> or waking up slowly) and we need to optimize it.
> With these two patches, we can collect the initcall_debug printk info and
> manually check which prepare/complete callbacks consume too much time.
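For reference, the pattern described above is a wrapper that prints a debug
message, runs the callback, and reports how long it took, in the style of the
initcall_debug output.  Below is a minimal, self-contained userspace sketch of
that pattern; it is not dpm_run_callback() itself, and the struct, function,
and callback names (run_callback_timed, slow_prepare, dummy_dev) are made up
for illustration:

#include <stdio.h>
#include <time.h>

/* Toy stand-ins for the kernel types; for illustration only. */
struct device {
        const char *name;
};

typedef int (*pm_callback_t)(struct device *dev);

/*
 * Time a single callback and print a report, loosely modeled on the
 * "call ... returned ... after ... usecs" lines that initcall_debug
 * produces.  A userspace sketch, not the kernel implementation.
 */
static int run_callback_timed(pm_callback_t cb, struct device *dev,
                              const char *info)
{
        struct timespec start, end;
        long long usecs;
        int error;

        clock_gettime(CLOCK_MONOTONIC, &start);
        error = cb(dev);
        clock_gettime(CLOCK_MONOTONIC, &end);

        usecs = (end.tv_sec - start.tv_sec) * 1000000LL +
                (end.tv_nsec - start.tv_nsec) / 1000;
        printf("call %s+ %s returned %d after %lld usecs\n",
               dev->name, info, error, usecs);
        return error;
}

/* A pretend ->prepare callback that takes about 5 ms. */
static int slow_prepare(struct device *dev)
{
        struct timespec delay = { 0, 5 * 1000 * 1000 };

        nanosleep(&delay, NULL);
        return 0;
}

int main(void)
{
        struct device dev = { .name = "dummy_dev" };

        return run_callback_timed(slow_prepare, &dev, "prepare");
}

Scanning the resulting timestamped lines then shows which callbacks dominate
the suspend/resume time, which is the workflow described in the quote above.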
Well, can you point me to a single driver where prepare/complete causes this
type of problem to happen?
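For completeness, the void-returning variant that patch [2/2] refers to would
presumably follow the same shape: ->complete callbacks return void, so there
is no error to report, only the debug message and the timing.  A hedged sketch
of that variant, again as a self-contained userspace model rather than the
actual patch:

#include <stdio.h>
#include <time.h>

struct device {
        const char *name;
};

typedef void (*pm_callback_void_t)(struct device *dev);

/*
 * The same timing pattern for void callbacks such as ->complete:
 * nothing to return, so only the debug/timing report remains.
 * Sketch only; the real dpm_run_callback_void() may differ.
 */
static void run_callback_void_timed(pm_callback_void_t cb, struct device *dev,
                                    const char *info)
{
        struct timespec start, end;

        clock_gettime(CLOCK_MONOTONIC, &start);
        cb(dev);
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("call %s+ %s took %lld usecs\n", dev->name, info,
               (long long)(end.tv_sec - start.tv_sec) * 1000000LL +
               (end.tv_nsec - start.tv_nsec) / 1000);
}

/* A pretend ->complete callback that does nothing. */
static void noop_complete(struct device *dev)
{
}

int main(void)
{
        struct device dev = { .name = "dummy_dev" };

        run_callback_void_timed(noop_complete, &dev, "complete");
        return 0;
}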
Rafael
--
I speak only for myself.
Rafael J. Wysocki, Intel Open Source Technology Center.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/