Message-ID: <CAGETcx-NEjg5GwEMyz7C88ZhBrpFd55Md05Wez4kurvmdaWabQ@mail.gmail.com>
Date: Mon, 18 Nov 2024 20:04:26 -0800
From: Saravana Kannan <saravanak@...gle.com>
To: "Rafael J. Wysocki" <rafael@...nel.org>, Pavel Machek <pavel@....cz>, Len Brown <len.brown@...el.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>, Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Valentin Schneider <vschneid@...hat.com>
Cc: Geert Uytterhoeven <geert@...ux-m68k.org>, Marek Vasut <marex@...x.de>, "Bird, Tim" <Tim.Bird@...y.com>,
kernel-team@...roid.com, linux-pm@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1 0/5] Optimize async device suspend/resume
On Thu, Nov 14, 2024 at 2:09 PM Saravana Kannan <saravanak@...gle.com> wrote:
>
> A lot of the details are in patches 4/5 and 5/5. The summary is that
> there's a lot of overhead and wasted work in how async device
> suspend/resume is handled today. I talked about this and other
> suspend/resume issues at LPC 2024[1].
>
> You can remove a lot of the overhead by doing breadth-first queuing of
> async suspend/resumes. That's what this patch series does. I also
> noticed that during resume, because of EAS, we don't use the bigger
> CPUs as quickly. This leads to a lot of scheduling latency and
> preemption of runnable threads, increasing the resume latency. So we
> also disable EAS for that tiny window during resume where we know
> there'll be a lot of parallelism.
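>
> A rough sketch of the idea (not the exact code in the patches): once a
> device finishes resuming, queue the async work for any child or
> consumer whose superior devices are all done. In the sketch below,
> dpm_superior_devs_done() and async_resume() are hypothetical stand-ins
> for the series' helpers; async_schedule_dev(), device_for_each_child()
> and the device-link iteration are existing kernel APIs.
>
> static int dpm_async_queue_if_ready(struct device *dev, void *unused)
> {
>         /* dpm_superior_devs_done() is a hypothetical readiness check. */
>         if (dev->power.async_suspend && dpm_superior_devs_done(dev))
>                 async_schedule_dev(async_resume, dev);
>         return 0;
> }
>
> static void dpm_async_queue_subordinates(struct device *dev)
> {
>         struct device_link *link;
>         int idx;
>
>         /* Children become eligible once their parent has resumed. */
>         device_for_each_child(dev, NULL, dpm_async_queue_if_ready);
>
>         /* Consumers become eligible once all their suppliers are done. */
>         idx = device_links_read_lock();
>         list_for_each_entry_rcu(link, &dev->links.consumers, s_node,
>                                 device_links_read_lock_held())
>                 dpm_async_queue_if_ready(link->consumer, NULL);
>         device_links_read_unlock(idx);
> }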
>
> On a Pixel 6, averaging over 100 suspend/resume cycles, this patch
> series yields significant improvements:
> +---------------------------+---------------+----------------+----------------+
> | Phase                     | Old full sync | Old full async | New full async |
> |                           |               |                | + EAS disabled |
> +---------------------------+---------------+----------------+----------------+
> | Total dpm_suspend*() time | 107 ms        | 72 ms          | 62 ms          |
> +---------------------------+---------------+----------------+----------------+
> | Total dpm_resume*() time  | 75 ms         | 90 ms          | 61 ms          |
> +---------------------------+---------------+----------------+----------------+
> | Sum                       | 182 ms        | 162 ms         | 123 ms         |
> +---------------------------+---------------+----------------+----------------+
>
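> On the EAS side, the mechanism can be as simple as flagging the resume
> window so the scheduler stops trying to pack tasks onto the small
> CPUs. Below is a minimal sketch of one way to do it, not the patch
> itself; pm_resume_in_progress, the pm_sched_resume_*() helpers and
> sched_eas_allowed() are hypothetical names, and only
> READ_ONCE()/WRITE_ONCE() are existing kernel APIs here.
>
> static bool pm_resume_in_progress;
>
> void pm_sched_resume_begin(void)
> {
>         WRITE_ONCE(pm_resume_in_progress, true);
> }
>
> void pm_sched_resume_end(void)
> {
>         WRITE_ONCE(pm_resume_in_progress, false);
> }
>
> /* The EAS wake-up path would bail out while this returns false: */
> static inline bool sched_eas_allowed(void)
> {
>         return !READ_ONCE(pm_resume_in_progress);
> }
>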
> There might be room for some more optimizations in the future, but I'm
> keeping this patch series simple enough that it's easier to review and
> check that it's not breaking anything. If this series lands and stays
> stable with no bug reports for a few months, I can work on optimizing
> this a bit further.
>
> Thanks,
> Saravana
> P.S.: Cc-ing some usual suspects who might be interested in testing
> this out.
>
> [1] - https://lpc.events/event/18/contributions/1845/
>
> Saravana Kannan (5):
> PM: sleep: Fix runtime PM issue in dpm_resume()
> PM: sleep: Remove unnecessary mutex lock when waiting on parent
> PM: sleep: Add helper functions to loop through superior/subordinate
> devs
> PM: sleep: Do breadth first suspend/resume for async suspend/resume
> PM: sleep: Spread out async kworker threads during dpm_resume*()
> phases
>
>  drivers/base/power/main.c | 325 +++++++++++++++++++++++++++++---------
>  kernel/power/suspend.c    |  16 ++
>  kernel/sched/topology.c   |  13 ++
>  3 files changed, 276 insertions(+), 78 deletions(-)
>
> --
> 2.47.0.338.g60cca15819-goog
>

Hi Rafael/Greg,

I'm waiting for one of your reviews before I send out the next version.

-Saravana