Date: Fri, 05 Apr 2024 19:52:16 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Anna-Maria Behnsen <anna-maria@...utronix.de>, linux-kernel@...r.kernel.org
Cc: "Rafael J . Wysocki" <rafael@...nel.org>, Pavel Machek <pavel@....cz>,
 Len Brown <len.brown@...el.com>, Ulf Hansson <ulf.hansson@...aro.org>,
 linux-pm@...r.kernel.org, Frederic Weisbecker <frederic@...nel.org>,
 x86@...nel.org, Anna-Maria Behnsen <anna-maria@...utronix.de>, Mario
 Limonciello <mario.limonciello@....com>, stable@...nel.org
Subject: Re: [PATCH] PM: s2idle: Make sure CPUs will wakeup directly on resume

On Fri, Apr 05 2024 at 10:34, Anna-Maria Behnsen wrote:
> s2idle works like a regular suspend with freezing processes and freezing
> devices. All CPUs except the control CPU go into idle. Once this is
> completed the control CPU kicks all other CPUs out of idle, so that they
> reenter the idle loop and then enter s2idle state. The control CPU then
> issues an swait() on the suspend state and therefore enters the idle loop
> as well.
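
For reference, the entry path described above is s2idle_enter() in
kernel/power/suspend.c. An abridged sketch (not the exact upstream code;
the swait below is the swait() the changelog refers to):

	static void s2idle_enter(void)
	{
		/* ... s2idle state setup under s2idle_lock elided ... */

		/* Push all the CPUs into the idle loop. */
		wake_up_all_idle_cpus();

		/* Make the control CPU wait so it can enter the idle loop too. */
		swait_event_exclusive(s2idle_wait_head,
				      s2idle_state == S2IDLE_STATE_WAKE);

		/* ... resume side continues here ... */
	}
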
>
> Due to being kicked out of idle, the other CPUs leave their NOHZ states,
> which means the tick is active and the corresponding hrtimer is programmed
> to the next jiffie.
>
> On entering s2idle the CPUs shut down their local clockevent device to
> prevent wakeups. The last CPU which enters s2idle shuts down its local
> clockevent and freezes timekeeping.
>
> On resume, one of the CPUs receives the wakeup interrupt, unfreezes
> timekeeping and its local clockevent and starts the resume process. At that
> point all other CPUs are still in s2idle with their clockevents switched
> off. They only resume when they are kicked by another CPU or after resuming
> devices and then receiving a device interrupt.
>
> That means there is no guarantee that all CPUs will wakeup directly on
> resume. As the consequence there is no guarantee that timers which are

s/As the/As a/

> queued on those CPUs and should expire directly after resume, are
> handled. Also timer list timers which are remotely queued to one of those
> CPUs after resume will not result in a reporgramming IPI as the tick is

s/reporgramming/reprogramming/

> active. A queue hrtimer will also not result in a reprogramming IPI because

s/A queue/Queueing a/

> the first hrtimer event is already in the past.
>
> The recent introduction of the timer pull model (7ee988770326 ("timers:
> Implement the hierarchical pull model")) amplifies this problem if the
> current migrator is one of the not yet woken up CPUs. When a non-pinned
> timer list timer is queued and the queueing CPU goes idle, it relies on
> the still suspended migrator CPU to expire the timer, which will only
> happen by chance.
>
> The problem exists since commit 8d89835b0467 ("PM: suspend: Do not pause
> cpuidle in the suspend-to-idle path"). There the cpuidle_pause() call,
> which in turn invoked a wakeup for all idle CPUs, was moved to a later
> point in the resume process. This point might not be reached, or might be
> reached only very late, because it waits on a timer of a still suspended
> CPU.
>
> Address this by kicking all CPUs out of idle after the control CPU returns
> from swait() so that they resume their timers and restore consistent system
> state.
>
> Closes: https://bugzilla.kernel.org/show_bug.cgi?id=218641
> Fixes: 8d89835b0467 ("PM: suspend: Do not pause cpuidle in the suspend-to-idle path")
> Signed-off-by: Anna-Maria Behnsen <anna-maria@...utronix.de>
> Tested-by: Mario Limonciello <mario.limonciello@....com>
> Cc: stable@...nel.org
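
For illustration, a minimal sketch of what that amounts to in
s2idle_enter(), assuming wake_up_all_idle_cpus() is used as the kick
(the same helper the entry path already uses to push CPUs into idle):

	/* The control CPU returns from the swait once a wakeup happened. */
	swait_event_exclusive(s2idle_wait_head,
			      s2idle_state == S2IDLE_STATE_WAKE);

	/*
	 * Kick all CPUs to ensure that they resume their timers and
	 * restore consistent system state.
	 */
	wake_up_all_idle_cpus();
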

Reviewed-by: Thomas Gleixner <tglx@...utronix.de>

