Message-ID: <aTl2VdDTRrVu6og_@localhost.localdomain>
Date: Wed, 10 Dec 2025 14:32:05 +0100
From: Frederic Weisbecker <frederic@...nel.org>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: linux-kernel@...r.kernel.org, Peter Zijlstra <peterz@...radead.org>,
	"Rafael J. Wysocki" <rafael@...nel.org>, linux-pm@...r.kernel.org
Subject: Re: [PATCH v2] sched/idle: disable tick in idle=poll idle entry

On Mon, Nov 03, 2025 at 08:30:06AM -0300, Marcelo Tosatti wrote:
> On Thu, Oct 30, 2025 at 05:06:49PM +0100, Frederic Weisbecker wrote:
> > (Adding more people in Cc)
> > 
> > On Wed, Oct 29, 2025 at 03:00:56PM -0300, Marcelo Tosatti wrote:
> > > 
> > > Commit a5183862e76fdc25f36b39c2489b816a5c66e2e5 
> > > ("tick/nohz: Conditionally restart tick on idle exit") allows
> > > a nohz_full CPU to enter idle and return from it with the 
> > > scheduler tick disabled (since the tick might be undesired noise).
> > > 
> > > The idle=poll case still unconditionally restarts the tick when entering
> > > idle.
> > > 
> > > To reduce the noise for that case as well, stop the tick when entering
> > > idle, for the idle=poll case.
> > > 
> > > Change tick_nohz_full_kick_cpu() to set the NEED_RESCHED bit, to handle
> > > the case where a new timer is added from an interrupt. This breaks out
> > > of cpu_idle_poll() and rearms the timer if necessary.
> > > 
> > > ---
> > > 
> > > v2: Handle the case where a new timer is added from an interrupt (Frederic Weisbecker)
> > > 
> > >  include/linux/sched.h    |    2 ++
> > >  kernel/sched/core.c      |   10 ++++++++++
> > >  kernel/sched/idle.c      |    2 +-
> > >  kernel/time/tick-sched.c |    1 +
> > >  4 files changed, 14 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index cbb7340c5866..1f6938dc20cd 100644
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -2428,4 +2428,6 @@ extern void migrate_enable(void);
> > >  
> > >  DEFINE_LOCK_GUARD_0(migrate, migrate_disable(), migrate_enable())
> > >  
> > > +void set_tif_resched_if_polling(int cpu);
> > > +
> > >  #endif
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index f1ebf67b48e2..f0b84600084b 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -988,6 +988,11 @@ static bool set_nr_if_polling(struct task_struct *p)
> > >  	return true;
> > >  }
> > >  
> > > +void set_tif_resched_if_polling(int cpu)
> > > +{
> > > +	set_nr_if_polling(cpu_rq(cpu)->idle);
> > > +}
> > > +
> > >  #else
> > >  static inline bool set_nr_and_not_polling(struct thread_info *ti, int tif)
> > >  {
> > > @@ -999,6 +1004,11 @@ static inline bool set_nr_if_polling(struct task_struct *p)
> > >  {
> > >  	return false;
> > >  }
> > > +
> > > +void set_tif_resched_if_polling(int cpu)
> > > +{
> > > +	set_tsk_need_resched(cpu_rq(cpu)->idle);
> > > +}
> > >  #endif
> > >  
> > >  static bool __wake_q_add(struct wake_q_head *head, struct task_struct *task)
> > > diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> > > index c39b089d4f09..428c2d1cbd1b 100644
> > > --- a/kernel/sched/idle.c
> > > +++ b/kernel/sched/idle.c
> > > @@ -324,7 +324,7 @@ static void do_idle(void)
> > >  		 * idle as we know that the IPI is going to arrive right away.
> > >  		 */
> > >  		if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
> > > -			tick_nohz_idle_restart_tick();
> > > +			tick_nohz_idle_stop_tick();
> > 
> > Shouldn't we simply remove the tick_nohz_idle_restart_tick() line? The nohz_full
> > CPU should have entered here with the tick disabled already.
> > 
> > Also non-nohz_full systems shouldn't care.
> 
> With tick_nohz_idle_restart_tick removed:
> 
> <idle>-0 [001] d.h2. 51.356672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51360062500 softexpires=51360062500 mode=ABS
> <idle>-0 [001] d.h2. 51.357671: hrtimer_cancel: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h1. 51.357671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51360063398
> <idle>-0 [001] d.h1. 51.357671: hrtimer_expire_exit: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h2. 51.357671: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51361062500 softexpires=51361062500 mode=ABS
> <idle>-0 [001] d.h2. 51.358671: hrtimer_cancel: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h1. 51.358671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51361063420
> <idle>-0 [001] d.h1. 51.358672: hrtimer_expire_exit: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h2. 51.358672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51362062500 softexpires=51362062500 mode=ABS
> <idle>-0 [001] d.h2. 51.359671: hrtimer_cancel: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h1. 51.359671: hrtimer_expire_entry: hrtimer=ffff927ae205c418 function=tick_nohz_handler now=51362063447
> <idle>-0 [001] d.h1. 51.359672: hrtimer_expire_exit: hrtimer=ffff927ae205c418
> <idle>-0 [001] d.h2. 51.359672: hrtimer_start: hrtimer=ffff927ae205c418 function=tick_nohz_handler expires=51363062500 softexpires=51363062500 mode=ABS
> 
> CPU 1 is idle and isolated.

Surprising; somehow the CPU's tick never interrupted a non-idle section. I guess
it's possible after boot. Or the CPU had tick dependencies before. I was about
to propose stopping the tick right before exiting to userspace, but since you're
using idle=poll, I guess userspace must be reached as fast as possible, and
therefore you prefer to stop the tick before the next wake-up rather than after?

Also, instead of polling in the kernel, why not poll in userspace for events? That
sounds like a saner isolation design. Entering/exiting the kernel always carries
the risk of something going wrong.
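
For illustration only, a minimal sketch of userspace event polling on an isolated
CPU, assuming events are published through a shared-memory flag; the names and the
flag layout here are made up, and a real setup would poll whatever event channel
the application already has (NIC ring, shared ring buffer, ...):

	#include <stdatomic.h>

	/* Hypothetical event word written by a producer elsewhere. */
	static _Atomic unsigned int event_flag;

	static void poll_events_in_userspace(void)
	{
		for (;;) {
			unsigned int ev = atomic_load_explicit(&event_flag,
							       memory_order_acquire);
			if (ev) {
				/* handle_event(ev);  -- application-specific work */
				atomic_store_explicit(&event_flag, 0,
						      memory_order_release);
			}
			/* Optionally pause/yield here to be kind to SMT siblings. */
		}
	}

The isolated CPU then never has to enter the kernel just to wait for work.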

> 
> > >  			cpu_idle_poll();
> > >  		} else {
> > >  			cpuidle_idle_call();
> > > diff --git a/kernel/time/tick-sched.c b/kernel/time/tick-sched.c
> > > index c527b421c865..efc3653999dc 100644
> > > --- a/kernel/time/tick-sched.c
> > > +++ b/kernel/time/tick-sched.c
> > > @@ -408,6 +408,7 @@ void tick_nohz_full_kick_cpu(int cpu)
> > >  	if (!tick_nohz_full_cpu(cpu))
> > >  		return;
> > >  
> > > +	set_tif_resched_if_polling(cpu);
> > 
> > Perhaps stuff that into wake_up_full_nohz_cpu() and call
> > set_nr_if_polling() directly.
> 
> Can't call set_nr_if_polling() directly, because when TIF_POLLING_NRFLAG is
> undefined it is stubbed out as:
> 
> static inline bool set_nr_if_polling(struct task_struct *p)
> {
>         return false;
> }
> 
> So the wakeup won't occur. Or am I missing something?
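
For reference, the poll loop exits on TIF_NEED_RESCHED either way; heavily
simplified from cpu_idle_poll() in kernel/sched/idle.c (tracing and
context-tracking omitted, details may differ between kernel versions):

	raw_local_irq_enable();
	while (!tif_need_resched() &&
	       (cpu_idle_force_poll || tick_check_broadcast_expired()))
		cpu_relax();
	raw_local_irq_disable();

So once the flag is set, the next evaluation of the loop condition falls through
and the idle loop can re-evaluate its timers.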

OK, but can you at least move that to wake_up_full_nohz_cpu()?
tick_nohz_full_kick_cpu() is more general and doesn't only concern
new timers.
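
Something along these lines; just a sketch, the surrounding body of
wake_up_full_nohz_cpu() is from memory and may not match your tree exactly:

	static bool wake_up_full_nohz_cpu(int cpu)
	{
		if (cpu_is_offline(cpu))
			return true;	/* Don't try to wake offline CPUs. */

		if (tick_nohz_full_cpu(cpu)) {
			/* Break a polling idle loop so it re-evaluates its timers. */
			set_tif_resched_if_polling(cpu);

			if (cpu != smp_processor_id() || tick_nohz_tick_stopped())
				tick_nohz_full_kick_cpu(cpu);
			return true;
		}

		return false;
	}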

-- 
Frederic Weisbecker
SUSE Labs
