Message-ID: <877c6f80bo.ffs@tglx>
Date: Tue, 28 Jan 2025 15:44:43 +0100
From: Thomas Gleixner <tglx@...utronix.de>
To: John Stultz <jstultz@...gle.com>, LKML <linux-kernel@...r.kernel.org>
Cc: John Stultz <jstultz@...gle.com>, Anna-Maria Behnsen
 <anna-maria@...utronix.de>, Frederic Weisbecker <frederic@...nel.org>,
 Ingo Molnar <mingo@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
 Juri Lelli <juri.lelli@...hat.com>, Vincent Guittot
 <vincent.guittot@...aro.org>, Dietmar Eggemann <dietmar.eggemann@....com>,
 Steven Rostedt <rostedt@...dmis.org>, Ben Segall <bsegall@...gle.com>, Mel
 Gorman <mgorman@...e.de>, Valentin Schneider <vschneid@...hat.com>,
 Stephen Boyd <sboyd@...nel.org>, Yury Norov <yury.norov@...il.com>, Bitao
 Hu <yaoma@...ux.alibaba.com>, Andrew Morton <akpm@...ux-foundation.org>,
 kernel-team@...roid.com
Subject: Re: [RFC][PATCH 1/3] time/tick: Pipe tick count down through
 cputime accounting

On Mon, Jan 27 2025 at 22:32, John Stultz wrote:
> In working up the dynHZ patch, I found that the skipping of
> ticks would result in large latencies for itimers.
>
> As I dug into it, I realized there is still some logic that
> assumes we don't miss ticks, resulting in late expiration of
> cputime timers.
>
> So this patch pipes the actual number of ticks passed down
> through cputime accounting.

This word salad does not qualify as a proper technical changelog. See
Documentation/

>  /*
>   * Must be called with interrupts disabled !
>   */
> -static void tick_do_update_jiffies64(ktime_t now)
> +static unsigned long tick_do_update_jiffies64(ktime_t now)
>  {
>  	unsigned long ticks = 1;
>  	ktime_t delta, nextp;
> @@ -70,7 +70,7 @@ static void tick_do_update_jiffies64(ktime_t now)
>  	 */
>  	if (IS_ENABLED(CONFIG_64BIT)) {
>  		if (ktime_before(now, smp_load_acquire(&tick_next_period)))
> -			return;
> +			return 0;

So if the CPU's tick handler does not update jiffies, then this returns
zero ticks....

> -static void tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
> +static unsigned long tick_sched_do_timer(struct tick_sched *ts, ktime_t now)
>  {
>  	int tick_cpu, cpu = smp_processor_id();
> -
> +	unsigned long ticks = 0;

And you also get zero ticks when the CPU is not the tick_cpu:

>  	/* Check if jiffies need an update */
>  	if (tick_cpu == cpu)
> -		tick_do_update_jiffies64(now);
> +		ticks = tick_do_update_jiffies64(now);

...

> +	return ticks;
>  }
>  
> -static void tick_sched_handle(struct tick_sched *ts, struct pt_regs *regs)
> +static void tick_sched_handle(struct tick_sched *ts, unsigned long ticks, struct pt_regs *regs)
>  {
>  	/*
>  	 * When we are idle and the tick is stopped, we have to touch
> @@ -264,7 +266,7 @@ static void tick_sched_handle(struct tick_sched *ts, struct pt_regs *regs)
>  	    tick_sched_flag_test(ts, TS_FLAG_STOPPED)) {
>  		touch_softlockup_watchdog_sched();
>  		if (is_idle_task(current))
> -			ts->idle_jiffies++;
> +			ts->idle_jiffies += ticks;
>  		/*
>  		 * In case the current tick fired too early past its expected
>  		 * expiration, make sure we don't bypass the next clock reprogramming
> @@ -273,7 +275,7 @@ static void tick_sched_handle(struct tick_sched *ts, struct pt_regs *regs)
>  		ts->next_tick = 0;
>  	}
>  
> -	update_process_times(user_mode(regs));
> +	update_process_times(ticks, user_mode(regs));

Which is then fed to update_process_times() and subsequently to
account_process_ticks().

IOW, any CPU's tick handler which does not actually advance jiffies is
going to account ZERO ticks...
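
Condensed into a runnable toy (user space, made-up names such as
toy_tick() and TICK_CPU, not the real kernel code or its locking), the
accounting flow created by this patch looks roughly like this:

/* Toy model of the modified path: only the designated tick_cpu ever
 * advances "jiffies", so every other CPU's tick handler reports zero
 * ticks, and zero ticks is what ends up being accounted.
 */
#include <stdio.h>

#define NR_CPUS		4
#define TICK_CPU	0

static unsigned long accounted[NR_CPUS];

/* Stand-in for tick_do_update_jiffies64(): at most one caller per
 * period does the jiffies update and sees a non-zero tick count.
 */
static unsigned long toy_update_jiffies(int is_tick_cpu)
{
	return is_tick_cpu ? 1 : 0;
}

/* Stand-in for the per-CPU tick handler with this patch applied */
static void toy_tick(int cpu)
{
	unsigned long ticks = toy_update_jiffies(cpu == TICK_CPU);

	/* update_process_times(ticks, ...) -> cputime accounting */
	accounted[cpu] += ticks;
}

int main(void)
{
	for (int period = 0; period < 100; period++)
		for (int cpu = 0; cpu < NR_CPUS; cpu++)
			toy_tick(cpu);

	for (int cpu = 0; cpu < NR_CPUS; cpu++)
		printf("CPU%d accounted %lu ticks (expected 100)\n",
		       cpu, accounted[cpu]);
	return 0;
}

That prints 100 accounted ticks for CPU0 and zero for every other CPU:
the jiffies update happens once globally per period, but the tick
accounting is per CPU.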

Seriously?

Thanks,

        tglx
