Message-ID: <874kg8wzps.ffs@nanos.tec.linutronix.de>
Date:   Wed, 14 Apr 2021 19:19:43 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Marcelo Tosatti <mtosatti@...hat.com>
Cc:     Anna-Maria Behnsen <anna-maria@...utronix.de>,
        linux-kernel@...r.kernel.org,
        Frederic Weisbecker <frederic@...nel.org>,
        Peter Xu <peterx@...hat.com>,
        Nitesh Narayan Lal <nitesh@...hat.com>,
        Alex Belits <abelits@...vell.com>
Subject: Re: [PATCH v2] hrtimer: avoid retrigger_next_event IPI

Marcelo,

On Tue, Apr 13 2021 at 14:04, Marcelo Tosatti wrote:
> Setting the realtime clock triggers an IPI to all CPUs to reprogram
> hrtimers.

s/hrtimers/clock event device/

> However, only realtime and TAI clocks have their offsets updated
> (and therefore potentially require a reprogram).
>
> Check if it only has monotonic active timers, and in that case

boottime != monotonic 

> update the realtime and TAI base offsets remotely, skipping the IPI.

Something like this perhaps:

Instead of sending an IPI unconditionally, check each per CPU hrtimer base
whether it has active timers in the CLOCK_REALTIME and CLOCK_TAI bases. If
that's not the case, update the realtime and TAI base offsets remotely and
skip the IPI. This ensures that any subsequently armed timers on
CLOCK_REALTIME and CLOCK_TAI are evaluated with the correct offsets.

Hmm?

> +#define CLOCK_SET_BASES ((1U << HRTIMER_BASE_REALTIME)|		\

Missing space between ) and |

> +			 (1U << HRTIMER_BASE_REALTIME_SOFT)|	\
> +			 (1U << HRTIMER_BASE_TAI)|		\
> +			 (1U << HRTIMER_BASE_TAI_SOFT))
> +
> +static bool need_reprogram_timer(struct hrtimer_cpu_base *cpu_base)
> +{
> +	unsigned int active = 0;
> +
> +	if (cpu_base->softirq_activated)
> +		return true;
> +
> +	active = cpu_base->active_bases & HRTIMER_ACTIVE_SOFT;
> +
> +	active = active | (cpu_base->active_bases & HRTIMER_ACTIVE_HARD);
> +
> +	if ((active & CLOCK_SET_BASES) == 0)
> +		return false;
> +
> +	return true;

That's a pretty elaborate way of writing:

       return (cpu_base->active_bases & CLOCK_SET_BASES) != 0;
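
For reference, a minimal sketch of what the condensed helper could look
like, keeping the softirq_activated escape hatch from your patch as is
(untested, just to illustrate the point):

static bool need_reprogram_timer(struct hrtimer_cpu_base *cpu_base)
{
	if (cpu_base->softirq_activated)
		return true;

	return (cpu_base->active_bases & CLOCK_SET_BASES) != 0;
}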

> +}
> +
>  /*
>   * Clock realtime was set
>   *
> @@ -885,9 +907,31 @@ static void hrtimer_reprogram(struct hrtimer *timer, bool reprogram)
>  void clock_was_set(void)
>  {
>  #ifdef CONFIG_HIGH_RES_TIMERS
> -	/* Retrigger the CPU local events everywhere */
> -	on_each_cpu(retrigger_next_event, NULL, 1);
> +	cpumask_var_t mask;
> +	int cpu;
> +
> +	if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) {
> +		on_each_cpu(retrigger_next_event, NULL, 1);
> +		goto set_timerfd;
> +	}
> +
> +	/* Avoid interrupting CPUs if possible */
> +	preempt_disable();

That preempt_disable() is only required around the function call; for the
loop, cpus_read_lock() is sufficient.
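
IOW, something along these lines (an untested sketch, reusing the names
from your patch):

	cpus_read_lock();
	for_each_online_cpu(cpu) {
		struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
		unsigned long flags;

		raw_spin_lock_irqsave(&cpu_base->lock, flags);
		if (need_reprogram_timer(cpu_base))
			cpumask_set_cpu(cpu, mask);
		raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
	}

	preempt_disable();
	smp_call_function_many(mask, retrigger_next_event, NULL, 1);
	preempt_enable();
	cpus_read_unlock();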

> +	for_each_online_cpu(cpu) {
> +		unsigned long flags;
> +		struct hrtimer_cpu_base *cpu_base = &per_cpu(hrtimer_bases, cpu);
> +
> +		raw_spin_lock_irqsave(&cpu_base->lock, flags);
> +		if (need_reprogram_timer(cpu_base))
> +			cpumask_set_cpu(cpu, mask);
> +		raw_spin_unlock_irqrestore(&cpu_base->lock, flags);
> +	}
> +
> +	smp_call_function_many(mask, retrigger_next_event, NULL, 1);
> +	preempt_enable();
> +	free_cpumask_var(mask);
>  #endif
> +set_timerfd:

My brain compiler tells me that with CONFIG_HIGH_RES_TIMERS=n a real
compiler will emit a warning about a defined but unused label....
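
One way to avoid that (a sketch, not necessarily the prettiest one) is to
keep the label under the same #ifdef so it vanishes together with the goto:

void clock_was_set(void)
{
#ifdef CONFIG_HIGH_RES_TIMERS
	cpumask_var_t mask;
	int cpu;

	if (!zalloc_cpumask_var(&mask, GFP_KERNEL)) {
		on_each_cpu(retrigger_next_event, NULL, 1);
		goto set_timerfd;
	}

	/* ... build the mask and send the IPIs as above ... */

	free_cpumask_var(mask);
set_timerfd:
#endif
	timerfd_clock_was_set();
}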

>  	timerfd_clock_was_set();
>  }

Thanks,

        tglx
