Message-ID: <55C8C6B7.9010707@arm.com>
Date:	Mon, 10 Aug 2015 16:43:51 +0100
From:	Juri Lelli <juri.lelli@....com>
To:	Frederic Weisbecker <fweisbec@...il.com>,
	Peter Zijlstra <peterz@...radead.org>
CC:	LKML <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Christoph Lameter <cl@...ux.com>,
	Ingo Molnar <mingo@...nel.org>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 07/10] sched: Migrate sched to use new tick dependency
 mask model

Hi,

On 10/08/15 16:29, Frederic Weisbecker wrote:
> On Mon, Aug 10, 2015 at 05:11:51PM +0200, Peter Zijlstra wrote:
>> On Mon, Aug 10, 2015 at 04:28:47PM +0200, Peter Zijlstra wrote:
>>> On Mon, Aug 10, 2015 at 04:16:58PM +0200, Frederic Weisbecker wrote:
>>>
>>>> Btw, I've considered relying on hrtick many times, but everyone seems to
>>>> say it has a lot of overhead, especially due to clock reprogramming on
>>>> schedule() calls.
>>>
>>> Yeah, I have some vague ideas of how to take out much of that overhead
>>> (tglx will launch frozen sharks at me, I suspect), but we cannot get
>>> around the cost of actually having to program the hardware, and that
>>> is still significant on many machines.
>>>
>>> Supposedly machines with TSC deadline are better, but I've not tried
>>> to benchmark that.
>>
>> Basically something along these lines.. which avoids a whole bunch of
>> hrtimer stuff.
>>
>> But without fast hardware it's all still pointless.
>>
>> diff --git a/include/linux/hrtimer.h b/include/linux/hrtimer.h
>> index 76dd4f0da5ca..c279950cb8c3 100644
>> --- a/include/linux/hrtimer.h
>> +++ b/include/linux/hrtimer.h
>> @@ -200,6 +200,7 @@ struct hrtimer_cpu_base {
>>  	unsigned int			nr_retries;
>>  	unsigned int			nr_hangs;
>>  	unsigned int			max_hang_time;
>> +	ktime_t				expires_sched;
>>  #endif
>>  	struct hrtimer_clock_base	clock_base[HRTIMER_MAX_CLOCK_BASES];
>>  } ____cacheline_aligned;
>> diff --git a/kernel/time/hrtimer.c b/kernel/time/hrtimer.c
>> index 5c7ae4b641c4..be9c0a555eaa 100644
>> --- a/kernel/time/hrtimer.c
>> +++ b/kernel/time/hrtimer.c
>> @@ -68,6 +68,7 @@ DEFINE_PER_CPU(struct hrtimer_cpu_base, hrtimer_bases) =
>>  {
>>  	.lock = __RAW_SPIN_LOCK_UNLOCKED(hrtimer_bases.lock),
>>  	.seq = SEQCNT_ZERO(hrtimer_bases.seq),
>> +	.expires_sched = { .tv64 = KTIME_MAX, },
>>  	.clock_base =
>>  	{
>>  		{
>> @@ -460,7 +461,7 @@ static inline void hrtimer_update_next_timer(struct hrtimer_cpu_base *cpu_base,
>>  static ktime_t __hrtimer_get_next_event(struct hrtimer_cpu_base *cpu_base)
>>  {
>>  	struct hrtimer_clock_base *base = cpu_base->clock_base;
>> -	ktime_t expires, expires_next = { .tv64 = KTIME_MAX };
>> +	ktime_t expires, expires_next = cpu_base->expires_sched;
>>  	unsigned int active = cpu_base->active_bases;
>>  
>>  	hrtimer_update_next_timer(cpu_base, NULL);
>> @@ -1289,6 +1290,33 @@ static void __hrtimer_run_queues(struct hrtimer_cpu_base *cpu_base, ktime_t now)
>>  
>>  #ifdef CONFIG_HIGH_RES_TIMERS
>>  
>> +void sched_hrtick_set(u64 ns)
>> +{
>> +	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
>> +	ktime_t expires = ktime_add_ns(ktime_get(), ns);
>> +
>> +	raw_spin_lock(&cpu_base->lock);
>> +	cpu_base->expires_sched = expires;
>> +
>> +	if (expires.tv64 < cpu_base->expires_next.tv64)
>> +		hrtimer_force_reprogram(cpu_base, 0);
>> +
>> +	raw_spin_unlock(&cpu_base->lock);
>> +}
>> +
>> +void sched_hrtick_cancel(void)
>> +{
>> +	struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);
>> +
>> +	raw_spin_lock(&cpu_base->lock);
>> +	/*
>> +	 * If the current event was this sched event, eat the superfluous
>> +	 * interrupt rather than touch the hardware again.
>> +	 */
>> +	cpu_base->expires_sched.tv64 = KTIME_MAX;
>> +	raw_spin_unlock(&cpu_base->lock);
>> +}
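
For illustration only, a minimal sketch of how the scheduler side might sit
on top of the helpers above. The hrtick_start()/hrtick_clear() names mirror
the existing functions in kernel/sched/core.c, but this wiring is hypothetical
and not part of the patch:

	/*
	 * Hypothetical call sites: arm and disarm the per-cpu sched event
	 * through the lazy helpers instead of a dedicated struct hrtimer.
	 */
	static void hrtick_start(struct rq *rq, u64 delay)
	{
		/* Only reprograms the hardware if this deadline expires
		 * before the currently programmed event. */
		sched_hrtick_set(delay);
	}

	static void hrtick_clear(struct rq *rq)
	{
		/* Disarm without touching the hardware; an already
		 * programmed interrupt for this event is simply eaten
		 * when it fires. */
		sched_hrtick_cancel();
	}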
> 
> Well, there might be a cleaner way to do this without tying it to the scheduler
> tick: some sort of hrtimer_cancel_soft() which more generally cancels a timer
> without cancelling the pending interrupt itself. We might still want to keep
> track of that orphaned interrupt, though, so that a later clock reprogramming
> which matches it can reuse it, with a field like cpu_base->expires_interrupt.
> I thought about expires_soft and expires_hard but I think that terminology is
> already taken :-)
> 
> That said, that feature at least wouldn't fit nohz full, which really wants
> to avoid spurious interrupts.
> 
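
To make the idea concrete, here is a rough sketch of what such an
hrtimer_cancel_soft() could look like, following Frederic's description. The
function name, the expires_interrupt field, and the use of the internal
__remove_hrtimer() helper are all hypothetical here, not existing kernel API:

	/*
	 * Hypothetical: logically cancel a timer but leave the already
	 * programmed hardware event alone, remembering it so a later
	 * reprogram to a matching deadline can reuse it.
	 */
	void hrtimer_cancel_soft(struct hrtimer *timer)
	{
		struct hrtimer_cpu_base *cpu_base = this_cpu_ptr(&hrtimer_bases);

		raw_spin_lock(&cpu_base->lock);
		/* Track the interrupt the hardware will still deliver. */
		cpu_base->expires_interrupt = cpu_base->expires_next;
		/* Dequeue the timer without the usual reprogramming step;
		 * the stale interrupt is eaten when it fires. */
		__remove_hrtimer(timer, timer->base, HRTIMER_STATE_INACTIVE, 0);
		raw_spin_unlock(&cpu_base->lock);
	}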

Quite a detailed reply to my naive question :).
Thanks a lot for this, Frederic and Peter!

As far as SCHED_DEADLINE is concerned, I guess the bottom line is
that hrtick only makes sense for sub-millisecond accounting
(and without nohz full).

Best,

- Juri
