Message-ID: <alpine.LFD.2.00.0903171017590.29264@localhost.localdomain>
Date: Tue, 17 Mar 2009 11:22:24 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: Arun R Bharadwaj <arun@...ux.vnet.ibm.com>
cc: linux-kernel@...r.kernel.org, linux-pm@...ts.linux-foundation.org,
a.p.zijlstra@...llo.nl, ego@...ibm.com, mingo@...e.hu,
andi@...stfloor.org, venkatesh.pallipadi@...el.com,
vatsa@...ux.vnet.ibm.com, arjan@...radead.org,
svaidy@...ux.vnet.ibm.com
Subject: Re: [v3 PATCH 4/4] timers: logic to move non pinned timers
On Mon, 16 Mar 2009, Arun R Bharadwaj wrote:
> @@ -627,6 +628,16 @@ __mod_timer(struct timer_list *timer, un
>
> new_base = __get_cpu_var(tvec_bases);
>
> + current_cpu = smp_processor_id();
> + preferred_cpu = get_nohz_load_balancer();
> + if (get_sysctl_timer_migration() && idle_cpu(current_cpu) &&
> + !pinned && preferred_cpu != -1) {
> + new_base = per_cpu(tvec_bases, preferred_cpu);
> + timer_set_base(timer, new_base);
> + timer->expires = expires;
> + internal_add_timer(new_base, timer);
> + goto out_unlock;
> + }

Err. This change breaks the timer->base logic. Why can't it just
select the base and use the existing code?
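
Something like this (untested sketch, using the helpers which the
patch already introduces) would pick the base up front and leave the
timer->base handling to the existing code:

	cpu = smp_processor_id();
	if (get_sysctl_timer_migration() && !pinned && idle_cpu(cpu)) {
		int preferred_cpu = get_nohz_load_balancer();

		if (preferred_cpu != -1)
			cpu = preferred_cpu;
	}
	new_base = per_cpu(tvec_bases, cpu);

	/*
	 * The existing "if (base != new_base)" code below takes care
	 * of locking and timer_set_base(), so nothing is bypassed.
	 */
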
> @@ -198,8 +200,16 @@ switch_hrtimer_base(struct hrtimer *time
> {
> struct hrtimer_clock_base *new_base;
> struct hrtimer_cpu_base *new_cpu_base;
> + int current_cpu, preferred_cpu;
> +
> + current_cpu = smp_processor_id();
> + preferred_cpu = get_nohz_load_balancer();
> + if (get_sysctl_timer_migration() && !pinned && preferred_cpu != -1
> + && idle_cpu(current_cpu))
> + new_cpu_base = &per_cpu(hrtimer_bases, preferred_cpu);
> + else
> + new_cpu_base = &__get_cpu_var(hrtimer_bases);
>
> - new_cpu_base = &__get_cpu_var(hrtimer_bases);
> new_base = &new_cpu_base->clock_base[base->index];

Hmm. This can lead to high latencies when you enqueue the timer on
the other CPU, simply because we cannot reprogram the timer hardware
of the other CPU in the CONFIG_HIGH_RES_TIMERS=y case.

Let's assume we are on CPU0 and try to enqueue the timer on CPU1,
where the next timer expiry is 5ms away. The timer which we enqueue
is due in 500us, so you introduce 4.5ms of latency.
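
To avoid that, the migration path would at least need a check against
the next event of the target base, i.e. something like this
hypothetical (and untested) guard for the CONFIG_HIGH_RES_TIMERS=y
case:

#ifdef CONFIG_HIGH_RES_TIMERS
	/*
	 * Hypothetical guard: keep the timer local when it is due
	 * before the next event of the target CPU, as we cannot
	 * reprogram the remote timer hardware from here.
	 */
	if (new_cpu_base != &__get_cpu_var(hrtimer_bases) &&
	    hrtimer_get_expires(timer).tv64 <
	    new_cpu_base->expires_next.tv64)
		new_cpu_base = &__get_cpu_var(hrtimer_bases);
#endif

Even that check is racy without holding the remote base lock, so the
500us timer can still end up behind the 5ms one.
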
Thanks,
tglx