Message-ID: <20071116084741.GA20011@elte.hu>
Date: Fri, 16 Nov 2007 09:47:42 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Avi Kivity <avi@...ranet.com>
Cc: Arjan van de Ven <arjan@...radead.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
akpm@...ux-foundation.org
Subject: [patch] x86: make delay_tsc() preemptible again
* Ingo Molnar <mingo@...e.hu> wrote:
> but that should not be needed in this case. Why doesn't the TSC-using
> delay loop simply poll the CPU it is on and fix up the TSC?
something like the patch below.
Ingo
--------------->
Subject: x86: make delay_tsc() preemptible again
From: Ingo Molnar <mingo@...e.hu>
make delay_tsc() preemptible again: only disable preemption around the
individual CPU-id/TSC reads, re-enable it around the rep_nop() in each
loop iteration, and skip any interval during which we migrated to
another CPU (TSCs are per-CPU and not comparable across CPUs).
Signed-off-by: Ingo Molnar <mingo@...e.hu>
---
arch/x86/lib/delay_32.c | 28 +++++++++++++++++++++++-----
arch/x86/lib/delay_64.c | 30 ++++++++++++++++++++++++------
2 files changed, 47 insertions(+), 11 deletions(-)
Index: linux/arch/x86/lib/delay_32.c
===================================================================
--- linux.orig/arch/x86/lib/delay_32.c
+++ linux/arch/x86/lib/delay_32.c
@@ -38,17 +38,35 @@ static void delay_loop(unsigned long loo
:"0" (loops));
}
-/* TSC based delay: */
+/*
+ * TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
static void delay_tsc(unsigned long loops)
{
- unsigned long bclock, now;
+ unsigned long prev, now;
+ long left = loops;
+ int prev_cpu, cpu;
- preempt_disable(); /* TSC's are per-cpu */
- rdtscl(bclock);
+ preempt_disable();
+ rdtscl(prev);
do {
+ prev_cpu = smp_processor_id();
rep_nop();
+ preempt_enable();
+
+ preempt_disable();
+ cpu = smp_processor_id();
rdtscl(now);
- } while ((now-bclock) < loops);
+ /*
+ * If we preempted we skip this small amount of time:
+ */
+ if (prev_cpu != cpu)
+ prev = now;
+ left -= now - prev;
+ prev = now;
+ } while (left > 0);
preempt_enable();
}
Index: linux/arch/x86/lib/delay_64.c
===================================================================
--- linux.orig/arch/x86/lib/delay_64.c
+++ linux/arch/x86/lib/delay_64.c
@@ -26,17 +26,35 @@ int read_current_timer(unsigned long *ti
 	return 0;
 }
 
+/*
+ * TSC based delay:
+ *
+ * We are careful about preemption as TSC's are per-CPU.
+ */
 void __delay(unsigned long loops)
 {
-	unsigned bclock, now;
+	unsigned long prev, now;
+	long left = loops;
+	int prev_cpu, cpu;
 
-	preempt_disable();		/* TSC's are pre-cpu */
-	rdtscl(bclock);
+	preempt_disable();
+	rdtscl(prev);
 	do {
-		rep_nop();
+		prev_cpu = smp_processor_id();
+		rep_nop();
+		preempt_enable();
+
+		preempt_disable();
+		cpu = smp_processor_id();
 		rdtscl(now);
-	}
-	while ((now-bclock) < loops);
+		/*
+		 * If we preempted we skip this small amount of time:
+		 */
+		if (prev_cpu != cpu)
+			prev = now;
+		left -= now - prev;
+		prev = now;
+	} while (left > 0);
 	preempt_enable();
 }
 EXPORT_SYMBOL(__delay);
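
For illustration only, the same accounting idea as a stand-alone
user-space sketch (an assumption-laden demo, not kernel code:
sched_getcpu() and the __rdtsc() intrinsic stand in for
smp_processor_id()/rdtscl(), there is no real preempt_disable() in user
space, and the function name and loop count are made up):

/*
 * User-space sketch only: the CPU-id/TSC reads are not atomic here,
 * unlike in the kernel where preemption is disabled around them.
 */
#define _GNU_SOURCE
#include <sched.h>		/* sched_getcpu() */
#include <x86intrin.h>		/* __rdtsc() */
#include <stdio.h>

static void delay_tsc_sketch(unsigned long loops)
{
	unsigned long prev = __rdtsc(), now;
	long left = loops;
	int prev_cpu, cpu;

	do {
		prev_cpu = sched_getcpu();
		/* in the kernel this is rep_nop() with preemption enabled */
		now = __rdtsc();
		cpu = sched_getcpu();

		/*
		 * If we moved to another CPU, its TSC is not comparable
		 * with the previous reading: drop that interval.
		 */
		if (cpu != prev_cpu)
			prev = now;

		left -= now - prev;
		prev = now;
	} while (left > 0);
}

int main(void)
{
	delay_tsc_sketch(1000 * 1000);	/* burn roughly 1M TSC cycles */
	puts("done");
	return 0;
}

The point is only the bookkeeping: when the task is observed on a
different CPU, prev is reset to now, so that (non-comparable) interval
contributes nothing to left and the delay simply continues on the new
CPU.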