Date:	Sun, 25 May 2008 14:44:39 +0200 (CEST)
From:	Thomas Gleixner <tglx@...utronix.de>
To:	Steven Rostedt <rostedt@...dmis.org>
cc:	LKML <linux-kernel@...r.kernel.org>,
	linux-rt-users <linux-rt-users@...r.kernel.org>, akpm@...l.org,
	Ingo Molnar <mingo@...e.hu>,
	Clark Williams <clark.williams@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	"Luis Claudio R. Goncalves" <lclaudio@...g.org>,
	Gregory Haskins <ghaskins@...ell.com>, Andi Kleen <ak@...e.de>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] x86: enable preemption in delay

On Sat, 24 May 2008, Steven Rostedt wrote:
> The RT team has been searching for a nasty latency. This latency shows
> up out of the blue and has been seen to be as big as 5ms!
>
> @@ -44,12 +44,33 @@ static void delay_loop(unsigned long loo
>  static void delay_tsc(unsigned long loops)
>  {
>  	unsigned long bclock, now;
> +	int cpu;
> 
> -	preempt_disable();		/* TSC's are per-cpu */
> +	preempt_disable();
> +	cpu = smp_processor_id();
>  	rdtscl(bclock);
>  	do {
>  		rep_nop();
>  		rdtscl(now);
> +		/* Allow RT tasks to run */
> +		preempt_enable();
> +		preempt_disable();
> +		/*
> +		 * It is possible that we moved to another CPU,
> +		 * and since TSC's are per-cpu we need to
> +		 * calculate that. The delay must guarantee that
> +		 * we wait "at least" the amount of time. Being
> +		 * moved to another CPU could make the wait longer
> +		 * but we just need to make sure we waited long
> +		 * enough. Rebalance the counter for this CPU.
> +		 */
> +		if (unlikely(cpu != smp_processor_id())) {

Eeek, once you have migrated, this branch fires on every subsequent
iteration because cpu still holds the old CPU id. You need to update
cpu here.

> +			if ((now-bclock) >= loops)
> +				break;

Also this is really dangerous with unsynchronized TSCs. You might get
migrated and return immediately because the TSC on the other CPU is
far ahead.

What you really want is something like the patch below, but we should
reuse the sched_clock_cpu() thingy to make that simpler. Looking into
that right now.

Thanks,
	tglx

diff --git a/arch/x86/lib/delay_32.c b/arch/x86/lib/delay_32.c
index 4535e6d..66a3c32 100644
--- a/arch/x86/lib/delay_32.c
+++ b/arch/x86/lib/delay_32.c
@@ -40,17 +40,51 @@ static void delay_loop(unsigned long loops)
 		:"0" (loops));
 }
 
+/*
+ * 5 usec on a 1GHZ machine. Not necessarily correct, but not too long
+ * either.
+ */
+#define TSC_MIGRATE_COUNT 5000
+
 /* TSC based delay: */
 static void delay_tsc(unsigned long loops)
 {
 	unsigned long bclock, now;
+	int cpu;
 
-	preempt_disable();		/* TSC's are per-cpu */
+	preempt_disable();
+	cpu = smp_processor_id();
 	rdtscl(bclock);
 	do {
 		rep_nop();
-		rdtscl(now);
-	} while ((now-bclock) < loops);
+
+		/* Allow RT tasks to run */
+		preempt_enable();
+		preempt_disable();
+
+		/*
+		 * It is possible that we moved to another CPU, and
+		 * since TSC's are per-cpu we need to calculate
+		 * that. The delay must guarantee that we wait "at
+		 * least" the amount of time. Being moved to another
+		 * CPU could make the wait longer but we just need to
+		 * make sure we waited long enough. Rebalance the
+		 * counter for this CPU.
+		 */
+		if (unlikely(cpu != smp_processor_id())) {
+			if (loops <= TSC_MIGRATE_COUNT)
+				break;
+			cpu = smp_processor_id();
+			rdtscl(bclock);
+			loops -= TSC_MIGRATE_COUNT;
+		} else {
+			rdtscl(now);
+			if ((now - bclock) >= loops)
+				break;
+			loops -= (now - bclock);
+			bclock = now;
+		}
+	} while (loops > 0);
 	preempt_enable();
 }