Message-Id: <1195205942.3059.1.camel@twins>
Date:	Fri, 16 Nov 2007 10:39:02 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Avi Kivity <avi@...ranet.com>,
	Arjan van de Ven <arjan@...radead.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	akpm@...ux-foundation.org
Subject: Re: [patch] x86: make delay_tsc() preemptible again


On Fri, 2007-11-16 at 09:47 +0100, Ingo Molnar wrote:
> * Ingo Molnar <mingo@...e.hu> wrote:
> 
> > but that should not be needed in this case. Why doesn't the TSC-using 
> > delay loop simply poll the CPU it is on and fix up the TSC?
> 
> something like the patch below.
> 
> 	Ingo
> 
> --------------->
> Subject: x86: make delay_tsc() preemptible again
> From: Ingo Molnar <mingo@...e.hu>
> 
> make delay_tsc() preemptible again.
> 
> Signed-off-by: Ingo Molnar <mingo@...e.hu>
> ---
>  arch/x86/lib/delay_32.c |   28 +++++++++++++++++++++++-----
>  arch/x86/lib/delay_64.c |   30 ++++++++++++++++++++++++------
>  2 files changed, 47 insertions(+), 11 deletions(-)
> 
> Index: linux/arch/x86/lib/delay_32.c
> ===================================================================
> --- linux.orig/arch/x86/lib/delay_32.c
> +++ linux/arch/x86/lib/delay_32.c
> @@ -38,17 +38,35 @@ static void delay_loop(unsigned long loo
>  		:"0" (loops));
>  }
>  
> -/* TSC based delay: */
> +/*
> + * TSC based delay:
> + *
> + * We are careful about preemption as TSC's are per-CPU.
> + */
>  static void delay_tsc(unsigned long loops)
>  {
> -	unsigned long bclock, now;
> +	unsigned long prev, now;
> +	long left = loops;
> +	int prev_cpu, cpu;
>  
> -	preempt_disable();		/* TSC's are per-cpu */
> -	rdtscl(bclock);
> +	preempt_disable();
> +	rdtscl(prev);
>  	do {
> +		prev_cpu = smp_processor_id();
>  		rep_nop();
> +		preempt_enable();

Why not have the rep_nop() here, between the enable and disable?

> +
> +		preempt_disable();
> +		cpu = smp_processor_id();
>  		rdtscl(now);
> -	} while ((now-bclock) < loops);
> +		/*
> +		 * If we preempted we skip this small amount of time:
                           ^ migrated, perhaps?

> +		 */
> +		if (prev_cpu != cpu)
> +			prev = now;
> +		left -= now - prev;
> +		prev = now;
> +	} while (left > 0);
>  	preempt_enable();
>  }


Otherwise, looks like a very nice patch :-)


