Open Source and information security mailing list archives
 
Date:	Thu, 1 Feb 2007 12:36:05 +0100
From:	Andi Kleen <ak@...e.de>
To:	jbohac@...e.cz
Cc:	linux-kernel@...r.kernel.org, Vojtech Pavlik <vojtech@...e.cz>,
	ssouhlal@...ebsd.org, arjan@...radead.org, tglx@...utronix.de,
	johnstul@...ibm.com, zippel@...ux-m68k.org, andrea@...e.de
Subject: Re: [patch 9/9] Make use of the Master Timer

On Thursday 01 February 2007 11:00, jbohac@...e.cz wrote:

> +		case VXTIME_TSC:
> +			rdtscll(tsc);

Where is the CPU synchronization? 

> +	cpu = smp_processor_id();
> +	rdtscll(t);

No synchronization here either. The serializing read is slower, but it's needed.
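A minimal sketch of what the synchronized read could look like (hypothetical helper, not from the patch): a plain RDTSC can execute speculatively, out of order with the surrounding loads of the time variables, so a barrier has to order it.

```c
#include <stdint.h>
#include <x86intrin.h>

/* Sketch: an LFENCE before RDTSC waits for earlier instructions to
 * complete, so the timestamp is ordered with the seqlock-protected
 * loads around it. RDTSCP (the VXTIME_TSCP path) already orders
 * itself against prior instructions, which is why it needs no fence. */
static inline uint64_t rdtsc_synchronized(void)
{
	_mm_lfence();		/* drain earlier instructions first */
	return __rdtsc();	/* then read the time stamp counter */
}
```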

>  unsigned long long sched_clock(void)
>  {
> -	unsigned long a = 0;
> -
> -	rdtscll(a);
> -	return cycles_2_ns(a);
> +	return monotonic_clock();
>  }

This is overkill because sched_clock() doesn't need a globally monotonic
clock, per CPU monotonic is enough. The old version was fine.
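The old version cost one RDTSC plus a multiply and a shift. A rough sketch of that cycles_2_ns scaling (constants assumed, mirroring the fixed-point approach the kernel used at the time):

```c
#include <stdint.h>

#define NS_SCALE 10	/* fixed-point shift, as in the kernel's cyc2ns code */

static uint64_t cyc2ns_scale;	/* a per-CPU variable in the real kernel */

/* Precompute nanoseconds-per-cycle in fixed point from the CPU clock
 * in kHz: (10^6 << NS_SCALE) / khz. */
static void set_cyc2ns_scale(uint64_t cpu_khz)
{
	cyc2ns_scale = (1000000ULL << NS_SCALE) / cpu_khz;
}

/* Convert a TSC value to nanoseconds with one multiply and one shift.
 * Per-CPU monotonic, which is all sched_clock() needs. */
static inline uint64_t cycles_2_ns(uint64_t cyc)
{
	return (cyc * cyc2ns_scale) >> NS_SCALE;
}
```

At 1 GHz (cpu_khz = 1000000) the scale comes out to 1024, so 1000 cycles maps back to 1000 ns.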


> +static __always_inline void do_vgettimeofday(struct timeval * tv, u64 tsc, int cpu)
> +{
> +	unsigned int sec;
> +	s64 nsec;
>  
> -	do {
> -		sequence = read_seqbegin(&__xtime_lock);
> -		
> -		sec = __xtime.tv_sec;
> -		usec = __xtime.tv_nsec / 1000;
> -
> -			usec += ((readl((void __iomem *)
> -				   fix_to_virt(VSYSCALL_HPET) + 0xf0) -
> -				  __vxtime.last) * __vxtime.quot) >> 32;
> -	} while (read_seqretry(&__xtime_lock, sequence));
> +	sec = __xtime.tv_sec;
> +	nsec = __xtime.tv_nsec;
> +	nsec +=	max(__do_gettimeoffset(tsc, cpu), __vxtime.drift);
>  
> -	tv->tv_sec = sec + usec / 1000000;
> -	tv->tv_usec = usec % 1000000;
> +	sec += nsec / NSEC_PER_SEC;
> +	nsec %= NSEC_PER_SEC;

Using while() here is probably faster (this was done in the vdso patchkit,
where gettimeofday got mysteriously faster). Modulo and division are slow,
even with constant divisors when the constants are large.

You might want to use the algorithm from 
ftp://one.firstfloor.org/pub/ak/x86_64/quilt/patches/vdso
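Roughly, the division-free version looks like this (a sketch only, assuming NSEC_PER_SEC and a signed nsec as in the patch; since nsec overflows a second by at most a small amount, the loop runs only a couple of iterations):

```c
#include <stdint.h>

#define NSEC_PER_SEC 1000000000LL

/* Normalize (sec, nsec) so that 0 <= nsec < NSEC_PER_SEC, using
 * subtract loops instead of 64-bit div/mod. */
static void normalize_time(unsigned int *sec, int64_t *nsec)
{
	while (*nsec >= NSEC_PER_SEC) {
		(*sec)++;
		*nsec -= NSEC_PER_SEC;
	}
	while (*nsec < 0) {
		(*sec)--;
		*nsec += NSEC_PER_SEC;
	}
}
```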

> +	if (nsec < 0) {
> +		--sec;
> +		nsec += NSEC_PER_SEC;
> +	}
> +	tv->tv_sec = sec;
> +	tv->tv_usec = nsec / NSEC_PER_USEC;

Similar. 

>  }
>  
>  /* RED-PEN may want to readd seq locking, but then the variable should be write-once. */
> @@ -107,10 +118,39 @@ static __always_inline long time_syscall
>  
>  int __vsyscall(0) vgettimeofday(struct timeval * tv, struct timezone * tz)
>  {
> -	if (!__sysctl_vsyscall)
> +	int cpu = 0;
> +	u64 tsc;
> +	unsigned long seq;
> +	int do_syscall = !__sysctl_vsyscall;
> +
> +	if (tv && !do_syscall)
> +		switch (__vxtime.mode) {
> +			case VXTIME_TSC:
> +			case VXTIME_TSCP:
> +				do {
> +					seq = read_seqbegin(&__xtime_lock);
> +
> +					if (__vxtime.mode == VXTIME_TSC)
> +						rdtscll(tsc);
> +					else {
> +						rdtscpll(tsc, cpu);
> +						cpu &= 0xfff;
> +					}
> +
> +					if (unlikely(__vxtime.cpu[cpu].tsc_invalid))
> +						do_syscall = 1;
> +					else
> +						do_vgettimeofday(tv, tsc, cpu);
> +
> +				} while (read_seqretry(&__xtime_lock, seq));
> +				break;
> +			default:
> +				do_syscall = 1;

Why do you not set __sysctl_vsyscall correctly for the mode at initialization?
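For illustration, a hypothetical init-time check (names assumed) instead of testing the mode on every vgettimeofday() call:

```c
/* Hypothetical sketch: decide once at timer initialization whether
 * the userspace fast path works for the chosen mode, rather than
 * falling back to do_syscall in the default: branch at runtime. */
enum vxtime_mode { VXTIME_TSC, VXTIME_TSCP, VXTIME_HPET };

static int sysctl_vsyscall;	/* stands in for __sysctl_vsyscall */

static void vsyscall_set_mode(enum vxtime_mode mode)
{
	/* only the TSC-based modes have a vsyscall fast path here */
	sysctl_vsyscall = (mode == VXTIME_TSC || mode == VXTIME_TSCP);
}
```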


-Andi