Message-ID: <alpine.DEB.2.21.1803141527300.2481@nanos.tec.linutronix.de>
Date: Wed, 14 Mar 2018 15:48:10 +0100 (CET)
From: Thomas Gleixner <tglx@...utronix.de>
To: jason.vas.dias@...il.com
cc: linux-kernel@...r.kernel.org, x86@...nel.org, mingo@...nel.org,
peterz@...radead.org, andi@...stfloor.org
Subject: Re: [PATCH v4.16-rc5 2/3] x86/vdso: on Intel, VDSO should handle
CLOCK_MONOTONIC_RAW
On Wed, 14 Mar 2018, jason.vas.dias@...il.com wrote:
Again: Read and comply with Documentation/process/ and fix the complaints
of checkpatch.pl.
> diff --git a/arch/x86/entry/vdso/vclock_gettime.c b/arch/x86/entry/vdso/vclock_gettime.c
> index fbc7371..2c46675 100644
> --- a/arch/x86/entry/vdso/vclock_gettime.c
> +++ b/arch/x86/entry/vdso/vclock_gettime.c
> @@ -184,10 +184,9 @@ notrace static u64 vread_tsc(void)
>
> notrace static u64 vread_tsc_raw(void)
> {
> - u64 tsc
> + u64 tsc = (gtod->has_rdtscp ? rdtscp((void*)0) : rdtsc_ordered())
> , last = gtod->raw_cycle_last;
Aside from the totally broken coding style, including the usage of (void*)0:
Did you ever benchmark rdtscp() against rdtsc_ordered()?
If so, then the results want to be documented in the changelog and this
change only makes sense when rdtscp() is actually faster.
Please document how you measured that so others can actually run the same
tests and make their own judgement.
If it turns out that rdtscp() is faster, which I doubt, then the
conditional is the wrong way to do that. It wants to be patched at boot
time, which completely avoids conditionals.
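For reference, one way such boot-time patching is commonly done in arch/x86 is via static_cpu_has(), which the alternatives mechanism rewrites at boot into a plain jump, so the hot path carries no runtime test. A hypothetical sketch of how the function under review could use it (kernel context, not compilable standalone; gtod and the rdtscp() helper here mirror the patch being reviewed, and the surrounding function body is elided):

```c
notrace static u64 vread_tsc_raw(void)
{
	u64 tsc, last = gtod->raw_cycle_last;

	/* static_cpu_has() is patched by alternatives at boot, so this
	 * compiles down to a direct branch with no runtime conditional.
	 */
	if (static_cpu_has(X86_FEATURE_RDTSCP))
		tsc = rdtscp(NULL);
	else
		tsc = rdtsc_ordered();
	...
}
```

Whether this pattern works in the vDSO, which runs in userspace and cannot use all kernel facilities directly, is a separate question the patch author would need to address.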
Thanks,
tglx