Message-ID: <20150729140759.GB17367@nazgul.tnic>
Date:	Wed, 29 Jul 2015 16:07:59 +0200
From:	Borislav Petkov <bp@...en8.de>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	jasonbaron0@...il.com, luto@...capital.net, tglx@...utronix.de,
	rostedt@...dmis.org, will.deacon@....com, liuj97@...il.com,
	rabin@....in, ralf@...ux-mips.org, ddaney@...iumnetworks.com,
	benh@...nel.crashing.org, michael@...erman.id.au,
	heiko.carstens@...ibm.com, davem@...emloft.net, vbabka@...e.cz
Subject: Re: [PATCH -v2 8/8] x86, tsc: Employ static_branch_likely()

On Tue, Jul 28, 2015 at 03:21:03PM +0200, Peter Zijlstra wrote:
> Because of the static_key restrictions we had to take an unconditional
> jump for the most likely case, causing $I bloat.
> 
> Rewrite to use the new primitives.
> 
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  arch/x86/kernel/tsc.c |   22 ++++++++++------------
>  1 file changed, 10 insertions(+), 12 deletions(-)
> 
> --- a/arch/x86/kernel/tsc.c
> +++ b/arch/x86/kernel/tsc.c
> @@ -38,7 +38,7 @@ static int __read_mostly tsc_unstable;
>     erroneous rdtsc usage on !cpu_has_tsc processors */
>  static int __read_mostly tsc_disabled = -1;
>  
> -static struct static_key __use_tsc = STATIC_KEY_INIT;
> +static DEFINE_STATIC_KEY_FALSE(__use_tsc);
>  
>  int tsc_clocksource_reliable;
>  
> @@ -274,7 +274,12 @@ static void set_cyc2ns_scale(unsigned lo
>   */
>  u64 native_sched_clock(void)
>  {
> -	u64 tsc_now;
> +	if (static_branch_likely(&__use_tsc)) {
> +		u64 tsc_now = rdtsc();
> +
> +		/* return the value in ns */
> +		return cycles_2_ns(tsc_now);
> +	}
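
For anyone not following the jump-label rework closely, the pattern the patch
switches to looks roughly like the sketch below. This is illustrative only --
my_key, my_fast_read(), my_slow_read() and my_hw_is_sane() are made-up names,
and the tsc.c hunk above is the real usage:

	#include <linux/init.h>
	#include <linux/jump_label.h>

	/* key starts out false/disabled */
	static DEFINE_STATIC_KEY_FALSE(my_key);

	static u64 my_read(void)
	{
		if (static_branch_likely(&my_key)) {
			/*
			 * Likely path, laid out inline: the branch site is a
			 * jmp to the slow path until the key gets enabled,
			 * after which it is patched to a NOP.
			 */
			return my_fast_read();
		}

		/* unlikely path, emitted out of line */
		return my_slow_read();
	}

	static int __init my_init(void)
	{
		if (my_hw_is_sane())	/* made-up predicate */
			static_branch_enable(&my_key);
		return 0;
	}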

Hallelujah, this asm finally looks good:

native_sched_clock:
	pushq	%rbp	#
	movq	%rsp, %rbp	#,
	andq	$-16, %rsp	#,
#APP
# 36 "./arch/x86/include/asm/jump_label.h" 1
	1:.byte 0xe9
	 .long .L121 - 2f	#
	2:
	.pushsection __jump_table,  "aw" 
	 .balign 8 
	 .quad 1b, .L121, __use_tsc+1 	#,
	.popsection 

# 0 "" 2
# 124 "./arch/x86/include/asm/msr.h" 1
	rdtsc
# 0 "" 2
#NO_APP

	...

        leave
        ret
.L121:
        imulq   $1000000, jiffies_64(%rip), %rdx        #, jiffies_64, D.28480
        movabsq $-4294667296000000, %rax        #, tmp135
        leave
        addq    %rdx, %rax      # D.28480, D.28480
        ret
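
In case the .pushsection blob reads like line noise: the three .quads per
entry are simply a struct jump_entry, roughly this layout at the time of this
series (quoting from memory, so double-check jump_label.h in your tree). The
"+1" on __use_tsc is the low bit of the key field, which is where the new
interface stashes the branch polarity:

	/* include/linux/jump_label.h; jump_label_t is u64 on x86-64 */
	struct jump_entry {
		jump_label_t code;	/* 1b:    address of the 5-byte jmp/nop */
		jump_label_t target;	/* .L121: the out-of-line jiffies path  */
		jump_label_t key;	/* &__use_tsc, low bit = branch type    */
	};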

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
