Message-ID: <4A084426.2080604@redhat.com>
Date:	Mon, 11 May 2009 18:28:38 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Ingo Molnar <mingo@...e.hu>
CC:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Mark Langsdorf <mark.langsdorf@....com>,
	Joerg Roedel <joerg.roedel@....com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH][KVM][retry 1] Add support for Pause Filtering to AMD
 SVM

Ingo Molnar wrote:
> * Avi Kivity <avi@...hat.com> wrote:
>
>   
>>> I.e. the 3000 cycles value itself could be eliminated as well. 
>>> (with just a common-sense max of say 100,000 cycles enforced)
>>>       
>> Yeah, though that has a much smaller effect as it's only 
>> responsible for a few microseconds of spinning.
>>     
>
> 3000 cycles would be 1-2 usecs. Isn't the VM exit+entry cost still in 
> that range?
>   

It's 3000 executions of rep nop, so you need to account for the entire 
spinlock loop body.

The Linux spinlock wait loop is

             "1:\t"
             "cmpl %0, %2\n\t"
             "je 2f\n\t"
             "rep ; nop\n\t"
             "movzwl %1, %2\n\t"
             /* don't need lfence here, because loads are in-order */
             "jmp 1b\n"

That's 5 instructions per iteration, maybe 2-3 cycles, not counting any 
special rep nop overhead.  Mark, any idea what the spin time is?
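A rough, assumed-numbers estimate (not a measurement, and ignoring any extra 
rep nop latency) of what 3000 iterations of that loop would cost:

    /* Back-of-envelope only: spin time before a pause-filter exit
     * ~= filter count * cycles per loop iteration / clock frequency.
     * The per-iteration cost and the 3 GHz clock are assumptions.
     */
    #include <stdio.h>

    int main(void)
    {
            const double cycles_per_iter = 3.0;      /* assumed loop cost */
            const double pause_filter_count = 3000.0;
            const double ghz = 3.0;                  /* assumed clock */

            double cycles = pause_filter_count * cycles_per_iter;
            printf("~%.0f cycles, ~%.1f usecs at %.0f GHz\n",
                   cycles, cycles / (ghz * 1000.0), ghz);
            return 0;
    }

Even with those conservative assumptions that's around 3 usecs of spinning 
before the exit fires, and a higher per-iteration rep nop cost only pushes 
it further past the exit+entry cost.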

VM entry/exit is around 1us on the newer processors.
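For completeness, the threshold being debated is the SVM pause filter count 
that gets programmed into the guest's VMCB; the CPU raises a #VMEXIT once 
that many PAUSE executions have been observed.  The sketch below is a 
hypothetical, self-contained mock of that setup step: the struct and the 
intercept bit position are illustrative stand-ins, not the actual kvm 
definitions from arch/x86/include/asm/svm.h.

    /* Hypothetical mock of the VMCB fields involved in pause filtering.
     * Real field layouts live in the SVM spec and in kvm's svm headers.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define MOCK_INTERCEPT_PAUSE_BIT 44          /* illustrative only */

    struct mock_vmcb_control_area {
            uint64_t intercept;                  /* intercept bit vector */
            uint16_t pause_filter_count;         /* PAUSEs before #VMEXIT */
    };

    int main(void)
    {
            struct mock_vmcb_control_area control = { 0 };

            control.pause_filter_count = 3000;   /* the value under discussion */
            control.intercept |= 1ULL << MOCK_INTERCEPT_PAUSE_BIT;

            printf("pause filter count = %u\n", control.pause_filter_count);
            return 0;
    }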

-- 
error compiling committee.c: too many arguments to function
