Date:	Mon, 11 May 2009 16:38:44 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Mark Langsdorf <mark.langsdorf@....com>
Cc:	joerg.roedel@....com, linux-kernel@...r.kernel.org,
	Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH][KVM] Add support for Pause Filtering to AMD SVM

On Tue, 2009-05-05 at 09:09 -0500, Mark Langsdorf wrote:
> commit 6f15c833f56267baf5abdd0fbc90a81489573053
> Author: Mark Langsdorf <mlangsdo@...pnow.amd.com>
> Date:   Mon May 4 15:02:38 2009 -0500
> 
>     New AMD processors will support the Pause Filter Feature.
>     This feature creates a new field in the VMCB called Pause
>     Filter Count.  If Pause Filter Count is greater than 0 and
>     intercepting PAUSEs is enabled, the processor will increment
>     an internal counter when a PAUSE instruction occurs instead
>     of intercepting.  When the internal counter reaches the
>     Pause Filter Count value, a PAUSE intercept will occur.
>     
>     This feature can be used to detect contended spinlocks,
>     especially when the lock-holding VCPU is not scheduled.
>     Rescheduling another VCPU prevents the VCPU seeking the
>     lock from wasting its quantum by spinning idly.
>     
>     Experimental results show that most spinlocks are held
>     for less than 1000 PAUSE cycles or for more than a few
>     thousand.  Default the Pause Filter Count to 3000 to
>     detect the contended spinlocks.
>     
>     Processor support for this feature is indicated by a CPUID
>     bit.
>     
>     On a 24-core system running 4 guests, each with 16 VCPUs,
>     this patch improved the overall performance of each guest's
>     32-job kernbench run by approximately 1%.  Further
>     performance improvement may be possible with a more
>     sophisticated yield algorithm.
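
For reference, the setup described above amounts to a few lines of
VMCB initialization.  A minimal sketch, assuming kernel SVM naming
(vcpu_svm, vmcb_control_area, INTERCEPT_PAUSE,
X86_FEATURE_PAUSEFILTER); the actual patch may differ:

static void svm_enable_pause_filter(struct vcpu_svm *svm)
{
	struct vmcb_control_area *control = &svm->vmcb->control;

	/* Pause Filter support is advertised by a CPUID bit. */
	if (!boot_cpu_has(X86_FEATURE_PAUSEFILTER))
		return;

	/* With a non-zero count and the PAUSE intercept enabled,
	 * the CPU absorbs PAUSEs into an internal counter and only
	 * takes a PAUSE intercept once the counter reaches the
	 * count, so short, uncontended spins never leave the guest. */
	control->pause_filter_count = 3000;
	control->intercept |= (1ULL << INTERCEPT_PAUSE);
}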

Wouldn't a usable monitor-wait implementation be a much better
solution to the spinlock problem?

If we implement virt spinlocks using monitor-wait, they don't spin
but simply wait in place; the HV can then decide to run someone else.

This is the HV equivalent to futexes.
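
Concretely, such a lock might look like the sketch below; the helper
names (the monitor/mwait wrappers, vspin_lock) are made up for
illustration, and the arm-monitor-then-recheck dance closes the race
with a release that lands before MONITOR executes:

static inline void monitor(const volatile void *addr)
{
	/* Arm address-range monitoring on the line holding 'addr'. */
	asm volatile("monitor" :: "a"(addr), "c"(0UL), "d"(0UL) : "memory");
}

static inline void mwait(void)
{
	/* Wait until the monitored line is written (or an interrupt
	 * arrives).  An HV intercepting MWAIT can run another VCPU
	 * here instead of letting this one burn its quantum. */
	asm volatile("mwait" :: "a"(0UL), "c"(0UL) : "memory");
}

struct vspinlock {
	volatile unsigned int locked;
};

static void vspin_lock(struct vspinlock *lock)
{
	while (__sync_lock_test_and_set(&lock->locked, 1)) {
		monitor(&lock->locked);
		/* Re-check after arming the monitor; if the lock was
		 * released in the meantime we must not MWAIT. */
		if (lock->locked)
			mwait();
	}
}

static void vspin_unlock(struct vspinlock *lock)
{
	/* The store to the monitored line wakes any MWAITing waiter. */
	__sync_lock_release(&lock->locked);
}

Unlike the pause filter's blind count, an intercepted MWAIT tells the
HV that the VCPU has nothing to do until the monitored line changes,
which is the futex analogy above.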

The only problem with this is that current hardware has horrid mwait
wakeup latencies. If those were (much) improved, you wouldn't need
ugly yield hacks like this one.
