Date:	Mon,  9 Sep 2013 11:11:31 -0400
From:	Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>
To:	linux-kernel@...r.kernel.org, xen-devel@...ts.xenproject.org,
	boris.ostrovsky@...cle.com, david.vrabel@...rix.com,
	stefan.bader@...onical.com, stefano.stabellini@...citrix.com,
	jeremy@...p.org
Subject: [PATCH]  Bug-fixes to enable PV ticketlock to work under Xen PVHVM with Linux v3.12. (v2)

Changelog since v1 (see https://lkml.org/lkml/2013/9/7/78)
 - Added Reviewed-by tag.
 - Fleshed out description of patches
 - Ran some perf
 - Used xen_smp_prepare_boot_cpu instead of xen_hvm_smp_init

After a few false starts, lots of debugging, and tons of help from Stefano and
David on how the event mechanism is supposed to work, I am happy to present a set
of bug-fixes that make PV ticketlocks work under Xen PVHVM with Linux v3.12.

Thanks to commit 816434ec4a674fcdb3c2221a6dffdc8f34020550
(Merge branch 'x86-spinlocks-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip),
v3.12 now has PV ticketlocks. That means:
 - Xen PV bytelock has been replaced by Xen PV ticketlock.
 - Xen PVHVM is using ticketlocks (not the PV variant) - this series makes it PV.
 - baremetal is still using ticketlocks.
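As background, the core idea of a ticket lock can be sketched as below. This is a hypothetical, simplified C11 illustration (the `ticketlock`/`head`/`tail` names are mine, not the kernel's `arch_spinlock_t`); the PV variant adds a slow path where a vCPU blocks on an event channel after spinning too long and the unlocker "kicks" it, which is omitted here:

```c
#include <stdatomic.h>

/* Minimal ticket lock sketch: tail hands out tickets, head is the
 * ticket currently being served. FIFO fairness falls out for free. */
struct ticketlock {
	atomic_uint head;	/* ticket currently being served */
	atomic_uint tail;	/* next ticket to hand out */
};

static void ticket_lock(struct ticketlock *lk)
{
	/* atomically take my ticket and advance the tail */
	unsigned int me = atomic_fetch_add(&lk->tail, 1);

	while (atomic_load(&lk->head) != me)
		; /* a PV implementation would halt/yield here past a threshold */
}

static void ticket_unlock(struct ticketlock *lk)
{
	atomic_fetch_add(&lk->head, 1);	/* serve the next ticket */
}
```

The problem under virtualization is that a spinning waiter burns its whole timeslice if the ticket holder's vCPU is preempted, which is why the PV slow path (block and get kicked) matters at 2:1 overcommit.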

In other words, everything in the kernel is ticketlock based, with the virtualized
variants adding the 'PV' part to help under overcommit.

Please take a look at the patches. They are also available as a git tree
under:

 git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/pvticketlock.v7.1

I ran some light performance tests with two guests - each oversubscribed 2:1 and
running a spinlock_hog wherein each CPU tries to get a lock. The machine
is a 4-CPU Intel box, and each guest is running with 8 VCPUs.

It is not a perfect test case - for a thorough evaluation I have asked our internal
performance engineer to test it out with various workloads and guests.

However, it does demonstrate that these patches work and do not
incur any performance regression (and yes, they do show a performance
improvement).

 arch/x86/xen/enlighten.c |  1 -
 arch/x86/xen/smp.c       | 28 +++++++++++++++++++++++-----
 arch/x86/xen/spinlock.c  | 45 ++++++++-------------------------------------
 3 files changed, 31 insertions(+), 43 deletions(-)

Konrad Rzeszutek Wilk (5):
      xen/spinlock: Fix locking path engaging too soon under PVHVM.
      xen/spinlock: We don't need the old structure anymore
      xen/smp: Update pv_lock_ops functions before alternative code starts under PVHVM
      xen/spinlock: Don't setup xen spinlock IPI kicker if disabled.
      Revert "xen/spinlock: Disable IRQ spinlock (PV) allocation on PVHVM"

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
