Message-ID: <20150715093924.GH2859@worktop.programming.kicks-ass.net>
Date:	Wed, 15 Jul 2015 11:39:24 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Waiman Long <Waiman.Long@...com>
Cc:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org, Scott J Norton <scott.norton@...com>,
	Douglas Hatch <doug.hatch@...com>,
	Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH v2 4/6] locking/pvqspinlock: Allow vCPUs kick-ahead

On Tue, Jul 14, 2015 at 10:13:35PM -0400, Waiman Long wrote:
> Frequent CPU halting (vmexit) and CPU kicking (vmenter) lengthen the
> critical section and block forward progress.  This patch implements
> a kick-ahead mechanism where the unlocker will kick the queue-head
> vCPU as well as up to four additional vCPUs next to the queue head
> if they were halted.  The kicks are done after exiting the critical
> section to improve parallelism.
> 
> The amount of kick-ahead allowed depends on the number of vCPUs
> in the VM guest.  This patch, by itself, won't do much as most of
> the kicks are currently done at lock time. Coupled with the next
> patch that defers lock time kicking to unlock time, it should improve
> overall system performance in a busy overcommitted guest.
> 
> Linux kernel builds were run in KVM guest on an 8-socket, 4
> cores/socket Westmere-EX system and a 4-socket, 8 cores/socket
> Haswell-EX system. Both systems are configured to have 32 physical
> CPUs. The kernel build times before and after the patch were:
> 
> 		    Westmere			Haswell
>   Patch		32 vCPUs    48 vCPUs	32 vCPUs    48 vCPUs
>   -----		--------    --------    --------    --------
>   Before patch	 3m25.0s    10m34.1s	 2m02.0s    15m35.9s
>   After patch    3m27.4s    10m32.0s	 2m00.8s    14m52.5s
> 
> There wasn't too much difference before and after the patch.

That means either the patch isn't worth it, or, as you seem to imply,
it's in the wrong place in this series.

> @@ -224,7 +233,16 @@ static unsigned int pv_lock_hash_bits __read_mostly;
>   */
>  void __init __pv_init_lock_hash(void)
>  {
> -	int pv_hash_size = ALIGN(4 * num_possible_cpus(), PV_HE_PER_LINE);
> +	int ncpus = num_possible_cpus();
> +	int pv_hash_size = ALIGN(4 * ncpus, PV_HE_PER_LINE);
> +	int i;
> +
> +	/*
> +	 * The minimum number of vCPUs required in each kick-ahead level
> +	 */
> +	static const u8 kick_ahead_threshold[PV_KICK_AHEAD_MAX] = {
> +		4, 8, 16, 32
> +	};

You are aware we have ilog2(), right?
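For instance, since the thresholds {4, 8, 16, 32} are consecutive powers of
two, the whole table-plus-loop could collapse to a single ilog2()
computation. A sketch of the idea (the kernel's ilog2() lives in
<linux/log2.h>; here it is emulated with __builtin_clz, and the
kick_ahead_level() helper name is mine, so this compiles standalone):

```c
#include <assert.h>

#define PV_KICK_AHEAD_MAX	4

/* Standalone stand-in for the kernel's ilog2(): floor(log2(n)), n > 0. */
static int ilog2(unsigned int n)
{
	return 31 - __builtin_clz(n);
}

/*
 * With power-of-two thresholds, the kick-ahead level for ncpus is just
 * ilog2(ncpus) - 1, clamped to [0, PV_KICK_AHEAD_MAX]: 4 vCPUs give
 * level 1, 8 give level 2, 16 give level 3, and 32 or more give level 4.
 */
static int kick_ahead_level(unsigned int ncpus)
{
	int level = ncpus ? ilog2(ncpus) - 1 : 0;

	if (level < 0)
		level = 0;
	if (level > PV_KICK_AHEAD_MAX)
		level = PV_KICK_AHEAD_MAX;
	return level;
}
```

This gives the same results as the table-driven loop in the patch, e.g.
level 0 below 4 vCPUs, level 1 at 4-7, and level 4 from 32 up.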

> +	/*
> +	 * Enable the unlock kick ahead mode according to the number of
> +	 * vCPUs available.
> +	 */
> +	for (i = PV_KICK_AHEAD_MAX; i > 0; i--)
> +		if (ncpus >= kick_ahead_threshold[i - 1]) {
> +			pv_kick_ahead = i;
> +			break;
> +		}

That's missing { }.
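That is, per kernel coding style a multi-line loop body wants braces.
Same logic, restyled (wrapped here in a hypothetical pick_kick_ahead()
helper so it compiles standalone):

```c
#include <assert.h>

#define PV_KICK_AHEAD_MAX	4

/* Minimum number of vCPUs required for each kick-ahead level. */
static const unsigned char kick_ahead_threshold[PV_KICK_AHEAD_MAX] = {
	4, 8, 16, 32
};

/* The loop from the patch, with braces on the multi-line for body. */
static int pick_kick_ahead(int ncpus)
{
	int pv_kick_ahead = 0;
	int i;

	for (i = PV_KICK_AHEAD_MAX; i > 0; i--) {
		if (ncpus >= kick_ahead_threshold[i - 1]) {
			pv_kick_ahead = i;
			break;
		}
	}
	return pv_kick_ahead;
}
```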

> +	if (pv_kick_ahead)
> +		pr_info("PV unlock kick ahead level %d enabled\n",
> +			pv_kick_ahead);

Idem.

That said, I still really dislike this patch; it again seems like a
random bunch of hacks.

You also do not offer any justification for any of the magic numbers.
