Message-ID: <5640CF08.2010001@hpe.com>
Date:	Mon, 09 Nov 2015 11:51:20 -0500
From:	Waiman Long <waiman.long@....com>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Ingo Molnar <mingo@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	"H. Peter Anvin" <hpa@...or.com>, x86@...nel.org,
	linux-kernel@...r.kernel.org,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Davidlohr Bueso <dave@...olabs.net>
Subject: Re: [PATCH tip/locking/core v9 6/6] locking/pvqspinlock: Queue node
 adaptive spinning

On 11/06/2015 03:37 PM, Peter Zijlstra wrote:
> On Fri, Nov 06, 2015 at 12:54:06PM -0500, Waiman Long wrote:
>>>> +static void pv_wait_node(struct mcs_spinlock *node, struct mcs_spinlock *prev)
>>>>   {
>>>>   	struct pv_node *pn = (struct pv_node *)node;
>>>> +	struct pv_node *pp = (struct pv_node *)prev;
>>>>   	int waitcnt = 0;
>>>>   	int loop;
>>>> +	bool wait_early;
>>>>
>>>>   	/* waitcnt processing will be compiled out if !QUEUED_LOCK_STAT */
>>>>   	for (;; waitcnt++) {
>>>> -		for (loop = SPIN_THRESHOLD; loop; loop--) {
>>>> +		for (wait_early = false, loop = SPIN_THRESHOLD; loop; loop--) {
>>>>   			if (READ_ONCE(node->locked))
>>>>   				return;
>>>> +			if (pv_wait_early(pp, loop)) {
>>>> +				wait_early = true;
>>>> +				break;
>>>> +			}
>>>>   			cpu_relax();
>>>>   		}
>>>>
>>> So if prev points to another node, it will never see vcpu_running. Was
>>> that fully intended?
>> I had added code in pv_wait_head_or_lock to set the state appropriately for
>> the queue head vCPU.
> Yes, but that's the head, for nodes we'll always have halted or hashed.

The node state is initialized to vcpu_running. In pv_wait_node(), it 
is changed to vcpu_halted before sleeping and back to vcpu_running 
after waking up. So it is not true that a node is always either halted 
or hashed.

If the state was instead changed to vcpu_hashed, it will be changed back 
to vcpu_running in pv_wait_head_or_lock() before entering the active 
spinning loop. There is certainly a small window of time where the node 
state does not reflect the actual vCPU state, but that is the best we 
can do so far.

Cheers,
Longman
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
