Message-ID: <20160625152130.GA2452@insomnia>
Date:	Sat, 25 Jun 2016 23:21:30 +0800
From:	Boqun Feng <boqun.feng@...il.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Pan Xinhui <xinhui.pan@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org, mingo@...hat.com, dave@...olabs.net,
	will.deacon@....com, Waiman.Long@....com, benh@...nel.crashing.org
Subject: Re: [PATCH] locking/osq: Drop the overload of osq lock

On Sat, Jun 25, 2016 at 04:24:47PM +0200, Peter Zijlstra wrote:
> On Sat, Jun 25, 2016 at 01:42:03PM -0400, Pan Xinhui wrote:
> > An over-committed guest with more vCPUs than pCPUs suffers heavy
> > overhead in osq_lock().
> > 
> > This is because vCPU A holds the osq lock and yields out, while vCPU B
> > waits for its per_cpu node->locked to be set. IOW, vCPU B waits for
> > vCPU A to run and unlock the osq lock. Even the need_resched() check
> > does not help in this scenario.
> > 
> > To fix this issue, add a threshold to the spin loop in osq_lock(),
> > with a value roughly equal to SPIN_THRESHOLD.
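
For context, the loop in question is the node->locked spin in
kernel/locking/osq_lock.c; a rough sketch of the proposed bound
(OSQ_SPIN_THRESHOLD is a name made up here for illustration):

	int count = 0;

	while (!READ_ONCE(node->locked)) {
		/*
		 * Bail if rescheduling is needed, or once we have spun
		 * past the bound, since the vCPU that should set
		 * node->locked may not be running at all.
		 */
		if (need_resched() || ++count >= OSQ_SPIN_THRESHOLD)
			goto unqueue;
		cpu_relax();
	}
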
> 
> Blergh, virt ...
> 
> So yes, lock holder preemption sucks. You would also want to limit the
> immediate spin on owner.
> 
> Also, I really hate these random-number spin-loop thresholds.
> 
> Is it at all possible to get feedback from your LPAR stuff that the vcpu
> was preempted? Because at that point we can do something like:
> 

Good point!

> 
> 	int vpc = vcpu_preempt_count();
> 
> 	...
> 
> 	for (;;) {
> 
> 		/* the big spin loop */
> 
> 		if (need_resched() || vpc != vcpu_preempt_count())

So on PPC, we have lppaca::yield_count to detect when a vcpu is
preempted: if the yield_count is even, the vcpu is running; otherwise it
is preempted (__spin_yield() is a user of this).
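
For reference, a simplified sketch of how __spin_yield()
(arch/powerpc/lib/locks.c) uses that parity; this elides the recheck of
the lock word that the real code does:

void __spin_yield(arch_spinlock_t *lock)
{
	unsigned int holder_cpu = lock->slock & 0xffff;
	unsigned int yield_count =
		be32_to_cpu(lppaca_of(holder_cpu).yield_count);

	/* Even yield_count: the holder vcpu is running, nothing to do. */
	if ((yield_count & 1) == 0)
		return;

	/* Odd: the holder is preempted; confer our cycles to it. */
	plpar_hcall_norets(H_CONFER,
			   get_hard_smp_processor_id(holder_cpu),
			   yield_count);
}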

Therefore it makes more sense to do

		if (need_resched() || vcpu_is_preempted(old))

here, and implement vcpu_is_preempted() on PPC as

bool vcpu_is_preempted(int cpu)
{
	/* yield_count is odd iff the vcpu has been preempted out. */
	return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
}
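
A generic fallback, analogous to the vcpu_preempt_count() default you
suggest below, would presumably just return false so the compiler can
drop the check on bare metal:

static inline bool vcpu_is_preempted(int cpu)
{
	return false;
}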

Thoughts?

Regards,
Boqun

> 			/* bail */
> 
> 	}
> 
> 
> With a default implementation like:
> 
> static inline int vcpu_preempt_count(void)
> {
> 	return 0;
> }
> 
> So the compiler can make it all go away.
> 
> 
> But on virt muck it would stop spinning the moment the vcpu gets
> preempted, which is the right moment, I'm thinking.
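
Putting the pieces together, the osq_lock() spin would then look
roughly like this (node_cpu() is a stand-in for however we obtain the
CPU that owns the previous node):

	while (!READ_ONCE(node->locked)) {
		/*
		 * Stop spinning as soon as the vcpu that must set
		 * node->locked is known to be preempted.
		 */
		if (need_resched() ||
		    vcpu_is_preempted(node_cpu(node->prev)))
			goto unqueue;
		cpu_relax();
	}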
