Date:	Thu, 9 Apr 2009 18:14:04 +0200
From:	Heiko Carstens <heiko.carstens@...ibm.com>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Ingo Molnar <mingo@...e.hu>,
	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Christian Borntraeger <borntraeger@...ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] mutex: have non-spinning mutexes on s390 by default

On Thu, 09 Apr 2009 17:54:56 +0200
Peter Zijlstra <peterz@...radead.org> wrote:

> > Index: linux-2.6/kernel/sched_features.h
> > ===================================================================
> > --- linux-2.6.orig/kernel/sched_features.h
> > +++ linux-2.6/kernel/sched_features.h
> > @@ -14,4 +14,8 @@ SCHED_FEAT(LB_WAKEUP_UPDATE, 1)
> >  SCHED_FEAT(ASYM_EFF_LOAD, 1)
> >  SCHED_FEAT(WAKEUP_OVERLAP, 0)
> >  SCHED_FEAT(LAST_BUDDY, 1)
> > +#ifdef CONFIG_HAVE_DEFAULT_NO_SPIN_MUTEXES
> > +SCHED_FEAT(OWNER_SPIN, 0)
> > +#else
> >  SCHED_FEAT(OWNER_SPIN, 1)
> > +#endif
> 
> Hmm, I'd rather you made the whole block in __mutex_lock_common
> go away on that CONFIG thingy.

Sure, I can do that.
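
Roughly like this, I assume (untested sketch; the exact extent of the
spin block in __mutex_lock_common may differ):

/*
 * kernel/mutex.c, __mutex_lock_common(): compile the adaptive spin
 * code out entirely instead of only disabling it at runtime via the
 * OWNER_SPIN sched feature. "task" is the local "current" pointer
 * that __mutex_lock_common already has.
 */
#ifndef CONFIG_HAVE_DEFAULT_NO_SPIN_MUTEXES
	for (;;) {
		struct thread_info *owner;

		/* If there's an owner, spin while it keeps running. */
		owner = ACCESS_ONCE(lock->owner);
		if (owner && !mutex_spin_on_owner(lock, owner))
			break;

		/* Try to take the lock; fast exit on success. */
		if (atomic_cmpxchg(&lock->count, 1, 0) == 1) {
			lock_acquired(&lock->dep_map, ip);
			mutex_set_owner(lock);
			preempt_enable();
			return 0;
		}

		if (!owner && (need_resched() || rt_task(task)))
			break;

		cpu_relax();
	}
#endif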

> Would be nice though to get something working on s390. Does it have a
> monitor-wait-like instruction where it can properly sleep so that
> another virtual host can run?
> 
> If so, we could possibly do a monitor wait on the lock owner field
> instead of spinning.

What we have now: the cpu_relax() implementation on s390 yields the
current (virtual) cpu and gives it back to the hypervisor. If the
hypervisor has nothing else to schedule, or decides for fairness
reasons that this cpu has to run again, it will be scheduled again.
One roundtrip (exit/reenter) is quite expensive (~1200 cycles).
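
For reference, cpu_relax() on s390 essentially does a diagnose 0x44
("voluntary time slice end") call into the hypervisor; roughly:

static inline void cpu_relax(void)
{
	/* diag 0x44 (decimal 68): give the virtual cpu back. */
	if (MACHINE_HAS_DIAG44)
		asm volatile("diag 0,0,68");
	barrier();
}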

We also have a directed yield where you can tell the hypervisor "don't
schedule me again until cpu x has been scheduled".
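
That one is a diagnose 0x9c with the target cpu address; a wrapper
would look about like this (name made up for illustration):

/* Directed yield: "don't run me, run cpu <cpu_addr> instead". */
static inline void smp_yield_to_cpu(unsigned short cpu_addr)
{
	asm volatile("diag %0,0,0x9c" : : "d" (cpu_addr));
}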

And we have an IPI (signal processor with order code "sense running")
which examines whether the target cpu is actually physically backed. That's
in the order of ~80 cycles. At least that's what I just measured with
two hypervisors running below my Linux kernel.
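
In code, that check could look something like this (helper and
constant names are made up for illustration; the real SIGP interface
differs in detail):

/*
 * Ask via SIGP "sense running status" whether the target (virtual)
 * cpu is currently backed by a physical cpu. Hypothetical names.
 */
static inline int cpu_physically_backed(int cpu)
{
	u32 status = 0;

	/* cc "status stored" plus the not-running bit means unbacked */
	if (sigp(cpu, SIGP_SENSE_RUNNING, &status) == SIGP_CC_STATUS_STORED &&
	    (status & SIGP_STATUS_NOT_RUNNING))
		return 0;
	return 1;
}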

So the idea for the spinning loop would be to additionally test whether
the target cpu is physically backed. If it is, we could happily spin and
sense again, more or less like the current code does.
Now the question is: what do we do if it isn't backed? Yield the current
cpu in favour of the target cpu, or just make the current process sleep
and schedule a different one?
For that I'd like to have performance numbers before we go in any
direction. It might take some time to get the numbers since it's not
easy to get a slot for performance measurements.
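
To make the idea concrete, the owner-spin helper could grow a check
along these lines, using the hypothetical cpu_physically_backed()
from above; whether to return, directed-yield or sleep at that point
is exactly the open question:

/* Sketch only: spin on the owner while its cpu is backed. */
int mutex_spin_on_owner(struct mutex *lock, struct thread_info *owner)
{
	while (ACCESS_ONCE(lock->owner) == owner) {
		if (need_resched())
			return 0;
		if (!cpu_physically_backed(owner->cpu))
			return 0;	/* or directed yield / sleep */
		cpu_relax();
	}
	return 1;
}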