Date: Sun, 23 Dec 2012 20:55:08 -0200
From: Rafael Aquini <aquini@...hat.com>
To: Rik van Riel <riel@...hat.com>
Cc: linux-kernel@...r.kernel.org, walken@...gle.com, lwoodman@...hat.com,
	jeremy@...p.org, Jan Beulich <JBeulich@...ell.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 2/3] x86,smp: proportional backoff for ticket spinlocks

On Fri, Dec 21, 2012 at 06:51:15PM -0500, Rik van Riel wrote:
> Subject: x86,smp: proportional backoff for ticket spinlocks
>
> Simple fixed value proportional backoff for ticket spinlocks.
> By pounding on the cacheline with the spin lock less often,
> bus traffic is reduced. In cases of a data structure with
> embedded spinlock, the lock holder has a better chance of
> making progress.
>
> Signed-off-by: Rik van Riel <riel@...hat.com>
> ---

Reviewed-by: Rafael Aquini <aquini@...hat.com>

>  arch/x86/kernel/smp.c |    6 ++++--
>  1 files changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
> index 20da354..4e44840 100644
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -118,9 +118,11 @@ static bool smp_no_nmi_ipi = false;
>  void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
>  {
>  	for (;;) {
> -		cpu_relax();
> -		inc.head = ACCESS_ONCE(lock->tickets.head);
> +		int loops = 50 * (__ticket_t)(inc.tail - inc.head);
> +		while (loops--)
> +			cpu_relax();
>
> +		inc.head = ACCESS_ONCE(lock->tickets.head);
>  		if (inc.head == inc.tail)
>  			break;
>  	}
> --
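For context outside the kernel tree, here is a minimal user-space sketch of the
same proportional-backoff idea, using C11 atomics in place of arch_spinlock_t
and ACCESS_ONCE. The names (ticket_lock_t, BACKOFF_FACTOR, the cpu_relax
wrapper) are illustrative assumptions, not kernel API; the backoff line mirrors
the patch, spinning a fixed multiple of the number of waiters still ahead in
the ticket queue.

/*
 * Minimal user-space sketch of a ticket spinlock with proportional
 * backoff, modeled on the patch above. Illustrative only: the type
 * and constant names are assumptions, not kernel interfaces.
 */
#include <stdatomic.h>
#include <stdint.h>

#define BACKOFF_FACTOR 50	/* spins per waiter ahead of us, as in the patch */

typedef uint16_t ticket_t;	/* stands in for __ticket_t */

typedef struct {
	_Atomic ticket_t head;	/* ticket now being served */
	_Atomic ticket_t tail;	/* next ticket to hand out */
} ticket_lock_t;

static inline void cpu_relax(void)
{
#if defined(__x86_64__) || defined(__i386__)
	__builtin_ia32_pause();	/* x86 PAUSE; no-op elsewhere */
#endif
}

static void ticket_lock(ticket_lock_t *lock)
{
	/* Take a ticket; (my_ticket - head) is our distance from the front. */
	ticket_t my_ticket = atomic_fetch_add(&lock->tail, 1);
	ticket_t head = atomic_load_explicit(&lock->head, memory_order_acquire);

	while (head != my_ticket) {
		/*
		 * Proportional backoff: the further back in line we are,
		 * the longer we wait before touching the lock cacheline
		 * again. Unsigned subtraction handles counter wraparound.
		 */
		int loops = BACKOFF_FACTOR * (ticket_t)(my_ticket - head);
		while (loops--)
			cpu_relax();
		head = atomic_load_explicit(&lock->head, memory_order_acquire);
	}
}

static void ticket_unlock(ticket_lock_t *lock)
{
	atomic_fetch_add_explicit(&lock->head, 1, memory_order_release);
}

As in the patch, the narrow unsigned cast on (my_ticket - head) keeps the
distance calculation correct even when the ticket counters wrap around.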