Message-ID: <20121222034910.GG27621@home.goodmis.org>
Date:	Fri, 21 Dec 2012 22:49:10 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Rik van Riel <riel@...hat.com>
Cc:	David Daney <ddaney.cavm@...il.com>, linux-kernel@...r.kernel.org,
	aquini@...hat.com, walken@...gle.com, lwoodman@...hat.com,
	jeremy@...p.org, Jan Beulich <JBeulich@...ell.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 3/3] x86,smp: auto tune spinlock backoff delay factor

On Fri, Dec 21, 2012 at 09:51:35PM -0500, Rik van Riel wrote:
> On 12/21/2012 07:47 PM, David Daney wrote:
> 
> >>+#define MIN_SPINLOCK_DELAY 1
> >>+#define MAX_SPINLOCK_DELAY 1000
> >>+DEFINE_PER_CPU(int, spinlock_delay) = { MIN_SPINLOCK_DELAY };
> >
> >
> >This gives the same delay for all locks in the system, but the amount of
> >work done under each lock is different.  So, for any given lock, the
> >delay is not optimal.
> >
> >This is an untested idea that came to me after looking at this:
> >
> >o Assume that for any given lock, the optimal delay is the same for all
> >CPUs in the system.
> >
> >o Store a per-lock delay value in arch_spinlock_t.

This can bloat the data structures. I would like to avoid that.
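
To make the size concern concrete, here is a hypothetical layout
(illustrative types only, not the real arch_spinlock_t): on a small
NR_CPUS configuration the ticket lock is just a head/tail pair, and
every structure that embeds a spinlock would pay for any field added
to it.

#include <stdint.h>

/* Hypothetical layouts, for illustration only. */
typedef struct {
        uint8_t head, tail;             /* ticket pair: 2 bytes */
} tiny_ticket_lock_t;

typedef struct {
        uint8_t head, tail;
        uint16_t delay;                 /* per-lock delay: 4 bytes total */
} tiny_ticket_lock_with_delay_t;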

> >
> >o Once a CPU owns the lock it can update the delay as you do for the
> >per_cpu version.  Tuning the delay on fewer of the locking operations
> >reduces bus traffic, but makes it converge more slowly.
> >
> >o Bonus points if you can update the delay as part of the releasing store.
> 
> It would absolutely have to be part of the same load and
> store cycle, otherwise we would increase bus traffic and
> defeat the purpose.
> 
> However, since spinlock contention should not be the
> usual state, and all a scalable lock does is make sure
> that N+1 CPUs do not perform worse than N CPUs, using
> scalable locks is a stop-gap measure.
> 
> I believe a stop-gap measure should be kept as simple as
> we can. I am willing to consider moving to a per-lock
> delay factor if we can figure out an easy way to do it,
> but I would like to avoid too much extra complexity...

Rik,

I like your solution. It's rather simple, and simple solutions tend to
end up being the closest to optimal. The more complex a solution gets,
the more it starts chasing fireflies.

Locks that are not likely to be contended will most likely not hit this
code, as it only gets triggered when contended. Now, really the wait
will track the size of the critical section. If quick locks get hit a
lot, the auto delay will shrink. If long critical sections start getting
contention, the auto delay will grow. But the general delay should
settle into a balance in the system that should be close to ideal. Kind
of like the NUMA balancing ;-)
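
For concreteness, here's a rough, self-contained user-space sketch of
the behavior described above (a sketch under assumptions, not Rik's
actual patch): a per-thread delay stands in for the per-CPU
spinlock_delay, and the tuning rule here is only illustrative.

#include <stdatomic.h>

#define MIN_SPINLOCK_DELAY 1
#define MAX_SPINLOCK_DELAY 1000

/* Per-"CPU" (here: per-thread) delay, shared across all locks. */
static _Thread_local int spinlock_delay = MIN_SPINLOCK_DELAY;

struct ticket_lock {
        atomic_uint head;       /* ticket currently being served */
        atomic_uint tail;       /* next ticket to hand out */
};

static void ticket_lock_acquire(struct ticket_lock *lock)
{
        unsigned int ticket = atomic_fetch_add(&lock->tail, 1);
        int delay = spinlock_delay;

        while (atomic_load(&lock->head) != ticket) {
                /*
                 * Contended: wait roughly in proportion to how many
                 * waiters are ahead of us before rereading the lock,
                 * so the cache line isn't hammered on every pass.
                 */
                unsigned int ahead = ticket - atomic_load(&lock->head);

                for (volatile unsigned int i = 0; i < ahead * (unsigned int)delay; i++)
                        ;       /* cpu_relax() in the kernel */

                if (atomic_load(&lock->head) == ticket) {
                        /* Lock freed quickly: shrink the delay. */
                        if (delay > MIN_SPINLOCK_DELAY)
                                delay--;
                } else if (delay < MAX_SPINLOCK_DELAY) {
                        /* Critical sections are longer than guessed:
                         * back off more next time around. */
                        delay++;
                }
        }
        spinlock_delay = delay; /* remember for the next contention */
}

static void ticket_lock_release(struct ticket_lock *lock)
{
        atomic_fetch_add(&lock->head, 1);
}

Note that the delay is only adjusted on the contended path, so
uncontended locks never touch it, and (as in the patch) one per-CPU
value is shared by all locks rather than stored per lock.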

Anyway, I'd like to see this code tested, and more benchmarks run
against it.

Thanks,

-- Steve


