Message-ID: <50D52037.60602@redhat.com>
Date: Fri, 21 Dec 2012 21:51:35 -0500
From: Rik van Riel <riel@...hat.com>
To: David Daney <ddaney.cavm@...il.com>
CC: linux-kernel@...r.kernel.org, aquini@...hat.com, walken@...gle.com,
lwoodman@...hat.com, jeremy@...p.org,
Jan Beulich <JBeulich@...ell.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [RFC PATCH 3/3] x86,smp: auto tune spinlock backoff delay factor
On 12/21/2012 07:47 PM, David Daney wrote:
>> +#define MIN_SPINLOCK_DELAY 1
>> +#define MAX_SPINLOCK_DELAY 1000
>> +DEFINE_PER_CPU(int, spinlock_delay) = { MIN_SPINLOCK_DELAY };
>
>
> This gives the same delay for all locks in the system, but the amount of
> work done under each lock is different. So, for any given lock, the
> delay is not optimal.
>
> This is an untested idea that came to me after looking at this:
>
> o Assume that for any given lock, the optimal delay is the same for all
> CPUs in the system.
>
> o Store a per-lock delay value in arch_spinlock_t.
>
> o Once a CPU owns the lock it can update the delay as you do for the
> per_cpu version. Tuning the delay on fewer of the locking operations
> reduces bus traffic, but makes it converge more slowly.
>
> o Bonus points if you can update the delay as part of the releasing store.
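For concreteness, I read the per-lock idea as something
along these lines (completely untested sketch; the layout,
field names and function names are made up for illustration
and this is not the real x86 arch_spinlock_t):

/*
 * Hypothetical layout, NOT the real x86 arch_spinlock_t:
 * a per-lock delay sits next to the ticket head, where the
 * current lock holder can retune it without racing against
 * other writers.
 */
typedef struct arch_spinlock {
	unsigned short head;	/* ticket currently being served     */
	unsigned short delay;	/* spin loops per waiter ahead of us */
	unsigned short tail;	/* next ticket to be handed out      */
} arch_spinlock_t;

static inline void ticket_spin_wait(arch_spinlock_t *lock,
				    unsigned short my_ticket)
{
	/*
	 * Read the tuned value once; re-reading it on every
	 * iteration would add the bus traffic we are trying
	 * to avoid.
	 */
	unsigned short delay = ACCESS_ONCE(lock->delay);
	unsigned short head;

	while ((head = ACCESS_ONCE(lock->head)) != my_ticket) {
		unsigned int loops;

		loops = delay * (unsigned short)(my_ticket - head);
		while (loops--)
			cpu_relax();
	}
}

The CPU that just acquired the lock could then write
lock->delay with the same tuning rule the per-cpu version
uses, while nobody else can be updating it. Note that the
real x86 ticket lock keeps head and tail adjacent so the
lock path can grab both with a single xadd; a layout like
the above would need more care than I am showing here.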
It would absolutely have to be part of the same load and
store cycle; otherwise we would increase bus traffic and
defeat the purpose.
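With a layout like the sketch above, where the delay shares
an aligned 32-bit word with the ticket head, the releasing
store could fold the retune in for free. Again untested,
and assuming little-endian x86 and a 4-byte aligned lock:

static inline void
ticket_spin_unlock_retune(arch_spinlock_t *lock,
			  unsigned short new_delay)
{
	/* New head in the low half, new delay in the high half. */
	u32 val = (u32)(unsigned short)(lock->head + 1) |
		  ((u32)new_delay << 16);

	/*
	 * One 32-bit store both releases the lock and publishes
	 * the new delay, so the retuning costs no extra bus
	 * transaction.
	 */
	ACCESS_ONCE(*(u32 *)&lock->head) = val;
}

The catch is making that coexist with the xadd the lock
side does on head and tail.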
However, since spinlock contention should not be the
usual state, and all a scalable lock does is make sure
that N+1 CPUs do not perform worse than N CPUs, using
scalable locks is a stop-gap measure.
I believe a stop-gap measure should be kept as simple as
we can make it. I am willing to consider moving to a
per-lock delay factor if we can figure out an easy way to
do it, but I would like to avoid too much extra complexity...