Message-ID: <1356618448.30414.948.camel@edumazet-glaptop>
Date: Thu, 27 Dec 2012 06:27:28 -0800
From: Eric Dumazet <eric.dumazet@...il.com>
To: Michel Lespinasse <walken@...gle.com>
Cc: Rik van Riel <riel@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org, aquini@...hat.com,
lwoodman@...hat.com, jeremy@...p.org,
Jan Beulich <JBeulich@...ell.com>,
Thomas Gleixner <tglx@...utronix.de>,
Tom Herbert <therbert@...gle.com>
Subject: Re: [RFC PATCH 3/3 -v2] x86,smp: auto tune spinlock backoff delay factor
On Wed, 2012-12-26 at 22:07 -0800, Michel Lespinasse wrote:
> If we go with per-spinlock tunings, I feel we'll most likely want to
> add an associative cache in order to avoid the 1/16 chance (~6%) of
> getting 595Mbit/s instead of 982Mbit/s when there is a hash collision.
>
> I would still prefer if we could make up something that didn't require
> per-spinlock tunings, but it's not clear if that'll work. At least we
> now know of a simple enough workload to figure it out :)
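A minimal user-space sketch of the kind of small associative cache Michel describes above; the names (delay_cache, DELAY_SETS, DELAY_WAYS) and the eviction policy are made up for illustration and are not from any posted patch. The point is only that a 2-way set-associative lookup keeps two hot locks that hash to the same set from constantly evicting each other's tuned delay, which is what causes the occasional 595Mbit/s result with a direct-mapped table:

#include <stdint.h>

#define DELAY_SET_BITS	3
#define DELAY_SETS	(1u << DELAY_SET_BITS)
#define DELAY_WAYS	2

struct delay_entry {
	const void *lock;	/* tag: lock address, NULL if unused */
	uint16_t delay;		/* tuned backoff delay for this lock */
};

static struct delay_entry delay_cache[DELAY_SETS][DELAY_WAYS];

static uint16_t delay_lookup(const void *lock, uint16_t default_delay)
{
	unsigned int set = ((uintptr_t)lock >> 4) & (DELAY_SETS - 1);
	struct delay_entry *e = delay_cache[set];
	int way;

	for (way = 0; way < DELAY_WAYS; way++) {
		if (e[way].lock == lock)
			return e[way].delay;
	}
	/* Miss: evict way 0 here for simplicity; a real implementation
	 * would pick a victim more carefully (LRU, random, etc.). */
	e[0].lock = lock;
	e[0].delay = default_delay;
	return default_delay;
}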
Even with per-spinlock tuning, we can find workloads where the lock hold
time depends on the context.
For example, a complex qdisc hierarchy typically has different hold times
for enqueue and dequeue operations.
So the hash sounds good to me, because the hash key could mix both the
lock address and the caller IP (__builtin_return_address(1) in
ticket_spin_lock_wait()).
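A rough user-space sketch of what mixing the two values into a hash index could look like; the table name, size, and multiplier are illustrative assumptions, not code from the patch series, and the helper takes the caller IP at level 0 here because it is a standalone function (the message above suggests __builtin_return_address(1) when the lookup sits inside ticket_spin_lock_wait() itself):

#include <stdint.h>

#define DELAY_HASH_BITS	5
#define DELAY_HASH_SIZE	(1u << DELAY_HASH_BITS)

static uint16_t delay_table[DELAY_HASH_SIZE];	/* per-slot tuned delay */

static inline unsigned int delay_hash(const void *lock, const void *caller_ip)
{
	/* Mix the two pointers with a 64-bit multiplicative hash,
	 * similar in spirit to the kernel's hash_64(). */
	uint64_t key = (uint64_t)(uintptr_t)lock ^
		       ((uint64_t)(uintptr_t)caller_ip << 1);

	return (unsigned int)((key * 0x61C8864680B583EBull) >>
			      (64 - DELAY_HASH_BITS));
}

static inline uint16_t lookup_delay(const void *lock)
{
	/* Different call sites of the same lock (e.g. qdisc enqueue vs
	 * dequeue) land in different slots and keep separate estimates. */
	return delay_table[delay_hash(lock, __builtin_return_address(0))];
}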