Message-ID: <CANN689HeTk0jNJb4sF_0t714MvFnU0vLw0=JeXYRUwMx8GDnXQ@mail.gmail.com>
Date: Wed, 26 Dec 2012 22:07:50 -0800
From: Michel Lespinasse <walken@...gle.com>
To: Rik van Riel <riel@...hat.com>
Cc: Eric Dumazet <eric.dumazet@...il.com>,
Steven Rostedt <rostedt@...dmis.org>,
linux-kernel@...r.kernel.org, aquini@...hat.com,
lwoodman@...hat.com, jeremy@...p.org,
Jan Beulich <JBeulich@...ell.com>,
Thomas Gleixner <tglx@...utronix.de>,
Tom Herbert <therbert@...gle.com>
Subject: Re: [RFC PATCH 3/3 -v2] x86,smp: auto tune spinlock backoff delay factor
On Wed, Dec 26, 2012 at 11:51 AM, Rik van Riel <riel@...hat.com> wrote:
> On 12/26/2012 02:10 PM, Eric Dumazet wrote:
>> We might try to use a hash on lock address, and an array of 16 different
>> delays so that different spinlocks have a chance of not sharing the same
>> delay.
>>
>> With the following patch, I get 982 Mbit/s with the same bench: a 45%
>> increase instead of a 13% regression.
Awesome :)
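
For anyone following along, here is a minimal sketch of the hashed
per-lock delay idea quoted above. This is not Eric's actual patch; the
names DELAY_HASH_BITS, spinlock_delay[] and this_lock_delay() are
illustrative assumptions, not code from the posted series.

#include <linux/percpu.h>
#include <linux/hash.h>

#define DELAY_HASH_BITS  4			/* 16 buckets, as in the quote */
#define DELAY_HASH_SIZE  (1 << DELAY_HASH_BITS)

/* one small array of auto-tuned delays per CPU, indexed by lock hash */
static DEFINE_PER_CPU(int, spinlock_delay[DELAY_HASH_SIZE]);

static inline int *this_lock_delay(arch_spinlock_t *lock)
{
	/* hash the lock address down to one of the 16 buckets */
	return this_cpu_ptr(&spinlock_delay[hash_ptr(lock, DELAY_HASH_BITS)]);
}

The tuning loop then reads and updates *this_lock_delay(lock) instead
of a single per-CPU value, so two unrelated hot locks only share (and
fight over) one tuning with probability 1/16.
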
> I will probably keep it as a separate patch 4/4, with
> your report and performance numbers in it, to preserve
> the reason why we keep multiple hashed values, etc...
>
> There is enough stuff in this code that will be
> indistinguishable from magic if we do not document it
> properly...
If we go with per-spinlock tunings, I feel we'll most likely want to
add an associative cache in order to avoid the 1/16 chance (~6%) of
getting 595 Mbit/s instead of 982 Mbit/s on a hash collision (rough
sketch below). I would still prefer it if we could come up with
something that didn't require per-spinlock tunings, but it's not clear
whether that will work. At least we now know of a simple enough
workload to figure it out :)
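
Here is a rough sketch of the kind of associative cache I mean. This is
purely hypothetical: struct delay_ent, DELAY_SETS, DEFAULT_DELAY and
lookup_delay() are made-up names, and the eviction policy is
deliberately crude.

#include <linux/percpu.h>
#include <linux/hash.h>

#define DELAY_SETS     8	/* 8 sets x 2 ways = 16 entries total */
#define DELAY_WAYS     2
#define DEFAULT_DELAY  1	/* hypothetical starting backoff value */

struct delay_ent {
	void *lock;		/* lock this tuning belongs to */
	int   delay;		/* auto-tuned backoff delay */
};

static DEFINE_PER_CPU(struct delay_ent, delay_cache[DELAY_SETS][DELAY_WAYS]);

static int *lookup_delay(arch_spinlock_t *lock)
{
	int set = hash_ptr(lock, 3);	/* 3 bits -> 8 sets */
	struct delay_ent *e = this_cpu_ptr(&delay_cache[set][0]);

	/* hit in either way: keep using that lock's own tuning */
	if (e[0].lock == lock)
		return &e[0].delay;
	if (e[1].lock == lock)
		return &e[1].delay;

	/* miss: evict way 1 (crude; a real patch would want LRU here) */
	e[1].lock  = lock;
	e[1].delay = DEFAULT_DELAY;
	return &e[1].delay;
}

The point is that two hot locks whose addresses hash to the same set
each keep their own entry instead of silently sharing a single delay
value, which is exactly the collision case that produced the 595 Mbit/s
number above.
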
--
Michel "Walken" Lespinasse
A program is never fully debugged until the last user dies.