Message-ID: <CA+55aFzPxJ3rfZvAvrP9LxCXfDU8AAVvhMZnP-OAoe-ycc70aw@mail.gmail.com>
Date: Wed, 27 Feb 2013 09:10:55 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Rik van Riel <riel@...hat.com>
Cc: Ingo Molnar <mingo@...nel.org>, "H. Peter Anvin" <hpa@...or.com>,
    Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
    Peter Zijlstra <a.p.zijlstra@...llo.nl>, aquini@...hat.com,
    Andrew Morton <akpm@...ux-foundation.org>,
    Thomas Gleixner <tglx@...utronix.de>,
    Michel Lespinasse <walken@...gle.com>,
    linux-tip-commits@...r.kernel.org,
    Steven Rostedt <rostedt@...dmis.org>,
    "Vinod, Chegu" <chegu_vinod@...com>,
    "Low, Jason" <jason.low2@...com>
Subject: Re: [tip:core/locking] x86/smp: Move waiting on contended ticket lock
    out of line

On Wed, Feb 27, 2013 at 8:42 AM, Rik van Riel <riel@...hat.com> wrote:
>
> To keep the results readable and relevant, I am reporting the
> plateau performance numbers. Comments are given where required.
>
>               3.7.6 vanilla   3.7.6 w/ backoff
>
> all_utime     333000          333000
> alltests      300000-470000   180000-440000   large variability
> compute       528000          528000
> custom        290000-320000   250000-330000   4 fast runs, 1 slow
> dbase         920000          925000
> disk          100000          90000-120000    similar plateau, wild
>                                               swings with patches
> five_sec      140000          140000
> fserver       160000-300000   250000-430000   w/ patch drops off at
>                                               higher number of users
> high_systime  80000-110000    30000-125000    w/ patch mostly 40k-70k,
>                                               wild swings
> long          no performance plateau, equal performance for both
> new_dbase     960000          96000
> new_fserver   150000-300000   210000-420000   vanilla drops off,
>                                               w/ patches wild swings
> shared        270000-440000   120000-440000   all runs ~equal to
>                                               vanilla up to 1000
>                                               users, one out of 5
>                                               runs slows down past
>                                               1100 users
> short         120000          190000

Ugh. That really is rather random. "short" and fserver seem to
improve a lot (including the "new" version); the others look like they
are either unchanged or huge regressions.

Is there any way to get profiles for the improved versions vs the
regressed ones? It might well be that we have two different classes of
spinlocks. Maybe we could make the back-off version be *explicit* (ie
not part of the normal "spin_lock()", but you'd use a special
"spin_lock_backoff()" function for it) because it works well for some
cases but not for others?
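
Just to make sure we're talking about the same thing, here is a minimal
sketch of what such an opt-in primitive could look like on top of the
3.7 x86 ticket lock. The name "spin_lock_backoff" and the delay numbers
are made up for illustration, not taken from the actual patch:

/*
 * Hypothetical opt-in variant: same ticket protocol as spin_lock(),
 * but a waiter whose ticket is N slots behind the current head spins
 * roughly N*delay iterations between re-reads of the lock word,
 * instead of hammering the contended cacheline.
 */
static __always_inline void spin_lock_backoff(arch_spinlock_t *lock)
{
	register struct __raw_tickets inc = { .tail = 1 };
	unsigned int delay = 1;			/* made-up starting point */

	inc = xadd(&lock->tickets, inc);	/* grab a ticket */

	while (inc.head != inc.tail) {
		unsigned int loops = delay * (__ticket_t)(inc.tail - inc.head);

		while (loops--)
			cpu_relax();

		inc.head = ACCESS_ONCE(lock->tickets.head);
		if (delay < 1000)		/* made-up cap */
			delay++;		/* crude adaptive ramp-up */
	}
	barrier();	/* nothing moves up past the acquire */
}

Callers that are known to suffer from the cacheline bouncing would then
switch to spin_lock_backoff() explicitly, and everybody else keeps the
unmodified fast path.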

Hmm? At the very least, it would give us an idea of *which* spinlock
it is that causes the most pain. I think your earlier indication was
that it's the mutex->wait_lock or something?
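
(If perf is handy on that box, even a coarse system-wide run while the
benchmark sits at its plateau should answer that. A sketch only, and
the exact symbol names will differ by config:

	perf record -a -g -- sleep 60
	perf report --stdio

and then see whether the hot call chains under the ticket-lock
slowpath come from the mutex code or from somewhere else.)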

              Linus