Message-ID: <CA+55aFxeLszgfTZeG7L6289DcaoKT+uhtf5nwB6r=2i9Sy4Mrg@mail.gmail.com>
Date: Mon, 10 Jun 2013 17:51:14 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: Paul McKenney <paulmck@...ux.vnet.ibm.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...e.hu>,
赖江山 <laijs@...fujitsu.com>,
Dipankar Sarma <dipankar@...ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
Josh Triplett <josh@...htriplett.org>, niv@...ibm.com,
Thomas Gleixner <tglx@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Valdis Kletnieks <Valdis.Kletnieks@...edu>,
David Howells <dhowells@...hat.com>,
Eric Dumazet <edumazet@...gle.com>,
Darren Hart <darren@...art.com>,
Frédéric Weisbecker <fweisbec@...il.com>,
sbw@....edu
Subject: Re: [PATCH RFC ticketlock] Auto-queued ticketlock
On Mon, Jun 10, 2013 at 5:44 PM, Steven Rostedt <rostedt@...dmis.org> wrote:
>
> OK, I haven't found an issue here yet, but youss are beiing trickssy! We
> don't like trickssy, and we must find precccciouss!!!
.. and I personally have my usual reservations. I absolutely hate
papering over scalability issues, and historically, whenever people have
thought that we want complex spinlocks, the problem has always been that
the locking sucks.
So, reinforced by previous events, I really feel that code that needs
this kind of spinlock is broken and needs to be fixed, rather than
actually introducing tricky spinlocks.
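To make the contention concrete, here is a rough sketch of a plain ticket
lock using C11 atomics (names and layout made up for this sketch; it is not
the kernel's arch_spinlock code): once the lock is contended, every waiter
spins reading the same word, so the lock cache line itself is the
bottleneck, and a cleverer queueing scheme only hides that.

#include <stdatomic.h>

struct ticket_lock {
	atomic_uint next;	/* ticket handed to the next arrival */
	atomic_uint owner;	/* ticket currently allowed to run */
};

static void ticket_lock_acquire(struct ticket_lock *lock)
{
	/* Take a ticket. */
	unsigned int me = atomic_fetch_add(&lock->next, 1);

	/* Spin until it is our turn; all waiters poll the same word. */
	while (atomic_load_explicit(&lock->owner, memory_order_acquire) != me)
		;	/* real kernel code would cpu_relax() here */
}

static void ticket_lock_release(struct ticket_lock *lock)
{
	/* Hand the lock to the next ticket holder. */
	unsigned int cur = atomic_load_explicit(&lock->owner,
						memory_order_relaxed);
	atomic_store_explicit(&lock->owner, cur + 1, memory_order_release);
}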
So in order to merge something like this, I want (a) numbers for real
loads and (b) explanations for why the spinlock users cannot be fixed.
Because "we might hit loads" is just not good enough. I would counter
with "hiding problems causes more of them".
Linus