Message-ID: <20070128203841.GA27397@infradead.org>
Date: Sun, 28 Jan 2007 20:38:41 +0000
From: Christoph Hellwig <hch@...radead.org>
To: Ingo Molnar <mingo@...e.hu>
Cc: Christoph Hellwig <hch@...radead.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Linus Torvalds <torvalds@...l.org>,
Andrew Morton <akpm@...l.org>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/7] breaking the global file_list_lock
On Sun, Jan 28, 2007 at 07:41:16PM +0100, Ingo Molnar wrote:
> starts one process per CPU and open()s/close()s a file all over again,
> simulating an open/close-intense workload. This pattern is quite typical
> of several key Linux applications.
>
> Using Peter's s_files patchset the following scalability improvement can
> be measured (lower numbers are better):
>
> ----------------------------------------------------------------------
> v2.6.20-rc6 | v2.6.20-rc6+Peter's s_files queue
> ----------------------------------------------------------------------
> dual-core: 2.11 usecs/op | 1.51 usecs/op ( +39.7% win )
> 8-socket: 6.30 usecs/op | 2.70 usecs/op ( +133.3% win )
Thanks for having some numbers to start with.
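For anyone who wants to reproduce that kind of measurement, a minimal sketch
of such a test could look like the following (file name, iteration count and
CPU affinity handling are placeholders of mine, not the actual harness that
produced the numbers above):

#define _GNU_SOURCE
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

#define ITERATIONS 1000000L

/* one worker per CPU, pinned, doing open()/close() in a tight loop */
static void worker(int cpu)
{
	cpu_set_t mask;
	struct timeval start, end;
	long i, usecs;

	CPU_ZERO(&mask);
	CPU_SET(cpu, &mask);
	sched_setaffinity(0, sizeof(mask), &mask);

	gettimeofday(&start, NULL);
	for (i = 0; i < ITERATIONS; i++) {
		int fd = open("/tmp/openclose-test", O_RDONLY | O_CREAT, 0644);
		if (fd >= 0)
			close(fd);
	}
	gettimeofday(&end, NULL);

	usecs = (end.tv_sec - start.tv_sec) * 1000000L +
		(end.tv_usec - start.tv_usec);
	printf("cpu %d: %.2f usecs/op\n", cpu, (double)usecs / ITERATIONS);
}

int main(void)
{
	int cpu, ncpus = sysconf(_SC_NPROCESSORS_ONLN);

	for (cpu = 0; cpu < ncpus; cpu++)
		if (fork() == 0) {
			worker(cpu);
			exit(0);
		}
	while (wait(NULL) > 0)
		;
	return 0;
}

Run on an otherwise idle box, the per-CPU usecs/op should roughly track how
badly the open()/close() path is hurt by contention on the global lock.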
> Now could you please tell me why i had to waste 3.5 hours on measuring
> and profiling this /again/, while a tiny little bit of goodwill from
> your side could have avoided this? I told you that we lock-profiled this
> under -rt, and that it's an accurate measurement of such things - as the
> numbers above prove it too. Would it have been so hard to say something
> like: "Cool Peter! That lock had been in our way of good open()/close()
> scalability for such a long time and it's an obviously good idea to
> eliminate it. Now here's a couple of suggestions of how to do it even
> simpler: [...]." Why did you have to in essence piss on his patchset?
> Any rational explanation?
Can we please stop this stupid pissing contest? I'm totally fine admitting
in public that yours is bigger than mine, so let's get back to the facts.
The patchkit we're discussing here introduces a lot of complexity:
- a new type of implicitly locked linked lists
- a new synchronization primitive
- a new locking scheme that utilizes the previous two items, as well
as RCU (a rough sketch of the general idea follows below).
I think we definitely want some numbers (which you finally provided)
to justify this.
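To make the first and third items a bit more concrete for people who haven't
read the patches, the general shape of an "implicitly locked" list with RCU
readers is something like this (my own illustration of the idea, not the
actual code from Peter's patchset):

#include <linux/list.h>
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

/*
 * Illustration only: a list whose add/del helpers hide the lock,
 * while readers traverse under RCU.
 */
struct locked_list {
	spinlock_t		lock;
	struct list_head	head;
};

static inline void locked_list_add(struct list_head *new,
				   struct locked_list *ll)
{
	spin_lock(&ll->lock);
	list_add_rcu(new, &ll->head);
	spin_unlock(&ll->lock);
}

static inline void locked_list_del(struct list_head *entry,
				   struct locked_list *ll)
{
	spin_lock(&ll->lock);
	list_del_rcu(entry);
	spin_unlock(&ll->lock);
}

/*
 * Readers then do:
 *
 *	rcu_read_lock();
 *	list_for_each_entry_rcu(f, &ll->head, f_list)
 *		...;
 *	rcu_read_unlock();
 */

The locking disappears from the callers, but the price is yet another list
flavour plus RCU lifetime rules for everything that goes on it.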
Moving on to the implementation, I don't like trying to "fix" a problem
with this kind of big-hammer approach. I've outlined some alternative ways
that actually simplify both the underlying data structures and the locking,
and that should help with this problem instead of making the code more
complex and really hard to understand.