Date: Mon, 6 Jun 2016 14:20:32 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Waiman Long <waiman.long@....com>
Cc: Dave Hansen <dave.hansen@...el.com>,
"Chen, Tim C" <tim.c.chen@...el.com>,
Ingo Molnar <mingo@...hat.com>,
Davidlohr Bueso <dbueso@...e.de>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
Jason Low <jason.low2@...com>,
Michel Lespinasse <walken@...gle.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Waiman Long <waiman.long@...com>,
Al Viro <viro@...iv.linux.org.uk>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: performance delta after VFS i_mutex=>i_rwsem conversion
On Mon, Jun 6, 2016 at 2:13 PM, Waiman Long <waiman.long@....com> wrote:
>
> The tricky part about optimistic spinning in rwsem is that we don't know for
> sure if any of the lock holding readers is running or not.
I'm not sure how common the reader-vs-writer contention is, at least
for the new inode use. I'm sure you can trigger it with crazy
benchmarks, but I wouldn't worry about it unless people start
complaining.
The writer-writer case is easy to trigger with write-heavy loads (ok,
rename/unlink in this case). Are there real loads where there are lots
of concurrent lookups and writes? I really don't know (note that
"lookup" needs to be uncached and actually hit the lowlevel filesystem
for the locking to even trigger in the first place).
I guess some "concurrent readdir with unlink" load would show that
behavior, but is it _realistic_? No idea. Let's not worry about it too
much until somebody shows a reason to worry.
Linus