Message-ID: <20130830194059.GC13318@ZenIV.linux.org.uk>
Date: Fri, 30 Aug 2013 20:40:59 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Waiman Long <waiman.long@...com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Ingo Molnar <mingo@...nel.org>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Jeff Layton <jlayton@...hat.com>,
Miklos Szeredi <mszeredi@...e.cz>,
Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Peter Zijlstra <peterz@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>,
Andi Kleen <andi@...stfloor.org>,
"Chandramouleeswaran, Aswin" <aswin@...com>,
"Norton, Scott J" <scott.norton@...com>
Subject: Re: [PATCH v7 1/4] spinlock: A new lockref structure for lockless
update of refcount
On Fri, Aug 30, 2013 at 03:20:48PM -0400, Waiman Long wrote:
> There is more contention in the lglock than I remember from the run
> on 3.10. This is an area that I need to look at. In fact, lglock is
> becoming a problem for really large machines with a lot of cores. We
> have a prototype 16-socket machine with 240 cores under development.
> The cost of doing an lg_global_lock will be very high on that type of
> machine, given that it is already high on this 80-core machine. I
> have been thinking that, instead of per-cpu spinlocks, we could
> change the locking to the per-node level. While there will be more
> contention for lg_local_lock, the cost of an lg_global_lock will be
> much lower, and contention within the local die should not be too
> bad. That will require either a per-node variable infrastructure or
> simulating one with the existing per-cpu subsystem.
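For concreteness, a per-node variant of lglock along those lines might
look something like the sketch below. Purely illustrative and untested:
the nglock name and the plain spinlock-per-node layout are invented
here, and a real implementation would want per-node cacheline alignment
plus the same lockdep games lglock already plays for its nested locks.

/*
 * Hypothetical per-node analogue of lglock (illustrative sketch only).
 * One spinlock per NUMA node instead of one per CPU: local lock/unlock
 * contend only within a node, while the "global" path walks nr_node_ids
 * locks instead of nr_cpu_ids.
 */
#include <linux/spinlock.h>
#include <linux/nodemask.h>
#include <linux/topology.h>

struct nglock {
        spinlock_t locks[MAX_NUMNODES];
};

static inline void ng_lock_init(struct nglock *ngl)
{
        int node;

        for_each_node(node)
                spin_lock_init(&ngl->locks[node]);
}

static inline void ng_local_lock(struct nglock *ngl)
{
        /* pin the task so lock and unlock pick the same node */
        preempt_disable();
        spin_lock(&ngl->locks[numa_node_id()]);
}

static inline void ng_local_unlock(struct nglock *ngl)
{
        spin_unlock(&ngl->locks[numa_node_id()]);
        preempt_enable();
}

static inline void ng_global_lock(struct nglock *ngl)
{
        int node;

        /* fixed node order, so no deadlock against other global takers */
        for_each_node(node)
                spin_lock(&ngl->locks[node]);
}

static inline void ng_global_unlock(struct nglock *ngl)
{
        int node;

        for_each_node(node)
                spin_unlock(&ngl->locks[node]);
}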
Speaking of lglock, there's low-hanging fruit in that area: we have
no reason whatsoever to put anything but regular files opened with
FMODE_WRITE on the damn per-superblock list - the *only* thing it's
used for is mark_files_ro(), which skips everything else anyway. And
since read opens normally outnumber write opens quite a bit... Could
you try the diff below and see if it changes the picture? The
files_lglock situation ought to get better...
diff --git a/fs/file_table.c b/fs/file_table.c
index b44e4c5..322cd37 100644
--- a/fs/file_table.c
+++ b/fs/file_table.c
@@ -385,6 +385,10 @@ static inline void __file_sb_list_add(struct file *file, struct super_block *sb
  */
 void file_sb_list_add(struct file *file, struct super_block *sb)
 {
+        if (likely(!(file->f_mode & FMODE_WRITE)))
+                return;
+        if (!S_ISREG(file_inode(file)->i_mode))
+                return;
         lg_local_lock(&files_lglock);
         __file_sb_list_add(file, sb);
         lg_local_unlock(&files_lglock);
@@ -450,8 +454,6 @@ void mark_files_ro(struct super_block *sb)
         lg_global_lock(&files_lglock);
         do_file_list_for_each_entry(sb, f) {
-                if (!S_ISREG(file_inode(f)->i_mode))
-                        continue;
                 if (!file_count(f))
                         continue;
                 if (!(f->f_mode & FMODE_WRITE))
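FWIW, the removal side shouldn't need a matching change:
file_sb_list_del() already skips files that never made it onto the
list. Quoting the existing code loosely from memory (so verify against
your tree) - its list_empty() check is what makes the early returns
above safe:

void file_sb_list_del(struct file *file)
{
        if (!list_empty(&file->f_u.fu_list)) {
                lg_local_lock_cpu(&files_lglock, file_list_cpu(file));
                list_del_init(&file->f_u.fu_list);
                lg_local_unlock_cpu(&files_lglock, file_list_cpu(file));
        }
}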