Message-ID: <20130628083223.GF4165@ZenIV.linux.org.uk>
Date: Fri, 28 Jun 2013 09:32:23 +0100
From: Al Viro <viro@...IV.linux.org.uk>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Dave Chinner <david@...morbit.com>, Jan Kara <jack@...e.cz>,
Dave Jones <davej@...hat.com>, Oleg Nesterov <oleg@...hat.com>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
Linux Kernel <linux-kernel@...r.kernel.org>,
"Eric W. Biederman" <ebiederm@...ssion.com>,
Andrey Vagin <avagin@...nvz.org>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: frequent softlockups with 3.10rc6.
On Thu, Jun 27, 2013 at 10:22:45PM -1000, Linus Torvalds wrote:
> > It looks ok, but I still think it is solving the wrong problem.
> > FWIW, your optimisation has much wider application than just this
> > one place. I'll have a look to see how we can apply this approach
> > across all the inode lookup+validate code we currently have that
> > unconditionally takes the inode->i_lock....
>
> Yes, I was looking at all the other cases that also seemed to be
> testing i_state for those "about to go away" cases.
FWIW, there's a subtle issue here - something like ext2_new_inode()
starts by allocating an inode and putting it on the per-sb list (no I_NEW
yet), then decides what inumber it will have and calls insert_inode_locked(),
which sets I_NEW. Only then do we proceed with initializing the inode (and
eventually do unlock_new_inode(), which clears I_NEW). We depend on there
being no pages in the pagecache of that sucker prior to the
insert_inode_locked() call; you really don't want to start playing with
writeback on such a half-initialized in-core inode.