Message-ID: <20081030134235.6d1276ae@barsoom.rdu.redhat.com>
Date: Thu, 30 Oct 2008 13:42:35 -0400
From: Jeff Layton <jlayton@...hat.com>
To: "Steve French" <smfrench@...il.com>
Cc: linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/4] cifs: fix oopses and mem corruption with concurrent
mount/umount (try #4)
On Thu, 30 Oct 2008 12:25:36 -0500
"Steve French" <smfrench@...il.com> wrote:
> This is much better since it doesn't regress the tcon sharing (which we
> still need to extend for the shared superblock case). I particularly like
> the new routines to put (free) the tcon and put the smb session, but I
> think the locking gets more complicated (with little performance gain)
> when moving from one global spinlock covering tcon/smb-session/cifs-tcp-socket
> to one embedded within each structure. It could also be confusing, since
> the cifs tcp socket is already protected by a semaphore and now would have
> a spinlock too. I think we greatly increase the chance of deadlock by
> having to nest spinlocks.
>
lockdep is pretty good about warning you when you're in danger of deadlock
(and I generally keep lockdep turned on).
I really think the fine-grained locking is the best scheme, and it's pretty
clear how it should work: if you're walking or altering the lists (and/or
changing the refcounts), then you need to take the locks that protect them.
(Granted, I probably need to do another respin that adds some comments to
this effect.)
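
Roughly, the per-structure scheme boils down to something like this (just a
sketch, not the actual patch code; the structure and field names below are
made up for illustration):

#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

/* illustrative session structure -- not the real cifs one */
struct smb_session {
	struct list_head list;	/* entry on the global session list */
	int refcount;		/* protected by sessions_lock */
	int id;			/* key used to match an existing session */
};

static LIST_HEAD(sessions);
static DEFINE_SPINLOCK(sessions_lock);	/* protects the list and the refcounts */

/* look up an existing session and take a reference on it */
static struct smb_session *smb_session_get(int id)
{
	struct smb_session *ses;

	spin_lock(&sessions_lock);
	list_for_each_entry(ses, &sessions, list) {
		if (ses->id == id) {
			ses->refcount++;
			spin_unlock(&sessions_lock);
			return ses;
		}
	}
	spin_unlock(&sessions_lock);
	return NULL;
}

/* drop a reference; unlink and free once the last user goes away */
static void smb_session_put(struct smb_session *ses)
{
	spin_lock(&sessions_lock);
	if (--ses->refcount > 0) {
		spin_unlock(&sessions_lock);
		return;
	}
	list_del(&ses->list);
	spin_unlock(&sessions_lock);
	kfree(ses);	/* actual teardown happens outside the spinlock */
}

Each lock covers exactly one list and the refcounts of the entries on it, so
it's obvious what has to be held where, and nothing gets held across the
teardown or any network calls.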
I think we want to resist having locks that protect too many things. When
we do that, we end up holding the locks over too much code. Not only is
that generally worse for performance, it can also paper over race
conditions.
--
Jeff Layton <jlayton@...hat.com>