Message-ID: <20081030140134.7adcef56@barsoom.rdu.redhat.com>
Date: Thu, 30 Oct 2008 14:01:34 -0400
From: Jeff Layton <jlayton@...hat.com>
To: "Steve French" <smfrench@...il.com>
Cc: linux-fsdevel <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 0/4] cifs: fix oopses and mem corruption with concurrent
mount/umount (try #4)
On Thu, 30 Oct 2008 12:51:03 -0500
"Steve French" <smfrench@...il.com> wrote:
> On Thu, Oct 30, 2008 at 12:42 PM, Jeff Layton <jlayton@...hat.com> wrote:
> > I think we want to resist having locks that protect too many things.
> > With that, we end up holding locks over too much code. Not only is
> > that generally worse for performance, but it can also paper over race
> > conditions.
>
> I agree that a single spinlock protecting the three interrelated
> structures (cifs tcp, smb and tree connection structs) is only
> trivially worse for performance, but since they point to one another
> and we frequently have operations that need all three lists - to do
> things like iterate through all tree connections within a particular
> smb session, or across all cifs smb sessions within each cifs tcp
> session - having to acquire and release multiple spinlocks in the
> correct order on every exit path makes the code more complicated.
>
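
Right, with one lock over everything the walk itself is trivial --
something like this, with made-up names rather than the real cifs
structures:

#include <linux/list.h>
#include <linux/spinlock.h>

/* one global lock covering all three lists -- illustrative names only */
static DEFINE_SPINLOCK(global_ses_lock);
static LIST_HEAD(tcp_ses_list);			/* all TCP sessions */

struct tcp_ses {
	struct list_head head;			/* on tcp_ses_list */
	struct list_head smb_ses_list;		/* SMB sessions on this socket */
};

struct smb_ses {
	struct list_head head;			/* on server->smb_ses_list */
	struct list_head tcon_list;		/* tcons in this session */
};

struct tcon {
	struct list_head head;			/* on ses->tcon_list */
};

/* call fn() on every tree connection in the system */
static void walk_all_tcons(void (*fn)(struct tcon *tcon))
{
	struct tcp_ses *server;
	struct smb_ses *ses;
	struct tcon *tcon;

	spin_lock(&global_ses_lock);
	list_for_each_entry(server, &tcp_ses_list, head)
		list_for_each_entry(ses, &server->smb_ses_list, head)
			list_for_each_entry(tcon, &ses->tcon_list, head)
				fn(tcon);
	spin_unlock(&global_ses_lock);
}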
A fair point, but most of that is in rarely-traveled procfile code. One
thing we could consider is a set of helper macros or functions. For
instance, a for_all_tcons() function that takes a pointer to a function
expecting a tcon arg. It would basically just walk over all the tcons,
handle the locking correctly, and call the function for each one.
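
Here's a rough sketch of what I mean -- same made-up structures as the
walk above, not the real cifs ones, but now each level of the hierarchy
carries its own lock, and the helper always takes them in outer-to-inner
order so callers never have to think about it (the callback runs under
spinlocks, so it must not sleep):

#include <linux/list.h>
#include <linux/spinlock.h>

struct tcp_ses {
	struct list_head head;			/* on global tcp_ses_list */
	struct list_head smb_ses_list;
	spinlock_t ses_lock;			/* protects smb_ses_list */
};

struct smb_ses {
	struct list_head head;			/* on server->smb_ses_list */
	struct list_head tcon_list;
	spinlock_t tcon_lock;			/* protects tcon_list */
};

struct tcon {
	struct list_head head;			/* on ses->tcon_list */
};

static DEFINE_SPINLOCK(tcp_ses_lock);		/* protects tcp_ses_list */
static LIST_HEAD(tcp_ses_list);

/* lock order: tcp_ses_lock -> server->ses_lock -> ses->tcon_lock */
static void for_all_tcons(void (*fn)(struct tcon *tcon, void *arg),
			  void *arg)
{
	struct tcp_ses *server;
	struct smb_ses *ses;
	struct tcon *tcon;

	spin_lock(&tcp_ses_lock);
	list_for_each_entry(server, &tcp_ses_list, head) {
		spin_lock(&server->ses_lock);
		list_for_each_entry(ses, &server->smb_ses_list, head) {
			spin_lock(&ses->tcon_lock);
			list_for_each_entry(tcon, &ses->tcon_list, head)
				fn(tcon, arg);
			spin_unlock(&ses->tcon_lock);
		}
		spin_unlock(&server->ses_lock);
	}
	spin_unlock(&tcp_ses_lock);
}

The procfile code would then just pass in a small print callback and
never touch the list locks directly.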
In any case, I don't see the benefit of not using fine-grained locking
here. Deadlock is a possibility, but I think having well-defined
locking rules mitigates that danger.
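For example, something along these lines in a comment at the top of the
file (just the sort of rule I have in mind, not wording from any patch):

/*
 * Lock ordering:
 *
 *   1) tcp_ses_lock        (global list of TCP sessions)
 *   2) server->ses_lock    (SMB sessions on that socket)
 *   3) ses->tcon_lock      (tree connections in that session)
 *
 * When more than one of these is needed, always take them in the
 * order above and release them in the reverse order.  As long as
 * every code path follows this, no two tasks can end up waiting on
 * each other's locks.
 */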
--
Jeff Layton <jlayton@...hat.com>