Message-ID: <Pine.LNX.4.64.0804032014350.4616@blonde.site>
Date: Thu, 3 Apr 2008 20:26:54 +0100 (BST)
From: Hugh Dickins <hugh@...itas.com>
To: Erez Zadok <ezk@...sunysb.edu>
cc: "Josef 'Jeff' Sipek" <jeffpc@...efsipek.net>,
akpm@...ux-foundation.org, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org, Al Viro <viro@...iv.linux.org.uk>,
hch@...radead.org
Subject: Re: fs_stack/eCryptfs: remove 3rd arg of copy_attr_all, add locking
to copy_inode_size
On Thu, 3 Apr 2008, Erez Zadok wrote:
> In message <20080403182001.GB30189@...efsipek.net>, "Josef 'Jeff' Sipek" writes:
> > I think you need to check CONFIG_PREEMPT as well.
>
> I'm not sure it's needed in the case of CONFIG_PREEMPT.  Anyone?  The code
> for i_size_write (below), and the comment at the top of the function,
> suggest that the spinlock is needed only to protect the seqcount.
Correct.
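
For reference, the body of i_size_write() in trees of this era
(include/linux/fs.h) is roughly the following; this is paraphrased from
memory, so check your own tree for the exact text:

static inline void i_size_write(struct inode *inode, loff_t i_size)
{
#if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
	/* seqcount lets i_size_read() retry if it sees a torn 64-bit write */
	write_seqcount_begin(&inode->i_size_seqcount);
	inode->i_size = i_size;
	write_seqcount_end(&inode->i_size_seqcount);
#elif BITS_PER_LONG == 32 && defined(CONFIG_PREEMPT)
	/* disabling preemption is enough: readers on this CPU can't interleave */
	preempt_disable();
	inode->i_size = i_size;
	preempt_enable();
#else
	/* 64-bit, or 32-bit UP without preemption: nothing extra needed */
	inode->i_size = i_size;
#endif
}

The point being that in the 32-bit SMP case two writers entering the
seqcount section concurrently would corrupt it, so i_size_write() needs
some external serialization (i_mutex, or a spinlock of the caller's
choosing); the other configurations don't care.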
> BTW, some time ago I reviewed all callers of i_size_write. I did so again
> just now, and the results were the same:
>
> - a LOT of callers of i_size_write don't take any lock
They mostly know that i_mutex is already held (as i_size_write comment
mentions); but I believe that's up to the individual filesystem.
> - some take another spinlock in a different data structure
> - those that do take the spinlock, do so unconditionally
> - only unionfs and fs/stack.c wrap the spinlock in
>
> #if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
I chose to follow the #ifdeffery of i_size_write(),
but you could do it unconditionally if you prefer:
just a little more overhead when it's not needed.
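
To make that concrete, a minimal sketch of the copy_inode_size pattern
under discussion might look like the following (illustrative only, not
the actual fs/stack.c code; i_blocks copying and error handling omitted):

void fsstack_copy_inode_size(struct inode *dst, struct inode *src)
{
	loff_t i_size = i_size_read(src);

#if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
	/* serialize writers so dst's i_size seqcount isn't corrupted */
	spin_lock(&dst->i_lock);
#endif
	i_size_write(dst, i_size);
#if BITS_PER_LONG == 32 && defined(CONFIG_SMP)
	spin_unlock(&dst->i_lock);
#endif
}

Doing it unconditionally just means dropping the #ifs: an uncontended
spin_lock/spin_unlock pair in the configurations where i_size_write()
needs no serialization at all.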
As I've said elsewhere, I don't think the result can be entirely
safe against concurrent changes in the lower filesystem, which uses
its own locking; but I don't know how resilient unionfs is expected
to be against messing directly with the lower level at the same time
as the upper.
Hugh