Message-ID: <20080125181624.GA3234@duck.suse.cz>
Date: Fri, 25 Jan 2008 19:16:24 +0100
From: Jan Kara <jack@...e.cz>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: linux-kernel@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [PATCH RESEND] Minimal fix for private_list handling races
On Fri 25-01-08 19:34:07, Nick Piggin wrote:
> On Thursday 24 January 2008 02:48, Jan Kara wrote:
>
<snip>
> > In the current version, we re-add the buffer to
> > private_list if it is found dirty in the second while loop of
> > fsync_buffers_list() and that should be enough.
>
> Sure, I think there is still a data race though, but if there is one
> it's already been there for a long time and nobody cares too much about
> those anyway.
>
>
> > > But let's see... there must be a memory ordering problem here in existing
> > > code anyway, because I don't see any barriers. Between b_assoc_buffers
> > > and b_state (via buffer_dirty); fsync_buffers_list vs
> > > mark_buffer_dirty_inode, right?
> >
> > I'm not sure. What exactly do you mean? BTW: spin_lock is a memory
> > barrier, isn't it?
>
> In existing code:
>
> mark_buffer_dirty_inode():                fsync_buffers_list():
>   test_set_buffer_dirty(bh);                list_del_init(&bh->b_assoc_buffers);
>   if (list_empty(&bh->b_assoc_buffers))     if (buffer_dirty(bh)) {
>     ...                                         list_add(&bh->b_assoc_buffers, );
>
> These two code sequences can run concurrently because only fsync_buffers_list
> takes the lock.
>
> So fsync_buffers_list can speculatively load bh->b_state before
> its stores to clear b_assoc_buffers propagate to the CPU running
> mark_buffer_dirty_inode.
>
> So if there is a !dirty buffer on the list, then fsync_buffers_list will
> remove it from the list, but mark_buffer_dirty_inode won't see it has been
> removed from the list and won't re-add it. I think.
>
> This is actually even possible to hit on x86, because x86 can reorder
> loads past earlier stores. It needs an smp_mb() before the
> if (buffer_dirty(bh)) { ... } test.
OK, I'll believe you ;) I've never been completely sure what can happen
when two processors read and write the same memory without a lock for
exclusion...
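So the fix on the fsync_buffers_list() side would then be something like
this (just an untested sketch -- the code around the barrier is written
from memory and the re-add uses an illustrative 'mapping' variable, so
the details may differ):

	list_del_init(&bh->b_assoc_buffers);
	/*
	 * Make sure the list removal above is visible to other CPUs
	 * before we load b_state below. This pairs with the full
	 * barrier implied by test_set_buffer_dirty() (an atomic RMW
	 * bitop) in mark_buffer_dirty_inode(): either we see the
	 * buffer dirty here and re-add it ourselves, or
	 * mark_buffer_dirty_inode() sees the buffer off the list and
	 * re-adds it itself.
	 */
	smp_mb();
	if (buffer_dirty(bh)) {
		list_add(&bh->b_assoc_buffers, &mapping->private_list);
		bh->b_assoc_map = mapping;
	}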
> Actually I very much dislike testing list entries locklessly, because they
> are not trivially atomic operations like single stores... which is another
> reason why I like your first patch.
Yes, that was actually the reason why I changed the checks from
list_empty() to b_assoc_map testing in my first patch. So I'll add the
barrier and maybe also convert the remaining list_empty() checks to
b_assoc_map tests...
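For concreteness, the check in mark_buffer_dirty_inode() would then look
roughly like this (again only a sketch from memory; 'mapping' and
'buffer_mapping' stand for the address_space pointers that function
already has):

	mark_buffer_dirty(bh);
	/*
	 * b_assoc_map is a single pointer, so testing it is one
	 * ordinary load, unlike list_empty() which inspects list
	 * entries that the other side modifies under private_lock.
	 */
	if (!bh->b_assoc_map) {
		spin_lock(&buffer_mapping->private_lock);
		list_move_tail(&bh->b_assoc_buffers,
			       &mapping->private_list);
		bh->b_assoc_map = mapping;
		spin_unlock(&buffer_mapping->private_lock);
	}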
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR