Message-ID: <x491v48d47d.fsf@segfault.boston.devel.redhat.com>
Date: Wed, 19 Jan 2011 15:32:54 -0500
From: Jeff Moyer <jmoyer@...hat.com>
To: Nick Piggin <npiggin@...il.com>
Cc: Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
linux-fsdevel <linux-fsdevel@...r.kernel.org>,
linux-kernel@...r.kernel.org,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [patch] fs: aio fix rcu lookup
Nick Piggin <npiggin@...il.com> writes:
> On Thu, Jan 20, 2011 at 6:46 AM, Jeff Moyer <jmoyer@...hat.com> wrote:
>> Jeff Moyer <jmoyer@...hat.com> writes:
>>
>>> Jan Kara <jack@...e.cz> writes:
>>>
>>>> But there's the second race I describe making it possible
>>>> for new IO to be created after io_destroy() has waited for all IO to
>>>> finish...
>>>
>>> Can't that be solved by introducing memory barriers around the accesses
>>> to ->dead?
>>
>> Upon further consideration, I don't think so.
>>
>> Given the options, I think adding the synchronize rcu to the io_destroy
>> path is the best way forward. You're already waiting for a bunch of
>> queued I/O to finish, so there is no guarantee that you're going to
>> finish that call quickly.
>
> I think synchronize_rcu() is not something to sprinkle around outside
> very slow paths. It can be done without synchronize_rcu.
I'm not sure I understand what you're saying. Do you mean to imply that
io_destroy is not a very slow path? Because it is. I prefer a solution
that doesn't re-architect things in order to solve a theoretical issue
that's never been observed.
Cheers,
Jeff
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/