Message-ID: <20110120040308.GD8476@linux.vnet.ibm.com>
Date:	Wed, 19 Jan 2011 20:03:08 -0800
From:	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
To:	Nick Piggin <npiggin@...il.com>
Cc:	Jeff Moyer <jmoyer@...hat.com>, Jan Kara <jack@...e.cz>,
	Andrew Morton <akpm@...ux-foundation.org>,
	linux-fsdevel <linux-fsdevel@...r.kernel.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [patch] fs: aio fix rcu lookup

On Thu, Jan 20, 2011 at 08:20:00AM +1100, Nick Piggin wrote:
> On Thu, Jan 20, 2011 at 8:03 AM, Jeff Moyer <jmoyer@...hat.com> wrote:
> > Nick Piggin <npiggin@...il.com> writes:
> >
> >> On Thu, Jan 20, 2011 at 7:32 AM, Jeff Moyer <jmoyer@...hat.com> wrote:
> >>> Nick Piggin <npiggin@...il.com> writes:
> >>>
> >>>> On Thu, Jan 20, 2011 at 6:46 AM, Jeff Moyer <jmoyer@...hat.com> wrote:
> >>>>> Jeff Moyer <jmoyer@...hat.com> writes:
> >>>>>
> >>>>>> Jan Kara <jack@...e.cz> writes:
> >>>>>>
> >>>>>>>  But there's the second race I describe making it possible
> >>>>>>> for new IO to be created after io_destroy() has waited for all IO to
> >>>>>>> finish...
> >>>>>>
> >>>>>> Can't that be solved by introducing memory barriers around the accesses
> >>>>>> to ->dead?
> >>>>>
> >>>>> Upon further consideration, I don't think so.
> >>>>>
> >>>>> Given the options, I think adding the synchronize_rcu() to the io_destroy
> >>>>> path is the best way forward.  You're already waiting for a bunch of
> >>>>> queued I/O to finish, so there is no guarantee that the call returns
> >>>>> quickly anyway.
> >>>>
> >>>> I think synchronize_rcu() is not something to sprinkle around outside
> >>>> very slow paths. It can be done without synchronize_rcu.
> >>>
> >>> I'm not sure I understand what you're saying.  Do you mean to imply that
> >>> io_destroy is not a very slow path?  Because it is.  I prefer a solution
> >>> that doesn't re-architect things in order to solve a theoretical issue
> >>> that's never been observed.
> >>
> >> Even something that happens once per process lifetime, like in fork/exit,
> >> is not necessarily suitable for RCU.
> >
> > Now you've really lost me.  ;-)  Processes which utilize the in-kernel
> > aio interface typically create an ioctx at process startup, use that for
> > submitting all of their io, then destroy it on exit.  Think of a
> > database.  Every time you call io_submit, you're doing a lookup of the
> > ioctx.
> >
> >> I don't know exactly how all programs use io_destroy -- of the small
> >> number that do, probably an even smaller number would care here. But I
> >> don't think it simplifies things enough to use synchronize_rcu for it.
> >
> > Above it sounded like you didn't think AIO should be using RCU at all.
> 
> synchronize_rcu of course, not RCU (typo).

I think that Nick is suggesting that call_rcu() be used instead.
Perhaps also very sparing use of synchronize_rcu_expedited(), which
is faster than synchronize_rcu(), but which uses more CPU time.

							Thanx, Paul
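
To make the two shapes concrete, here is a minimal sketch of a destroy
path done with synchronize_rcu() versus call_rcu().  This is illustration
only, not the actual fs/aio.c code; the struct and helper names (my_ioctx,
ioctx_list, destroy_ctx_sync/destroy_ctx_async) are invented for the
example.

/*
 * Illustrative sketch only -- not the actual fs/aio.c code.  The
 * struct and helpers are made up to show the two shapes discussed.
 */
#include <linux/kernel.h>
#include <linux/rcupdate.h>
#include <linux/rculist.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/spinlock.h>
#include <linux/atomic.h>

struct my_ioctx {
        struct list_head        list;
        struct rcu_head         rcu;
        unsigned long           id;
        atomic_t                users;
        int                     dead;
        /* ... */
};

static LIST_HEAD(ioctx_list);
static DEFINE_SPINLOCK(ioctx_list_lock);

/* Variant A: synchronous -- the destroy path blocks for a full grace period. */
static void destroy_ctx_sync(struct my_ioctx *ctx)
{
        spin_lock(&ioctx_list_lock);
        ctx->dead = 1;
        list_del_rcu(&ctx->list);
        spin_unlock(&ioctx_list_lock);

        synchronize_rcu();      /* wait until no reader can still see ctx */
        kfree(ctx);
}

/* Variant B: asynchronous -- queue a callback and return immediately. */
static void free_ctx_rcu(struct rcu_head *head)
{
        kfree(container_of(head, struct my_ioctx, rcu));
}

static void destroy_ctx_async(struct my_ioctx *ctx)
{
        spin_lock(&ioctx_list_lock);
        ctx->dead = 1;
        list_del_rcu(&ctx->list);
        spin_unlock(&ioctx_list_lock);

        call_rcu(&ctx->rcu, free_ctx_rcu);      /* freed after a grace period */
}

With the call_rcu() variant, the destroy path no longer pays for the
grace period itself; the cost becomes a deferred callback that frees the
context later.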

> > Here it sounds like you are just against synchronize_rcu.  Which is it?
> > And if the latter, then please tell me in what cases you feel one would
> > be justified in calling synchronize_rcu.  For now, I simply disagree
> > with you.  As I said before, you're already potentially waiting for disk
> > I/O to complete.  It doesn't get much worse than that for latency.
> 
> I think synchronize_rcu() should firstly not be used unless it gives a good
> simplification, or a speedup in a fast path.
> 
> When that is satisfied, then it is a question of exactly what kind of slow
> path it should be used in. I don't think it should be used in process-
> synchronous code (e.g. syscalls) except for error cases, resource
> exhaustion, or management syscalls (like module unload).
> 
> For example, "it's waiting for IO anyway" is not a good reason, IMO.
> Firstly because it may not be waiting for a 10ms disk IO; it may be
> waiting on anything up to an in-RAM device, which completes almost
> immediately. Secondly because synchronize_rcu() itself could be quite
> slow depending on the RCU model used.
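
For completeness, a matching sketch of the read side -- the per-io_submit
lookup Jeff describes above -- reusing the invented my_ioctx/ioctx_list
from the earlier sketch (lookup_ctx() and the ->users refcount are likewise
made up; the real lookup is lookup_ioctx() in fs/aio.c and differs in
detail):

#include <linux/rcupdate.h>
#include <linux/rculist.h>
#include <linux/atomic.h>

static struct my_ioctx *lookup_ctx(unsigned long id)
{
        struct my_ioctx *ctx, *ret = NULL;

        rcu_read_lock();
        list_for_each_entry_rcu(ctx, &ioctx_list, list) {
                if (ctx->id != id)
                        continue;
                /*
                 * The ->dead check and the reference grab both happen
                 * inside the read-side critical section.  Because the
                 * destroy path above frees the ctx only after a grace
                 * period (whether via synchronize_rcu() or call_rcu()),
                 * the memory is still valid here even if io_destroy()
                 * is running concurrently.
                 */
                if (!ctx->dead && atomic_inc_not_zero(&ctx->users))
                        ret = ctx;
                break;
        }
        rcu_read_unlock();
        return ret;
}

Whichever way the destroy side waits out the grace period, this reader is
unaffected; the disagreement above is only about where that grace-period
latency is acceptable to pay.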