Date:	Thu, 21 Dec 2006 00:57:57 -0800
From:	"Chen, Kenneth W" <kenneth.w.chen@...el.com>
To:	"'Andrew Morton'" <akpm@...l.org>
Cc:	<linux-aio@...ck.org>,
	"'Trond Myklebust'" <trond.myklebust@....uio.no>,
	"'xb'" <xavier.bru@...l.net>,
	"'Zach Brown'" <zach.brown@...cle.com>,
	<linux-kernel@...r.kernel.org>,
	"Paul E. McKenney" <paulmck@...ibm.com>
Subject: RE: [patch] aio: fix buggy put_ioctx call in aio_complete

Andrew Morton wrote on Thursday, December 21, 2006 12:18 AM
> Alas, your above description doesn't really tell us what the bug is, so I'm
> at a bit of a loss here.
> 
> <finds http://marc.theaimsgroup.com/?l=linux-aio&m=116616463009218&w=2>
> 
> So that's a refcounting bug.  But it's really a locking bug, because
> refcounting needs locking too.

I should've quoted the original bug report (kicking myself for those
fat fingers!): http://marc.theaimsgroup.com/?l=linux-kernel&m=116599593200888&w=2

The bug stems from the expectation that __put_ioctx can be called in
softirq context, but it really shouldn't be.  Normally, aio_complete
will not drop the last reference count on the ioctx, but under a
stressed system, it might.
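
To make the window concrete, the tail of aio_complete() does roughly the
following today (a simplified sketch, not the literal fs/aio.c code; the
function name is made up for illustration):

/* Simplified tail of aio_complete(); may run in softirq context. */
static void aio_complete_tail_sketch(struct kioctx *ctx)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->ctx_lock, flags);
	/* deliver the event into the ring, retire the iocb */
	spin_unlock_irqrestore(&ctx->ctx_lock, flags);

	/*
	 * Window: wait_for_all_aios() samples its wait condition without
	 * taking ctx_lock, so the exiting task can decide "all done" and
	 * start tearing the ioctx down right here, while we are still
	 * about to touch ctx below.
	 */
	if (waitqueue_active(&ctx->wait))
		wake_up(&ctx->wait);

	/*
	 * And if this turns out to be the last reference, the put runs
	 * __put_ioctx() from softirq context.
	 */
	put_ioctx(ctx);
}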


> > The problem is in wait_for_all_aios(): it checks the wait status
> > without properly holding the ioctx lock.  Perhaps this patch is
> > walking on thin ice; it abuses RCU on top of buggy code.  OTOH, I
> > really don't want to hold ctx_lock over the entire wakeup call at
> > the end of aio_complete:
> > 
> >         if (waitqueue_active(&ctx->wait))
> >                 wake_up(&ctx->wait);
> > 
> > I'm worried about longer lock hold times in aio_complete and
> > potentially increased lock contention for concurrent I/O completion.
> 
> There is a potential problem where we deliver a storm of wakeups at the
> waitqueue, and until the woken-up process actually gets going and removes
> itself from the waitqueue, those wake_up()s do lots of work waking up an
> already-runnable task.
> 
> If we were using DEFINE_WAIT/prepare_to_wait/finish_wait in there then the
> *first* wake_up() would do some work, but all the following ones are
> practically free.
> 
> So if you're going to get in there and run the numbers on this, try both
> approaches.

Yes, I agree, and I would like to bring your "DEFINE_WAIT..." patch back
too.  I will run that experiment.
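
For reference, the waiter side you describe would look roughly like this
in wait_for_all_aios() (a sketch only, reusing the existing ctx->wait and
ctx->reqs_active fields):

static void wait_for_all_aios_sketch(struct kioctx *ctx)
{
	DEFINE_WAIT(wait);

	while (ctx->reqs_active) {
		/*
		 * DEFINE_WAIT uses autoremove_wake_function, so the first
		 * wake_up() takes us off ctx->wait; any storm of follow-on
		 * wake_up()s then finds the queue empty and costs almost
		 * nothing.
		 */
		prepare_to_wait(&ctx->wait, &wait, TASK_UNINTERRUPTIBLE);
		if (!ctx->reqs_active)
			break;
		schedule();
	}
	finish_wait(&ctx->wait, &wait);
}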


> > A quick look
> > at lockmeter data I had on a 4-socket system (with dual core + HT)
> > already shows signs of substantial lock contention in aio_complete.
> > I'm afraid putting the above call inside ioctx_lock will make things
> > worse.
> 
> It beats oopsing.

Yeah, correctness absolutely overrules optimization.  I just hope I can
find a way to satisfy both.


> > And synchronize_rcu fits the bill perfectly: aio_complete sets the
> > wakeup status, drops ioctx_lock, and does the wakeup call, all
> > protected inside an rcu read lock.  Then wait_for_all_aios will just
> > wait for that whole sequence to complete before it proceeds with
> > __put_ioctx().  All nice and easy.
> 
> Possibly it would be less abusive to use preempt_disable()/enable (with
> suitable comments) and synchronize_kernel().  To at least remove the rcu
> signals from in there.
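
For concreteness, the scheme I was describing amounts to roughly this
(a sketch, not the actual patch; the function names are made up):

/* Completion side: do the wakeup inside an RCU read-side section. */
static void aio_complete_wakeup_sketch(struct kioctx *ctx)
{
	unsigned long flags;

	rcu_read_lock();
	spin_lock_irqsave(&ctx->ctx_lock, flags);
	ctx->reqs_active--;		/* set the wakeup condition */
	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
	if (waitqueue_active(&ctx->wait))
		wake_up(&ctx->wait);
	rcu_read_unlock();
}

/* Teardown side, once wait_for_all_aios() sees reqs_active == 0: */
static void ioctx_teardown_sketch(struct kioctx *ctx)
{
	/*
	 * Wait for every in-flight completer to leave its read-side
	 * section; after that, the wakeup above can no longer touch ctx.
	 */
	synchronize_rcu();
	__put_ioctx(ctx);
}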

I think I'm going to abandon this whole synchronize approach and instead
put the wake-up call inside the ioctx_lock spinlock, along with the
other patch you mentioned above for the waiter path.  On top of that, I
have another patch that attempts to perform the wake-up only when the
waiter can truly proceed in aio_read_evt, so that dribbling I/O
completions don't inefficiently wake the waiter too frequently only to
have it put back to sleep again.  I will dig that up and experiment.
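
Concretely, the completion side of that plan would look roughly like
this (again just a sketch, not the final patch):

static void aio_complete_tail_planned_sketch(struct kioctx *ctx)
{
	unsigned long flags;

	spin_lock_irqsave(&ctx->ctx_lock, flags);
	/* deliver the event into the ring, retire the iocb */
	ctx->reqs_active--;

	/*
	 * Wake the waiter while ctx_lock is still held, so that
	 * wait_for_all_aios() cannot observe reqs_active == 0 and free
	 * the ioctx while we are still touching it.
	 */
	if (waitqueue_active(&ctx->wait))
		wake_up(&ctx->wait);

	spin_unlock_irqrestore(&ctx->ctx_lock, flags);
}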

