Message-ID: <20130108012813.GO26407@google.com>
Date:	Mon, 7 Jan 2013 17:28:13 -0800
From:	Kent Overstreet <koverstreet@...gle.com>
To:	Andrew Morton <akpm@...ux-foundation.org>
Cc:	linux-kernel@...r.kernel.org, linux-aio@...ck.org,
	linux-fsdevel@...r.kernel.org, zab@...hat.com, bcrl@...ck.org,
	jmoyer@...hat.com, axboe@...nel.dk, viro@...iv.linux.org.uk,
	tytso@....edu
Subject: Re: [PATCH 14/32] aio: Make aio_read_evt() more efficient, convert
 to hrtimers

On Mon, Jan 07, 2013 at 05:00:55PM -0800, Andrew Morton wrote:
> aio_read_events_ring() is called via the
> wait_event_interruptible_hrtimeout() macro's call to `condition' - to
> work out whether aio_read_events_ring() should terminate.
> 
> A problem we should think about is "under what circumstances will
> aio_read_events_ring() set us into TASK_RUNNING?".  We don't want
> aio_read_events_ring() to do this too often because it will cause
> schedule() to fall through and we end up in a busy loop, chewing CPU. 
> 
> afaict, aio_read_events_ring() will usually return non-zero if it
> flipped us into TASK_RUNNING state.  An exception is where the
> mutex_trylock() failed, in which case the thread slept in mutex_lock(),
> which will help with the CPU-chewing.  But aio_read_events_ring() can
> then end up returning 0 but in state TASK_RUNNING which will cause a
> small cpu-chew in wait_event_interruptible_hrtimeout().

Yeah, that was my reasoning too.
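
For anyone following along, the relevant loop in
wait_event_interruptible_hrtimeout() looks roughly like this - a
paraphrased skeleton, not the exact macro:

	for (;;) {
		prepare_to_wait(&wq, &wait, TASK_INTERRUPTIBLE);
		if (condition)		/* here: aio_read_events() */
			break;
		if (signal_pending(current)) {
			ret = -ERESTARTSYS;
			break;
		}
		/*
		 * If `condition' flipped us back to TASK_RUNNING but
		 * returned 0, this schedule() returns immediately and
		 * we spin around the loop - the cpu-chew Andrew is
		 * describing.
		 */
		schedule();	/* hrtimer sleeper wakes us at the timeout */
	}
	finish_wait(&wq, &wait);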

> I think :( It is unfortunately complex and it would be nice to make
> this dynamic behaviour more clear and solid.  Or at least documented! 
> Explain how this code avoids getting stuck in a cpu-burning loop.  To
> help prevent people from causing a cpu-burning loop when they later
> change the code.

*nods*

> > However - I was told that calling mutex_lock() in TASK_INTERRUPTIBLE
> > state was bad, but thinking about it more I'm not seeing how that's the
> > case. Either mutex_lock() finds the lock uncontended and doesn't touch
> > the task state, or it does and leaves it in TASK_RUNNING when it
> > returns.
> > 
> > IOW, I don't see how it'd behave any differently from what I'm doing.
> > 
> > Any light you could shed would be most appreciated.
> 
> Well, the problem with running mutex_lock() in TASK_[UN]INTERRUPTIBLE
> is just that: it may or may not flip you into TASK_RUNNING, so what the
> heck is the caller thinking of?  It's strange to set the task state a
> particular way, then call a function which will randomly go and undo
> that.
> 
> The cause of all this is the wish to use a wait_event `condition'
> predicate which must take a mutex.  hrm.

I've run into this problem before, and I've yet to come up with a
satisfactory solution. What we kind of want is just pthreads-style
condition variables. Or something. I'm surprised this doesn't come up
more often.
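
In userspace it'd just be this - with ring_has_events() standing in
for a made-up predicate:

	pthread_mutex_lock(&ctx_lock);
	while (!ring_has_events(ctx))
		/*
		 * Atomically drops ctx_lock and sleeps, then retakes it
		 * on wakeup - no window where we're sitting in a funny
		 * task state while taking a lock.
		 */
		pthread_cond_wait(&cond, &ctx_lock);
	/* consume events, still under ctx_lock */
	pthread_mutex_unlock(&ctx_lock);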

But, this code has been through like 5 iterations (with Zach Brown
picking most of them apart) and I think this is the best we've come up
with. Trying to get the task state stuff exactly right led to it being
_much_ more contorted, I think.

Does the patch below help?


> 
> > > IOW, I don't have the foggiest clue what you're trying to do here and
> > > you owe us all a code comment.  At least.
> > 
> > Yeah, will do.
> 
> Excited!
> 
> > This look better for the types?
> 
> yup.
> 
> 
> Also, it's unclear why kioctx.shadow_tail exists.  Some overviewy
> explanation at its definition site is needed, IMO.

Ah, that's mostly just to reduce cacheline bouncing - in practice the
tail pointer that aio_complete() uses tends to be a lot more contended
than the head pointer, since events get delivered one at a time and then
pulled off all at once. So aio_complete() keeps it up to date and then
aio_read_events() doesn't have to compete for the tail cacheline.
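
Schematically - this is just the idea, not the actual code:

	/* aio_complete(), called once per event: */
	ring->tail = tail;		/* hot, fought-over cacheline */
	ctx->shadow_tail = tail;	/* copy, in its own cacheline */

	/* aio_read_events_ring(), called once per batch: */
	while (ret < nr && head != ctx->shadow_tail) {
		/* ...copy out the event at head... */
		head = (head + 1) % ctx->nr;
		ret++;
	}

so readers only ever pull on shadow_tail's cacheline, and the real
tail's cacheline stays local to the CPUs running aio_complete().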


commit ab92ba18a0a891821edd967c46dc988326ef6bb0
Author: Kent Overstreet <koverstreet@...gle.com>
Date:   Mon Jan 7 17:27:19 2013 -0800

    aio: Document, clarify aio_read_events() and shadow_tail
    
    Signed-off-by: Kent Overstreet <koverstreet@...gle.com>

diff --git a/fs/aio.c b/fs/aio.c
index 21b2c27..932170a 100644
--- a/fs/aio.c
+++ b/fs/aio.c
@@ -102,6 +102,19 @@ struct kioctx {
 	struct {
 		struct mutex	ring_lock;
 		wait_queue_head_t wait;
+
+		/*
+		 * Copy of the real tail that aio_complete() uses - to reduce
+		 * cacheline bouncing. The real tail will tend to be much more
+		 * contended - since typically events are delivered one at a
+		 * time, and then aio_read_events() slurps them up a bunch at a
+		 * time - so it's helpful if aio_read_events() isn't also
+		 * contending for the tail. So, aio_complete() updates
+		 * shadow_tail whenever it updates tail.
+		 *
+		 * Also needed because tail is used as a hacky lock and isn't
+		 * always the real tail.
+		 */
 		unsigned	shadow_tail;
 	} ____cacheline_aligned_in_smp;
 
@@ -845,10 +858,7 @@ static long aio_read_events_ring(struct kioctx *ctx,
 	long ret = 0;
 	int copy_ret;
 
-	if (!mutex_trylock(&ctx->ring_lock)) {
-		__set_current_state(TASK_RUNNING);
-		mutex_lock(&ctx->ring_lock);
-	}
+	mutex_lock(&ctx->ring_lock);
 
 	ring = kmap_atomic(ctx->ring_pages[0]);
 	head = ring->head;
@@ -859,8 +869,6 @@ static long aio_read_events_ring(struct kioctx *ctx,
 	if (head == ctx->shadow_tail)
 		goto out;
 
-	__set_current_state(TASK_RUNNING);
-
 	while (ret < nr) {
 		long avail = (head < ctx->shadow_tail
 			      ? ctx->shadow_tail : ctx->nr) - head;
@@ -939,6 +947,20 @@ static long read_events(struct kioctx *ctx, long min_nr, long nr,
 		until = timespec_to_ktime(ts);
 	}
 
+	/*
+	 * Note that aio_read_events() is being called as the conditional - i.e.
+	 * we're calling it after prepare_to_wait() has set task state to
+	 * TASK_INTERRUPTIBLE.
+	 *
+	 * But aio_read_events() can block, and if it blocks it's going to flip
+	 * the task state back to TASK_RUNNING.
+	 *
+	 * This should be ok, provided it doesn't flip the state back to
+	 * TASK_RUNNING and return 0 too much - that causes us to spin. That
+	 * will only happen if the mutex_lock() call blocks, and we then find
+	 * the ringbuffer empty. So in practice we should be ok, but it's
+	 * something to be aware of when touching this code.
+	 */
 	wait_event_interruptible_hrtimeout(ctx->wait,
 			aio_read_events(ctx, min_nr, nr, event, &ret), until);
 
--