Message-ID: <20130205172015.GA27179@google.com>
Date: Tue, 5 Feb 2013 09:20:15 -0800
From: Kent Overstreet <koverstreet@...gle.com>
To: Valdis.Kletnieks@...edu
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Hillf Danton <dhillf@...il.com>,
Benjamin LaHaise <bcrl@...ck.org>,
linux-kernel@...r.kernel.org, linux-aio@...ck.org
Subject: Re: next-20130117 - kernel BUG with aio
On Tue, Feb 05, 2013 at 10:53:00AM -0500, Valdis.Kletnieks@...edu wrote:
> On Thu, 31 Jan 2013 16:37:27 -0800, Kent Overstreet said:
> > On Thu, Jan 31, 2013 at 01:59:52PM -0800, Andrew Morton wrote:
> > > Did this get fixed?
>
> > With the patches I sent you, yes - not seeing a new linux-next tree yet?
>
> Well, it's a mixed bag at my end. Finally got a chance to do some more
> testing, and:
>
> 1) next-20130128 didn't show anything in dmesg, but my VirtualBox Windows 7
> images appear to livelock on the way up - the Windows throbber would keep
> going, but it never made any actual progress towards booting. (Part of the
> delay was fixing a next-20121224 environment, and then discovering it
> took Windows *two* reboot cycles to get its act back together after getting
> into that hung state).
>
> 2) next-20130128 plus the following 3 patches:
>
> Subject: [PATCH 1/3] aio: Fix a null pointer deref in batch_complete_aio
> Subject: [PATCH 3/3] aio-use-cancellation-list-lazily-fix
> Subject: [PATCH 2/3] aio-kill-ki_retry-fix-fix
The "smoosh struct kiocb" patch also needs to be dropped. That causes
aio_rw_vect_retry() to check ki_nbytes/ki_left after they've been
overwritten by aio_complete(), which causes it to return an error when
it shouldn't have, which causes aio_run_iocb() to double complete the
iocb causing put_reqs_available() to be called twice and the count
screwed up.
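
To make that failure mode concrete, here's a minimal userland model of
the sequence - not the actual fs/aio.c code, and the struct, counters and
function bodies are simplified stand-ins; only the names (ki_left,
aio_complete(), put_reqs_available()) and the call ordering follow the
description above:

/*
 * Simplified userland model of the double completion described above.
 * Not the real kernel code: the struct, the counters, and the function
 * bodies here are stand-ins; only the names and the call ordering are
 * taken from the mail.
 */
#include <stdio.h>

struct kiocb {
	long ki_left;			/* bytes remaining in the op */
	long ki_nbytes;			/* total bytes requested */
};

static int reqs_available;		/* free request slots in the ctx */
static const int ctx_nr = 128;		/* total slots, i.e. ctx->nr */

static void put_reqs_available(void)
{
	reqs_available++;		/* hand the slot back */
}

/* Completion reports the result and reuses ki_left/ki_nbytes. */
static void aio_complete(struct kiocb *req, long res)
{
	put_reqs_available();
	req->ki_left = res;		/* clobbers "bytes remaining" */
}

int main(void)
{
	struct kiocb req = { .ki_left = 4096, .ki_nbytes = 4096 };

	reqs_available = ctx_nr - 1;	/* one request in flight */

	/* The op finishes and completes itself... */
	aio_complete(&req, 4096);

	/*
	 * ...but the retry path then inspects ki_left *after* it has been
	 * clobbered, concludes the op "failed", and the iocb gets
	 * completed a second time with that bogus error.
	 */
	if (req.ki_left != 0)
		aio_complete(&req, -5 /* an error, e.g. -EIO */);

	printf("reqs_available=%d, ctx->nr=%d\n", reqs_available, ctx_nr);
	return 0;
}

Running it leaves reqs_available at ctx->nr + 1, which is exactly the
state the WARN_ON quoted below complains about.
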
> VirtualBox appears to be functional (I did 2 complete boot/shutdown
> sequences of both a 32-bit and 64-bit Win7 Enterprise image). *HOWEVER*,
> I saw 3 of these in dmesg:
>
> [ 668.278624] WARNING: at fs/aio.c:348 put_ioctx+0x1c0/0x241()
>
> [ 668.278652] Call Trace:
> [ 668.278660] [<ffffffff8102ed10>] warn_slowpath_common+0x7c/0x96
> [ 668.278665] [<ffffffff8102edc9>] warn_slowpath_null+0x15/0x17
> [ 668.278669] [<ffffffff8114c562>] put_ioctx+0x1c0/0x241
> [ 668.278673] [<ffffffff8114d42a>] sys_io_destroy+0x4c/0x5c
> [ 668.278679] [<ffffffff8160c112>] system_call_fastpath+0x16/0x1b
>
> and the code there says:
>
> WARN_ON(atomic_read(&ctx->reqs_available) > ctx->nr);
>
> which leaves me wondering exactly how we exited the while loop
> just above - is the intention that it loop until reqs_available == ctx->nr
> exactly? Looks like if 'avail' is anything other than exactly 1 in
> that while loop, we can be at a state where reqs_avail == (ctx->nr - 1),
> get 'avail=2', do the atomic_add, fall out of the loop, and trigger
> the WARN_ON.
>
> Damned if I see how that can happen though....
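
For reference, here's a tiny userland model of the scenario described in
the quoted paragraph - the loop shape and names (reqs_available, ctx->nr,
avail) are taken from the text above, not from the exact code in this
tree, so treat it as an illustration of the arithmetic rather than the
real put_ioctx():

/*
 * Userland model of the overshoot scenario above.  The loop shape and
 * names follow the quoted text; the real loop in fs/aio.c scans the
 * ring and may differ in detail.
 */
#include <stdio.h>

int main(void)
{
	const int nr = 128;		/* ctx->nr: total request slots */
	int reqs_available = nr - 1;	/* one slot still outstanding */

	while (reqs_available < nr) {
		/*
		 * Say the ring scan reports two completed events here even
		 * though only one slot was missing - e.g. because the same
		 * iocb was completed twice as described earlier.
		 */
		int avail = 2;

		reqs_available += avail;	/* the atomic_add() */
	}

	/* Mirrors: WARN_ON(atomic_read(&ctx->reqs_available) > ctx->nr); */
	printf("reqs_available=%d, nr=%d -> %s\n", reqs_available, nr,
	       reqs_available > nr ? "WARN_ON fires" : "ok");
	return 0;
}

With avail fixed at 1 the loop exits with reqs_available == nr exactly;
any larger batch when only one slot is missing overshoots, and a double
put_reqs_available() produces the same end state without the loop's help.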