Message-ID: <20110811140101.GA18802@quack.suse.cz>
Date: Thu, 11 Aug 2011 16:01:01 +0200
From: Jan Kara <jack@...e.cz>
To: Michael Tokarev <mjt@....msk.ru>
Cc: Jan Kara <jack@...e.cz>, linux-ext4@...r.kernel.org
Subject: Re: DIO process stuck apparently due to dioread_nolock (3.0)
Hello,
On Thu 11-08-11 16:21:42, Michael Tokarev wrote:
> 11.08.2011 15:59, Jan Kara wrote:
> > On Wed 10-08-11 14:51:17, Michael Tokarev wrote:
> >> For a few days I've been evaluating various options for using
> >> storage. I'm interested in concurrent direct I/O
> >> (Oracle RDBMS workload).
> >>
> >> I noticed that somehow, in a mixed read-write test, ext4
> >> greatly prefers writes over reads - writes go at full
> >> speed while reads are almost non-existent.
> >>
> >> Sandeen on IRC pointed me at the dioread_nolock mount
> >> option, which I tried with great results, except for
> >> one "but".
> >>
> >> There's a deadlock somewhere, which I can't trigger
> >> "on demand" - I can't hit the right condition. It has
> >> happened twice in a row already, each time after the
> >> same scenario (more about that later).
> >>
> >> When it happens, a process doing direct AIO stalls
> >> infinitely, with the following backtrace:
> >>
> >> [87550.759848] INFO: task oracle:23176 blocked for more than 120 seconds.
> >> [87550.759892] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> >> [87550.759955] oracle D 0000000000000000 0 23176 1 0x00000000
> >> [87550.760006] ffff8820457b47d0 0000000000000082 ffff880600000000 ffff881278e3f7d0
> >> [87550.760085] ffff8806215c1fd8 ffff8806215c1fd8 ffff8806215c1fd8 ffff8820457b47d0
> >> [87550.760163] ffffea0010bd7c68 ffffffff00000000 ffff882045512ef8 ffffffff810eeda2
> >> [87550.760245] Call Trace:
> >> [87550.760285] [<ffffffff810eeda2>] ? __do_fault+0x422/0x520
> >> [87550.760327] [<ffffffff81111ded>] ? kmem_getpages+0x5d/0x170
> >> [87550.760367] [<ffffffff81112e58>] ? ____cache_alloc_node+0x48/0x140
> >> [87550.760430] [<ffffffffa0123e6d>] ? ext4_file_write+0x20d/0x260 [ext4]
> >> [87550.760475] [<ffffffff8106aee0>] ? abort_exclusive_wait+0xb0/0xb0
> >> [87550.760523] [<ffffffffa0123c60>] ? ext4_llseek+0x120/0x120 [ext4]
> >> [87550.760566] [<ffffffff81162173>] ? aio_rw_vect_retry+0x73/0x1d0
> >> [87550.760607] [<ffffffff8116302f>] ? aio_run_iocb+0x5f/0x160
> >> [87550.760646] [<ffffffff81164258>] ? do_io_submit+0x4f8/0x600
> >> [87550.760689] [<ffffffff81359b52>] ? system_call_fastpath+0x16/0x1b
> > Hmm, the stack trace does not quite make sense to me - the part between
> > __do_fault and aio_rw_vect_retry is somehow broken. I can imagine we
> > blocked in ext4_file_write() but I don't see any place there where we would
> > allocate memory. By any chance, are there messages like "Unaligned AIO/DIO
> > on inode ..." in the kernel log?
>
> Yes, there are warnings about unaligned DIO, referring to this same
> process actually. Oracle does a mostly good job of aligning writes
> (usually it does I/O in units of its database blocks, which are 4KB
> by default but are set to something larger - like 16KB - for a larger
> database), except in a few cases, and the lgwr process is one of
> them (*) - it writes logfiles in 512-byte blocks. This is okay for
> a raw device with 512-byte sectors, but ext4 expects writes aligned
> to its 4KB block size at minimum.
>
> (*) another case is writing to the control file, which is also done
> in 512-byte chunks.
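
The unaligned case described above can be reproduced from userspace; a
minimal sketch, assuming an ext4 filesystem with a 4KB block size in the
current directory (the file name "testfile" and the sizes are arbitrary;
build with "gcc -o unaligned-dio unaligned-dio.c -laio"):

/* A 512-byte AIO+DIO write: sector-aligned, so fine for a raw device,
 * but smaller than the 4KB filesystem block, so ext4 takes its
 * unaligned path (and logs the "Unaligned AIO/DIO on inode ..."
 * warning once per inode).
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	io_context_t ctx = 0;
	struct iocb cb, *cbs[1] = { &cb };
	struct io_event ev;
	void *buf;
	int fd;

	fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (io_setup(1, &ctx) < 0) {
		fprintf(stderr, "io_setup failed\n");
		return 1;
	}
	/* O_DIRECT also requires the buffer itself to be aligned. */
	if (posix_memalign(&buf, 512, 512)) {
		perror("posix_memalign");
		return 1;
	}
	memset(buf, 0, 512);
	/* 512 bytes at offset 512: aligned for a 512-byte-sector raw
	 * device, unaligned from ext4's 4KB point of view. */
	io_prep_pwrite(&cb, fd, buf, 512, 512);
	if (io_submit(ctx, 1, cbs) != 1) {
		fprintf(stderr, "io_submit failed\n");
		return 1;
	}
	io_getevents(ctx, 1, 1, &ev, NULL);
	printf("res=%ld\n", (long)ev.res);

	io_destroy(ctx);
	free(buf);
	close(fd);
	return 0;
}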
Ah, OK. I think the code in ext4_end_io_nolock() handling the wait
queue might be racy - the waitqueue_active() check is missing a memory
barrier, I think, so a wakeup can be missed. Does the attached
(untested) patch fix the issue for you?
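
Schematically, the lost-wakeup pattern looks like this (a sketch, not
the attached patch; "done" stands for whatever completion flag the
waiter tests):

/*
 * Waker (I/O completion):           Waiter:
 *
 *   done = 1;                         prepare_to_wait(&wq, &wait,
 *   (no barrier here!)                        TASK_UNINTERRUPTIBLE);
 *   if (waitqueue_active(&wq))        if (!done)
 *           wake_up_all(&wq);                 schedule();
 *                                     finish_wait(&wq, &wait);
 *
 * The store to "done" and the load of the wait queue head may be
 * reordered, so the waker can see an empty queue and skip the wakeup
 * while the waiter still sees done == 0 and sleeps forever.  The fix
 * is to order the two explicitly:
 */
done = 1;
smp_mb();	/* pairs with the barrier in prepare_to_wait() */
if (waitqueue_active(&wq))
	wake_up_all(&wq);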
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR
Attachment: 0001-ext4-Fix-missed-wakeups.patch (text/x-patch, 1389 bytes)