Message-Id: <1242811142.2560.33.camel@ymzhang>
Date: Wed, 20 May 2009 17:19:02 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
chris.mason@...cle.com, david@...morbit.com, hch@...radead.org,
akpm@...ux-foundation.org, jack@...e.cz
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads #4
On Wed, 2009-05-20 at 10:54 +0200, Jens Axboe wrote:
> On Wed, May 20 2009, Jens Axboe wrote:
> > On Wed, May 20 2009, Zhang, Yanmin wrote:
> > > On Tue, 2009-05-19 at 08:20 +0200, Jens Axboe wrote:
> > > > On Tue, May 19 2009, Zhang, Yanmin wrote:
> > > > > On Mon, 2009-05-18 at 14:19 +0200, Jens Axboe wrote:
> > > > > > Hi,
> > > > > >
> > > > > > This is the fourth version of this patchset. Changes since v3:
> > > > > >
> > > > > > - Dropped a prep patch, it has been included in mainline since.
> > > > > >
> > > > > > - Add a work-to-do list to the bdi. This is struct bdi_work. Each
> > > > > > wb thread will notice and execute work on bdi->work_list. The arguments
> > > > > > are which sb (or NULL for all) to flush and how many pages to flush
> > > > > > (see the sketch after this changelog).
> > > > > >
> > > > > > - Fix a bug where not all bdi's would end up on the bdi_list, so potentially
> > > > > > some data would not be flushed.
> > > > > >
> > > > > > - Make wb_kupdated() pass on wbc->older_than_this so we maintain the same
> > > > > > behaviour for kupdated flushes.
> > > > > >
> > > > > > - Have the wb thread flush first before sleeping, to avoid losing the
> > > > > > first flush on lazy register.
> > > > > >
> > > > > > - Rebase to newer kernels.
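
(A minimal sketch of the bdi_work item described in the second bullet above,
with illustrative field names; the actual patch may lay the struct out
differently. The bdi keeps a list of these, and each wb thread pulls entries
off bdi->work_list and flushes accordingly.)

#include <linux/list.h>
#include <linux/fs.h>

/*
 * Illustrative only: the field names are assumptions, not copied from
 * the patch.  One queued unit of writeback work; a NULL sb means
 * "flush inodes from every super_block backed by this bdi".
 */
struct bdi_work {
	struct list_head list;		/* entry on bdi->work_list */
	struct super_block *sb;		/* sb to flush, or NULL for all */
	unsigned long nr_pages;		/* how many pages to write out */
};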
> > >
> > > > I'm attaching two patches - apply #1 to -rc6, and then #2 is a roll-up
> > > > of the patch series that you can apply next.
> > > Jens,
> > >
> > > I ran into 2 issues with kernel 2.6.30-rc6+BDI_Flusher_V4. Below is one.
> > >
> > > Tue May 19 00:00:00 CST 2009
> > > BUG: unable to handle kernel NULL pointer dereference at 00000000000001d8
> > > IP: [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > PGD 0
> > > Oops: 0000 [#1] SMP
> > > last sysfs file: /sys/block/sdb/stat
> > > CPU 0
> > > Modules linked in: igb
> > > Pid: 1445, comm: bdi-8:16 Not tainted 2.6.30-rc6-bdiflusherv4 #1 X8DTN
> > > RIP: 0010:[<ffffffff803f3c4c>] [<ffffffff803f3c4c>] generic_make_request+0x10a/0x384
> > > RSP: 0018:ffff8800bd04da60 EFLAGS: 00010206
> > > RAX: 0000000000000000 RBX: ffff8801be45d500 RCX: 00000000038a0df8
> > > RDX: 0000000000000008 RSI: 0000000000000576 RDI: ffff8801bf408680
> > > RBP: ffff8801be45d500 R08: ffffe20001ee8140 R09: ffff8800bd04da98
> > > R10: 0000000000000000 R11: ffff8800bd72eb40 R12: ffff8801be45d500
> > > R13: ffff88005f51f310 R14: 0000000000000008 R15: ffff8800b15a5458
> > > FS: 0000000000000000(0000) GS:ffffc20000000000(0000) knlGS:0000000000000000
> > > CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > > CR2: 00000000000001d8 CR3: 0000000000201000 CR4: 00000000000006e0
> > > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > > Process bdi-8:16 (pid: 1445, threadinfo ffff8800bd04c000, task ffff8800bd1b75f0)
> > > Stack:
> > > 0000000000000008 ffffffff8027a613 00000000848dc000 ffffffffffffffff
> > > ffff8800a8190f50 ffffffff00000012 ffff8800a81938e0 ffffc2000000001b
> > > 0000000000000000 0000000000000000 ffffe200026f9c30 0000000000000000
> > > Call Trace:
> > > [<ffffffff8027a613>] ? mempool_alloc+0x59/0x10f
> > > [<ffffffff803f3f70>] ? submit_bio+0xaa/0xb1
> > > [<ffffffff802c6a3f>] ? submit_bh+0xe3/0x103
> > > [<ffffffff802c92ea>] ? __block_write_full_page+0x1fb/0x2f2
> > > [<ffffffff802c7d6a>] ? end_buffer_async_write+0x0/0xfb
> > > [<ffffffff8027e8d2>] ? __writepage+0xa/0x25
> > > [<ffffffff8027f036>] ? write_cache_pages+0x21c/0x338
> > > [<ffffffff8027e8c8>] ? __writepage+0x0/0x25
> > > [<ffffffff8027f195>] ? do_writepages+0x27/0x2d
> > > [<ffffffff802c22c1>] ? __writeback_single_inode+0x159/0x2b3
> > > [<ffffffff8071e52a>] ? thread_return+0x3e/0xaa
> > > [<ffffffff8027f267>] ? determine_dirtyable_memory+0xd/0x1d
> > > [<ffffffff8027f2dd>] ? get_dirty_limits+0x1d/0x255
> > > [<ffffffff802c27bc>] ? generic_sync_wb_inodes+0x1b4/0x220
> > > [<ffffffff802c3130>] ? wb_do_writeback+0x16c/0x215
> > > [<ffffffff802c323e>] ? bdi_writeback_task+0x65/0x10d
> > > [<ffffffff8024cc06>] ? autoremove_wake_function+0x0/0x2e
> > > [<ffffffff8024cb27>] ? bit_waitqueue+0x10/0xa0
> > > [<ffffffff80289257>] ? bdi_start_fn+0x0/0xba
> > > [<ffffffff802892c6>] ? bdi_start_fn+0x6f/0xba
> > > [<ffffffff8024c860>] ? kthread+0x54/0x80
> > > [<ffffffff8020c97a>] ? child_rip+0xa/0x20
> > > [<ffffffff8024c80c>] ? kthread+0x0/0x80
> > > [<ffffffff8020c970>] ? child_rip+0x0/0x20
> > >
>
> I found one issue yesterday and one today that could cause issues, not
> sure it would explain this one. But at least it's worth a try, if it's
> reproducible.
I just reproduced it manually a moment ago. Here is the fio job file:
[global]
direct=0
ioengine=mmap
iodepth=256
iodepth_batch=32
size=4G
bs=4k
pre_read=1
overwrite=1
numjobs=1
loops=5
runtime=600
group_reporting
directory=/mnt/stp/fiodata
[job_group0_sub0]
startdelay=0
rw=randwrite
filename=data0/f1:data0/f2
The fio binary includes my pre_read patch, which reads the files into memory before the run.
Before starting the second test, I dropped the caches with:
# echo 3 > /proc/sys/vm/drop_caches
I suspect the drop_caches write triggers it.
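
(For scripted runs, the same cache drop can be done from a small C helper.
A sketch: only the sysctl path and the value 3 come from the command above,
the rest is illustrative userspace code.)

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Drop clean page cache and reclaimable slab objects between runs,
 * equivalent to `echo 3 > /proc/sys/vm/drop_caches` (bit 0 = page
 * cache, bit 1 = slab).  Dirty pages are not dropped, so writeback
 * still has work queued against the bdi afterwards.  Needs root. */
int main(void)
{
	int fd = open("/proc/sys/vm/drop_caches", O_WRONLY);

	if (fd < 0 || write(fd, "3", 1) != 1) {
		perror("drop_caches");
		return 1;
	}
	close(fd);
	return 0;
}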
> I'm attaching the three patches I have against the posted
> series. The one in the middle is just an optimization, the first and
> third are the bug fixes.
I will test it tomorrow.