Message-Id: <1234946582.2604.423.camel@ymzhang>
Date: Wed, 18 Feb 2009 16:43:02 +0800
From: "Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>
To: Pekka Enberg <penberg@...helsinki.fi>
Cc: Christoph Lameter <cl@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>,
Nick Piggin <nickpiggin@...oo.com.au>,
Nick Piggin <npiggin@...e.de>,
Linux Memory Management List <linux-mm@...ck.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Lin Ming <ming.m.lin@...el.com>
Subject: Re: [patch] SLQB slab allocator (try 2)
On Wed, 2009-02-18 at 09:48 +0200, Pekka Enberg wrote:
> Hi Yanmin,
>
> On Wed, 2009-02-18 at 09:05 +0800, Zhang, Yanmin wrote:
> > On Tue, 2009-02-17 at 12:05 -0500, Christoph Lameter wrote:
> > > Well yes you missed two locations (kmalloc_caches array has to be
> > > redimensioned) and I also was writing the same patch...
> > >
> > > Here is mine:
> > >
> > > Subject: SLUB: Do not pass 8k objects through to the page allocator
> > >
> > > Increase the maximum object size in SLUB so that 8k objects are not
> > > passed through to the page allocator anymore. The network stack uses 8k
> > > objects for performance critical operations.
> > Kernel 2.6.29-rc2 panic with the patch.
> >
> > BUG: unable to handle kernel NULL pointer dereference at (null)
> > IP: [<ffffffff8028fae3>] kmem_cache_alloc+0x43/0x97
> > PGD 0
> > Oops: 0000 [#1] SMP
> > last sysfs file:
> > CPU 0
> > Modules linked in:
> > Pid: 1, comm: swapper Not tainted 2.6.29-rc2slubstat8k #1
> > RIP: 0010:[<ffffffff8028fae3>] [<ffffffff8028fae3>] kmem_cache_alloc+0x43/0x97
> > RSP: 0018:ffff88022f865e20 EFLAGS: 00010046
> > RAX: 0000000000000000 RBX: 0000000000000246 RCX: 0000000000000002
> > RDX: 0000000000000000 RSI: 000000000000063f RDI: ffffffff808096c7
> > RBP: 00000000000000d0 R08: 0000000000000004 R09: 000000000012e941
> > R10: 0000000000000002 R11: 0000000000000020 R12: ffffffff80991c48
> > R13: ffffffff809a9b43 R14: ffffffff809f8000 R15: 0000000000000000
> > FS: 0000000000000000(0000) GS:ffffffff80a13080(0000) knlGS:0000000000000000
> > CS: 0010 DS: 0018 ES: 0018 CR0: 000000008005003b
> > CR2: 0000000000000000 CR3: 0000000000201000 CR4: 00000000000006e0
> > DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
> > DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
> > Process swapper (pid: 1, threadinfo ffff88022f864000, task ffff88022f868000)
> > Stack:
> > ffffffff809f43e0 0000000000000020 ffffffff809aa469 0000000000000086
> > ffffffff809f8000 ffffffff809a9b43 ffffffff80aaae80 ffffffff809f43e0
> > 0000000000000020 ffffffff809aa469 0000000000000000 ffffffff809d86a0
> > Call Trace:
> > [<ffffffff809aa469>] ? populate_rootfs+0x0/0xdf
> > [<ffffffff809a9b43>] ? unpack_to_rootfs+0x59/0x97f
> > [<ffffffff809aa469>] ? populate_rootfs+0x0/0xdf
> > [<ffffffff809aa481>] ? populate_rootfs+0x18/0xdf
> > [<ffffffff80209051>] ? _stext+0x51/0x120
> > [<ffffffff802d69b2>] ? create_proc_entry+0x73/0x8a
> > [<ffffffff802619c0>] ? register_irq_proc+0x92/0xaa
> > [<ffffffff809a4896>] ? kernel_init+0x12e/0x188
> > [<ffffffff8020ce3a>] ? child_rip+0xa/0x20
> > [<ffffffff809a4768>] ? kernel_init+0x0/0x188
> > [<ffffffff8020ce30>] ? child_rip+0x0/0x20
> > Code: be 3f 06 00 00 48 c7 c7 c7 96 80 80 e8 b8 e2 f9 ff e8 c5 c2 45 00 9c 5b fa 65 8b 04 25 24 00 00 00 48 98 49 8b 94 c4 e8
> > RIP [<ffffffff8028fae3>] kmem_cache_alloc+0x43/0x97
> > RSP <ffff88022f865e20>
> > CR2: 0000000000000000
> > ---[ end trace a7919e7f17c0a725 ]---
> > swapper used greatest stack depth: 5376 bytes left
> > Kernel panic - not syncing: Attempted to kill init!
>
> Aah, we need to fix up some more PAGE_SHIFTs in the code.
The new patch fixes the hang issue. The netperf UDP-U-4k result (starting CPU_NUM clients) is pretty good.