Message-ID: <Pine.LNX.4.64.0706111129270.18327@schroedinger.engr.sgi.com>
Date: Mon, 11 Jun 2007 11:32:33 -0700 (PDT)
From: Christoph Lameter <clameter@....com>
To: Håvard Skinnemoen <hskinnemoen@...il.com>
cc: Haavard Skinnemoen <hskinnemoen@...el.com>,
Linux Kernel <linux-kernel@...r.kernel.org>
Subject: Re: kernel BUG at mm/slub.c:3689!
On Mon, 11 Jun 2007, Håvard Skinnemoen wrote:
> > Note that I do not get why you would be aligning the objects to 32 bytes.
> > Increasing the smallest cache size wastes a lot of memory. And it is
> > usually advantageous if multiple related objects are in the same cacheline
> > unless you have heavy SMP contention.
>
> It's not about performance at all, it's about DMA buffers allocated
> using kmalloc() getting corrupted. Imagine this:
Uhhh... How about using a separate slab for the DMA buffers?
> Maybe there are other solutions to this problem, but the old SLAB
> allocator did guarantee 32-byte alignment as long as SLAB debugging
> was turned off, so setting ARCH_KMALLOC_MINALIGN seemed like the
> easiest way to get back to the old, known-working behaviour.
SLAB's minimum object size is 32 bytes, thus you had no problems. I see.
SLAB does not guarantee 32-byte alignment; it just happened to work. If
you switch on CONFIG_SLAB_DEBUG you will likely get into trouble.
So I'd suggest setting up a special slab for your DMA buffers.