Message-Id: <1180027605.3692.51.camel@mulgrave.il.steeleye.com>
Date: Thu, 24 May 2007 12:26:45 -0500
From: James Bottomley <James.Bottomley@...elEye.com>
To: Christoph Lameter <clameter@....com>
Cc: "Salyzyn, Mark" <mark_salyzyn@...ptec.com>,
Aubrey Li <aubreylee@...il.com>,
Bernhard Walle <bwalle@...e.de>, linux-scsi@...r.kernel.org,
Andrew Morton <akpm@...ux-foundation.org>,
linux-kernel@...r.kernel.org, Alan Cox <alan@...rguk.ukuu.org.uk>
Subject: RE: [PATCH] [scsi] Remove __GFP_DMA
On Thu, 2007-05-24 at 10:22 -0700, Christoph Lameter wrote:
> On Thu, 24 May 2007, James Bottomley wrote:
>
> > The idea was basically to match an allocation to a device mask. I was
> > going to do a generic implementation (which would probably kmalloc,
> > check the physaddr and fall back to GFP_DMA if we were unlucky) but
> > allow the architectures to override.
>
> Hmmmm... We could actually implement something like it in the slab
> allocators. The mask parameter would lead the allocator to check if the
> objects are in a satisfactory range. If not it could examine its partial
> lists for slabs that satisfy the range. If that does not work then it
> would eventually go to the page allocator to ask for a page in a fitting
> range.
>
> That won't be fast, though. How performance sensitive are the allocations?
Most of them aren't very. They're usually things that are set up at driver
initialisation time (shared mailbox memory etc.).
If the allocation is performance sensitive and has to be done at command
or ioctl submission time, we tend to feed the consistent or mapped
memory into a DMA pool, which preallocates and solves the performance
issue.
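
For illustration, here is a minimal sketch of that pattern using the
standard dma_pool_* interface; the device pointer, pool name and sizes
below are made up for the example:

#include <linux/dmapool.h>
#include <linux/device.h>

/* Hypothetical driver fragment: create the pool once at init time,
 * then take cheap allocations from it on the submission path. */
static int example_cmd_pool_demo(struct device *dev)
{
	struct dma_pool *pool;
	dma_addr_t dma_handle;
	void *cmd;

	/* Driver initialisation: preallocate a pool of small,
	 * DMA-able buffers (512 bytes each, 64-byte aligned). */
	pool = dma_pool_create("example_cmd", dev, 512, 64, 0);
	if (!pool)
		return -ENOMEM;

	/* Command/ioctl submission: fast allocation from the pool. */
	cmd = dma_pool_alloc(pool, GFP_ATOMIC, &dma_handle);
	if (!cmd) {
		dma_pool_destroy(pool);
		return -ENOMEM;
	}

	/* ... hand dma_handle to the adapter, use cmd in the driver ... */

	dma_pool_free(pool, cmd, dma_handle);
	dma_pool_destroy(pool);
	return 0;
}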
In the aacraid case, the mbox allocation is done at ioctl time, but it
isn't incredibly performance sensitive.
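
For reference, a minimal sketch of the generic fallback mentioned at the
top of this message: allocate normally, check the result against the
device's DMA mask, and retry with GFP_DMA if it falls outside. The
helper name is illustrative, not an existing kernel API, and
virt_to_phys() is not portable to every architecture.

#include <linux/slab.h>
#include <asm/io.h>	/* virt_to_phys(); arch-specific */

/* Hypothetical helper, not a real kernel interface. */
static void *kmalloc_for_mask(size_t size, gfp_t gfp, u64 dma_mask)
{
	void *buf = kmalloc(size, gfp);

	/* If the buffer happens to fall outside the device's
	 * addressable range, throw it away and retry from ZONE_DMA. */
	if (buf && virt_to_phys(buf) + size - 1 > dma_mask) {
		kfree(buf);
		buf = kmalloc(size, gfp | GFP_DMA);
	}
	return buf;
}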
James