Message-ID: <AE4F746F2AECFC4DA4AADD66A1DFEF019E1CAA@otce2k301.adaptec.com>
Date: Thu, 24 May 2007 09:24:53 -0400
From: "Salyzyn, Mark" <mark_salyzyn@...ptec.com>
To: "Aubrey Li" <aubreylee@...il.com>,
"Christoph Lameter" <clameter@....com>
Cc: "Bernhard Walle" <bwalle@...e.de>, <linux-scsi@...r.kernel.org>,
"Andrew Morton" <akpm@...ux-foundation.org>,
<linux-kernel@...r.kernel.org>,
"James Bottomley" <James.Bottomley@...eleye.com>,
"Alan Cox" <alan@...rguk.ukuu.org.uk>
Subject: RE: [PATCH] [scsi] Remove __GFP_DMA
So, is the sequence:

	p = kmalloc(upsg->sg[i].count, GFP_KERNEL);
	. . .
	addr = pci_map_single(dev->pdev, p, upsg->sg[i].count, data_dir);

going to ensure that we have a 31-bit (not 32-bit) physical address?
If not, then I reject this patch. We cannot consider replacement with
pci_alloc_consistent until it works on AMD while respecting the DMA masks.
Sincerely -- Mark Salyzyn
> -----Original Message-----
> From: linux-kernel-owner@...r.kernel.org
> [mailto:linux-kernel-owner@...r.kernel.org] On Behalf Of Aubrey Li
> Sent: Tuesday, May 22, 2007 10:41 PM
> To: Christoph Lameter
> Cc: Bernhard Walle; linux-scsi@...r.kernel.org; Andrew
> Morton; linux-kernel@...r.kernel.org; James Bottomley
> Subject: Re: [PATCH] [scsi] Remove __GFP_DMA
>
>
> On 5/23/07, Christoph Lameter <clameter@....com> wrote:
> > On Mon, 21 May 2007, Bernhard Walle wrote:
> >
> > > [PATCH] [scsi] Remove __GFP_DMA
> > >
> > > After 821de3a27bf33f11ec878562577c586cd5f83c64, it's no longer
> > > necessary to allocate a DMA buffer in sd.c.
> > >
> > > Signed-off-by: Bernhard Walle <bwalle@...e.de>
> >
> > Great, that avoids a DMA kmalloc slab. Any other GFP_DMAs left in
> > the scsi layer?
> >
> > Acked-by: Christoph Lameter <clameter@....com>
>
> Yes, here is another patch:
>
> Signed-off-by: Aubrey.Li <aubreylee@...il.com>
> ---
> drivers/scsi/aacraid/commctrl.c | 12 ++++++------
> 1 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/scsi/aacraid/commctrl.c b/drivers/scsi/aacraid/commctrl.c
> index 72b0393..405722d 100644
> --- a/drivers/scsi/aacraid/commctrl.c
> +++ b/drivers/scsi/aacraid/commctrl.c
> @@ -580,8 +580,8 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
> 		for (i = 0; i < upsg->count; i++) {
> 			u64 addr;
> 			void* p;
> -			/* Does this really need to be GFP_DMA? */
> -			p = kmalloc(upsg->sg[i].count,GFP_KERNEL|__GFP_DMA);
> +
> +			p = kmalloc(upsg->sg[i].count,GFP_KERNEL);
> 			if(p == 0) {
> 				dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
> 				  upsg->sg[i].count,i,upsg->count));
> @@ -624,8 +624,8 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
> 		for (i = 0; i < usg->count; i++) {
> 			u64 addr;
> 			void* p;
> -			/* Does this really need to be GFP_DMA? */
> -			p = kmalloc(usg->sg[i].count,GFP_KERNEL|__GFP_DMA);
> +
> +			p = kmalloc(usg->sg[i].count,GFP_KERNEL);
> 			if(p == 0) {
> 				kfree (usg);
> 				dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
> @@ -666,8 +666,8 @@ static int aac_send_raw_srb(struct aac_dev* dev, void __user * arg)
> 		for (i = 0; i < upsg->count; i++) {
> 			u64 addr;
> 			void* p;
> -			/* Does this really need to be GFP_DMA? */
> -			p = kmalloc(usg->sg[i].count,GFP_KERNEL|__GFP_DMA);
> +
> +			p = kmalloc(usg->sg[i].count,GFP_KERNEL);
> 			if(p == 0) {
> 				dprintk((KERN_DEBUG"aacraid: Could not allocate SG buffer - size = %d buffer number %d of %d\n",
> 				  usg->sg[i].count,i,usg->count));
> --
> 1.5.1.1
> -
> To unsubscribe from this list: send the line "unsubscribe
> linux-kernel" in
> the body of a message to majordomo@...r.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/
>