Message-Id: <20080918035726F.fujita.tomonori@lab.ntt.co.jp>
Date: Thu, 18 Sep 2008 04:20:14 +0900
From: FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
To: andi@...stfloor.org
Cc: fujita.tomonori@....ntt.co.jp, mingo@...e.hu, joerg.roedel@....com,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/3] fix GART to respect device's dma_mask about
virtual mappings
On Wed, 17 Sep 2008 02:24:04 +0200
Andi Kleen <andi@...stfloor.org> wrote:
> On Wed, Sep 17, 2008 at 08:53:42AM +0900, FUJITA Tomonori wrote:
> > On Tue, 16 Sep 2008 19:58:24 +0200
> > Andi Kleen <andi@...stfloor.org> wrote:
> >
> > > > > Those always are handled elsewhere in the block layer (using the bounce_pfn
> > > > > mechanism)
> > > >
> > > > I don't think that bouncing guarantees that dma_alloc_coherent()
> > > > returns an address that the device can access.
> > >
> > > dma_alloc_coherent() is not used for block I/O data. And dma_alloc_coherent()
> > > does handle masks >24 bits and <32 bits just fine.
> >
> > What do you mean? For example, some aacraid cards have a 31-bit DMA
> > mask. What guarantees that an IOMMU's dma_alloc_coherent() doesn't
> > return an address above 31 bits but below 32 bits?
>
> At least the old IOMMU implementations (GART and non-GART) handled this
> by falling back to GFP_DMA. I haven't checked whether that got broken
> in the recent reorganization, but if it did, it should of course be
> fixed. Hopefully it still works.
The fallback mechanism was moved from the common code to pci-nommu,
since it doesn't work for IOMMUs that always need virtual mappings.
Calgary needs this dma_mask trick too, but I guess it's unlikely that
the IBM servers with Calgary have hardware with such weird DMA masks.
--