Message-ID: <20080627174034.GG10197@8bytes.org>
Date: Fri, 27 Jun 2008 19:40:35 +0200
From: Joerg Roedel <joro@...tes.org>
To: Muli Ben-Yehuda <muli@...ibm.com>
Cc: Andi Kleen <andi@...stfloor.org>, Adrian Bunk <bunk@...nel.org>,
Joerg Roedel <joerg.roedel@....com>, tglx@...utronix.de,
mingo@...hat.com, linux-kernel@...r.kernel.org,
iommu@...ts.linux-foundation.org, bhavna.sarathy@....com,
Sebastian.Biemueller@....com, robert.richter@....com,
Ben-Ami Yassour1 <benami@...ibm.com>
Subject: Re: [PATCH 01/34] AMD IOMMU: add Kconfig entry
On Fri, Jun 27, 2008 at 01:31:00PM -0400, Muli Ben-Yehuda wrote:
> On Fri, Jun 27, 2008 at 07:20:30PM +0200, Joerg Roedel wrote:
>
> > > Could you elaborate on what you mean here? I assume you're
> > > thinking one I/O address space for the host, and one I/O address
> > > space per guest with assigned devices?
> >
> > I think we can create an address space which almost direct-maps
> > physical memory, leaving some room free at the beginning for the
> > aperture (say 64MB). When a mapping request arrives, the code checks
> > whether it has to do a real mapping (i.e. the physical address of
> > the memory to map lies in the first 64MB or outside the device's
> > address range). If neither is the case it simply returns the
> > physical address as the dma_addr; otherwise it does the expensive
> > mapping. This way we could minimize the default overhead we will
> > get with an IOMMU and still use it for virtualization and as a GART
> > replacement.
>
> What you are suggesting is an "almost-direct-map" approach for the
> host I/O address space, which provides no protection from mis-behaving
> host drivers. If we could avoid needing a GART replacement (see below
> for why I think we could), you could simply avoid enabling translation
> for host devices and be done with it.
Yes. As I said, this is for the non-isolating case. The isolation case
(which is needed for protection) is harder to optimize. There I am
thinking about some sort of lazy IOMMU TLB flushing: the flush and the
'wait for the flush to finish' are the most expensive parts of the
mapping and unmapping code paths. But this needs some experimentation.
> In my humble opinion it's more interesting to try and figure out how
> to get protection from mis-behaving host drivers while still keeping
> performance as close as possible to native.
True. But I also see the IOMMU as a means of passing devices through to
virtualization guests. In that case you don't necessarily want device
isolation in the host (for devices only the host uses), so the
optimization for the non-isolation case is also important imho.
> > > > and to handle devices with limited DMA address ranges.
> > >
> > > I'd be pretty surprised if you found such devices on machines
> > > which will have AMD's IOMMU...
> >
> > Think of 32bit PCI devices in a host with more than 4GB memory :)
>
> I am thinking of them and I'd be surprised if you'd find any in such
> machines. Certainly I assume none of the on-board devices will have
> this ancient limitation. But hey, it could happen ;-)
The IOMMU machine under my desk has a 32bit PCI slot with a card in it
:-)
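Such a card typically sets a 32-bit DMA mask, along the lines of this
hypothetical driver fragment:

	/* The card can only generate 32-bit DMA addresses. */
	if (dma_set_mask(&pdev->dev, DMA_BIT_MASK(32)))
		return -ENODEV;

With more than 4GB of memory, any streaming DMA buffer above 4GB then
has to be made reachable below 4GB, which the IOMMU can do by remapping
instead of bounce buffering.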
Cheers,
Joerg