Message-ID: <20080929133311.GK27928@amd.com>
Date:	Mon, 29 Sep 2008 15:33:11 +0200
From:	Joerg Roedel <joerg.roedel@....com>
To:	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
CC:	joro@...tes.org, muli@...ibm.com, amit.shah@...hat.com,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	iommu@...ts.linux-foundation.org, dwmw2@...radead.org,
	mingo@...hat.com
Subject: Re: [PATCH 9/9] x86/iommu: use dma_ops_list in get_dma_ops

On Mon, Sep 29, 2008 at 10:16:44PM +0900, FUJITA Tomonori wrote:
> On Mon, 29 Sep 2008 11:36:52 +0200
> Joerg Roedel <joro@...tes.org> wrote:
> 
> > On Mon, Sep 29, 2008 at 12:30:44PM +0300, Muli Ben-Yehuda wrote:
> > > On Sun, Sep 28, 2008 at 09:13:33PM +0200, Joerg Roedel wrote:
> > > 
> > > > I think we should try to build a paravirtualized IOMMU for KVM
> > > > guests.  It should work this way: We reserve a configurable amount
> > > > of contiguous guest physical memory and map it DMA-contiguous using
> > > > some kind of hardware IOMMU. This is possible with all hardware
> > > > IOMMUs we have in the field by now, including Calgary and GART. The
> > > > guest does dma_coherent allocations from this memory directly and is
> > > > done. For map_single and map_sg the guest can do bounce buffering.
> > > > We avoid nearly all pvdma hypercalls with this approach, keep guest
> > > > swapping working, and also solve the problems with device dma_masks
> > > > and guest memory that is not contiguous on the host side.
> > > 
> > > I'm not sure I follow, but if I understand correctly with this
> > > approach the guest could only DMA into buffers that fall within the
> > > range you allocated for DMA and mapped. Isn't that a pretty nasty
> > > limitation?  The guest would need to bounce-buffer every frame that
> > > happened not to fall inside that range, with the resulting loss of
> > > performance.
> > 
> > The bounce buffering is needed for map_single/map_sg mappings. For
> > dma_alloc_coherent we can allocate directly from that range. The
> > performance loss of the bounce buffering may be lower than the cost
> > of the hypercalls we would need as the alternative (we need hypercalls
> > for map, unmap and sync).
> 
> Nobody cares about the performance of dma_alloc_coherent. Only the
> performance of map_single/map_sg matters.
>
> I'm not sure how expensive the hypercalls are, but are they more
> expensive than bounce buffering copying lots of data for every I/O?

I don't think we can avoid bounce buffering in the guest at all (with
or without my idea of a paravirtualized IOMMU) if we want to handle
dma_masks and requests that cross guest physical pages properly.
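
Something like this is what I have in mind for the guest-side map path.
Just a sketch -- pool_alloc(), pool_virt_base and dma_window_base are
made-up names for the reserved window and its allocator, not existing
kernel symbols:

#include <linux/dma-mapping.h>
#include <linux/string.h>

/* Hypothetical: set up once at boot when the window is reserved and
 * mapped DMA-contiguous through the hardware IOMMU. */
extern void *pool_virt_base;		/* guest-virtual base of the window */
extern dma_addr_t dma_window_base;	/* device-visible base of the window */
extern void *pool_alloc(size_t size);	/* bounce allocator over the window */

static dma_addr_t pv_map_single(struct device *dev, void *ptr,
				size_t size, int direction)
{
	void *bounce;

	/* Carve a bounce buffer out of the pre-mapped window; no
	 * hypercall is needed because the whole window was mapped
	 * once at boot. */
	bounce = pool_alloc(size);
	if (!bounce)
		return bad_dma_address;

	if (direction == DMA_TO_DEVICE || direction == DMA_BIDIRECTIONAL)
		memcpy(bounce, ptr, size);

	/* The window is DMA-contiguous, so the device address is a
	 * simple offset into it. */
	return dma_window_base + (bounce - pool_virt_base);
}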

With mapping/unmapping through hypercalls we add the world-switch
overhead to the copy overhead. We can't avoid this when we have no
hardware support at all. But already with older IOMMUs like Calgary and
GART we can at least avoid the world switch. And since, for example,
every 64-bit capable AMD processor has a GART, we can make use of it.
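
For completeness, the dma_alloc_coherent side of this would hand out
window memory directly -- no copy and no hypercall at all (same made-up
helpers as in the sketch above):

static void *pv_alloc_coherent(struct device *dev, size_t size,
			       dma_addr_t *dma_handle, gfp_t gfp)
{
	/* No bounce copy on this path: the caller DMAs to/from the
	 * window memory directly, which is why coherent allocations
	 * cost nothing extra in this scheme. */
	void *virt = pool_alloc(size);

	if (!virt)
		return NULL;

	*dma_handle = dma_window_base + (virt - pool_virt_base);
	return virt;
}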

Joerg

-- 
           |           AMD Saxony Limited Liability Company & Co. KG
 Operating |         Wilschdorfer Landstr. 101, 01109 Dresden, Germany
 System    |                  Register Court Dresden: HRA 4896
 Research  |              General Partner authorized to represent:
 Center    |             AMD Saxony LLC (Wilmington, Delaware, US)
           | General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy

