Date:	Mon, 29 Sep 2008 11:36:52 +0200
From:	Joerg Roedel <joro@...tes.org>
To:	Muli Ben-Yehuda <muli@...ibm.com>
Cc:	Joerg Roedel <joerg.roedel@....com>,
	Amit Shah <amit.shah@...hat.com>, linux-kernel@...r.kernel.org,
	kvm@...r.kernel.org, iommu@...ts.linux-foundation.org,
	David Woodhouse <dwmw2@...radead.org>,
	Ingo Molnar <mingo@...hat.com>,
	FUJITA Tomonori <fujita.tomonori@....ntt.co.jp>
Subject: Re: [PATCH 9/9] x86/iommu: use dma_ops_list in get_dma_ops

On Mon, Sep 29, 2008 at 12:30:44PM +0300, Muli Ben-Yehuda wrote:
> On Sun, Sep 28, 2008 at 09:13:33PM +0200, Joerg Roedel wrote:
> 
> > I think we should try to build a paravirtualized IOMMU for KVM
> > guests. It should work this way: we reserve a configurable amount
> > of contiguous guest physical memory and map it DMA-contiguous
> > using some kind of hardware IOMMU. This is possible with all
> > hardware IOMMUs we have in the field by now, including Calgary and
> > GART. The guest does dma_alloc_coherent allocations from this
> > memory directly and is done. For map_single and map_sg the guest
> > can do bounce buffering. With this approach we avoid nearly all
> > pvdma hypercalls, keep guest swapping working, and also solve the
> > problems with device dma_masks and guest memory that is not
> > contiguous on the host side.
> 
> I'm not sure I follow, but if I understand correctly with this
> approach the guest could only DMA into buffers that fall within the
> range you allocated for DMA and mapped. Isn't that a pretty nasty
> limitation? The guest would need to bounce-buffer every frame that
> happens not to fall inside that range, with the resulting loss of
> performance.

Bounce buffering is only needed for map_single/map_sg mappings; for
dma_alloc_coherent we can allocate directly from that range. The
performance cost of the bounce buffering may well be lower than that
of the hypercalls we would need as the alternative (hypercalls for
map, unmap and sync).
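
To make the tradeoff concrete, here is a minimal user-space sketch of
the scheme, assuming a pre-mapped DMA window. Every name in it
(pv_dma_alloc_coherent, pv_dma_map_single, DMA_WINDOW_SIZE) is made up
for illustration, the window is modeled with ordinary malloc'd memory
rather than a real IOMMU mapping, and the allocator is a bump pointer
with no freeing:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define DMA_WINDOW_SIZE (4 * 1024 * 1024) /* reserved, IOMMU-mapped region */

static uint8_t *dma_window;   /* base of the guest-physical pool */
static size_t dma_window_off; /* trivial bump allocator, no free */

/* dma_alloc_coherent analogue: carve straight out of the reserved
 * window, so no hypercall is needed per allocation. */
static void *pv_dma_alloc_coherent(size_t size)
{
	size = (size + 63) & ~(size_t)63; /* cache-line align */
	if (dma_window_off + size > DMA_WINDOW_SIZE)
		return NULL; /* real code: fall back to a hypercall */
	void *p = dma_window + dma_window_off;
	dma_window_off += size;
	return p;
}

/* map_single analogue: a buffer outside the window is bounce-buffered
 * into it, trading one memcpy for map/unmap/sync hypercalls. */
struct pv_dma_mapping {
	void *orig;   /* caller's buffer */
	void *bounce; /* copy inside the window */
	size_t len;
};

static int pv_dma_map_single(struct pv_dma_mapping *m, void *buf, size_t len)
{
	m->orig = buf;
	m->len = len;
	m->bounce = pv_dma_alloc_coherent(len);
	if (!m->bounce)
		return -1;
	memcpy(m->bounce, buf, len); /* sync for the device */
	return 0;
}

static void pv_dma_unmap_single(struct pv_dma_mapping *m)
{
	memcpy(m->orig, m->bounce, m->len); /* sync back for the CPU */
	/* real code: return the bounce slot to a free list */
}

int main(void)
{
	dma_window = malloc(DMA_WINDOW_SIZE); /* stands in for the window */
	if (!dma_window)
		return 1;

	char frame[] = "frame data";
	struct pv_dma_mapping m;
	if (pv_dma_map_single(&m, frame, sizeof(frame)) == 0) {
		printf("bounced %zu bytes into the DMA window\n", m.len);
		pv_dma_unmap_single(&m);
	}
	free(dma_window);
	return 0;
}

A real guest would of course have to recycle bounce slots and
presumably fall back to a hypercall when the window is exhausted, but
in the common case the memcpy per mapping is the only per-I/O cost.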

Joerg
