Message-Id: <1181761960.16287.34.camel@ghaskins-t60p.haskins.net>
Date:	Wed, 13 Jun 2007 15:12:40 -0400
From:	Gregory Haskins <ghaskins@...ell.com>
To:	David Brown <dmlb2000@...il.com>
Cc:	kvm-devel@...ts.sourceforge.net,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [kvm-devel] kvm + IOMMU

On Wed, 2007-06-13 at 11:52 -0700, David Brown wrote:
> > Hi David,
> >   I am not an expert here, but I don't believe it would work without
> > changes to KVM.  My understanding is that you use an IOMMU in this
> > fashion when you want to direct-map a device into a guest and the
> > device does not have IOMMU-like functionality built in already.  For
> > instance, perhaps you want to assign an off-the-shelf ethernet NIC to a
> > guest.  The IOMMU would serve to translate between guest-physical
> > addresses (GPAs) and system DMA addresses.  However, the hypervisor
> > would really need to be involved in setting up this mapping on the
> > IOMMU in the first place.
> 
> Okay, it's understandable that the initial setup of the mapping between
> virtual and actual addresses would be done by some OS (most likely the
> host).  However, isn't the actual mapping, once the guest starts and
> requests devices, supposed to be handled by hardware?  I would think
> performance wouldn't scale very well if the host OS had to maintain
> mappings and translate addresses every time a guest requested access
> to a mapped device.

Sorry, I didn't word that very well.  The hypervisor would have to do
the initial setup, and possibly any bookkeeping updates as the state of
the IOMMU mappings changes over time (if applicable).  But this is
"slow-path" type stuff.  Once the mappings are in place, memory accesses
are handled by the hardware at full speed.
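
As a concrete sketch of that slow path: the VFIO type-1 interface that
Linux later grew for device assignment performs exactly this one-time
mapping from userspace.  (VFIO postdates this thread; the group number,
buffer, and error handling below are placeholders, not a real setup.)

#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Map the memory backing guest RAM into the device's IOMMU view, so the
 * device's DMA addresses (the guest's GPAs) resolve to host memory. */
int map_guest_ram(void *guest_ram, uint64_t ram_size)
{
	int container = open("/dev/vfio/vfio", O_RDWR);
	int group = open("/dev/vfio/26", O_RDWR);  /* hypothetical group */

	/* Attach the device's IOMMU group to the container and select
	 * the type-1 (page-table based) IOMMU backend. */
	ioctl(group, VFIO_GROUP_SET_CONTAINER, &container);
	ioctl(container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

	/* The one-time "slow path": device-visible IOVA 0 (the guest's
	 * GPA 0) now points at the host memory backing guest RAM. */
	struct vfio_iommu_type1_dma_map map = {
		.argsz = sizeof(map),
		.flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
		.vaddr = (uintptr_t)guest_ram,
		.iova  = 0,
		.size  = ram_size,
	};
	return ioctl(container, VFIO_IOMMU_MAP_DMA, &map);
	/* Every DMA after this ioctl is translated by hardware; software
	 * is not involved per access. */
}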

This is akin to how an OS sets up the regular MMU, but then the task
runs at full speed directly on the memory until something like a fault
occurs.  Essentially, IOMMUs perform translation for the peripheral
buses "in" to memory, just as a regular MMU performs translation "out".
Someone has to be behind the wheel configuring the translation tables.
However, once they are set up, accesses to memory within the configured
regions happen without software intervention.
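
To make the translation-table picture concrete, here is a toy,
single-level model of the lookup the hardware performs on each DMA
access.  (Real IOMMUs walk multi-level page tables with present and
permission bits; every name here is made up for illustration.)

#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)
#define NR_ENTRIES 512

/* One flat table: device-visible (guest-physical) page -> host page. */
static uint64_t toy_iommu_table[NR_ENTRIES];

/* Slow path, done once by the hypervisor. */
static void toy_map(uint64_t guest_pa, uint64_t host_pa)
{
	toy_iommu_table[guest_pa >> PAGE_SHIFT] = host_pa & ~(PAGE_SIZE - 1);
}

/* Fast path, done in hardware on every access: a pure table lookup. */
static uint64_t toy_translate(uint64_t device_addr)
{
	uint64_t idx = device_addr >> PAGE_SHIFT;
	if (idx >= NR_ENTRIES || toy_iommu_table[idx] == 0)
		return ~0ULL;  /* a real IOMMU would raise a fault */
	return toy_iommu_table[idx] | (device_addr & (PAGE_SIZE - 1));
}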

> 
> > KVM (currently) virtualizes/emulates all components in the logical
> > "system" presented to the guest.  It doesn't yet support the notion of
> > direct-mapping a physical component.  I doubt you will have to wait too
> > long for someone to add this feature, however :)  It's just not there
> > today (to my knowledge, anyway).
> 
> That's good to hear. :)
> 
> > But to answer your question, when configured like this the IO
> > subsystem in question should perform pretty close to native (at
> > least in theory).
> 
> Hopefully you mean that the hardware is handling the mapping, so that
> the host OS won't have to bear the burden of mapping a bunch of
> addresses all the time.

That's correct.

> 
> Thanks,
> - David Brown
