Date:	Wed, 2 Jun 2010 14:21:00 +0300
From:	"Michael S. Tsirkin" <mst@...hat.com>
To:	Joerg Roedel <joro@...tes.org>
Cc:	Avi Kivity <avi@...hat.com>, Tom Lyon <pugs@...co.com>,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	chrisw@...s-sol.org, hjk@...utronix.de, gregkh@...e.de,
	aafabbri@...co.com, scofeldm@...co.com
Subject: Re: [PATCH] VFIO driver: Non-privileged user level PCI drivers

On Wed, Jun 02, 2010 at 01:12:25PM +0200, Joerg Roedel wrote:
> On Wed, Jun 02, 2010 at 01:38:28PM +0300, Michael S. Tsirkin wrote:
> > On Wed, Jun 02, 2010 at 12:35:16PM +0200, Joerg Roedel wrote:
> 
> > > With the userspace interface a process can create io-page-faults
> > > anyway if it wants. We can't protect ourselves from this.
> > 
> > We could fail all operations until an iommu is bound.  This will help
> > catch bugs with access before setup. We can not do this if a domain is
> > bound by default.
> 
> Even if it is bound to a domain the userspace driver could program the
> device to do DMA to unmapped regions, causing io-page-faults. The kernel
> can't do anything about it.

It can always corrupt its own memory directly as well :)
But that is not a reason to skip error detection where we can,
or to give up on making the API hard to misuse.

> > > The second IOMMU_MAP ioctl is just to show that existing mappings would
> > > be destroyed if the device is assigned to another address space. Not
> > > strictly necessary. So we have two ioctls but save one call to create
> > > the iommu-domain.
> > 
> > With 10 devices you have 10 extra ioctls.
> 
> And this works implicitly with your proposal?

Yes. So you do:
iommu = open
ioctl(dev1, BIND, iommu)
ioctl(dev2, BIND, iommu)
ioctl(dev3, BIND, iommu)
ioctl(dev4, BIND, iommu)

No need to add a SHARE ioctl.
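
For illustration only, a minimal userspace sketch of that flow in C.
The device node names ("/dev/uiommu", "/dev/vfio0", ...) and the BIND
request number are hypothetical placeholders for the interface being
discussed here, not an existing kernel API:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>

#define VFIO_IOMMU_BIND 1		/* hypothetical ioctl request number */

int main(void)
{
	/* One fd represents the shared iommu domain. */
	int iommu = open("/dev/uiommu", O_RDWR);	/* hypothetical node */
	if (iommu < 0) {
		perror("open iommu");
		return 1;
	}

	const char *devs[] = { "/dev/vfio0", "/dev/vfio1",
			       "/dev/vfio2", "/dev/vfio3" };

	/* One BIND ioctl per device attaches it to the shared domain;
	 * no extra SHARE ioctl is needed. */
	for (int i = 0; i < 4; i++) {
		int dev = open(devs[i], O_RDWR);
		if (dev < 0 || ioctl(dev, VFIO_IOMMU_BIND, iommu) < 0)
			perror(devs[i]);
	}
	return 0;
}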


> Remember that we still
> need to be able to provide separate mappings for each device to support
> IOMMU emulation for the guest.

Generally not true. E.g. the guest can enable iommu passthrough
or have one domain per group of devices.

> I think my proposal does not have any
> extra costs.

With my proposal we have 1 ioctl per device + 1 per domain.
With yours we have 2 ioctls per device if the iommu is shared
and 1 if it is not shared (e.g. 10 devices sharing one domain:
11 operations vs. 20).

As current apps share the iommu, it seems to make sense
to optimize for that.

> > > Because we express here that "dev2 shares the iommu mappings of dev1".
> > > That's easy to remember.
> > 
> > They both share the mappings. Which one gets the iommu
> > destroyed (breaking the device if it is now doing DMA)?
> 
> As I wrote the domain has a reference count and is destroyed only when
> it goes down to zero. This does not happen as long as a device is bound
> to it.
> 
> 	Joerg

We were talking about an UNSHARE ioctl:
ioctl(dev1, UNSHARE, dev2)
Does it change the domain for dev1 or for dev2?
If you make a mistake you get a hard-to-debug bug.
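
To make the ambiguity concrete, a hedged sketch (the UNSHARE request
number and the helper below are hypothetical, not an existing interface):

#include <sys/ioctl.h>

#define VFIO_UNSHARE 2			/* hypothetical ioctl request number */

/* Hypothetical wrapper around the call above. Which device is moved to a
 * fresh private domain: the one the ioctl is issued on (dev1), or the one
 * passed as an argument (dev2)? Either reading is plausible, and a caller
 * who assumes the wrong one silently breaks DMA on the other device. */
static int vfio_unshare(int dev1, int dev2)
{
	return ioctl(dev1, VFIO_UNSHARE, dev2);
}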

-- 
MST
