lists.openwall.net   lists  /  announce  owl-users  owl-dev  john-users  john-dev  passwdqc-users  yescrypt  popa3d-users  /  oss-security  kernel-hardening  musl  sabotage  tlsify  passwords  /  crypt-dev  xvendor  /  Bugtraq  Full-Disclosure  linux-kernel  linux-netdev  linux-ext4  linux-hardening  linux-cve-announce  PHC 
Open Source and information security mailing list archives
 
Hash Suite: Windows password security audit tool. GUI, reports in PDF.
[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Date:	Wed, 02 Jun 2010 05:59:49 +0300
From:	Avi Kivity <avi@...hat.com>
To:	Tom Lyon <pugs@...n-about.com>
CC:	"Michael S. Tsirkin" <mst@...hat.com>,
	linux-kernel@...r.kernel.org, kvm@...r.kernel.org,
	chrisw@...s-sol.org, joro@...tes.org, hjk@...utronix.de,
	gregkh@...e.de, aafabbri@...co.com, scofeldm@...co.com,
	alex.williamson@...hat.com
Subject: Re: [PATCH] VFIO driver: Non-privileged user level PCI drivers

On 06/02/2010 12:26 AM, Tom Lyon wrote:
>
> I'm not really opposed to multiple devices per domain, but let me point out how I
> ended up here.  First, the driver has two ways of mapping pages, one based on the
> iommu api and one based on the dma_map_sg api.  With the latter, the system
> already allocates a domain per device and there's no way to control it. This was
> presumably done to help isolation between drivers.  If there are multiple drivers
> in the user level, do we not want the same isoation to apply to them?
>    

In the case of kvm, we don't want isolation between devices, because 
that doesn't happen on real hardware.  So if the guest programs devices 
to dma to each other, we want that to succeed.

> Also, domains are not a very scarce resource - my little core i5 has 256,
> and the intel architecture goes to 64K.
>    

But there is a 0.2% of mapped memory per domain cost for the page 
tables.  For the kvm use case, that could be significant since a guest 
may have large amounts of memory and large numbers of assigned devices.

> And then there's the fact that it is possible to have multiple disjoint iommus on a system,
> so it may not even be possible to bring 2 devices under one domain.
>    

That's indeed a deficiency.

> Given all that, I am inclined to leave it alone until someone has a real problem.
> Note that not sharing iommu domains doesn't mean you can't share device memory,
> just that you have to do multiple mappings
>    

I think we do have a real problem (though a mild one).

The only issue I see with deferring the solution is that the API becomes 
gnarly; both the kernel and userspace will have to support both APIs 
forever.  Perhaps we can implement the new API but defer the actual 
sharing until later, don't know how much work this saves.  Or Alex/Chris 
can pitch in and help.

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/

Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ