Message-ID: <20141027175835.GC6202@8bytes.org>
Date:	Mon, 27 Oct 2014 18:58:35 +0100
From:	Joerg Roedel <joro@...tes.org>
To:	Gerald Schaefer <gerald.schaefer@...ibm.com>
Cc:	Frank Blaschka <blaschka@...ux.vnet.ibm.com>,
	schwidefsky@...ibm.com, linux-kernel@...r.kernel.org,
	linux-s390@...r.kernel.org, iommu@...ts.linux-foundation.org,
	sebott@...ux.vnet.ibm.com
Subject: Re: [PATCH linux-next] iommu: add iommu for s390 platform

On Mon, Oct 27, 2014 at 06:02:19PM +0100, Gerald Schaefer wrote:
> On Mon, 27 Oct 2014 17:25:02 +0100
> Joerg Roedel <joro@...tes.org> wrote:
> > Is there some hardware reason for this, or is that just an
> > implementation detail that can be changed? In other words, does the
> > hardware allow the same DMA table to be used for multiple devices?
> 
> Yes, the HW would allow shared DMA tables, but the implementation would
> need some non-trivial changes. For example, we have a per-device spin_lock
> for DMA table manipulations and the code in arch/s390/pci/pci_dma.c knows
> nothing about IOMMU domains or shared DMA tables, it just implements a set
> of dma_map_ops.

I think it would make sense to move the DMA table handling code and the
dma_map_ops implementation to the IOMMU driver too. This is also how
some other IOMMU drivers implement it.

The plan is to consolidate the dma_ops implementations someday and have
a common implementation that works with all IOMMU drivers across
architectures. This would benefit s390 as well and would obsolete the
driver-specific dma_ops implementation.
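As a rough sketch of what such a consolidated layer could look like (the
generic_* names and helpers here are invented for illustration, not
actual kernel interfaces), a common dma_map_ops implementation would
delegate the actual mapping to the IOMMU-API instead of driver-private
DMA table code:

```c
/*
 * Hypothetical sketch only: a generic dma_map_ops built on top of the
 * IOMMU-API. get_default_domain() and alloc_iova_range() are invented
 * placeholders for domain lookup and IOVA allocation.
 */
#include <linux/dma-mapping.h>
#include <linux/iommu.h>

static dma_addr_t generic_map_page(struct device *dev, struct page *page,
				   unsigned long offset, size_t size,
				   enum dma_data_direction dir,
				   struct dma_attrs *attrs)
{
	struct iommu_domain *domain = get_default_domain(dev); /* hypothetical */
	dma_addr_t iova = alloc_iova_range(dev, size);         /* hypothetical */

	/* The mapping itself goes through the IOMMU-API. */
	if (iommu_map(domain, iova, page_to_phys(page) + offset,
		      size, IOMMU_READ | IOMMU_WRITE))
		return DMA_ERROR_CODE;

	return iova + offset;
}

static struct dma_map_ops generic_iommu_dma_ops = {
	.map_page = generic_map_page,
	/* .unmap_page, .map_sg, ... would follow the same pattern */
};
```

With something like this in place, an architecture only has to provide
the iommu_ops for its hardware and gets the dma_map_ops for free.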

> Of course this would also go horribly wrong if a device was already
> in use (via the current dma_map_ops), but I guess using devices through
> the IOMMU_API prevents using them otherwise?

This is taken care of by the device drivers. A driver for a device
either uses the DMA-API or does its own management of DMA mappings
through the IOMMU-API. VFIO is an example of the latter case.
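To make the split concrete, here is a minimal sketch of the two paths
(error handling omitted; the iova/phys_addr values are placeholders).
With the DMA-API the core picks the bus address; with the IOMMU-API the
caller owns the domain and chooses the IOVA itself:

```c
/* DMA-API path: the driver delegates address management to the core. */
dma_addr_t handle = dma_map_single(dev, buf, size, DMA_TO_DEVICE);

/* IOMMU-API path: the caller allocates and manages the domain. */
struct iommu_domain *domain = iommu_domain_alloc(&pci_bus_type);
iommu_attach_device(domain, dev);
iommu_map(domain, iova, phys_addr, size, IOMMU_READ | IOMMU_WRITE);
```

A given device is only ever driven through one of the two paths at a
time, which is why the two cannot step on each other.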

> > I think it is much easier to use the same DMA table for all devices
> > in a domain, if the hardware allows that.
> 
> Yes, in this case, having one DMA table per domain and sharing it
> between all devices in that domain sounds like a good idea. However,
> I can't think of any use case for this, and Frank probably had a very
> special use case in mind where this scenario doesn't appear, hence the
> "one device per domain" restriction.

One use case is device access from user space via VFIO. A userspace
process might want to access multiple devices at the same time, and VFIO
implements this by assigning all of these devices to the same IOMMU
domain.
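The multi-device case is then just repeated attach calls against one
shared domain, so a single set of mappings (on s390, a single DMA table)
covers all of the devices. A sketch, with dev_a/dev_b as placeholder
devices:

```c
struct iommu_domain *domain = iommu_domain_alloc(&pci_bus_type);

/* Both devices share the same domain, i.e. the same DMA table. */
iommu_attach_device(domain, dev_a);
iommu_attach_device(domain, dev_b);

/* A single mapping is now visible to both devices. */
iommu_map(domain, iova, phys, size, IOMMU_READ | IOMMU_WRITE);
```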

This requirement also comes from the IOMMU-API itself. The intention of
the API is to make different IOMMUs look the same through the API, and
this is violated when drivers implement a 1-1 domain->device mapping.

> So, if having multiple devices per domain is a must, then we probably
> need a thorough rewrite of the arch/s390/pci/pci_dma.c code.

Yes, this is a requirement for new IOMMU drivers. We already have
drivers implementing the same 1-1 relation and we are about to fix them.
But I don't want to add new drivers doing the same.


	Joerg

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
