Message-ID: <20141027191840.4c76e9ac@thinkpad>
Date: Mon, 27 Oct 2014 19:18:40 +0100
From: Gerald Schaefer <gerald.schaefer@...ibm.com>
To: Joerg Roedel <joro@...tes.org>
Cc: Frank Blaschka <blaschka@...ux.vnet.ibm.com>,
schwidefsky@...ibm.com, linux-kernel@...r.kernel.org,
linux-s390@...r.kernel.org, iommu@...ts.linux-foundation.org,
sebott@...ux.vnet.ibm.com
Subject: Re: [PATCH linux-next] iommu: add iommu for s390 platform
On Mon, 27 Oct 2014 18:58:35 +0100
Joerg Roedel <joro@...tes.org> wrote:
> On Mon, Oct 27, 2014 at 06:02:19PM +0100, Gerald Schaefer wrote:
> > On Mon, 27 Oct 2014 17:25:02 +0100
> > Joerg Roedel <joro@...tes.org> wrote:
> > > Is there some hardware reason for this, or is that just an
> > > implementation detail that can be changed? In other words, does
> > > the hardware allow using the same DMA table for multiple devices?
> >
> > Yes, the HW would allow shared DMA tables, but the implementation
> > would need some non-trivial changes. For example, we have a
> > per-device spin_lock for DMA table manipulations and the code in
> > arch/s390/pci/pci_dma.c knows nothing about IOMMU domains or shared
> > DMA tables, it just implements a set of dma_map_ops.
>
> I think it would make sense to move the DMA table handling code and
> the dma_map_ops implementation to the IOMMU driver too. This is also
> how some other IOMMU drivers implement it.
Yes, I feared that this would come up, but I agree that it looks like the
best solution, at least if we really want/need the IOMMU API for s390 now.
I'll need to discuss this with Frank, who seems to be on vacation this week.
Thanks for your feedback and explanations!
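
For reference, the current per-device setup looks roughly like this
(simplified sketch, not the actual arch/s390/pci/pci_dma.c code; the
struct name, the drvdata lookup and the helper alloc_iova_and_map() are
made up for illustration):

    #include <linux/dma-mapping.h>
    #include <linux/spinlock.h>

    /* per-device state, sketched; the real code keeps equivalent
     * state in its zPCI device structure */
    struct s390_dma_state {
            unsigned long *dma_table;   /* table owned by this one device */
            spinlock_t     table_lock;  /* serializes updates to that table */
    };

    static dma_addr_t s390_dma_map_page(struct device *dev, struct page *page,
                                        unsigned long offset, size_t size,
                                        enum dma_data_direction dir,
                                        struct dma_attrs *attrs)
    {
            struct s390_dma_state *s = dev_get_drvdata(dev); /* illustrative */
            unsigned long flags;
            dma_addr_t dma_addr;

            spin_lock_irqsave(&s->table_lock, flags);
            /* pick a free IOVA range and write the table entries for it */
            dma_addr = alloc_iova_and_map(s->dma_table,
                                          page_to_phys(page) + offset, size);
            spin_unlock_irqrestore(&s->table_lock, flags);

            return dma_addr;
    }

    static struct dma_map_ops s390_pci_dma_ops = {
            .map_page = s390_dma_map_page,
            /* .unmap_page, .map_sg, .unmap_sg, .alloc, .free, ... */
    };

Moving this into an IOMMU driver would mean the table and its lock belong
to whatever the device gets attached to, instead of to the device itself.
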
> The plan is to consolidate the dma_ops implementations someday and
> have a common implementation that works with all IOMMU drivers across
> architectures. This would benefit s390 as well and would obsolete the
> driver-specific dma_ops implementation.
>
> > Of course this would also go horribly wrong if a device was already
> > in use (via the current dma_map_ops), but I guess using devices
> > through the IOMMU_API prevents using them otherwise?
>
> This is taken care of by the device drivers. A driver for a device
> either uses the DMA-API or does its own management of DMA mappings
> using the IOMMU-API. VFIO is an example of the latter case.
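
To spell the difference out for myself, a rough sketch (assuming dev, page,
iova and an already set-up domain dom exist):

    /* DMA-API: the driver asks for a bus address; the arch/IOMMU code
     * picks the IOVA and programs the translation table behind the scenes */
    dma_addr_t bus_addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE);

    /* IOMMU-API: the caller (e.g. VFIO) chooses the IOVA itself and maps
     * it explicitly into the domain's table */
    int ret = iommu_map(dom, iova, page_to_phys(page), PAGE_SIZE,
                        IOMMU_READ | IOMMU_WRITE);

So a given device is driven through one interface or the other, not both
at once.
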
>
> > > I think it is much easier to use the same DMA table for all
> > > devices in a domain, if the hardware allows that.
> >
> > Yes, in this case, having one DMA table per domain and sharing it
> > between all devices in that domain sounds like a good idea. However,
> > I can't think of any use case for this, and Frank probably had a
> > very special use case in mind where this scenario doesn't appear,
> > hence the "one device per domain" restriction.
>
> One use case is device access from user-space via VFIO. A userspace
> process might want to access multiple devices at the same time and
> VFIO would implement this by assigning all of these devices to the
> same IOMMU domain.
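
So for that case the calls would look roughly like this (sketch only, with
two hypothetical PCI devices pdev_a and pdev_b, and iova/phys assumed to be
set up already):

    struct iommu_domain *dom = iommu_domain_alloc(&pci_bus_type);

    iommu_attach_device(dom, &pdev_a->dev);
    iommu_attach_device(dom, &pdev_b->dev);

    /* a single map call makes the range usable by both devices */
    iommu_map(dom, iova, phys, SZ_1M, IOMMU_READ | IOMMU_WRITE);

and with one DMA table per device, that second attach is exactly what we
cannot support today.
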
>
> This requirement also comes from the IOMMU-API itself. The
> intention of the API is to make different IOMMUs look the same through
> the API, and this is violated when drivers implement a 1-1
> domain->device mapping.
>
> > So, if having multiple devices per domain is a must, then we
> > probably need a thorough rewrite of the arch/s390/pci/pci_dma.c
> > code.
>
> Yes, this is a requirement for new IOMMU drivers. We already have
> drivers implementing the same 1-1 relation and we are about to fix
> them. But I don't want to add new drivers doing the same.
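
Understood. For such a rewrite I would expect the per-domain data to look
something like this (just a first sketch of one possible shape, hanging off
iommu_domain->priv):

    struct s390_domain {
            unsigned long   *dma_table;  /* shared by all attached devices */
            spinlock_t       table_lock; /* protects updates to the table */
            struct list_head devices;    /* zPCI devices attached to this domain */
            spinlock_t       list_lock;  /* protects the device list */
    };

with attach/detach adding and removing devices from the list and pointing
their hardware registration at the shared table.
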
>
>
> Joerg
>