Message-ID: <56990C40.4050407@codeaurora.org>
Date: Fri, 15 Jan 2016 10:12:00 -0500
From: Sinan Kaya <okaya@...eaurora.org>
To: Mark Rutland <mark.rutland@....com>
Cc: dmaengine@...r.kernel.org, timur@...eaurora.org,
devicetree@...r.kernel.org, cov@...eaurora.org,
vinod.koul@...el.com, jcm@...hat.com, agross@...eaurora.org,
arnd@...db.de, linux-arm-msm@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
kvmarm@...ts.cs.columbia.edu, marc.zyngier@....com,
christoffer.dall@...aro.org, shankerd@...eaurora.org,
Vikram Sethi <vikrams@...eaurora.org>
Subject: Re: [PATCH V12 3/7] dma: add Qualcomm Technologies HIDMA management
driver
Hi Mark,
On 1/15/2016 9:56 AM, Mark Rutland wrote:
> Hi,
>
> [adding KVM people, given this is meant for virtualization]
>
> On Mon, Jan 11, 2016 at 09:45:43AM -0500, Sinan Kaya wrote:
>> The Qualcomm Technologies HIDMA device has been designed to support
>> virtualization technology. The driver has been divided into two to follow
>> the hardware design.
>>
>> 1. HIDMA Management driver
>> 2. HIDMA Channel driver
>>
>> Each HIDMA HW instance consists of multiple channels that share a set
>> of common parameters. These parameters are initialized by the management
>> driver during power-up. The same management driver is used for monitoring
>> the execution of the channels, and it can change performance behavior
>> dynamically, such as bandwidth allocation and prioritization.
>>
>> The management driver is executed in hypervisor context and is the main
>> management entity for all channels provided by the device.
>
> You mention repeatedly that this is designed for virtualization, but
> looking at the series as it stands today I can't see how this operates
> from the host side.
>
> This doesn't seem to tie into KVM or VFIO, and as far as I can tell
> there's no mechanism for associating channels with a particular virtual
> address space (i.e. no configuration of an external or internal IOMMU),
> nor pinning of guest pages to allow for DMA to occur safely.
I'm using the VFIO platform driver for this purpose. The VFIO platform driver
is capable of assigning any platform device to a guest machine. You just unbind
the HIDMA channel driver in the hypervisor and bind the device to the
vfio-platform driver, using the very same approach you'd use with PCIe.
Of course, this all assumes the presence of an IOMMU driver on the system; the
VFIO driver uses the IOMMU driver to create the mappings.
From the user's perspective, the mechanism is no different from VFIO PCI.
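For reference, the sysfs unbind/rebind sequence would look roughly like the
sketch below. The device and driver names here are assumptions for
illustration (they are not taken from this patch set); the actual HIDMA
channel device name depends on the device tree node on a given system:

```shell
#!/bin/sh
# Sketch: hand a platform device over to vfio-platform via sysfs.
# DEV is a hypothetical example name -- substitute the real channel device.
DEV=f9984000.hidma-chan

# 1. Unbind the channel device from its native driver in the hypervisor.
echo "$DEV" > "/sys/bus/platform/devices/$DEV/driver/unbind"

# 2. Tell the driver core that vfio-platform should claim this device.
echo vfio-platform > "/sys/bus/platform/devices/$DEV/driver_override"

# 3. Bind the device to vfio-platform; it can now be assigned to a guest.
echo "$DEV" > /sys/bus/platform/drivers/vfio-platform/bind
```

This mirrors the driver_override flow used for VFIO PCI device assignment,
just on the platform bus instead of the PCI bus.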
>
> Given that, I'm at a loss as to how this would be used in a hypervisor
> context. What am I missing?
>
> Are there additional patches, or do you have some userspace that works
> with this in some limited configuration?
No, these are the only patches. We have one patch for QEMU, but from the
kernel's perspective this is it.
I just rely on the platform VFIO driver to do the work.
>
> Thanks,
> Mark.
>
Sinan