Message-ID: <6159200.u9U0p8aIkU@wuerfel>
Date:	Mon, 16 Nov 2015 16:58:21 +0100
From:	Arnd Bergmann <arnd@...db.de>
To:	linux-arm-kernel@...ts.infradead.org
Cc:	Sinan Kaya <okaya@...eaurora.org>, dmaengine@...r.kernel.org,
	timur@...eaurora.org, cov@...eaurora.org, jcm@...hat.com,
	linux-arm-msm@...r.kernel.org, linux-kernel@...r.kernel.org,
	agross@...eaurora.org
Subject: Re: [PATCH V5 2/3] dma: add Qualcomm Technologies HIDMA management driver

On Sunday 15 November 2015 15:54:13 Sinan Kaya wrote:
> The Qualcomm Technologies HIDMA device has been designed
> to support virtualization technology. The driver has been
> divided into two pieces to follow the hardware design:
> 
> 1. HIDMA Management driver
> 2. HIDMA Channel driver
> 
> Each HIDMA HW instance consists of multiple channels. These
> channels share a set of common parameters, which are
> initialized by the management driver during power up.
> The same management driver is used to monitor the execution
> of the channels, and it can dynamically change performance
> behavior such as bandwidth allocation and prioritization.
> 
> The management driver is executed in hypervisor context and
> is the main management entity for all channels provided by
> the device.
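
For reference, here is a rough sketch of how such a management tunable
is typically wired up through sysfs in a driver. The struct layout, the
"priority" attribute name and the register offset below are made up for
illustration and are not taken from the patch (the real code lives in
drivers/dma/qcom/hidma_mgmt_sys.c):

/*
 * Illustrative sketch only: hypothetical attribute and register offset,
 * showing the usual show/store pattern for a per-device tunable.
 */
#include <linux/device.h>
#include <linux/io.h>
#include <linux/kernel.h>

struct hidma_mgmt_dev {
	void __iomem *virtaddr;		/* management register space */
	u32 priority;			/* cached value of the knob */
};

static ssize_t priority_show(struct device *dev,
			     struct device_attribute *attr, char *buf)
{
	struct hidma_mgmt_dev *mgmt = dev_get_drvdata(dev);

	return sprintf(buf, "%u\n", mgmt->priority);
}

static ssize_t priority_store(struct device *dev,
			      struct device_attribute *attr,
			      const char *buf, size_t count)
{
	struct hidma_mgmt_dev *mgmt = dev_get_drvdata(dev);
	u32 val;
	int rc;

	rc = kstrtou32(buf, 0, &val);
	if (rc)
		return rc;

	mgmt->priority = val;
	writel(val, mgmt->virtaddr + 0x10);	/* made-up offset */
	return count;
}
static DEVICE_ATTR_RW(priority);

/* registered from probe(), e.g. device_create_file(dev, &dev_attr_priority) */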

Sorry for asking this question so late, but can you explain what the
point is behind this? It seems counterintuitive to me to have a
DMA engine that is meant for speeding up memory-to-memory transfers
and then run it in a virtual machine, where you either need to go
through a virtual IOMMU to set up page table entries, which will
likely cause more performance overhead than you could possibly
gain, or you have to assume that all the guest memory is pinned,
which in turn destroys a lot of the assumptions that we make in
KVM to have useful VM guests.

Where am I going wrong here?

>  .../devicetree/bindings/dma/qcom_hidma_mgmt.txt    |  61 ++++
>  drivers/dma/qcom/Kconfig                           |  10 +
>  drivers/dma/qcom/Makefile                          |   1 +
>  drivers/dma/qcom/hidma_mgmt.c                      | 306 +++++++++++++++++++++
>  drivers/dma/qcom/hidma_mgmt.h                      |  38 +++
>  drivers/dma/qcom/hidma_mgmt_sys.c                  | 231 ++++++++++++++++

Each sysfs file you add needs documentation in Documentation/ABI/.
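
Something along these lines (the path and field values are made up,
just to show the expected format of an entry under
Documentation/ABI/testing/):

What:		/sys/devices/.../priority
Date:		November 2015
KernelVersion:	(version the patch lands in)
Contact:	(driver maintainer)
Description:	Priority weight applied to the DMA channels managed
		by this device. Writing a new value takes effect on
		subsequent transfers.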

	Arnd
