Message-ID: <1393629520-12713-3-git-send-email-santosh.shilimkar@ti.com>
Date:	Fri, 28 Feb 2014 18:18:39 -0500
From:	Santosh Shilimkar <santosh.shilimkar@...com>
To:	<linux-kernel@...r.kernel.org>
CC:	<linux-arm-kernel@...ts.infradead.org>,
	<devicetree@...r.kernel.org>, Sandeep Nair <sandeep_n@...com>,
	Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
	Kumar Gala <galak@...eaurora.org>,
	Olof Johansson <olof@...om.net>, Arnd Bergmann <arnd@...db.de>,
	Grant Likely <grant.likely@...aro.org>,
	Rob Herring <robh+dt@...nel.org>,
	Mark Rutland <mark.rutland@....com>,
	Santosh Shilimkar <santosh.shilimkar@...com>
Subject: [PATCH 2/3] soc: keystone: add QMSS driver

From: Sandeep Nair <sandeep_n@...com>

The QMSS (Queue Manager Sub System) found on Keystone SOCs is one of
the main hardware sub systems and forms the backbone of the Keystone
Multi-core Navigator. QMSS consists of queue managers, packed-data
structure processors (PDSPs), linking RAM, descriptor pools and
infrastructure Packet DMA.

The Queue Manager is a hardware module that is responsible for accelerating
management of the packet queues. Packets are queued/de-queued by writing a
descriptor address to, or reading it from, a particular memory-mapped
location. The PDSPs perform QMSS-related functions like accumulation, QoS,
or event management. Linking RAM registers are used to link the descriptors
which are stored in descriptor RAM. Descriptor RAM is configurable as
internal or external memory.

The QMSS driver manages PDSP setup, linking RAM regions, queue pools
(allocation, push, pop and notify) and descriptor pools. The specifics
of the device tree bindings for QMSS can be found in:
        Documentation/devicetree/bindings/soc/keystone-qmss.txt
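
For orientation, a minimal usage sketch of the queue API added here
(illustrative only -- the queue name is made up and error handling is
trimmed):

	static int example(void)
	{
		struct kqmss_queue *qh;
		dma_addr_t dma;
		unsigned size;

		qh = kqmss_queue_open("example", KQMSS_QUEUE_GP, 0);
		if (IS_ERR(qh))
			return PTR_ERR(qh);

		/* pop a descriptor (e.g. from a free descriptor queue)
		 * and push it back
		 */
		dma = kqmss_queue_pop(qh, &size);
		if (dma)
			kqmss_queue_push(qh, dma, size, 0);

		kqmss_queue_close(qh);
		return 0;
	}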

Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc: Kumar Gala <galak@...eaurora.org>
Cc: Olof Johansson <olof@...om.net>
Cc: Arnd Bergmann <arnd@...db.de>
Cc: Grant Likely <grant.likely@...aro.org>
Cc: Rob Herring <robh+dt@...nel.org>
Cc: Mark Rutland <mark.rutland@....com>
Signed-off-by: Sandeep Nair <sandeep_n@...com>
Signed-off-by: Santosh Shilimkar <santosh.shilimkar@...com>
---
 .../devicetree/bindings/soc/keystone-qmss.txt      |  209 +++
 drivers/Kconfig                                    |    2 +
 drivers/Makefile                                   |    3 +
 drivers/soc/Kconfig                                |    2 +
 drivers/soc/Makefile                               |    5 +
 drivers/soc/keystone/Kconfig                       |   15 +
 drivers/soc/keystone/Makefile                      |    5 +
 drivers/soc/keystone/qmss_acc.c                    |  591 ++++++++
 drivers/soc/keystone/qmss_queue.c                  | 1533 ++++++++++++++++++++
 drivers/soc/keystone/qmss_queue.h                  |  236 +++
 include/linux/soc/keystone_qmss.h                  |  390 +++++
 11 files changed, 2991 insertions(+)
 create mode 100644 Documentation/devicetree/bindings/soc/keystone-qmss.txt
 create mode 100644 drivers/soc/Makefile
 create mode 100644 drivers/soc/keystone/Kconfig
 create mode 100644 drivers/soc/keystone/Makefile
 create mode 100644 drivers/soc/keystone/qmss_acc.c
 create mode 100644 drivers/soc/keystone/qmss_queue.c
 create mode 100644 drivers/soc/keystone/qmss_queue.h
 create mode 100644 include/linux/soc/keystone_qmss.h

diff --git a/Documentation/devicetree/bindings/soc/keystone-qmss.txt b/Documentation/devicetree/bindings/soc/keystone-qmss.txt
new file mode 100644
index 0000000..f975207
--- /dev/null
+++ b/Documentation/devicetree/bindings/soc/keystone-qmss.txt
@@ -0,0 +1,209 @@
+* Texas Instruments Keystone QMSS driver
+
+Required properties:
+- compatible	: Must be "ti,keystone-qmss";
+- clocks	: phandle to the reference clock for this device.
+- queue-range	: <start number>, the total range of queue numbers for the
+		  device, where "start" is the base queue number and "number"
+		  is the queue count.
+- linkram0	: <address size> for internal link ram, where size is the total
+		  link ram entries.
+- linkram1	: <address size> for external link ram, where size is the total
+		  external link ram entries. If the address is specified as
+		  "0", the driver will allocate the memory.
+- qmgrs         : container node for the individual queue managers in the
+                  device. On the Keystone 1 range of devices there should be
+		  only one such node; on Keystone 2 devices there can be more
+		  than one.
+  -- managed-queues	: the actual queues managed by each queue manager
+			  instance, specified as <"base queue #" "# of queues">.
+  -- reg		: Address and length of the register sets for the
+			  peek, status, config, region, push and pop regions.
+  -- reg-names		: Names for the above register regions. The name to be
+			  used is as follows:
+			  - "config" : Queue configuration region.
+			  - "status" : Queue status RAM.
+			  - "region" : Descriptor memory setup region.
+			  - "push"   : Queue Management/Queue Proxy region.
+			  - "pop"    : Queue Management/Queue Proxy region.
+			  - "peek"   : Queue Peek region.
+- queue-pools	: Queue ranges are grouped into 3 types of pools:
+		  - qpend	    : pool of qpend (interruptible) queues
+		  - general-purpose : pool of general queues, primarily used
+				      as free descriptor queues or the
+				      transmit DMA queues.
+		  - accumulator	    : pool of queues on accumulator channel
+		  Each range can have the following properties:
+  -- values		: number of queues to use per queue range, specified as
+			  <"base queue #" "# of queues">.
+  -- interrupts		: Optional property to specify the interrupt mapping
+			  for interruptible queues. The driver additionally
+			  sets the interrupt affinity based on the cpu mask.
+  -- reserved		: Optional property used to reserve the range. Queues
+			  in a reserved range can only be allocated by id.
+  -- accumulator	: Accumulator channel property specified as:
+			  <pdsp-id, channel, entries, pacing mode, latency>
+			  pdsp-id     : QMSS PDSP running accumulator firmware
+					on which the channel has to be
+					configured
+			  channel     : Accumulator channel number
+			  entries     : Size of the accumulator descriptor list
+			  pacing mode : Interrupt pacing mode
+					0 : None, i.e. interrupt on list full
+					1 : Time delay since last interrupt
+					2 : Time delay since first new packet
+					3 : Time delay since last new packet
+			  latency     : time to delay the interrupt, specified
+					in microseconds.
+  -- multi-queue	: Optional property to specify that the channel has to
+			  monitor up to 32 queues starting at the base queue #.
+- descriptor-regions	: Descriptor memory region specification.
+  -- id				: region number.
+  -- values			: number of descriptors in the region,
+				  specified as
+				  <"# of descriptors" "descriptor size">.
+  -- link-index			: start index, i.e. index of the first
+				  descriptor in the region.
+
+Optional properties:
+- dma-coherent	: Present if DMA operations are coherent.
+- pdsps		: PDSP configuration, if any.
+  -- firmware		: firmware to be loaded on the PDSP.
+  -- id			: the qmss pdsp that will run the firmware.
+  -- reg		: Address and length of the register sets of the PDSP
+			  for the iram, reg (control/status), intd and cmd
+			  regions.
+  -- reg-names		: Names for the above register regions. The name to be
+			  used is as follows:
+			  - "iram"   : PDSP internal RAM region.
+			  - "reg"    : PDSP control/status region registers.
+			  - "intd"   : QMSS interrupt distributor registers.
+			  - "cmd"    : PDSP command interface region.
+
+Example:
+
+qmss: qmss@...0000 {
+	compatible = "ti,keystone-qmss";
+	dma-coherent;
+	#address-cells = <1>;
+	#size-cells = <1>;
+	clocks = <&chipclk13>;
+	ranges;
+	queue-range	= <0 0x4000>;
+	linkram0	= <0x100000 0x8000>;
+	linkram1	= <0x0 0x10000>;
+
+	qmgrs {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		qmgr0 {
+			managed-queues = <0 0x2000>;
+			reg = <0x2a40000 0x20000>,
+			      <0x2a06000 0x400>,
+			      <0x2a02000 0x1000>,
+			      <0x2a03000 0x1000>,
+			      <0x2a80000 0x20000>,
+			      <0x2a80000 0x20000>;
+			reg-names = "peek", "status", "config",
+				    "region", "push", "pop";
+		};
+
+		qmgr1 {
+			managed-queues = <0x2000 0x2000>;
+			reg = <0x2a60000 0x20000>,
+			      <0x2a06400 0x400>,
+			      <0x2a04000 0x1000>,
+			      <0x2a05000 0x1000>,
+			      <0x2aa0000 0x20000>,
+			      <0x2aa0000 0x20000>;
+			reg-names = "peek", "status", "config",
+				    "region", "push", "pop";
+		};
+	};
+	queue-pools {
+		qpend {
+			qpend-0 {
+				values = <658 8>;
+				interrupts = <0 40 0xf04 0 41 0xf04 0 42 0xf04
+					     0 43 0xf04 0 44 0xf04 0 45 0xf04
+					     0 46 0xf04 0 47 0xf04>;
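+				/* cells are <GIC_SPI irq flags>; bits [3:0] of
+				 * the flags cell give the trigger type and bits
+				 * [15:8] are used as a cpu affinity mask, so
+				 * 0xf04 is level-high, routed to CPUs 0-3
+				 */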
+			};
+			qpend-1 {
+				values = <8704 16>;
+				interrupts = <0 48 0xf04 0 49 0xf04 0 50 0xf04
+					      0 51 0xf04 0 52 0xf04 0 53 0xf04
+					      0 54 0xf04 0 55 0xf04 0 56 0xf04
+					      0 57 0xf04 0 58 0xf04 0 59 0xf04
+					      0 60 0xf04 0 61 0xf04 0 62 0xf04
+					      0 63 0xf04>;
+				reserved;
+			};
+			qpend-2 {
+				values = <8720 16>;
+				interrupts = <0 64 0xf04 0 65 0xf04 0 66 0xf04
+					      0 67 0xf04 0 68 0xf04 0 69 0xf04
+					      0 70 0xf04 0 71 0xf04 0 72 0xf04
+					      0 73 0xf04 0 74 0xf04 0 75 0xf04
+					      0 76 0xf04 0 77 0xf04 0 78 0xf04
+					      0 79 0xf04>;
+			};
+		};
+		general-purpose {
+			gp-0 {
+				values = <4000 64>;
+			};
+			netcp-tx {
+				values = <640 9>;
+				reserved;
+			};
+		};
+		accumulator {
+			acc-0 {
+				values = <128 32>;
+				accumulator = <0 36 16 2 50>;
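+				/* <pdsp-id channel entries pacing-mode latency>:
+				 * PDSP 0, channel 36, a 16-entry list, pacing
+				 * mode 2 (delay since first new packet), 50 us
+				 */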
+				interrupts = <0 215 0xf01>;
+				multi-queue;
+				reserved;
+			};
+			acc-1 {
+				values = <160 32>;
+				accumulator = <0 37 16 2 50>;
+				interrupts = <0 216 0xf01>;
+				multi-queue;
+			};
+			acc-2 {
+				values = <192 32>;
+				accumulator = <0 38 16 2 50>;
+				interrupts = <0 217 0xf01>;
+				multi-queue;
+			};
+			acc-3 {
+				values = <224 32>;
+				accumulator = <0 39 16 2 50>;
+				interrupts = <0 218 0xf01>;
+				multi-queue;
+			};
+		};
+	};
+	descriptor-regions {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		region-12 {
+			id = <12>;
+			values	= <8192 128>;	/* num_desc desc_size */
+			link-index = <0x4000>;
+		};
+	};
+	pdsps {
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges;
+		pdsp0@...a10000 {
+			firmware = "keystone/qmss_pdsp_acc48_k2_le_1_0_0_8.fw";
+			reg = <0x2a10000 0x1000>,
+			      <0x2a0f000 0x100>,
+			      <0x2a0c000 0x3c8>,
+			      <0x2a20000 0x4000>;
+			reg-names = "iram", "reg", "intd", "cmd";
+			id = <0>;
+		};
+	};
+}; /* qmss */
diff --git a/drivers/Kconfig b/drivers/Kconfig
index 37f955f..3220516 100644
--- a/drivers/Kconfig
+++ b/drivers/Kconfig
@@ -146,6 +146,8 @@ source "drivers/remoteproc/Kconfig"
 
 source "drivers/rpmsg/Kconfig"
 
+source "drivers/soc/Kconfig"
+
 source "drivers/devfreq/Kconfig"
 
 source "drivers/extcon/Kconfig"
diff --git a/drivers/Makefile b/drivers/Makefile
index 0d8e2a4..0c22db8 100644
--- a/drivers/Makefile
+++ b/drivers/Makefile
@@ -146,6 +146,9 @@ obj-$(CONFIG_IOMMU_SUPPORT)	+= iommu/
 obj-$(CONFIG_REMOTEPROC)	+= remoteproc/
 obj-$(CONFIG_RPMSG)		+= rpmsg/
 
+# SOC specific drivers
+obj-y				+= soc/
+
 # Virtualization drivers
 obj-$(CONFIG_VIRT_DRIVERS)	+= virt/
 obj-$(CONFIG_HYPERV)		+= hv/
diff --git a/drivers/soc/Kconfig b/drivers/soc/Kconfig
index 2f9d7d0..59980bd 100644
--- a/drivers/soc/Kconfig
+++ b/drivers/soc/Kconfig
@@ -1,3 +1,5 @@
 menu "SOC specific Drivers"
 
+source "drivers/soc/keystone/Kconfig"
+
 endmenu
diff --git a/drivers/soc/Makefile b/drivers/soc/Makefile
new file mode 100644
index 0000000..c5d141e
--- /dev/null
+++ b/drivers/soc/Makefile
@@ -0,0 +1,5 @@
+#
+# Makefile for the Linux kernel SOC specific device drivers.
+#
+
+obj-$(CONFIG_ARCH_KEYSTONE)		+= keystone/
diff --git a/drivers/soc/keystone/Kconfig b/drivers/soc/keystone/Kconfig
new file mode 100644
index 0000000..0b3131b
--- /dev/null
+++ b/drivers/soc/keystone/Kconfig
@@ -0,0 +1,15 @@
+#
+# TI Keystone Soc drivers
+#
+
+config KEYSTONE_QMSS
+	tristate "Keystone Queue Manager Sub System"
+	depends on ARCH_KEYSTONE
+	help
+	  Say y here to support the Keystone hardware Queue Manager. The
+	  Queue Manager is a hardware module that is responsible for
+	  accelerating management of the packet queues. Packets are queued/
+	  de-queued by writing/reading a descriptor address to/from a
+	  particular memory-mapped location in the Queue Manager module.
+
+	  If unsure, say N.
diff --git a/drivers/soc/keystone/Makefile b/drivers/soc/keystone/Makefile
new file mode 100644
index 0000000..56e66f8
--- /dev/null
+++ b/drivers/soc/keystone/Makefile
@@ -0,0 +1,5 @@
+#
+# TI Keystone SOC drivers
+#
+
+obj-$(CONFIG_KEYSTONE_QMSS)	+= qmss_queue.o qmss_acc.o
diff --git a/drivers/soc/keystone/qmss_acc.c b/drivers/soc/keystone/qmss_acc.c
new file mode 100644
index 0000000..3d61144
--- /dev/null
+++ b/drivers/soc/keystone/qmss_acc.c
@@ -0,0 +1,591 @@
+/*
+ * Keystone accumulator queue manager
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@...com>
+ *		Cyril Chemparathy <cyril@...com>
+ *		Santosh Shilimkar <santosh.shilimkar@...com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/soc/keystone_qmss.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/firmware.h>
+
+#include "qmss_queue.h"
+
+#define kqmss_range_offset_to_inst(kdev, range, q)	\
+	(range->queue_base_inst + (q << kdev->inst_shift))
+
+static void __kqmss_acc_notify(struct kqmss_range_info *range,
+				struct kqmss_acc_channel *acc)
+{
+	struct kqmss_device *kdev = range->kdev;
+	struct kqmss_queue_inst *inst;
+	int range_base, queue;
+
+	range_base = kdev->base_id + range->queue_base;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		for (queue = 0; queue < range->num_queues; queue++) {
+			inst = kqmss_range_offset_to_inst(kdev, range,
+								queue);
+			if (inst->notify_needed) {
+				inst->notify_needed = 0;
+				dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+					range_base + queue);
+				kqmss_queue_notify(inst);
+			}
+		}
+	} else {
+		queue = acc->channel - range->acc_info.start_channel;
+		inst = kqmss_range_offset_to_inst(kdev, range, queue);
+		dev_dbg(kdev->dev, "acc-irq: notifying %d\n",
+			range_base + queue);
+		kqmss_queue_notify(inst);
+	}
+}
+
+static int kqmss_acc_set_notify(struct kqmss_range_info *range,
+				struct kqmss_queue_inst *kq,
+				bool enabled)
+{
+	struct kqmss_pdsp_info *pdsp = range->acc_info.pdsp;
+	struct kqmss_device *kdev = range->kdev;
+	u32 mask, offset;
+
+	/*
+	 * when enabling, we need to re-trigger an interrupt if we
+	 * have descriptors pending
+	 */
+	if (!enabled || atomic_read(&kq->desc_count) <= 0)
+		return 0;
+
+	kq->notify_needed = 1;
+	atomic_inc(&kq->acc->retrigger_count);
+	mask = BIT(kq->acc->channel % 32);
+	offset = ACC_INTD_OFFSET_STATUS(kq->acc->channel);
+	dev_dbg(kdev->dev, "setup-notify: re-triggering irq for %s\n",
+		kq->acc->name);
+	writel_relaxed(mask, pdsp->intd + offset);
+	return 0;
+}
+
+static irqreturn_t kqmss_acc_int_handler(int irq, void *_instdata)
+{
+	struct kqmss_acc_channel *acc;
+	struct kqmss_queue_inst *kq = NULL;
+	struct kqmss_range_info *range;
+	struct kqmss_pdsp_info *pdsp;
+	struct kqmss_acc_info *info;
+	struct kqmss_device *kdev;
+
+	u32 *list, *list_cpu, val, idx, notifies;
+	int range_base, channel, queue = 0;
+	dma_addr_t list_dma;
+
+	range = _instdata;
+	info  = &range->acc_info;
+	kdev  = range->kdev;
+	pdsp  = range->acc_info.pdsp;
+	acc   = range->acc;
+
+	range_base = kdev->base_id + range->queue_base;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0) {
+		for (queue = 0; queue < range->num_irqs; queue++)
+			if (range->irqs[queue].irq == irq)
+				break;
+		kq = kqmss_range_offset_to_inst(kdev, range, queue);
+		acc += queue;
+	}
+
+	channel = acc->channel;
+	list_dma = acc->list_dma[acc->list_index];
+	list_cpu = acc->list_cpu[acc->list_index];
+	dev_dbg(kdev->dev, "acc-irq: channel %d, list %d, virt %p, dma %pad\n",
+		channel, acc->list_index, list_cpu, &list_dma);
+	if (atomic_read(&acc->retrigger_count)) {
+		atomic_dec(&acc->retrigger_count);
+		__kqmss_acc_notify(range, acc);
+		writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+		/* ack the interrupt */
+		writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+			       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+		return IRQ_HANDLED;
+	}
+
+	notifies = readl_relaxed(pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+	WARN_ON(!notifies);
+	dma_sync_single_for_cpu(kdev->dev, list_dma, info->list_size,
+				DMA_FROM_DEVICE);
+
+	for (list = list_cpu; list < list_cpu + (info->list_size / sizeof(u32));
+	     list += ACC_LIST_ENTRY_WORDS) {
+		if (ACC_LIST_ENTRY_WORDS == 1) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x\n",
+				acc->list_index, list, list[0]);
+		} else if (ACC_LIST_ENTRY_WORDS == 2) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x\n",
+				acc->list_index, list, list[0], list[1]);
+		} else if (ACC_LIST_ENTRY_WORDS == 4) {
+			dev_dbg(kdev->dev,
+				"acc-irq: list %d, entry @%p, %08x %08x %08x %08x\n",
+				acc->list_index, list, list[0], list[1],
+				list[2], list[3]);
+		}
+
+		val = list[ACC_LIST_ENTRY_DESC_IDX];
+		if (!val)
+			break;
+
+		if (range->flags & RANGE_MULTI_QUEUE) {
+			queue = list[ACC_LIST_ENTRY_QUEUE_IDX] >> 16;
+			if (queue < range_base ||
+			    queue >= range_base + range->num_queues) {
+				dev_err(kdev->dev,
+					"bad queue %d, expecting %d-%d\n",
+					queue, range_base,
+					range_base + range->num_queues - 1);
+				break;
+			}
+			queue -= range_base;
+			kq = kqmss_range_offset_to_inst(kdev, range,
+								queue);
+		}
+
+		if (atomic_inc_return(&kq->desc_count) >= KQMSS_ACC_DESCS_MAX) {
+			atomic_dec(&kq->desc_count);
+			dev_err(kdev->dev,
+				"acc-irq: queue %d full, entry dropped\n",
+				queue + range_base);
+			continue;
+		}
+
+		idx = atomic_inc_return(&kq->desc_tail) & KQMSS_ACC_DESCS_MASK;
+		kq->descs[idx] = val;
+		kq->notify_needed = 1;
+		dev_dbg(kdev->dev, "acc-irq: enqueue %08x at %d, queue %d\n",
+			val, idx, queue + range_base);
+	}
+
+	__kqmss_acc_notify(range, acc);
+	memset(list_cpu, 0, info->list_size);
+	dma_sync_single_for_device(kdev->dev, list_dma, info->list_size,
+				   DMA_TO_DEVICE);
+
+	/* flip to the other list */
+	acc->list_index ^= 1;
+
+	/* reset the interrupt counter */
+	writel_relaxed(1, pdsp->intd + ACC_INTD_OFFSET_COUNT(channel));
+
+	/* ack the interrupt */
+	writel_relaxed(ACC_CHANNEL_INT_BASE + channel,
+		       pdsp->intd + ACC_INTD_OFFSET_EOI);
+
+	return IRQ_HANDLED;
+}
+
+int kqmss_range_setup_acc_irq(struct kqmss_range_info *range,
+				int queue, bool enabled)
+{
+	struct kqmss_device *kdev = range->kdev;
+	struct kqmss_acc_channel *acc;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+	u32 old, new;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		irq = range->irqs[0].irq;
+		cpu_map = range->irqs[0].cpu_map;
+	} else {
+		acc = range->acc + queue;
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+	}
+
+	old = acc->open_mask;
+	if (enabled)
+		new = old | BIT(queue);
+	else
+		new = old & ~BIT(queue);
+	acc->open_mask = new;
+
+	dev_dbg(kdev->dev,
+		"setup-acc-irq: open mask old %08x, new %08x, channel %s\n",
+		old, new, acc->name);
+
+	if (likely(new == old))
+		return 0;
+
+	if (new && !old) {
+		dev_dbg(kdev->dev,
+			"setup-acc-irq: requesting irq %d for channel %s\n",
+			irq, acc->name);
+		ret = request_irq(irq, kqmss_acc_int_handler, 0, acc->name,
+				  range);
+		if (!ret && cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+
+	if (old && !new) {
+		dev_dbg(kdev->dev, "setup-acc-irq: freeing irq %d for channel %s\n",
+			irq, acc->name);
+		free_irq(irq, range);
+	}
+
+	return ret;
+}
+
+static const char *kqmss_acc_result_str(enum kqmss_acc_result result)
+{
+	static const char * const result_str[] = {
+		[ACC_RET_IDLE]			= "idle",
+		[ACC_RET_SUCCESS]		= "success",
+		[ACC_RET_INVALID_COMMAND]	= "invalid command",
+		[ACC_RET_INVALID_CHANNEL]	= "invalid channel",
+		[ACC_RET_INACTIVE_CHANNEL]	= "inactive channel",
+		[ACC_RET_ACTIVE_CHANNEL]	= "active channel",
+		[ACC_RET_INVALID_QUEUE]		= "invalid queue",
+		[ACC_RET_INVALID_RET]		= "invalid return code",
+	};
+
+	if (result >= ARRAY_SIZE(result_str))
+		return result_str[ACC_RET_INVALID_RET];
+	else
+		return result_str[result];
+}
+
+static enum kqmss_acc_result
+kqmss_acc_write(struct kqmss_device *kdev, struct kqmss_pdsp_info *pdsp,
+		struct kqmss_reg_acc_command *cmd)
+{
+	u32 result;
+
+	dev_dbg(kdev->dev, "acc command %08x %08x %08x %08x %08x\n",
+		cmd->command, cmd->queue_mask, cmd->list_phys,
+		cmd->queue_num, cmd->timer_config);
+
+	writel_relaxed(cmd->timer_config, &pdsp->acc_command->timer_config);
+	writel_relaxed(cmd->queue_num, &pdsp->acc_command->queue_num);
+	writel_relaxed(cmd->list_phys, &pdsp->acc_command->list_phys);
+	writel_relaxed(cmd->queue_mask, &pdsp->acc_command->queue_mask);
+	writel_relaxed(cmd->command, &pdsp->acc_command->command);
+
+	/* wait for the command to clear */
+	do {
+		result = readl_relaxed(&pdsp->acc_command->command);
+	} while ((result >> 8) & 0xff);
+
+	return (result >> 24) & 0xff;
+}
+
+static void kqmss_acc_setup_cmd(struct kqmss_device *kdev,
+				struct kqmss_range_info *range,
+				struct kqmss_reg_acc_command *cmd,
+				int queue)
+{
+	struct kqmss_acc_info *info = &range->acc_info;
+	struct kqmss_acc_channel *acc;
+	int queue_base;
+	u32 queue_mask;
+
+	if (range->flags & RANGE_MULTI_QUEUE) {
+		acc = range->acc;
+		queue_base = range->queue_base;
+		queue_mask = BIT(range->num_queues) - 1;
+	} else {
+		acc = range->acc + queue;
+		queue_base = range->queue_base + queue;
+		queue_mask = 0;
+	}
+
+	memset(cmd, 0, sizeof(*cmd));
+	cmd->command    = acc->channel;
+	cmd->queue_mask = queue_mask;
+	cmd->list_phys  = acc->list_dma[0];
+	cmd->queue_num  = info->list_entries << 16;
+	cmd->queue_num |= queue_base;
+
+	cmd->timer_config = ACC_LIST_ENTRY_TYPE << 18;
+	if (range->flags & RANGE_MULTI_QUEUE)
+		cmd->timer_config |= ACC_CFG_MULTI_QUEUE;
+	cmd->timer_config |= info->pacing_mode << 16;
+	cmd->timer_config |= info->timer_count;
+}
+
+static void kqmss_acc_stop(struct kqmss_device *kdev,
+				struct kqmss_range_info *range,
+				int queue)
+{
+	struct kqmss_reg_acc_command cmd;
+	struct kqmss_acc_channel *acc;
+	enum kqmss_acc_result result;
+
+	acc = range->acc + queue;
+
+	kqmss_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_DISABLE_CHANNEL << 8;
+	result = kqmss_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "stopped acc channel %s, result %s\n",
+		acc->name, kqmss_acc_result_str(result));
+}
+
+static enum kqmss_acc_result kqmss_acc_start(struct kqmss_device *kdev,
+						struct kqmss_range_info *range,
+						int queue)
+{
+	struct kqmss_reg_acc_command cmd;
+	struct kqmss_acc_channel *acc;
+	enum kqmss_acc_result result;
+
+	acc = range->acc + queue;
+
+	kqmss_acc_setup_cmd(kdev, range, &cmd, queue);
+	cmd.command |= ACC_CMD_ENABLE_CHANNEL << 8;
+	result = kqmss_acc_write(kdev, range->acc_info.pdsp, &cmd);
+
+	dev_dbg(kdev->dev, "started acc channel %s, result %s\n",
+		acc->name, kqmss_acc_result_str(result));
+
+	return result;
+}
+
+static int kqmss_acc_init_range(struct kqmss_range_info *range)
+{
+	struct kqmss_device *kdev = range->kdev;
+	struct kqmss_acc_channel *acc;
+	enum kqmss_acc_result result;
+	int queue;
+
+	for (queue = 0; queue < range->num_queues; queue++) {
+		acc = range->acc + queue;
+
+		kqmss_acc_stop(kdev, range, queue);
+		acc->list_index = 0;
+		result = kqmss_acc_start(kdev, range, queue);
+
+		if (result != ACC_RET_SUCCESS)
+			return -EIO;
+
+		if (range->flags & RANGE_MULTI_QUEUE)
+			return 0;
+	}
+	return 0;
+}
+
+static int kqmss_acc_init_queue(struct kqmss_range_info *range,
+				struct kqmss_queue_inst *kq)
+{
+	unsigned id = kq->id - range->queue_base;
+
+	kq->descs = devm_kzalloc(range->kdev->dev,
+				 KQMSS_ACC_DESCS_MAX * sizeof(u32), GFP_KERNEL);
+	if (!kq->descs)
+		return -ENOMEM;
+
+	kq->acc = range->acc;
+	if ((range->flags & RANGE_MULTI_QUEUE) == 0)
+		kq->acc += id;
+	return 0;
+}
+
+static int kqmss_acc_open_queue(struct kqmss_range_info *range,
+				struct kqmss_queue_inst *inst, unsigned flags)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return kqmss_range_setup_acc_irq(range, id, true);
+}
+
+static int kqmss_acc_close_queue(struct kqmss_range_info *range,
+					struct kqmss_queue_inst *inst)
+{
+	unsigned id = inst->id - range->queue_base;
+
+	return kqmss_range_setup_acc_irq(range, id, false);
+}
+
+static int kqmss_acc_free_range(struct kqmss_range_info *range)
+{
+	struct kqmss_device *kdev = range->kdev;
+	struct kqmss_acc_channel *acc;
+	struct kqmss_acc_info *info;
+	int channel, channels;
+
+	info = &range->acc_info;
+
+	if (range->flags & RANGE_MULTI_QUEUE)
+		channels = 1;
+	else
+		channels = range->num_queues;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		if (!acc->list_cpu[0])
+			continue;
+		dma_unmap_single(kdev->dev, acc->list_dma[0],
+				 info->mem_size, DMA_BIDIRECTIONAL);
+		free_pages_exact(acc->list_cpu[0], info->mem_size);
+	}
+	devm_kfree(range->kdev->dev, range->acc);
+	return 0;
+}
+
+struct kqmss_range_ops kqmss_acc_range_ops = {
+	.set_notify	= kqmss_acc_set_notify,
+	.init_queue	= kqmss_acc_init_queue,
+	.open_queue	= kqmss_acc_open_queue,
+	.close_queue	= kqmss_acc_close_queue,
+	.init_range	= kqmss_acc_init_range,
+	.free_range	= kqmss_acc_free_range,
+};
+
+/**
+ * kqmss_init_acc_range: Initialise accumulator ranges
+ *
+ * @kdev:		qmss device
+ * @node:		device node
+ * @range:		qmss range information
+ *
+ * Returns 0 on success, error otherwise
+ */
+int kqmss_init_acc_range(struct kqmss_device *kdev,
+				struct device_node *node,
+				struct kqmss_range_info *range)
+{
+	struct kqmss_acc_channel *acc;
+	struct kqmss_pdsp_info *pdsp;
+	struct kqmss_acc_info *info;
+	int ret, channel, channels;
+	int list_size, mem_size;
+	dma_addr_t list_dma;
+	void *list_mem;
+	u32 config[5];
+
+	range->flags |= RANGE_HAS_ACCUMULATOR;
+	info = &range->acc_info;
+
+	ret = of_property_read_u32_array(node, "accumulator", config, 5);
+	if (ret)
+		return ret;
+
+	info->pdsp_id		= config[0];
+	info->start_channel	= config[1];
+	info->list_entries	= config[2];
+	info->pacing_mode	= config[3];
+	info->timer_count	= config[4] / ACC_DEFAULT_PERIOD;
+
+	if (info->start_channel > ACC_MAX_CHANNEL) {
+		dev_err(kdev->dev, "channel %d invalid for range %s\n",
+			info->start_channel, range->name);
+		return -EINVAL;
+	}
+
+	if (info->pacing_mode > 3) {
+		dev_err(kdev->dev, "pacing mode %d invalid for range %s\n",
+			info->pacing_mode, range->name);
+		return -EINVAL;
+	}
+
+	pdsp = kqmss_find_pdsp(kdev, info->pdsp_id);
+	if (!pdsp) {
+		dev_err(kdev->dev, "pdsp id %d not found for range %s\n",
+			info->pdsp_id, range->name);
+		return -EINVAL;
+	}
+
+	info->pdsp = pdsp;
+	channels = range->num_queues;
+	if (of_get_property(node, "multi-queue", NULL)) {
+		range->flags |= RANGE_MULTI_QUEUE;
+		channels = 1;
+		if (range->queue_base & (32 - 1)) {
+			dev_err(kdev->dev,
+				"misaligned multi-queue accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+		if (range->num_queues > 32) {
+			dev_err(kdev->dev,
+				"too many queues in accumulator range %s\n",
+				range->name);
+			return -EINVAL;
+		}
+	}
+
+	/* figure out list size */
+	list_size  = info->list_entries;
+	list_size *= ACC_LIST_ENTRY_WORDS * sizeof(u32);
+	info->list_size = list_size;
+	mem_size   = PAGE_ALIGN(list_size * 2);
+	info->mem_size  = mem_size;
+	range->acc = devm_kzalloc(kdev->dev, channels * sizeof(*range->acc),
+				  GFP_KERNEL);
+	if (!range->acc)
+		return -ENOMEM;
+
+	for (channel = 0; channel < channels; channel++) {
+		acc = range->acc + channel;
+		acc->channel = info->start_channel + channel;
+
+		/* allocate memory for the two lists */
+		list_mem = alloc_pages_exact(mem_size, GFP_KERNEL | GFP_DMA);
+		if (!list_mem)
+			return -ENOMEM;
+
+		list_dma = dma_map_single(kdev->dev, list_mem, mem_size,
+					  DMA_BIDIRECTIONAL);
+		if (dma_mapping_error(kdev->dev, list_dma)) {
+			free_pages_exact(list_mem, mem_size);
+			return -ENOMEM;
+		}
+
+		memset(list_mem, 0, mem_size);
+		dma_sync_single_for_device(kdev->dev, list_dma, mem_size,
+					   DMA_TO_DEVICE);
+		scnprintf(acc->name, sizeof(acc->name), "hwqueue-acc-%d",
+			  acc->channel);
+		acc->list_cpu[0] = list_mem;
+		acc->list_cpu[1] = list_mem + list_size;
+		acc->list_dma[0] = list_dma;
+		acc->list_dma[1] = list_dma + list_size;
+		dev_dbg(kdev->dev, "%s: channel %d, dma %pad, virt %8p\n",
+			acc->name, acc->channel, &list_dma, list_mem);
+	}
+
+	range->ops = &kqmss_acc_range_ops;
+	return 0;
+}
+EXPORT_SYMBOL_GPL(kqmss_init_acc_range);
diff --git a/drivers/soc/keystone/qmss_queue.c b/drivers/soc/keystone/qmss_queue.c
new file mode 100644
index 0000000..066619d
--- /dev/null
+++ b/drivers/soc/keystone/qmss_queue.c
@@ -0,0 +1,1533 @@
+/*
+ * Keystone Queue Manager subsystem driver
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Authors:	Sandeep Nair <sandeep_n@...com>
+ *		Cyril Chemparathy <cyril@...com>
+ *		Santosh Shilimkar <santosh.shilimkar@...com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/device.h>
+#include <linux/clk.h>
+#include <linux/io.h>
+#include <linux/interrupt.h>
+#include <linux/bitops.h>
+#include <linux/slab.h>
+#include <linux/spinlock.h>
+#include <linux/platform_device.h>
+#include <linux/dma-mapping.h>
+#include <linux/of.h>
+#include <linux/of_irq.h>
+#include <linux/of_device.h>
+#include <linux/of_address.h>
+#include <linux/pm_runtime.h>
+#include <linux/firmware.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/string.h>
+#include <linux/soc/keystone_qmss.h>
+
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+#include "qmss_queue.h"
+
+static struct kqmss_device *kdev;
+static DEFINE_MUTEX(kqmss_dev_lock);
+
+#define kqmss_queue_idx_to_inst(kdev, idx)			\
+	(kdev->instances + (idx << kdev->inst_shift))
+
+#define for_each_handle_rcu(qh, inst)			\
+	list_for_each_entry_rcu(qh, &inst->handles, list)
+
+#define for_each_instance(idx, inst, kdev)		\
+	for (idx = 0, inst = kdev->instances;		\
+	     idx < (kdev)->num_queues_in_use;			\
+	     idx++, inst = kqmss_queue_idx_to_inst(kdev, idx))
+
+/**
+ * kqmss_queue_notify: qmss queue notifier call
+ *
+ * @inst:		qmss queue instance (e.g. an accumulator queue)
+ */
+void kqmss_queue_notify(struct kqmss_queue_inst *inst)
+{
+	struct kqmss_queue *qh;
+
+	if (!inst)
+		return;
+
+	rcu_read_lock();
+	for_each_handle_rcu(qh, inst) {
+		if (atomic_read(&qh->notifier_enabled) <= 0)
+			continue;
+		if (WARN_ON(!qh->notifier_fn))
+			continue;
+		atomic_inc(&qh->stats.notifies);
+		qh->notifier_fn(qh->notifier_fn_arg);
+	}
+	rcu_read_unlock();
+}
+EXPORT_SYMBOL_GPL(kqmss_queue_notify);
+
+static irqreturn_t kqmss_queue_int_handler(int irq, void *_instdata)
+{
+	struct kqmss_queue_inst *inst = _instdata;
+
+	kqmss_queue_notify(inst);
+	return IRQ_HANDLED;
+}
+
+static int kqmss_queue_setup_irq(struct kqmss_range_info *range,
+			  struct kqmss_queue_inst *inst)
+{
+	unsigned queue = inst->id - range->queue_base;
+	unsigned long cpu_map;
+	int ret = 0, irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		cpu_map = range->irqs[queue].cpu_map;
+		ret = request_irq(irq, kqmss_queue_int_handler, 0,
+					inst->irq_name, inst);
+		if (ret)
+			return ret;
+		disable_irq(irq);
+		if (cpu_map) {
+			ret = irq_set_affinity_hint(irq, to_cpumask(&cpu_map));
+			if (ret) {
+				dev_warn(range->kdev->dev,
+					 "Failed to set IRQ affinity\n");
+				return ret;
+			}
+		}
+	}
+	return ret;
+}
+
+static void kqmss_queue_free_irq(struct kqmss_queue_inst *inst)
+{
+	struct kqmss_range_info *range = inst->range;
+	unsigned queue = inst->id - inst->range->queue_base;
+	int irq;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		irq = range->irqs[queue].irq;
+		free_irq(irq, inst);
+	}
+}
+
+static inline bool kqmss_queue_is_busy(struct kqmss_queue_inst *inst)
+{
+	return !list_empty(&inst->handles);
+}
+
+static inline bool kqmss_queue_is_reserved(struct kqmss_queue_inst *inst)
+{
+	return inst->range->flags & RANGE_RESERVED;
+}
+
+static inline bool kqmss_queue_is_shared(struct kqmss_queue_inst *inst)
+{
+	struct kqmss_queue *tmp;
+
+	rcu_read_lock();
+	for_each_handle_rcu(tmp, inst) {
+		if (tmp->flags & KQMSS_QUEUE_SHARED) {
+			rcu_read_unlock();
+			return true;
+		}
+	}
+	rcu_read_unlock();
+	return false;
+}
+
+static inline bool kqmss_queue_match_type(struct kqmss_queue_inst *inst,
+						unsigned type)
+{
+	if ((type == KQMSS_QUEUE_QPEND) &&
+	    (inst->range->flags & RANGE_HAS_IRQ)) {
+		return true;
+	} else if ((type == KQMSS_QUEUE_ACC) &&
+		(inst->range->flags & RANGE_HAS_ACCUMULATOR)) {
+		return true;
+	} else if ((type == KQMSS_QUEUE_GP) &&
+		!(inst->range->flags &
+			(RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ))) {
+		return true;
+	}
+	return false;
+}
+
+static inline struct kqmss_queue_inst *
+kqmss_queue_match_id_to_inst(struct kqmss_device *kdev, unsigned id)
+{
+	struct kqmss_queue_inst *inst;
+	int idx;
+
+	for_each_instance(idx, inst, kdev) {
+		if (inst->id == id)
+			return inst;
+	}
+	return NULL;
+}
+
+static inline struct kqmss_queue_inst *kqmss_queue_find_by_id(int id)
+{
+	if (kdev->base_id <= id &&
+	    kdev->base_id + kdev->num_queues > id) {
+		id -= kdev->base_id;
+		return kqmss_queue_match_id_to_inst(kdev, id);
+	}
+	return NULL;
+}
+
+static struct kqmss_queue *__kqmss_queue_open(struct kqmss_queue_inst *inst,
+				      const char *name, unsigned flags)
+{
+	struct kqmss_queue *qh;
+	unsigned id;
+	int ret = 0;
+
+	qh = devm_kzalloc(inst->kdev->dev, sizeof(*qh), GFP_KERNEL);
+	if (!qh)
+		return ERR_PTR(-ENOMEM);
+
+	qh->flags = flags;
+	qh->inst = inst;
+	id = inst->id - inst->qmgr->start_queue;
+	qh->reg_push = &inst->qmgr->reg_push[id];
+	qh->reg_pop = &inst->qmgr->reg_pop[id];
+	qh->reg_peek = &inst->qmgr->reg_peek[id];
+
+	/* first opener? */
+	if (!kqmss_queue_is_busy(inst)) {
+		struct kqmss_range_info *range = inst->range;
+
+		inst->name = kstrndup(name, KQMSS_NAME_SIZE, GFP_KERNEL);
+		if (range->ops && range->ops->open_queue)
+			ret = range->ops->open_queue(range, inst, flags);
+
+		if (ret) {
+			devm_kfree(inst->kdev->dev, qh);
+			return ERR_PTR(ret);
+		}
+	}
+	list_add_tail_rcu(&qh->list, &inst->handles);
+	return qh;
+}
+
+static struct kqmss_queue *
+kqmss_queue_open_by_id(const char *name, unsigned id, unsigned flags)
+{
+	struct kqmss_queue_inst *inst;
+	struct kqmss_queue *qh;
+
+	mutex_lock(&kqmss_dev_lock);
+
+	qh = ERR_PTR(-ENODEV);
+	inst = kqmss_queue_find_by_id(id);
+	if (!inst)
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EEXIST);
+	if (!(flags & KQMSS_QUEUE_SHARED) && kqmss_queue_is_busy(inst))
+		goto unlock_ret;
+
+	qh = ERR_PTR(-EBUSY);
+	if ((flags & KQMSS_QUEUE_SHARED) &&
+	    (kqmss_queue_is_busy(inst) && !kqmss_queue_is_shared(inst)))
+		goto unlock_ret;
+
+	qh = __kqmss_queue_open(inst, name, flags);
+
+unlock_ret:
+	mutex_unlock(&kqmss_dev_lock);
+
+	return qh;
+}
+
+static struct kqmss_queue *kqmss_queue_open_by_type(const char *name,
+						unsigned type, unsigned flags)
+{
+	struct kqmss_queue_inst *inst;
+	struct kqmss_queue *qh = ERR_PTR(-EINVAL);
+	int idx;
+
+	mutex_lock(&kqmss_dev_lock);
+
+	for_each_instance(idx, inst, kdev) {
+		if (kqmss_queue_is_reserved(inst))
+			continue;
+		if (!kqmss_queue_match_type(inst, type))
+			continue;
+		if (kqmss_queue_is_busy(inst))
+			continue;
+		qh = __kqmss_queue_open(inst, name, flags);
+		goto unlock_ret;
+	}
+
+unlock_ret:
+	mutex_unlock(&kqmss_dev_lock);
+	return qh;
+}
+
+static void kqmss_queue_set_notify(struct kqmss_queue_inst *inst, bool enabled)
+{
+	struct kqmss_range_info *range = inst->range;
+
+	if (range->ops && range->ops->set_notify)
+		range->ops->set_notify(range, inst, enabled);
+}
+
+static int kqmss_queue_enable_notifier(struct kqmss_queue *qh)
+{
+	struct kqmss_queue_inst *inst = qh->inst;
+	bool first;
+
+	if (WARN_ON(!qh->notifier_fn))
+		return -EINVAL;
+
+	/* Adjust the per handle notifier count */
+	first = (atomic_inc_return(&qh->notifier_enabled) == 1);
+	if (!first)
+		return 0; /* nothing to do */
+
+	/* Now adjust the per instance notifier count */
+	first = (atomic_inc_return(&inst->num_notifiers) == 1);
+	if (first)
+		kqmss_queue_set_notify(inst, true);
+
+	return 0;
+}
+
+static int kqmss_queue_disable_notifier(struct kqmss_queue *qh)
+{
+	struct kqmss_queue_inst *inst = qh->inst;
+	bool last;
+
+	last = (atomic_dec_return(&qh->notifier_enabled) == 0);
+	if (!last)
+		return 0; /* nothing to do */
+
+	last = (atomic_dec_return(&inst->num_notifiers) == 0);
+	if (last)
+		kqmss_queue_set_notify(inst, false);
+
+	return 0;
+}
+
+static int kqmss_queue_set_notifier(struct kqmss_queue *qh,
+				struct kqmss_queue_notify_config *cfg)
+{
+	kqmss_queue_notify_fn old_fn = qh->notifier_fn;
+
+	if (!cfg)
+		return -EINVAL;
+
+	if (!(qh->inst->range->flags & (RANGE_HAS_ACCUMULATOR | RANGE_HAS_IRQ)))
+		return -ENOTSUPP;
+
+	if (!cfg->fn && old_fn)
+		kqmss_queue_disable_notifier(qh);
+
+	qh->notifier_fn = cfg->fn;
+	qh->notifier_fn_arg = cfg->fn_arg;
+
+	if (cfg->fn && !old_fn)
+		kqmss_queue_enable_notifier(qh);
+
+	return 0;
+}
+
+static int kqmss_gp_set_notify(struct kqmss_range_info *range,
+			       struct kqmss_queue_inst *inst,
+			       bool enabled)
+{
+	unsigned queue;
+
+	if (range->flags & RANGE_HAS_IRQ) {
+		queue = inst->id - range->queue_base;
+		if (enabled)
+			enable_irq(range->irqs[queue].irq);
+		else
+			disable_irq_nosync(range->irqs[queue].irq);
+	}
+	return 0;
+}
+
+static int kqmss_gp_open_queue(struct kqmss_range_info *range,
+				struct kqmss_queue_inst *inst, unsigned flags)
+{
+	return kqmss_queue_setup_irq(range, inst);
+}
+
+static int kqmss_gp_close_queue(struct kqmss_range_info *range,
+				struct kqmss_queue_inst *inst)
+{
+	kqmss_queue_free_irq(inst);
+	return 0;
+}
+
+struct kqmss_range_ops kqmss_gp_range_ops = {
+	.set_notify	= kqmss_gp_set_notify,
+	.open_queue	= kqmss_gp_open_queue,
+	.close_queue	= kqmss_gp_close_queue,
+};
+
+static void kqmss_queue_debug_show_instance(struct seq_file *s,
+					struct kqmss_queue_inst *inst)
+{
+	struct kqmss_device *kdev = inst->kdev;
+	struct kqmss_queue *qh;
+
+	if (!kqmss_queue_is_busy(inst))
+		return;
+
+	seq_printf(s, "\tqueue id %d (%s)\n",
+		   kdev->base_id + inst->id, inst->name);
+	for_each_handle_rcu(qh, inst) {
+		seq_printf(s, "\t\thandle %p: ", qh);
+		seq_printf(s, "pushes %8d, ",
+			   atomic_read(&qh->stats.pushes));
+		seq_printf(s, "pops %8d, ",
+			   atomic_read(&qh->stats.pops));
+		seq_printf(s, "count %8d, ",
+			   kqmss_queue_get_count(qh));
+		seq_printf(s, "notifies %8d, ",
+			   atomic_read(&qh->stats.notifies));
+		seq_printf(s, "push errors %8d, ",
+			   atomic_read(&qh->stats.push_errors));
+		seq_printf(s, "pop errors %8d\n",
+			   atomic_read(&qh->stats.pop_errors));
+	}
+}
+
+static int kqmss_queue_debug_show(struct seq_file *s, void *v)
+{
+	struct kqmss_queue_inst *inst;
+	int idx;
+
+	mutex_lock(&kqmss_dev_lock);
+	seq_printf(s, "%s: %u-%u\n",
+		   dev_name(kdev->dev), kdev->base_id,
+		   kdev->base_id + kdev->num_queues - 1);
+	for_each_instance(idx, inst, kdev)
+		kqmss_queue_debug_show_instance(s, inst);
+	mutex_unlock(&kqmss_dev_lock);
+
+	return 0;
+}
+
+static int kqmss_queue_debug_open(struct inode *inode, struct file *file)
+{
+	return single_open(file, kqmss_queue_debug_show, NULL);
+}
+
+static const struct file_operations kqmss_queue_debug_ops = {
+	.open		= kqmss_queue_debug_open,
+	.read		= seq_read,
+	.llseek		= seq_lseek,
+	.release	= single_release,
+};
+
+static inline int kqmss_queue_pdsp_wait(u32 __iomem *addr, unsigned timeout,
+					u32 flags)
+{
+	unsigned long end;
+	u32 val = 0;
+
+	end = jiffies + msecs_to_jiffies(timeout);
+	while (time_after(end, jiffies)) {
+		val = readl_relaxed(addr);
+		if (flags)
+			val &= flags;
+		if (!val)
+			break;
+		cpu_relax();
+	}
+	return val ? -ETIMEDOUT : 0;
+}
+
+
+static int kqmss_queue_flush(struct kqmss_queue *qh)
+{
+	struct kqmss_queue_inst *inst = qh->inst;
+	unsigned id = inst->id - inst->qmgr->start_queue;
+
+	atomic_set(&inst->desc_count, 0);
+	writel_relaxed(0, &inst->qmgr->reg_push[id].ptr_size_thresh);
+	return 0;
+}
+
+/**
+ * kqmss_queue_open()	- open a hardware queue
+ * @name		- name to give the queue handle
+ * @id			- desired queue number if any or specifies the type
+ *			  of queue
+ * @flags		- the following flags are applicable to queues:
+ *	KQMSS_QUEUE_SHARED - allow the queue to be shared. Queues are
+ *			     exclusive by default.
+ *			     Subsequent attempts to open a shared queue should
+ *			     also have this flag.
+ *
+ * Returns a handle to the open hardware queue if successful. Use IS_ERR()
+ * to check the returned value for error codes.
+ */
+struct kqmss_queue *kqmss_queue_open(const char *name, unsigned id,
+					unsigned flags)
+{
+	struct kqmss_queue *qh = ERR_PTR(-EINVAL);
+
+	switch (id) {
+	case KQMSS_QUEUE_QPEND:
+	case KQMSS_QUEUE_ACC:
+	case KQMSS_QUEUE_GP:
+		qh = kqmss_queue_open_by_type(name, id, flags);
+		break;
+
+	default:
+		qh = kqmss_queue_open_by_id(name, id, flags);
+		break;
+	}
+	return qh;
+}
+EXPORT_SYMBOL_GPL(kqmss_queue_open);
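+
+/*
+ * Illustrative sketch only (not part of the driver): open a specific queue
+ * by id.  Queue 8704 is the first queue of the reserved qpend-1 range in
+ * the binding example; queues in a reserved range can only be opened by id,
+ * and every opener of a shared queue must pass KQMSS_QUEUE_SHARED.
+ */
+static struct kqmss_queue * __maybe_unused example_open_shared(void)
+{
+	return kqmss_queue_open("shared-q", 8704, KQMSS_QUEUE_SHARED);
+}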
+
+/**
+ * kqmss_queue_close()	- close a hardware queue handle
+ * @qh			- handle to close
+ */
+void kqmss_queue_close(struct kqmss_queue *qh)
+{
+	struct kqmss_queue_inst *inst = qh->inst;
+
+	while (atomic_read(&qh->notifier_enabled) > 0)
+		kqmss_queue_disable_notifier(qh);
+
+	mutex_lock(&kqmss_dev_lock);
+	list_del_rcu(&qh->list);
+	mutex_unlock(&kqmss_dev_lock);
+	synchronize_rcu();
+	if (!kqmss_queue_is_busy(inst)) {
+		struct kqmss_range_info *range = inst->range;
+
+		if (range->ops && range->ops->close_queue)
+			range->ops->close_queue(range, inst);
+	}
+	devm_kfree(inst->kdev->dev, qh);
+}
+EXPORT_SYMBOL_GPL(kqmss_queue_close);
+
+/**
+ * kqmss_queue_device_control()	- Perform control operations on a queue
+ * @qh				- queue handle
+ * @cmd				- control commands
+ * @arg				- command argument
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+int kqmss_queue_device_control(struct kqmss_queue *qh,
+		enum kqmss_queue_ctrl_cmd cmd, unsigned long arg)
+{
+	struct kqmss_queue_notify_config *cfg;
+	int ret;
+
+	switch ((int)cmd) {
+	case KQMSS_QUEUE_GET_ID:
+		ret = qh->inst->kdev->base_id + qh->inst->id;
+		break;
+
+	case KQMSS_QUEUE_FLUSH:
+		ret = kqmss_queue_flush(qh);
+		break;
+
+	case KQMSS_QUEUE_SET_NOTIFIER:
+		cfg = (void *)arg;
+		ret = kqmss_queue_set_notifier(qh, cfg);
+		break;
+
+	case KQMSS_QUEUE_ENABLE_NOTIFY:
+		ret = kqmss_queue_enable_notifier(qh);
+		break;
+
+	case KQMSS_QUEUE_DISABLE_NOTIFY:
+		ret = kqmss_queue_disable_notifier(qh);
+		break;
+
+	default:
+		ret = -ENOTSUPP;
+		break;
+	}
+	return ret;
+}
+EXPORT_SYMBOL_GPL(kqmss_queue_device_control);
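+
+/*
+ * Illustrative sketch only (not part of the driver): install a notifier
+ * callback on a queue handle.  The callback and its argument are
+ * hypothetical; the command and config structure are the ones handled
+ * above.
+ */
+static int __maybe_unused example_set_notifier(struct kqmss_queue *qh,
+					       kqmss_queue_notify_fn fn,
+					       void *arg)
+{
+	struct kqmss_queue_notify_config cfg = {
+		.fn	= fn,
+		.fn_arg	= arg,
+	};
+
+	return kqmss_queue_device_control(qh, KQMSS_QUEUE_SET_NOTIFIER,
+					  (unsigned long)&cfg);
+}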
+
+/* carve out descriptors and push into queue */
+static int kdesc_fill_pool(struct kqmss_pool *pool)
+{
+	struct kqmss_region *region;
+	int i;
+
+	region = pool->region;
+	pool->desc_size = region->desc_size;
+	for (i = 0; i < pool->num_desc; i++) {
+		int index = pool->region_offset + i;
+		dma_addr_t dma_addr;
+		unsigned dma_size;
+		dma_addr = region->dma_start + (region->desc_size * index);
+		dma_size = ALIGN(pool->desc_size, SMP_CACHE_BYTES);
+		dma_sync_single_for_device(pool->kdev->dev, dma_addr, dma_size,
+					   DMA_TO_DEVICE);
+		kqmss_queue_push(pool->queue, dma_addr, dma_size, 0);
+	}
+	return 0;
+}
+
+/* pop out descriptors and close the queue */
+static void kdesc_empty_pool(struct kqmss_pool *pool)
+{
+	dma_addr_t dma;
+	unsigned size;
+	void *desc;
+	int i;
+
+	for (i = 0;; i++) {
+		dma = kqmss_queue_pop(pool->queue, &size);
+		if (!dma)
+			break;
+		desc = kqmss_pool_desc_dma_to_virt(pool, dma);
+		if (!desc) {
+			dev_dbg(pool->kdev->dev,
+				"couldn't unmap desc, continuing\n");
+			continue;
+		}
+	}
+	WARN_ON(i != pool->num_desc);
+	kqmss_queue_close(pool->queue);
+}
+
+/**
+ * kqmss_pool_create()	- Create a pool of descriptors
+ * @name		- name to give the pool handle
+ * @num_desc		- numbers of descriptors in the pool
+ * @region_id		- QMSS region id from which the descriptors are to be
+ *			  allocated.
+ *
+ * Returns a pool handle on success.
+ * Use IS_ERR_OR_NULL() to identify error values on return.
+ */
+struct kqmss_pool *kqmss_pool_create(const char *name,
+					int num_desc, int region_id)
+{
+	struct kqmss_region *reg_itr, *region = NULL;
+	struct kqmss_pool *pool;
+	int ret;
+
+	if (!kdev->dev)
+		return ERR_PTR(-ENODEV);
+
+	pool = devm_kzalloc(kdev->dev, sizeof(*pool), GFP_KERNEL);
+	if (!pool) {
+		dev_err(kdev->dev, "out of memory allocating pool\n");
+		return ERR_PTR(-ENOMEM);
+	}
+
+	for_each_region(kdev, reg_itr) {
+		if (reg_itr->id != region_id)
+			continue;
+		region = reg_itr;
+		break;
+	}
+
+	if (!region) {
+		dev_err(kdev->dev, "region-id(%d) not found\n", region_id);
+		ret = -EINVAL;
+		goto err;
+	}
+
+	pool->queue = kqmss_queue_open(name, KQMSS_QUEUE_GP, 0);
+	if (IS_ERR_OR_NULL(pool->queue)) {
+		dev_err(kdev->dev,
+			"failed to open queue for pool(%s), error %ld\n",
+			name, PTR_ERR(pool->queue));
+		ret = PTR_ERR(pool->queue);
+		goto err;
+	}
+
+	pool->name = kstrndup(name, KQMSS_NAME_SIZE, GFP_KERNEL);
+	pool->kdev = kdev;
+	mutex_lock(&kqmss_dev_lock);
+	if (num_desc > (region->num_desc - region->used_desc)) {
+		dev_err(kdev->dev, "out of descs for pool(%s)\n", pool->name);
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	pool->region = region;
+	pool->num_desc = num_desc;
+	pool->region_offset = region->used_desc;
+	region->used_desc += pool->num_desc;
+	ret = kdesc_fill_pool(pool);
+	if (ret)
+		goto err;
+
+	list_add_tail(&pool->list, &kdev->pools);
+	mutex_unlock(&kqmss_dev_lock);
+	return pool;
+
+err:
+	mutex_unlock(&kqmss_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+	return ERR_PTR(ret);
+}
+EXPORT_SYMBOL_GPL(kqmss_pool_create);
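+
+/*
+ * Illustrative sketch only (not part of the driver): carve 512 descriptors
+ * out of region 12 (the region id used in the binding example) and release
+ * them again.  The pool name and counts are hypothetical.
+ */
+static int __maybe_unused example_pool_usage(void)
+{
+	struct kqmss_pool *pool;
+
+	pool = kqmss_pool_create("example-pool", 512, 12);
+	if (IS_ERR_OR_NULL(pool))
+		return -ENODEV;
+	kqmss_pool_destroy(pool);
+	return 0;
+}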
+
+/**
+ * kqmss_pool_destroy()	- Free a pool of descriptors
+ * @pool		- pool handle
+ */
+void kqmss_pool_destroy(struct kqmss_pool *pool)
+{
+	if (!pool)
+		return;
+
+	if (!pool->region)
+		return;
+
+	kdesc_empty_pool(pool);
+	mutex_lock(&kqmss_dev_lock);
+	list_del(&pool->list);
+	mutex_unlock(&kqmss_dev_lock);
+	kfree(pool->name);
+	devm_kfree(kdev->dev, pool);
+}
+EXPORT_SYMBOL_GPL(kqmss_pool_destroy);
+
+static int kqmss_queue_setup_region(struct kqmss_device *kdev,
+				       struct kqmss_region *region)
+{
+	unsigned hw_num_desc, hw_desc_size, size;
+	int id = region->id;
+	struct kqmss_reg_region __iomem  *regs;
+	struct kqmss_qmgr_info *qmgr;
+	struct page *page;
+
+	/* unused region? */
+	if (!region->num_desc) {
+		dev_warn(kdev->dev, "unused region %s\n", region->name);
+		return 0;
+	}
+
+	/* get hardware descriptor value */
+	hw_num_desc = ilog2(region->num_desc - 1) + 1;
+
+	/* did we force fit ourselves into nothingness? */
+	if (region->num_desc < 32) {
+		region->num_desc = 0;
+		dev_warn(kdev->dev, "too few descriptors in region %s\n",
+			 region->name);
+		return 0;
+	}
+
+	size = region->num_desc * region->desc_size;
+	region->virt_start = alloc_pages_exact(size, GFP_KERNEL | GFP_DMA |
+						GFP_DMA32);
+	if (!region->virt_start) {
+		region->num_desc = 0;
+		dev_err(kdev->dev, "memory alloc failed for region %s\n",
+			region->name);
+		return 0;
+	}
+	region->virt_end = region->virt_start + size;
+	page = virt_to_page(region->virt_start);
+
+	region->dma_start = dma_map_page(kdev->dev, page, 0, size,
+					 DMA_BIDIRECTIONAL);
+	if (dma_mapping_error(kdev->dev, region->dma_start)) {
+		region->num_desc = 0;
+		free_pages_exact(region->virt_start, size);
+		dev_err(kdev->dev, "dma map failed for region %s\n",
+			region->name);
+		return 0;
+	}
+	region->dma_end = region->dma_start + size;
+
+	dev_dbg(kdev->dev,
+		"region %s (%d): size:%d, link:%d@%d, dma:%pad-%pad, virt:%p-%p\n",
+		region->name, id, region->desc_size, region->num_desc,
+		region->link_index, &region->dma_start, &region->dma_end,
+		region->virt_start, region->virt_end);
+
+	hw_desc_size = (region->desc_size / 16) - 1;
+	hw_num_desc -= 5;
+
+	for_each_qmgr(kdev, qmgr) {
+		regs = qmgr->reg_region + id;
+		writel_relaxed(region->dma_start, &regs->base);
+		writel_relaxed(region->link_index, &regs->start_index);
+		writel_relaxed(hw_desc_size << 16 | hw_num_desc,
+			       &regs->size_count);
+	}
+
+	return region->num_desc;
+}
+
+static const char *kqmss_queue_find_name(struct device_node *node)
+{
+	const char *name;
+
+	if (of_property_read_string(node, "label", &name) < 0)
+		name = node->name;
+	if (!name)
+		name = "unknown";
+	return name;
+}
+
+static int kqmss_queue_setup_regions(struct kqmss_device *kdev,
+					struct device_node *regions)
+{
+	struct device *dev = kdev->dev;
+	struct kqmss_region *region;
+	struct device_node *child;
+	u32 temp[2];
+	int ret;
+
+	for_each_child_of_node(regions, child) {
+		region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL);
+		if (!region) {
+			dev_err(dev, "out of memory allocating region\n");
+			return -ENOMEM;
+		}
+
+		region->name = kqmss_queue_find_name(child);
+		of_property_read_u32(child, "id", &region->id);
+		ret = of_property_read_u32_array(child, "values", temp, 2);
+		if (!ret) {
+			region->num_desc  = temp[0];
+			region->desc_size = temp[1];
+		} else {
+			dev_err(dev, "invalid region info %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		if (!of_get_property(child, "link-index", NULL)) {
+			dev_err(dev, "No link info for %s\n", region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+		ret = of_property_read_u32(child, "link-index",
+					   &region->link_index);
+		if (ret) {
+			dev_err(dev, "link index not found for %s\n",
+				region->name);
+			devm_kfree(dev, region);
+			continue;
+		}
+
+		list_add_tail(&region->list, &kdev->regions);
+	}
+	if (list_empty(&kdev->regions)) {
+		dev_err(dev, "no valid region information found\n");
+		return -ENODEV;
+	}
+
+	/* Next, we run through the regions and set things up */
+	for_each_region(kdev, region)
+		kqmss_queue_setup_region(kdev, region);
+
+	return 0;
+}
+
+static int kqmss_get_link_ram(struct kqmss_device *kdev,
+				       const char *name,
+				       struct kqmss_link_ram_block *block)
+{
+	struct platform_device *pdev = to_platform_device(kdev->dev);
+	struct device_node *node = pdev->dev.of_node;
+	u32 temp[2];
+
+	/*
+	 * Note: link ram resources are specified in "entry" sized units. In
+	 * reality, although entries are ~40 bits in hardware, we treat them as
+	 * 64-bit entities here.
+	 *
+	 * For example, to specify the internal link ram for Keystone-I class
+	 * devices, we would set the linkram0 resource to 0x80000-0x83fff.
+	 *
+	 * This gets a bit weird when other link rams are used.  For example,
+	 * if the range specified is 0x0c000000-0x0c003fff (i.e., 16K entries
+	 * in MSMC SRAM), the actual memory used is 0x0c000000-0x0c020000,
+	 * which accounts for 64-bits per entry, for 16K entries.
+	 */
+	if (!of_property_read_u32_array(node, name, temp, 2)) {
+		if (temp[0]) {
+			/*
+			 * queue_base specified => using internal or onchip
+			 * link ram WARNING - we do not "reserve" this block
+			 */
+			block->phys = (dma_addr_t)temp[0];
+			block->virt = NULL;
+			block->size = temp[1];
+		} else {
+			block->size = temp[1];
+			/* queue_base not specified => allocate requested size */
+			block->virt = dmam_alloc_coherent(kdev->dev,
+						  8 * block->size, &block->phys,
+						  GFP_KERNEL);
+			if (!block->virt) {
+				dev_err(kdev->dev, "failed to alloc linkram\n");
+				return -ENOMEM;
+			}
+		}
+	} else {
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static int kqmss_queue_setup_link_ram(struct kqmss_device *kdev)
+{
+	struct kqmss_link_ram_block *block;
+	struct kqmss_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		block = &kdev->link_rams[0];
+		dev_dbg(kdev->dev, "linkram0: dma:%pad, virt:%p, size:%x\n",
+			&block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base0);
+		writel_relaxed(block->size, &qmgr->reg_config->link_ram_size0);
+
+		block++;
+		if (!block->size)
+			return 0;
+
+		dev_dbg(kdev->dev, "linkram1: dma:%pad, virt:%p, size:%x\n",
+			&block->phys, block->virt, block->size);
+		writel_relaxed(block->phys, &qmgr->reg_config->link_ram_base1);
+	}
+
+	return 0;
+}
+
+static int kqmss_setup_queue_range(struct kqmss_device *kdev,
+					struct device_node *node)
+{
+	struct device *dev = kdev->dev;
+	struct kqmss_range_info *range;
+	struct kqmss_qmgr_info *qmgr;
+	u32 temp[2], start, end, id, index;
+	int ret, i;
+
+	range = devm_kzalloc(dev, sizeof(*range), GFP_KERNEL);
+	if (!range) {
+		dev_err(dev, "out of memory allocating range\n");
+		return -ENOMEM;
+	}
+
+	range->kdev = kdev;
+	range->name = kqmss_queue_find_name(node);
+	ret = of_property_read_u32_array(node, "values", temp, 2);
+	if (!ret) {
+		range->queue_base = temp[0] - kdev->base_id;
+		range->num_queues = temp[1];
+	} else {
+		dev_err(dev, "invalid queue range %s\n", range->name);
+		devm_kfree(dev, range);
+		return -EINVAL;
+	}
+
+	for (i = 0; i < RANGE_MAX_IRQS; i++) {
+		struct of_phandle_args oirq;
+
+		if (of_irq_parse_one(node, i, &oirq))
+			break;
+
+		range->irqs[i].irq = irq_create_of_mapping(&oirq);
+		if (!range->irqs[i].irq)
+			break;
+
+		range->num_irqs++;
+		/* If it is an SPI interrupt then extract the interrupt
+		 * cpu mask. This mask will be used later to set the
+		 * CPU affinity for the IRQ.
+		 * For details on encoding of interrupt-cells for ARM GIC,
+		 * refer to Documentation/devicetree/bindings/arm/gic.txt.
+		 */
+		if (oirq.args[0] != GIC_SPI)
+			continue;
+
+		if (oirq.args_count == 3)
+			range->irqs[i].cpu_map =
+				(oirq.args[2] & 0x0000ff00) >> 8;
+	}
+
+	range->num_irqs = min(range->num_irqs, range->num_queues);
+	if (range->num_irqs)
+		range->flags |= RANGE_HAS_IRQ;
+
+	if (of_get_property(node, "reserved", NULL))
+		range->flags |= RANGE_RESERVED;
+
+	if (of_get_property(node, "accumulator", NULL)) {
+		ret = kqmss_init_acc_range(kdev, node, range);
+		if (ret < 0) {
+			devm_kfree(dev, range);
+			return ret;
+		}
+	} else {
+		range->ops = &kqmss_gp_range_ops;
+	}
+
+	/* set threshold to 1, and flush out the queues */
+	for_each_qmgr(kdev, qmgr) {
+		start = max(qmgr->start_queue, range->queue_base);
+		end   = min(qmgr->start_queue + qmgr->num_queues,
+			    range->queue_base + range->num_queues);
+		for (id = start; id < end; id++) {
+			index = id - qmgr->start_queue;
+			writel_relaxed(THRESH_GTE | 1,
+				       &qmgr->reg_peek[index].ptr_size_thresh);
+			writel_relaxed(0,
+				       &qmgr->reg_push[index].ptr_size_thresh);
+		}
+	}
+
+	list_add_tail(&range->list, &kdev->queue_ranges);
+	dev_dbg(dev, "added range %s: %d-%d, %d irqs%s%s%s\n",
+		range->name, range->queue_base,
+		range->queue_base + range->num_queues - 1,
+		range->num_irqs,
+		(range->flags & RANGE_HAS_IRQ) ? ", has irq" : "",
+		(range->flags & RANGE_RESERVED) ? ", reserved" : "",
+		(range->flags & RANGE_HAS_ACCUMULATOR) ? ", acc" : "");
+	kdev->num_queues_in_use += range->num_queues;
+	return 0;
+}
+
+static int kqmss_setup_queue_pools(struct kqmss_device *kdev,
+				   struct device_node *queue_pools)
+{
+	struct device_node *type, *range;
+	int ret;
+
+	for_each_child_of_node(queue_pools, type) {
+		for_each_child_of_node(type, range) {
+			ret = kqmss_setup_queue_range(kdev, range);
+			/* on error, continue and try the remaining ranges... */
+		}
+	}
+
+	/* ... and fail only if every range failed to initialize */
+	if (list_empty(&kdev->queue_ranges)) {
+		dev_err(kdev->dev, "no valid queue range found\n");
+		return -ENODEV;
+	}
+	return 0;
+}
+
+static void kqmss_free_queue_range(struct kqmss_device *kdev,
+				  struct kqmss_range_info *range)
+{
+	if (range->ops && range->ops->free_range)
+		range->ops->free_range(range);
+	list_del(&range->list);
+	devm_kfree(kdev->dev, range);
+}
+
+static void kqmss_free_queue_ranges(struct kqmss_device *kdev)
+{
+	struct kqmss_range_info *range;
+
+	for (;;) {
+		range = first_queue_range(kdev);
+		if (!range)
+			break;
+		kqmss_free_queue_range(kdev, range);
+	}
+}
+
+static void kqmss_queue_free_regions(struct kqmss_device *kdev)
+{
+	struct kqmss_region *region;
+	unsigned size;
+
+	for (;;) {
+		region = first_region(kdev);
+		if (!region)
+			break;
+		size = region->virt_end - region->virt_start;
+		if (size)
+			free_pages_exact(region->virt_start, size);
+		list_del(&region->list);
+		devm_kfree(kdev->dev, region);
+	}
+}
+
+static int kqmss_queue_init_qmgrs(struct kqmss_device *kdev,
+					struct device_node *qmgrs)
+{
+	struct device *dev = kdev->dev;
+	struct kqmss_qmgr_info *qmgr;
+	struct device_node *child;
+	u32 temp[2];
+	int i, ret;
+
+	for_each_child_of_node(qmgrs, child) {
+		qmgr = devm_kzalloc(dev, sizeof(*qmgr), GFP_KERNEL);
+		if (!qmgr) {
+			dev_err(dev, "out of memory allocating qmgr\n");
+			return -ENOMEM;
+		}
+
+		ret = of_property_read_u32_array(child, "managed-queues",
+						 temp, 2);
+		if (!ret) {
+			qmgr->start_queue = temp[0];
+			qmgr->num_queues = temp[1];
+		} else {
+			dev_err(dev, "invalid qmgr queue range\n");
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		dev_info(dev, "qmgr start queue %d, number of queues %d\n",
+			 qmgr->start_queue, qmgr->num_queues);
+
+		i = of_property_match_string(child, "reg-names", "peek");
+		qmgr->reg_peek = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "status");
+		qmgr->reg_status = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "config");
+		qmgr->reg_config = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "region");
+		qmgr->reg_region = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "push");
+		qmgr->reg_push = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "pop");
+		qmgr->reg_pop = of_iomap(child, i);
+
+		if (!qmgr->reg_peek || !qmgr->reg_status || !qmgr->reg_config ||
+		    !qmgr->reg_region || !qmgr->reg_push || !qmgr->reg_pop) {
+			dev_err(dev, "failed to map qmgr regs\n");
+			if (qmgr->reg_peek)
+				iounmap(qmgr->reg_peek);
+			if (qmgr->reg_status)
+				iounmap(qmgr->reg_status);
+			if (qmgr->reg_config)
+				iounmap(qmgr->reg_config);
+			if (qmgr->reg_region)
+				iounmap(qmgr->reg_region);
+			if (qmgr->reg_push)
+				iounmap(qmgr->reg_push);
+			if (qmgr->reg_pop)
+				iounmap(qmgr->reg_pop);
+			devm_kfree(dev, qmgr);
+			continue;
+		}
+
+		list_add_tail(&qmgr->list, &kdev->qmgrs);
+		dev_info(dev, "added qmgr start queue %d, num of queues %d, reg_peek %p, reg_status %p, reg_config %p, reg_region %p, reg_push %p, reg_pop %p\n",
+			 qmgr->start_queue, qmgr->num_queues,
+			 qmgr->reg_peek, qmgr->reg_status,
+			 qmgr->reg_config, qmgr->reg_region,
+			 qmgr->reg_push, qmgr->reg_pop);
+	}
+	return 0;
+}
+
+static int kqmss_queue_init_pdsps(struct kqmss_device *kdev,
+					struct device_node *pdsps)
+{
+	struct device *dev = kdev->dev;
+	struct kqmss_pdsp_info *pdsp;
+	struct device_node *child;
+	int i, ret;
+
+	for_each_child_of_node(pdsps, child) {
+		pdsp = devm_kzalloc(dev, sizeof(*pdsp), GFP_KERNEL);
+		if (!pdsp) {
+			dev_err(dev, "out of memory allocating pdsp\n");
+			return -ENOMEM;
+		}
+		pdsp->name = kqmss_queue_find_name(child);
+		ret = of_property_read_string(child, "firmware",
+					      &pdsp->firmware);
+		if (ret < 0 || !pdsp->firmware) {
+			dev_err(dev, "unknown firmware for pdsp %s\n",
+				pdsp->name);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		dev_dbg(dev, "pdsp name %s, firmware name %s\n",
+			pdsp->name, pdsp->firmware);
+		i = of_property_match_string(child, "reg-names", "iram");
+		pdsp->iram = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "reg");
+		pdsp->regs = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "intd");
+		pdsp->intd = of_iomap(child, i);
+		i = of_property_match_string(child, "reg-names", "cmd");
+		pdsp->command = of_iomap(child, i);
+		if (!pdsp->command || !pdsp->iram || !pdsp->regs ||
+		    !pdsp->intd) {
+			dev_err(dev, "failed to map pdsp %s regs\n",
+				pdsp->name);
+			if (pdsp->command)
+				devm_iounmap(dev, pdsp->command);
+			if (pdsp->iram)
+				devm_iounmap(dev, pdsp->iram);
+			if (pdsp->regs)
+				devm_iounmap(dev, pdsp->regs);
+			if (pdsp->intd)
+				devm_iounmap(dev, pdsp->intd);
+			devm_kfree(dev, pdsp);
+			continue;
+		}
+		of_property_read_u32(child, "id", &pdsp->id);
+		list_add_tail(&pdsp->list, &kdev->pdsps);
+		dev_dbg(dev, "added pdsp %s: command %p, iram %p, regs %p, intd %p, firmware %s\n",
+			pdsp->name, pdsp->command, pdsp->iram, pdsp->regs,
+			pdsp->intd, pdsp->firmware);
+	}
+	return 0;
+}
+
+static int kqmss_queue_stop_pdsp(struct kqmss_device *kdev,
+			  struct kqmss_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	val = readl_relaxed(&pdsp->regs->control) & ~PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+	ret = kqmss_queue_pdsp_wait(&pdsp->regs->control, timeout,
+					PDSP_CTRL_RUNNING);
+	if (ret < 0) {
+		dev_err(kdev->dev, "timed out on pdsp %s stop\n", pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static int kqmss_queue_load_pdsp(struct kqmss_device *kdev,
+			  struct kqmss_pdsp_info *pdsp)
+{
+	int i, ret, fwlen;
+	const struct firmware *fw;
+	u32 *fwdata;
+
+	ret = request_firmware(&fw, pdsp->firmware, kdev->dev);
+	if (ret) {
+		dev_err(kdev->dev, "failed to get firmware %s for pdsp %s\n",
+			pdsp->firmware, pdsp->name);
+		return ret;
+	}
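+	/*
+	 * Presumably this identifies the PDSP to its firmware: id + 1 is
+	 * written at command offset 0x18, a layout defined by the TI PDSP
+	 * firmware interface rather than by this driver.
+	 */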
+	writel_relaxed(pdsp->id + 1, pdsp->command + 0x18);
+	/* download the firmware */
+	fwdata = (u32 *)fw->data;
+	fwlen = (fw->size + sizeof(u32) - 1) / sizeof(u32);
+	for (i = 0; i < fwlen; i++)
+		writel_relaxed(be32_to_cpu(fwdata[i]), pdsp->iram + i);
+
+	release_firmware(fw);
+	return 0;
+}
+
+static int kqmss_queue_start_pdsp(struct kqmss_device *kdev,
+			   struct kqmss_pdsp_info *pdsp)
+{
+	u32 val, timeout = 1000;
+	int ret;
+
+	/* write a command for sync */
+	writel_relaxed(0xffffffff, pdsp->command);
+	while (readl_relaxed(pdsp->command) != 0xffffffff)
+		cpu_relax();
+
+	/* soft reset the PDSP */
+	val  = readl_relaxed(&pdsp->regs->control);
+	val &= ~(PDSP_CTRL_PC_MASK | PDSP_CTRL_SOFT_RESET);
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* enable pdsp */
+	val = readl_relaxed(&pdsp->regs->control) | PDSP_CTRL_ENABLE;
+	writel_relaxed(val, &pdsp->regs->control);
+
+	/* wait for command register to clear */
+	ret = kqmss_queue_pdsp_wait(pdsp->command, timeout, 0);
+	if (ret < 0) {
+		dev_err(kdev->dev,
+			"timed out on pdsp %s command register wait\n",
+			pdsp->name);
+		return ret;
+	}
+	return 0;
+}
+
+static void kqmss_queue_stop_pdsps(struct kqmss_device *kdev)
+{
+	struct kqmss_pdsp_info *pdsp;
+
+	/* disable all pdsps */
+	for_each_pdsp(kdev, pdsp)
+		kqmss_queue_stop_pdsp(kdev, pdsp);
+}
+
+static int kqmss_queue_start_pdsps(struct kqmss_device *kdev)
+{
+	struct kqmss_pdsp_info *pdsp;
+	int ret;
+
+	kqmss_queue_stop_pdsps(kdev);
+	/* now load them all */
+	for_each_pdsp(kdev, pdsp) {
+		ret = kqmss_queue_load_pdsp(kdev, pdsp);
+		if (ret < 0)
+			return ret;
+	}
+
+	for_each_pdsp(kdev, pdsp) {
+		ret = kqmss_queue_start_pdsp(kdev, pdsp);
+		WARN_ON(ret);
+	}
+	return 0;
+}
+
+static inline struct kqmss_qmgr_info *kqmss_find_qmgr(unsigned id)
+{
+	struct kqmss_qmgr_info *qmgr;
+
+	for_each_qmgr(kdev, qmgr) {
+		if ((id >= qmgr->start_queue) &&
+		    (id < qmgr->start_queue + qmgr->num_queues))
+			return qmgr;
+	}
+	return NULL;
+}
+
+static int kqmss_queue_init_queue(struct kqmss_device *kdev,
+					struct kqmss_range_info *range,
+					struct kqmss_queue_inst *inst,
+					unsigned id)
+{
+	char irq_name[KQMSS_NAME_SIZE];
+
+	inst->qmgr = kqmss_find_qmgr(id);
+	if (!inst->qmgr)
+		return -ENODEV;
+
+	INIT_LIST_HEAD(&inst->handles);
+	inst->kdev = kdev;
+	inst->range = range;
+	inst->irq_num = -1;
+	inst->id = id;
+	scnprintf(irq_name, sizeof(irq_name), "hwqueue-%d", id);
+	inst->irq_name = kstrndup(irq_name, sizeof(irq_name), GFP_KERNEL);
+
+	if (range->ops && range->ops->init_queue)
+		return range->ops->init_queue(range, inst);
+	else
+		return 0;
+}
+
+static int kqmss_queue_init_queues(struct kqmss_device *kdev)
+{
+	struct kqmss_range_info *range;
+	int size, id, base_idx;
+	int idx = 0, ret = 0;
+
+	/* how much do we need for instance data? */
+	size = sizeof(struct kqmss_queue_inst);
+
+	/*
+	 * Round this up to a power of 2 to keep the index-to-instance
+	 * arithmetic fast (e.g. a 120-byte instance rounds up to 128,
+	 * so inst_shift is 7).
+	 */
+	kdev->inst_shift = order_base_2(size);
+	size = (1 << kdev->inst_shift) * kdev->num_queues_in_use;
+	kdev->instances = devm_kzalloc(kdev->dev, size, GFP_KERNEL);
+	if (!kdev->instances)
+		return -ENOMEM;
+
+	for_each_queue_range(kdev, range) {
+		if (range->ops && range->ops->init_range)
+			range->ops->init_range(range);
+		base_idx = idx;
+		for (id = range->queue_base;
+		     id < range->queue_base + range->num_queues; id++, idx++) {
+			ret = kqmss_queue_init_queue(kdev, range,
+					kqmss_queue_idx_to_inst(kdev, idx), id);
+			if (ret < 0)
+				return ret;
+		}
+		range->queue_base_inst =
+			kqmss_queue_idx_to_inst(kdev, base_idx);
+	}
+	return 0;
+}
+
+static int kqmss_queue_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct device_node *qmgrs, *queue_pools, *regions, *pdsps;
+	struct device *dev = &pdev->dev;
+	u32 temp[2];
+	int ret;
+
+	if (!node) {
+		dev_err(dev, "device tree info unavailable\n");
+		return -ENODEV;
+	}
+
+	kdev = devm_kzalloc(dev, sizeof(struct kqmss_device), GFP_KERNEL);
+	if (!kdev) {
+		dev_err(dev, "memory allocation failed\n");
+		return -ENOMEM;
+	}
+
+	platform_set_drvdata(pdev, kdev);
+	kdev->dev = dev;
+	INIT_LIST_HEAD(&kdev->queue_ranges);
+	INIT_LIST_HEAD(&kdev->qmgrs);
+	INIT_LIST_HEAD(&kdev->pools);
+	INIT_LIST_HEAD(&kdev->regions);
+	INIT_LIST_HEAD(&kdev->pdsps);
+
+	pm_runtime_enable(&pdev->dev);
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		dev_err(dev, "Failed to enable QMSS\n");
+		return ret;
+	}
+
+	if (of_property_read_u32_array(node, "queue-range", temp, 2)) {
+		dev_err(dev, "queue-range not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	kdev->base_id    = temp[0];
+	kdev->num_queues = temp[1];
+
+	/* Initialize queue managers using device tree configuration */
+	qmgrs = of_get_child_by_name(node, "qmgrs");
+	if (!qmgrs) {
+		dev_err(dev, "queue manager info not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = kqmss_queue_init_qmgrs(kdev, qmgrs);
+	of_node_put(qmgrs);
+	if (ret)
+		goto err;
+
+	/* get pdsp configuration values from device tree */
+	pdsps = of_get_child_by_name(node, "pdsps");
+	if (pdsps) {
+		ret = kqmss_queue_init_pdsps(kdev, pdsps);
+		of_node_put(pdsps);
+		if (ret)
+			goto err;
+
+		ret = kqmss_queue_start_pdsps(kdev);
+		if (ret)
+			goto err;
+	}
+
+	/* get usable queue range values from device tree */
+	queue_pools = of_get_child_by_name(node, "queue-pools");
+	if (!queue_pools) {
+		dev_err(dev, "queue-pools not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = kqmss_setup_queue_pools(kdev, queue_pools);
+	of_node_put(queue_pools);
+	if (ret)
+		goto err;
+
+	ret = kqmss_get_link_ram(kdev, "linkram0", &kdev->link_rams[0]);
+	if (ret) {
+		dev_err(kdev->dev, "could not setup linking ram\n");
+		goto err;
+	}
+
+	ret = kqmss_get_link_ram(kdev, "linkram1", &kdev->link_rams[1]);
+	if (ret) {
+		/*
+		 * linkram1 is optional: with linkram0 already set up, we
+		 * simply carry on with a single linking RAM.
+		 */
+	}
+
+	ret = kqmss_queue_setup_link_ram(kdev);
+	if (ret)
+		goto err;
+
+	regions = of_get_child_by_name(node, "descriptor-regions");
+	if (!regions) {
+		dev_err(dev, "descriptor-regions not specified\n");
+		ret = -ENODEV;
+		goto err;
+	}
+	ret = kqmss_queue_setup_regions(kdev, regions);
+	of_node_put(regions);
+	if (ret)
+		goto err;
+
+	ret = kqmss_queue_init_queues(kdev);
+	if (ret < 0) {
+		dev_err(dev, "hwqueue initialization failed\n");
+		goto err;
+	}
+
+	debugfs_create_file("qmss", S_IFREG | S_IRUGO, NULL, NULL,
+			    &kqmss_queue_debug_ops);
+	return 0;
+
+err:
+	kqmss_queue_stop_pdsps(kdev);
+	kqmss_free_queue_ranges(kdev);
+	kqmss_queue_free_regions(kdev);
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return ret;
+}
+
+static int kqmss_queue_remove(struct platform_device *pdev)
+{
+	pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
+	return 0;
+}
+
+/* Match table for of_platform binding */
+static const struct of_device_id keystone_qmss_of_match[] = {
+	{ .compatible = "ti,keystone-qmss", },
+	{},
+};
+MODULE_DEVICE_TABLE(of, keystone_qmss_of_match);
+
+static struct platform_driver keystone_qmss_driver = {
+	.probe		= kqmss_queue_probe,
+	.remove		= kqmss_queue_remove,
+	.driver		= {
+		.name	= "keystone-qmss",
+		.owner	= THIS_MODULE,
+		.of_match_table = keystone_qmss_of_match,
+	},
+};
+module_platform_driver(keystone_qmss_driver);
+
+MODULE_LICENSE("GPL v2");
+MODULE_DESCRIPTION("TI QMSS driver for Keystone SOCs");
diff --git a/drivers/soc/keystone/qmss_queue.h b/drivers/soc/keystone/qmss_queue.h
new file mode 100644
index 0000000..9399f54
--- /dev/null
+++ b/drivers/soc/keystone/qmss_queue.h
@@ -0,0 +1,236 @@
+/*
+ * Keystone QMSS driver internal header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@...com>
+ *		Cyril Chemparathy <cyril@...com>
+ *		Santosh Shilimkar <santosh.shilimkar@...com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * version 2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ */
+
+#ifndef __QMSS_QUEUE_H__
+#define __QMSS_QUEUE_H__
+
+#define BITS(x)		(BIT(x) - 1)
+
+#define THRESH_GTE	BIT(7)
+#define THRESH_LT	0
+
+#define PDSP_CTRL_PC_MASK	0xffff0000
+#define PDSP_CTRL_SOFT_RESET	BIT(0)
+#define PDSP_CTRL_ENABLE	BIT(1)
+#define PDSP_CTRL_RUNNING	BIT(15)
+
+#define ACC_MAX_CHANNEL		48
+#define ACC_DEFAULT_PERIOD	25 /* usecs */
+
+#define ACC_CHANNEL_INT_BASE		2
+
+#define ACC_LIST_ENTRY_TYPE		1
+#define ACC_LIST_ENTRY_WORDS		(1 << ACC_LIST_ENTRY_TYPE)
+#define ACC_LIST_ENTRY_QUEUE_IDX	0
+#define ACC_LIST_ENTRY_DESC_IDX	(ACC_LIST_ENTRY_WORDS - 1)
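+/*
+ * With ACC_LIST_ENTRY_TYPE == 1, each accumulator list entry is
+ * 1 << 1 == 2 words: word 0 carries the queue index and the last
+ * word (word 1) the descriptor pointer.
+ */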
+
+#define ACC_CMD_DISABLE_CHANNEL	0x80
+#define ACC_CMD_ENABLE_CHANNEL	0x81
+#define ACC_CFG_MULTI_QUEUE		BIT(21)
+
+#define ACC_INTD_OFFSET_EOI		(0x0010)
+#define ACC_INTD_OFFSET_COUNT(ch)	(0x0300 + 4 * (ch))
+#define ACC_INTD_OFFSET_STATUS(ch)	(0x0200 + 4 * ((ch) / 32))
+
+#define RANGE_MAX_IRQS			64
+
+enum kqmss_acc_result {
+	ACC_RET_IDLE,
+	ACC_RET_SUCCESS,
+	ACC_RET_INVALID_COMMAND,
+	ACC_RET_INVALID_CHANNEL,
+	ACC_RET_INACTIVE_CHANNEL,
+	ACC_RET_ACTIVE_CHANNEL,
+	ACC_RET_INVALID_QUEUE,
+	ACC_RET_INVALID_RET,
+};
+
+struct kqmss_reg_config {
+	u32		revision;
+	u32		__pad1;
+	u32		divert;
+	u32		link_ram_base0;
+	u32		link_ram_size0;
+	u32		link_ram_base1;
+	u32		__pad2[2];
+	u32		starvation[0];
+};
+
+struct kqmss_reg_region {
+	u32		base;
+	u32		start_index;
+	u32		size_count;
+	u32		__pad;
+};
+
+struct kqmss_reg_pdsp_regs {
+	u32		control;
+	u32		status;
+	u32		cycle_count;
+	u32		stall_count;
+};
+
+struct kqmss_reg_acc_command {
+	u32		command;
+	u32		queue_mask;
+	u32		list_phys;
+	u32		queue_num;
+	u32		timer_config;
+};
+
+struct kqmss_link_ram_block {
+	dma_addr_t	 phys;
+	void		*virt;
+	size_t		 size;
+};
+
+struct kqmss_acc_info {
+	u32			 pdsp_id;
+	u32			 start_channel;
+	u32			 list_entries;
+	u32			 pacing_mode;
+	u32			 timer_count;
+	int			 mem_size;
+	int			 list_size;
+	struct kqmss_pdsp_info	*pdsp;
+};
+
+struct kqmss_acc_channel {
+	u32			channel;
+	u32			list_index;
+	u32			open_mask;
+	u32			*list_cpu[2];
+	dma_addr_t		list_dma[2];
+	char			name[KQMSS_NAME_SIZE];
+	atomic_t		retrigger_count;
+};
+
+struct kqmss_pdsp_info {
+	const char					*name;
+	struct kqmss_reg_pdsp_regs  __iomem		*regs;
+	union {
+		void __iomem				*command;
+		struct kqmss_reg_acc_command __iomem	*acc_command;
+		u32 __iomem				*qos_command;
+	};
+	void __iomem					*intd;
+	u32 __iomem					*iram;
+	const char					*firmware;
+	u32						id;
+	struct list_head				list;
+};
+
+struct kqmss_qmgr_info {
+	unsigned			start_queue;
+	unsigned			num_queues;
+	struct kqmss_reg_config __iomem	*reg_config;
+	struct kqmss_reg_region __iomem	*reg_region;
+	struct kqmss_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	void __iomem			*reg_status;
+	struct list_head		list;
+};
+
+#define KQMSS_NUM_LINKRAM	2
+struct kqmss_device {
+	struct device				*dev;
+	unsigned				base_id;
+	unsigned				num_queues;
+	unsigned				num_queues_in_use;
+	unsigned				inst_shift;
+	struct kqmss_link_ram_block		link_rams[KQMSS_NUM_LINKRAM];
+	void					*instances;
+	struct list_head			regions;
+	struct list_head			queue_ranges;
+	struct list_head			pools;
+	struct list_head			pdsps;
+	struct list_head			qmgrs;
+};
+
+struct kqmss_range_ops {
+	int	(*init_range)(struct kqmss_range_info *range);
+	int	(*free_range)(struct kqmss_range_info *range);
+	int	(*init_queue)(struct kqmss_range_info *range,
+			      struct kqmss_queue_inst *inst);
+	int	(*open_queue)(struct kqmss_range_info *range,
+			      struct kqmss_queue_inst *inst, unsigned flags);
+	int	(*close_queue)(struct kqmss_range_info *range,
+			       struct kqmss_queue_inst *inst);
+	int	(*set_notify)(struct kqmss_range_info *range,
+			      struct kqmss_queue_inst *inst, bool enabled);
+};
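+
+/*
+ * Call pattern, as far as this patch shows: init_range runs once per
+ * range and init_queue once per queue when the instances are set up,
+ * and free_range when a range is torn down. open_queue, close_queue
+ * and set_notify are presumably driven by the queue open/close and
+ * notifier-control paths.
+ */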
+
+struct kqmss_irq_info {
+	int	irq;
+	u32	cpu_map;
+};
+
+struct kqmss_range_info {
+	const char			*name;
+	struct kqmss_device		*kdev;
+	unsigned			queue_base;
+	unsigned			num_queues;
+	void				*queue_base_inst;
+	unsigned			flags;
+	struct list_head		list;
+	struct kqmss_range_ops		*ops;
+	struct kqmss_acc_info		acc_info;
+	struct kqmss_acc_channel	*acc;
+	unsigned			num_irqs;
+	struct kqmss_irq_info		irqs[RANGE_MAX_IRQS];
+};
+
+#define RANGE_RESERVED		BIT(0)
+#define RANGE_HAS_IRQ		BIT(1)
+#define RANGE_HAS_ACCUMULATOR	BIT(2)
+#define RANGE_MULTI_QUEUE	BIT(3)
+
+#define for_each_region(kdev, region)				\
+	list_for_each_entry(region, &kdev->regions, list)
+
+#define first_region(kdev)					\
+	list_first_entry(&kdev->regions, \
+			struct kqmss_region, list)
+
+#define for_each_queue_range(kdev, range)			\
+	list_for_each_entry(range, &kdev->queue_ranges, list)
+
+#define first_queue_range(kdev)					\
+	list_first_entry(&kdev->queue_ranges, \
+			struct kqmss_range_info, list)
+
+#define for_each_pool(kdev, pool)				\
+	list_for_each_entry(pool, &kdev->pools, list)
+
+#define for_each_pdsp(kdev, pdsp)				\
+	list_for_each_entry(pdsp, &kdev->pdsps, list)
+
+#define for_each_qmgr(kdev, qmgr)				\
+	list_for_each_entry(qmgr, &kdev->qmgrs, list)
+
+static inline struct kqmss_pdsp_info *
+kqmss_find_pdsp(struct kqmss_device *kdev, unsigned pdsp_id)
+{
+	struct kqmss_pdsp_info *pdsp;
+
+	for_each_pdsp(kdev, pdsp)
+		if (pdsp_id == pdsp->id)
+			return pdsp;
+	return NULL;
+}
+
+#endif /* __QMSS_QUEUE_H__ */
diff --git a/include/linux/soc/keystone_qmss.h b/include/linux/soc/keystone_qmss.h
new file mode 100644
index 0000000..8c3c16c
--- /dev/null
+++ b/include/linux/soc/keystone_qmss.h
@@ -0,0 +1,390 @@
+/*
+ * Keystone Queue Management Sub-System header
+ *
+ * Copyright (C) 2014 Texas Instruments Incorporated - http://www.ti.com
+ * Author:	Sandeep Nair <sandeep_n@...com>
+ *		Cyril Chemparathy <cyril@...com>
+ *		Santosh Shilimkar <santosh.shilimkar@...com>
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License as
+ * published by the Free Software Foundation version 2.
+ *
+ * This program is distributed "as is" WITHOUT ANY WARRANTY of any
+ * kind, whether express or implied; without even the implied warranty
+ * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#ifndef __KEYSTONE_QMSS_H__
+#define __KEYSTONE_QMSS_H__
+
+#include <linux/err.h>
+#include <linux/time.h>
+#include <linux/atomic.h>
+#include <linux/device.h>
+#include <linux/fcntl.h>
+#include <linux/dma-mapping.h>
+
+#define KQMSS_ACC_DESCS_MAX		SZ_1K
+#define KQMSS_ACC_DESCS_MASK		(KQMSS_ACC_DESCS_MAX - 1)
+#define KQMSS_DESC_SIZE_MASK		0xful
+#define KQMSS_DESC_PTR_MASK		(~KQMSS_DESC_SIZE_MASK)
+#define KQMSS_NAME_SIZE			32
+
+/* queue types */
+#define KQMSS_QUEUE_QPEND	((unsigned)-2) /* interruptible qpend queue */
+#define KQMSS_QUEUE_ACC		((unsigned)-3) /* Accumulated queue */
+#define KQMSS_QUEUE_GP		((unsigned)-4) /* General purpose queue */
+
+/* queue flags */
+#define KQMSS_QUEUE_SHARED	0x0001		/* Queue can be shared */
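+
+/*
+ * Illustrative sketch (the queue name and id are made up): a queue can
+ * presumably be opened by a specific id, or by type using one of the
+ * sentinel values above:
+ *
+ *	qh = kqmss_queue_open("rx", 650, KQMSS_QUEUE_SHARED);
+ *	qh = kqmss_queue_open("tx", KQMSS_QUEUE_GP, 0);
+ */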
+
+/* Queue notifier callback prototype */
+typedef void (*kqmss_queue_notify_fn)(void *arg);
+
+struct kqmss_device;
+struct kqmss_acc_channel;
+struct kqmss_reg_config;
+struct kqmss_reg_region;
+struct kqmss_qmgr_info;
+
+/**
+ * enum kqmss_queue_ctrl_cmd -	queue operations.
+ * @KQMSS_QUEUE_GET_ID:		Get the ID number for an open queue
+ * @KQMSS_QUEUE_FLUSH:		Forcibly empty a queue if possible.
+ * @KQMSS_QUEUE_SET_NOTIFIER:	Set a notifier callback to a queue handle.
+ * @KQMSS_QUEUE_ENABLE_NOTIFY:	Enable notifier callback for a queue handle.
+ * @KQMSS_QUEUE_DISABLE_NOTIFY:	Disable notifier callback for a queue handle.
+ */
+enum kqmss_queue_ctrl_cmd {
+	KQMSS_QUEUE_GET_ID,
+	KQMSS_QUEUE_FLUSH,
+	KQMSS_QUEUE_SET_NOTIFIER,
+	KQMSS_QUEUE_ENABLE_NOTIFY,
+	KQMSS_QUEUE_DISABLE_NOTIFY
+};
+
+/**
+ * struct kqmss_queue_stats:	queue statistics
+ * @pushes:			number of push operations
+ * @pops:			number of pop operations
+ * @push_errors:		number of push errors
+ * @pop_errors:			number of pop errors
+ * @notifies:			notifier counts
+ */
+struct kqmss_queue_stats {
+	atomic_t	 pushes;
+	atomic_t	 pops;
+	atomic_t	 push_errors;
+	atomic_t	 pop_errors;
+	atomic_t	 notifies;
+};
+
+/**
+ * struct kqmss_reg_queue:	queue registers
+ * @entry_count:		valid entries in the queue
+ * @byte_count:			total byte count in the queue
+ * @packet_size:		packet size for the queue
+ * @ptr_size_thresh:		packet pointer size threshold
+ */
+struct kqmss_reg_queue {
+	u32		entry_count;
+	u32		byte_count;
+	u32		packet_size;
+	u32		ptr_size_thresh;
+};
+
+/**
+ * struct kqmss_region:		qmss region info
+ * @dma_start, dma_end:		start and end dma address
+ * @virt_start, virt_end:	start and end virtual address
+ * @desc_size:			descriptor size
+ * @used_desc:			consumed descriptors
+ * @id:				region number
+ * @num_desc:			total descriptors
+ * @link_index:			index of the first descriptor
+ * @name:			region name
+ * @list:			list head
+ */
+struct kqmss_region {
+	dma_addr_t		dma_start, dma_end;
+	void			*virt_start, *virt_end;
+	unsigned		desc_size;
+	unsigned		used_desc;
+	unsigned		id;
+	unsigned		num_desc;
+	unsigned		link_index;
+	const char		*name;
+	struct list_head	list;
+};
+
+/**
+ * struct kqmss_pool:		qmss pools
+ * @dev:			device pointer
+ * @region:			qmss region info
+ * @queue:			queue handle used by the pool
+ * @kdev:			qmss device pointer
+ * @region_offset:		offset from the base
+ * @num_desc:			total descriptors
+ * @desc_size:			descriptor size
+ * @region_id:			region number
+ * @name:			pool name
+ * @list:			list head
+ */
+struct kqmss_pool {
+	struct device			*dev;
+	struct kqmss_region		*region;
+	struct kqmss_queue		*queue;
+	struct kqmss_device		*kdev;
+	int				region_offset;
+	int				num_desc;
+	int				desc_size;
+	int				region_id;
+	const char			*name;
+	struct list_head		list;
+};
+
+/**
+ * struct kqmss_queue_inst:		qmss queue instance properties
+ * @descs:				descriptor pointer
+ * @desc_head, desc_tail, desc_count:	descriptor counters
+ * @acc:				accumulator channel pointer
+ * @kdev:				qmss device pointer
+ * @range:				range info
+ * @qmgr:				queue manager info
+ * @id:					queue instance id
+ * @irq_num:				irq line number
+ * @notify_needed:			notifier needed based on queue type
+ * @num_notifiers:			total notifiers
+ * @handles:				list head
+ * @name:				queue instance name
+ * @irq_name:				irq line name
+ */
+struct kqmss_queue_inst {
+	u32				*descs;
+	atomic_t			desc_head, desc_tail, desc_count;
+	struct kqmss_acc_channel	*acc;
+	struct kqmss_device		*kdev;
+	struct kqmss_range_info		*range;
+	struct kqmss_qmgr_info		*qmgr;
+	u32				id;
+	int				irq_num;
+	int				notify_needed;
+	atomic_t			num_notifiers;
+	struct list_head		handles;
+	const char			*name;
+	const char			*irq_name;
+};
+
+/**
+ * struct kqmss_queue:			qmss queue properties
+ * @reg_push, reg_pop, reg_peek:	push, pop and peek queue registers
+ * @inst:				qmss queue instance properties
+ * @stats:				queue statistics
+ * @notifier_fn:			notifier function
+ * @notifier_fn_arg:			notifier function argument
+ * @notifier_enabled:			notifier enabled for a given queue
+ * @rcu:				rcu head
+ * @flags:				queue flags
+ * @list:				list head
+ */
+struct kqmss_queue {
+	struct kqmss_reg_queue __iomem	*reg_push, *reg_pop, *reg_peek;
+	struct kqmss_queue_inst		*inst;
+	struct kqmss_queue_stats	stats;
+	kqmss_queue_notify_fn		notifier_fn;
+	void				*notifier_fn_arg;
+	atomic_t			notifier_enabled;
+	struct rcu_head			rcu;
+	unsigned			flags;
+	struct list_head		list;
+};
+
+/**
+ * struct kqmss_queue_notify_config:	Notifier configuration
+ * @fn:					Notifier function
+ * @fn_arg:				Notifier function arguments
+ */
+struct kqmss_queue_notify_config {
+	kqmss_queue_notify_fn fn;
+	void *fn_arg;
+};
+
+/* Get the DMA address of a descriptor */
+#define kqmss_pool_desc_virt_to_dma(pool, virt) \
+	(pool->region->dma_start + (virt - pool->region->virt_start))
+
+/* Get the virtual (CPU) address of a descriptor */
+#define kqmss_pool_desc_dma_to_virt(pool, dma) \
+	(pool->region->virt_start + (dma - pool->region->dma_start))
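+
+/*
+ * Both conversions assume a pool's descriptors live in one physically
+ * contiguous region, so they are plain offset arithmetic: a descriptor
+ * at virt_start + 0x100, for example, maps to dma_start + 0x100.
+ */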
+
+int kqmss_init_acc_range(struct kqmss_device *kdev, struct device_node *node,
+					struct kqmss_range_info *range);
+struct kqmss_queue *kqmss_queue_open(const char *name, unsigned id,
+					unsigned flags);
+void kqmss_queue_close(struct kqmss_queue *queue);
+void kqmss_queue_notify(struct kqmss_queue_inst *inst);
+int kqmss_queue_device_control(struct kqmss_queue *qh,
+			enum kqmss_queue_ctrl_cmd cmd, unsigned long arg);
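+
+/*
+ * Illustrative sketch (my_notify and priv are hypothetical, error
+ * handling trimmed): attach and enable a notifier on a queue handle.
+ *
+ *	struct kqmss_queue_notify_config cfg = {
+ *		.fn	= my_notify,
+ *		.fn_arg	= priv,
+ *	};
+ *
+ *	qh = kqmss_queue_open("rx", KQMSS_QUEUE_QPEND, 0);
+ *	kqmss_queue_device_control(qh, KQMSS_QUEUE_SET_NOTIFIER,
+ *				   (unsigned long)&cfg);
+ *	kqmss_queue_device_control(qh, KQMSS_QUEUE_ENABLE_NOTIFY, 0);
+ */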
+
+/**
+ * kqmss_queue_get_count()	- returns number of elements in a queue
+ * @qh				- hardware queue handle
+ *
+ * Returns number of elements in the queue.
+ */
+static inline int kqmss_queue_get_count(struct kqmss_queue *qh)
+{
+	struct kqmss_queue_inst *inst = qh->inst;
+
+	return readl_relaxed(&qh->reg_peek[0].entry_count) +
+		atomic_read(&inst->desc_count);
+}
+
+/**
+ * kqmss_queue_push()	- push data (or descriptor) to the tail of a queue
+ * @qh			- hardware queue handle
+ * @dma			- DMA address of the data to push
+ * @size		- size of the data to push, in bytes (a multiple of 16)
+ * @flags		- can be used to pass additional information
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+static inline int kqmss_queue_push(struct kqmss_queue *qh, dma_addr_t dma,
+					unsigned size, unsigned flags)
+{
+	u32 val;
+
+	val = (u32)dma | ((size / 16) - 1);
+	writel_relaxed(val, &qh->reg_push[0].ptr_size_thresh);
+	atomic_inc(&qh->stats.pushes);
+	return 0;
+}
+
+/**
+ * kqmss_queue_pop()	- pop data (or descriptor) from the head of a queue
+ * @qh			- hardware queue handle
+ * @size		- (optional) size of the data popped.
+ *
+ * Returns a DMA address on success, 0 on failure.
+ */
+static inline dma_addr_t kqmss_queue_pop(struct kqmss_queue *qh, unsigned *size)
+{
+	struct kqmss_queue_inst *inst = qh->inst;
+	dma_addr_t dma;
+	u32 val, idx;
+
+	/* are we accumulated? */
+	if (inst->descs) {
+		if (unlikely(atomic_dec_return(&inst->desc_count) < 0)) {
+			atomic_inc(&inst->desc_count);
+			return 0;
+		}
+		idx  = atomic_inc_return(&inst->desc_head);
+		idx &= KQMSS_ACC_DESCS_MASK;
+		val = inst->descs[idx];
+	} else {
+		val = readl_relaxed(&qh->reg_pop[0].ptr_size_thresh);
+		if (unlikely(!val))
+			return 0;
+	}
+
+	dma = val & KQMSS_DESC_PTR_MASK;
+	if (size)
+		*size = ((val & KQMSS_DESC_SIZE_MASK) + 1) * 16;
+
+	atomic_inc(&qh->stats.pops);
+	return dma;
+}
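+
+/*
+ * Illustrative sketch (process_desc is a hypothetical consumer):
+ * drain everything currently on a queue.
+ *
+ *	dma_addr_t dma;
+ *	unsigned size;
+ *
+ *	while ((dma = kqmss_queue_pop(qh, &size)))
+ *		process_desc(dma, size);
+ */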
+
+struct kqmss_pool *kqmss_pool_create(const char *name,
+					int num_desc, int region_id);
+void kqmss_pool_destroy(struct kqmss_pool *pool);
+
+/**
+ * kqmss_pool_desc_get()	- Get a descriptor from the pool
+ * @pool			- pool handle
+ *
+ * Returns a descriptor's virtual address, or ERR_PTR(-ENOMEM) if the
+ * pool is empty.
+ */
+static inline void *kqmss_pool_desc_get(struct kqmss_pool *pool)
+{
+	dma_addr_t dma;
+	unsigned size;
+	void *data;
+
+	dma = kqmss_queue_pop(pool->queue, &size);
+	if (unlikely(!dma))
+		return ERR_PTR(-ENOMEM);
+	data = kqmss_pool_desc_dma_to_virt(pool, dma);
+	return data;
+}
+
+/**
+ * kqmss_pool_desc_put()	- return a descriptor to the pool
+ * @pool			- pool handle
+ */
+static inline void kqmss_pool_desc_put(struct kqmss_pool *pool,
+					void *desc)
+{
+	dma_addr_t dma;
+	dma = kqmss_pool_desc_virt_to_dma(pool, desc);
+	kqmss_queue_push(pool->queue, dma, pool->region->desc_size, 0);
+}
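+
+/*
+ * Illustrative sketch of a get/put cycle (pool name and sizes are made
+ * up, error handling trimmed):
+ *
+ *	pool = kqmss_pool_create("tx-pool", 512, 0);
+ *	desc = kqmss_pool_desc_get(pool);
+ *	if (!IS_ERR(desc)) {
+ *		... fill in the descriptor ...
+ *		kqmss_pool_desc_put(pool, desc);
+ *	}
+ */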
+
+/**
+ * kqmss_pool_desc_map()	- Map descriptor for DMA transfer
+ * @pool			- pool handle
+ * @desc			- address of descriptor to map
+ * @size			- size of descriptor to map
+ * @dma				- DMA address return pointer
+ * @dma_sz			- adjusted size return pointer
+ *
+ * Returns 0 on success, errno otherwise.
+ */
+static inline int kqmss_pool_desc_map(struct kqmss_pool *pool,
+					void *desc, unsigned size,
+					dma_addr_t *dma, unsigned *dma_sz)
+{
+	*dma = kqmss_pool_desc_virt_to_dma(pool, desc);
+	size = min(size, pool->region->desc_size);
+	size = ALIGN(size, SMP_CACHE_BYTES);
+	*dma_sz = size;
+	dma_sync_single_for_device(pool->dev, *dma, size, DMA_TO_DEVICE);
+	return 0;
+}
+
+/**
+ * kqmss_pool_desc_unmap()	- Unmap descriptor after DMA transfer
+ * @pool			- pool handle
+ * @dma				- DMA address of descriptor to unmap
+ * @dma_sz			- size of descriptor to unmap
+ *
+ * Returns the descriptor's virtual address on success; use
+ * IS_ERR_OR_NULL() to identify error values on return.
+ */
+static inline void *kqmss_pool_desc_unmap(struct kqmss_pool *pool,
+						dma_addr_t dma, unsigned dma_sz)
+{
+	unsigned desc_sz;
+	void *desc;
+
+	desc_sz = min(dma_sz, pool->region->desc_size);
+	desc = kqmss_pool_desc_dma_to_virt(pool, dma);
+	dma_sync_single_for_cpu(pool->dev, dma, desc_sz,
+				DMA_FROM_DEVICE);
+	prefetch(desc);
+	return desc;
+}
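+
+/*
+ * Illustrative round trip (txq and cq are hypothetical queue handles):
+ * fill a descriptor on the CPU, map and push it for the device, then
+ * unmap what comes back on the completion queue.
+ *
+ *	kqmss_pool_desc_map(pool, desc, desc_size, &dma, &dma_sz);
+ *	kqmss_queue_push(txq, dma, dma_sz, 0);
+ *	...
+ *	dma = kqmss_queue_pop(cq, &dma_sz);
+ *	desc = kqmss_pool_desc_unmap(pool, dma, dma_sz);
+ */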
+
+/**
+ * kqmss_pool_count()	- Get the number of descriptors in pool.
+ * @pool		- pool handle
+ * Returns number of elements in the pool.
+ */
+static inline int kqmss_pool_count(struct kqmss_pool *pool)
+{
+	return kqmss_queue_get_count(pool->queue);
+}
+
+#endif /* __KEYSTONE_QMSS_H__ */
-- 
1.7.9.5
