Message-Id: <20180803161221.6322-1-ruxandra.radulescu@nxp.com>
Date:   Fri,  3 Aug 2018 11:12:21 -0500
From:   Ioana Radulescu <ruxandra.radulescu@....com>
To:     netdev@...r.kernel.org, davem@...emloft.net
Cc:     gregkh@...uxfoundation.org, devel@...verdev.osuosl.org,
        linux-kernel@...r.kernel.org, ioana.ciornei@....com,
        laurentiu.tudor@....com, madalin.bucur@....com,
        horia.geanta@....com
Subject: [PATCH net-next] [RFC] dpaa2-eth: Move DPAA2 Ethernet driver from staging to drivers/net

The DPAA2 Ethernet driver supports Freescale/NXP SoCs with DPAA2
(DataPath Acceleration Architecture v2). The driver manages
network objects discovered on the fsl-mc bus.

Signed-off-by: Ioana Radulescu <ruxandra.radulescu@....com>
---
The Freescale/NXP DPAA2 Ethernet driver was first included in
drivers/staging, due to its dependencies on two components located
there at the time of its initial submission:
* the fsl-mc bus driver, which was moved to drivers/bus in kernel 4.17
* the dpio driver, which is currently being moved to drivers/soc/fsl
(the change has been picked up by the ARM SoC tree and will be merged
into mainline during the upcoming merge window)

This patch depends on both the dpio driver patches in the ARM SoC
tree (https://lkml.org/lkml/2018/7/24/563) and the dpaa2-eth driver
patches added in Greg's staging tree after the 4.17 kernel release.

This patch is marked as an RFC due to the above dependencies; I considered
delaying it until after the merge window, which should take care of all
deps, but I'm hoping for an initial round of review before the window
opens. Also for review purposes, I generated this patch without the -M
option, although the driver files are moved without any code changes.

More information on the DPAA2 architecture and the interactions
between the fsl-mc bus and the objects present on it can be found in:
Documentation/networking/dpaa2/overview.rst

Thanks,
Ioana

 Documentation/networking/dpaa2/ethernet-driver.rst |  185 ++
 Documentation/networking/dpaa2/index.rst           |    1 +
 MAINTAINERS                                        |    4 +-
 drivers/net/ethernet/freescale/Kconfig             |    8 +
 drivers/net/ethernet/freescale/Makefile            |    2 +
 drivers/net/ethernet/freescale/dpaa2/Makefile      |   11 +
 .../net/ethernet/freescale/dpaa2/dpaa2-eth-trace.h |  158 ++
 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c   | 2661 ++++++++++++++++++++
 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h   |  412 +++
 .../net/ethernet/freescale/dpaa2/dpaa2-ethtool.c   |  280 ++
 drivers/net/ethernet/freescale/dpaa2/dpkg.h        |  480 ++++
 drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h    |  518 ++++
 drivers/net/ethernet/freescale/dpaa2/dpni.c        | 1600 ++++++++++++
 drivers/net/ethernet/freescale/dpaa2/dpni.h        |  824 ++++++
 drivers/staging/fsl-dpaa2/Kconfig                  |    8 -
 drivers/staging/fsl-dpaa2/Makefile                 |    1 -
 drivers/staging/fsl-dpaa2/ethernet/Makefile        |   11 -
 drivers/staging/fsl-dpaa2/ethernet/TODO            |   18 -
 .../staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h   |  158 --
 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c     | 2661 --------------------
 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h     |  412 ---
 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c |  280 --
 drivers/staging/fsl-dpaa2/ethernet/dpkg.h          |  480 ----
 drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h      |  518 ----
 drivers/staging/fsl-dpaa2/ethernet/dpni.c          | 1600 ------------
 drivers/staging/fsl-dpaa2/ethernet/dpni.h          |  824 ------
 .../staging/fsl-dpaa2/ethernet/ethernet-driver.rst |  185 --
 27 files changed, 7142 insertions(+), 7158 deletions(-)
 create mode 100644 Documentation/networking/dpaa2/ethernet-driver.rst
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/Makefile
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-trace.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpkg.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpni.c
 create mode 100644 drivers/net/ethernet/freescale/dpaa2/dpni.h
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/Makefile
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/TODO
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpkg.h
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.c
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/dpni.h
 delete mode 100644 drivers/staging/fsl-dpaa2/ethernet/ethernet-driver.rst

diff --git a/Documentation/networking/dpaa2/ethernet-driver.rst b/Documentation/networking/dpaa2/ethernet-driver.rst
new file mode 100644
index 0000000..90ec940
--- /dev/null
+++ b/Documentation/networking/dpaa2/ethernet-driver.rst
@@ -0,0 +1,185 @@
+.. SPDX-License-Identifier: GPL-2.0
+.. include:: <isonum.txt>
+
+===============================
+DPAA2 Ethernet driver
+===============================
+
+:Copyright: |copy| 2017-2018 NXP
+
+This file provides documentation for the Freescale DPAA2 Ethernet driver.
+
+Supported Platforms
+===================
+This driver provides networking support for Freescale DPAA2 SoCs, e.g.
+LS2080A, LS2088A, LS1088A.
+
+
+Architecture Overview
+=====================
+Unlike regular NICs, the DPAA2 architecture has no single hardware block
+representing a network interface; instead, several separate hardware resources
+work together to provide the networking functionality:
+
+- network interfaces
+- queues, channels
+- buffer pools
+- MAC/PHY
+
+All hardware resources are allocated and configured through the Management
+Complex (MC) portals. MC abstracts most of these resources as DPAA2 objects
+and exposes ABIs through which they can be configured and controlled. A few
+hardware resources, like queues, do not have a corresponding MC object and
+are treated as internal resources of other objects.
+
+For a more detailed description of the DPAA2 architecture and its object
+abstractions see *Documentation/networking/dpaa2/overview.rst*.
+
+Each Linux net device is built on top of a Datapath Network Interface (DPNI)
+object and uses Buffer Pools (DPBPs), I/O Portals (DPIOs) and Concentrators
+(DPCONs).
+
+Configuration interface::
+
+                 -----------------------
+                | DPAA2 Ethernet Driver |
+                 -----------------------
+                     .      .      .
+                     .      .      .
+             . . . . .      .      . . . . . .
+             .              .                .
+             .              .                .
+         ----------     ----------      -----------
+        | DPBP API |   | DPNI API |    | DPCON API |
+         ----------     ----------      -----------
+             .              .                .             software
+    =======  .  ==========  .  ============  .  ===================
+             .              .                .             hardware
+         ------------------------------------------
+        |            MC hardware portals           |
+         ------------------------------------------
+             .              .                .
+             .              .                .
+          ------         ------            -------
+         | DPBP |       | DPNI |          | DPCON |
+          ------         ------            -------
+
+The DPNIs are network interfaces without a direct one-to-one mapping to PHYs.
+DPBPs represent hardware buffer pools. Packet I/O is performed in the context
+of DPCON objects, using DPIO portals for managing and communicating with the
+hardware resources.
+
+Datapath (I/O) interface::
+
+         -----------------------------------------------
+        |           DPAA2 Ethernet Driver               |
+         -----------------------------------------------
+          |          ^        ^         |            |
+          |          |        |         |            |
+   enqueue|   dequeue|   data |  dequeue|       seed |
+    (Tx)  | (Rx, TxC)|  avail.|  request|     buffers|
+          |          |  notify|         |            |
+          |          |        |         |            |
+          V          |        |         V            V
+         -----------------------------------------------
+        |                 DPIO Driver                   |
+         -----------------------------------------------
+          |          |        |         |            |          software
+          |          |        |         |            |  ================
+          |          |        |         |            |          hardware
+         -----------------------------------------------
+        |               I/O hardware portals            |
+         -----------------------------------------------
+          |          ^        ^         |            |
+          |          |        |         |            |
+          |          |        |         V            |
+          V          |    ================           V
+        ----------------------           |      -------------
+ queues  ----------------------          |     | Buffer pool |
+          ----------------------         |      -------------
+                   =======================
+                                Channel
+
+Datapath I/O (DPIO) portals provide enqueue and dequeue services, data
+availability notifications and buffer pool management. DPIOs are shared between
+all DPAA2 objects (and implicitly all DPAA2 kernel drivers) that work with data
+frames, but must be affine to the CPUs for the purpose of traffic distribution.
+
+Frames are transmitted and received through hardware frame queues, which can be
+grouped in channels for the purpose of hardware scheduling. The Ethernet driver
+enqueues TX frames on egress queues, and after transmission is complete a TX
+confirmation frame is sent back to the CPU.
+
+When frames are available on ingress queues, a data availability notification
+is sent to the CPU; notifications are raised per channel, so even if multiple
+queues in the same channel have available frames, only one notification is sent.
+After a channel fires a notification, it must be explicitly rearmed.
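+
+A minimal sketch of the rearm step, mirroring the loop in this patch's
+dpaa2_eth_poll() NAPI routine::
+
+	/* Re-enable data available notifications for this channel */
+	do {
+		err = dpaa2_io_service_rearm(ch->dpio, &ch->nctx);
+		cpu_relax();
+	} while (err == -EBUSY);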
+
+Each network interface can have multiple Rx, Tx and confirmation queues affined
+to CPUs, and one channel (DPCON) for each CPU that services at least one queue.
+DPCONs are used to distribute ingress traffic to different CPUs via the cores'
+affine DPIOs.
+
+Hardware buffer pools store ingress frame data. Each network interface has a
+privately owned buffer pool, which it seeds with kernel-allocated buffers.
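+
+Seeding releases buffers to the hardware pool through the DPIO service; a
+condensed sketch of the release command, as issued by add_bufs() in this
+patch (retrying while the portal is busy)::
+
+	while ((err = dpaa2_io_service_release(ch->dpio, bpid,
+					       buf_array, i)) == -EBUSY)
+		cpu_relax();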
+
+
+DPNIs are decoupled from PHYs; a DPNI can be connected to a PHY through a DPMAC
+object or to another DPNI through an internal link, but the connection is
+managed by MC and completely transparent to the Ethernet driver.
+
+::
+
+     ---------     ---------     ---------
+    | eth if1 |   | eth if2 |   | eth ifn |
+     ---------     ---------     ---------
+          .           .          .
+          .           .          .
+          .           .          .
+         ---------------------------
+        |   DPAA2 Ethernet Driver   |
+         ---------------------------
+          .           .          .
+          .           .          .
+          .           .          .
+       ------      ------      ------            -------
+      | DPNI |    | DPNI |    | DPNI |          | DPMAC |----+
+       ------      ------      ------            -------     |
+         |           |           |                  |        |
+         |           |           |                  |      -----
+          ===========             ==================      | PHY |
+                                                           -----
+
+Creating a Network Interface
+============================
+A net device is created for each DPNI object probed on the MC bus. Each DPNI has
+a number of properties which determine the network interface configuration
+options and associated hardware resources.
+
+DPNI objects (and the other DPAA2 objects needed for a network interface) can
+be added to a container on the MC bus in one of two ways: statically, through
+a Datapath Layout (DPL) file that is parsed by MC at boot time, or dynamically
+at runtime, via the DPAA2 object APIs.
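+
+For illustration, the driver binds to DPNI objects through the fsl-mc bus
+matching mechanism; a condensed sketch of the registration pattern, with
+probe/remove handler names as used in dpaa2-eth.c::
+
+	static const struct fsl_mc_device_id dpaa2_eth_match_id_table[] = {
+		{
+			.vendor = FSL_MC_VENDOR_FREESCALE,
+			.obj_type = "dpni",
+		},
+		{ .vendor = 0x0 }
+	};
+
+	static struct fsl_mc_driver dpaa2_eth_driver = {
+		.driver = {
+			.name = KBUILD_MODNAME,
+			.owner = THIS_MODULE,
+		},
+		.probe = dpaa2_eth_probe,
+		.remove = dpaa2_eth_remove,
+		.match_id_table = dpaa2_eth_match_id_table,
+	};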
+
+
+Features & Offloads
+===================
+Hardware checksum offloading is supported for TCP and UDP over IPv4/6 frames.
+The checksum offloads can be independently configured on RX and TX through
+ethtool.
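+
+For instance, with eth0 as an example interface name::
+
+	$ ethtool -K eth0 rx on tx on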
+
+Hardware offload of unicast and multicast MAC filtering is supported on the
+ingress path and permanently enabled.
+
+Scatter-gather frames are supported on both RX and TX paths. On TX, SG support
+is configurable via ethtool; on RX it is always enabled.
+
+The DPAA2 hardware can process jumbo Ethernet frames of up to 10K bytes.
+
+The Ethernet driver defines a static flow hashing scheme that distributes
+traffic based on a 5-tuple key: src IP, dst IP, IP proto, L4 src port,
+L4 dst port. No user configuration is supported for now.
+
+Hardware-specific statistics for the network interface, as well as some
+non-standard driver stats, can be consulted through the ethtool -S option.
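+
+Example, with eth0 as the interface name::
+
+	$ ethtool -S eth0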
diff --git a/Documentation/networking/dpaa2/index.rst b/Documentation/networking/dpaa2/index.rst
index 10bea11..67bd87f 100644
--- a/Documentation/networking/dpaa2/index.rst
+++ b/Documentation/networking/dpaa2/index.rst
@@ -7,3 +7,4 @@ DPAA2 Documentation
 
    overview
    dpio-driver
+   ethernet-driver
diff --git a/MAINTAINERS b/MAINTAINERS
index c261842..8756769 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -4437,9 +4437,9 @@ F:	drivers/soc/fsl/dpio
 
 DPAA2 ETHERNET DRIVER
 M:	Ioana Radulescu <ruxandra.radulescu@....com>
-L:	linux-kernel@...r.kernel.org
+L:	netdev@...r.kernel.org
 S:	Maintained
-F:	drivers/staging/fsl-dpaa2/ethernet
+F:	drivers/net/ethernet/freescale/dpaa2
 
 DPAA2 ETHERNET SWITCH DRIVER
 M:	Ioana Radulescu <ruxandra.radulescu@....com>
diff --git a/drivers/net/ethernet/freescale/Kconfig b/drivers/net/ethernet/freescale/Kconfig
index a580a3d..7a30276 100644
--- a/drivers/net/ethernet/freescale/Kconfig
+++ b/drivers/net/ethernet/freescale/Kconfig
@@ -97,4 +97,12 @@ config GIANFAR
 
 source "drivers/net/ethernet/freescale/dpaa/Kconfig"
 
+config FSL_DPAA2_ETH
+	tristate "Freescale DPAA2 Ethernet"
+	depends on FSL_MC_BUS && FSL_MC_DPIO
+	depends on NETDEVICES && ETHERNET
+	---help---
+	  Ethernet driver for Freescale DPAA2 SoCs, using the
+	  Freescale MC bus driver.
+
 endif # NET_VENDOR_FREESCALE
diff --git a/drivers/net/ethernet/freescale/Makefile b/drivers/net/ethernet/freescale/Makefile
index 0914a3e..3b4ff08 100644
--- a/drivers/net/ethernet/freescale/Makefile
+++ b/drivers/net/ethernet/freescale/Makefile
@@ -21,3 +21,5 @@ ucc_geth_driver-objs := ucc_geth.o ucc_geth_ethtool.o
 
 obj-$(CONFIG_FSL_FMAN) += fman/
 obj-$(CONFIG_FSL_DPAA_ETH) += dpaa/
+
+obj-$(CONFIG_FSL_DPAA2_ETH) += dpaa2/
diff --git a/drivers/net/ethernet/freescale/dpaa2/Makefile b/drivers/net/ethernet/freescale/dpaa2/Makefile
new file mode 100644
index 0000000..9315ecd
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/Makefile
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the Freescale DPAA2 Ethernet controller
+#
+
+obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o
+
+fsl-dpaa2-eth-objs    := dpaa2-eth.o dpaa2-ethtool.o dpni.o
+
+# Needed by the tracing framework
+CFLAGS_dpaa2-eth.o := -I$(src)
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-trace.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-trace.h
new file mode 100644
index 0000000..9801528
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth-trace.h
@@ -0,0 +1,158 @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
+/* Copyright 2014-2015 Freescale Semiconductor Inc.
+ */
+
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM	dpaa2_eth
+
+#if !defined(_DPAA2_ETH_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _DPAA2_ETH_TRACE_H
+
+#include <linux/skbuff.h>
+#include <linux/netdevice.h>
+#include "dpaa2-eth.h"
+#include <linux/tracepoint.h>
+
+#define TR_FMT "[%s] fd: addr=0x%llx, len=%u, off=%u"
+/* trace_printk format for raw buffer event class */
+#define TR_BUF_FMT "[%s] vaddr=%p size=%zu dma_addr=%pad map_size=%zu bpid=%d"
+
+/* This is used to declare a class of events.
+ * Individual events of this type will be defined below.
+ */
+
+/* Store details about a frame descriptor */
+DECLARE_EVENT_CLASS(dpaa2_eth_fd,
+		    /* Trace function prototype */
+		    TP_PROTO(struct net_device *netdev,
+			     const struct dpaa2_fd *fd),
+
+		    /* Repeat argument list here */
+		    TP_ARGS(netdev, fd),
+
+		    /* A structure containing the relevant information we want
+		     * to record. Declare name and type for each normal element,
+		     * name, type and size for arrays. Use __string for variable
+		     * length strings.
+		     */
+		    TP_STRUCT__entry(
+				     __field(u64, fd_addr)
+				     __field(u32, fd_len)
+				     __field(u16, fd_offset)
+				     __string(name, netdev->name)
+		    ),
+
+		    /* The function that assigns values to the above declared
+		     * fields
+		     */
+		    TP_fast_assign(
+				   __entry->fd_addr = dpaa2_fd_get_addr(fd);
+				   __entry->fd_len = dpaa2_fd_get_len(fd);
+				   __entry->fd_offset = dpaa2_fd_get_offset(fd);
+				   __assign_str(name, netdev->name);
+		    ),
+
+		    /* This is what gets printed when the trace event is
+		     * triggered.
+		     */
+		    TP_printk(TR_FMT,
+			      __get_str(name),
+			      __entry->fd_addr,
+			      __entry->fd_len,
+			      __entry->fd_offset)
+);
+
+/* Now declare events of the above type. Format is:
+ * DEFINE_EVENT(class, name, proto, args), with proto and args same as for class
+ */
+
+/* Tx (egress) fd */
+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_fd,
+	     TP_PROTO(struct net_device *netdev,
+		      const struct dpaa2_fd *fd),
+
+	     TP_ARGS(netdev, fd)
+);
+
+/* Rx fd */
+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_fd,
+	     TP_PROTO(struct net_device *netdev,
+		      const struct dpaa2_fd *fd),
+
+	     TP_ARGS(netdev, fd)
+);
+
+/* Tx confirmation fd */
+DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_conf_fd,
+	     TP_PROTO(struct net_device *netdev,
+		      const struct dpaa2_fd *fd),
+
+	     TP_ARGS(netdev, fd)
+);
+
+/* Log data about raw buffers. Useful for tracing DPBP content. */
+TRACE_EVENT(dpaa2_eth_buf_seed,
+	    /* Trace function prototype */
+	    TP_PROTO(struct net_device *netdev,
+		     /* virtual address and size */
+		     void *vaddr,
+		     size_t size,
+		     /* dma map address and size */
+		     dma_addr_t dma_addr,
+		     size_t map_size,
+		     /* buffer pool id, if relevant */
+		     u16 bpid),
+
+	    /* Repeat argument list here */
+	    TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid),
+
+	    /* A structure containing the relevant information we want
+	     * to record. Declare name and type for each normal element,
+	     * name, type and size for arrays. Use __string for variable
+	     * length strings.
+	     */
+	    TP_STRUCT__entry(
+			     __field(void *, vaddr)
+			     __field(size_t, size)
+			     __field(dma_addr_t, dma_addr)
+			     __field(size_t, map_size)
+			     __field(u16, bpid)
+			     __string(name, netdev->name)
+	    ),
+
+	    /* The function that assigns values to the above declared
+	     * fields
+	     */
+	    TP_fast_assign(
+			   __entry->vaddr = vaddr;
+			   __entry->size = size;
+			   __entry->dma_addr = dma_addr;
+			   __entry->map_size = map_size;
+			   __entry->bpid = bpid;
+			   __assign_str(name, netdev->name);
+	    ),
+
+	    /* This is what gets printed when the trace event is
+	     * triggered.
+	     */
+	    TP_printk(TR_BUF_FMT,
+		      __get_str(name),
+		      __entry->vaddr,
+		      __entry->size,
+		      &__entry->dma_addr,
+		      __entry->map_size,
+		      __entry->bpid)
+);
+
+/* If only one event of a certain type needs to be declared, use TRACE_EVENT().
+ * The syntax is the same as for DECLARE_EVENT_CLASS().
+ */
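+
+/* These events can typically be enabled at runtime through tracefs, e.g.:
+ *   echo 1 > /sys/kernel/debug/tracing/events/dpaa2_eth/enable
+ */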
+
+#endif /* _DPAA2_ETH_TRACE_H */
+
+/* This must be outside ifdef _DPAA2_ETH_TRACE_H */
+#undef TRACE_INCLUDE_PATH
+#define TRACE_INCLUDE_PATH .
+#undef TRACE_INCLUDE_FILE
+#define TRACE_INCLUDE_FILE	dpaa2-eth-trace
+#include <trace/define_trace.h>
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
new file mode 100644
index 0000000..9329fca
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.c
@@ -0,0 +1,2661 @@
+// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
+/* Copyright 2014-2016 Freescale Semiconductor Inc.
+ * Copyright 2016-2017 NXP
+ */
+#include <linux/init.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/etherdevice.h>
+#include <linux/of_net.h>
+#include <linux/interrupt.h>
+#include <linux/msi.h>
+#include <linux/kthread.h>
+#include <linux/iommu.h>
+#include <linux/net_tstamp.h>
+#include <linux/fsl/mc.h>
+
+#include <net/sock.h>
+
+#include "dpaa2-eth.h"
+
+/* CREATE_TRACE_POINTS only needs to be defined once. Other files
+ * using these trace events only need to #include "dpaa2-eth-trace.h"
+ */
+#define CREATE_TRACE_POINTS
+#include "dpaa2-eth-trace.h"
+
+MODULE_LICENSE("Dual BSD/GPL");
+MODULE_AUTHOR("Freescale Semiconductor, Inc");
+MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
+
+static void *dpaa2_iova_to_virt(struct iommu_domain *domain,
+				dma_addr_t iova_addr)
+{
+	phys_addr_t phys_addr;
+
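+	/* With no IOMMU domain, the IOVA is already a physical address */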
+	phys_addr = domain ? iommu_iova_to_phys(domain, iova_addr) : iova_addr;
+
+	return phys_to_virt(phys_addr);
+}
+
+static void validate_rx_csum(struct dpaa2_eth_priv *priv,
+			     u32 fd_status,
+			     struct sk_buff *skb)
+{
+	skb_checksum_none_assert(skb);
+
+	/* HW checksum validation is disabled, nothing to do here */
+	if (!(priv->net_dev->features & NETIF_F_RXCSUM))
+		return;
+
+	/* Read checksum validation bits */
+	if (!((fd_status & DPAA2_FAS_L3CV) &&
+	      (fd_status & DPAA2_FAS_L4CV)))
+		return;
+
+	/* Inform the stack there's no need to compute L3/L4 csum anymore */
+	skb->ip_summed = CHECKSUM_UNNECESSARY;
+}
+
+/* Free a received FD.
+ * Not to be used for Tx conf FDs or on any other paths.
+ */
+static void free_rx_fd(struct dpaa2_eth_priv *priv,
+		       const struct dpaa2_fd *fd,
+		       void *vaddr)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	dma_addr_t addr = dpaa2_fd_get_addr(fd);
+	u8 fd_format = dpaa2_fd_get_format(fd);
+	struct dpaa2_sg_entry *sgt;
+	void *sg_vaddr;
+	int i;
+
+	/* If single buffer frame, just free the data buffer */
+	if (fd_format == dpaa2_fd_single)
+		goto free_buf;
+	else if (fd_format != dpaa2_fd_sg)
+		/* We don't support any other format */
+		return;
+
+	/* For S/G frames, we first need to free all SG entries
+	 * except the first one, which was taken care of already
+	 */
+	sgt = vaddr + dpaa2_fd_get_offset(fd);
+	for (i = 1; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
+		addr = dpaa2_sg_get_addr(&sgt[i]);
+		sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
+		dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE,
+				 DMA_FROM_DEVICE);
+
+		skb_free_frag(sg_vaddr);
+		if (dpaa2_sg_is_final(&sgt[i]))
+			break;
+	}
+
+free_buf:
+	skb_free_frag(vaddr);
+}
+
+/* Build a linear skb based on a single-buffer frame descriptor */
+static struct sk_buff *build_linear_skb(struct dpaa2_eth_priv *priv,
+					struct dpaa2_eth_channel *ch,
+					const struct dpaa2_fd *fd,
+					void *fd_vaddr)
+{
+	struct sk_buff *skb = NULL;
+	u16 fd_offset = dpaa2_fd_get_offset(fd);
+	u32 fd_length = dpaa2_fd_get_len(fd);
+
+	ch->buf_count--;
+
+	skb = build_skb(fd_vaddr, DPAA2_ETH_SKB_SIZE);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, fd_offset);
+	skb_put(skb, fd_length);
+
+	return skb;
+}
+
+/* Build a non linear (fragmented) skb based on a S/G table */
+static struct sk_buff *build_frag_skb(struct dpaa2_eth_priv *priv,
+				      struct dpaa2_eth_channel *ch,
+				      struct dpaa2_sg_entry *sgt)
+{
+	struct sk_buff *skb = NULL;
+	struct device *dev = priv->net_dev->dev.parent;
+	void *sg_vaddr;
+	dma_addr_t sg_addr;
+	u16 sg_offset;
+	u32 sg_length;
+	struct page *page, *head_page;
+	int page_offset;
+	int i;
+
+	for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
+		struct dpaa2_sg_entry *sge = &sgt[i];
+
+		/* NOTE: We only support SG entries in dpaa2_sg_single format,
+		 * but this is the only format we may receive from HW anyway
+		 */
+
+		/* Get the address and length from the S/G entry */
+		sg_addr = dpaa2_sg_get_addr(sge);
+		sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, sg_addr);
+		dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUF_SIZE,
+				 DMA_FROM_DEVICE);
+
+		sg_length = dpaa2_sg_get_len(sge);
+
+		if (i == 0) {
+			/* We build the skb around the first data buffer */
+			skb = build_skb(sg_vaddr, DPAA2_ETH_SKB_SIZE);
+			if (unlikely(!skb)) {
+				/* Free the first SG entry now, since we already
+				 * unmapped it and obtained the virtual address
+				 */
+				skb_free_frag(sg_vaddr);
+
+				/* We still need to subtract the buffers used
+				 * by this FD from our software counter
+				 */
+				while (i < DPAA2_ETH_MAX_SG_ENTRIES &&
+				       !dpaa2_sg_is_final(&sgt[i]))
+					i++;
+				break;
+			}
+
+			sg_offset = dpaa2_sg_get_offset(sge);
+			skb_reserve(skb, sg_offset);
+			skb_put(skb, sg_length);
+		} else {
+			/* Rest of the data buffers are stored as skb frags */
+			page = virt_to_page(sg_vaddr);
+			head_page = virt_to_head_page(sg_vaddr);
+
+			/* Offset in page (which may be compound).
+			 * Data in subsequent SG entries is stored from the
+			 * beginning of the buffer, so we don't need to add the
+			 * sg_offset.
+			 */
+			page_offset = ((unsigned long)sg_vaddr &
+				(PAGE_SIZE - 1)) +
+				(page_address(page) - page_address(head_page));
+
+			skb_add_rx_frag(skb, i - 1, head_page, page_offset,
+					sg_length, DPAA2_ETH_RX_BUF_SIZE);
+		}
+
+		if (dpaa2_sg_is_final(sge))
+			break;
+	}
+
+	WARN_ONCE(i == DPAA2_ETH_MAX_SG_ENTRIES, "Final bit not set in SGT");
+
+	/* Count all data buffers + SG table buffer */
+	ch->buf_count -= i + 2;
+
+	return skb;
+}
+
+/* Main Rx frame processing routine */
+static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
+			 struct dpaa2_eth_channel *ch,
+			 const struct dpaa2_fd *fd,
+			 struct napi_struct *napi,
+			 u16 queue_id)
+{
+	dma_addr_t addr = dpaa2_fd_get_addr(fd);
+	u8 fd_format = dpaa2_fd_get_format(fd);
+	void *vaddr;
+	struct sk_buff *skb;
+	struct rtnl_link_stats64 *percpu_stats;
+	struct dpaa2_eth_drv_stats *percpu_extras;
+	struct device *dev = priv->net_dev->dev.parent;
+	struct dpaa2_fas *fas;
+	void *buf_data;
+	u32 status = 0;
+
+	/* Tracing point */
+	trace_dpaa2_rx_fd(priv->net_dev, fd);
+
+	vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
+	dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, DMA_FROM_DEVICE);
+
+	fas = dpaa2_get_fas(vaddr, false);
+	prefetch(fas);
+	buf_data = vaddr + dpaa2_fd_get_offset(fd);
+	prefetch(buf_data);
+
+	percpu_stats = this_cpu_ptr(priv->percpu_stats);
+	percpu_extras = this_cpu_ptr(priv->percpu_extras);
+
+	if (fd_format == dpaa2_fd_single) {
+		skb = build_linear_skb(priv, ch, fd, vaddr);
+	} else if (fd_format == dpaa2_fd_sg) {
+		skb = build_frag_skb(priv, ch, buf_data);
+		skb_free_frag(vaddr);
+		percpu_extras->rx_sg_frames++;
+		percpu_extras->rx_sg_bytes += dpaa2_fd_get_len(fd);
+	} else {
+		/* We don't support any other format */
+		goto err_frame_format;
+	}
+
+	if (unlikely(!skb))
+		goto err_build_skb;
+
+	prefetch(skb->data);
+
+	/* Get the timestamp value */
+	if (priv->rx_tstamp) {
+		struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
+		__le64 *ts = dpaa2_get_ts(vaddr, false);
+		u64 ns;
+
+		memset(shhwtstamps, 0, sizeof(*shhwtstamps));
+
+		ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
+		shhwtstamps->hwtstamp = ns_to_ktime(ns);
+	}
+
+	/* Check if we need to validate the L4 csum */
+	if (likely(dpaa2_fd_get_frc(fd) & DPAA2_FD_FRC_FASV)) {
+		status = le32_to_cpu(fas->status);
+		validate_rx_csum(priv, status, skb);
+	}
+
+	skb->protocol = eth_type_trans(skb, priv->net_dev);
+	skb_record_rx_queue(skb, queue_id);
+
+	percpu_stats->rx_packets++;
+	percpu_stats->rx_bytes += dpaa2_fd_get_len(fd);
+
+	napi_gro_receive(napi, skb);
+
+	return;
+
+err_build_skb:
+	free_rx_fd(priv, fd, vaddr);
+err_frame_format:
+	percpu_stats->rx_dropped++;
+}
+
+/* Consume all frames pull-dequeued into the store. This is the simplest way to
+ * make sure we don't accidentally issue another volatile dequeue which would
+ * overwrite (leak) frames already in the store.
+ *
+ * Observance of NAPI budget is not our concern, leaving that to the caller.
+ */
+static int consume_frames(struct dpaa2_eth_channel *ch)
+{
+	struct dpaa2_eth_priv *priv = ch->priv;
+	struct dpaa2_eth_fq *fq;
+	struct dpaa2_dq *dq;
+	const struct dpaa2_fd *fd;
+	int cleaned = 0;
+	int is_last;
+
+	do {
+		dq = dpaa2_io_store_next(ch->store, &is_last);
+		if (unlikely(!dq)) {
+			/* If we're here, we *must* have placed a
+			 * volatile dequeue command, so keep reading through
+			 * the store until we get some sort of valid response
+			 * token (either a valid frame or an "empty dequeue")
+			 */
+			continue;
+		}
+
+		fd = dpaa2_dq_fd(dq);
+		fq = (struct dpaa2_eth_fq *)(uintptr_t)dpaa2_dq_fqd_ctx(dq);
+		fq->stats.frames++;
+
+		fq->consume(priv, ch, fd, &ch->napi, fq->flowid);
+		cleaned++;
+	} while (!is_last);
+
+	return cleaned;
+}
+
+/* Configure the egress frame annotation for timestamp update */
+static void enable_tx_tstamp(struct dpaa2_fd *fd, void *buf_start)
+{
+	struct dpaa2_faead *faead;
+	u32 ctrl, frc;
+
+	/* Mark the egress frame annotation area as valid */
+	frc = dpaa2_fd_get_frc(fd);
+	dpaa2_fd_set_frc(fd, frc | DPAA2_FD_FRC_FAEADV);
+
+	/* Set hardware annotation size */
+	ctrl = dpaa2_fd_get_ctrl(fd);
+	dpaa2_fd_set_ctrl(fd, ctrl | DPAA2_FD_CTRL_ASAL);
+
+	/* Enable the UPD (update prepended data) bit in the FAEAD field of
+	 * the hardware frame annotation area
+	 */
+	ctrl = DPAA2_FAEAD_A2V | DPAA2_FAEAD_UPDV | DPAA2_FAEAD_UPD;
+	faead = dpaa2_get_faead(buf_start, true);
+	faead->ctrl = cpu_to_le32(ctrl);
+}
+
+/* Create a frame descriptor based on a fragmented skb */
+static int build_sg_fd(struct dpaa2_eth_priv *priv,
+		       struct sk_buff *skb,
+		       struct dpaa2_fd *fd)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	void *sgt_buf = NULL;
+	dma_addr_t addr;
+	int nr_frags = skb_shinfo(skb)->nr_frags;
+	struct dpaa2_sg_entry *sgt;
+	int i, err;
+	int sgt_buf_size;
+	struct scatterlist *scl, *crt_scl;
+	int num_sg;
+	int num_dma_bufs;
+	struct dpaa2_eth_swa *swa;
+
+	/* Create and map scatterlist.
+	 * We don't advertise NETIF_F_FRAGLIST, so skb_to_sgvec() will not have
+	 * to go beyond nr_frags+1.
+	 * Note: We don't support chained scatterlists
+	 */
+	if (unlikely(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1))
+		return -EINVAL;
+
+	scl = kcalloc(nr_frags + 1, sizeof(struct scatterlist), GFP_ATOMIC);
+	if (unlikely(!scl))
+		return -ENOMEM;
+
+	sg_init_table(scl, nr_frags + 1);
+	num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
+	num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL);
+	if (unlikely(!num_dma_bufs)) {
+		err = -ENOMEM;
+		goto dma_map_sg_failed;
+	}
+
+	/* Prepare the HW SGT structure */
+	sgt_buf_size = priv->tx_data_offset +
+		       sizeof(struct dpaa2_sg_entry) * num_dma_bufs;
+	sgt_buf = netdev_alloc_frag(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN);
+	if (unlikely(!sgt_buf)) {
+		err = -ENOMEM;
+		goto sgt_buf_alloc_failed;
+	}
+	sgt_buf = PTR_ALIGN(sgt_buf, DPAA2_ETH_TX_BUF_ALIGN);
+	memset(sgt_buf, 0, sgt_buf_size);
+
+	sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
+
+	/* Fill in the HW SGT structure.
+	 *
+	 * sgt_buf is zeroed out, so the following fields are implicit
+	 * in all sgt entries:
+	 *   - offset is 0
+	 *   - format is 'dpaa2_sg_single'
+	 */
+	for_each_sg(scl, crt_scl, num_dma_bufs, i) {
+		dpaa2_sg_set_addr(&sgt[i], sg_dma_address(crt_scl));
+		dpaa2_sg_set_len(&sgt[i], sg_dma_len(crt_scl));
+	}
+	dpaa2_sg_set_final(&sgt[i - 1], true);
+
+	/* Store the skb backpointer in the SGT buffer.
+	 * Fit the scatterlist and the number of buffers alongside the
+	 * skb backpointer in the software annotation area. We'll need
+	 * all of them on Tx Conf.
+	 */
+	swa = (struct dpaa2_eth_swa *)sgt_buf;
+	swa->skb = skb;
+	swa->scl = scl;
+	swa->num_sg = num_sg;
+	swa->sgt_size = sgt_buf_size;
+
+	/* Separately map the SGT buffer */
+	addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_BIDIRECTIONAL);
+	if (unlikely(dma_mapping_error(dev, addr))) {
+		err = -ENOMEM;
+		goto dma_map_single_failed;
+	}
+	dpaa2_fd_set_offset(fd, priv->tx_data_offset);
+	dpaa2_fd_set_format(fd, dpaa2_fd_sg);
+	dpaa2_fd_set_addr(fd, addr);
+	dpaa2_fd_set_len(fd, skb->len);
+	dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA | FD_CTRL_PTV1);
+
+	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
+		enable_tx_tstamp(fd, sgt_buf);
+
+	return 0;
+
+dma_map_single_failed:
+	skb_free_frag(sgt_buf);
+sgt_buf_alloc_failed:
+	dma_unmap_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL);
+dma_map_sg_failed:
+	kfree(scl);
+	return err;
+}
+
+/* Create a frame descriptor based on a linear skb */
+static int build_single_fd(struct dpaa2_eth_priv *priv,
+			   struct sk_buff *skb,
+			   struct dpaa2_fd *fd)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	u8 *buffer_start, *aligned_start;
+	struct sk_buff **skbh;
+	dma_addr_t addr;
+
+	buffer_start = skb->data - dpaa2_eth_needed_headroom(priv, skb);
+
+	/* If there's enough room to align the FD address, do it.
+	 * It will help hardware optimize accesses.
+	 */
+	aligned_start = PTR_ALIGN(buffer_start - DPAA2_ETH_TX_BUF_ALIGN,
+				  DPAA2_ETH_TX_BUF_ALIGN);
+	if (aligned_start >= skb->head)
+		buffer_start = aligned_start;
+
+	/* Store a backpointer to the skb at the beginning of the buffer
+	 * (in the private data area) such that we can release it
+	 * on Tx confirm
+	 */
+	skbh = (struct sk_buff **)buffer_start;
+	*skbh = skb;
+
+	addr = dma_map_single(dev, buffer_start,
+			      skb_tail_pointer(skb) - buffer_start,
+			      DMA_BIDIRECTIONAL);
+	if (unlikely(dma_mapping_error(dev, addr)))
+		return -ENOMEM;
+
+	dpaa2_fd_set_addr(fd, addr);
+	dpaa2_fd_set_offset(fd, (u16)(skb->data - buffer_start));
+	dpaa2_fd_set_len(fd, skb->len);
+	dpaa2_fd_set_format(fd, dpaa2_fd_single);
+	dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA | FD_CTRL_PTV1);
+
+	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
+		enable_tx_tstamp(fd, buffer_start);
+
+	return 0;
+}
+
+/* FD freeing routine on the Tx path
+ *
+ * DMA-unmap and free FD and possibly SGT buffer allocated on Tx. The skb
+ * back-pointed to is also freed.
+ * This can be called either from dpaa2_eth_tx_conf() or on the error path of
+ * dpaa2_eth_tx().
+ */
+static void free_tx_fd(const struct dpaa2_eth_priv *priv,
+		       const struct dpaa2_fd *fd)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	dma_addr_t fd_addr;
+	struct sk_buff **skbh, *skb;
+	unsigned char *buffer_start;
+	struct dpaa2_eth_swa *swa;
+	u8 fd_format = dpaa2_fd_get_format(fd);
+
+	fd_addr = dpaa2_fd_get_addr(fd);
+	skbh = dpaa2_iova_to_virt(priv->iommu_domain, fd_addr);
+
+	if (fd_format == dpaa2_fd_single) {
+		skb = *skbh;
+		buffer_start = (unsigned char *)skbh;
+		/* Accessing the skb buffer is safe before dma unmap, because
+		 * we didn't map the actual skb shell.
+		 */
+		dma_unmap_single(dev, fd_addr,
+				 skb_tail_pointer(skb) - buffer_start,
+				 DMA_BIDIRECTIONAL);
+	} else if (fd_format == dpaa2_fd_sg) {
+		swa = (struct dpaa2_eth_swa *)skbh;
+		skb = swa->skb;
+
+		/* Unmap the scatterlist */
+		dma_unmap_sg(dev, swa->scl, swa->num_sg, DMA_BIDIRECTIONAL);
+		kfree(swa->scl);
+
+		/* Unmap the SGT buffer */
+		dma_unmap_single(dev, fd_addr, swa->sgt_size,
+				 DMA_BIDIRECTIONAL);
+	} else {
+		netdev_dbg(priv->net_dev, "Invalid FD format\n");
+		return;
+	}
+
+	/* Get the timestamp value */
+	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
+		struct skb_shared_hwtstamps shhwtstamps;
+		__le64 *ts = dpaa2_get_ts(skbh, true);
+		u64 ns;
+
+		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
+
+		ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
+		shhwtstamps.hwtstamp = ns_to_ktime(ns);
+		skb_tstamp_tx(skb, &shhwtstamps);
+	}
+
+	/* Free SGT buffer allocated on tx */
+	if (fd_format != dpaa2_fd_single)
+		skb_free_frag(skbh);
+
+	/* Move on with skb release */
+	dev_kfree_skb(skb);
+}
+
+static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	struct dpaa2_fd fd;
+	struct rtnl_link_stats64 *percpu_stats;
+	struct dpaa2_eth_drv_stats *percpu_extras;
+	struct dpaa2_eth_fq *fq;
+	u16 queue_mapping;
+	unsigned int needed_headroom;
+	int err, i;
+
+	percpu_stats = this_cpu_ptr(priv->percpu_stats);
+	percpu_extras = this_cpu_ptr(priv->percpu_extras);
+
+	needed_headroom = dpaa2_eth_needed_headroom(priv, skb);
+	if (skb_headroom(skb) < needed_headroom) {
+		struct sk_buff *ns;
+
+		ns = skb_realloc_headroom(skb, needed_headroom);
+		if (unlikely(!ns)) {
+			percpu_stats->tx_dropped++;
+			goto err_alloc_headroom;
+		}
+		percpu_extras->tx_reallocs++;
+
+		if (skb->sk)
+			skb_set_owner_w(ns, skb->sk);
+
+		dev_kfree_skb(skb);
+		skb = ns;
+	}
+
+	/* We'll be holding a back-reference to the skb until Tx Confirmation;
+	 * we don't want that overwritten by a concurrent Tx with a cloned skb.
+	 */
+	skb = skb_unshare(skb, GFP_ATOMIC);
+	if (unlikely(!skb)) {
+		/* skb_unshare() has already freed the skb */
+		percpu_stats->tx_dropped++;
+		return NETDEV_TX_OK;
+	}
+
+	/* Setup the FD fields */
+	memset(&fd, 0, sizeof(fd));
+
+	if (skb_is_nonlinear(skb)) {
+		err = build_sg_fd(priv, skb, &fd);
+		percpu_extras->tx_sg_frames++;
+		percpu_extras->tx_sg_bytes += skb->len;
+	} else {
+		err = build_single_fd(priv, skb, &fd);
+	}
+
+	if (unlikely(err)) {
+		percpu_stats->tx_dropped++;
+		goto err_build_fd;
+	}
+
+	/* Tracing point */
+	trace_dpaa2_tx_fd(net_dev, &fd);
+
+	/* TxConf FQ selection relies on queue id from the stack.
+	 * In case of a forwarded frame from another DPNI interface, we choose
+	 * a queue affined to the same core that processed the Rx frame
+	 */
+	queue_mapping = skb_get_queue_mapping(skb);
+	fq = &priv->fq[queue_mapping];
+	for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
+		err = dpaa2_io_service_enqueue_qd(fq->channel->dpio,
+						  priv->tx_qdid, 0,
+						  fq->tx_qdbin, &fd);
+		if (err != -EBUSY)
+			break;
+	}
+	percpu_extras->tx_portal_busy += i;
+	if (unlikely(err < 0)) {
+		percpu_stats->tx_errors++;
+		/* Clean up everything, including freeing the skb */
+		free_tx_fd(priv, &fd);
+	} else {
+		percpu_stats->tx_packets++;
+		percpu_stats->tx_bytes += dpaa2_fd_get_len(&fd);
+	}
+
+	return NETDEV_TX_OK;
+
+err_build_fd:
+err_alloc_headroom:
+	dev_kfree_skb(skb);
+
+	return NETDEV_TX_OK;
+}
+
+/* Tx confirmation frame processing routine */
+static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
+			      struct dpaa2_eth_channel *ch,
+			      const struct dpaa2_fd *fd,
+			      struct napi_struct *napi __always_unused,
+			      u16 queue_id __always_unused)
+{
+	struct rtnl_link_stats64 *percpu_stats;
+	struct dpaa2_eth_drv_stats *percpu_extras;
+	u32 fd_errors;
+
+	/* Tracing point */
+	trace_dpaa2_tx_conf_fd(priv->net_dev, fd);
+
+	percpu_extras = this_cpu_ptr(priv->percpu_extras);
+	percpu_extras->tx_conf_frames++;
+	percpu_extras->tx_conf_bytes += dpaa2_fd_get_len(fd);
+
+	/* Check frame errors in the FD field */
+	fd_errors = dpaa2_fd_get_ctrl(fd) & DPAA2_FD_TX_ERR_MASK;
+	free_tx_fd(priv, fd);
+
+	if (likely(!fd_errors))
+		return;
+
+	if (net_ratelimit())
+		netdev_dbg(priv->net_dev, "TX frame FD error: 0x%08x\n",
+			   fd_errors);
+
+	percpu_stats = this_cpu_ptr(priv->percpu_stats);
+	/* Tx-conf logically pertains to the egress path. */
+	percpu_stats->tx_errors++;
+}
+
+static int set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
+{
+	int err;
+
+	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
+			       DPNI_OFF_RX_L3_CSUM, enable);
+	if (err) {
+		netdev_err(priv->net_dev,
+			   "dpni_set_offload(RX_L3_CSUM) failed\n");
+		return err;
+	}
+
+	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
+			       DPNI_OFF_RX_L4_CSUM, enable);
+	if (err) {
+		netdev_err(priv->net_dev,
+			   "dpni_set_offload(RX_L4_CSUM) failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
+static int set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
+{
+	int err;
+
+	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
+			       DPNI_OFF_TX_L3_CSUM, enable);
+	if (err) {
+		netdev_err(priv->net_dev, "dpni_set_offload(TX_L3_CSUM) failed\n");
+		return err;
+	}
+
+	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
+			       DPNI_OFF_TX_L4_CSUM, enable);
+	if (err) {
+		netdev_err(priv->net_dev, "dpni_set_offload(TX_L4_CSUM) failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
+/* Free buffers acquired from the buffer pool or which were meant to
+ * be released to the pool
+ */
+static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	void *vaddr;
+	int i;
+
+	for (i = 0; i < count; i++) {
+		vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
+		dma_unmap_single(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE,
+				 DMA_FROM_DEVICE);
+		skb_free_frag(vaddr);
+	}
+}
+
+/* Perform a single release command to add buffers
+ * to the specified buffer pool
+ */
+static int add_bufs(struct dpaa2_eth_priv *priv,
+		    struct dpaa2_eth_channel *ch, u16 bpid)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
+	void *buf;
+	dma_addr_t addr;
+	int i, err;
+
+	for (i = 0; i < DPAA2_ETH_BUFS_PER_CMD; i++) {
+		/* Allocate buffer visible to WRIOP + skb shared info +
+		 * alignment padding
+		 */
+		buf = napi_alloc_frag(dpaa2_eth_buf_raw_size(priv));
+		if (unlikely(!buf))
+			goto err_alloc;
+
+		buf = PTR_ALIGN(buf, priv->rx_buf_align);
+
+		addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUF_SIZE,
+				      DMA_FROM_DEVICE);
+		if (unlikely(dma_mapping_error(dev, addr)))
+			goto err_map;
+
+		buf_array[i] = addr;
+
+		/* tracing point */
+		trace_dpaa2_eth_buf_seed(priv->net_dev,
+					 buf, dpaa2_eth_buf_raw_size(priv),
+					 addr, DPAA2_ETH_RX_BUF_SIZE,
+					 bpid);
+	}
+
+release_bufs:
+	/* In case the portal is busy, retry until successful */
+	while ((err = dpaa2_io_service_release(ch->dpio, bpid,
+					       buf_array, i)) == -EBUSY)
+		cpu_relax();
+
+	/* If release command failed, clean up and bail out;
+	 * not much else we can do about it
+	 */
+	if (err) {
+		free_bufs(priv, buf_array, i);
+		return 0;
+	}
+
+	return i;
+
+err_map:
+	skb_free_frag(buf);
+err_alloc:
+	/* If we managed to allocate at least some buffers,
+	 * release them to hardware
+	 */
+	if (i)
+		goto release_bufs;
+
+	return 0;
+}
+
+static int seed_pool(struct dpaa2_eth_priv *priv, u16 bpid)
+{
+	int i, j;
+	int new_count;
+
+	/* This is the lazy seeding of Rx buffer pools.
+	 * add_bufs() is also used on the Rx hotpath and calls
+	 * napi_alloc_frag(). The trouble with that is that it in turn ends up
+	 * calling this_cpu_ptr(), which mandates execution in atomic context.
+	 * Rather than splitting up the code, do a one-off preempt disable.
+	 */
+	preempt_disable();
+	for (j = 0; j < priv->num_channels; j++) {
+		for (i = 0; i < DPAA2_ETH_NUM_BUFS;
+		     i += DPAA2_ETH_BUFS_PER_CMD) {
+			new_count = add_bufs(priv, priv->channel[j], bpid);
+			priv->channel[j]->buf_count += new_count;
+
+			if (new_count < DPAA2_ETH_BUFS_PER_CMD) {
+				preempt_enable();
+				return -ENOMEM;
+			}
+		}
+	}
+	preempt_enable();
+
+	return 0;
+}
+
+/**
+ * Drain the specified number of buffers from the DPNI's private buffer pool.
+ * @count must not exceed DPAA2_ETH_BUFS_PER_CMD
+ */
+static void drain_bufs(struct dpaa2_eth_priv *priv, int count)
+{
+	u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
+	int ret;
+
+	do {
+		ret = dpaa2_io_service_acquire(NULL, priv->bpid,
+					       buf_array, count);
+		if (ret < 0) {
+			netdev_err(priv->net_dev, "dpaa2_io_service_acquire() failed\n");
+			return;
+		}
+		free_bufs(priv, buf_array, ret);
+	} while (ret);
+}
+
+static void drain_pool(struct dpaa2_eth_priv *priv)
+{
+	int i;
+
+	drain_bufs(priv, DPAA2_ETH_BUFS_PER_CMD);
+	drain_bufs(priv, 1);
+
+	for (i = 0; i < priv->num_channels; i++)
+		priv->channel[i]->buf_count = 0;
+}
+
+/* Function is called from softirq context only, so we don't need to guard
+ * the access to percpu count
+ */
+static int refill_pool(struct dpaa2_eth_priv *priv,
+		       struct dpaa2_eth_channel *ch,
+		       u16 bpid)
+{
+	int new_count;
+
+	if (likely(ch->buf_count >= DPAA2_ETH_REFILL_THRESH))
+		return 0;
+
+	do {
+		new_count = add_bufs(priv, ch, bpid);
+		if (unlikely(!new_count)) {
+			/* Out of memory; abort for now, we'll try later on */
+			break;
+		}
+		ch->buf_count += new_count;
+	} while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
+
+	if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
+		return -ENOMEM;
+
+	return 0;
+}
+
+static int pull_channel(struct dpaa2_eth_channel *ch)
+{
+	int err;
+	int dequeues = -1;
+
+	/* Retry while portal is busy */
+	do {
+		err = dpaa2_io_service_pull_channel(ch->dpio, ch->ch_id,
+						    ch->store);
+		dequeues++;
+		cpu_relax();
+	} while (err == -EBUSY);
+
+	ch->stats.dequeue_portal_busy += dequeues;
+	if (unlikely(err))
+		ch->stats.pull_err++;
+
+	return err;
+}
+
+/* NAPI poll routine
+ *
+ * Frames are dequeued from the QMan channel associated with this NAPI context.
+ * Rx, Tx confirmation and (if configured) Rx error frames all count
+ * towards the NAPI budget.
+ */
+static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
+{
+	struct dpaa2_eth_channel *ch;
+	int cleaned = 0, store_cleaned;
+	struct dpaa2_eth_priv *priv;
+	int err;
+
+	ch = container_of(napi, struct dpaa2_eth_channel, napi);
+	priv = ch->priv;
+
+	while (cleaned < budget) {
+		err = pull_channel(ch);
+		if (unlikely(err))
+			break;
+
+		/* Refill pool if appropriate */
+		refill_pool(priv, ch, priv->bpid);
+
+		store_cleaned = consume_frames(ch);
+		cleaned += store_cleaned;
+
+		/* If we have enough budget left for a full store,
+		 * try a new pull dequeue, otherwise we're done here
+		 */
+		if (store_cleaned == 0 ||
+		    cleaned > budget - DPAA2_ETH_STORE_SIZE)
+			break;
+	}
+
+	if (cleaned < budget && napi_complete_done(napi, cleaned)) {
+		/* Re-enable data available notifications */
+		do {
+			err = dpaa2_io_service_rearm(ch->dpio, &ch->nctx);
+			cpu_relax();
+		} while (err == -EBUSY);
+		WARN_ONCE(err, "CDAN notifications rearm failed on core %d",
+			  ch->nctx.desired_cpu);
+	}
+
+	ch->stats.frames += cleaned;
+
+	return cleaned;
+}
+
+static void enable_ch_napi(struct dpaa2_eth_priv *priv)
+{
+	struct dpaa2_eth_channel *ch;
+	int i;
+
+	for (i = 0; i < priv->num_channels; i++) {
+		ch = priv->channel[i];
+		napi_enable(&ch->napi);
+	}
+}
+
+static void disable_ch_napi(struct dpaa2_eth_priv *priv)
+{
+	struct dpaa2_eth_channel *ch;
+	int i;
+
+	for (i = 0; i < priv->num_channels; i++) {
+		ch = priv->channel[i];
+		napi_disable(&ch->napi);
+	}
+}
+
+static int link_state_update(struct dpaa2_eth_priv *priv)
+{
+	struct dpni_link_state state;
+	int err;
+
+	err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
+	if (unlikely(err)) {
+		netdev_err(priv->net_dev,
+			   "dpni_get_link_state() failed\n");
+		return err;
+	}
+
+	/* Check link state; speed / duplex changes are not treated yet */
+	if (priv->link_state.up == state.up)
+		return 0;
+
+	priv->link_state = state;
+	if (state.up) {
+		netif_carrier_on(priv->net_dev);
+		netif_tx_start_all_queues(priv->net_dev);
+	} else {
+		netif_tx_stop_all_queues(priv->net_dev);
+		netif_carrier_off(priv->net_dev);
+	}
+
+	netdev_info(priv->net_dev, "Link Event: state %s\n",
+		    state.up ? "up" : "down");
+
+	return 0;
+}
+
+static int dpaa2_eth_open(struct net_device *net_dev)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	int err;
+
+	err = seed_pool(priv, priv->bpid);
+	if (err) {
+		/* Not much to do; the buffer pool, though not filled up,
+		 * may still contain some buffers which would enable us
+		 * to limp on.
+		 */
+		netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
+			   priv->dpbp_dev->obj_desc.id, priv->bpid);
+	}
+
+	/* We'll only start the txqs when the link is actually ready; make sure
+	 * we don't race against the link up notification, which may come
+	 * immediately after dpni_enable();
+	 */
+	netif_tx_stop_all_queues(net_dev);
+	enable_ch_napi(priv);
+	/* Also, explicitly set carrier off, otherwise netif_carrier_ok() will
+	 * return true and cause 'ip link show' to report the LOWER_UP flag,
+	 * even though the link notification wasn't even received.
+	 */
+	netif_carrier_off(net_dev);
+
+	err = dpni_enable(priv->mc_io, 0, priv->mc_token);
+	if (err < 0) {
+		netdev_err(net_dev, "dpni_enable() failed\n");
+		goto enable_err;
+	}
+
+	/* If the DPMAC object has already processed the link up interrupt,
+	 * we have to learn the link state ourselves.
+	 */
+	err = link_state_update(priv);
+	if (err < 0) {
+		netdev_err(net_dev, "Can't update link state\n");
+		goto link_state_err;
+	}
+
+	return 0;
+
+link_state_err:
+enable_err:
+	disable_ch_napi(priv);
+	drain_pool(priv);
+	return err;
+}
+
+/* The DPIO store must be empty when we call this,
+ * at the end of every NAPI cycle.
+ */
+static u32 drain_channel(struct dpaa2_eth_priv *priv,
+			 struct dpaa2_eth_channel *ch)
+{
+	u32 drained = 0, total = 0;
+
+	do {
+		pull_channel(ch);
+		drained = consume_frames(ch);
+		total += drained;
+	} while (drained);
+
+	return total;
+}
+
+static u32 drain_ingress_frames(struct dpaa2_eth_priv *priv)
+{
+	struct dpaa2_eth_channel *ch;
+	int i;
+	u32 drained = 0;
+
+	for (i = 0; i < priv->num_channels; i++) {
+		ch = priv->channel[i];
+		drained += drain_channel(priv, ch);
+	}
+
+	return drained;
+}
+
+static int dpaa2_eth_stop(struct net_device *net_dev)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	int dpni_enabled;
+	int retries = 10;
+	u32 drained;
+
+	netif_tx_stop_all_queues(net_dev);
+	netif_carrier_off(net_dev);
+
+	/* Loop while dpni_disable() attempts to drain the egress FQs
+	 * and confirm them back to us.
+	 */
+	do {
+		dpni_disable(priv->mc_io, 0, priv->mc_token);
+		dpni_is_enabled(priv->mc_io, 0, priv->mc_token, &dpni_enabled);
+		if (dpni_enabled)
+			/* Allow the hardware some slack */
+			msleep(100);
+	} while (dpni_enabled && --retries);
+	if (!retries) {
+		netdev_warn(net_dev, "Retry count exceeded disabling DPNI\n");
+		/* Must go on and disable NAPI nonetheless, so we don't crash at
+		 * the next "ifconfig up"
+		 */
+	}
+
+	/* Wait for NAPI to complete on every core and disable it.
+	 * In particular, this will also prevent NAPI from being rescheduled if
+	 * a new CDAN is serviced, effectively discarding the CDAN. We therefore
+	 * don't even need to disarm the channels, except perhaps for the case
+	 * of a huge coalescing value.
+	 */
+	disable_ch_napi(priv);
+
+	/* Manually drain the Rx and TxConf queues */
+	drained = drain_ingress_frames(priv);
+	if (drained)
+		netdev_dbg(net_dev, "Drained %d frames.\n", drained);
+
+	/* Empty the buffer pool */
+	drain_pool(priv);
+
+	return 0;
+}
+
+static int dpaa2_eth_init(struct net_device *net_dev)
+{
+	u64 supported = 0;
+	u64 not_supported = 0;
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	u32 options = priv->dpni_attrs.options;
+
+	/* Capabilities listing */
+	supported |= IFF_LIVE_ADDR_CHANGE;
+
+	if (options & DPNI_OPT_NO_MAC_FILTER)
+		not_supported |= IFF_UNICAST_FLT;
+	else
+		supported |= IFF_UNICAST_FLT;
+
+	net_dev->priv_flags |= supported;
+	net_dev->priv_flags &= ~not_supported;
+
+	/* Features */
+	net_dev->features = NETIF_F_RXCSUM |
+			    NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
+			    NETIF_F_SG | NETIF_F_HIGHDMA |
+			    NETIF_F_LLTX;
+	net_dev->hw_features = net_dev->features;
+
+	return 0;
+}
+
+static int dpaa2_eth_set_addr(struct net_device *net_dev, void *addr)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	struct device *dev = net_dev->dev.parent;
+	int err;
+
+	err = eth_mac_addr(net_dev, addr);
+	if (err < 0) {
+		dev_err(dev, "eth_mac_addr() failed (%d)\n", err);
+		return err;
+	}
+
+	err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
+					net_dev->dev_addr);
+	if (err) {
+		dev_err(dev, "dpni_set_primary_mac_addr() failed (%d)\n", err);
+		return err;
+	}
+
+	return 0;
+}
+
+/** Fill in counters maintained by the GPP driver. These may be different from
+ * the hardware counters obtained by ethtool.
+ */
+static void dpaa2_eth_get_stats(struct net_device *net_dev,
+				struct rtnl_link_stats64 *stats)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	struct rtnl_link_stats64 *percpu_stats;
+	u64 *cpustats;
+	u64 *netstats = (u64 *)stats;
+	int i, j;
+	int num = sizeof(struct rtnl_link_stats64) / sizeof(u64);
+
+	for_each_possible_cpu(i) {
+		percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
+		cpustats = (u64 *)percpu_stats;
+		for (j = 0; j < num; j++)
+			netstats[j] += cpustats[j];
+	}
+}
+
+/* Copy mac unicast addresses from @net_dev to @priv.
+ * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
+ */
+static void add_uc_hw_addr(const struct net_device *net_dev,
+			   struct dpaa2_eth_priv *priv)
+{
+	struct netdev_hw_addr *ha;
+	int err;
+
+	netdev_for_each_uc_addr(ha, net_dev) {
+		err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
+					ha->addr);
+		if (err)
+			netdev_warn(priv->net_dev,
+				    "Could not add ucast MAC %pM to the filtering table (err %d)\n",
+				    ha->addr, err);
+	}
+}
+
+/* Copy mac multicast addresses from @net_dev to @priv
+ * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
+ */
+static void add_mc_hw_addr(const struct net_device *net_dev,
+			   struct dpaa2_eth_priv *priv)
+{
+	struct netdev_hw_addr *ha;
+	int err;
+
+	netdev_for_each_mc_addr(ha, net_dev) {
+		err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
+					ha->addr);
+		if (err)
+			netdev_warn(priv->net_dev,
+				    "Could not add mcast MAC %pM to the filtering table (err %d)\n",
+				    ha->addr, err);
+	}
+}
+
+static void dpaa2_eth_set_rx_mode(struct net_device *net_dev)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	int uc_count = netdev_uc_count(net_dev);
+	int mc_count = netdev_mc_count(net_dev);
+	u8 max_mac = priv->dpni_attrs.mac_filter_entries;
+	u32 options = priv->dpni_attrs.options;
+	u16 mc_token = priv->mc_token;
+	struct fsl_mc_io *mc_io = priv->mc_io;
+	int err;
+
+	/* Basic sanity checks; these probably indicate a misconfiguration */
+	if (options & DPNI_OPT_NO_MAC_FILTER && max_mac != 0)
+		netdev_info(net_dev,
+			    "mac_filter_entries=%d, DPNI_OPT_NO_MAC_FILTER option must be disabled\n",
+			    max_mac);
+
+	/* Force promiscuous if the uc or mc counts exceed our capabilities. */
+	if (uc_count > max_mac) {
+		netdev_info(net_dev,
+			    "Unicast addr count reached %d, max allowed is %d; forcing promisc\n",
+			    uc_count, max_mac);
+		goto force_promisc;
+	}
+	if (mc_count + uc_count > max_mac) {
+		netdev_info(net_dev,
+			    "Unicast + multicast addr count reached %d, max allowed is %d; forcing promisc\n",
+			    uc_count + mc_count, max_mac);
+		goto force_mc_promisc;
+	}
+
+	/* Adjust promisc settings due to flag combinations */
+	if (net_dev->flags & IFF_PROMISC)
+		goto force_promisc;
+	if (net_dev->flags & IFF_ALLMULTI) {
+		/* First, rebuild unicast filtering table. This should be done
+		 * in promisc mode, in order to avoid frame loss while we
+		 * progressively add entries to the table.
+		 * We don't know whether we were already in promisc mode, and
+		 * making an MC call to find out is expensive; so set uc promisc
+		 * nonetheless.
+		 */
+		err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
+		if (err)
+			netdev_warn(net_dev, "Can't set uc promisc\n");
+
+		/* Actual uc table reconstruction. */
+		err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 0);
+		if (err)
+			netdev_warn(net_dev, "Can't clear uc filters\n");
+		add_uc_hw_addr(net_dev, priv);
+
+		/* Finally, clear uc promisc and set mc promisc as requested. */
+		err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
+		if (err)
+			netdev_warn(net_dev, "Can't clear uc promisc\n");
+		goto force_mc_promisc;
+	}
+
+	/* Neither unicast nor multicast promisc will be on... eventually.
+	 * For now, rebuild the MAC filtering tables while forcing both on.
+	 */
+	err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
+	if (err)
+		netdev_warn(net_dev, "Can't set uc promisc (%d)\n", err);
+	err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
+	if (err)
+		netdev_warn(net_dev, "Can't set mc promisc (%d)\n", err);
+
+	/* Actual mac filtering tables reconstruction */
+	err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 1);
+	if (err)
+		netdev_warn(net_dev, "Can't clear mac filters\n");
+	add_mc_hw_addr(net_dev, priv);
+	add_uc_hw_addr(net_dev, priv);
+
+	/* Now we can clear both ucast and mcast promisc, without risking
+	 * dropping legitimate frames anymore.
+	 */
+	err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
+	if (err)
+		netdev_warn(net_dev, "Can't clear ucast promisc\n");
+	err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 0);
+	if (err)
+		netdev_warn(net_dev, "Can't clear mcast promisc\n");
+
+	return;
+
+force_promisc:
+	err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
+	if (err)
+		netdev_warn(net_dev, "Can't set ucast promisc\n");
+force_mc_promisc:
+	err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
+	if (err)
+		netdev_warn(net_dev, "Can't set mcast promisc\n");
+}
+
+static int dpaa2_eth_set_features(struct net_device *net_dev,
+				  netdev_features_t features)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	netdev_features_t changed = features ^ net_dev->features;
+	bool enable;
+	int err;
+
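+	/* changed holds only the feature bits toggled by this request */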
+	if (changed & NETIF_F_RXCSUM) {
+		enable = !!(features & NETIF_F_RXCSUM);
+		err = set_rx_csum(priv, enable);
+		if (err)
+			return err;
+	}
+
+	if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
+		enable = !!(features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
+		err = set_tx_csum(priv, enable);
+		if (err)
+			return err;
+	}
+
+	return 0;
+}
+
+static int dpaa2_eth_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(dev);
+	struct hwtstamp_config config;
+
+	if (copy_from_user(&config, rq->ifr_data, sizeof(config)))
+		return -EFAULT;
+
+	switch (config.tx_type) {
+	case HWTSTAMP_TX_OFF:
+		priv->tx_tstamp = false;
+		break;
+	case HWTSTAMP_TX_ON:
+		priv->tx_tstamp = true;
+		break;
+	default:
+		return -ERANGE;
+	}
+
+	if (config.rx_filter == HWTSTAMP_FILTER_NONE) {
+		priv->rx_tstamp = false;
+	} else {
+		priv->rx_tstamp = true;
+		/* TS is set for all frame types, not only those requested */
+		config.rx_filter = HWTSTAMP_FILTER_ALL;
+	}
+
+	return copy_to_user(rq->ifr_data, &config, sizeof(config)) ?
+			-EFAULT : 0;
+}
+
+static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
+{
+	if (cmd == SIOCSHWTSTAMP)
+		return dpaa2_eth_ts_ioctl(dev, rq, cmd);
+
+	return -EINVAL;
+}
+
+static const struct net_device_ops dpaa2_eth_ops = {
+	.ndo_open = dpaa2_eth_open,
+	.ndo_start_xmit = dpaa2_eth_tx,
+	.ndo_stop = dpaa2_eth_stop,
+	.ndo_init = dpaa2_eth_init,
+	.ndo_set_mac_address = dpaa2_eth_set_addr,
+	.ndo_get_stats64 = dpaa2_eth_get_stats,
+	.ndo_set_rx_mode = dpaa2_eth_set_rx_mode,
+	.ndo_set_features = dpaa2_eth_set_features,
+	.ndo_do_ioctl = dpaa2_eth_ioctl,
+};
+
+static void cdan_cb(struct dpaa2_io_notification_ctx *ctx)
+{
+	struct dpaa2_eth_channel *ch;
+
+	ch = container_of(ctx, struct dpaa2_eth_channel, nctx);
+
+	/* Update NAPI statistics */
+	ch->stats.cdan++;
+
+	napi_schedule_irqoff(&ch->napi);
+}
+
+/* Allocate and configure a DPCON object */
+static struct fsl_mc_device *setup_dpcon(struct dpaa2_eth_priv *priv)
+{
+	struct fsl_mc_device *dpcon;
+	struct device *dev = priv->net_dev->dev.parent;
+	struct dpcon_attr attrs;
+	int err;
+
+	err = fsl_mc_object_allocate(to_fsl_mc_device(dev),
+				     FSL_MC_POOL_DPCON, &dpcon);
+	if (err) {
+		dev_info(dev, "Not enough DPCONs, will go on as-is\n");
+		return NULL;
+	}
+
+	err = dpcon_open(priv->mc_io, 0, dpcon->obj_desc.id, &dpcon->mc_handle);
+	if (err) {
+		dev_err(dev, "dpcon_open() failed\n");
+		goto free;
+	}
+
+	err = dpcon_reset(priv->mc_io, 0, dpcon->mc_handle);
+	if (err) {
+		dev_err(dev, "dpcon_reset() failed\n");
+		goto close;
+	}
+
+	err = dpcon_get_attributes(priv->mc_io, 0, dpcon->mc_handle, &attrs);
+	if (err) {
+		dev_err(dev, "dpcon_get_attributes() failed\n");
+		goto close;
+	}
+
+	err = dpcon_enable(priv->mc_io, 0, dpcon->mc_handle);
+	if (err) {
+		dev_err(dev, "dpcon_enable() failed\n");
+		goto close;
+	}
+
+	return dpcon;
+
+close:
+	dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
+free:
+	fsl_mc_object_free(dpcon);
+
+	return NULL;
+}
+
+static void free_dpcon(struct dpaa2_eth_priv *priv,
+		       struct fsl_mc_device *dpcon)
+{
+	dpcon_disable(priv->mc_io, 0, dpcon->mc_handle);
+	dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
+	fsl_mc_object_free(dpcon);
+}
+
+static struct dpaa2_eth_channel *
+alloc_channel(struct dpaa2_eth_priv *priv)
+{
+	struct dpaa2_eth_channel *channel;
+	struct dpcon_attr attr;
+	struct device *dev = priv->net_dev->dev.parent;
+	int err;
+
+	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
+	if (!channel)
+		return NULL;
+
+	channel->dpcon = setup_dpcon(priv);
+	if (!channel->dpcon)
+		goto err_setup;
+
+	err = dpcon_get_attributes(priv->mc_io, 0, channel->dpcon->mc_handle,
+				   &attr);
+	if (err) {
+		dev_err(dev, "dpcon_get_attributes() failed\n");
+		goto err_get_attr;
+	}
+
+	channel->dpcon_id = attr.id;
+	channel->ch_id = attr.qbman_ch_id;
+	channel->priv = priv;
+
+	return channel;
+
+err_get_attr:
+	free_dpcon(priv, channel->dpcon);
+err_setup:
+	kfree(channel);
+	return NULL;
+}
+
+static void free_channel(struct dpaa2_eth_priv *priv,
+			 struct dpaa2_eth_channel *channel)
+{
+	free_dpcon(priv, channel->dpcon);
+	kfree(channel);
+}
+
+/* DPIO setup: allocate and configure QBMan channels, setup core affinity
+ * and register data availability notifications
+ */
+static int setup_dpio(struct dpaa2_eth_priv *priv)
+{
+	struct dpaa2_io_notification_ctx *nctx;
+	struct dpaa2_eth_channel *channel;
+	struct dpcon_notification_cfg dpcon_notif_cfg;
+	struct device *dev = priv->net_dev->dev.parent;
+	int i, err;
+
+	/* We want the ability to spread ingress traffic (RX, TX conf) to as
+	 * many cores as possible, so we need one channel for each core
+	 * (unless there are fewer queues than cores, in which case the extra
+	 * channels would be wasted).
+	 * Allocate one channel per core and register it to the core's
+	 * affine DPIO. If not enough channels are available for all cores
+	 * or if some cores don't have an affine DPIO, there will be no
+	 * ingress frame processing on those cores.
+	 */
+	cpumask_clear(&priv->dpio_cpumask);
+	for_each_online_cpu(i) {
+		/* Try to allocate a channel */
+		channel = alloc_channel(priv);
+		if (!channel) {
+			dev_info(dev,
+				 "No affine channel for cpu %d and above\n", i);
+			err = -ENODEV;
+			goto err_alloc_ch;
+		}
+
+		priv->channel[priv->num_channels] = channel;
+
+		nctx = &channel->nctx;
+		nctx->is_cdan = 1;
+		nctx->cb = cdan_cb;
+		nctx->id = channel->ch_id;
+		nctx->desired_cpu = i;
+
+		/* Register the new context */
+		channel->dpio = dpaa2_io_service_select(i);
+		err = dpaa2_io_service_register(channel->dpio, nctx);
+		if (err) {
+			dev_dbg(dev, "No affine DPIO for cpu %d\n", i);
+			/* If no affine DPIO for this core, there's probably
+			 * none available for the next cores either. Signal we want
+			 * to retry later, in case the DPIO devices weren't
+			 * probed yet.
+			 */
+			err = -EPROBE_DEFER;
+			goto err_service_reg;
+		}
+
+		/* Register DPCON notification with MC */
+		dpcon_notif_cfg.dpio_id = nctx->dpio_id;
+		dpcon_notif_cfg.priority = 0;
+		dpcon_notif_cfg.user_ctx = nctx->qman64;
+		err = dpcon_set_notification(priv->mc_io, 0,
+					     channel->dpcon->mc_handle,
+					     &dpcon_notif_cfg);
+		if (err) {
+			dev_err(dev, "dpcon_set_notification() failed\n");
+			goto err_set_cdan;
+		}
+
+		/* If we managed to allocate a channel and also found an affine
+		 * DPIO for this core, add it to the final mask
+		 */
+		cpumask_set_cpu(i, &priv->dpio_cpumask);
+		priv->num_channels++;
+
+		/* Stop if we already have enough channels to accommodate all
+		 * RX and TX conf queues
+		 */
+		if (priv->num_channels == dpaa2_eth_queue_count(priv))
+			break;
+	}
+
+	return 0;
+
+err_set_cdan:
+	dpaa2_io_service_deregister(channel->dpio, nctx);
+err_service_reg:
+	free_channel(priv, channel);
+err_alloc_ch:
+	if (cpumask_empty(&priv->dpio_cpumask)) {
+		dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
+		return err;
+	}
+
+	dev_info(dev, "Cores %*pbl available for processing ingress traffic\n",
+		 cpumask_pr_args(&priv->dpio_cpumask));
+
+	return 0;
+}
+
+static void free_dpio(struct dpaa2_eth_priv *priv)
+{
+	int i;
+	struct dpaa2_eth_channel *ch;
+
+	/* deregister CDAN notifications and free channels */
+	for (i = 0; i < priv->num_channels; i++) {
+		ch = priv->channel[i];
+		dpaa2_io_service_deregister(ch->dpio, &ch->nctx);
+		free_channel(priv, ch);
+	}
+}
+
+static struct dpaa2_eth_channel *get_affine_channel(struct dpaa2_eth_priv *priv,
+						    int cpu)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	int i;
+
+	for (i = 0; i < priv->num_channels; i++)
+		if (priv->channel[i]->nctx.desired_cpu == cpu)
+			return priv->channel[i];
+
+	/* We should never get here. Issue a warning and return
+	 * the first channel, because it's still better than nothing
+	 */
+	dev_warn(dev, "No affine channel found for cpu %d\n", cpu);
+
+	return priv->channel[0];
+}
+
+static void set_fq_affinity(struct dpaa2_eth_priv *priv)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	struct cpumask xps_mask;
+	struct dpaa2_eth_fq *fq;
+	int rx_cpu, txc_cpu;
+	int i, err;
+
+	/* For each FQ, pick one channel/CPU to deliver frames to.
+	 * This may well change at runtime, either through irqbalance or
+	 * through direct user intervention.
+	 */
+	rx_cpu = txc_cpu = cpumask_first(&priv->dpio_cpumask);
+
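+	/* Rx and Tx conf queue counts are equal and both walk the same
+	 * cpumask, so flow i's Rx and Tx conf frames end up being processed
+	 * on the same core.
+	 */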
+	for (i = 0; i < priv->num_fqs; i++) {
+		fq = &priv->fq[i];
+		switch (fq->type) {
+		case DPAA2_RX_FQ:
+			fq->target_cpu = rx_cpu;
+			rx_cpu = cpumask_next(rx_cpu, &priv->dpio_cpumask);
+			if (rx_cpu >= nr_cpu_ids)
+				rx_cpu = cpumask_first(&priv->dpio_cpumask);
+			break;
+		case DPAA2_TX_CONF_FQ:
+			fq->target_cpu = txc_cpu;
+
+			/* Tell the stack to affine the Tx queue associated
+			 * with this confirmation queue to txc_cpu
+			 */
+			cpumask_clear(&xps_mask);
+			cpumask_set_cpu(txc_cpu, &xps_mask);
+			err = netif_set_xps_queue(priv->net_dev, &xps_mask,
+						  fq->flowid);
+			if (err)
+				dev_err(dev, "Error setting XPS queue\n");
+
+			txc_cpu = cpumask_next(txc_cpu, &priv->dpio_cpumask);
+			if (txc_cpu >= nr_cpu_ids)
+				txc_cpu = cpumask_first(&priv->dpio_cpumask);
+			break;
+		default:
+			dev_err(dev, "Unknown FQ type: %d\n", fq->type);
+		}
+		fq->channel = get_affine_channel(priv, fq->target_cpu);
+	}
+}
+
+static void setup_fqs(struct dpaa2_eth_priv *priv)
+{
+	int i;
+
+	/* We have one TxConf FQ per Tx flow.
+	 * The number of Tx and Rx queues is the same.
+	 * Tx queues come first in the fq array.
+	 */
+	for (i = 0; i < dpaa2_eth_queue_count(priv); i++) {
+		priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
+		priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
+		priv->fq[priv->num_fqs++].flowid = (u16)i;
+	}
+
+	for (i = 0; i < dpaa2_eth_queue_count(priv); i++) {
+		priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
+		priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
+		priv->fq[priv->num_fqs++].flowid = (u16)i;
+	}
+
+	/* For each FQ, decide on which core to process incoming frames */
+	set_fq_affinity(priv);
+}
+
+/* Allocate and configure one buffer pool for each interface */
+static int setup_dpbp(struct dpaa2_eth_priv *priv)
+{
+	int err;
+	struct fsl_mc_device *dpbp_dev;
+	struct device *dev = priv->net_dev->dev.parent;
+	struct dpbp_attr dpbp_attrs;
+
+	err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
+				     &dpbp_dev);
+	if (err) {
+		dev_err(dev, "DPBP device allocation failed\n");
+		return err;
+	}
+
+	priv->dpbp_dev = dpbp_dev;
+
+	err = dpbp_open(priv->mc_io, 0, priv->dpbp_dev->obj_desc.id,
+			&dpbp_dev->mc_handle);
+	if (err) {
+		dev_err(dev, "dpbp_open() failed\n");
+		goto err_open;
+	}
+
+	err = dpbp_reset(priv->mc_io, 0, dpbp_dev->mc_handle);
+	if (err) {
+		dev_err(dev, "dpbp_reset() failed\n");
+		goto err_reset;
+	}
+
+	err = dpbp_enable(priv->mc_io, 0, dpbp_dev->mc_handle);
+	if (err) {
+		dev_err(dev, "dpbp_enable() failed\n");
+		goto err_enable;
+	}
+
+	err = dpbp_get_attributes(priv->mc_io, 0, dpbp_dev->mc_handle,
+				  &dpbp_attrs);
+	if (err) {
+		dev_err(dev, "dpbp_get_attributes() failed\n");
+		goto err_get_attr;
+	}
+	priv->bpid = dpbp_attrs.bpid;
+
+	return 0;
+
+err_get_attr:
+	dpbp_disable(priv->mc_io, 0, dpbp_dev->mc_handle);
+err_enable:
+err_reset:
+	dpbp_close(priv->mc_io, 0, dpbp_dev->mc_handle);
+err_open:
+	fsl_mc_object_free(dpbp_dev);
+
+	return err;
+}
+
+static void free_dpbp(struct dpaa2_eth_priv *priv)
+{
+	drain_pool(priv);
+	dpbp_disable(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
+	dpbp_close(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
+	fsl_mc_object_free(priv->dpbp_dev);
+}
+
+static int set_buffer_layout(struct dpaa2_eth_priv *priv)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	struct dpni_buffer_layout buf_layout = {0};
+	int err;
+
+	/* We need to check for WRIOP version 1.0.0, but depending on the MC
+	 * version, rev1 hardware may report it as either 0.0.0 or 1.0.0, so
+	 * check for both alternatives.
+	 */
+	if (priv->dpni_attrs.wriop_version == DPAA2_WRIOP_VERSION(0, 0, 0) ||
+	    priv->dpni_attrs.wriop_version == DPAA2_WRIOP_VERSION(1, 0, 0))
+		priv->rx_buf_align = DPAA2_ETH_RX_BUF_ALIGN_REV1;
+	else
+		priv->rx_buf_align = DPAA2_ETH_RX_BUF_ALIGN;
+
+	/* tx buffer */
+	buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
+	buf_layout.pass_timestamp = true;
+	buf_layout.options = DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE |
+			     DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
+	err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
+				     DPNI_QUEUE_TX, &buf_layout);
+	if (err) {
+		dev_err(dev, "dpni_set_buffer_layout(TX) failed\n");
+		return err;
+	}
+
+	/* tx-confirm buffer */
+	buf_layout.options = DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
+	err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
+				     DPNI_QUEUE_TX_CONFIRM, &buf_layout);
+	if (err) {
+		dev_err(dev, "dpni_set_buffer_layout(TX_CONF) failed\n");
+		return err;
+	}
+
+	/* Now that we've set our tx buffer layout, retrieve the minimum
+	 * required tx data offset.
+	 */
+	err = dpni_get_tx_data_offset(priv->mc_io, 0, priv->mc_token,
+				      &priv->tx_data_offset);
+	if (err) {
+		dev_err(dev, "dpni_get_tx_data_offset() failed\n");
+		return err;
+	}
+
+	if ((priv->tx_data_offset % 64) != 0)
+		dev_warn(dev, "Tx data offset (%d) not a multiple of 64B\n",
+			 priv->tx_data_offset);
+
+	/* rx buffer */
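+	/* buf_layout is reused from the Tx setup above, so pass_timestamp
+	 * remains set for the Rx queues as well
+	 */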
+	buf_layout.pass_frame_status = true;
+	buf_layout.pass_parser_result = true;
+	buf_layout.data_align = priv->rx_buf_align;
+	buf_layout.data_head_room = dpaa2_eth_rx_head_room(priv);
+	buf_layout.private_data_size = 0;
+	buf_layout.options = DPNI_BUF_LAYOUT_OPT_PARSER_RESULT |
+			     DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
+			     DPNI_BUF_LAYOUT_OPT_DATA_ALIGN |
+			     DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM |
+			     DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
+	err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
+				     DPNI_QUEUE_RX, &buf_layout);
+	if (err) {
+		dev_err(dev, "dpni_set_buffer_layout(RX) failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
+/* Configure the DPNI object this interface is associated with */
+static int setup_dpni(struct fsl_mc_device *ls_dev)
+{
+	struct device *dev = &ls_dev->dev;
+	struct dpaa2_eth_priv *priv;
+	struct net_device *net_dev;
+	int err;
+
+	net_dev = dev_get_drvdata(dev);
+	priv = netdev_priv(net_dev);
+
+	/* get a handle for the DPNI object */
+	err = dpni_open(priv->mc_io, 0, ls_dev->obj_desc.id, &priv->mc_token);
+	if (err) {
+		dev_err(dev, "dpni_open() failed\n");
+		return err;
+	}
+
+	/* Check if we can work with this DPNI object */
+	err = dpni_get_api_version(priv->mc_io, 0, &priv->dpni_ver_major,
+				   &priv->dpni_ver_minor);
+	if (err) {
+		dev_err(dev, "dpni_get_api_version() failed\n");
+		goto close;
+	}
+	if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_VER_MAJOR, DPNI_VER_MINOR) < 0) {
+		dev_err(dev, "DPNI version %u.%u not supported, need >= %u.%u\n",
+			priv->dpni_ver_major, priv->dpni_ver_minor,
+			DPNI_VER_MAJOR, DPNI_VER_MINOR);
+		err = -ENOTSUPP;
+		goto close;
+	}
+
+	ls_dev->mc_io = priv->mc_io;
+	ls_dev->mc_handle = priv->mc_token;
+
+	err = dpni_reset(priv->mc_io, 0, priv->mc_token);
+	if (err) {
+		dev_err(dev, "dpni_reset() failed\n");
+		goto close;
+	}
+
+	err = dpni_get_attributes(priv->mc_io, 0, priv->mc_token,
+				  &priv->dpni_attrs);
+	if (err) {
+		dev_err(dev, "dpni_get_attributes() failed (err=%d)\n", err);
+		goto close;
+	}
+
+	err = set_buffer_layout(priv);
+	if (err)
+		goto close;
+
+	return 0;
+
+close:
+	dpni_close(priv->mc_io, 0, priv->mc_token);
+
+	return err;
+}
+
+static void free_dpni(struct dpaa2_eth_priv *priv)
+{
+	int err;
+
+	err = dpni_reset(priv->mc_io, 0, priv->mc_token);
+	if (err)
+		netdev_warn(priv->net_dev, "dpni_reset() failed (err %d)\n",
+			    err);
+
+	dpni_close(priv->mc_io, 0, priv->mc_token);
+}
+
+static int setup_rx_flow(struct dpaa2_eth_priv *priv,
+			 struct dpaa2_eth_fq *fq)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	struct dpni_queue queue;
+	struct dpni_queue_id qid;
+	struct dpni_taildrop td;
+	int err;
+
+	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
+			     DPNI_QUEUE_RX, 0, fq->flowid, &queue, &qid);
+	if (err) {
+		dev_err(dev, "dpni_get_queue(RX) failed\n");
+		return err;
+	}
+
+	fq->fqid = qid.fqid;
+
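+	/* Deliver frames to the channel (DPCON) affine to this queue's
+	 * target cpu; Rx uses intra-channel priority 1, while Tx
+	 * confirmations use priority 0 (see setup_tx_flow())
+	 */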
+	queue.destination.id = fq->channel->dpcon_id;
+	queue.destination.type = DPNI_DEST_DPCON;
+	queue.destination.priority = 1;
+	queue.user_context = (u64)(uintptr_t)fq;
+	err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
+			     DPNI_QUEUE_RX, 0, fq->flowid,
+			     DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
+			     &queue);
+	if (err) {
+		dev_err(dev, "dpni_set_queue(RX) failed\n");
+		return err;
+	}
+
+	td.enable = 1;
+	td.threshold = DPAA2_ETH_TAILDROP_THRESH;
+	err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token, DPNI_CP_QUEUE,
+				DPNI_QUEUE_RX, 0, fq->flowid, &td);
+	if (err) {
+		dev_err(dev, "dpni_set_taildrop() failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
+static int setup_tx_flow(struct dpaa2_eth_priv *priv,
+			 struct dpaa2_eth_fq *fq)
+{
+	struct device *dev = priv->net_dev->dev.parent;
+	struct dpni_queue queue;
+	struct dpni_queue_id qid;
+	int err;
+
+	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
+			     DPNI_QUEUE_TX, 0, fq->flowid, &queue, &qid);
+	if (err) {
+		dev_err(dev, "dpni_get_queue(TX) failed\n");
+		return err;
+	}
+
+	fq->tx_qdbin = qid.qdbin;
+
+	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
+			     DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
+			     &queue, &qid);
+	if (err) {
+		dev_err(dev, "dpni_get_queue(TX_CONF) failed\n");
+		return err;
+	}
+
+	fq->fqid = qid.fqid;
+
+	queue.destination.id = fq->channel->dpcon_id;
+	queue.destination.type = DPNI_DEST_DPCON;
+	queue.destination.priority = 0;
+	queue.user_context = (u64)(uintptr_t)fq;
+	err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
+			     DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
+			     DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
+			     &queue);
+	if (err) {
+		dev_err(dev, "dpni_set_queue(TX_CONF) failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
+/* Hash key is a 5-tuple: IPsrc, IPdst, IPnextproto, L4src, L4dst */
+static const struct dpaa2_eth_hash_fields hash_fields[] = {
+	{
+		/* IP header */
+		.rxnfc_field = RXH_IP_SRC,
+		.cls_prot = NET_PROT_IP,
+		.cls_field = NH_FLD_IP_SRC,
+		.size = 4,
+	}, {
+		.rxnfc_field = RXH_IP_DST,
+		.cls_prot = NET_PROT_IP,
+		.cls_field = NH_FLD_IP_DST,
+		.size = 4,
+	}, {
+		.rxnfc_field = RXH_L3_PROTO,
+		.cls_prot = NET_PROT_IP,
+		.cls_field = NH_FLD_IP_PROTO,
+		.size = 1,
+	}, {
+		/* Using UDP ports, this is functionally equivalent to raw
+		 * byte pairs from L4 header.
+		 */
+		.rxnfc_field = RXH_L4_B_0_1,
+		.cls_prot = NET_PROT_UDP,
+		.cls_field = NH_FLD_UDP_PORT_SRC,
+		.size = 2,
+	}, {
+		.rxnfc_field = RXH_L4_B_2_3,
+		.cls_prot = NET_PROT_UDP,
+		.cls_field = NH_FLD_UDP_PORT_DST,
+		.size = 2,
+	},
+};
+
+/* Set RX hash options
+ * flags is a combination of RXH_ bits
+ */
+static int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags)
+{
+	struct device *dev = net_dev->dev.parent;
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	struct dpkg_profile_cfg cls_cfg;
+	struct dpni_rx_tc_dist_cfg dist_cfg;
+	u8 *dma_mem;
+	int i;
+	int err = 0;
+
+	if (!dpaa2_eth_hash_enabled(priv)) {
+		dev_dbg(dev, "Hashing support is not enabled\n");
+		return 0;
+	}
+
+	memset(&cls_cfg, 0, sizeof(cls_cfg));
+
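+	/* Build one key extraction rule for each hash field requested
+	 * through flags
+	 */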
+	for (i = 0; i < ARRAY_SIZE(hash_fields); i++) {
+		struct dpkg_extract *key =
+			&cls_cfg.extracts[cls_cfg.num_extracts];
+
+		if (!(flags & hash_fields[i].rxnfc_field))
+			continue;
+
+		if (cls_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
+			dev_err(dev, "error adding key extraction rule, too many rules?\n");
+			return -E2BIG;
+		}
+
+		key->type = DPKG_EXTRACT_FROM_HDR;
+		key->extract.from_hdr.prot = hash_fields[i].cls_prot;
+		key->extract.from_hdr.type = DPKG_FULL_FIELD;
+		key->extract.from_hdr.field = hash_fields[i].cls_field;
+		cls_cfg.num_extracts++;
+
+		priv->rx_hash_fields |= hash_fields[i].rxnfc_field;
+	}
+
+	dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_KERNEL);
+	if (!dma_mem)
+		return -ENOMEM;
+
+	err = dpni_prepare_key_cfg(&cls_cfg, dma_mem);
+	if (err) {
+		dev_err(dev, "dpni_prepare_key_cfg error %d\n", err);
+		goto err_prep_key;
+	}
+
+	memset(&dist_cfg, 0, sizeof(dist_cfg));
+
+	/* Prepare for setting the rx dist */
+	dist_cfg.key_cfg_iova = dma_map_single(dev, dma_mem,
+					       DPAA2_CLASSIFIER_DMA_SIZE,
+					       DMA_TO_DEVICE);
+	if (dma_mapping_error(dev, dist_cfg.key_cfg_iova)) {
+		dev_err(dev, "DMA mapping failed\n");
+		err = -ENOMEM;
+		goto err_dma_map;
+	}
+
+	dist_cfg.dist_size = dpaa2_eth_queue_count(priv);
+	dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
+
+	err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, 0, &dist_cfg);
+	dma_unmap_single(dev, dist_cfg.key_cfg_iova,
+			 DPAA2_CLASSIFIER_DMA_SIZE, DMA_TO_DEVICE);
+	if (err)
+		dev_err(dev, "dpni_set_rx_tc_dist() error %d\n", err);
+
+err_dma_map:
+err_prep_key:
+	kfree(dma_mem);
+	return err;
+}
+
+/* Bind the DPNI to its needed objects and resources: buffer pool, DPIOs,
+ * frame queues and channels
+ */
+static int bind_dpni(struct dpaa2_eth_priv *priv)
+{
+	struct net_device *net_dev = priv->net_dev;
+	struct device *dev = net_dev->dev.parent;
+	struct dpni_pools_cfg pools_params;
+	struct dpni_error_cfg err_cfg;
+	int err = 0;
+	int i;
+
+	pools_params.num_dpbp = 1;
+	pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
+	pools_params.pools[0].backup_pool = 0;
+	pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUF_SIZE;
+	err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
+	if (err) {
+		dev_err(dev, "dpni_set_pools() failed\n");
+		return err;
+	}
+
+	/* Have the interface implicitly distribute traffic based on
+	 * the default hash key
+	 */
+	err = dpaa2_eth_set_hash(net_dev, DPAA2_RXH_DEFAULT);
+	if (err)
+		dev_err(dev, "Failed to configure hashing\n");
+
+	/* Configure handling of error frames */
+	err_cfg.errors = DPAA2_FAS_RX_ERR_MASK;
+	err_cfg.set_frame_annotation = 1;
+	err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
+	err = dpni_set_errors_behavior(priv->mc_io, 0, priv->mc_token,
+				       &err_cfg);
+	if (err) {
+		dev_err(dev, "dpni_set_errors_behavior failed\n");
+		return err;
+	}
+
+	/* Configure Rx and Tx conf queues to generate CDANs */
+	for (i = 0; i < priv->num_fqs; i++) {
+		switch (priv->fq[i].type) {
+		case DPAA2_RX_FQ:
+			err = setup_rx_flow(priv, &priv->fq[i]);
+			break;
+		case DPAA2_TX_CONF_FQ:
+			err = setup_tx_flow(priv, &priv->fq[i]);
+			break;
+		default:
+			dev_err(dev, "Invalid FQ type %d\n", priv->fq[i].type);
+			return -EINVAL;
+		}
+		if (err)
+			return err;
+	}
+
+	err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token,
+			    DPNI_QUEUE_TX, &priv->tx_qdid);
+	if (err) {
+		dev_err(dev, "dpni_get_qdid() failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
+/* Allocate rings for storing incoming frame descriptors */
+static int alloc_rings(struct dpaa2_eth_priv *priv)
+{
+	struct net_device *net_dev = priv->net_dev;
+	struct device *dev = net_dev->dev.parent;
+	int i;
+
+	for (i = 0; i < priv->num_channels; i++) {
+		priv->channel[i]->store =
+			dpaa2_io_store_create(DPAA2_ETH_STORE_SIZE, dev);
+		if (!priv->channel[i]->store) {
+			netdev_err(net_dev, "dpaa2_io_store_create() failed\n");
+			goto err_ring;
+		}
+	}
+
+	return 0;
+
+err_ring:
+	for (i = 0; i < priv->num_channels; i++) {
+		if (!priv->channel[i]->store)
+			break;
+		dpaa2_io_store_destroy(priv->channel[i]->store);
+	}
+
+	return -ENOMEM;
+}
+
+static void free_rings(struct dpaa2_eth_priv *priv)
+{
+	int i;
+
+	for (i = 0; i < priv->num_channels; i++)
+		dpaa2_io_store_destroy(priv->channel[i]->store);
+}
+
+static int set_mac_addr(struct dpaa2_eth_priv *priv)
+{
+	struct net_device *net_dev = priv->net_dev;
+	struct device *dev = net_dev->dev.parent;
+	u8 mac_addr[ETH_ALEN], dpni_mac_addr[ETH_ALEN];
+	int err;
+
+	/* Get firmware address, if any */
+	err = dpni_get_port_mac_addr(priv->mc_io, 0, priv->mc_token, mac_addr);
+	if (err) {
+		dev_err(dev, "dpni_get_port_mac_addr() failed\n");
+		return err;
+	}
+
+	/* Get the DPNI's primary MAC address, if any */
+	err = dpni_get_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
+					dpni_mac_addr);
+	if (err) {
+		dev_err(dev, "dpni_get_primary_mac_addr() failed\n");
+		return err;
+	}
+
+	/* First check if firmware has any address configured by bootloader */
+	if (!is_zero_ether_addr(mac_addr)) {
+		/* If the DPMAC addr != DPNI addr, update it */
+		if (!ether_addr_equal(mac_addr, dpni_mac_addr)) {
+			err = dpni_set_primary_mac_addr(priv->mc_io, 0,
+							priv->mc_token,
+							mac_addr);
+			if (err) {
+				dev_err(dev, "dpni_set_primary_mac_addr() failed\n");
+				return err;
+			}
+		}
+		memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
+	} else if (is_zero_ether_addr(dpni_mac_addr)) {
+		/* No MAC address configured, fill in net_dev->dev_addr
+		 * with a random one
+		 */
+		eth_hw_addr_random(net_dev);
+		dev_dbg_once(dev, "device(s) have all-zero hwaddr, replaced with random\n");
+
+		err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
+						net_dev->dev_addr);
+		if (err) {
+			dev_err(dev, "dpni_set_primary_mac_addr() failed\n");
+			return err;
+		}
+
+		/* Override NET_ADDR_RANDOM set by eth_hw_addr_random(); for
+		 * all practical purposes, this will be our "permanent" MAC
+		 * address, at least until the next reboot. This also allows
+		 * register_netdevice() to properly fill in net_dev->perm_addr.
+		 */
+		net_dev->addr_assign_type = NET_ADDR_PERM;
+	} else {
+		/* NET_ADDR_PERM is default, all we have to do is
+		 * fill in the device addr.
+		 */
+		memcpy(net_dev->dev_addr, dpni_mac_addr, net_dev->addr_len);
+	}
+
+	return 0;
+}
+
+static int netdev_init(struct net_device *net_dev)
+{
+	struct device *dev = net_dev->dev.parent;
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	u8 bcast_addr[ETH_ALEN];
+	u8 num_queues;
+	int err;
+
+	net_dev->netdev_ops = &dpaa2_eth_ops;
+
+	err = set_mac_addr(priv);
+	if (err)
+		return err;
+
+	/* Explicitly add the broadcast address to the MAC filtering table */
+	eth_broadcast_addr(bcast_addr);
+	err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token, bcast_addr);
+	if (err) {
+		dev_err(dev, "dpni_add_mac_addr() failed\n");
+		return err;
+	}
+
+	/* Set MTU upper limit; lower limit is 68B (default value) */
+	net_dev->max_mtu = DPAA2_ETH_MAX_MTU;
+	err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token,
+					DPAA2_ETH_MFL);
+	if (err) {
+		dev_err(dev, "dpni_set_max_frame_length() failed\n");
+		return err;
+	}
+
+	/* Set actual number of queues in the net device */
+	num_queues = dpaa2_eth_queue_count(priv);
+	err = netif_set_real_num_tx_queues(net_dev, num_queues);
+	if (err) {
+		dev_err(dev, "netif_set_real_num_tx_queues() failed\n");
+		return err;
+	}
+	err = netif_set_real_num_rx_queues(net_dev, num_queues);
+	if (err) {
+		dev_err(dev, "netif_set_real_num_rx_queues() failed\n");
+		return err;
+	}
+
+	/* Our .ndo_init will be called as part of register_netdev() */
+	err = register_netdev(net_dev);
+	if (err < 0) {
+		dev_err(dev, "register_netdev() failed\n");
+		return err;
+	}
+
+	return 0;
+}
+
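+/* Fallback link state polling thread, used when MC interrupts could not be
+ * set up at probe time
+ */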
+static int poll_link_state(void *arg)
+{
+	struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)arg;
+	int err;
+
+	while (!kthread_should_stop()) {
+		err = link_state_update(priv);
+		if (unlikely(err))
+			return err;
+
+		msleep(DPAA2_ETH_LINK_STATE_REFRESH);
+	}
+
+	return 0;
+}
+
+static irqreturn_t dpni_irq0_handler_thread(int irq_num, void *arg)
+{
+	u32 status = ~0;
+	struct device *dev = (struct device *)arg;
+	struct fsl_mc_device *dpni_dev = to_fsl_mc_device(dev);
+	struct net_device *net_dev = dev_get_drvdata(dev);
+	int err;
+
+	err = dpni_get_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
+				  DPNI_IRQ_INDEX, &status);
+	if (unlikely(err)) {
+		netdev_err(net_dev, "Can't get irq status (err %d)\n", err);
+		return IRQ_HANDLED;
+	}
+
+	if (status & DPNI_IRQ_EVENT_LINK_CHANGED)
+		link_state_update(netdev_priv(net_dev));
+
+	return IRQ_HANDLED;
+}
+
+static int setup_irqs(struct fsl_mc_device *ls_dev)
+{
+	int err = 0;
+	struct fsl_mc_device_irq *irq;
+
+	err = fsl_mc_allocate_irqs(ls_dev);
+	if (err) {
+		dev_err(&ls_dev->dev, "MC irqs allocation failed\n");
+		return err;
+	}
+
+	irq = ls_dev->irqs[0];
+	err = devm_request_threaded_irq(&ls_dev->dev, irq->msi_desc->irq,
+					NULL, dpni_irq0_handler_thread,
+					IRQF_NO_SUSPEND | IRQF_ONESHOT,
+					dev_name(&ls_dev->dev), &ls_dev->dev);
+	if (err < 0) {
+		dev_err(&ls_dev->dev, "devm_request_threaded_irq(): %d\n", err);
+		goto free_mc_irq;
+	}
+
+	err = dpni_set_irq_mask(ls_dev->mc_io, 0, ls_dev->mc_handle,
+				DPNI_IRQ_INDEX, DPNI_IRQ_EVENT_LINK_CHANGED);
+	if (err < 0) {
+		dev_err(&ls_dev->dev, "dpni_set_irq_mask(): %d\n", err);
+		goto free_irq;
+	}
+
+	err = dpni_set_irq_enable(ls_dev->mc_io, 0, ls_dev->mc_handle,
+				  DPNI_IRQ_INDEX, 1);
+	if (err < 0) {
+		dev_err(&ls_dev->dev, "dpni_set_irq_enable(): %d\n", err);
+		goto free_irq;
+	}
+
+	return 0;
+
+free_irq:
+	devm_free_irq(&ls_dev->dev, irq->msi_desc->irq, &ls_dev->dev);
+free_mc_irq:
+	fsl_mc_free_irqs(ls_dev);
+
+	return err;
+}
+
+static void add_ch_napi(struct dpaa2_eth_priv *priv)
+{
+	int i;
+	struct dpaa2_eth_channel *ch;
+
+	for (i = 0; i < priv->num_channels; i++) {
+		ch = priv->channel[i];
+		/* NAPI weight *MUST* be a multiple of DPAA2_ETH_STORE_SIZE */
+		netif_napi_add(priv->net_dev, &ch->napi, dpaa2_eth_poll,
+			       NAPI_POLL_WEIGHT);
+	}
+}
+
+static void del_ch_napi(struct dpaa2_eth_priv *priv)
+{
+	int i;
+	struct dpaa2_eth_channel *ch;
+
+	for (i = 0; i < priv->num_channels; i++) {
+		ch = priv->channel[i];
+		netif_napi_del(&ch->napi);
+	}
+}
+
+static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
+{
+	struct device *dev;
+	struct net_device *net_dev = NULL;
+	struct dpaa2_eth_priv *priv = NULL;
+	int err = 0;
+
+	dev = &dpni_dev->dev;
+
+	/* Net device */
+	net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_TX_QUEUES);
+	if (!net_dev) {
+		dev_err(dev, "alloc_etherdev_mq() failed\n");
+		return -ENOMEM;
+	}
+
+	SET_NETDEV_DEV(net_dev, dev);
+	dev_set_drvdata(dev, net_dev);
+
+	priv = netdev_priv(net_dev);
+	priv->net_dev = net_dev;
+
+	priv->iommu_domain = iommu_get_domain_for_dev(dev);
+
+	/* Obtain a MC portal */
+	err = fsl_mc_portal_allocate(dpni_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
+				     &priv->mc_io);
+	if (err) {
+		if (err == -ENXIO)
+			err = -EPROBE_DEFER;
+		else
+			dev_err(dev, "MC portal allocation failed\n");
+		goto err_portal_alloc;
+	}
+
+	/* MC objects initialization and configuration */
+	err = setup_dpni(dpni_dev);
+	if (err)
+		goto err_dpni_setup;
+
+	err = setup_dpio(priv);
+	if (err)
+		goto err_dpio_setup;
+
+	setup_fqs(priv);
+
+	err = setup_dpbp(priv);
+	if (err)
+		goto err_dpbp_setup;
+
+	err = bind_dpni(priv);
+	if (err)
+		goto err_bind;
+
+	/* Add a NAPI context for each channel */
+	add_ch_napi(priv);
+
+	/* Percpu statistics */
+	priv->percpu_stats = alloc_percpu(*priv->percpu_stats);
+	if (!priv->percpu_stats) {
+		dev_err(dev, "alloc_percpu(percpu_stats) failed\n");
+		err = -ENOMEM;
+		goto err_alloc_percpu_stats;
+	}
+	priv->percpu_extras = alloc_percpu(*priv->percpu_extras);
+	if (!priv->percpu_extras) {
+		dev_err(dev, "alloc_percpu(percpu_extras) failed\n");
+		err = -ENOMEM;
+		goto err_alloc_percpu_extras;
+	}
+
+	err = netdev_init(net_dev);
+	if (err)
+		goto err_netdev_init;
+
+	/* Configure checksum offload based on current interface flags */
+	err = set_rx_csum(priv, !!(net_dev->features & NETIF_F_RXCSUM));
+	if (err)
+		goto err_csum;
+
+	err = set_tx_csum(priv, !!(net_dev->features &
+				   (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
+	if (err)
+		goto err_csum;
+
+	err = alloc_rings(priv);
+	if (err)
+		goto err_alloc_rings;
+
+	net_dev->ethtool_ops = &dpaa2_ethtool_ops;
+
+	err = setup_irqs(dpni_dev);
+	if (err) {
+		netdev_warn(net_dev, "Failed to set link interrupt, falling back to polling\n");
+		priv->poll_thread = kthread_run(poll_link_state, priv,
+						"%s_poll_link", net_dev->name);
+		if (IS_ERR(priv->poll_thread)) {
+			netdev_err(net_dev, "Error starting polling thread\n");
+			goto err_poll_thread;
+		}
+		priv->do_link_poll = true;
+	}
+
+	dev_info(dev, "Probed interface %s\n", net_dev->name);
+	return 0;
+
+err_poll_thread:
+	free_rings(priv);
+err_alloc_rings:
+err_csum:
+	unregister_netdev(net_dev);
+err_netdev_init:
+	free_percpu(priv->percpu_extras);
+err_alloc_percpu_extras:
+	free_percpu(priv->percpu_stats);
+err_alloc_percpu_stats:
+	del_ch_napi(priv);
+err_bind:
+	free_dpbp(priv);
+err_dpbp_setup:
+	free_dpio(priv);
+err_dpio_setup:
+	free_dpni(priv);
+err_dpni_setup:
+	fsl_mc_portal_free(priv->mc_io);
+err_portal_alloc:
+	dev_set_drvdata(dev, NULL);
+	free_netdev(net_dev);
+
+	return err;
+}
+
+static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
+{
+	struct device *dev;
+	struct net_device *net_dev;
+	struct dpaa2_eth_priv *priv;
+
+	dev = &ls_dev->dev;
+	net_dev = dev_get_drvdata(dev);
+	priv = netdev_priv(net_dev);
+
+	unregister_netdev(net_dev);
+
+	if (priv->do_link_poll)
+		kthread_stop(priv->poll_thread);
+	else
+		fsl_mc_free_irqs(ls_dev);
+
+	free_rings(priv);
+	free_percpu(priv->percpu_stats);
+	free_percpu(priv->percpu_extras);
+
+	del_ch_napi(priv);
+	free_dpbp(priv);
+	free_dpio(priv);
+	free_dpni(priv);
+
+	fsl_mc_portal_free(priv->mc_io);
+
+	free_netdev(net_dev);
+
+	dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
+
+	return 0;
+}
+
+static const struct fsl_mc_device_id dpaa2_eth_match_id_table[] = {
+	{
+		.vendor = FSL_MC_VENDOR_FREESCALE,
+		.obj_type = "dpni",
+	},
+	{ .vendor = 0x0 }
+};
+MODULE_DEVICE_TABLE(fslmc, dpaa2_eth_match_id_table);
+
+static struct fsl_mc_driver dpaa2_eth_driver = {
+	.driver = {
+		.name = KBUILD_MODNAME,
+		.owner = THIS_MODULE,
+	},
+	.probe = dpaa2_eth_probe,
+	.remove = dpaa2_eth_remove,
+	.match_id_table = dpaa2_eth_match_id_table
+};
+
+module_fsl_mc_driver(dpaa2_eth_driver);
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
new file mode 100644
index 0000000..1f86a78
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-eth.h
@@ -0,0 +1,412 @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
+/* Copyright 2014-2016 Freescale Semiconductor Inc.
+ * Copyright 2016 NXP
+ */
+
+#ifndef __DPAA2_ETH_H
+#define __DPAA2_ETH_H
+
+#include <linux/netdevice.h>
+#include <linux/if_vlan.h>
+#include <linux/fsl/mc.h>
+
+#include <soc/fsl/dpaa2-io.h>
+#include <soc/fsl/dpaa2-fd.h>
+#include "dpni.h"
+#include "dpni-cmd.h"
+
+#include "dpaa2-eth-trace.h"
+
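+/* Pack a x.y.z hardware version into a single comparable value, with 5 bits
+ * for each of the y and z components
+ */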
+#define DPAA2_WRIOP_VERSION(x, y, z) ((x) << 10 | (y) << 5 | (z) << 0)
+
+#define DPAA2_ETH_STORE_SIZE		16
+
+/* Maximum number of scatter-gather entries in an ingress frame,
+ * considering the maximum receive frame size is 64K
+ */
+#define DPAA2_ETH_MAX_SG_ENTRIES	((64 * 1024) / DPAA2_ETH_RX_BUF_SIZE)
+
+/* Maximum acceptable MTU value. It is directly related to the
+ * hardware-enforced Max Frame Length (currently 10k).
+ */
+#define DPAA2_ETH_MFL			(10 * 1024)
+#define DPAA2_ETH_MAX_MTU		(DPAA2_ETH_MFL - VLAN_ETH_HLEN)
+/* Convert L3 MTU to L2 MFL */
+#define DPAA2_ETH_L2_MAX_FRM(mtu)	((mtu) + VLAN_ETH_HLEN)
+
+/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo
+ * frames in the Rx queues (length of the current frame is not
+ * taken into account when making the taildrop decision)
+ */
+#define DPAA2_ETH_TAILDROP_THRESH	(64 * 1024)
+
+/* Buffer quota per queue. Must be large enough such that for minimum sized
+ * frames taildrop kicks in before the bpool gets depleted, so we compute
+ * how many 64B frames fit inside the taildrop threshold and add a margin
+ * to accommodate the buffer refill delay.
+ */
+#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE	(DPAA2_ETH_TAILDROP_THRESH / 64)
+#define DPAA2_ETH_NUM_BUFS		(DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256)
+#define DPAA2_ETH_REFILL_THRESH		DPAA2_ETH_MAX_FRAMES_PER_QUEUE
+
+/* Maximum number of buffers that can be acquired/released through a single
+ * QBMan command
+ */
+#define DPAA2_ETH_BUFS_PER_CMD		7
+
+/* Hardware requires alignment for ingress/egress buffer addresses */
+#define DPAA2_ETH_TX_BUF_ALIGN		64
+
+#define DPAA2_ETH_RX_BUF_SIZE		2048
+#define DPAA2_ETH_SKB_SIZE \
+	(DPAA2_ETH_RX_BUF_SIZE + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
+
+/* Hardware annotation area in RX/TX buffers */
+#define DPAA2_ETH_RX_HWA_SIZE		64
+#define DPAA2_ETH_TX_HWA_SIZE		128
+
+/* PTP nominal frequency 1GHz */
+#define DPAA2_PTP_CLK_PERIOD_NS		1
+
+/* Due to a limitation in WRIOP 1.0.0, the RX buffer data must be aligned
+ * to 256B. For newer revisions, the requirement is only for 64B alignment
+ */
+#define DPAA2_ETH_RX_BUF_ALIGN_REV1	256
+#define DPAA2_ETH_RX_BUF_ALIGN		64
+
+/* We are accommodating an skb backpointer and some S/G info
+ * in the frame's software annotation. The hardware
+ * options are either 0 or 64, so we choose the latter.
+ */
+#define DPAA2_ETH_SWA_SIZE		64
+
+/* Must keep this struct smaller than DPAA2_ETH_SWA_SIZE */
+struct dpaa2_eth_swa {
+	struct sk_buff *skb;
+	struct scatterlist *scl;
+	int num_sg;
+	int sgt_size;
+};
+
+/* Annotation valid bits in FD FRC */
+#define DPAA2_FD_FRC_FASV		0x8000
+#define DPAA2_FD_FRC_FAEADV		0x4000
+#define DPAA2_FD_FRC_FAPRV		0x2000
+#define DPAA2_FD_FRC_FAIADV		0x1000
+#define DPAA2_FD_FRC_FASWOV		0x0800
+#define DPAA2_FD_FRC_FAICFDV		0x0400
+
+/* Error bits in FD CTRL */
+#define DPAA2_FD_RX_ERR_MASK		(FD_CTRL_SBE | FD_CTRL_FAERR)
+#define DPAA2_FD_TX_ERR_MASK		(FD_CTRL_UFD	| \
+					 FD_CTRL_SBE	| \
+					 FD_CTRL_FSE	| \
+					 FD_CTRL_FAERR)
+
+/* Annotation bits in FD CTRL */
+#define DPAA2_FD_CTRL_ASAL		0x00020000	/* ASAL = 128B */
+
+/* Frame annotation status */
+struct dpaa2_fas {
+	u8 reserved;
+	u8 ppid;
+	__le16 ifpid;
+	__le32 status;
+};
+
+/* Frame annotation status word is located in the first 8 bytes
+ * of the buffer's hardware annotation area
+ */
+#define DPAA2_FAS_OFFSET		0
+#define DPAA2_FAS_SIZE			(sizeof(struct dpaa2_fas))
+
+/* Timestamp is located in the next 8 bytes of the buffer's
+ * hardware annotation area
+ */
+#define DPAA2_TS_OFFSET			0x8
+
+/* Frame annotation egress action descriptor */
+#define DPAA2_FAEAD_OFFSET		0x58
+
+struct dpaa2_faead {
+	__le32 conf_fqid;
+	__le32 ctrl;
+};
+
+#define DPAA2_FAEAD_A2V			0x20000000
+#define DPAA2_FAEAD_UPDV		0x00001000
+#define DPAA2_FAEAD_UPD			0x00000010
+
+/* Accessors for the hardware annotation fields that we use */
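+/* The hardware annotation area follows the software annotation when one is
+ * reserved in the buffer layout (Tx buffers) and sits at the start of the
+ * buffer otherwise (Rx buffers)
+ */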
+static inline void *dpaa2_get_hwa(void *buf_addr, bool swa)
+{
+	return buf_addr + (swa ? DPAA2_ETH_SWA_SIZE : 0);
+}
+
+static inline struct dpaa2_fas *dpaa2_get_fas(void *buf_addr, bool swa)
+{
+	return dpaa2_get_hwa(buf_addr, swa) + DPAA2_FAS_OFFSET;
+}
+
+static inline __le64 *dpaa2_get_ts(void *buf_addr, bool swa)
+{
+	return dpaa2_get_hwa(buf_addr, swa) + DPAA2_TS_OFFSET;
+}
+
+static inline struct dpaa2_faead *dpaa2_get_faead(void *buf_addr, bool swa)
+{
+	return dpaa2_get_hwa(buf_addr, swa) + DPAA2_FAEAD_OFFSET;
+}
+
+/* Error and status bits in the frame annotation status word */
+/* Debug frame, otherwise supposed to be discarded */
+#define DPAA2_FAS_DISC			0x80000000
+/* MACSEC frame */
+#define DPAA2_FAS_MS			0x40000000
+#define DPAA2_FAS_PTP			0x08000000
+/* Ethernet multicast frame */
+#define DPAA2_FAS_MC			0x04000000
+/* Ethernet broadcast frame */
+#define DPAA2_FAS_BC			0x02000000
+#define DPAA2_FAS_KSE			0x00040000
+#define DPAA2_FAS_EOFHE			0x00020000
+#define DPAA2_FAS_MNLE			0x00010000
+#define DPAA2_FAS_TIDE			0x00008000
+#define DPAA2_FAS_PIEE			0x00004000
+/* Frame length error */
+#define DPAA2_FAS_FLE			0x00002000
+/* Frame physical error */
+#define DPAA2_FAS_FPE			0x00001000
+#define DPAA2_FAS_PTE			0x00000080
+#define DPAA2_FAS_ISP			0x00000040
+#define DPAA2_FAS_PHE			0x00000020
+#define DPAA2_FAS_BLE			0x00000010
+/* L3 csum validation performed */
+#define DPAA2_FAS_L3CV			0x00000008
+/* L3 csum error */
+#define DPAA2_FAS_L3CE			0x00000004
+/* L4 csum validation performed */
+#define DPAA2_FAS_L4CV			0x00000002
+/* L4 csum error */
+#define DPAA2_FAS_L4CE			0x00000001
+/* Possible errors on the ingress path */
+#define DPAA2_FAS_RX_ERR_MASK		(DPAA2_FAS_KSE		| \
+					 DPAA2_FAS_EOFHE	| \
+					 DPAA2_FAS_MNLE		| \
+					 DPAA2_FAS_TIDE		| \
+					 DPAA2_FAS_PIEE		| \
+					 DPAA2_FAS_FLE		| \
+					 DPAA2_FAS_FPE		| \
+					 DPAA2_FAS_PTE		| \
+					 DPAA2_FAS_ISP		| \
+					 DPAA2_FAS_PHE		| \
+					 DPAA2_FAS_BLE		| \
+					 DPAA2_FAS_L3CE		| \
+					 DPAA2_FAS_L4CE)
+
+/* Time in milliseconds between link state updates */
+#define DPAA2_ETH_LINK_STATE_REFRESH	1000
+
+/* Number of times to retry a frame enqueue before giving up.
+ * Value determined empirically, in order to minimize the number
+ * of frames dropped on Tx
+ */
+#define DPAA2_ETH_ENQUEUE_RETRIES	10
+
+/* Driver statistics, other than those in struct rtnl_link_stats64.
+ * These are usually collected per-CPU and aggregated by ethtool.
+ */
+struct dpaa2_eth_drv_stats {
+	__u64	tx_conf_frames;
+	__u64	tx_conf_bytes;
+	__u64	tx_sg_frames;
+	__u64	tx_sg_bytes;
+	__u64	tx_reallocs;
+	__u64	rx_sg_frames;
+	__u64	rx_sg_bytes;
+	/* Enqueues retried due to portal busy */
+	__u64	tx_portal_busy;
+};
+
+/* Per-FQ statistics */
+struct dpaa2_eth_fq_stats {
+	/* Number of frames received on this queue */
+	__u64 frames;
+};
+
+/* Per-channel statistics */
+struct dpaa2_eth_ch_stats {
+	/* Volatile dequeues retried due to portal busy */
+	__u64 dequeue_portal_busy;
+	/* Number of CDANs; useful to estimate avg NAPI len */
+	__u64 cdan;
+	/* Number of frames received on queues from this channel */
+	__u64 frames;
+	/* Pull errors */
+	__u64 pull_err;
+};
+
+/* Maximum number of queues associated with a DPNI */
+#define DPAA2_ETH_MAX_RX_QUEUES		16
+#define DPAA2_ETH_MAX_TX_QUEUES		16
+#define DPAA2_ETH_MAX_QUEUES		(DPAA2_ETH_MAX_RX_QUEUES + \
+					DPAA2_ETH_MAX_TX_QUEUES)
+
+#define DPAA2_ETH_MAX_DPCONS		16
+
+enum dpaa2_eth_fq_type {
+	DPAA2_RX_FQ = 0,
+	DPAA2_TX_CONF_FQ,
+};
+
+struct dpaa2_eth_priv;
+
+struct dpaa2_eth_fq {
+	u32 fqid;
+	u32 tx_qdbin;
+	u16 flowid;
+	int target_cpu;
+	struct dpaa2_eth_channel *channel;
+	enum dpaa2_eth_fq_type type;
+
+	void (*consume)(struct dpaa2_eth_priv *,
+			struct dpaa2_eth_channel *,
+			const struct dpaa2_fd *,
+			struct napi_struct *,
+			u16 queue_id);
+	struct dpaa2_eth_fq_stats stats;
+};
+
+struct dpaa2_eth_channel {
+	struct dpaa2_io_notification_ctx nctx;
+	struct fsl_mc_device *dpcon;
+	int dpcon_id;
+	int ch_id;
+	struct napi_struct napi;
+	struct dpaa2_io *dpio;
+	struct dpaa2_io_store *store;
+	struct dpaa2_eth_priv *priv;
+	int buf_count;
+	struct dpaa2_eth_ch_stats stats;
+};
+
+struct dpaa2_eth_hash_fields {
+	u64 rxnfc_field;
+	enum net_prot cls_prot;
+	int cls_field;
+	int size;
+};
+
+/* Driver private data */
+struct dpaa2_eth_priv {
+	struct net_device *net_dev;
+
+	u8 num_fqs;
+	struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES];
+
+	u8 num_channels;
+	struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS];
+
+	struct dpni_attr dpni_attrs;
+	u16 dpni_ver_major;
+	u16 dpni_ver_minor;
+	u16 tx_data_offset;
+
+	struct fsl_mc_device *dpbp_dev;
+	u16 bpid;
+	struct iommu_domain *iommu_domain;
+
+	bool tx_tstamp; /* Tx timestamping enabled */
+	bool rx_tstamp; /* Rx timestamping enabled */
+
+	u16 tx_qdid;
+	u16 rx_buf_align;
+	struct fsl_mc_io *mc_io;
+	/* Cores which have an affine DPIO/DPCON.
+	 * This is the cpu set on which Rx and Tx conf frames are processed
+	 */
+	struct cpumask dpio_cpumask;
+
+	/* Standard statistics */
+	struct rtnl_link_stats64 __percpu *percpu_stats;
+	/* Extra stats, in addition to the ones known by the kernel */
+	struct dpaa2_eth_drv_stats __percpu *percpu_extras;
+
+	u16 mc_token;
+
+	struct dpni_link_state link_state;
+	bool do_link_poll;
+	struct task_struct *poll_thread;
+
+	/* enabled ethtool hashing bits */
+	u64 rx_hash_fields;
+};
+
+#define DPAA2_RXH_SUPPORTED	(RXH_L2DA | RXH_VLAN | RXH_L3_PROTO \
+				| RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 \
+				| RXH_L4_B_2_3)
+
+/* default Rx hash options, set during probing */
+#define DPAA2_RXH_DEFAULT	(RXH_L3_PROTO | RXH_IP_SRC | RXH_IP_DST | \
+				 RXH_L4_B_0_1 | RXH_L4_B_2_3)
+
+#define dpaa2_eth_hash_enabled(priv)	\
+	((priv)->dpni_attrs.num_queues > 1)
+
+/* Required by struct dpni_rx_tc_dist_cfg::key_cfg_iova */
+#define DPAA2_CLASSIFIER_DMA_SIZE 256
+
+extern const struct ethtool_ops dpaa2_ethtool_ops;
+extern int dpaa2_phc_index;
+
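+/* strcmp-like DPNI version comparison: returns a negative, zero or positive
+ * value as the firmware version is lower than, equal to or higher than the
+ * given major.minor pair
+ */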
+static inline int dpaa2_eth_cmp_dpni_ver(struct dpaa2_eth_priv *priv,
+					 u16 ver_major, u16 ver_minor)
+{
+	if (priv->dpni_ver_major == ver_major)
+		return priv->dpni_ver_minor - ver_minor;
+	return priv->dpni_ver_major - ver_major;
+}
+
+/* Hardware only sees DPAA2_ETH_RX_BUF_SIZE, but the skb built around
+ * the buffer also needs space for its shared info struct, and we need
+ * to allocate enough to accommodate hardware alignment restrictions
+ */
+static inline unsigned int dpaa2_eth_buf_raw_size(struct dpaa2_eth_priv *priv)
+{
+	return DPAA2_ETH_SKB_SIZE + priv->rx_buf_align;
+}
+
+static inline
+unsigned int dpaa2_eth_needed_headroom(struct dpaa2_eth_priv *priv,
+				       struct sk_buff *skb)
+{
+	unsigned int headroom = DPAA2_ETH_SWA_SIZE;
+
+	/* For non-linear skbs we have no headroom requirement, as we build a
+	 * SG frame with a newly allocated SGT buffer
+	 */
+	if (skb_is_nonlinear(skb))
+		return 0;
+
+	/* If we have Tx timestamping, need 128B hardware annotation */
+	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
+		headroom += DPAA2_ETH_TX_HWA_SIZE;
+
+	return headroom;
+}
+
+/* Extra headroom space requested from hardware, in order to make sure there's
+ * no realloc'ing in forwarding scenarios
+ */
+static inline unsigned int dpaa2_eth_rx_head_room(struct dpaa2_eth_priv *priv)
+{
+	return priv->tx_data_offset + DPAA2_ETH_TX_BUF_ALIGN -
+	       DPAA2_ETH_RX_HWA_SIZE;
+}
+
+static int dpaa2_eth_queue_count(struct dpaa2_eth_priv *priv)
+{
+	return priv->dpni_attrs.num_queues;
+}
+
+#endif	/* __DPAA2_ETH_H */
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
new file mode 100644
index 0000000..8056a95
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpaa2-ethtool.c
@@ -0,0 +1,280 @@
+// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
+/* Copyright 2014-2016 Freescale Semiconductor Inc.
+ * Copyright 2016 NXP
+ */
+
+#include <linux/net_tstamp.h>
+
+#include "dpni.h"	/* DPNI_LINK_OPT_* */
+#include "dpaa2-eth.h"
+
+/* To be kept in sync with DPNI statistics */
+static char dpaa2_ethtool_stats[][ETH_GSTRING_LEN] = {
+	"[hw] rx frames",
+	"[hw] rx bytes",
+	"[hw] rx mcast frames",
+	"[hw] rx mcast bytes",
+	"[hw] rx bcast frames",
+	"[hw] rx bcast bytes",
+	"[hw] tx frames",
+	"[hw] tx bytes",
+	"[hw] tx mcast frames",
+	"[hw] tx mcast bytes",
+	"[hw] tx bcast frames",
+	"[hw] tx bcast bytes",
+	"[hw] rx filtered frames",
+	"[hw] rx discarded frames",
+	"[hw] rx nobuffer discards",
+	"[hw] tx discarded frames",
+	"[hw] tx confirmed frames",
+};
+
+#define DPAA2_ETH_NUM_STATS	ARRAY_SIZE(dpaa2_ethtool_stats)
+
+static char dpaa2_ethtool_extras[][ETH_GSTRING_LEN] = {
+	/* per-cpu stats */
+	"[drv] tx conf frames",
+	"[drv] tx conf bytes",
+	"[drv] tx sg frames",
+	"[drv] tx sg bytes",
+	"[drv] tx realloc frames",
+	"[drv] rx sg frames",
+	"[drv] rx sg bytes",
+	"[drv] enqueue portal busy",
+	/* Channel stats */
+	"[drv] dequeue portal busy",
+	"[drv] channel pull errors",
+	"[drv] cdan",
+};
+
+#define DPAA2_ETH_NUM_EXTRA_STATS	ARRAY_SIZE(dpaa2_ethtool_extras)
+
+static void dpaa2_eth_get_drvinfo(struct net_device *net_dev,
+				  struct ethtool_drvinfo *drvinfo)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+
+	strlcpy(drvinfo->driver, KBUILD_MODNAME, sizeof(drvinfo->driver));
+
+	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
+		 "%u.%u", priv->dpni_ver_major, priv->dpni_ver_minor);
+
+	strlcpy(drvinfo->bus_info, dev_name(net_dev->dev.parent->parent),
+		sizeof(drvinfo->bus_info));
+}
+
+static int
+dpaa2_eth_get_link_ksettings(struct net_device *net_dev,
+			     struct ethtool_link_ksettings *link_settings)
+{
+	struct dpni_link_state state = {0};
+	int err = 0;
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+
+	err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
+	if (err) {
+		netdev_err(net_dev, "ERROR %d getting link state\n", err);
+		goto out;
+	}
+
+	/* At the moment, we have no way of interrogating the DPMAC
+	 * from the DPNI side - and for that matter there may exist
+	 * no DPMAC at all. So for now we just don't report anything
+	 * beyond the DPNI attributes.
+	 */
+	if (state.options & DPNI_LINK_OPT_AUTONEG)
+		link_settings->base.autoneg = AUTONEG_ENABLE;
+	if (!(state.options & DPNI_LINK_OPT_HALF_DUPLEX))
+		link_settings->base.duplex = DUPLEX_FULL;
+	link_settings->base.speed = state.rate;
+
+out:
+	return err;
+}
+
+#define DPNI_DYNAMIC_LINK_SET_VER_MAJOR		7
+#define DPNI_DYNAMIC_LINK_SET_VER_MINOR		1
+static int
+dpaa2_eth_set_link_ksettings(struct net_device *net_dev,
+			     const struct ethtool_link_ksettings *link_settings)
+{
+	struct dpni_link_cfg cfg = {0};
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	int err = 0;
+
+	/* If using an older MC version, the DPNI must be down in order to be
+	 * able to change link settings. Let the user know about it.
+	 */
+	if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_DYNAMIC_LINK_SET_VER_MAJOR,
+				   DPNI_DYNAMIC_LINK_SET_VER_MINOR) < 0) {
+		if (netif_running(net_dev)) {
+			netdev_info(net_dev, "Interface must be brought down first.\n");
+			return -EACCES;
+		}
+	}
+
+	cfg.rate = link_settings->base.speed;
+	if (link_settings->base.autoneg == AUTONEG_ENABLE)
+		cfg.options |= DPNI_LINK_OPT_AUTONEG;
+	else
+		cfg.options &= ~DPNI_LINK_OPT_AUTONEG;
+	if (link_settings->base.duplex  == DUPLEX_HALF)
+		cfg.options |= DPNI_LINK_OPT_HALF_DUPLEX;
+	else
+		cfg.options &= ~DPNI_LINK_OPT_HALF_DUPLEX;
+
+	err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &cfg);
+	if (err)
+		/* ethtool will be loud enough if we return an error; no point
+		 * in putting our own error message on the console by default
+		 */
+		netdev_dbg(net_dev, "ERROR %d setting link cfg\n", err);
+
+	return err;
+}
+
+static void dpaa2_eth_get_strings(struct net_device *netdev, u32 stringset,
+				  u8 *data)
+{
+	u8 *p = data;
+	int i;
+
+	switch (stringset) {
+	case ETH_SS_STATS:
+		for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
+			strlcpy(p, dpaa2_ethtool_stats[i], ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+		for (i = 0; i < DPAA2_ETH_NUM_EXTRA_STATS; i++) {
+			strlcpy(p, dpaa2_ethtool_extras[i], ETH_GSTRING_LEN);
+			p += ETH_GSTRING_LEN;
+		}
+		break;
+	}
+}
+
+static int dpaa2_eth_get_sset_count(struct net_device *net_dev, int sset)
+{
+	switch (sset) {
+	case ETH_SS_STATS: /* ethtool_get_stats(), ethtool_get_drvinfo() */
+		return DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+/* Fill in hardware counters, as returned by the MC */
+static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
+					struct ethtool_stats *stats,
+					u64 *data)
+{
+	int i = 0;
+	int j, k, err;
+	int num_cnt;
+	union dpni_statistics dpni_stats;
+	u64 cdan = 0;
+	u64 portal_busy = 0, pull_err = 0;
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+	struct dpaa2_eth_drv_stats *extras;
+	struct dpaa2_eth_ch_stats *ch_stats;
+
+	memset(data, 0,
+	       sizeof(u64) * (DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS));
+
+	/* Print standard counters, from DPNI statistics */
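+	/* Counters are read in pages of raw u64 values; pages 0-2 map, in
+	 * order, onto the names in dpaa2_ethtool_stats
+	 */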
+	for (j = 0; j <= 2; j++) {
+		err = dpni_get_statistics(priv->mc_io, 0, priv->mc_token,
+					  j, &dpni_stats);
+		if (err != 0)
+			netdev_warn(net_dev, "dpni_get_statistics(%d) failed\n", j);
+		switch (j) {
+		case 0:
+			num_cnt = sizeof(dpni_stats.page_0) / sizeof(u64);
+			break;
+		case 1:
+			num_cnt = sizeof(dpni_stats.page_1) / sizeof(u64);
+			break;
+		case 2:
+			num_cnt = sizeof(dpni_stats.page_2) / sizeof(u64);
+			break;
+		}
+		for (k = 0; k < num_cnt; k++)
+			*(data + i++) = dpni_stats.raw.counter[k];
+	}
+
+	/* Print per-cpu extra stats */
+	for_each_online_cpu(k) {
+		extras = per_cpu_ptr(priv->percpu_extras, k);
+		for (j = 0; j < sizeof(*extras) / sizeof(__u64); j++)
+			*((__u64 *)data + i + j) += *((__u64 *)extras + j);
+	}
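+	/* The inner loop left j equal to the number of extras counters */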
+	i += j;
+
+	for (j = 0; j < priv->num_channels; j++) {
+		ch_stats = &priv->channel[j]->stats;
+		cdan += ch_stats->cdan;
+		portal_busy += ch_stats->dequeue_portal_busy;
+		pull_err += ch_stats->pull_err;
+	}
+
+	*(data + i++) = portal_busy;
+	*(data + i++) = pull_err;
+	*(data + i++) = cdan;
+}
+
+static int dpaa2_eth_get_rxnfc(struct net_device *net_dev,
+			       struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
+{
+	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
+
+	switch (rxnfc->cmd) {
+	case ETHTOOL_GRXFH:
+		/* we purposely ignore rxnfc->flow_type for now, because the
+		 * classifier only supports a single set of fields for all
+		 * protocols
+		 */
+		rxnfc->data = priv->rx_hash_fields;
+		break;
+	case ETHTOOL_GRXRINGS:
+		rxnfc->data = dpaa2_eth_queue_count(priv);
+		break;
+	default:
+		return -EOPNOTSUPP;
+	}
+
+	return 0;
+}
+
+int dpaa2_phc_index = -1;
+EXPORT_SYMBOL(dpaa2_phc_index);
+
+static int dpaa2_eth_get_ts_info(struct net_device *dev,
+				 struct ethtool_ts_info *info)
+{
+	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
+				SOF_TIMESTAMPING_RX_HARDWARE |
+				SOF_TIMESTAMPING_RAW_HARDWARE;
+
+	info->phc_index = dpaa2_phc_index;
+
+	info->tx_types = (1 << HWTSTAMP_TX_OFF) |
+			 (1 << HWTSTAMP_TX_ON);
+
+	info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
+			   (1 << HWTSTAMP_FILTER_ALL);
+	return 0;
+}
+
+const struct ethtool_ops dpaa2_ethtool_ops = {
+	.get_drvinfo = dpaa2_eth_get_drvinfo,
+	.get_link = ethtool_op_get_link,
+	.get_link_ksettings = dpaa2_eth_get_link_ksettings,
+	.set_link_ksettings = dpaa2_eth_set_link_ksettings,
+	.get_sset_count = dpaa2_eth_get_sset_count,
+	.get_ethtool_stats = dpaa2_eth_get_ethtool_stats,
+	.get_strings = dpaa2_eth_get_strings,
+	.get_rxnfc = dpaa2_eth_get_rxnfc,
+	.get_ts_info = dpaa2_eth_get_ts_info,
+};
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpkg.h b/drivers/net/ethernet/freescale/dpaa2/dpkg.h
new file mode 100644
index 0000000..6de613b1
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpkg.h
@@ -0,0 +1,480 @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
+/* Copyright 2013-2015 Freescale Semiconductor Inc.
+ */
+#ifndef __FSL_DPKG_H_
+#define __FSL_DPKG_H_
+
+#include <linux/types.h>
+
+/* Data Path Key Generator API
+ * Contains initialization APIs and runtime APIs for the Key Generator
+ */
+
+/** Key Generator properties */
+
+/**
+ * Number of masks per key extraction
+ */
+#define DPKG_NUM_OF_MASKS		4
+/**
+ * Number of extractions per key profile
+ */
+#define DPKG_MAX_NUM_OF_EXTRACTS	10
+
+/**
+ * enum dpkg_extract_from_hdr_type - Selecting extraction by header types
+ * @DPKG_FROM_HDR: Extract selected bytes from header, by offset
+ * @DPKG_FROM_FIELD: Extract selected bytes from header, by offset from field
+ * @DPKG_FULL_FIELD: Extract a full field
+ */
+enum dpkg_extract_from_hdr_type {
+	DPKG_FROM_HDR = 0,
+	DPKG_FROM_FIELD = 1,
+	DPKG_FULL_FIELD = 2
+};
+
+/**
+ * enum dpkg_extract_type - Enumeration for selecting extraction type
+ * @DPKG_EXTRACT_FROM_HDR: Extract from the header
+ * @DPKG_EXTRACT_FROM_DATA: Extract from data not in specific header
+ * @DPKG_EXTRACT_FROM_PARSE: Extract from parser-result;
+ *	e.g. can be used to extract header existence;
+ *	please refer to 'Parse Result definition' section in the parser BG
+ */
+enum dpkg_extract_type {
+	DPKG_EXTRACT_FROM_HDR = 0,
+	DPKG_EXTRACT_FROM_DATA = 1,
+	DPKG_EXTRACT_FROM_PARSE = 3
+};
+
+/**
+ * struct dpkg_mask - A structure for defining a single extraction mask
+ * @mask: Byte mask for the extracted content
+ * @offset: Offset within the extracted content
+ */
+struct dpkg_mask {
+	u8 mask;
+	u8 offset;
+};
+
+/* Protocol fields */
+
+/* Ethernet fields */
+#define NH_FLD_ETH_DA				BIT(0)
+#define NH_FLD_ETH_SA				BIT(1)
+#define NH_FLD_ETH_LENGTH			BIT(2)
+#define NH_FLD_ETH_TYPE				BIT(3)
+#define NH_FLD_ETH_FINAL_CKSUM			BIT(4)
+#define NH_FLD_ETH_PADDING			BIT(5)
+#define NH_FLD_ETH_ALL_FIELDS			(BIT(6) - 1)
+
+/* VLAN fields */
+#define NH_FLD_VLAN_VPRI			BIT(0)
+#define NH_FLD_VLAN_CFI				BIT(1)
+#define NH_FLD_VLAN_VID				BIT(2)
+#define NH_FLD_VLAN_LENGTH			BIT(3)
+#define NH_FLD_VLAN_TYPE			BIT(4)
+#define NH_FLD_VLAN_ALL_FIELDS			(BIT(5) - 1)
+
+#define NH_FLD_VLAN_TCI				(NH_FLD_VLAN_VPRI | \
+						 NH_FLD_VLAN_CFI | \
+						 NH_FLD_VLAN_VID)
+
+/* IP (generic) fields */
+#define NH_FLD_IP_VER				BIT(0)
+#define NH_FLD_IP_DSCP				BIT(2)
+#define NH_FLD_IP_ECN				BIT(3)
+#define NH_FLD_IP_PROTO				BIT(4)
+#define NH_FLD_IP_SRC				BIT(5)
+#define NH_FLD_IP_DST				BIT(6)
+#define NH_FLD_IP_TOS_TC			BIT(7)
+#define NH_FLD_IP_ID				BIT(8)
+#define NH_FLD_IP_ALL_FIELDS			(BIT(9) - 1)
+
+/* IPV4 fields */
+#define NH_FLD_IPV4_VER				BIT(0)
+#define NH_FLD_IPV4_HDR_LEN			BIT(1)
+#define NH_FLD_IPV4_TOS				BIT(2)
+#define NH_FLD_IPV4_TOTAL_LEN			BIT(3)
+#define NH_FLD_IPV4_ID				BIT(4)
+#define NH_FLD_IPV4_FLAG_D			BIT(5)
+#define NH_FLD_IPV4_FLAG_M			BIT(6)
+#define NH_FLD_IPV4_OFFSET			BIT(7)
+#define NH_FLD_IPV4_TTL				BIT(8)
+#define NH_FLD_IPV4_PROTO			BIT(9)
+#define NH_FLD_IPV4_CKSUM			BIT(10)
+#define NH_FLD_IPV4_SRC_IP			BIT(11)
+#define NH_FLD_IPV4_DST_IP			BIT(12)
+#define NH_FLD_IPV4_OPTS			BIT(13)
+#define NH_FLD_IPV4_OPTS_COUNT			BIT(14)
+#define NH_FLD_IPV4_ALL_FIELDS			(BIT(15) - 1)
+
+/* IPV6 fields */
+#define NH_FLD_IPV6_VER				BIT(0)
+#define NH_FLD_IPV6_TC				BIT(1)
+#define NH_FLD_IPV6_SRC_IP			BIT(2)
+#define NH_FLD_IPV6_DST_IP			BIT(3)
+#define NH_FLD_IPV6_NEXT_HDR			BIT(4)
+#define NH_FLD_IPV6_FL				BIT(5)
+#define NH_FLD_IPV6_HOP_LIMIT			BIT(6)
+#define NH_FLD_IPV6_ID				BIT(7)
+#define NH_FLD_IPV6_ALL_FIELDS			(BIT(8) - 1)
+
+/* ICMP fields */
+#define NH_FLD_ICMP_TYPE			BIT(0)
+#define NH_FLD_ICMP_CODE			BIT(1)
+#define NH_FLD_ICMP_CKSUM			BIT(2)
+#define NH_FLD_ICMP_ID				BIT(3)
+#define NH_FLD_ICMP_SQ_NUM			BIT(4)
+#define NH_FLD_ICMP_ALL_FIELDS			(BIT(5) - 1)
+
+/* IGMP fields */
+#define NH_FLD_IGMP_VERSION			BIT(0)
+#define NH_FLD_IGMP_TYPE			BIT(1)
+#define NH_FLD_IGMP_CKSUM			BIT(2)
+#define NH_FLD_IGMP_DATA			BIT(3)
+#define NH_FLD_IGMP_ALL_FIELDS			(BIT(4) - 1)
+
+/* TCP fields */
+#define NH_FLD_TCP_PORT_SRC			BIT(0)
+#define NH_FLD_TCP_PORT_DST			BIT(1)
+#define NH_FLD_TCP_SEQ				BIT(2)
+#define NH_FLD_TCP_ACK				BIT(3)
+#define NH_FLD_TCP_OFFSET			BIT(4)
+#define NH_FLD_TCP_FLAGS			BIT(5)
+#define NH_FLD_TCP_WINDOW			BIT(6)
+#define NH_FLD_TCP_CKSUM			BIT(7)
+#define NH_FLD_TCP_URGPTR			BIT(8)
+#define NH_FLD_TCP_OPTS				BIT(9)
+#define NH_FLD_TCP_OPTS_COUNT			BIT(10)
+#define NH_FLD_TCP_ALL_FIELDS			(BIT(11) - 1)
+
+/* UDP fields */
+#define NH_FLD_UDP_PORT_SRC			BIT(0)
+#define NH_FLD_UDP_PORT_DST			BIT(1)
+#define NH_FLD_UDP_LEN				BIT(2)
+#define NH_FLD_UDP_CKSUM			BIT(3)
+#define NH_FLD_UDP_ALL_FIELDS			(BIT(4) - 1)
+
+/* UDP-lite fields */
+#define NH_FLD_UDP_LITE_PORT_SRC		BIT(0)
+#define NH_FLD_UDP_LITE_PORT_DST		BIT(1)
+#define NH_FLD_UDP_LITE_ALL_FIELDS		(BIT(2) - 1)
+
+/* UDP-encap-ESP fields */
+#define NH_FLD_UDP_ENC_ESP_PORT_SRC		BIT(0)
+#define NH_FLD_UDP_ENC_ESP_PORT_DST		BIT(1)
+#define NH_FLD_UDP_ENC_ESP_LEN			BIT(2)
+#define NH_FLD_UDP_ENC_ESP_CKSUM		BIT(3)
+#define NH_FLD_UDP_ENC_ESP_SPI			BIT(4)
+#define NH_FLD_UDP_ENC_ESP_SEQUENCE_NUM		BIT(5)
+#define NH_FLD_UDP_ENC_ESP_ALL_FIELDS		(BIT(6) - 1)
+
+/* SCTP fields */
+#define NH_FLD_SCTP_PORT_SRC			BIT(0)
+#define NH_FLD_SCTP_PORT_DST			BIT(1)
+#define NH_FLD_SCTP_VER_TAG			BIT(2)
+#define NH_FLD_SCTP_CKSUM			BIT(3)
+#define NH_FLD_SCTP_ALL_FIELDS			(BIT(4) - 1)
+
+/* DCCP fields */
+#define NH_FLD_DCCP_PORT_SRC			BIT(0)
+#define NH_FLD_DCCP_PORT_DST			BIT(1)
+#define NH_FLD_DCCP_ALL_FIELDS			(BIT(2) - 1)
+
+/* IPHC fields */
+#define NH_FLD_IPHC_CID				BIT(0)
+#define NH_FLD_IPHC_CID_TYPE			BIT(1)
+#define NH_FLD_IPHC_HCINDEX			BIT(2)
+#define NH_FLD_IPHC_GEN				BIT(3)
+#define NH_FLD_IPHC_D_BIT			BIT(4)
+#define NH_FLD_IPHC_ALL_FIELDS			(BIT(5) - 1)
+
+/* SCTP Chunk Data fields */
+#define NH_FLD_SCTP_CHUNK_DATA_TYPE		BIT(0)
+#define NH_FLD_SCTP_CHUNK_DATA_FLAGS		BIT(1)
+#define NH_FLD_SCTP_CHUNK_DATA_LENGTH		BIT(2)
+#define NH_FLD_SCTP_CHUNK_DATA_TSN		BIT(3)
+#define NH_FLD_SCTP_CHUNK_DATA_STREAM_ID	BIT(4)
+#define NH_FLD_SCTP_CHUNK_DATA_STREAM_SQN	BIT(5)
+#define NH_FLD_SCTP_CHUNK_DATA_PAYLOAD_PID	BIT(6)
+#define NH_FLD_SCTP_CHUNK_DATA_UNORDERED	BIT(7)
+#define NH_FLD_SCTP_CHUNK_DATA_BEGGINING	BIT(8)
+#define NH_FLD_SCTP_CHUNK_DATA_END		BIT(9)
+#define NH_FLD_SCTP_CHUNK_DATA_ALL_FIELDS	(BIT(10) - 1)
+
+/* L2TPV2 fields */
+#define NH_FLD_L2TPV2_TYPE_BIT			BIT(0)
+#define NH_FLD_L2TPV2_LENGTH_BIT		BIT(1)
+#define NH_FLD_L2TPV2_SEQUENCE_BIT		BIT(2)
+#define NH_FLD_L2TPV2_OFFSET_BIT		BIT(3)
+#define NH_FLD_L2TPV2_PRIORITY_BIT		BIT(4)
+#define NH_FLD_L2TPV2_VERSION			BIT(5)
+#define NH_FLD_L2TPV2_LEN			BIT(6)
+#define NH_FLD_L2TPV2_TUNNEL_ID			BIT(7)
+#define NH_FLD_L2TPV2_SESSION_ID		BIT(8)
+#define NH_FLD_L2TPV2_NS			BIT(9)
+#define NH_FLD_L2TPV2_NR			BIT(10)
+#define NH_FLD_L2TPV2_OFFSET_SIZE		BIT(11)
+#define NH_FLD_L2TPV2_FIRST_BYTE		BIT(12)
+#define NH_FLD_L2TPV2_ALL_FIELDS		(BIT(13) - 1)
+
+/* L2TPV3 fields */
+#define NH_FLD_L2TPV3_CTRL_TYPE_BIT		BIT(0)
+#define NH_FLD_L2TPV3_CTRL_LENGTH_BIT		BIT(1)
+#define NH_FLD_L2TPV3_CTRL_SEQUENCE_BIT		BIT(2)
+#define NH_FLD_L2TPV3_CTRL_VERSION		BIT(3)
+#define NH_FLD_L2TPV3_CTRL_LENGTH		BIT(4)
+#define NH_FLD_L2TPV3_CTRL_CONTROL		BIT(5)
+#define NH_FLD_L2TPV3_CTRL_SENT			BIT(6)
+#define NH_FLD_L2TPV3_CTRL_RECV			BIT(7)
+#define NH_FLD_L2TPV3_CTRL_FIRST_BYTE		BIT(8)
+#define NH_FLD_L2TPV3_CTRL_ALL_FIELDS		(BIT(9) - 1)
+
+#define NH_FLD_L2TPV3_SESS_TYPE_BIT		BIT(0)
+#define NH_FLD_L2TPV3_SESS_VERSION		BIT(1)
+#define NH_FLD_L2TPV3_SESS_ID			BIT(2)
+#define NH_FLD_L2TPV3_SESS_COOKIE		BIT(3)
+#define NH_FLD_L2TPV3_SESS_ALL_FIELDS		(BIT(4) - 1)
+
+/* PPP fields */
+#define NH_FLD_PPP_PID				BIT(0)
+#define NH_FLD_PPP_COMPRESSED			BIT(1)
+#define NH_FLD_PPP_ALL_FIELDS			(BIT(2) - 1)
+
+/* PPPoE fields */
+#define NH_FLD_PPPOE_VER			BIT(0)
+#define NH_FLD_PPPOE_TYPE			BIT(1)
+#define NH_FLD_PPPOE_CODE			BIT(2)
+#define NH_FLD_PPPOE_SID			BIT(3)
+#define NH_FLD_PPPOE_LEN			BIT(4)
+#define NH_FLD_PPPOE_SESSION			BIT(5)
+#define NH_FLD_PPPOE_PID			BIT(6)
+#define NH_FLD_PPPOE_ALL_FIELDS			(BIT(7) - 1)
+
+/* PPP-Mux fields */
+#define NH_FLD_PPPMUX_PID			BIT(0)
+#define NH_FLD_PPPMUX_CKSUM			BIT(1)
+#define NH_FLD_PPPMUX_COMPRESSED		BIT(2)
+#define NH_FLD_PPPMUX_ALL_FIELDS		(BIT(3) - 1)
+
+/* PPP-Mux sub-frame fields */
+#define NH_FLD_PPPMUX_SUBFRM_PFF		BIT(0)
+#define NH_FLD_PPPMUX_SUBFRM_LXT		BIT(1)
+#define NH_FLD_PPPMUX_SUBFRM_LEN		BIT(2)
+#define NH_FLD_PPPMUX_SUBFRM_PID		BIT(3)
+#define NH_FLD_PPPMUX_SUBFRM_USE_PID		BIT(4)
+#define NH_FLD_PPPMUX_SUBFRM_ALL_FIELDS		(BIT(5) - 1)
+
+/* LLC fields */
+#define NH_FLD_LLC_DSAP				BIT(0)
+#define NH_FLD_LLC_SSAP				BIT(1)
+#define NH_FLD_LLC_CTRL				BIT(2)
+#define NH_FLD_LLC_ALL_FIELDS			(BIT(3) - 1)
+
+/* NLPID fields */
+#define NH_FLD_NLPID_NLPID			BIT(0)
+#define NH_FLD_NLPID_ALL_FIELDS			(BIT(1) - 1)
+
+/* SNAP fields */
+#define NH_FLD_SNAP_OUI				BIT(0)
+#define NH_FLD_SNAP_PID				BIT(1)
+#define NH_FLD_SNAP_ALL_FIELDS			(BIT(2) - 1)
+
+/* LLC SNAP fields */
+#define NH_FLD_LLC_SNAP_TYPE			BIT(0)
+#define NH_FLD_LLC_SNAP_ALL_FIELDS		(BIT(1) - 1)
+
+/* ARP fields */
+#define NH_FLD_ARP_HTYPE			BIT(0)
+#define NH_FLD_ARP_PTYPE			BIT(1)
+#define NH_FLD_ARP_HLEN				BIT(2)
+#define NH_FLD_ARP_PLEN				BIT(3)
+#define NH_FLD_ARP_OPER				BIT(4)
+#define NH_FLD_ARP_SHA				BIT(5)
+#define NH_FLD_ARP_SPA				BIT(6)
+#define NH_FLD_ARP_THA				BIT(7)
+#define NH_FLD_ARP_TPA				BIT(8)
+#define NH_FLD_ARP_ALL_FIELDS			(BIT(9) - 1)
+
+/* RFC2684 fields */
+#define NH_FLD_RFC2684_LLC			BIT(0)
+#define NH_FLD_RFC2684_NLPID			BIT(1)
+#define NH_FLD_RFC2684_OUI			BIT(2)
+#define NH_FLD_RFC2684_PID			BIT(3)
+#define NH_FLD_RFC2684_VPN_OUI			BIT(4)
+#define NH_FLD_RFC2684_VPN_IDX			BIT(5)
+#define NH_FLD_RFC2684_ALL_FIELDS		(BIT(6) - 1)
+
+/* User defined fields */
+#define NH_FLD_USER_DEFINED_SRCPORT		BIT(0)
+#define NH_FLD_USER_DEFINED_PCDID		BIT(1)
+#define NH_FLD_USER_DEFINED_ALL_FIELDS		(BIT(2) - 1)
+
+/* Payload fields */
+#define NH_FLD_PAYLOAD_BUFFER			BIT(0)
+#define NH_FLD_PAYLOAD_SIZE			BIT(1)
+#define NH_FLD_MAX_FRM_SIZE			BIT(2)
+#define NH_FLD_MIN_FRM_SIZE			BIT(3)
+#define NH_FLD_PAYLOAD_TYPE			BIT(4)
+#define NH_FLD_FRAME_SIZE			BIT(5)
+#define NH_FLD_PAYLOAD_ALL_FIELDS		(BIT(6) - 1)
+
+/* GRE fields */
+#define NH_FLD_GRE_TYPE				BIT(0)
+#define NH_FLD_GRE_ALL_FIELDS			(BIT(1) - 1)
+
+/* MINENCAP fields */
+#define NH_FLD_MINENCAP_SRC_IP			BIT(0)
+#define NH_FLD_MINENCAP_DST_IP			BIT(1)
+#define NH_FLD_MINENCAP_TYPE			BIT(2)
+#define NH_FLD_MINENCAP_ALL_FIELDS		(BIT(3) - 1)
+
+/* IPSEC AH fields */
+#define NH_FLD_IPSEC_AH_SPI			BIT(0)
+#define NH_FLD_IPSEC_AH_NH			BIT(1)
+#define NH_FLD_IPSEC_AH_ALL_FIELDS		(BIT(2) - 1)
+
+/* IPSEC ESP fields */
+#define NH_FLD_IPSEC_ESP_SPI			BIT(0)
+#define NH_FLD_IPSEC_ESP_SEQUENCE_NUM		BIT(1)
+#define NH_FLD_IPSEC_ESP_ALL_FIELDS		(BIT(2) - 1)
+
+/* MPLS fields */
+#define NH_FLD_MPLS_LABEL_STACK			BIT(0)
+#define NH_FLD_MPLS_LABEL_STACK_ALL_FIELDS	(BIT(1) - 1)
+
+/* MACSEC fields */
+#define NH_FLD_MACSEC_SECTAG			BIT(0)
+#define NH_FLD_MACSEC_ALL_FIELDS		(BIT(1) - 1)
+
+/* GTP fields */
+#define NH_FLD_GTP_TEID				BIT(0)
+
+/* Supported protocols */
+enum net_prot {
+	NET_PROT_NONE = 0,
+	NET_PROT_PAYLOAD,
+	NET_PROT_ETH,
+	NET_PROT_VLAN,
+	NET_PROT_IPV4,
+	NET_PROT_IPV6,
+	NET_PROT_IP,
+	NET_PROT_TCP,
+	NET_PROT_UDP,
+	NET_PROT_UDP_LITE,
+	NET_PROT_IPHC,
+	NET_PROT_SCTP,
+	NET_PROT_SCTP_CHUNK_DATA,
+	NET_PROT_PPPOE,
+	NET_PROT_PPP,
+	NET_PROT_PPPMUX,
+	NET_PROT_PPPMUX_SUBFRM,
+	NET_PROT_L2TPV2,
+	NET_PROT_L2TPV3_CTRL,
+	NET_PROT_L2TPV3_SESS,
+	NET_PROT_LLC,
+	NET_PROT_LLC_SNAP,
+	NET_PROT_NLPID,
+	NET_PROT_SNAP,
+	NET_PROT_MPLS,
+	NET_PROT_IPSEC_AH,
+	NET_PROT_IPSEC_ESP,
+	NET_PROT_UDP_ENC_ESP, /* RFC 3948 */
+	NET_PROT_MACSEC,
+	NET_PROT_GRE,
+	NET_PROT_MINENCAP,
+	NET_PROT_DCCP,
+	NET_PROT_ICMP,
+	NET_PROT_IGMP,
+	NET_PROT_ARP,
+	NET_PROT_CAPWAP_DATA,
+	NET_PROT_CAPWAP_CTRL,
+	NET_PROT_RFC2684,
+	NET_PROT_ICMPV6,
+	NET_PROT_FCOE,
+	NET_PROT_FIP,
+	NET_PROT_ISCSI,
+	NET_PROT_GTP,
+	NET_PROT_USER_DEFINED_L2,
+	NET_PROT_USER_DEFINED_L3,
+	NET_PROT_USER_DEFINED_L4,
+	NET_PROT_USER_DEFINED_L5,
+	NET_PROT_USER_DEFINED_SHIM1,
+	NET_PROT_USER_DEFINED_SHIM2,
+
+	NET_PROT_DUMMY_LAST
+};
+
+/**
+ * struct dpkg_extract - A structure for defining a single extraction
+ * @type: Determines how the union below is interpreted:
+ *	DPKG_EXTRACT_FROM_HDR: selects 'from_hdr';
+ *	DPKG_EXTRACT_FROM_DATA: selects 'from_data';
+ *	DPKG_EXTRACT_FROM_PARSE: selects 'from_parse'
+ * @extract: Selects extraction method
+ * @extract.from_hdr: Used when 'type = DPKG_EXTRACT_FROM_HDR'
+ * @extract.from_data: Used when 'type = DPKG_EXTRACT_FROM_DATA'
+ * @extract.from_parse:  Used when 'type = DPKG_EXTRACT_FROM_PARSE'
+ * @extract.from_hdr.prot: Any of the supported headers
+ * @extract.from_hdr.type: Defines the type of header extraction:
+ *	DPKG_FROM_HDR: use size & offset below;
+ *	DPKG_FROM_FIELD: use field, size and offset below;
+ *	DPKG_FULL_FIELD: use field below
+ * @extract.from_hdr.field: One of the supported fields (NH_FLD_)
+ * @extract.from_hdr.size: Size in bytes
+ * @extract.from_hdr.offset: Byte offset
+ * @extract.from_hdr.hdr_index: Clear for cases not listed below;
+ *	Used for protocols that may have more than a single
+ *	header, 0 indicates an outer header;
+ *	Supported protocols (possible values):
+ *	NET_PROT_VLAN (0, HDR_INDEX_LAST);
+ *	NET_PROT_MPLS (0, 1, HDR_INDEX_LAST);
+ *	NET_PROT_IP (0, HDR_INDEX_LAST);
+ *	NET_PROT_IPV4 (0, HDR_INDEX_LAST);
+ *	NET_PROT_IPV6 (0, HDR_INDEX_LAST);
+ * @extract.from_data.size: Size in bytes
+ * @extract.from_data.offset: Byte offset
+ * @extract.from_parse.size: Size in bytes
+ * @extract.from_parse.offset: Byte offset
+ * @num_of_byte_masks: Defines the number of valid entries in the array below;
+ *		This is also the number of bytes to be used as masks
+ * @masks: Masks parameters
+ */
+struct dpkg_extract {
+	enum dpkg_extract_type type;
+	union {
+		struct {
+			enum net_prot			prot;
+			enum dpkg_extract_from_hdr_type type;
+			u32			field;
+			u8			size;
+			u8			offset;
+			u8			hdr_index;
+		} from_hdr;
+		struct {
+			u8 size;
+			u8 offset;
+		} from_data;
+		struct {
+			u8 size;
+			u8 offset;
+		} from_parse;
+	} extract;
+
+	u8		num_of_byte_masks;
+	struct dpkg_mask	masks[DPKG_NUM_OF_MASKS];
+};
+
+/**
+ * struct dpkg_profile_cfg - A structure for defining a full Key Generation
+ *				profile (rule)
+ * @num_extracts: Defines the number of valid entries in the array below
+ * @extracts: Array of required extractions
+ */
+struct dpkg_profile_cfg {
+	u8 num_extracts;
+	struct dpkg_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
+};
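+
+/* Illustrative sketch: a profile with two extractions, keying on the
+ * full IP source and destination address fields:
+ *
+ *	struct dpkg_profile_cfg cfg = { 0 };
+ *
+ *	cfg.num_extracts = 2;
+ *	cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
+ *	cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
+ *	cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
+ *	cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_SRC;
+ *	cfg.extracts[1].type = DPKG_EXTRACT_FROM_HDR;
+ *	cfg.extracts[1].extract.from_hdr.prot = NET_PROT_IP;
+ *	cfg.extracts[1].extract.from_hdr.type = DPKG_FULL_FIELD;
+ *	cfg.extracts[1].extract.from_hdr.field = NH_FLD_IP_DST;
+ */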
+
+#endif /* __FSL_DPKG_H_ */
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h
new file mode 100644
index 0000000..83698ab
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpni-cmd.h
@@ -0,0 +1,518 @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2016 NXP
+ */
+#ifndef _FSL_DPNI_CMD_H
+#define _FSL_DPNI_CMD_H
+
+#include "dpni.h"
+
+/* DPNI Version */
+#define DPNI_VER_MAJOR				7
+#define DPNI_VER_MINOR				0
+#define DPNI_CMD_BASE_VERSION			1
+#define DPNI_CMD_ID_OFFSET			4
+
+#define DPNI_CMD(id)	(((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
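+/* e.g. DPNI_CMD(0x801) == (0x801 << 4) | 1 == 0x8011 */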
+
+#define DPNI_CMDID_OPEN					DPNI_CMD(0x801)
+#define DPNI_CMDID_CLOSE				DPNI_CMD(0x800)
+#define DPNI_CMDID_CREATE				DPNI_CMD(0x901)
+#define DPNI_CMDID_DESTROY				DPNI_CMD(0x900)
+#define DPNI_CMDID_GET_API_VERSION			DPNI_CMD(0xa01)
+
+#define DPNI_CMDID_ENABLE				DPNI_CMD(0x002)
+#define DPNI_CMDID_DISABLE				DPNI_CMD(0x003)
+#define DPNI_CMDID_GET_ATTR				DPNI_CMD(0x004)
+#define DPNI_CMDID_RESET				DPNI_CMD(0x005)
+#define DPNI_CMDID_IS_ENABLED				DPNI_CMD(0x006)
+
+#define DPNI_CMDID_SET_IRQ				DPNI_CMD(0x010)
+#define DPNI_CMDID_GET_IRQ				DPNI_CMD(0x011)
+#define DPNI_CMDID_SET_IRQ_ENABLE			DPNI_CMD(0x012)
+#define DPNI_CMDID_GET_IRQ_ENABLE			DPNI_CMD(0x013)
+#define DPNI_CMDID_SET_IRQ_MASK				DPNI_CMD(0x014)
+#define DPNI_CMDID_GET_IRQ_MASK				DPNI_CMD(0x015)
+#define DPNI_CMDID_GET_IRQ_STATUS			DPNI_CMD(0x016)
+#define DPNI_CMDID_CLEAR_IRQ_STATUS			DPNI_CMD(0x017)
+
+#define DPNI_CMDID_SET_POOLS				DPNI_CMD(0x200)
+#define DPNI_CMDID_SET_ERRORS_BEHAVIOR			DPNI_CMD(0x20B)
+
+#define DPNI_CMDID_GET_QDID				DPNI_CMD(0x210)
+#define DPNI_CMDID_GET_TX_DATA_OFFSET			DPNI_CMD(0x212)
+#define DPNI_CMDID_GET_LINK_STATE			DPNI_CMD(0x215)
+#define DPNI_CMDID_SET_MAX_FRAME_LENGTH			DPNI_CMD(0x216)
+#define DPNI_CMDID_GET_MAX_FRAME_LENGTH			DPNI_CMD(0x217)
+#define DPNI_CMDID_SET_LINK_CFG				DPNI_CMD(0x21A)
+#define DPNI_CMDID_SET_TX_SHAPING			DPNI_CMD(0x21B)
+
+#define DPNI_CMDID_SET_MCAST_PROMISC			DPNI_CMD(0x220)
+#define DPNI_CMDID_GET_MCAST_PROMISC			DPNI_CMD(0x221)
+#define DPNI_CMDID_SET_UNICAST_PROMISC			DPNI_CMD(0x222)
+#define DPNI_CMDID_GET_UNICAST_PROMISC			DPNI_CMD(0x223)
+#define DPNI_CMDID_SET_PRIM_MAC				DPNI_CMD(0x224)
+#define DPNI_CMDID_GET_PRIM_MAC				DPNI_CMD(0x225)
+#define DPNI_CMDID_ADD_MAC_ADDR				DPNI_CMD(0x226)
+#define DPNI_CMDID_REMOVE_MAC_ADDR			DPNI_CMD(0x227)
+#define DPNI_CMDID_CLR_MAC_FILTERS			DPNI_CMD(0x228)
+
+#define DPNI_CMDID_SET_RX_TC_DIST			DPNI_CMD(0x235)
+
+#define DPNI_CMDID_ADD_FS_ENT				DPNI_CMD(0x244)
+#define DPNI_CMDID_REMOVE_FS_ENT			DPNI_CMD(0x245)
+#define DPNI_CMDID_CLR_FS_ENT				DPNI_CMD(0x246)
+
+#define DPNI_CMDID_GET_STATISTICS			DPNI_CMD(0x25D)
+#define DPNI_CMDID_GET_QUEUE				DPNI_CMD(0x25F)
+#define DPNI_CMDID_SET_QUEUE				DPNI_CMD(0x260)
+#define DPNI_CMDID_GET_TAILDROP				DPNI_CMD(0x261)
+#define DPNI_CMDID_SET_TAILDROP				DPNI_CMD(0x262)
+
+#define DPNI_CMDID_GET_PORT_MAC_ADDR			DPNI_CMD(0x263)
+
+#define DPNI_CMDID_GET_BUFFER_LAYOUT			DPNI_CMD(0x264)
+#define DPNI_CMDID_SET_BUFFER_LAYOUT			DPNI_CMD(0x265)
+
+#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE		DPNI_CMD(0x266)
+#define DPNI_CMDID_SET_CONGESTION_NOTIFICATION		DPNI_CMD(0x267)
+#define DPNI_CMDID_GET_CONGESTION_NOTIFICATION		DPNI_CMD(0x268)
+#define DPNI_CMDID_SET_EARLY_DROP			DPNI_CMD(0x269)
+#define DPNI_CMDID_GET_EARLY_DROP			DPNI_CMD(0x26A)
+#define DPNI_CMDID_GET_OFFLOAD				DPNI_CMD(0x26B)
+#define DPNI_CMDID_SET_OFFLOAD				DPNI_CMD(0x26C)
+
+/* Macros for accessing command fields smaller than 1 byte */
+#define DPNI_MASK(field)	\
+	GENMASK(DPNI_##field##_SHIFT + DPNI_##field##_SIZE - 1, \
+		DPNI_##field##_SHIFT)
+
+#define dpni_set_field(var, field, val)	\
+	((var) |= (((val) << DPNI_##field##_SHIFT) & DPNI_MASK(field)))
+#define dpni_get_field(var, field)	\
+	(((var) & DPNI_MASK(field)) >> DPNI_##field##_SHIFT)
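+
+/* Worked example (FOO is a hypothetical field name): with DPNI_FOO_SHIFT 4
+ * and DPNI_FOO_SIZE 1, DPNI_MASK(FOO) expands to GENMASK(4, 4) == 0x10, so
+ * dpni_set_field(var, FOO, 1) sets bit 4 of 'var' and
+ * dpni_get_field(var, FOO) reads it back.
+ */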
+
+struct dpni_cmd_open {
+	__le32 dpni_id;
+};
+
+#define DPNI_BACKUP_POOL(val, order)	(((val) & 0x1) << (order))
+struct dpni_cmd_set_pools {
+	/* cmd word 0 */
+	u8 num_dpbp;
+	u8 backup_pool_mask;
+	__le16 pad;
+	/* cmd word 0..4 */
+	__le32 dpbp_id[DPNI_MAX_DPBP];
+	/* cmd word 4..6 */
+	__le16 buffer_size[DPNI_MAX_DPBP];
+};
+
+/* The enable indication is always the least significant bit */
+#define DPNI_ENABLE_SHIFT		0
+#define DPNI_ENABLE_SIZE		1
+
+struct dpni_rsp_is_enabled {
+	u8 enabled;
+};
+
+struct dpni_rsp_get_irq {
+	/* response word 0 */
+	__le32 irq_val;
+	__le32 pad;
+	/* response word 1 */
+	__le64 irq_addr;
+	/* response word 2 */
+	__le32 irq_num;
+	__le32 type;
+};
+
+struct dpni_cmd_set_irq_enable {
+	u8 enable;
+	u8 pad[3];
+	u8 irq_index;
+};
+
+struct dpni_cmd_get_irq_enable {
+	__le32 pad;
+	u8 irq_index;
+};
+
+struct dpni_rsp_get_irq_enable {
+	u8 enabled;
+};
+
+struct dpni_cmd_set_irq_mask {
+	__le32 mask;
+	u8 irq_index;
+};
+
+struct dpni_cmd_get_irq_mask {
+	__le32 pad;
+	u8 irq_index;
+};
+
+struct dpni_rsp_get_irq_mask {
+	__le32 mask;
+};
+
+struct dpni_cmd_get_irq_status {
+	__le32 status;
+	u8 irq_index;
+};
+
+struct dpni_rsp_get_irq_status {
+	__le32 status;
+};
+
+struct dpni_cmd_clear_irq_status {
+	__le32 status;
+	u8 irq_index;
+};
+
+struct dpni_rsp_get_attr {
+	/* response word 0 */
+	__le32 options;
+	u8 num_queues;
+	u8 num_tcs;
+	u8 mac_filter_entries;
+	u8 pad0;
+	/* response word 1 */
+	u8 vlan_filter_entries;
+	u8 pad1;
+	u8 qos_entries;
+	u8 pad2;
+	__le16 fs_entries;
+	__le16 pad3;
+	/* response word 2 */
+	u8 qos_key_size;
+	u8 fs_key_size;
+	__le16 wriop_version;
+};
+
+#define DPNI_ERROR_ACTION_SHIFT		0
+#define DPNI_ERROR_ACTION_SIZE		4
+#define DPNI_FRAME_ANN_SHIFT		4
+#define DPNI_FRAME_ANN_SIZE		1
+
+struct dpni_cmd_set_errors_behavior {
+	__le32 errors;
+	/* from least significant bit: error_action:4, set_frame_annotation:1 */
+	u8 flags;
+};
+
+/* There are 3 separate commands for configuring Rx, Tx and Tx confirmation
+ * buffer layouts, but they all share the same parameters.
+ * If one of the functions changes, below structure needs to be split.
+ */
+
+#define DPNI_PASS_TS_SHIFT		0
+#define DPNI_PASS_TS_SIZE		1
+#define DPNI_PASS_PR_SHIFT		1
+#define DPNI_PASS_PR_SIZE		1
+#define DPNI_PASS_FS_SHIFT		2
+#define DPNI_PASS_FS_SIZE		1
+
+struct dpni_cmd_get_buffer_layout {
+	u8 qtype;
+};
+
+struct dpni_rsp_get_buffer_layout {
+	/* response word 0 */
+	u8 pad0[6];
+	/* from LSB: pass_timestamp:1, parser_result:1, frame_status:1 */
+	u8 flags;
+	u8 pad1;
+	/* response word 1 */
+	__le16 private_data_size;
+	__le16 data_align;
+	__le16 head_room;
+	__le16 tail_room;
+};
+
+struct dpni_cmd_set_buffer_layout {
+	/* cmd word 0 */
+	u8 qtype;
+	u8 pad0[3];
+	__le16 options;
+	/* from LSB: pass_timestamp:1, parser_result:1, frame_status:1 */
+	u8 flags;
+	u8 pad1;
+	/* cmd word 1 */
+	__le16 private_data_size;
+	__le16 data_align;
+	__le16 head_room;
+	__le16 tail_room;
+};
+
+struct dpni_cmd_set_offload {
+	u8 pad[3];
+	u8 dpni_offload;
+	__le32 config;
+};
+
+struct dpni_cmd_get_offload {
+	u8 pad[3];
+	u8 dpni_offload;
+};
+
+struct dpni_rsp_get_offload {
+	__le32 pad;
+	__le32 config;
+};
+
+struct dpni_cmd_get_qdid {
+	u8 qtype;
+};
+
+struct dpni_rsp_get_qdid {
+	__le16 qdid;
+};
+
+struct dpni_rsp_get_tx_data_offset {
+	__le16 data_offset;
+};
+
+struct dpni_cmd_get_statistics {
+	u8 page_number;
+};
+
+struct dpni_rsp_get_statistics {
+	__le64 counter[DPNI_STATISTICS_CNT];
+};
+
+struct dpni_cmd_set_link_cfg {
+	/* cmd word 0 */
+	__le64 pad0;
+	/* cmd word 1 */
+	__le32 rate;
+	__le32 pad1;
+	/* cmd word 2 */
+	__le64 options;
+};
+
+#define DPNI_LINK_STATE_SHIFT		0
+#define DPNI_LINK_STATE_SIZE		1
+
+struct dpni_rsp_get_link_state {
+	/* response word 0 */
+	__le32 pad0;
+	/* from LSB: up:1 */
+	u8 flags;
+	u8 pad1[3];
+	/* response word 1 */
+	__le32 rate;
+	__le32 pad2;
+	/* response word 2 */
+	__le64 options;
+};
+
+struct dpni_cmd_set_max_frame_length {
+	__le16 max_frame_length;
+};
+
+struct dpni_rsp_get_max_frame_length {
+	__le16 max_frame_length;
+};
+
+struct dpni_cmd_set_multicast_promisc {
+	u8 enable;
+};
+
+struct dpni_rsp_get_multicast_promisc {
+	u8 enabled;
+};
+
+struct dpni_cmd_set_unicast_promisc {
+	u8 enable;
+};
+
+struct dpni_rsp_get_unicast_promisc {
+	u8 enabled;
+};
+
+struct dpni_cmd_set_primary_mac_addr {
+	__le16 pad;
+	u8 mac_addr[6];
+};
+
+struct dpni_rsp_get_primary_mac_addr {
+	__le16 pad;
+	u8 mac_addr[6];
+};
+
+struct dpni_rsp_get_port_mac_addr {
+	__le16 pad;
+	u8 mac_addr[6];
+};
+
+struct dpni_cmd_add_mac_addr {
+	__le16 pad;
+	u8 mac_addr[6];
+};
+
+struct dpni_cmd_remove_mac_addr {
+	__le16 pad;
+	u8 mac_addr[6];
+};
+
+#define DPNI_UNICAST_FILTERS_SHIFT	0
+#define DPNI_UNICAST_FILTERS_SIZE	1
+#define DPNI_MULTICAST_FILTERS_SHIFT	1
+#define DPNI_MULTICAST_FILTERS_SIZE	1
+
+struct dpni_cmd_clear_mac_filters {
+	/* from LSB: unicast:1, multicast:1 */
+	u8 flags;
+};
+
+#define DPNI_DIST_MODE_SHIFT		0
+#define DPNI_DIST_MODE_SIZE		4
+#define DPNI_MISS_ACTION_SHIFT		4
+#define DPNI_MISS_ACTION_SIZE		4
+
+struct dpni_cmd_set_rx_tc_dist {
+	/* cmd word 0 */
+	__le16 dist_size;
+	u8 tc_id;
+	/* from LSB: dist_mode:4, miss_action:4 */
+	u8 flags;
+	__le16 pad0;
+	__le16 default_flow_id;
+	/* cmd word 1..5 */
+	__le64 pad1[5];
+	/* cmd word 6 */
+	__le64 key_cfg_iova;
+};
+
+/* dpni_set_rx_tc_dist extension (structure of the DMA-able memory at
+ * key_cfg_iova)
+ */
+struct dpni_mask_cfg {
+	u8 mask;
+	u8 offset;
+};
+
+#define DPNI_EFH_TYPE_SHIFT		0
+#define DPNI_EFH_TYPE_SIZE		4
+#define DPNI_EXTRACT_TYPE_SHIFT		0
+#define DPNI_EXTRACT_TYPE_SIZE		4
+
+struct dpni_dist_extract {
+	/* word 0 */
+	u8 prot;
+	/* EFH type stored in the 4 least significant bits */
+	u8 efh_type;
+	u8 size;
+	u8 offset;
+	__le32 field;
+	/* word 1 */
+	u8 hdr_index;
+	u8 constant;
+	u8 num_of_repeats;
+	u8 num_of_byte_masks;
+	/* Extraction type is stored in the 4 LSBs */
+	u8 extract_type;
+	u8 pad[3];
+	/* word 2 */
+	struct dpni_mask_cfg masks[4];
+};
+
+struct dpni_ext_set_rx_tc_dist {
+	/* extension word 0 */
+	u8 num_extracts;
+	u8 pad[7];
+	/* words 1..25 */
+	struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
+};
+
+struct dpni_cmd_get_queue {
+	u8 qtype;
+	u8 tc;
+	u8 index;
+};
+
+#define DPNI_DEST_TYPE_SHIFT		0
+#define DPNI_DEST_TYPE_SIZE		4
+#define DPNI_STASH_CTRL_SHIFT		6
+#define DPNI_STASH_CTRL_SIZE		1
+#define DPNI_HOLD_ACTIVE_SHIFT		7
+#define DPNI_HOLD_ACTIVE_SIZE		1
+
+struct dpni_rsp_get_queue {
+	/* response word 0 */
+	__le64 pad0;
+	/* response word 1 */
+	__le32 dest_id;
+	__le16 pad1;
+	u8 dest_prio;
+	/* From LSB: dest_type:4, pad:2, flc_stash_ctrl:1, hold_active:1 */
+	u8 flags;
+	/* response word 2 */
+	__le64 flc;
+	/* response word 3 */
+	__le64 user_context;
+	/* response word 4 */
+	__le32 fqid;
+	__le16 qdbin;
+};
+
+struct dpni_cmd_set_queue {
+	/* cmd word 0 */
+	u8 qtype;
+	u8 tc;
+	u8 index;
+	u8 options;
+	__le32 pad0;
+	/* cmd word 1 */
+	__le32 dest_id;
+	__le16 pad1;
+	u8 dest_prio;
+	u8 flags;
+	/* cmd word 2 */
+	__le64 flc;
+	/* cmd word 3 */
+	__le64 user_context;
+};
+
+struct dpni_cmd_set_taildrop {
+	/* cmd word 0 */
+	u8 congestion_point;
+	u8 qtype;
+	u8 tc;
+	u8 index;
+	__le32 pad0;
+	/* cmd word 1 */
+	/* Only least significant bit is relevant */
+	u8 enable;
+	u8 pad1;
+	u8 units;
+	u8 pad2;
+	__le32 threshold;
+};
+
+struct dpni_cmd_get_taildrop {
+	u8 congestion_point;
+	u8 qtype;
+	u8 tc;
+	u8 index;
+};
+
+struct dpni_rsp_get_taildrop {
+	/* cmd word 0 */
+	__le64 pad0;
+	/* cmd word 1 */
+	/* only least significant bit is relevant */
+	u8 enable;
+	u8 pad1;
+	u8 units;
+	u8 pad2;
+	__le32 threshold;
+};
+
+struct dpni_rsp_get_api_version {
+	__le16 major;
+	__le16 minor;
+};
+
+#endif /* _FSL_DPNI_CMD_H */
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.c b/drivers/net/ethernet/freescale/dpaa2/dpni.c
new file mode 100644
index 0000000..d6ac267
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpni.c
@@ -0,0 +1,1600 @@
+// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2016 NXP
+ */
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/fsl/mc.h>
+#include "dpni.h"
+#include "dpni-cmd.h"
+
+/**
+ * dpni_prepare_key_cfg() - Prepare extraction parameters
+ * @cfg: Key Generation profile (rule) definition
+ * @key_cfg_buf: Zeroed 256-byte buffer, to be mapped for DMA afterwards
+ *
+ * This function has to be called before the following functions:
+ *	- dpni_set_rx_tc_dist()
+ *	- dpni_set_qos_table()
+ */
+int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, u8 *key_cfg_buf)
+{
+	int i, j;
+	struct dpni_ext_set_rx_tc_dist *dpni_ext;
+	struct dpni_dist_extract *extr;
+
+	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
+		return -EINVAL;
+
+	dpni_ext = (struct dpni_ext_set_rx_tc_dist *)key_cfg_buf;
+	dpni_ext->num_extracts = cfg->num_extracts;
+
+	for (i = 0; i < cfg->num_extracts; i++) {
+		extr = &dpni_ext->extracts[i];
+
+		switch (cfg->extracts[i].type) {
+		case DPKG_EXTRACT_FROM_HDR:
+			extr->prot = cfg->extracts[i].extract.from_hdr.prot;
+			dpni_set_field(extr->efh_type, EFH_TYPE,
+				       cfg->extracts[i].extract.from_hdr.type);
+			extr->size = cfg->extracts[i].extract.from_hdr.size;
+			extr->offset = cfg->extracts[i].extract.from_hdr.offset;
+			extr->field = cpu_to_le32(
+				cfg->extracts[i].extract.from_hdr.field);
+			extr->hdr_index =
+				cfg->extracts[i].extract.from_hdr.hdr_index;
+			break;
+		case DPKG_EXTRACT_FROM_DATA:
+			extr->size = cfg->extracts[i].extract.from_data.size;
+			extr->offset =
+				cfg->extracts[i].extract.from_data.offset;
+			break;
+		case DPKG_EXTRACT_FROM_PARSE:
+			extr->size = cfg->extracts[i].extract.from_parse.size;
+			extr->offset =
+				cfg->extracts[i].extract.from_parse.offset;
+			break;
+		default:
+			return -EINVAL;
+		}
+
+		extr->num_of_byte_masks = cfg->extracts[i].num_of_byte_masks;
+		dpni_set_field(extr->extract_type, EXTRACT_TYPE,
+			       cfg->extracts[i].type);
+
+		for (j = 0; j < DPKG_NUM_OF_MASKS; j++) {
+			extr->masks[j].mask = cfg->extracts[i].masks[j].mask;
+			extr->masks[j].offset =
+				cfg->extracts[i].masks[j].offset;
+		}
+	}
+
+	return 0;
+}
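+
+/* Typical usage (an illustrative sketch; per the kernel-doc above, the
+ * buffer must be 256 zeroed bytes and is DMA-mapped afterwards):
+ *
+ *	u8 *key_buf = kzalloc(256, GFP_KERNEL);
+ *	dma_addr_t iova;
+ *
+ *	err = dpni_prepare_key_cfg(cfg, key_buf);
+ *	iova = dma_map_single(dev, key_buf, 256, DMA_TO_DEVICE);
+ *	... pass 'iova' as key_cfg_iova to dpni_set_rx_tc_dist() ...
+ */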
+
+/**
+ * dpni_open() - Open a control session for the specified object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @dpni_id:	DPNI unique ID
+ * @token:	Returned token; use in subsequent API calls
+ *
+ * This function can be used to open a control session for an
+ * already created object; an object may have been declared in
+ * the DPL or by calling the dpni_create() function.
+ * This function returns a unique authentication token,
+ * associated with the specific object ID and the specific MC
+ * portal; this token must be used in all subsequent commands for
+ * this specific object.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_open(struct fsl_mc_io *mc_io,
+	      u32 cmd_flags,
+	      int dpni_id,
+	      u16 *token)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_open *cmd_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_OPEN,
+					  cmd_flags,
+					  0);
+	cmd_params = (struct dpni_cmd_open *)cmd.params;
+	cmd_params->dpni_id = cpu_to_le32(dpni_id);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	*token = mc_cmd_hdr_read_token(&cmd);
+
+	return 0;
+}
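+
+/* Illustrative control session lifecycle:
+ *
+ *	u16 token;
+ *	int err;
+ *
+ *	err = dpni_open(mc_io, 0, dpni_id, &token);
+ *	if (err)
+ *		return err;
+ *	... issue DPNI commands using 'token' ...
+ *	dpni_close(mc_io, 0, token);
+ */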
+
+/**
+ * dpni_close() - Close the control session of the object
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ *
+ * After this function is called, no further operations are
+ * allowed on the object without opening a new control session.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_close(struct fsl_mc_io *mc_io,
+	       u32 cmd_flags,
+	       u16 token)
+{
+	struct fsl_mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLOSE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_set_pools() - Set buffer pools configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @cfg:	Buffer pools configuration
+ *
+ * This function is mandatory for DPNI operation.
+ * Warning: allowed only when the DPNI is disabled
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_pools(struct fsl_mc_io *mc_io,
+		   u32 cmd_flags,
+		   u16 token,
+		   const struct dpni_pools_cfg *cfg)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_pools *cmd_params;
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_POOLS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_pools *)cmd.params;
+	cmd_params->num_dpbp = cfg->num_dpbp;
+	for (i = 0; i < DPNI_MAX_DPBP; i++) {
+		cmd_params->dpbp_id[i] = cpu_to_le32(cfg->pools[i].dpbp_id);
+		cmd_params->buffer_size[i] =
+			cpu_to_le16(cfg->pools[i].buffer_size);
+		cmd_params->backup_pool_mask |=
+			DPNI_BACKUP_POOL(cfg->pools[i].backup_pool, i);
+	}
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
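+
+/* Example pools configuration (a sketch; 'dpbp_id' and 'buf_size' are
+ * caller-provided values):
+ *
+ *	struct dpni_pools_cfg pools_params = { 0 };
+ *
+ *	pools_params.num_dpbp = 1;
+ *	pools_params.pools[0].dpbp_id = dpbp_id;
+ *	pools_params.pools[0].backup_pool = 0;
+ *	pools_params.pools[0].buffer_size = buf_size;
+ *	err = dpni_set_pools(mc_io, 0, token, &pools_params);
+ */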
+
+/**
+ * dpni_enable() - Enable the DPNI, allow sending and receiving frames.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_enable(struct fsl_mc_io *mc_io,
+		u32 cmd_flags,
+		u16 token)
+{
+	struct fsl_mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_ENABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_disable() - Disable the DPNI, stop sending and receiving frames.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_disable(struct fsl_mc_io *mc_io,
+		 u32 cmd_flags,
+		 u16 token)
+{
+	struct fsl_mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DISABLE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_is_enabled() - Check if the DPNI is enabled.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @en:		Returns '1' if object is enabled; '0' otherwise
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_is_enabled(struct fsl_mc_io *mc_io,
+		    u32 cmd_flags,
+		    u16 token,
+		    int *en)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_is_enabled *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_IS_ENABLED,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_is_enabled *)cmd.params;
+	*en = dpni_get_field(rsp_params->enabled, ENABLE);
+
+	return 0;
+}
+
+/**
+ * dpni_reset() - Reset the DPNI, returns the object to initial state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_reset(struct fsl_mc_io *mc_io,
+	       u32 cmd_flags,
+	       u16 token)
+{
+	struct fsl_mc_command cmd = { 0 };
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_RESET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_set_irq_enable() - Set overall interrupt state.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @irq_index:	The interrupt index to configure
+ * @en:		Interrupt state: - enable = 1, disable = 0
+ *
+ * Allows GPP software to control when interrupts are generated.
+ * Each interrupt can have up to 32 causes. The enable/disable controls the
+ * overall interrupt state: if the interrupt is disabled, no cause will
+ * raise an interrupt.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
+			u32 cmd_flags,
+			u16 token,
+			u8 irq_index,
+			u8 en)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_irq_enable *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_irq_enable *)cmd.params;
+	dpni_set_field(cmd_params->enable, ENABLE, en);
+	cmd_params->irq_index = irq_index;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_irq_enable() - Get overall interrupt state
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @irq_index:	The interrupt index to configure
+ * @en:		Returned interrupt state - enable = 1, disable = 0
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
+			u32 cmd_flags,
+			u16 token,
+			u8 irq_index,
+			u8 *en)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_irq_enable *cmd_params;
+	struct dpni_rsp_get_irq_enable *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_ENABLE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_irq_enable *)cmd.params;
+	cmd_params->irq_index = irq_index;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_irq_enable *)cmd.params;
+	*en = dpni_get_field(rsp_params->enabled, ENABLE);
+
+	return 0;
+}
+
+/**
+ * dpni_set_irq_mask() - Set interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @irq_index:	The interrupt index to configure
+ * @mask:	event mask to trigger interrupt;
+ *			each bit:
+ *				0 = ignore event
+ *				1 = consider event for asserting IRQ
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      u8 irq_index,
+		      u32 mask)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_irq_mask *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_irq_mask *)cmd.params;
+	cmd_params->mask = cpu_to_le32(mask);
+	cmd_params->irq_index = irq_index;
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_irq_mask() - Get interrupt mask.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @irq_index:	The interrupt index to configure
+ * @mask:	Returned event mask to trigger interrupt
+ *
+ * Every interrupt can have up to 32 causes and the interrupt model supports
+ * masking/unmasking each cause independently
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      u8 irq_index,
+		      u32 *mask)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_irq_mask *cmd_params;
+	struct dpni_rsp_get_irq_mask *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_MASK,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_irq_mask *)cmd.params;
+	cmd_params->irq_index = irq_index;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_irq_mask *)cmd.params;
+	*mask = le32_to_cpu(rsp_params->mask);
+
+	return 0;
+}
+
+/**
+ * dpni_get_irq_status() - Get the current status of any pending interrupts.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @irq_index:	The interrupt index to configure
+ * @status:	Returned interrupts status - one bit per cause:
+ *			0 = no interrupt pending
+ *			1 = interrupt pending
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_irq_status(struct fsl_mc_io *mc_io,
+			u32 cmd_flags,
+			u16 token,
+			u8 irq_index,
+			u32 *status)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_irq_status *cmd_params;
+	struct dpni_rsp_get_irq_status *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_irq_status *)cmd.params;
+	cmd_params->status = cpu_to_le32(*status);
+	cmd_params->irq_index = irq_index;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_irq_status *)cmd.params;
+	*status = le32_to_cpu(rsp_params->status);
+
+	return 0;
+}
+
+/**
+ * dpni_clear_irq_status() - Clear a pending interrupt's status
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @irq_index:	The interrupt index to configure
+ * @status:	bits to clear (W1C) - one bit per cause:
+ *			0 = don't change
+ *			1 = clear status bit
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
+			  u32 cmd_flags,
+			  u16 token,
+			  u8 irq_index,
+			  u32 status)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_clear_irq_status *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLEAR_IRQ_STATUS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_clear_irq_status *)cmd.params;
+	cmd_params->irq_index = irq_index;
+	cmd_params->status = cpu_to_le32(status);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
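+
+/* Typical interrupt servicing flow (illustrative sketch):
+ *
+ *	u32 status = 0;
+ *
+ *	err = dpni_get_irq_status(mc_io, 0, token, irq_index, &status);
+ *	... handle the causes flagged in 'status' ...
+ *	err = dpni_clear_irq_status(mc_io, 0, token, irq_index, status);
+ */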
+
+/**
+ * dpni_get_attributes() - Retrieve DPNI attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @attr:	Object's attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_attributes(struct fsl_mc_io *mc_io,
+			u32 cmd_flags,
+			u16 token,
+			struct dpni_attr *attr)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_attr *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_ATTR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_attr *)cmd.params;
+	attr->options = le32_to_cpu(rsp_params->options);
+	attr->num_queues = rsp_params->num_queues;
+	attr->num_tcs = rsp_params->num_tcs;
+	attr->mac_filter_entries = rsp_params->mac_filter_entries;
+	attr->vlan_filter_entries = rsp_params->vlan_filter_entries;
+	attr->qos_entries = rsp_params->qos_entries;
+	attr->fs_entries = le16_to_cpu(rsp_params->fs_entries);
+	attr->qos_key_size = rsp_params->qos_key_size;
+	attr->fs_key_size = rsp_params->fs_key_size;
+	attr->wriop_version = le16_to_cpu(rsp_params->wriop_version);
+
+	return 0;
+}
+
+/**
+ * dpni_set_errors_behavior() - Set errors behavior
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @cfg:	Errors configuration
+ *
+ * This function may be called numerous times with different
+ * error masks.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
+			     u32 cmd_flags,
+			     u16 token,
+			     struct dpni_error_cfg *cfg)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_errors_behavior *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_ERRORS_BEHAVIOR,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_errors_behavior *)cmd.params;
+	cmd_params->errors = cpu_to_le32(cfg->errors);
+	dpni_set_field(cmd_params->flags, ERROR_ACTION, cfg->error_action);
+	dpni_set_field(cmd_params->flags, FRAME_ANN, cfg->set_frame_annotation);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
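+
+/* Example configuration (a sketch; the DPNI_ERROR_* and
+ * DPNI_ERROR_ACTION_* values are defined in dpni.h):
+ *
+ *	struct dpni_error_cfg err_cfg = { 0 };
+ *
+ *	err_cfg.errors = DPNI_ERROR_EOFHE | DPNI_ERROR_PHE;
+ *	err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
+ *	err_cfg.set_frame_annotation = 1;
+ *	err = dpni_set_errors_behavior(mc_io, 0, token, &err_cfg);
+ */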
+
+/**
+ * dpni_get_buffer_layout() - Retrieve buffer layout attributes.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue to retrieve configuration for
+ * @layout:	Returns buffer layout attributes
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_buffer_layout(struct fsl_mc_io *mc_io,
+			   u32 cmd_flags,
+			   u16 token,
+			   enum dpni_queue_type qtype,
+			   struct dpni_buffer_layout *layout)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_buffer_layout *cmd_params;
+	struct dpni_rsp_get_buffer_layout *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_BUFFER_LAYOUT,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_buffer_layout *)cmd.params;
+	cmd_params->qtype = qtype;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_buffer_layout *)cmd.params;
+	layout->pass_timestamp = dpni_get_field(rsp_params->flags, PASS_TS);
+	layout->pass_parser_result = dpni_get_field(rsp_params->flags, PASS_PR);
+	layout->pass_frame_status = dpni_get_field(rsp_params->flags, PASS_FS);
+	layout->private_data_size = le16_to_cpu(rsp_params->private_data_size);
+	layout->data_align = le16_to_cpu(rsp_params->data_align);
+	layout->data_head_room = le16_to_cpu(rsp_params->head_room);
+	layout->data_tail_room = le16_to_cpu(rsp_params->tail_room);
+
+	return 0;
+}
+
+/**
+ * dpni_set_buffer_layout() - Set buffer layout configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue this configuration applies to
+ * @layout:	Buffer layout configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ *
+ * @warning	Allowed only when DPNI is disabled
+ */
+int dpni_set_buffer_layout(struct fsl_mc_io *mc_io,
+			   u32 cmd_flags,
+			   u16 token,
+			   enum dpni_queue_type qtype,
+			   const struct dpni_buffer_layout *layout)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_buffer_layout *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_BUFFER_LAYOUT,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_buffer_layout *)cmd.params;
+	cmd_params->qtype = qtype;
+	cmd_params->options = cpu_to_le16(layout->options);
+	dpni_set_field(cmd_params->flags, PASS_TS, layout->pass_timestamp);
+	dpni_set_field(cmd_params->flags, PASS_PR, layout->pass_parser_result);
+	dpni_set_field(cmd_params->flags, PASS_FS, layout->pass_frame_status);
+	cmd_params->private_data_size = cpu_to_le16(layout->private_data_size);
+	cmd_params->data_align = cpu_to_le16(layout->data_align);
+	cmd_params->head_room = cpu_to_le16(layout->data_head_room);
+	cmd_params->tail_room = cpu_to_le16(layout->data_tail_room);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
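+
+/* Example layout request (a sketch; the DPNI_BUF_LAYOUT_OPT_* flags and
+ * enum dpni_queue_type are defined in dpni.h):
+ *
+ *	struct dpni_buffer_layout layout = { 0 };
+ *
+ *	layout.options = DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
+ *	layout.pass_timestamp = 1;
+ *	err = dpni_set_buffer_layout(mc_io, 0, token, DPNI_QUEUE_TX, &layout);
+ */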
+
+/**
+ * dpni_set_offload() - Set DPNI offload configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @type:	Type of DPNI offload
+ * @config:	Offload configuration.
+ *		For checksum offloads, non-zero value enables the offload
+ *
+ * Return:     '0' on Success; Error code otherwise.
+ *
+ * @warning    Allowed only when DPNI is disabled
+ */
+int dpni_set_offload(struct fsl_mc_io *mc_io,
+		     u32 cmd_flags,
+		     u16 token,
+		     enum dpni_offload type,
+		     u32 config)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_offload *cmd_params;
+
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_OFFLOAD,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_offload *)cmd.params;
+	cmd_params->dpni_offload = type;
+	cmd_params->config = cpu_to_le32(config);
+
+	return mc_send_command(mc_io, &cmd);
+}
+
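+/**
+ * dpni_get_offload() - Get DPNI offload configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @type:	Type of DPNI offload
+ * @config:	Returned offload configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */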
+int dpni_get_offload(struct fsl_mc_io *mc_io,
+		     u32 cmd_flags,
+		     u16 token,
+		     enum dpni_offload type,
+		     u32 *config)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_offload *cmd_params;
+	struct dpni_rsp_get_offload *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_OFFLOAD,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_offload *)cmd.params;
+	cmd_params->dpni_offload = type;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_offload *)cmd.params;
+	*config = le32_to_cpu(rsp_params->config);
+
+	return 0;
+}
+
+/**
+ * dpni_get_qdid() - Get the Queuing Destination ID (QDID) that should be used
+ *			for enqueue operations
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue to receive QDID for
+ * @qdid:	Returned virtual QDID value that should be used as an argument
+ *			in all enqueue operations
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_qdid(struct fsl_mc_io *mc_io,
+		  u32 cmd_flags,
+		  u16 token,
+		  enum dpni_queue_type qtype,
+		  u16 *qdid)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_qdid *cmd_params;
+	struct dpni_rsp_get_qdid *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_qdid *)cmd.params;
+	cmd_params->qtype = qtype;
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_qdid *)cmd.params;
+	*qdid = le16_to_cpu(rsp_params->qdid);
+
+	return 0;
+}
+
+/**
+ * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @data_offset: Tx data offset (from start of buffer)
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
+			    u32 cmd_flags,
+			    u16 token,
+			    u16 *data_offset)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_tx_data_offset *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_DATA_OFFSET,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_tx_data_offset *)cmd.params;
+	*data_offset = le16_to_cpu(rsp_params->data_offset);
+
+	return 0;
+}
+
+/**
+ * dpni_set_link_cfg() - set the link configuration.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @cfg:	Link configuration
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_link_cfg(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      const struct dpni_link_cfg *cfg)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_link_cfg *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_LINK_CFG,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_link_cfg *)cmd.params;
+	cmd_params->rate = cpu_to_le32(cfg->rate);
+	cmd_params->options = cpu_to_le64(cfg->options);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_link_state() - Return the link state (either up or down)
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @state:	Returned link state
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_link_state(struct fsl_mc_io *mc_io,
+			u32 cmd_flags,
+			u16 token,
+			struct dpni_link_state *state)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_link_state *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_LINK_STATE,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_link_state *)cmd.params;
+	state->up = dpni_get_field(rsp_params->flags, LINK_STATE);
+	state->rate = le32_to_cpu(rsp_params->rate);
+	state->options = le64_to_cpu(rsp_params->options);
+
+	return 0;
+}
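+
+/* Example link state poll (illustrative sketch):
+ *
+ *	struct dpni_link_state state;
+ *
+ *	err = dpni_get_link_state(mc_io, 0, token, &state);
+ *	if (!err && state.up)
+ *		netif_carrier_on(net_dev);
+ */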
+
+/**
+ * dpni_set_max_frame_length() - Set the maximum received frame length.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @max_frame_length:	Maximum received frame length (in
+ *				bytes); frame is discarded if its
+ *				length exceeds this value
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
+			      u32 cmd_flags,
+			      u16 token,
+			      u16 max_frame_length)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_max_frame_length *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MAX_FRAME_LENGTH,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_max_frame_length *)cmd.params;
+	cmd_params->max_frame_length = cpu_to_le16(max_frame_length);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_max_frame_length() - Get the maximum received frame length.
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @max_frame_length:	Maximum received frame length (in
+ *				bytes); frame is discarded if its
+ *				length exceeds this value
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
+			      u32 cmd_flags,
+			      u16 token,
+			      u16 *max_frame_length)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_max_frame_length *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MAX_FRAME_LENGTH,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_max_frame_length *)cmd.params;
+	*max_frame_length = le16_to_cpu(rsp_params->max_frame_length);
+
+	return 0;
+}
+
+/**
+ * dpni_set_multicast_promisc() - Enable/disable multicast promiscuous mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @en:		Set to '1' to enable; '0' to disable
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
+			       u32 cmd_flags,
+			       u16 token,
+			       int en)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_multicast_promisc *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MCAST_PROMISC,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_multicast_promisc *)cmd.params;
+	dpni_set_field(cmd_params->enable, ENABLE, en);
+
+	/* send command to mc*/
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_multicast_promisc() - Get multicast promiscuous mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @en:		Returns '1' if enabled; '0' otherwise
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
+			       u32 cmd_flags,
+			       u16 token,
+			       int *en)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_multicast_promisc *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MCAST_PROMISC,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc*/
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_multicast_promisc *)cmd.params;
+	*en = dpni_get_field(rsp_params->enabled, ENABLE);
+
+	return 0;
+}
+
+/**
+ * dpni_set_unicast_promisc() - Enable/disable unicast promiscuous mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @en:		Set to '1' to enable; '0' to disable
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
+			     u32 cmd_flags,
+			     u16 token,
+			     int en)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_unicast_promisc *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_UNICAST_PROMISC,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_unicast_promisc *)cmd.params;
+	dpni_set_field(cmd_params->enable, ENABLE, en);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_unicast_promisc() - Get unicast promiscuous mode
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @en:		Returns '1' if enabled; '0' otherwise
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
+			     u32 cmd_flags,
+			     u16 token,
+			     int *en)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_unicast_promisc *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_UNICAST_PROMISC,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_unicast_promisc *)cmd.params;
+	*en = dpni_get_field(rsp_params->enabled, ENABLE);
+
+	return 0;
+}
+
+/**
+ * dpni_set_primary_mac_addr() - Set the primary MAC address
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @mac_addr:	MAC address to set as primary address
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
+			      u32 cmd_flags,
+			      u16 token,
+			      const u8 mac_addr[6])
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_primary_mac_addr *cmd_params;
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_PRIM_MAC,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_primary_mac_addr *)cmd.params;
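+	/* the MC expects the MAC address bytes in reverse order */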
+	for (i = 0; i < 6; i++)
+		cmd_params->mac_addr[i] = mac_addr[5 - i];
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_primary_mac_addr() - Get the primary MAC address
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @mac_addr:	Returned MAC address
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
+			      u32 cmd_flags,
+			      u16 token,
+			      u8 mac_addr[6])
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_primary_mac_addr *rsp_params;
+	int i, err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PRIM_MAC,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_primary_mac_addr *)cmd.params;
+	for (i = 0; i < 6; i++)
+		mac_addr[5 - i] = rsp_params->mac_addr[i];
+
+	return 0;
+}
+
+/**
+ * dpni_get_port_mac_addr() - Retrieve the MAC address associated with the
+ *			physical port the DPNI is attached to
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @mac_addr:	MAC address of the physical port, if any, otherwise 0
+ *
+ * The primary MAC address of the DPNI is not modified by this operation.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_port_mac_addr(struct fsl_mc_io *mc_io,
+			   u32 cmd_flags,
+			   u16 token,
+			   u8 mac_addr[6])
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_rsp_get_port_mac_addr *rsp_params;
+	int i, err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PORT_MAC_ADDR,
+					  cmd_flags,
+					  token);
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_port_mac_addr *)cmd.params;
+	for (i = 0; i < 6; i++)
+		mac_addr[5 - i] = rsp_params->mac_addr[i];
+
+	return 0;
+}
+
+/**
+ * dpni_add_mac_addr() - Add MAC address filter
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @mac_addr:	MAC address to add
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      const u8 mac_addr[6])
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_add_mac_addr *cmd_params;
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_MAC_ADDR,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_add_mac_addr *)cmd.params;
+	for (i = 0; i < 6; i++)
+		cmd_params->mac_addr[i] = mac_addr[5 - i];
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_remove_mac_addr() - Remove MAC address filter
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @mac_addr:	MAC address to remove
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
+			 u32 cmd_flags,
+			 u16 token,
+			 const u8 mac_addr[6])
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_remove_mac_addr *cmd_params;
+	int i;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_MAC_ADDR,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_remove_mac_addr *)cmd.params;
+	for (i = 0; i < 6; i++)
+		cmd_params->mac_addr[i] = mac_addr[5 - i];
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_clear_mac_filters() - Clear all unicast and/or multicast MAC filters
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @unicast:	Set to '1' to clear unicast addresses
+ * @multicast:	Set to '1' to clear multicast addresses
+ *
+ * The primary MAC address is not cleared by this operation.
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
+			   u32 cmd_flags,
+			   u16 token,
+			   int unicast,
+			   int multicast)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_clear_mac_filters *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_MAC_FILTERS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_clear_mac_filters *)cmd.params;
+	dpni_set_field(cmd_params->flags, UNICAST_FILTERS, unicast);
+	dpni_set_field(cmd_params->flags, MULTICAST_FILTERS, multicast);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @tc_id:	Traffic class selection (0-7)
+ * @cfg:	Traffic class distribution configuration
+ *
+ * Warning: if 'dist_mode != DPNI_DIST_MODE_NONE', call dpni_prepare_key_cfg()
+ *	     first to prepare the key_cfg_iova parameter
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
+			u32 cmd_flags,
+			u16 token,
+			u8 tc_id,
+			const struct dpni_rx_tc_dist_cfg *cfg)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_rx_tc_dist *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_DIST,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_rx_tc_dist *)cmd.params;
+	cmd_params->dist_size = cpu_to_le16(cfg->dist_size);
+	cmd_params->tc_id = tc_id;
+	dpni_set_field(cmd_params->flags, DIST_MODE, cfg->dist_mode);
+	dpni_set_field(cmd_params->flags, MISS_ACTION, cfg->fs_cfg.miss_action);
+	cmd_params->default_flow_id = cpu_to_le16(cfg->fs_cfg.default_flow_id);
+	cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_set_queue() - Set queue parameters
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue - all queue types are supported, although
+ *		the command is ignored for Tx
+ * @tc:		Traffic class, in range 0 to NUM_TCS - 1
+ * @index:	Selects the specific queue out of the set allocated for the
+ *		same TC. Value must be in range 0 to NUM_QUEUES - 1
+ * @options:	A combination of DPNI_QUEUE_OPT_ values that control what
+ *		configuration options are set on the queue
+ * @queue:	Queue structure
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_queue(struct fsl_mc_io *mc_io,
+		   u32 cmd_flags,
+		   u16 token,
+		   enum dpni_queue_type qtype,
+		   u8 tc,
+		   u8 index,
+		   u8 options,
+		   const struct dpni_queue *queue)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_queue *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QUEUE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_queue *)cmd.params;
+	cmd_params->qtype = qtype;
+	cmd_params->tc = tc;
+	cmd_params->index = index;
+	cmd_params->options = options;
+	cmd_params->dest_id = cpu_to_le32(queue->destination.id);
+	cmd_params->dest_prio = queue->destination.priority;
+	dpni_set_field(cmd_params->flags, DEST_TYPE, queue->destination.type);
+	dpni_set_field(cmd_params->flags, STASH_CTRL, queue->flc.stash_control);
+	dpni_set_field(cmd_params->flags, HOLD_ACTIVE,
+		       queue->destination.hold_active);
+	cmd_params->flc = cpu_to_le64(queue->flc.value);
+	cmd_params->user_context = cpu_to_le64(queue->user_context);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_queue() - Get queue parameters
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @qtype:	Type of queue - all queue types are supported
+ * @tc:		Traffic class, in range 0 to NUM_TCS - 1
+ * @index:	Selects the specific queue out of the set allocated for the
+ *		same TC. Value must be in range 0 to NUM_QUEUES - 1
+ * @queue:	Queue configuration structure
+ * @qid:	Queue identification
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_queue(struct fsl_mc_io *mc_io,
+		   u32 cmd_flags,
+		   u16 token,
+		   enum dpni_queue_type qtype,
+		   u8 tc,
+		   u8 index,
+		   struct dpni_queue *queue,
+		   struct dpni_queue_id *qid)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_queue *cmd_params;
+	struct dpni_rsp_get_queue *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QUEUE,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_queue *)cmd.params;
+	cmd_params->qtype = qtype;
+	cmd_params->tc = tc;
+	cmd_params->index = index;
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_queue *)cmd.params;
+	queue->destination.id = le32_to_cpu(rsp_params->dest_id);
+	queue->destination.priority = rsp_params->dest_prio;
+	queue->destination.type = dpni_get_field(rsp_params->flags,
+						 DEST_TYPE);
+	queue->flc.stash_control = dpni_get_field(rsp_params->flags,
+						  STASH_CTRL);
+	queue->destination.hold_active = dpni_get_field(rsp_params->flags,
+							HOLD_ACTIVE);
+	queue->flc.value = le64_to_cpu(rsp_params->flc);
+	queue->user_context = le64_to_cpu(rsp_params->user_context);
+	qid->fqid = le32_to_cpu(rsp_params->fqid);
+	qid->qdbin = le16_to_cpu(rsp_params->qdbin);
+
+	return 0;
+}
+
+/**
+ * dpni_get_statistics() - Get DPNI statistics
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @page:	Selects the statistics page to retrieve, see
+ *		DPNI_GET_STATISTICS output. Pages are numbered 0 to 2.
+ * @stat:	Structure containing the statistics
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_statistics(struct fsl_mc_io *mc_io,
+			u32 cmd_flags,
+			u16 token,
+			u8 page,
+			union dpni_statistics *stat)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_statistics *cmd_params;
+	struct dpni_rsp_get_statistics *rsp_params;
+	int i, err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_STATISTICS,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_statistics *)cmd.params;
+	cmd_params->page_number = page;
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_statistics *)cmd.params;
+	for (i = 0; i < DPNI_STATISTICS_CNT; i++)
+		stat->raw.counter[i] = le64_to_cpu(rsp_params->counter[i]);
+
+	return 0;
+}
+
+/**
+ * dpni_set_taildrop() - Set taildrop per queue or TC
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @cg_point:	Congestion point
+ * @qtype:	Queue type on which the taildrop is configured.
+ *		Only Rx queues are supported for now
+ * @tc:		Traffic class to apply this taildrop to
+ * @index:	Index of the queue if the DPNI supports multiple queues for
+ *		traffic distribution. Ignored if CONGESTION_POINT is not 0.
+ * @taildrop:	Taildrop structure
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_set_taildrop(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      enum dpni_congestion_point cg_point,
+		      enum dpni_queue_type qtype,
+		      u8 tc,
+		      u8 index,
+		      struct dpni_taildrop *taildrop)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_set_taildrop *cmd_params;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TAILDROP,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_set_taildrop *)cmd.params;
+	cmd_params->congestion_point = cg_point;
+	cmd_params->qtype = qtype;
+	cmd_params->tc = tc;
+	cmd_params->index = index;
+	dpni_set_field(cmd_params->enable, ENABLE, taildrop->enable);
+	cmd_params->units = taildrop->units;
+	cmd_params->threshold = cpu_to_le32(taildrop->threshold);
+
+	/* send command to mc */
+	return mc_send_command(mc_io, &cmd);
+}
+
+/**
+ * dpni_get_taildrop() - Get taildrop information
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @token:	Token of DPNI object
+ * @cg_point:	Congestion point
+ * @qtype:	Queue type on which the taildrop is configured.
+ *		Only Rx queues are supported for now
+ * @tc:		Traffic class to apply this taildrop to
+ * @index:	Index of the queue if the DPNI supports multiple queues for
+ *		traffic distribution. Ignored if CONGESTION_POINT is not 0.
+ * @taildrop:	Taildrop structure
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_taildrop(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      enum dpni_congestion_point cg_point,
+		      enum dpni_queue_type qtype,
+		      u8 tc,
+		      u8 index,
+		      struct dpni_taildrop *taildrop)
+{
+	struct fsl_mc_command cmd = { 0 };
+	struct dpni_cmd_get_taildrop *cmd_params;
+	struct dpni_rsp_get_taildrop *rsp_params;
+	int err;
+
+	/* prepare command */
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TAILDROP,
+					  cmd_flags,
+					  token);
+	cmd_params = (struct dpni_cmd_get_taildrop *)cmd.params;
+	cmd_params->congestion_point = cg_point;
+	cmd_params->qtype = qtype;
+	cmd_params->tc = tc;
+	cmd_params->index = index;
+
+	/* send command to mc */
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	/* retrieve response parameters */
+	rsp_params = (struct dpni_rsp_get_taildrop *)cmd.params;
+	taildrop->enable = dpni_get_field(rsp_params->enable, ENABLE);
+	taildrop->units = rsp_params->units;
+	taildrop->threshold = le32_to_cpu(rsp_params->threshold);
+
+	return 0;
+}
+
+/**
+ * dpni_get_api_version() - Get Data Path Network Interface API version
+ * @mc_io:	Pointer to MC portal's I/O object
+ * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
+ * @major_ver:	Major version of data path network interface API
+ * @minor_ver:	Minor version of data path network interface API
+ *
+ * Return:	'0' on Success; Error code otherwise.
+ */
+int dpni_get_api_version(struct fsl_mc_io *mc_io,
+			 u32 cmd_flags,
+			 u16 *major_ver,
+			 u16 *minor_ver)
+{
+	struct dpni_rsp_get_api_version *rsp_params;
+	struct fsl_mc_command cmd = { 0 };
+	int err;
+
+	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_API_VERSION,
+					  cmd_flags, 0);
+
+	err = mc_send_command(mc_io, &cmd);
+	if (err)
+		return err;
+
+	rsp_params = (struct dpni_rsp_get_api_version *)cmd.params;
+	*major_ver = le16_to_cpu(rsp_params->major);
+	*minor_ver = le16_to_cpu(rsp_params->minor);
+
+	return 0;
+}
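+
+/*
+ * Example (illustrative sketch, not part of the driver): a caller would
+ * typically query and log the MC-reported API version before issuing
+ * further commands. 'mc_io' is assumed to be an already-initialized MC
+ * portal I/O object:
+ *
+ *	u16 major, minor;
+ *	int err;
+ *
+ *	err = dpni_get_api_version(mc_io, 0, &major, &minor);
+ *	if (!err)
+ *		pr_debug("MC reports DPNI API %u.%u\n", major, minor);
+ */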
diff --git a/drivers/net/ethernet/freescale/dpaa2/dpni.h b/drivers/net/ethernet/freescale/dpaa2/dpni.h
new file mode 100644
index 0000000..b378a00
--- /dev/null
+++ b/drivers/net/ethernet/freescale/dpaa2/dpni.h
@@ -0,0 +1,824 @@
+/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
+/* Copyright 2013-2016 Freescale Semiconductor Inc.
+ * Copyright 2016 NXP
+ */
+#ifndef __FSL_DPNI_H
+#define __FSL_DPNI_H
+
+#include "dpkg.h"
+
+struct fsl_mc_io;
+
+/**
+ * Data Path Network Interface API
+ * Contains initialization APIs and runtime control APIs for DPNI
+ */
+
+/** General DPNI macros */
+
+/**
+ * Maximum number of traffic classes
+ */
+#define DPNI_MAX_TC				8
+/**
+ * Maximum number of buffer pools per DPNI
+ */
+#define DPNI_MAX_DPBP				8
+
+/**
+ * All traffic classes considered; see dpni_set_queue()
+ */
+#define DPNI_ALL_TCS				(u8)(-1)
+/**
+ * All flows within traffic class considered; see dpni_set_queue()
+ */
+#define DPNI_ALL_TC_FLOWS			(u16)(-1)
+/**
+ * Generate new flow ID; see dpni_set_queue()
+ */
+#define DPNI_NEW_FLOW_ID			(u16)(-1)
+
+/**
+ * Tx traffic is always released to a buffer pool on transmit; there are no
+ * resources allocated to have the frames confirmed back to the source after
+ * transmission.
+ */
+#define DPNI_OPT_TX_FRM_RELEASE			0x000001
+/**
+ * Disables support for MAC address filtering for addresses other than primary
+ * MAC address. This affects both unicast and multicast. Promiscuous mode can
+ * still be enabled/disabled for both unicast and multicast. If promiscuous mode
+ * is disabled, only traffic matching the primary MAC address will be accepted.
+ */
+#define DPNI_OPT_NO_MAC_FILTER			0x000002
+/**
+ * Allocate policers for this DPNI. They can be used to rate-limit traffic on
+ * a per-traffic-class (TC) basis.
+ */
+#define DPNI_OPT_HAS_POLICING			0x000004
+/**
+ * Congestion can be managed in several ways, allowing the buffer pool to
+ * deplete on ingress, taildrop on each queue or use congestion groups for sets
+ * of queues. If set, it configures a single congestion group across all TCs.
+ * If reset, a congestion group is allocated for each TC. Only relevant if the
+ * DPNI has multiple traffic classes.
+ */
+#define DPNI_OPT_SHARED_CONGESTION		0x000008
+/**
+ * Enables TCAM for Flow Steering and QoS look-ups. If not specified, all
+ * look-ups are exact match. Note that TCAM is not available on LS1088 and its
+ * variants. Setting this bit on these SoCs will trigger an error.
+ */
+#define DPNI_OPT_HAS_KEY_MASKING		0x000010
+/**
+ * Disables the flow steering table.
+ */
+#define DPNI_OPT_NO_FS				0x000020
+
+int dpni_open(struct fsl_mc_io	*mc_io,
+	      u32		cmd_flags,
+	      int		dpni_id,
+	      u16		*token);
+
+int dpni_close(struct fsl_mc_io	*mc_io,
+	       u32		cmd_flags,
+	       u16		token);
+
+/**
+ * struct dpni_pools_cfg - Structure representing buffer pools configuration
+ * @num_dpbp: Number of DPBPs
+ * @pools: Array of buffer pool parameters; the number of valid entries
+ *	must match the 'num_dpbp' value
+ * @pools.dpbp_id: DPBP object ID
+ * @pools.buffer_size: Buffer size
+ * @pools.backup_pool: Backup pool
+ */
+struct dpni_pools_cfg {
+	u8		num_dpbp;
+	struct {
+		int	dpbp_id;
+		u16	buffer_size;
+		int	backup_pool;
+	} pools[DPNI_MAX_DPBP];
+};
+
+int dpni_set_pools(struct fsl_mc_io		*mc_io,
+		   u32				cmd_flags,
+		   u16				token,
+		   const struct dpni_pools_cfg	*cfg);
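+
+/*
+ * Example (illustrative sketch, not part of the API): attaching a single
+ * buffer pool to an already-opened DPNI. 'dpbp_id' and 'token' are assumed
+ * to be obtained beforehand (via the DPBP API and dpni_open() respectively);
+ * the buffer size is only an example value:
+ *
+ *	struct dpni_pools_cfg pools_params = { 0 };
+ *	int err;
+ *
+ *	pools_params.num_dpbp = 1;
+ *	pools_params.pools[0].dpbp_id = dpbp_id;
+ *	pools_params.pools[0].buffer_size = 2048;
+ *	pools_params.pools[0].backup_pool = 0;
+ *	err = dpni_set_pools(mc_io, 0, token, &pools_params);
+ */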
+
+int dpni_enable(struct fsl_mc_io	*mc_io,
+		u32			cmd_flags,
+		u16			token);
+
+int dpni_disable(struct fsl_mc_io	*mc_io,
+		 u32			cmd_flags,
+		 u16			token);
+
+int dpni_is_enabled(struct fsl_mc_io	*mc_io,
+		    u32			cmd_flags,
+		    u16			token,
+		    int			*en);
+
+int dpni_reset(struct fsl_mc_io	*mc_io,
+	       u32		cmd_flags,
+	       u16		token);
+
+/**
+ * DPNI IRQ Index and Events
+ */
+
+/**
+ * IRQ index
+ */
+#define DPNI_IRQ_INDEX				0
+/**
+ * IRQ event - indicates a change in link state
+ */
+#define DPNI_IRQ_EVENT_LINK_CHANGED		0x00000001
+
+int dpni_set_irq_enable(struct fsl_mc_io	*mc_io,
+			u32			cmd_flags,
+			u16			token,
+			u8			irq_index,
+			u8			en);
+
+int dpni_get_irq_enable(struct fsl_mc_io	*mc_io,
+			u32			cmd_flags,
+			u16			token,
+			u8			irq_index,
+			u8			*en);
+
+int dpni_set_irq_mask(struct fsl_mc_io	*mc_io,
+		      u32		cmd_flags,
+		      u16		token,
+		      u8		irq_index,
+		      u32		mask);
+
+int dpni_get_irq_mask(struct fsl_mc_io	*mc_io,
+		      u32		cmd_flags,
+		      u16		token,
+		      u8		irq_index,
+		      u32		*mask);
+
+int dpni_get_irq_status(struct fsl_mc_io	*mc_io,
+			u32			cmd_flags,
+			u16			token,
+			u8			irq_index,
+			u32			*status);
+
+int dpni_clear_irq_status(struct fsl_mc_io	*mc_io,
+			  u32			cmd_flags,
+			  u16			token,
+			  u8			irq_index,
+			  u32			status);
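+
+/*
+ * Example (illustrative sketch): enabling link state change notifications on
+ * an already-opened DPNI ('mc_io' and 'token' are assumed available):
+ *
+ *	int err;
+ *
+ *	err = dpni_set_irq_mask(mc_io, 0, token, DPNI_IRQ_INDEX,
+ *				DPNI_IRQ_EVENT_LINK_CHANGED);
+ *	if (!err)
+ *		err = dpni_set_irq_enable(mc_io, 0, token, DPNI_IRQ_INDEX, 1);
+ */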
+
+/**
+ * struct dpni_attr - Structure representing DPNI attributes
+ * @options: Any combination of the following options:
+ *		DPNI_OPT_TX_FRM_RELEASE
+ *		DPNI_OPT_NO_MAC_FILTER
+ *		DPNI_OPT_HAS_POLICING
+ *		DPNI_OPT_SHARED_CONGESTION
+ *		DPNI_OPT_HAS_KEY_MASKING
+ *		DPNI_OPT_NO_FS
+ * @num_queues: Number of Tx and Rx queues used for traffic distribution.
+ * @num_tcs: Number of traffic classes (TCs), reserved for the DPNI.
+ * @mac_filter_entries: Number of entries in the MAC address filtering table.
+ * @vlan_filter_entries: Number of entries in the VLAN address filtering table.
+ * @qos_entries: Number of entries in the QoS classification table.
+ * @fs_entries: Number of entries in the flow steering table.
+ * @qos_key_size: Size, in bytes, of the QoS look-up key. Defining a key larger
+ *		than this when adding QoS entries will result in an error.
+ * @fs_key_size: Size, in bytes, of the flow steering look-up key. Defining a
+ *		key larger than this when composing the hash + FS key will
+ *		result in an error.
+ * @wriop_version: Version of the WRIOP HW block. The 3 version values are
+ *		stored in 6, 5 and 5 bits respectively.
+ */
+struct dpni_attr {
+	u32 options;
+	u8 num_queues;
+	u8 num_tcs;
+	u8 mac_filter_entries;
+	u8 vlan_filter_entries;
+	u8 qos_entries;
+	u16 fs_entries;
+	u8 qos_key_size;
+	u8 fs_key_size;
+	u16 wriop_version;
+};
+
+int dpni_get_attributes(struct fsl_mc_io	*mc_io,
+			u32			cmd_flags,
+			u16			token,
+			struct dpni_attr	*attr);
+
+/**
+ * DPNI errors
+ */
+
+/**
+ * Extract out of frame header error
+ */
+#define DPNI_ERROR_EOFHE	0x00020000
+/**
+ * Frame length error
+ */
+#define DPNI_ERROR_FLE		0x00002000
+/**
+ * Frame physical error
+ */
+#define DPNI_ERROR_FPE		0x00001000
+/**
+ * Parsing header error
+ */
+#define DPNI_ERROR_PHE		0x00000020
+/**
+ * Parser L3 checksum error
+ */
+#define DPNI_ERROR_L3CE		0x00000004
+/**
+ * Parser L4 checksum error
+ */
+#define DPNI_ERROR_L4CE		0x00000001
+
+/**
+ * enum dpni_error_action - Defines DPNI behavior for errors
+ * @DPNI_ERROR_ACTION_DISCARD: Discard the frame
+ * @DPNI_ERROR_ACTION_CONTINUE: Continue with the normal flow
+ * @DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE: Send the frame to the error queue
+ */
+enum dpni_error_action {
+	DPNI_ERROR_ACTION_DISCARD = 0,
+	DPNI_ERROR_ACTION_CONTINUE = 1,
+	DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE = 2
+};
+
+/**
+ * struct dpni_error_cfg - Structure representing DPNI errors treatment
+ * @errors: Errors mask; use any combination of the 'DPNI_ERROR_<X>' values
+ * @error_action: The desired action for the errors mask
+ * @set_frame_annotation: Set to '1' to mark the errors in frame annotation
+ *		status (FAS); relevant only for the non-discard action
+ */
+struct dpni_error_cfg {
+	u32			errors;
+	enum dpni_error_action	error_action;
+	int			set_frame_annotation;
+};
+
+int dpni_set_errors_behavior(struct fsl_mc_io		*mc_io,
+			     u32			cmd_flags,
+			     u16			token,
+			     struct dpni_error_cfg	*cfg);
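+
+/*
+ * Example (illustrative sketch): discarding all frames with L3/L4 checksum
+ * errors, on an already-opened DPNI:
+ *
+ *	struct dpni_error_cfg err_cfg = { 0 };
+ *	int err;
+ *
+ *	err_cfg.errors = DPNI_ERROR_L3CE | DPNI_ERROR_L4CE;
+ *	err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
+ *	err = dpni_set_errors_behavior(mc_io, 0, token, &err_cfg);
+ */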
+
+/**
+ * DPNI buffer layout modification options
+ */
+
+/**
+ * Select to modify the time-stamp setting
+ */
+#define DPNI_BUF_LAYOUT_OPT_TIMESTAMP		0x00000001
+/**
+ * Select to modify the parser-result setting; not applicable for Tx
+ */
+#define DPNI_BUF_LAYOUT_OPT_PARSER_RESULT	0x00000002
+/**
+ * Select to modify the frame-status setting
+ */
+#define DPNI_BUF_LAYOUT_OPT_FRAME_STATUS	0x00000004
+/**
+ * Select to modify the private-data-size setting
+ */
+#define DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE	0x00000008
+/**
+ * Select to modify the data-alignment setting
+ */
+#define DPNI_BUF_LAYOUT_OPT_DATA_ALIGN		0x00000010
+/**
+ * Select to modify the data-head-room setting
+ */
+#define DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM	0x00000020
+/**
+ * Select to modify the data-tail-room setting
+ */
+#define DPNI_BUF_LAYOUT_OPT_DATA_TAIL_ROOM	0x00000040
+
+/**
+ * struct dpni_buffer_layout - Structure representing DPNI buffer layout
+ * @options: Flags representing the suggested modifications to the buffer
+ *		layout; Use any combination of 'DPNI_BUF_LAYOUT_OPT_<X>' flags
+ * @pass_timestamp: Pass timestamp value
+ * @pass_parser_result: Pass parser results
+ * @pass_frame_status: Pass frame status
+ * @private_data_size: Size kept for private data (in bytes)
+ * @data_align: Data alignment
+ * @data_head_room: Data head room
+ * @data_tail_room: Data tail room
+ */
+struct dpni_buffer_layout {
+	u32	options;
+	int	pass_timestamp;
+	int	pass_parser_result;
+	int	pass_frame_status;
+	u16	private_data_size;
+	u16	data_align;
+	u16	data_head_room;
+	u16	data_tail_room;
+};
+
+/**
+ * enum dpni_queue_type - Identifies a type of queue targeted by the command
+ * @DPNI_QUEUE_RX: Rx queue
+ * @DPNI_QUEUE_TX: Tx queue
+ * @DPNI_QUEUE_TX_CONFIRM: Tx confirmation queue
+ * @DPNI_QUEUE_RX_ERR: Rx error queue
+ */
+enum dpni_queue_type {
+	DPNI_QUEUE_RX,
+	DPNI_QUEUE_TX,
+	DPNI_QUEUE_TX_CONFIRM,
+	DPNI_QUEUE_RX_ERR,
+};
+
+int dpni_get_buffer_layout(struct fsl_mc_io		*mc_io,
+			   u32				cmd_flags,
+			   u16				token,
+			   enum dpni_queue_type		qtype,
+			   struct dpni_buffer_layout	*layout);
+
+int dpni_set_buffer_layout(struct fsl_mc_io		   *mc_io,
+			   u32				   cmd_flags,
+			   u16				   token,
+			   enum dpni_queue_type		   qtype,
+			   const struct dpni_buffer_layout *layout);
+
+/**
+ * enum dpni_offload - Identifies a type of offload targeted by the command
+ * @DPNI_OFF_RX_L3_CSUM: Rx L3 checksum validation
+ * @DPNI_OFF_RX_L4_CSUM: Rx L4 checksum validation
+ * @DPNI_OFF_TX_L3_CSUM: Tx L3 checksum generation
+ * @DPNI_OFF_TX_L4_CSUM: Tx L4 checksum generation
+ */
+enum dpni_offload {
+	DPNI_OFF_RX_L3_CSUM,
+	DPNI_OFF_RX_L4_CSUM,
+	DPNI_OFF_TX_L3_CSUM,
+	DPNI_OFF_TX_L4_CSUM,
+};
+
+int dpni_set_offload(struct fsl_mc_io	*mc_io,
+		     u32		cmd_flags,
+		     u16		token,
+		     enum dpni_offload	type,
+		     u32		config);
+
+int dpni_get_offload(struct fsl_mc_io	*mc_io,
+		     u32		cmd_flags,
+		     u16		token,
+		     enum dpni_offload	type,
+		     u32		*config);
+
+int dpni_get_qdid(struct fsl_mc_io	*mc_io,
+		  u32			cmd_flags,
+		  u16			token,
+		  enum dpni_queue_type	qtype,
+		  u16			*qdid);
+
+int dpni_get_tx_data_offset(struct fsl_mc_io	*mc_io,
+			    u32			cmd_flags,
+			    u16			token,
+			    u16			*data_offset);
+
+#define DPNI_STATISTICS_CNT		7
+
+/**
+ * union dpni_statistics - Union describing the DPNI statistics
+ * @page_0: Page_0 statistics structure
+ * @page_0.ingress_all_frames: Ingress frame count
+ * @page_0.ingress_all_bytes: Ingress byte count
+ * @page_0.ingress_multicast_frames: Ingress multicast frame count
+ * @page_0.ingress_multicast_bytes: Ingress multicast byte count
+ * @page_0.ingress_broadcast_frames: Ingress broadcast frame count
+ * @page_0.ingress_broadcast_bytes: Ingress broadcast byte count
+ * @page_1: Page_1 statistics structure
+ * @page_1.egress_all_frames: Egress frame count
+ * @page_1.egress_all_bytes: Egress byte count
+ * @page_1.egress_multicast_frames: Egress multicast frame count
+ * @page_1.egress_multicast_bytes: Egress multicast byte count
+ * @page_1.egress_broadcast_frames: Egress broadcast frame count
+ * @page_1.egress_broadcast_bytes: Egress broadcast byte count
+ * @page_2: Page_2 statistics structure
+ * @page_2.ingress_filtered_frames: Ingress filtered frame count
+ * @page_2.ingress_discarded_frames: Ingress discarded frame count
+ * @page_2.ingress_nobuffer_discards: Ingress discarded frame count due to
+ *	lack of buffers
+ * @page_2.egress_discarded_frames: Egress discarded frame count
+ * @page_2.egress_confirmed_frames: Egress confirmed frame count
+ * @raw: raw statistics structure, used to index counters
+ */
+union dpni_statistics {
+	struct {
+		u64 ingress_all_frames;
+		u64 ingress_all_bytes;
+		u64 ingress_multicast_frames;
+		u64 ingress_multicast_bytes;
+		u64 ingress_broadcast_frames;
+		u64 ingress_broadcast_bytes;
+	} page_0;
+	struct {
+		u64 egress_all_frames;
+		u64 egress_all_bytes;
+		u64 egress_multicast_frames;
+		u64 egress_multicast_bytes;
+		u64 egress_broadcast_frames;
+		u64 egress_broadcast_bytes;
+	} page_1;
+	struct {
+		u64 ingress_filtered_frames;
+		u64 ingress_discarded_frames;
+		u64 ingress_nobuffer_discards;
+		u64 egress_discarded_frames;
+		u64 egress_confirmed_frames;
+	} page_2;
+	struct {
+		u64 counter[DPNI_STATISTICS_CNT];
+	} raw;
+};
+
+int dpni_get_statistics(struct fsl_mc_io	*mc_io,
+			u32			cmd_flags,
+			u16			token,
+			u8			page,
+			union dpni_statistics	*stat);
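+
+/*
+ * Example (illustrative sketch): reading the page 0 (ingress) counters on an
+ * already-opened DPNI:
+ *
+ *	union dpni_statistics stats;
+ *	int err;
+ *
+ *	err = dpni_get_statistics(mc_io, 0, token, 0, &stats);
+ *	if (!err)
+ *		pr_debug("rx: %llu frames, %llu bytes\n",
+ *			 stats.page_0.ingress_all_frames,
+ *			 stats.page_0.ingress_all_bytes);
+ */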
+
+/**
+ * Enable auto-negotiation
+ */
+#define DPNI_LINK_OPT_AUTONEG		0x0000000000000001ULL
+/**
+ * Enable half-duplex mode
+ */
+#define DPNI_LINK_OPT_HALF_DUPLEX	0x0000000000000002ULL
+/**
+ * Enable pause frames
+ */
+#define DPNI_LINK_OPT_PAUSE		0x0000000000000004ULL
+/**
+ * Enable a-symmetric pause frames
+ */
+#define DPNI_LINK_OPT_ASYM_PAUSE	0x0000000000000008ULL
+
+/**
+ * struct dpni_link_cfg - Structure representing DPNI link configuration
+ * @rate: Rate
+ * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
+ */
+struct dpni_link_cfg {
+	u32 rate;
+	u64 options;
+};
+
+int dpni_set_link_cfg(struct fsl_mc_io			*mc_io,
+		      u32				cmd_flags,
+		      u16				token,
+		      const struct dpni_link_cfg	*cfg);
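+
+/*
+ * Example (illustrative sketch): requesting autonegotiation and pause frame
+ * support on the link; the rate value is only an example:
+ *
+ *	struct dpni_link_cfg link_cfg = { 0 };
+ *	int err;
+ *
+ *	link_cfg.rate = 1000;
+ *	link_cfg.options = DPNI_LINK_OPT_AUTONEG | DPNI_LINK_OPT_PAUSE;
+ *	err = dpni_set_link_cfg(mc_io, 0, token, &link_cfg);
+ */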
+
+/**
+ * struct dpni_link_state - Structure representing DPNI link state
+ * @rate: Rate
+ * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
+ * @up: Link state; '0' for down, '1' for up
+ */
+struct dpni_link_state {
+	u32	rate;
+	u64	options;
+	int	up;
+};
+
+int dpni_get_link_state(struct fsl_mc_io	*mc_io,
+			u32			cmd_flags,
+			u16			token,
+			struct dpni_link_state	*state);
+
+int dpni_set_max_frame_length(struct fsl_mc_io	*mc_io,
+			      u32		cmd_flags,
+			      u16		token,
+			      u16		max_frame_length);
+
+int dpni_get_max_frame_length(struct fsl_mc_io	*mc_io,
+			      u32		cmd_flags,
+			      u16		token,
+			      u16		*max_frame_length);
+
+int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
+			       u32		cmd_flags,
+			       u16		token,
+			       int		en);
+
+int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
+			       u32		cmd_flags,
+			       u16		token,
+			       int		*en);
+
+int dpni_set_unicast_promisc(struct fsl_mc_io	*mc_io,
+			     u32		cmd_flags,
+			     u16		token,
+			     int		en);
+
+int dpni_get_unicast_promisc(struct fsl_mc_io	*mc_io,
+			     u32		cmd_flags,
+			     u16		token,
+			     int		*en);
+
+int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
+			      u32		cmd_flags,
+			      u16		token,
+			      const u8		mac_addr[6]);
+
+int dpni_get_primary_mac_addr(struct fsl_mc_io	*mc_io,
+			      u32		cmd_flags,
+			      u16		token,
+			      u8		mac_addr[6]);
+
+int dpni_get_port_mac_addr(struct fsl_mc_io	*mc_io,
+			   u32			cmd_flags,
+			   u16			token,
+			   u8			mac_addr[6]);
+
+int dpni_add_mac_addr(struct fsl_mc_io	*mc_io,
+		      u32		cmd_flags,
+		      u16		token,
+		      const u8		mac_addr[6]);
+
+int dpni_remove_mac_addr(struct fsl_mc_io	*mc_io,
+			 u32			cmd_flags,
+			 u16			token,
+			 const u8		mac_addr[6]);
+
+int dpni_clear_mac_filters(struct fsl_mc_io	*mc_io,
+			   u32			cmd_flags,
+			   u16			token,
+			   int			unicast,
+			   int			multicast);
+
+/**
+ * enum dpni_dist_mode - DPNI distribution mode
+ * @DPNI_DIST_MODE_NONE: No distribution
+ * @DPNI_DIST_MODE_HASH: Use hash distribution; only relevant if
+ *		the 'DPNI_OPT_DIST_HASH' option was set at DPNI creation
+ * @DPNI_DIST_MODE_FS:  Use explicit flow steering; only relevant if
+ *	 the 'DPNI_OPT_DIST_FS' option was set at DPNI creation
+ */
+enum dpni_dist_mode {
+	DPNI_DIST_MODE_NONE = 0,
+	DPNI_DIST_MODE_HASH = 1,
+	DPNI_DIST_MODE_FS = 2
+};
+
+/**
+ * enum dpni_fs_miss_action -   DPNI Flow Steering miss action
+ * @DPNI_FS_MISS_DROP: In case of no-match, drop the frame
+ * @DPNI_FS_MISS_EXPLICIT_FLOWID: In case of no-match, use explicit flow-id
+ * @DPNI_FS_MISS_HASH: In case of no-match, distribute using hash
+ */
+enum dpni_fs_miss_action {
+	DPNI_FS_MISS_DROP = 0,
+	DPNI_FS_MISS_EXPLICIT_FLOWID = 1,
+	DPNI_FS_MISS_HASH = 2
+};
+
+/**
+ * struct dpni_fs_tbl_cfg - Flow Steering table configuration
+ * @miss_action: Miss action selection
+ * @default_flow_id: Used when 'miss_action = DPNI_FS_MISS_EXPLICIT_FLOWID'
+ */
+struct dpni_fs_tbl_cfg {
+	enum dpni_fs_miss_action	miss_action;
+	u16				default_flow_id;
+};
+
+int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
+			 u8 *key_cfg_buf);
+
+/**
+ * struct dpni_rx_tc_dist_cfg - Rx traffic class distribution configuration
+ * @dist_size: Set the distribution size;
+ *	supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
+ *	112,128,192,224,256,384,448,512,768,896,1024
+ * @dist_mode: Distribution mode
+ * @key_cfg_iova: I/O virtual address of 256 bytes of DMA-able memory,
+ *		filled with the extractions to be used for the distribution
+ *		key by calling dpni_prepare_key_cfg(); relevant only when
+ *		'dist_mode != DPNI_DIST_MODE_NONE', otherwise it can be '0'
+ * @fs_cfg: Flow Steering table configuration; only relevant if
+ *		'dist_mode = DPNI_DIST_MODE_FS'
+ */
+struct dpni_rx_tc_dist_cfg {
+	u16			dist_size;
+	enum dpni_dist_mode	dist_mode;
+	u64			key_cfg_iova;
+	struct dpni_fs_tbl_cfg	fs_cfg;
+};
+
+int dpni_set_rx_tc_dist(struct fsl_mc_io			*mc_io,
+			u32					cmd_flags,
+			u16					token,
+			u8					tc_id,
+			const struct dpni_rx_tc_dist_cfg	*cfg);
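+
+/*
+ * Example (illustrative sketch): enabling hash distribution over 8 queues
+ * for traffic class 0. 'key_iova' is assumed to be the mapped I/O virtual
+ * address of a 256-byte DMA-able buffer previously filled in with
+ * dpni_prepare_key_cfg():
+ *
+ *	struct dpni_rx_tc_dist_cfg dist_cfg = { 0 };
+ *	int err;
+ *
+ *	dist_cfg.dist_size = 8;
+ *	dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
+ *	dist_cfg.key_cfg_iova = key_iova;
+ *	err = dpni_set_rx_tc_dist(mc_io, 0, token, 0, &dist_cfg);
+ */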
+
+/**
+ * enum dpni_dest - DPNI destination types
+ * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and
+ *		does not generate FQDAN notifications; user is expected to
+ *		dequeue from the queue based on polling or other user-defined
+ *		method
+ * @DPNI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
+ *		notifications to the specified DPIO; user is expected to dequeue
+ *		from the queue only after notification is received
+ * @DPNI_DEST_DPCON: The queue is set in schedule mode and does not generate
+ *		FQDAN notifications, but is connected to the specified DPCON
+ *		object; user is expected to dequeue from the DPCON channel
+ */
+enum dpni_dest {
+	DPNI_DEST_NONE = 0,
+	DPNI_DEST_DPIO = 1,
+	DPNI_DEST_DPCON = 2
+};
+
+/**
+ * struct dpni_queue - Queue structure
+ * @destination: Destination structure
+ * @destination.id: ID of the destination, only relevant if DEST_TYPE is > 0.
+ *	Identifies either a DPIO or a DPCON object.
+ *	Not relevant for Tx queues.
+ * @destination.type:	May be one of the following:
+ *	0 - No destination, queue can be manually
+ *		queried, but will not push traffic or
+ *		notifications to a DPIO;
+ *	1 - The destination is a DPIO. When traffic
+ *		becomes available in the queue a FQDAN
+ *		(FQ data available notification) will be
+ *		generated to selected DPIO;
+ *	2 - The destination is a DPCON. The queue is
+ *		associated with a DPCON object for the
+ *		purpose of scheduling between multiple
+ *		queues. The DPCON may be independently
+ *		configured to generate notifications.
+ *		Not relevant for Tx queues.
+ * @destination.hold_active: Hold active; keeps a queue scheduled in a DPIO
+ *	for longer during dequeue, to reduce the spread of traffic.
+ *	Only relevant if queues are not affined to a single DPIO.
+ * @user_context: User data, presented to the user along with any frames
+ *	from this queue. Not relevant for Tx queues.
+ * @flc: FD FLow Context structure
+ * @flc.value: Default FLC value for traffic dequeued from
+ *      this queue.  Please check description of FD
+ *      structure for more information.
+ *      Note that FLC values set using dpni_add_fs_entry,
+ *      if any, take precedence over values per queue.
+ * @flc.stash_control: Boolean, indicates whether the 6 least
+ *      significant bits are used for stash control.  If set, the 6
+ *      least significant bits in value are interpreted as follows:
+ *      - bits 0-1: indicates the number of 64 byte units of context
+ *      that are stashed.  FLC value is interpreted as a memory address
+ *      in this case, excluding the 6 LS bits.
+ *      - bits 2-3: indicates the number of 64 byte units of frame
+ *      annotation to be stashed.  Annotation is placed at FD[ADDR].
+ *      - bits 4-5: indicates the number of 64 byte units of frame
+ *      data to be stashed.  Frame data is placed at FD[ADDR] +
+ *      FD[OFFSET].
+ *      For more details check the Frame Descriptor section in the
+ *      hardware documentation.
+ */
+struct dpni_queue {
+	struct {
+		u16 id;
+		enum dpni_dest type;
+		char hold_active;
+		u8 priority;
+	} destination;
+	u64 user_context;
+	struct {
+		u64 value;
+		char stash_control;
+	} flc;
+};
+
+/**
+ * struct dpni_queue_id - Queue identification, used for enqueue commands
+ *			or queue control
+ * @fqid: FQID used for enqueueing to and/or configuration of this specific FQ
+ * @qdbin: Queueing bin, used to enqueue using QDID, QDBIN, QPRI. Only relevant
+ *		for Tx queues.
+ */
+struct dpni_queue_id {
+	u32 fqid;
+	u16 qdbin;
+};
+
+/**
+ * Queue configuration options; use any combination of the
+ * 'DPNI_QUEUE_OPT_<X>' flags below in the 'options' argument of
+ * dpni_set_queue()
+ */
+#define DPNI_QUEUE_OPT_USER_CTX		0x00000001
+#define DPNI_QUEUE_OPT_DEST		0x00000002
+#define DPNI_QUEUE_OPT_FLC		0x00000004
+#define DPNI_QUEUE_OPT_HOLD_ACTIVE	0x00000008
+
+int dpni_set_queue(struct fsl_mc_io	*mc_io,
+		   u32			cmd_flags,
+		   u16			token,
+		   enum dpni_queue_type	qtype,
+		   u8			tc,
+		   u8			index,
+		   u8			options,
+		   const struct dpni_queue *queue);
+
+int dpni_get_queue(struct fsl_mc_io	*mc_io,
+		   u32			cmd_flags,
+		   u16			token,
+		   enum dpni_queue_type	qtype,
+		   u8			tc,
+		   u8			index,
+		   struct dpni_queue	*queue,
+		   struct dpni_queue_id	*qid);
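+
+/*
+ * Example (illustrative sketch): pointing Rx queue 0 of traffic class 0 at a
+ * DPCON channel for scheduled dequeue. 'dpcon_id' and 'fq_ctx' are assumed
+ * to be set up by the caller:
+ *
+ *	struct dpni_queue queue = { 0 };
+ *	int err;
+ *
+ *	queue.destination.id = dpcon_id;
+ *	queue.destination.type = DPNI_DEST_DPCON;
+ *	queue.destination.priority = 1;
+ *	queue.user_context = (u64)(uintptr_t)fq_ctx;
+ *	err = dpni_set_queue(mc_io, 0, token, DPNI_QUEUE_RX, 0, 0,
+ *			     DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
+ *			     &queue);
+ */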
+
+/**
+ * enum dpni_congestion_unit - DPNI congestion units
+ * @DPNI_CONGESTION_UNIT_BYTES: bytes units
+ * @DPNI_CONGESTION_UNIT_FRAMES: frames units
+ */
+enum dpni_congestion_unit {
+	DPNI_CONGESTION_UNIT_BYTES = 0,
+	DPNI_CONGESTION_UNIT_FRAMES
+};
+
+/**
+ * enum dpni_congestion_point - Structure representing congestion point
+ * @DPNI_CP_QUEUE: Set taildrop per queue, identified by QUEUE_TYPE, TC and
+ *		QUEUE_INDEX
+ * @DPNI_CP_GROUP: Set taildrop per queue group. Depending on options used to
+ *		define the DPNI this can be either per TC (default) or per
+ *		interface (DPNI_OPT_SHARED_CONGESTION set at DPNI create).
+ *		QUEUE_INDEX is ignored if this type is used.
+ */
+enum dpni_congestion_point {
+	DPNI_CP_QUEUE,
+	DPNI_CP_GROUP,
+};
+
+/**
+ * struct dpni_taildrop - Structure representing the taildrop
+ * @enable:	Indicates whether the taildrop is active or not.
+ * @units:	Indicates the unit of THRESHOLD. Queue taildrop only supports
+ *		byte units; this field is ignored and assumed to be 0 if
+ *		CONGESTION_POINT is 0.
+ * @threshold:	Threshold value, in units identified by UNITS field. Value 0
+ *		cannot be used as a valid taildrop threshold, THRESHOLD must
+ *		be > 0 if the taildrop is enabled.
+ */
+struct dpni_taildrop {
+	char enable;
+	enum dpni_congestion_unit units;
+	u32 threshold;
+};
+
+int dpni_set_taildrop(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      enum dpni_congestion_point cg_point,
+		      enum dpni_queue_type q_type,
+		      u8 tc,
+		      u8 q_index,
+		      struct dpni_taildrop *taildrop);
+
+int dpni_get_taildrop(struct fsl_mc_io *mc_io,
+		      u32 cmd_flags,
+		      u16 token,
+		      enum dpni_congestion_point cg_point,
+		      enum dpni_queue_type q_type,
+		      u8 tc,
+		      u8 q_index,
+		      struct dpni_taildrop *taildrop);
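+
+/*
+ * Example (illustrative sketch): enabling a byte-based taildrop threshold on
+ * Rx queue 0 of traffic class 0; the threshold value is only an example:
+ *
+ *	struct dpni_taildrop td = { 0 };
+ *	int err;
+ *
+ *	td.enable = 1;
+ *	td.units = DPNI_CONGESTION_UNIT_BYTES;
+ *	td.threshold = 64 * 1024;
+ *	err = dpni_set_taildrop(mc_io, 0, token, DPNI_CP_QUEUE,
+ *				DPNI_QUEUE_RX, 0, 0, &td);
+ */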
+
+/**
+ * struct dpni_rule_cfg - Rule configuration for table lookup
+ * @key_iova: I/O virtual address of the key (must be in DMA-able memory)
+ * @mask_iova: I/O virtual address of the mask (must be in DMA-able memory)
+ * @key_size: key and mask size (in bytes)
+ */
+struct dpni_rule_cfg {
+	u64	key_iova;
+	u64	mask_iova;
+	u8	key_size;
+};
+
+int dpni_get_api_version(struct fsl_mc_io *mc_io,
+			 u32 cmd_flags,
+			 u16 *major_ver,
+			 u16 *minor_ver);
+
+#endif /* __FSL_DPNI_H */
diff --git a/drivers/staging/fsl-dpaa2/Kconfig b/drivers/staging/fsl-dpaa2/Kconfig
index a4c4b83..59aaae7 100644
--- a/drivers/staging/fsl-dpaa2/Kconfig
+++ b/drivers/staging/fsl-dpaa2/Kconfig
@@ -9,14 +9,6 @@ config FSL_DPAA2
 	  Build drivers for Freescale DataPath Acceleration
 	  Architecture (DPAA2) family of SoCs.
 
-config FSL_DPAA2_ETH
-	tristate "Freescale DPAA2 Ethernet"
-	depends on FSL_DPAA2 && FSL_MC_DPIO
-	depends on NETDEVICES && ETHERNET
-	---help---
-	  Ethernet driver for Freescale DPAA2 SoCs, using the
-	  Freescale MC bus driver
-
 config FSL_DPAA2_ETHSW
 	tristate "Freescale DPAA2 Ethernet Switch"
 	depends on FSL_DPAA2
diff --git a/drivers/staging/fsl-dpaa2/Makefile b/drivers/staging/fsl-dpaa2/Makefile
index 9c70629..464f242 100644
--- a/drivers/staging/fsl-dpaa2/Makefile
+++ b/drivers/staging/fsl-dpaa2/Makefile
@@ -2,6 +2,5 @@
 # Freescale DataPath Acceleration Architecture Gen2 (DPAA2) drivers
 #
 
-obj-$(CONFIG_FSL_DPAA2_ETH)		+= ethernet/
 obj-$(CONFIG_FSL_DPAA2_ETHSW)		+= ethsw/
 obj-$(CONFIG_FSL_DPAA2_PTP_CLOCK)	+= rtc/
diff --git a/drivers/staging/fsl-dpaa2/ethernet/Makefile b/drivers/staging/fsl-dpaa2/ethernet/Makefile
deleted file mode 100644
index 9315ecd..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/Makefile
+++ /dev/null
@@ -1,11 +0,0 @@
-# SPDX-License-Identifier: GPL-2.0
-#
-# Makefile for the Freescale DPAA2 Ethernet controller
-#
-
-obj-$(CONFIG_FSL_DPAA2_ETH) += fsl-dpaa2-eth.o
-
-fsl-dpaa2-eth-objs    := dpaa2-eth.o dpaa2-ethtool.o dpni.o
-
-# Needed by the tracing framework
-CFLAGS_dpaa2-eth.o := -I$(src)
diff --git a/drivers/staging/fsl-dpaa2/ethernet/TODO b/drivers/staging/fsl-dpaa2/ethernet/TODO
deleted file mode 100644
index e400a5e..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/TODO
+++ /dev/null
@@ -1,18 +0,0 @@
-* Add a DPAA2 MAC kernel driver in order to allow PHY management; currently
-  the DPMAC objects and their link to DPNIs are handled by MC internally
-  and all PHYs are seen as fixed-link
-* add more debug support: decide how to expose detailed debug statistics,
-  add ingress error queue support
-* MC firmware uprev; the DPAA2 objects used by the Ethernet driver need to
-  be kept in sync with binary interface changes in MC
-* refine README file
-* cleanup
-
-NOTE: None of the above is must-have before getting the DPAA2 Ethernet driver
-out of staging. The main requirement for that is to have the drivers it
-depends on, fsl-mc bus and DPIO driver, moved to drivers/bus and drivers/soc
-respectively.
-
- Please send any patches to Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
- ruxandra.radulescu@....com, devel@...verdev.osuosl.org,
- linux-kernel@...r.kernel.org
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
deleted file mode 100644
index 9801528..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth-trace.h
+++ /dev/null
@@ -1,158 +0,0 @@
-/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
-/* Copyright 2014-2015 Freescale Semiconductor Inc.
- */
-
-#undef TRACE_SYSTEM
-#define TRACE_SYSTEM	dpaa2_eth
-
-#if !defined(_DPAA2_ETH_TRACE_H) || defined(TRACE_HEADER_MULTI_READ)
-#define _DPAA2_ETH_TRACE_H
-
-#include <linux/skbuff.h>
-#include <linux/netdevice.h>
-#include "dpaa2-eth.h"
-#include <linux/tracepoint.h>
-
-#define TR_FMT "[%s] fd: addr=0x%llx, len=%u, off=%u"
-/* trace_printk format for raw buffer event class */
-#define TR_BUF_FMT "[%s] vaddr=%p size=%zu dma_addr=%pad map_size=%zu bpid=%d"
-
-/* This is used to declare a class of events.
- * individual events of this type will be defined below.
- */
-
-/* Store details about a frame descriptor */
-DECLARE_EVENT_CLASS(dpaa2_eth_fd,
-		    /* Trace function prototype */
-		    TP_PROTO(struct net_device *netdev,
-			     const struct dpaa2_fd *fd),
-
-		    /* Repeat argument list here */
-		    TP_ARGS(netdev, fd),
-
-		    /* A structure containing the relevant information we want
-		     * to record. Declare name and type for each normal element,
-		     * name, type and size for arrays. Use __string for variable
-		     * length strings.
-		     */
-		    TP_STRUCT__entry(
-				     __field(u64, fd_addr)
-				     __field(u32, fd_len)
-				     __field(u16, fd_offset)
-				     __string(name, netdev->name)
-		    ),
-
-		    /* The function that assigns values to the above declared
-		     * fields
-		     */
-		    TP_fast_assign(
-				   __entry->fd_addr = dpaa2_fd_get_addr(fd);
-				   __entry->fd_len = dpaa2_fd_get_len(fd);
-				   __entry->fd_offset = dpaa2_fd_get_offset(fd);
-				   __assign_str(name, netdev->name);
-		    ),
-
-		    /* This is what gets printed when the trace event is
-		     * triggered.
-		     */
-		    TP_printk(TR_FMT,
-			      __get_str(name),
-			      __entry->fd_addr,
-			      __entry->fd_len,
-			      __entry->fd_offset)
-);
-
-/* Now declare events of the above type. Format is:
- * DEFINE_EVENT(class, name, proto, args), with proto and args same as for class
- */
-
-/* Tx (egress) fd */
-DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_fd,
-	     TP_PROTO(struct net_device *netdev,
-		      const struct dpaa2_fd *fd),
-
-	     TP_ARGS(netdev, fd)
-);
-
-/* Rx fd */
-DEFINE_EVENT(dpaa2_eth_fd, dpaa2_rx_fd,
-	     TP_PROTO(struct net_device *netdev,
-		      const struct dpaa2_fd *fd),
-
-	     TP_ARGS(netdev, fd)
-);
-
-/* Tx confirmation fd */
-DEFINE_EVENT(dpaa2_eth_fd, dpaa2_tx_conf_fd,
-	     TP_PROTO(struct net_device *netdev,
-		      const struct dpaa2_fd *fd),
-
-	     TP_ARGS(netdev, fd)
-);
-
-/* Log data about raw buffers. Useful for tracing DPBP content. */
-TRACE_EVENT(dpaa2_eth_buf_seed,
-	    /* Trace function prototype */
-	    TP_PROTO(struct net_device *netdev,
-		     /* virtual address and size */
-		     void *vaddr,
-		     size_t size,
-		     /* dma map address and size */
-		     dma_addr_t dma_addr,
-		     size_t map_size,
-		     /* buffer pool id, if relevant */
-		     u16 bpid),
-
-	    /* Repeat argument list here */
-	    TP_ARGS(netdev, vaddr, size, dma_addr, map_size, bpid),
-
-	    /* A structure containing the relevant information we want
-	     * to record. Declare name and type for each normal element,
-	     * name, type and size for arrays. Use __string for variable
-	     * length strings.
-	     */
-	    TP_STRUCT__entry(
-			     __field(void *, vaddr)
-			     __field(size_t, size)
-			     __field(dma_addr_t, dma_addr)
-			     __field(size_t, map_size)
-			     __field(u16, bpid)
-			     __string(name, netdev->name)
-	    ),
-
-	    /* The function that assigns values to the above declared
-	     * fields
-	     */
-	    TP_fast_assign(
-			   __entry->vaddr = vaddr;
-			   __entry->size = size;
-			   __entry->dma_addr = dma_addr;
-			   __entry->map_size = map_size;
-			   __entry->bpid = bpid;
-			   __assign_str(name, netdev->name);
-	    ),
-
-	    /* This is what gets printed when the trace event is
-	     * triggered.
-	     */
-	    TP_printk(TR_BUF_FMT,
-		      __get_str(name),
-		      __entry->vaddr,
-		      __entry->size,
-		      &__entry->dma_addr,
-		      __entry->map_size,
-		      __entry->bpid)
-);
-
-/* If only one event of a certain type needs to be declared, use TRACE_EVENT().
- * The syntax is the same as for DECLARE_EVENT_CLASS().
- */
-
-#endif /* _DPAA2_ETH_TRACE_H */
-
-/* This must be outside ifdef _DPAA2_ETH_TRACE_H */
-#undef TRACE_INCLUDE_PATH
-#define TRACE_INCLUDE_PATH .
-#undef TRACE_INCLUDE_FILE
-#define TRACE_INCLUDE_FILE	dpaa2-eth-trace
-#include <trace/define_trace.h>
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
deleted file mode 100644
index 9329fca..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.c
+++ /dev/null
@@ -1,2661 +0,0 @@
-// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
-/* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016-2017 NXP
- */
-#include <linux/init.h>
-#include <linux/module.h>
-#include <linux/platform_device.h>
-#include <linux/etherdevice.h>
-#include <linux/of_net.h>
-#include <linux/interrupt.h>
-#include <linux/msi.h>
-#include <linux/kthread.h>
-#include <linux/iommu.h>
-#include <linux/net_tstamp.h>
-#include <linux/fsl/mc.h>
-
-#include <net/sock.h>
-
-#include "dpaa2-eth.h"
-
-/* CREATE_TRACE_POINTS only needs to be defined once. Other dpa files
- * using trace events only need to #include <trace/events/sched.h>
- */
-#define CREATE_TRACE_POINTS
-#include "dpaa2-eth-trace.h"
-
-MODULE_LICENSE("Dual BSD/GPL");
-MODULE_AUTHOR("Freescale Semiconductor, Inc");
-MODULE_DESCRIPTION("Freescale DPAA2 Ethernet Driver");
-
-static void *dpaa2_iova_to_virt(struct iommu_domain *domain,
-				dma_addr_t iova_addr)
-{
-	phys_addr_t phys_addr;
-
-	phys_addr = domain ? iommu_iova_to_phys(domain, iova_addr) : iova_addr;
-
-	return phys_to_virt(phys_addr);
-}
-
-static void validate_rx_csum(struct dpaa2_eth_priv *priv,
-			     u32 fd_status,
-			     struct sk_buff *skb)
-{
-	skb_checksum_none_assert(skb);
-
-	/* HW checksum validation is disabled, nothing to do here */
-	if (!(priv->net_dev->features & NETIF_F_RXCSUM))
-		return;
-
-	/* Read checksum validation bits */
-	if (!((fd_status & DPAA2_FAS_L3CV) &&
-	      (fd_status & DPAA2_FAS_L4CV)))
-		return;
-
-	/* Inform the stack there's no need to compute L3/L4 csum anymore */
-	skb->ip_summed = CHECKSUM_UNNECESSARY;
-}
-
-/* Free a received FD.
- * Not to be used for Tx conf FDs or on any other paths.
- */
-static void free_rx_fd(struct dpaa2_eth_priv *priv,
-		       const struct dpaa2_fd *fd,
-		       void *vaddr)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	dma_addr_t addr = dpaa2_fd_get_addr(fd);
-	u8 fd_format = dpaa2_fd_get_format(fd);
-	struct dpaa2_sg_entry *sgt;
-	void *sg_vaddr;
-	int i;
-
-	/* If single buffer frame, just free the data buffer */
-	if (fd_format == dpaa2_fd_single)
-		goto free_buf;
-	else if (fd_format != dpaa2_fd_sg)
-		/* We don't support any other format */
-		return;
-
-	/* For S/G frames, we first need to free all SG entries
-	 * except the first one, which was taken care of already
-	 */
-	sgt = vaddr + dpaa2_fd_get_offset(fd);
-	for (i = 1; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
-		addr = dpaa2_sg_get_addr(&sgt[i]);
-		sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
-		dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE,
-				 DMA_FROM_DEVICE);
-
-		skb_free_frag(sg_vaddr);
-		if (dpaa2_sg_is_final(&sgt[i]))
-			break;
-	}
-
-free_buf:
-	skb_free_frag(vaddr);
-}
-
-/* Build a linear skb based on a single-buffer frame descriptor */
-static struct sk_buff *build_linear_skb(struct dpaa2_eth_priv *priv,
-					struct dpaa2_eth_channel *ch,
-					const struct dpaa2_fd *fd,
-					void *fd_vaddr)
-{
-	struct sk_buff *skb = NULL;
-	u16 fd_offset = dpaa2_fd_get_offset(fd);
-	u32 fd_length = dpaa2_fd_get_len(fd);
-
-	ch->buf_count--;
-
-	skb = build_skb(fd_vaddr, DPAA2_ETH_SKB_SIZE);
-	if (unlikely(!skb))
-		return NULL;
-
-	skb_reserve(skb, fd_offset);
-	skb_put(skb, fd_length);
-
-	return skb;
-}
-
-/* Build a non linear (fragmented) skb based on a S/G table */
-static struct sk_buff *build_frag_skb(struct dpaa2_eth_priv *priv,
-				      struct dpaa2_eth_channel *ch,
-				      struct dpaa2_sg_entry *sgt)
-{
-	struct sk_buff *skb = NULL;
-	struct device *dev = priv->net_dev->dev.parent;
-	void *sg_vaddr;
-	dma_addr_t sg_addr;
-	u16 sg_offset;
-	u32 sg_length;
-	struct page *page, *head_page;
-	int page_offset;
-	int i;
-
-	for (i = 0; i < DPAA2_ETH_MAX_SG_ENTRIES; i++) {
-		struct dpaa2_sg_entry *sge = &sgt[i];
-
-		/* NOTE: We only support SG entries in dpaa2_sg_single format,
-		 * but this is the only format we may receive from HW anyway
-		 */
-
-		/* Get the address and length from the S/G entry */
-		sg_addr = dpaa2_sg_get_addr(sge);
-		sg_vaddr = dpaa2_iova_to_virt(priv->iommu_domain, sg_addr);
-		dma_unmap_single(dev, sg_addr, DPAA2_ETH_RX_BUF_SIZE,
-				 DMA_FROM_DEVICE);
-
-		sg_length = dpaa2_sg_get_len(sge);
-
-		if (i == 0) {
-			/* We build the skb around the first data buffer */
-			skb = build_skb(sg_vaddr, DPAA2_ETH_SKB_SIZE);
-			if (unlikely(!skb)) {
-				/* Free the first SG entry now, since we already
-				 * unmapped it and obtained the virtual address
-				 */
-				skb_free_frag(sg_vaddr);
-
-				/* We still need to subtract the buffers used
-				 * by this FD from our software counter
-				 */
-				while (!dpaa2_sg_is_final(&sgt[i]) &&
-				       i < DPAA2_ETH_MAX_SG_ENTRIES)
-					i++;
-				break;
-			}
-
-			sg_offset = dpaa2_sg_get_offset(sge);
-			skb_reserve(skb, sg_offset);
-			skb_put(skb, sg_length);
-		} else {
-			/* Rest of the data buffers are stored as skb frags */
-			page = virt_to_page(sg_vaddr);
-			head_page = virt_to_head_page(sg_vaddr);
-
-			/* Offset in page (which may be compound).
-			 * Data in subsequent SG entries is stored from the
-			 * beginning of the buffer, so we don't need to add the
-			 * sg_offset.
-			 */
-			page_offset = ((unsigned long)sg_vaddr &
-				(PAGE_SIZE - 1)) +
-				(page_address(page) - page_address(head_page));
-
-			skb_add_rx_frag(skb, i - 1, head_page, page_offset,
-					sg_length, DPAA2_ETH_RX_BUF_SIZE);
-		}
-
-		if (dpaa2_sg_is_final(sge))
-			break;
-	}
-
-	WARN_ONCE(i == DPAA2_ETH_MAX_SG_ENTRIES, "Final bit not set in SGT");
-
-	/* Count all data buffers + SG table buffer */
-	ch->buf_count -= i + 2;
-
-	return skb;
-}
-
-/* Main Rx frame processing routine */
-static void dpaa2_eth_rx(struct dpaa2_eth_priv *priv,
-			 struct dpaa2_eth_channel *ch,
-			 const struct dpaa2_fd *fd,
-			 struct napi_struct *napi,
-			 u16 queue_id)
-{
-	dma_addr_t addr = dpaa2_fd_get_addr(fd);
-	u8 fd_format = dpaa2_fd_get_format(fd);
-	void *vaddr;
-	struct sk_buff *skb;
-	struct rtnl_link_stats64 *percpu_stats;
-	struct dpaa2_eth_drv_stats *percpu_extras;
-	struct device *dev = priv->net_dev->dev.parent;
-	struct dpaa2_fas *fas;
-	void *buf_data;
-	u32 status = 0;
-
-	/* Tracing point */
-	trace_dpaa2_rx_fd(priv->net_dev, fd);
-
-	vaddr = dpaa2_iova_to_virt(priv->iommu_domain, addr);
-	dma_unmap_single(dev, addr, DPAA2_ETH_RX_BUF_SIZE, DMA_FROM_DEVICE);
-
-	fas = dpaa2_get_fas(vaddr, false);
-	prefetch(fas);
-	buf_data = vaddr + dpaa2_fd_get_offset(fd);
-	prefetch(buf_data);
-
-	percpu_stats = this_cpu_ptr(priv->percpu_stats);
-	percpu_extras = this_cpu_ptr(priv->percpu_extras);
-
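-	/* The FD references either a single contiguous buffer or a
-	 * scatter/gather table of buffers
-	 */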
-	if (fd_format == dpaa2_fd_single) {
-		skb = build_linear_skb(priv, ch, fd, vaddr);
-	} else if (fd_format == dpaa2_fd_sg) {
-		skb = build_frag_skb(priv, ch, buf_data);
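-		/* The data buffers now belong to the skb; free the buffer
-		 * that held the S/G table itself
-		 */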
-		skb_free_frag(vaddr);
-		percpu_extras->rx_sg_frames++;
-		percpu_extras->rx_sg_bytes += dpaa2_fd_get_len(fd);
-	} else {
-		/* We don't support any other format */
-		goto err_frame_format;
-	}
-
-	if (unlikely(!skb))
-		goto err_build_skb;
-
-	prefetch(skb->data);
-
-	/* Get the timestamp value */
-	if (priv->rx_tstamp) {
-		struct skb_shared_hwtstamps *shhwtstamps = skb_hwtstamps(skb);
-		__le64 *ts = dpaa2_get_ts(vaddr, false);
-		u64 ns;
-
-		memset(shhwtstamps, 0, sizeof(*shhwtstamps));
-
-		ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
-		shhwtstamps->hwtstamp = ns_to_ktime(ns);
-	}
-
-	/* Check if we need to validate the L4 csum */
-	if (likely(dpaa2_fd_get_frc(fd) & DPAA2_FD_FRC_FASV)) {
-		status = le32_to_cpu(fas->status);
-		validate_rx_csum(priv, status, skb);
-	}
-
-	skb->protocol = eth_type_trans(skb, priv->net_dev);
-	skb_record_rx_queue(skb, queue_id);
-
-	percpu_stats->rx_packets++;
-	percpu_stats->rx_bytes += dpaa2_fd_get_len(fd);
-
-	napi_gro_receive(napi, skb);
-
-	return;
-
-err_build_skb:
-	free_rx_fd(priv, fd, vaddr);
-err_frame_format:
-	percpu_stats->rx_dropped++;
-}
-
-/* Consume all frames pull-dequeued into the store. This is the simplest way to
- * make sure we don't accidentally issue another volatile dequeue which would
- * overwrite (leak) frames already in the store.
- *
- * Observance of NAPI budget is not our concern, leaving that to the caller.
- */
-static int consume_frames(struct dpaa2_eth_channel *ch)
-{
-	struct dpaa2_eth_priv *priv = ch->priv;
-	struct dpaa2_eth_fq *fq;
-	struct dpaa2_dq *dq;
-	const struct dpaa2_fd *fd;
-	int cleaned = 0;
-	int is_last;
-
-	do {
-		dq = dpaa2_io_store_next(ch->store, &is_last);
-		if (unlikely(!dq)) {
-			/* If we're here, we *must* have placed a
-			 * volatile dequeue command, so keep reading through
-			 * the store until we get some sort of valid response
-			 * token (either a valid frame or an "empty dequeue")
-			 */
-			continue;
-		}
-
-		fd = dpaa2_dq_fd(dq);
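-		/* The FQ backpointer was stashed in the queue's user context
-		 * word at setup time (see setup_rx_flow/setup_tx_flow)
-		 */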
-		fq = (struct dpaa2_eth_fq *)(uintptr_t)dpaa2_dq_fqd_ctx(dq);
-		fq->stats.frames++;
-
-		fq->consume(priv, ch, fd, &ch->napi, fq->flowid);
-		cleaned++;
-	} while (!is_last);
-
-	return cleaned;
-}
-
-/* Configure the egress frame annotation for timestamp update */
-static void enable_tx_tstamp(struct dpaa2_fd *fd, void *buf_start)
-{
-	struct dpaa2_faead *faead;
-	u32 ctrl, frc;
-
-	/* Mark the egress frame annotation area as valid */
-	frc = dpaa2_fd_get_frc(fd);
-	dpaa2_fd_set_frc(fd, frc | DPAA2_FD_FRC_FAEADV);
-
-	/* Set hardware annotation size */
-	ctrl = dpaa2_fd_get_ctrl(fd);
-	dpaa2_fd_set_ctrl(fd, ctrl | DPAA2_FD_CTRL_ASAL);
-
-	/* Enable the UPD (update prepended data) bit in the FAEAD field of
-	 * the hardware frame annotation area
-	 */
-	ctrl = DPAA2_FAEAD_A2V | DPAA2_FAEAD_UPDV | DPAA2_FAEAD_UPD;
-	faead = dpaa2_get_faead(buf_start, true);
-	faead->ctrl = cpu_to_le32(ctrl);
-}
-
-/* Create a frame descriptor based on a fragmented skb */
-static int build_sg_fd(struct dpaa2_eth_priv *priv,
-		       struct sk_buff *skb,
-		       struct dpaa2_fd *fd)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	void *sgt_buf = NULL;
-	dma_addr_t addr;
-	int nr_frags = skb_shinfo(skb)->nr_frags;
-	struct dpaa2_sg_entry *sgt;
-	int i, err;
-	int sgt_buf_size;
-	struct scatterlist *scl, *crt_scl;
-	int num_sg;
-	int num_dma_bufs;
-	struct dpaa2_eth_swa *swa;
-
-	/* Create and map scatterlist.
-	 * We don't advertise NETIF_F_FRAGLIST, so skb_to_sgvec() will not have
-	 * to go beyond nr_frags+1.
-	 * Note: We don't support chained scatterlists
-	 */
-	if (unlikely(PAGE_SIZE / sizeof(struct scatterlist) < nr_frags + 1))
-		return -EINVAL;
-
-	scl = kcalloc(nr_frags + 1, sizeof(struct scatterlist), GFP_ATOMIC);
-	if (unlikely(!scl))
-		return -ENOMEM;
-
-	sg_init_table(scl, nr_frags + 1);
-	num_sg = skb_to_sgvec(skb, scl, 0, skb->len);
-	num_dma_bufs = dma_map_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL);
-	if (unlikely(!num_dma_bufs)) {
-		err = -ENOMEM;
-		goto dma_map_sg_failed;
-	}
-
-	/* Prepare the HW SGT structure */
-	sgt_buf_size = priv->tx_data_offset +
-		       sizeof(struct dpaa2_sg_entry) *  num_dma_bufs;
-	sgt_buf = netdev_alloc_frag(sgt_buf_size + DPAA2_ETH_TX_BUF_ALIGN);
-	if (unlikely(!sgt_buf)) {
-		err = -ENOMEM;
-		goto sgt_buf_alloc_failed;
-	}
-	sgt_buf = PTR_ALIGN(sgt_buf, DPAA2_ETH_TX_BUF_ALIGN);
-	memset(sgt_buf, 0, sgt_buf_size);
-
-	sgt = (struct dpaa2_sg_entry *)(sgt_buf + priv->tx_data_offset);
-
-	/* Fill in the HW SGT structure.
-	 *
-	 * sgt_buf is zeroed out, so the following fields are implicit
-	 * in all sgt entries:
-	 *   - offset is 0
-	 *   - format is 'dpaa2_sg_single'
-	 */
-	for_each_sg(scl, crt_scl, num_dma_bufs, i) {
-		dpaa2_sg_set_addr(&sgt[i], sg_dma_address(crt_scl));
-		dpaa2_sg_set_len(&sgt[i], sg_dma_len(crt_scl));
-	}
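-	/* Mark the last entry so hardware knows where the table ends */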
-	dpaa2_sg_set_final(&sgt[i - 1], true);
-
-	/* Store the skb backpointer in the SGT buffer.
-	 * Fit the scatterlist and the number of buffers alongside the
-	 * skb backpointer in the software annotation area. We'll need
-	 * all of them on Tx Conf.
-	 */
-	swa = (struct dpaa2_eth_swa *)sgt_buf;
-	swa->skb = skb;
-	swa->scl = scl;
-	swa->num_sg = num_sg;
-	swa->sgt_size = sgt_buf_size;
-
-	/* Separately map the SGT buffer */
-	addr = dma_map_single(dev, sgt_buf, sgt_buf_size, DMA_BIDIRECTIONAL);
-	if (unlikely(dma_mapping_error(dev, addr))) {
-		err = -ENOMEM;
-		goto dma_map_single_failed;
-	}
-	dpaa2_fd_set_offset(fd, priv->tx_data_offset);
-	dpaa2_fd_set_format(fd, dpaa2_fd_sg);
-	dpaa2_fd_set_addr(fd, addr);
-	dpaa2_fd_set_len(fd, skb->len);
-	dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA | FD_CTRL_PTV1);
-
-	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
-		enable_tx_tstamp(fd, sgt_buf);
-
-	return 0;
-
-dma_map_single_failed:
-	skb_free_frag(sgt_buf);
-sgt_buf_alloc_failed:
-	dma_unmap_sg(dev, scl, num_sg, DMA_BIDIRECTIONAL);
-dma_map_sg_failed:
-	kfree(scl);
-	return err;
-}
-
-/* Create a frame descriptor based on a linear skb */
-static int build_single_fd(struct dpaa2_eth_priv *priv,
-			   struct sk_buff *skb,
-			   struct dpaa2_fd *fd)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	u8 *buffer_start, *aligned_start;
-	struct sk_buff **skbh;
-	dma_addr_t addr;
-
-	buffer_start = skb->data - dpaa2_eth_needed_headroom(priv, skb);
-
-	/* If there's enough room to align the FD address, do it.
-	 * It will help hardware optimize accesses.
-	 */
-	aligned_start = PTR_ALIGN(buffer_start - DPAA2_ETH_TX_BUF_ALIGN,
-				  DPAA2_ETH_TX_BUF_ALIGN);
-	if (aligned_start >= skb->head)
-		buffer_start = aligned_start;
-
-	/* Store a backpointer to the skb at the beginning of the buffer
-	 * (in the private data area) such that we can release it
-	 * on Tx confirm
-	 */
-	skbh = (struct sk_buff **)buffer_start;
-	*skbh = skb;
-
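-	/* Map everything from the start of the (possibly aligned) buffer
-	 * up to the skb tail, i.e. headroom plus frame data
-	 */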
-	addr = dma_map_single(dev, buffer_start,
-			      skb_tail_pointer(skb) - buffer_start,
-			      DMA_BIDIRECTIONAL);
-	if (unlikely(dma_mapping_error(dev, addr)))
-		return -ENOMEM;
-
-	dpaa2_fd_set_addr(fd, addr);
-	dpaa2_fd_set_offset(fd, (u16)(skb->data - buffer_start));
-	dpaa2_fd_set_len(fd, skb->len);
-	dpaa2_fd_set_format(fd, dpaa2_fd_single);
-	dpaa2_fd_set_ctrl(fd, FD_CTRL_PTA | FD_CTRL_PTV1);
-
-	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
-		enable_tx_tstamp(fd, buffer_start);
-
-	return 0;
-}
-
-/* FD freeing routine on the Tx path
- *
- * DMA-unmap and free the FD buffer and, for S/G frames, the SGT buffer
- * allocated on Tx. The skb it points back to is also freed.
- * This can be called either from dpaa2_eth_tx_conf() or on the error path of
- * dpaa2_eth_tx().
- */
-static void free_tx_fd(const struct dpaa2_eth_priv *priv,
-		       const struct dpaa2_fd *fd)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	dma_addr_t fd_addr;
-	struct sk_buff **skbh, *skb;
-	unsigned char *buffer_start;
-	struct dpaa2_eth_swa *swa;
-	u8 fd_format = dpaa2_fd_get_format(fd);
-
-	fd_addr = dpaa2_fd_get_addr(fd);
-	skbh = dpaa2_iova_to_virt(priv->iommu_domain, fd_addr);
-
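-	/* The start of the buffer holds either a bare skb backpointer
-	 * (single FDs) or a full software annotation area (S/G FDs)
-	 */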
-	if (fd_format == dpaa2_fd_single) {
-		skb = *skbh;
-		buffer_start = (unsigned char *)skbh;
-		/* Accessing the skb buffer is safe before dma unmap, because
-		 * we didn't map the actual skb shell.
-		 */
-		dma_unmap_single(dev, fd_addr,
-				 skb_tail_pointer(skb) - buffer_start,
-				 DMA_BIDIRECTIONAL);
-	} else if (fd_format == dpaa2_fd_sg) {
-		swa = (struct dpaa2_eth_swa *)skbh;
-		skb = swa->skb;
-
-		/* Unmap the scatterlist */
-		dma_unmap_sg(dev, swa->scl, swa->num_sg, DMA_BIDIRECTIONAL);
-		kfree(swa->scl);
-
-		/* Unmap the SGT buffer */
-		dma_unmap_single(dev, fd_addr, swa->sgt_size,
-				 DMA_BIDIRECTIONAL);
-	} else {
-		netdev_dbg(priv->net_dev, "Invalid FD format\n");
-		return;
-	}
-
-	/* Get the timestamp value */
-	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP) {
-		struct skb_shared_hwtstamps shhwtstamps;
-		__le64 *ts = dpaa2_get_ts(skbh, true);
-		u64 ns;
-
-		memset(&shhwtstamps, 0, sizeof(shhwtstamps));
-
-		ns = DPAA2_PTP_CLK_PERIOD_NS * le64_to_cpup(ts);
-		shhwtstamps.hwtstamp = ns_to_ktime(ns);
-		skb_tstamp_tx(skb, &shhwtstamps);
-	}
-
-	/* Free SGT buffer allocated on tx */
-	if (fd_format != dpaa2_fd_single)
-		skb_free_frag(skbh);
-
-	/* Move on with skb release */
-	dev_kfree_skb(skb);
-}
-
-static netdev_tx_t dpaa2_eth_tx(struct sk_buff *skb, struct net_device *net_dev)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	struct dpaa2_fd fd;
-	struct rtnl_link_stats64 *percpu_stats;
-	struct dpaa2_eth_drv_stats *percpu_extras;
-	struct dpaa2_eth_fq *fq;
-	u16 queue_mapping;
-	unsigned int needed_headroom;
-	int err, i;
-
-	percpu_stats = this_cpu_ptr(priv->percpu_stats);
-	percpu_extras = this_cpu_ptr(priv->percpu_extras);
-
-	needed_headroom = dpaa2_eth_needed_headroom(priv, skb);
-	if (skb_headroom(skb) < needed_headroom) {
-		struct sk_buff *ns;
-
-		ns = skb_realloc_headroom(skb, needed_headroom);
-		if (unlikely(!ns)) {
-			percpu_stats->tx_dropped++;
-			goto err_alloc_headroom;
-		}
-		percpu_extras->tx_reallocs++;
-
-		if (skb->sk)
-			skb_set_owner_w(ns, skb->sk);
-
-		dev_kfree_skb(skb);
-		skb = ns;
-	}
-
-	/* We'll be holding a back-reference to the skb until Tx Confirmation;
-	 * we don't want that overwritten by a concurrent Tx with a cloned skb.
-	 */
-	skb = skb_unshare(skb, GFP_ATOMIC);
-	if (unlikely(!skb)) {
-		/* skb_unshare() has already freed the skb */
-		percpu_stats->tx_dropped++;
-		return NETDEV_TX_OK;
-	}
-
-	/* Setup the FD fields */
-	memset(&fd, 0, sizeof(fd));
-
-	if (skb_is_nonlinear(skb)) {
-		err = build_sg_fd(priv, skb, &fd);
-		percpu_extras->tx_sg_frames++;
-		percpu_extras->tx_sg_bytes += skb->len;
-	} else {
-		err = build_single_fd(priv, skb, &fd);
-	}
-
-	if (unlikely(err)) {
-		percpu_stats->tx_dropped++;
-		goto err_build_fd;
-	}
-
-	/* Tracing point */
-	trace_dpaa2_tx_fd(net_dev, &fd);
-
-	/* TxConf FQ selection relies on queue id from the stack.
-	 * In case of a forwarded frame from another DPNI interface, we choose
-	 * a queue affined to the same core that processed the Rx frame
-	 */
-	queue_mapping = skb_get_queue_mapping(skb);
-	fq = &priv->fq[queue_mapping];
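-	/* The QBMan portal may be temporarily busy; retry the enqueue a
-	 * bounded number of times before giving up on the frame
-	 */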
-	for (i = 0; i < DPAA2_ETH_ENQUEUE_RETRIES; i++) {
-		err = dpaa2_io_service_enqueue_qd(fq->channel->dpio,
-						  priv->tx_qdid, 0,
-						  fq->tx_qdbin, &fd);
-		if (err != -EBUSY)
-			break;
-	}
-	percpu_extras->tx_portal_busy += i;
-	if (unlikely(err < 0)) {
-		percpu_stats->tx_errors++;
-		/* Clean up everything, including freeing the skb */
-		free_tx_fd(priv, &fd);
-	} else {
-		percpu_stats->tx_packets++;
-		percpu_stats->tx_bytes += dpaa2_fd_get_len(&fd);
-	}
-
-	return NETDEV_TX_OK;
-
-err_build_fd:
-err_alloc_headroom:
-	dev_kfree_skb(skb);
-
-	return NETDEV_TX_OK;
-}
-
-/* Tx confirmation frame processing routine */
-static void dpaa2_eth_tx_conf(struct dpaa2_eth_priv *priv,
-			      struct dpaa2_eth_channel *ch,
-			      const struct dpaa2_fd *fd,
-			      struct napi_struct *napi __always_unused,
-			      u16 queue_id __always_unused)
-{
-	struct rtnl_link_stats64 *percpu_stats;
-	struct dpaa2_eth_drv_stats *percpu_extras;
-	u32 fd_errors;
-
-	/* Tracing point */
-	trace_dpaa2_tx_conf_fd(priv->net_dev, fd);
-
-	percpu_extras = this_cpu_ptr(priv->percpu_extras);
-	percpu_extras->tx_conf_frames++;
-	percpu_extras->tx_conf_bytes += dpaa2_fd_get_len(fd);
-
-	/* Check frame errors in the FD field */
-	fd_errors = dpaa2_fd_get_ctrl(fd) & DPAA2_FD_TX_ERR_MASK;
-	free_tx_fd(priv, fd);
-
-	if (likely(!fd_errors))
-		return;
-
-	if (net_ratelimit())
-		netdev_dbg(priv->net_dev, "TX frame FD error: 0x%08x\n",
-			   fd_errors);
-
-	percpu_stats = this_cpu_ptr(priv->percpu_stats);
-	/* Tx-conf logically pertains to the egress path. */
-	percpu_stats->tx_errors++;
-}
-
-static int set_rx_csum(struct dpaa2_eth_priv *priv, bool enable)
-{
-	int err;
-
-	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
-			       DPNI_OFF_RX_L3_CSUM, enable);
-	if (err) {
-		netdev_err(priv->net_dev,
-			   "dpni_set_offload(RX_L3_CSUM) failed\n");
-		return err;
-	}
-
-	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
-			       DPNI_OFF_RX_L4_CSUM, enable);
-	if (err) {
-		netdev_err(priv->net_dev,
-			   "dpni_set_offload(RX_L4_CSUM) failed\n");
-		return err;
-	}
-
-	return 0;
-}
-
-static int set_tx_csum(struct dpaa2_eth_priv *priv, bool enable)
-{
-	int err;
-
-	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
-			       DPNI_OFF_TX_L3_CSUM, enable);
-	if (err) {
-		netdev_err(priv->net_dev, "dpni_set_offload(TX_L3_CSUM) failed\n");
-		return err;
-	}
-
-	err = dpni_set_offload(priv->mc_io, 0, priv->mc_token,
-			       DPNI_OFF_TX_L4_CSUM, enable);
-	if (err) {
-		netdev_err(priv->net_dev, "dpni_set_offload(TX_L4_CSUM) failed\n");
-		return err;
-	}
-
-	return 0;
-}
-
-/* Free buffers acquired from the buffer pool, or buffers that were
- * meant to be released into the pool
- */
-static void free_bufs(struct dpaa2_eth_priv *priv, u64 *buf_array, int count)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	void *vaddr;
-	int i;
-
-	for (i = 0; i < count; i++) {
-		vaddr = dpaa2_iova_to_virt(priv->iommu_domain, buf_array[i]);
-		dma_unmap_single(dev, buf_array[i], DPAA2_ETH_RX_BUF_SIZE,
-				 DMA_FROM_DEVICE);
-		skb_free_frag(vaddr);
-	}
-}
-
-/* Perform a single release command to add buffers
- * to the specified buffer pool
- */
-static int add_bufs(struct dpaa2_eth_priv *priv,
-		    struct dpaa2_eth_channel *ch, u16 bpid)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
-	void *buf;
-	dma_addr_t addr;
-	int i, err;
-
-	for (i = 0; i < DPAA2_ETH_BUFS_PER_CMD; i++) {
-		/* Allocate buffer visible to WRIOP + skb shared info +
-		 * alignment padding
-		 */
-		buf = napi_alloc_frag(dpaa2_eth_buf_raw_size(priv));
-		if (unlikely(!buf))
-			goto err_alloc;
-
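-		/* Align the buffer start as required by hardware; the raw
-		 * allocation size already accounts for this padding
-		 */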
-		buf = PTR_ALIGN(buf, priv->rx_buf_align);
-
-		addr = dma_map_single(dev, buf, DPAA2_ETH_RX_BUF_SIZE,
-				      DMA_FROM_DEVICE);
-		if (unlikely(dma_mapping_error(dev, addr)))
-			goto err_map;
-
-		buf_array[i] = addr;
-
-		/* tracing point */
-		trace_dpaa2_eth_buf_seed(priv->net_dev,
-					 buf, dpaa2_eth_buf_raw_size(priv),
-					 addr, DPAA2_ETH_RX_BUF_SIZE,
-					 bpid);
-	}
-
-release_bufs:
-	/* In case the portal is busy, retry until successful */
-	while ((err = dpaa2_io_service_release(ch->dpio, bpid,
-					       buf_array, i)) == -EBUSY)
-		cpu_relax();
-
-	/* If release command failed, clean up and bail out;
-	 * not much else we can do about it
-	 */
-	if (err) {
-		free_bufs(priv, buf_array, i);
-		return 0;
-	}
-
-	return i;
-
-err_map:
-	skb_free_frag(buf);
-err_alloc:
-	/* If we managed to allocate at least some buffers,
-	 * release them to hardware
-	 */
-	if (i)
-		goto release_bufs;
-
-	return 0;
-}
-
-static int seed_pool(struct dpaa2_eth_priv *priv, u16 bpid)
-{
-	int i, j;
-	int new_count;
-
-	/* This is the lazy seeding of Rx buffer pools.
 * add_bufs() is also used on the Rx hotpath and calls
-	 * napi_alloc_frag(). The trouble with that is that it in turn ends up
-	 * calling this_cpu_ptr(), which mandates execution in atomic context.
-	 * Rather than splitting up the code, do a one-off preempt disable.
-	 */
-	preempt_disable();
-	for (j = 0; j < priv->num_channels; j++) {
-		for (i = 0; i < DPAA2_ETH_NUM_BUFS;
-		     i += DPAA2_ETH_BUFS_PER_CMD) {
-			new_count = add_bufs(priv, priv->channel[j], bpid);
-			priv->channel[j]->buf_count += new_count;
-
-			if (new_count < DPAA2_ETH_BUFS_PER_CMD) {
-				preempt_enable();
-				return -ENOMEM;
-			}
-		}
-	}
-	preempt_enable();
-
-	return 0;
-}
-
-/* Drain the specified number of buffers from the DPNI's private buffer pool.
- * @count must not exceed DPAA2_ETH_BUFS_PER_CMD
- */
-static void drain_bufs(struct dpaa2_eth_priv *priv, int count)
-{
-	u64 buf_array[DPAA2_ETH_BUFS_PER_CMD];
-	int ret;
-
-	do {
-		ret = dpaa2_io_service_acquire(NULL, priv->bpid,
-					       buf_array, count);
-		if (ret < 0) {
-			netdev_err(priv->net_dev, "dpaa2_io_service_acquire() failed\n");
-			return;
-		}
-		free_bufs(priv, buf_array, ret);
-	} while (ret);
-}
-
-static void drain_pool(struct dpaa2_eth_priv *priv)
-{
-	int i;
-
-	drain_bufs(priv, DPAA2_ETH_BUFS_PER_CMD);
-	drain_bufs(priv, 1);
-
-	for (i = 0; i < priv->num_channels; i++)
-		priv->channel[i]->buf_count = 0;
-}
-
-/* Function is called from softirq context only, so we don't need to guard
- * the access to the channel's buffer count
- */
-static int refill_pool(struct dpaa2_eth_priv *priv,
-		       struct dpaa2_eth_channel *ch,
-		       u16 bpid)
-{
-	int new_count;
-
-	if (likely(ch->buf_count >= DPAA2_ETH_REFILL_THRESH))
-		return 0;
-
-	do {
-		new_count = add_bufs(priv, ch, bpid);
-		if (unlikely(!new_count)) {
-			/* Out of memory; abort for now, we'll try later on */
-			break;
-		}
-		ch->buf_count += new_count;
-	} while (ch->buf_count < DPAA2_ETH_NUM_BUFS);
-
-	if (unlikely(ch->buf_count < DPAA2_ETH_NUM_BUFS))
-		return -ENOMEM;
-
-	return 0;
-}
-
-static int pull_channel(struct dpaa2_eth_channel *ch)
-{
-	int err;
-	int dequeues = -1;
-
-	/* Retry while portal is busy */
-	do {
-		err = dpaa2_io_service_pull_channel(ch->dpio, ch->ch_id,
-						    ch->store);
-		dequeues++;
-		cpu_relax();
-	} while (err == -EBUSY);
-
-	ch->stats.dequeue_portal_busy += dequeues;
-	if (unlikely(err))
-		ch->stats.pull_err++;
-
-	return err;
-}
-
-/* NAPI poll routine
- *
- * Frames are dequeued from the QMan channel associated with this NAPI context.
- * Rx, Tx confirmation and (if configured) Rx error frames all count
- * towards the NAPI budget.
- */
-static int dpaa2_eth_poll(struct napi_struct *napi, int budget)
-{
-	struct dpaa2_eth_channel *ch;
-	int cleaned = 0, store_cleaned;
-	struct dpaa2_eth_priv *priv;
-	int err;
-
-	ch = container_of(napi, struct dpaa2_eth_channel, napi);
-	priv = ch->priv;
-
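-	/* Each iteration pulls up to one store's worth of frames from the
-	 * channel, then processes everything placed in the store
-	 */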
-	while (cleaned < budget) {
-		err = pull_channel(ch);
-		if (unlikely(err))
-			break;
-
-		/* Refill pool if appropriate */
-		refill_pool(priv, ch, priv->bpid);
-
-		store_cleaned = consume_frames(ch);
-		cleaned += store_cleaned;
-
-		/* If we have enough budget left for a full store,
-		 * try a new pull dequeue, otherwise we're done here
-		 */
-		if (store_cleaned == 0 ||
-		    cleaned > budget - DPAA2_ETH_STORE_SIZE)
-			break;
-	}
-
-	if (cleaned < budget && napi_complete_done(napi, cleaned)) {
-		/* Re-enable data available notifications */
-		do {
-			err = dpaa2_io_service_rearm(ch->dpio, &ch->nctx);
-			cpu_relax();
-		} while (err == -EBUSY);
-		WARN_ONCE(err, "CDAN notifications rearm failed on core %d",
-			  ch->nctx.desired_cpu);
-	}
-
-	ch->stats.frames += cleaned;
-
-	return cleaned;
-}
-
-static void enable_ch_napi(struct dpaa2_eth_priv *priv)
-{
-	struct dpaa2_eth_channel *ch;
-	int i;
-
-	for (i = 0; i < priv->num_channels; i++) {
-		ch = priv->channel[i];
-		napi_enable(&ch->napi);
-	}
-}
-
-static void disable_ch_napi(struct dpaa2_eth_priv *priv)
-{
-	struct dpaa2_eth_channel *ch;
-	int i;
-
-	for (i = 0; i < priv->num_channels; i++) {
-		ch = priv->channel[i];
-		napi_disable(&ch->napi);
-	}
-}
-
-static int link_state_update(struct dpaa2_eth_priv *priv)
-{
-	struct dpni_link_state state;
-	int err;
-
-	err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
-	if (unlikely(err)) {
-		netdev_err(priv->net_dev,
-			   "dpni_get_link_state() failed\n");
-		return err;
-	}
-
-	/* Check link state; speed / duplex changes are not treated yet */
-	if (priv->link_state.up == state.up)
-		return 0;
-
-	priv->link_state = state;
-	if (state.up) {
-		netif_carrier_on(priv->net_dev);
-		netif_tx_start_all_queues(priv->net_dev);
-	} else {
-		netif_tx_stop_all_queues(priv->net_dev);
-		netif_carrier_off(priv->net_dev);
-	}
-
-	netdev_info(priv->net_dev, "Link Event: state %s\n",
-		    state.up ? "up" : "down");
-
-	return 0;
-}
-
-static int dpaa2_eth_open(struct net_device *net_dev)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	int err;
-
-	err = seed_pool(priv, priv->bpid);
-	if (err) {
-		/* Not much to do; the buffer pool, though not filled up,
-		 * may still contain some buffers which would enable us
-		 * to limp on.
-		 */
-		netdev_err(net_dev, "Buffer seeding failed for DPBP %d (bpid=%d)\n",
-			   priv->dpbp_dev->obj_desc.id, priv->bpid);
-	}
-
-	/* We'll only start the txqs when the link is actually ready; make sure
-	 * we don't race against the link up notification, which may come
-	 * immediately after dpni_enable().
-	 */
-	netif_tx_stop_all_queues(net_dev);
-	enable_ch_napi(priv);
-	/* Also, explicitly set carrier off, otherwise netif_carrier_ok() will
-	 * return true and cause 'ip link show' to report the LOWER_UP flag,
-	 * even though the link notification wasn't even received.
-	 */
-	netif_carrier_off(net_dev);
-
-	err = dpni_enable(priv->mc_io, 0, priv->mc_token);
-	if (err < 0) {
-		netdev_err(net_dev, "dpni_enable() failed\n");
-		goto enable_err;
-	}
-
-	/* If the DPMAC object has already processed the link up interrupt,
-	 * we have to learn the link state ourselves.
-	 */
-	err = link_state_update(priv);
-	if (err < 0) {
-		netdev_err(net_dev, "Can't update link state\n");
-		goto link_state_err;
-	}
-
-	return 0;
-
-link_state_err:
-enable_err:
-	disable_ch_napi(priv);
-	drain_pool(priv);
-	return err;
-}
-
-/* The DPIO store must be empty when this is called,
- * as it is at the end of every NAPI cycle.
- */
-static u32 drain_channel(struct dpaa2_eth_priv *priv,
-			 struct dpaa2_eth_channel *ch)
-{
-	u32 drained = 0, total = 0;
-
-	do {
-		pull_channel(ch);
-		drained = consume_frames(ch);
-		total += drained;
-	} while (drained);
-
-	return total;
-}
-
-static u32 drain_ingress_frames(struct dpaa2_eth_priv *priv)
-{
-	struct dpaa2_eth_channel *ch;
-	int i;
-	u32 drained = 0;
-
-	for (i = 0; i < priv->num_channels; i++) {
-		ch = priv->channel[i];
-		drained += drain_channel(priv, ch);
-	}
-
-	return drained;
-}
-
-static int dpaa2_eth_stop(struct net_device *net_dev)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	int dpni_enabled;
-	int retries = 10;
-	u32 drained;
-
-	netif_tx_stop_all_queues(net_dev);
-	netif_carrier_off(net_dev);
-
-	/* Loop while dpni_disable() attempts to drain the egress FQs
-	 * and confirm them back to us.
-	 */
-	do {
-		dpni_disable(priv->mc_io, 0, priv->mc_token);
-		dpni_is_enabled(priv->mc_io, 0, priv->mc_token, &dpni_enabled);
-		if (dpni_enabled)
-			/* Allow the hardware some slack */
-			msleep(100);
-	} while (dpni_enabled && --retries);
-	if (!retries) {
-		netdev_warn(net_dev, "Retry count exceeded disabling DPNI\n");
-		/* Must go on and disable NAPI nonetheless, so we don't crash at
-		 * the next "ifconfig up"
-		 */
-	}
-
-	/* Wait for NAPI to complete on every core and disable it.
-	 * In particular, this will also prevent NAPI from being rescheduled if
-	 * a new CDAN is serviced, effectively discarding the CDAN. We therefore
-	 * don't even need to disarm the channels, except perhaps for the case
-	 * of a huge coalescing value.
-	 */
-	disable_ch_napi(priv);
-
-	/* Manually drain the Rx and TxConf queues */
-	drained = drain_ingress_frames(priv);
-	if (drained)
-		netdev_dbg(net_dev, "Drained %d frames.\n", drained);
-
-	/* Empty the buffer pool */
-	drain_pool(priv);
-
-	return 0;
-}
-
-static int dpaa2_eth_init(struct net_device *net_dev)
-{
-	u64 supported = 0;
-	u64 not_supported = 0;
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	u32 options = priv->dpni_attrs.options;
-
-	/* Capabilities listing */
-	supported |= IFF_LIVE_ADDR_CHANGE;
-
-	if (options & DPNI_OPT_NO_MAC_FILTER)
-		not_supported |= IFF_UNICAST_FLT;
-	else
-		supported |= IFF_UNICAST_FLT;
-
-	net_dev->priv_flags |= supported;
-	net_dev->priv_flags &= ~not_supported;
-
-	/* Features */
-	net_dev->features = NETIF_F_RXCSUM |
-			    NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM |
-			    NETIF_F_SG | NETIF_F_HIGHDMA |
-			    NETIF_F_LLTX;
-	net_dev->hw_features = net_dev->features;
-
-	return 0;
-}
-
-static int dpaa2_eth_set_addr(struct net_device *net_dev, void *addr)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	struct device *dev = net_dev->dev.parent;
-	int err;
-
-	err = eth_mac_addr(net_dev, addr);
-	if (err < 0) {
-		dev_err(dev, "eth_mac_addr() failed (%d)\n", err);
-		return err;
-	}
-
-	err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
-					net_dev->dev_addr);
-	if (err) {
-		dev_err(dev, "dpni_set_primary_mac_addr() failed (%d)\n", err);
-		return err;
-	}
-
-	return 0;
-}
-
-/* Fill in counters maintained by the GPP driver. These may be different from
- * the hardware counters obtained by ethtool.
- */
-static void dpaa2_eth_get_stats(struct net_device *net_dev,
-				struct rtnl_link_stats64 *stats)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	struct rtnl_link_stats64 *percpu_stats;
-	u64 *cpustats;
-	u64 *netstats = (u64 *)stats;
-	int i, j;
-	int num = sizeof(struct rtnl_link_stats64) / sizeof(u64);
-
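-	/* Fold the per-cpu counters by walking each per-cpu struct as a
-	 * flat array of u64s
-	 */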
-	for_each_possible_cpu(i) {
-		percpu_stats = per_cpu_ptr(priv->percpu_stats, i);
-		cpustats = (u64 *)percpu_stats;
-		for (j = 0; j < num; j++)
-			netstats[j] += cpustats[j];
-	}
-}
-
-/* Copy mac unicast addresses from @net_dev to @priv.
- * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
- */
-static void add_uc_hw_addr(const struct net_device *net_dev,
-			   struct dpaa2_eth_priv *priv)
-{
-	struct netdev_hw_addr *ha;
-	int err;
-
-	netdev_for_each_uc_addr(ha, net_dev) {
-		err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
-					ha->addr);
-		if (err)
-			netdev_warn(priv->net_dev,
-				    "Could not add ucast MAC %pM to the filtering table (err %d)\n",
-				    ha->addr, err);
-	}
-}
-
-/* Copy mac multicast addresses from @net_dev to @priv
- * Its sole purpose is to make dpaa2_eth_set_rx_mode() more readable.
- */
-static void add_mc_hw_addr(const struct net_device *net_dev,
-			   struct dpaa2_eth_priv *priv)
-{
-	struct netdev_hw_addr *ha;
-	int err;
-
-	netdev_for_each_mc_addr(ha, net_dev) {
-		err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token,
-					ha->addr);
-		if (err)
-			netdev_warn(priv->net_dev,
-				    "Could not add mcast MAC %pM to the filtering table (err %d)\n",
-				    ha->addr, err);
-	}
-}
-
-static void dpaa2_eth_set_rx_mode(struct net_device *net_dev)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	int uc_count = netdev_uc_count(net_dev);
-	int mc_count = netdev_mc_count(net_dev);
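-		/* The hardware timestamp counts in units of the PTP clock
-		 * period; convert it to nanoseconds
-		 */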
-	u8 max_mac = priv->dpni_attrs.mac_filter_entries;
-	u32 options = priv->dpni_attrs.options;
-	u16 mc_token = priv->mc_token;
-	struct fsl_mc_io *mc_io = priv->mc_io;
-	int err;
-
-	/* Basic sanity checks; these probably indicate a misconfiguration */
-	if (options & DPNI_OPT_NO_MAC_FILTER && max_mac != 0)
-		netdev_info(net_dev,
-			    "mac_filter_entries=%d, DPNI_OPT_NO_MAC_FILTER option must be disabled\n",
-			    max_mac);
-
-	/* Force promiscuous if the uc or mc counts exceed our capabilities. */
-	if (uc_count > max_mac) {
-		netdev_info(net_dev,
-			    "Unicast addr count reached %d, max allowed is %d; forcing promisc\n",
-			    uc_count, max_mac);
-		goto force_promisc;
-	}
-	if (mc_count + uc_count > max_mac) {
-		netdev_info(net_dev,
-			    "Unicast + multicast addr count reached %d, max allowed is %d; forcing promisc\n",
-			    uc_count + mc_count, max_mac);
-		goto force_mc_promisc;
-	}
-
-	/* Adjust promisc settings due to flag combinations */
-	if (net_dev->flags & IFF_PROMISC)
-		goto force_promisc;
-	if (net_dev->flags & IFF_ALLMULTI) {
-		/* First, rebuild unicast filtering table. This should be done
-		 * in promisc mode, in order to avoid frame loss while we
-		 * progressively add entries to the table.
-		 * We don't know whether we had been in promisc already, and
-		 * making an MC call to find out is expensive; so set uc promisc
-		 * nonetheless.
-		 */
-		err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
-		if (err)
-			netdev_warn(net_dev, "Can't set uc promisc\n");
-
-		/* Actual uc table reconstruction. */
-		err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 0);
-		if (err)
-			netdev_warn(net_dev, "Can't clear uc filters\n");
-		add_uc_hw_addr(net_dev, priv);
-
-		/* Finally, clear uc promisc and set mc promisc as requested. */
-		err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
-		if (err)
-			netdev_warn(net_dev, "Can't clear uc promisc\n");
-		goto force_mc_promisc;
-	}
-
-	/* Neither unicast nor multicast promisc will be on... eventually.
-	 * For now, rebuild mac filtering tables while forcing both of them on.
-	 */
-	err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
-	if (err)
-		netdev_warn(net_dev, "Can't set uc promisc (%d)\n", err);
-	err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
-	if (err)
-		netdev_warn(net_dev, "Can't set mc promisc (%d)\n", err);
-
-	/* Actual mac filtering tables reconstruction */
-	err = dpni_clear_mac_filters(mc_io, 0, mc_token, 1, 1);
-	if (err)
-		netdev_warn(net_dev, "Can't clear mac filters\n");
-	add_mc_hw_addr(net_dev, priv);
-	add_uc_hw_addr(net_dev, priv);
-
-	/* Now we can clear both ucast and mcast promisc without risking
-	 * the loss of legitimate frames.
-	 */
-	err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 0);
-	if (err)
-		netdev_warn(net_dev, "Can't clear ucast promisc\n");
-	err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 0);
-	if (err)
-		netdev_warn(net_dev, "Can't clear mcast promisc\n");
-
-	return;
-
-force_promisc:
-	err = dpni_set_unicast_promisc(mc_io, 0, mc_token, 1);
-	if (err)
-		netdev_warn(net_dev, "Can't set ucast promisc\n");
-force_mc_promisc:
-	err = dpni_set_multicast_promisc(mc_io, 0, mc_token, 1);
-	if (err)
-		netdev_warn(net_dev, "Can't set mcast promisc\n");
-}
-
-static int dpaa2_eth_set_features(struct net_device *net_dev,
-				  netdev_features_t features)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	netdev_features_t changed = features ^ net_dev->features;
-	bool enable;
-	int err;
-
-	if (changed & NETIF_F_RXCSUM) {
-		enable = !!(features & NETIF_F_RXCSUM);
-		err = set_rx_csum(priv, enable);
-		if (err)
-			return err;
-	}
-
-	if (changed & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)) {
-		enable = !!(features & (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM));
-		err = set_tx_csum(priv, enable);
-		if (err)
-			return err;
-	}
-
-	return 0;
-}
-
-static int dpaa2_eth_ts_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(dev);
-	struct hwtstamp_config config;
-
-	if (copy_from_user(&config, rq->ifr_data, sizeof(config)))
-		return -EFAULT;
-
-	switch (config.tx_type) {
-	case HWTSTAMP_TX_OFF:
-		priv->tx_tstamp = false;
-		break;
-	case HWTSTAMP_TX_ON:
-		priv->tx_tstamp = true;
-		break;
-	default:
-		return -ERANGE;
-	}
-
-	if (config.rx_filter == HWTSTAMP_FILTER_NONE) {
-		priv->rx_tstamp = false;
-	} else {
-		priv->rx_tstamp = true;
-		/* TS is set for all frame types, not only those requested */
-		config.rx_filter = HWTSTAMP_FILTER_ALL;
-	}
-
-	return copy_to_user(rq->ifr_data, &config, sizeof(config)) ?
-			-EFAULT : 0;
-}
-
-static int dpaa2_eth_ioctl(struct net_device *dev, struct ifreq *rq, int cmd)
-{
-	if (cmd == SIOCSHWTSTAMP)
-		return dpaa2_eth_ts_ioctl(dev, rq, cmd);
-
-	return -EINVAL;
-}
-
-static const struct net_device_ops dpaa2_eth_ops = {
-	.ndo_open = dpaa2_eth_open,
-	.ndo_start_xmit = dpaa2_eth_tx,
-	.ndo_stop = dpaa2_eth_stop,
-	.ndo_init = dpaa2_eth_init,
-	.ndo_set_mac_address = dpaa2_eth_set_addr,
-	.ndo_get_stats64 = dpaa2_eth_get_stats,
-	.ndo_set_rx_mode = dpaa2_eth_set_rx_mode,
-	.ndo_set_features = dpaa2_eth_set_features,
-	.ndo_do_ioctl = dpaa2_eth_ioctl,
-};
-
-static void cdan_cb(struct dpaa2_io_notification_ctx *ctx)
-{
-	struct dpaa2_eth_channel *ch;
-
-	ch = container_of(ctx, struct dpaa2_eth_channel, nctx);
-
-	/* Update NAPI statistics */
-	ch->stats.cdan++;
-
-	napi_schedule_irqoff(&ch->napi);
-}
-
-/* Allocate and configure a DPCON object */
-static struct fsl_mc_device *setup_dpcon(struct dpaa2_eth_priv *priv)
-{
-	struct fsl_mc_device *dpcon;
-	struct device *dev = priv->net_dev->dev.parent;
-	struct dpcon_attr attrs;
-	int err;
-
-	err = fsl_mc_object_allocate(to_fsl_mc_device(dev),
-				     FSL_MC_POOL_DPCON, &dpcon);
-	if (err) {
-		dev_info(dev, "Not enough DPCONs, will go on as-is\n");
-		return NULL;
-	}
-
-	err = dpcon_open(priv->mc_io, 0, dpcon->obj_desc.id, &dpcon->mc_handle);
-	if (err) {
-		dev_err(dev, "dpcon_open() failed\n");
-		goto free;
-	}
-
-	err = dpcon_reset(priv->mc_io, 0, dpcon->mc_handle);
-	if (err) {
-		dev_err(dev, "dpcon_reset() failed\n");
-		goto close;
-	}
-
-	err = dpcon_get_attributes(priv->mc_io, 0, dpcon->mc_handle, &attrs);
-	if (err) {
-		dev_err(dev, "dpcon_get_attributes() failed\n");
-		goto close;
-	}
-
-	err = dpcon_enable(priv->mc_io, 0, dpcon->mc_handle);
-	if (err) {
-		dev_err(dev, "dpcon_enable() failed\n");
-		goto close;
-	}
-
-	return dpcon;
-
-close:
-	dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
-free:
-	fsl_mc_object_free(dpcon);
-
-	return NULL;
-}
-
-static void free_dpcon(struct dpaa2_eth_priv *priv,
-		       struct fsl_mc_device *dpcon)
-{
-	dpcon_disable(priv->mc_io, 0, dpcon->mc_handle);
-	dpcon_close(priv->mc_io, 0, dpcon->mc_handle);
-	fsl_mc_object_free(dpcon);
-}
-
-static struct dpaa2_eth_channel *
-alloc_channel(struct dpaa2_eth_priv *priv)
-{
-	struct dpaa2_eth_channel *channel;
-	struct dpcon_attr attr;
-	struct device *dev = priv->net_dev->dev.parent;
-	int err;
-
-	channel = kzalloc(sizeof(*channel), GFP_KERNEL);
-	if (!channel)
-		return NULL;
-
-	channel->dpcon = setup_dpcon(priv);
-	if (!channel->dpcon)
-		goto err_setup;
-
-	err = dpcon_get_attributes(priv->mc_io, 0, channel->dpcon->mc_handle,
-				   &attr);
-	if (err) {
-		dev_err(dev, "dpcon_get_attributes() failed\n");
-		goto err_get_attr;
-	}
-
-	channel->dpcon_id = attr.id;
-	channel->ch_id = attr.qbman_ch_id;
-	channel->priv = priv;
-
-	return channel;
-
-err_get_attr:
-	free_dpcon(priv, channel->dpcon);
-err_setup:
-	kfree(channel);
-	return NULL;
-}
-
-static void free_channel(struct dpaa2_eth_priv *priv,
-			 struct dpaa2_eth_channel *channel)
-{
-	free_dpcon(priv, channel->dpcon);
-	kfree(channel);
-}
-
-/* DPIO setup: allocate and configure QBMan channels, setup core affinity
- * and register data availability notifications
- */
-static int setup_dpio(struct dpaa2_eth_priv *priv)
-{
-	struct dpaa2_io_notification_ctx *nctx;
-	struct dpaa2_eth_channel *channel;
-	struct dpcon_notification_cfg dpcon_notif_cfg;
-	struct device *dev = priv->net_dev->dev.parent;
-	int i, err;
-
-	/* We want the ability to spread ingress traffic (RX, TX conf) to as
-	 * many cores as possible, so we need one channel for each core
-	 * (unless there are fewer queues than cores, in which case the extra
-	 * channels would be wasted).
-	 * Allocate one channel per core and register it to the core's
-	 * affine DPIO. If not enough channels are available for all cores
-	 * or if some cores don't have an affine DPIO, there will be no
-	 * ingress frame processing on those cores.
-	 */
-	cpumask_clear(&priv->dpio_cpumask);
-	for_each_online_cpu(i) {
-		/* Try to allocate a channel */
-		channel = alloc_channel(priv);
-		if (!channel) {
-			dev_info(dev,
-				 "No affine channel for cpu %d and above\n", i);
-			err = -ENODEV;
-			goto err_alloc_ch;
-		}
-
-		priv->channel[priv->num_channels] = channel;
-
-		nctx = &channel->nctx;
-		nctx->is_cdan = 1;
-		nctx->cb = cdan_cb;
-		nctx->id = channel->ch_id;
-		nctx->desired_cpu = i;
-
-		/* Register the new context */
-		channel->dpio = dpaa2_io_service_select(i);
-		err = dpaa2_io_service_register(channel->dpio, nctx);
-		if (err) {
-			dev_dbg(dev, "No affine DPIO for cpu %d\n", i);
-			/* If no affine DPIO for this core, there's probably
-			 * none available for next cores either. Signal we want
-			 * to retry later, in case the DPIO devices weren't
-			 * probed yet.
-			 */
-			err = -EPROBE_DEFER;
-			goto err_service_reg;
-		}
-
-		/* Register DPCON notification with MC */
-		dpcon_notif_cfg.dpio_id = nctx->dpio_id;
-		dpcon_notif_cfg.priority = 0;
-		dpcon_notif_cfg.user_ctx = nctx->qman64;
-		err = dpcon_set_notification(priv->mc_io, 0,
-					     channel->dpcon->mc_handle,
-					     &dpcon_notif_cfg);
-		if (err) {
-			dev_err(dev, "dpcon_set_notification failed()\n");
-			goto err_set_cdan;
-		}
-
-		/* If we managed to allocate a channel and also found an affine
-		 * DPIO for this core, add it to the final mask
-		 */
-		cpumask_set_cpu(i, &priv->dpio_cpumask);
-		priv->num_channels++;
-
-		/* Stop if we already have enough channels to accommodate all
-		 * RX and TX conf queues
-		 */
-		if (priv->num_channels == dpaa2_eth_queue_count(priv))
-			break;
-	}
-
-	return 0;
-
-err_set_cdan:
-	dpaa2_io_service_deregister(channel->dpio, nctx);
-err_service_reg:
-	free_channel(priv, channel);
-err_alloc_ch:
-	if (cpumask_empty(&priv->dpio_cpumask)) {
-		dev_err(dev, "No cpu with an affine DPIO/DPCON\n");
-		return err;
-	}
-
-	dev_info(dev, "Cores %*pbl available for processing ingress traffic\n",
-		 cpumask_pr_args(&priv->dpio_cpumask));
-
-	return 0;
-}
-
-static void free_dpio(struct dpaa2_eth_priv *priv)
-{
-	int i;
-	struct dpaa2_eth_channel *ch;
-
-	/* deregister CDAN notifications and free channels */
-	for (i = 0; i < priv->num_channels; i++) {
-		ch = priv->channel[i];
-		dpaa2_io_service_deregister(ch->dpio, &ch->nctx);
-		free_channel(priv, ch);
-	}
-}
-
-static struct dpaa2_eth_channel *get_affine_channel(struct dpaa2_eth_priv *priv,
-						    int cpu)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	int i;
-
-	for (i = 0; i < priv->num_channels; i++)
-		if (priv->channel[i]->nctx.desired_cpu == cpu)
-			return priv->channel[i];
-
-	/* We should never get here. Issue a warning and return
-	 * the first channel, because it's still better than nothing
-	 */
-	dev_warn(dev, "No affine channel found for cpu %d\n", cpu);
-
-	return priv->channel[0];
-}
-
-static void set_fq_affinity(struct dpaa2_eth_priv *priv)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	struct cpumask xps_mask;
-	struct dpaa2_eth_fq *fq;
-	int rx_cpu, txc_cpu;
-	int i, err;
-
-	/* For each FQ, pick one channel/CPU to deliver frames to.
-	 * This may well change at runtime, either through irqbalance or
-	 * through direct user intervention.
-	 */
-	rx_cpu = txc_cpu = cpumask_first(&priv->dpio_cpumask);
-
-	for (i = 0; i < priv->num_fqs; i++) {
-		fq = &priv->fq[i];
-		switch (fq->type) {
-		case DPAA2_RX_FQ:
-			fq->target_cpu = rx_cpu;
-			rx_cpu = cpumask_next(rx_cpu, &priv->dpio_cpumask);
-			if (rx_cpu >= nr_cpu_ids)
-				rx_cpu = cpumask_first(&priv->dpio_cpumask);
-			break;
-		case DPAA2_TX_CONF_FQ:
-			fq->target_cpu = txc_cpu;
-
-			/* Tell the stack to affine the Tx queue associated
-			 * with this confirmation queue to txc_cpu
-			 */
-			cpumask_clear(&xps_mask);
-			cpumask_set_cpu(txc_cpu, &xps_mask);
-			err = netif_set_xps_queue(priv->net_dev, &xps_mask,
-						  fq->flowid);
-			if (err)
-				dev_err(dev, "Error setting XPS queue\n");
-
-			txc_cpu = cpumask_next(txc_cpu, &priv->dpio_cpumask);
-			if (txc_cpu >= nr_cpu_ids)
-				txc_cpu = cpumask_first(&priv->dpio_cpumask);
-			break;
-		default:
-			dev_err(dev, "Unknown FQ type: %d\n", fq->type);
-		}
-		fq->channel = get_affine_channel(priv, fq->target_cpu);
-	}
-}
-
-static void setup_fqs(struct dpaa2_eth_priv *priv)
-{
-	int i;
-
-	/* We have one TxConf FQ per Tx flow.
-	 * The number of Tx and Rx queues is the same.
-	 * Tx queues come first in the fq array.
-	 */
-	for (i = 0; i < dpaa2_eth_queue_count(priv); i++) {
-		priv->fq[priv->num_fqs].type = DPAA2_TX_CONF_FQ;
-		priv->fq[priv->num_fqs].consume = dpaa2_eth_tx_conf;
-		priv->fq[priv->num_fqs++].flowid = (u16)i;
-	}
-
-	for (i = 0; i < dpaa2_eth_queue_count(priv); i++) {
-		priv->fq[priv->num_fqs].type = DPAA2_RX_FQ;
-		priv->fq[priv->num_fqs].consume = dpaa2_eth_rx;
-		priv->fq[priv->num_fqs++].flowid = (u16)i;
-	}
-
-	/* For each FQ, decide on which core to process incoming frames */
-	set_fq_affinity(priv);
-}
-
-/* Allocate and configure one buffer pool for each interface */
-static int setup_dpbp(struct dpaa2_eth_priv *priv)
-{
-	int err;
-	struct fsl_mc_device *dpbp_dev;
-	struct device *dev = priv->net_dev->dev.parent;
-	struct dpbp_attr dpbp_attrs;
-
-	err = fsl_mc_object_allocate(to_fsl_mc_device(dev), FSL_MC_POOL_DPBP,
-				     &dpbp_dev);
-	if (err) {
-		dev_err(dev, "DPBP device allocation failed\n");
-		return err;
-	}
-
-	priv->dpbp_dev = dpbp_dev;
-
-	err = dpbp_open(priv->mc_io, 0, priv->dpbp_dev->obj_desc.id,
-			&dpbp_dev->mc_handle);
-	if (err) {
-		dev_err(dev, "dpbp_open() failed\n");
-		goto err_open;
-	}
-
-	err = dpbp_reset(priv->mc_io, 0, dpbp_dev->mc_handle);
-	if (err) {
-		dev_err(dev, "dpbp_reset() failed\n");
-		goto err_reset;
-	}
-
-	err = dpbp_enable(priv->mc_io, 0, dpbp_dev->mc_handle);
-	if (err) {
-		dev_err(dev, "dpbp_enable() failed\n");
-		goto err_enable;
-	}
-
-	err = dpbp_get_attributes(priv->mc_io, 0, dpbp_dev->mc_handle,
-				  &dpbp_attrs);
-	if (err) {
-		dev_err(dev, "dpbp_get_attributes() failed\n");
-		goto err_get_attr;
-	}
-	priv->bpid = dpbp_attrs.bpid;
-
-	return 0;
-
-err_get_attr:
-	dpbp_disable(priv->mc_io, 0, dpbp_dev->mc_handle);
-err_enable:
-err_reset:
-	dpbp_close(priv->mc_io, 0, dpbp_dev->mc_handle);
-err_open:
-	fsl_mc_object_free(dpbp_dev);
-
-	return err;
-}
-
-static void free_dpbp(struct dpaa2_eth_priv *priv)
-{
-	drain_pool(priv);
-	dpbp_disable(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
-	dpbp_close(priv->mc_io, 0, priv->dpbp_dev->mc_handle);
-	fsl_mc_object_free(priv->dpbp_dev);
-}
-
-static int set_buffer_layout(struct dpaa2_eth_priv *priv)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	struct dpni_buffer_layout buf_layout = {0};
-	int err;
-
-	/* We need to check for WRIOP version 1.0.0; depending on the MC
-	 * version, this number is not always reported correctly on rev1,
-	 * so check for both alternatives.
-	 */
-	if (priv->dpni_attrs.wriop_version == DPAA2_WRIOP_VERSION(0, 0, 0) ||
-	    priv->dpni_attrs.wriop_version == DPAA2_WRIOP_VERSION(1, 0, 0))
-		priv->rx_buf_align = DPAA2_ETH_RX_BUF_ALIGN_REV1;
-	else
-		priv->rx_buf_align = DPAA2_ETH_RX_BUF_ALIGN;
-
-	/* tx buffer */
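-	/* Reserve software annotation space for the skb and scatterlist
-	 * backpointers needed on Tx confirmation (struct dpaa2_eth_swa)
-	 */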
-	buf_layout.private_data_size = DPAA2_ETH_SWA_SIZE;
-	buf_layout.pass_timestamp = true;
-	buf_layout.options = DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE |
-			     DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
-	err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
-				     DPNI_QUEUE_TX, &buf_layout);
-	if (err) {
-		dev_err(dev, "dpni_set_buffer_layout(TX) failed\n");
-		return err;
-	}
-
-	/* tx-confirm buffer */
-	buf_layout.options = DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
-	err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
-				     DPNI_QUEUE_TX_CONFIRM, &buf_layout);
-	if (err) {
-		dev_err(dev, "dpni_set_buffer_layout(TX_CONF) failed\n");
-		return err;
-	}
-
-	/* Now that we've set our tx buffer layout, retrieve the minimum
-	 * required tx data offset.
-	 */
-	err = dpni_get_tx_data_offset(priv->mc_io, 0, priv->mc_token,
-				      &priv->tx_data_offset);
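-	/* Repeatedly acquire buffers back from QBMan and free them, until
-	 * the acquire command returns none
-	 */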
-	if (err) {
-		dev_err(dev, "dpni_get_tx_data_offset() failed\n");
-		return err;
-	}
-
-	if ((priv->tx_data_offset % 64) != 0)
-		dev_warn(dev, "Tx data offset (%d) not a multiple of 64B\n",
-			 priv->tx_data_offset);
-
-	/* rx buffer */
-	buf_layout.pass_frame_status = true;
-	buf_layout.pass_parser_result = true;
-	buf_layout.data_align = priv->rx_buf_align;
-	buf_layout.data_head_room = dpaa2_eth_rx_head_room(priv);
-	buf_layout.private_data_size = 0;
-	buf_layout.options = DPNI_BUF_LAYOUT_OPT_PARSER_RESULT |
-			     DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
-			     DPNI_BUF_LAYOUT_OPT_DATA_ALIGN |
-			     DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM |
-			     DPNI_BUF_LAYOUT_OPT_TIMESTAMP;
-	err = dpni_set_buffer_layout(priv->mc_io, 0, priv->mc_token,
-				     DPNI_QUEUE_RX, &buf_layout);
-	if (err) {
-		dev_err(dev, "dpni_set_buffer_layout(RX) failed\n");
-		return err;
-	}
-
-	return 0;
-}
-
-/* Configure the DPNI object this interface is associated with */
-static int setup_dpni(struct fsl_mc_device *ls_dev)
-{
-	struct device *dev = &ls_dev->dev;
-	struct dpaa2_eth_priv *priv;
-	struct net_device *net_dev;
-	int err;
-
-	net_dev = dev_get_drvdata(dev);
-	priv = netdev_priv(net_dev);
-
-	/* get a handle for the DPNI object */
-	err = dpni_open(priv->mc_io, 0, ls_dev->obj_desc.id, &priv->mc_token);
-	if (err) {
-		dev_err(dev, "dpni_open() failed\n");
-		return err;
-	}
-
-	/* Check if we can work with this DPNI object */
-	err = dpni_get_api_version(priv->mc_io, 0, &priv->dpni_ver_major,
-				   &priv->dpni_ver_minor);
-	if (err) {
-		dev_err(dev, "dpni_get_api_version() failed\n");
-		goto close;
-	}
-	if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_VER_MAJOR, DPNI_VER_MINOR) < 0) {
-		dev_err(dev, "DPNI version %u.%u not supported, need >= %u.%u\n",
-			priv->dpni_ver_major, priv->dpni_ver_minor,
-			DPNI_VER_MAJOR, DPNI_VER_MINOR);
-		err = -ENOTSUPP;
-		goto close;
-	}
-
-	ls_dev->mc_io = priv->mc_io;
-	ls_dev->mc_handle = priv->mc_token;
-
-	err = dpni_reset(priv->mc_io, 0, priv->mc_token);
-	if (err) {
-		dev_err(dev, "dpni_reset() failed\n");
-		goto close;
-	}
-
-	err = dpni_get_attributes(priv->mc_io, 0, priv->mc_token,
-				  &priv->dpni_attrs);
-	if (err) {
-		dev_err(dev, "dpni_get_attributes() failed (err=%d)\n", err);
-		goto close;
-	}
-
-	err = set_buffer_layout(priv);
-	if (err)
-		goto close;
-
-	return 0;
-
-close:
-	dpni_close(priv->mc_io, 0, priv->mc_token);
-
-	return err;
-}
-
-static void free_dpni(struct dpaa2_eth_priv *priv)
-{
-	int err;
-
-	err = dpni_reset(priv->mc_io, 0, priv->mc_token);
-	if (err)
-		netdev_warn(priv->net_dev, "dpni_reset() failed (err %d)\n",
-			    err);
-
-	dpni_close(priv->mc_io, 0, priv->mc_token);
-}
-
-static int setup_rx_flow(struct dpaa2_eth_priv *priv,
-			 struct dpaa2_eth_fq *fq)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	struct dpni_queue queue;
-	struct dpni_queue_id qid;
-	struct dpni_taildrop td;
-	int err;
-
-	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
-			     DPNI_QUEUE_RX, 0, fq->flowid, &queue, &qid);
-	if (err) {
-		dev_err(dev, "dpni_get_queue(RX) failed\n");
-		return err;
-	}
-
-	fq->fqid = qid.fqid;
-
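-	/* Deliver frames to this FQ's affine channel and store the FQ
-	 * pointer in the user context, to be retrieved on dequeue
-	 */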
-	queue.destination.id = fq->channel->dpcon_id;
-	queue.destination.type = DPNI_DEST_DPCON;
-	queue.destination.priority = 1;
-	queue.user_context = (u64)(uintptr_t)fq;
-	err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
-			     DPNI_QUEUE_RX, 0, fq->flowid,
-			     DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
-			     &queue);
-	if (err) {
-		dev_err(dev, "dpni_set_queue(RX) failed\n");
-		return err;
-	}
-
-	td.enable = 1;
-	td.threshold = DPAA2_ETH_TAILDROP_THRESH;
-	err = dpni_set_taildrop(priv->mc_io, 0, priv->mc_token, DPNI_CP_QUEUE,
-				DPNI_QUEUE_RX, 0, fq->flowid, &td);
-	if (err) {
-		dev_err(dev, "dpni_set_threshold() failed\n");
-		return err;
-	}
-
-	return 0;
-}
-
-static int setup_tx_flow(struct dpaa2_eth_priv *priv,
-			 struct dpaa2_eth_fq *fq)
-{
-	struct device *dev = priv->net_dev->dev.parent;
-	struct dpni_queue queue;
-	struct dpni_queue_id qid;
-	int err;
-
-	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
-			     DPNI_QUEUE_TX, 0, fq->flowid, &queue, &qid);
-	if (err) {
-		dev_err(dev, "dpni_get_queue(TX) failed\n");
-		return err;
-	}
-
-	fq->tx_qdbin = qid.qdbin;
-
-	err = dpni_get_queue(priv->mc_io, 0, priv->mc_token,
-			     DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
-			     &queue, &qid);
-	if (err) {
-		dev_err(dev, "dpni_get_queue(TX_CONF) failed\n");
-		return err;
-	}
-
-	fq->fqid = qid.fqid;
-
-	queue.destination.id = fq->channel->dpcon_id;
-	queue.destination.type = DPNI_DEST_DPCON;
-	queue.destination.priority = 0;
-	queue.user_context = (u64)(uintptr_t)fq;
-	err = dpni_set_queue(priv->mc_io, 0, priv->mc_token,
-			     DPNI_QUEUE_TX_CONFIRM, 0, fq->flowid,
-			     DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
-			     &queue);
-	if (err) {
-		dev_err(dev, "dpni_set_queue(TX_CONF) failed\n");
-		return err;
-	}
-
-	return 0;
-}
-
-/* Hash key is a 5-tuple: IPsrc, IPdst, IPnextproto, L4src, L4dst */
-static const struct dpaa2_eth_hash_fields hash_fields[] = {
-	{
-		/* IP header */
-		.rxnfc_field = RXH_IP_SRC,
-		.cls_prot = NET_PROT_IP,
-		.cls_field = NH_FLD_IP_SRC,
-		.size = 4,
-	}, {
-		.rxnfc_field = RXH_IP_DST,
-		.cls_prot = NET_PROT_IP,
-		.cls_field = NH_FLD_IP_DST,
-		.size = 4,
-	}, {
-		.rxnfc_field = RXH_L3_PROTO,
-		.cls_prot = NET_PROT_IP,
-		.cls_field = NH_FLD_IP_PROTO,
-		.size = 1,
-	}, {
-		/* Using UDP ports, this is functionally equivalent to raw
-		 * byte pairs from L4 header.
-		 */
-		.rxnfc_field = RXH_L4_B_0_1,
-		.cls_prot = NET_PROT_UDP,
-		.cls_field = NH_FLD_UDP_PORT_SRC,
-		.size = 2,
-	}, {
-		.rxnfc_field = RXH_L4_B_2_3,
-		.cls_prot = NET_PROT_UDP,
-		.cls_field = NH_FLD_UDP_PORT_DST,
-		.size = 2,
-	},
-};
-
-/* Set RX hash options
- * flags is a combination of RXH_ bits
- */
-static int dpaa2_eth_set_hash(struct net_device *net_dev, u64 flags)
-{
-	struct device *dev = net_dev->dev.parent;
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	struct dpkg_profile_cfg cls_cfg;
-	struct dpni_rx_tc_dist_cfg dist_cfg;
-	u8 *dma_mem;
-	int i;
-	int err = 0;
-
-	if (!dpaa2_eth_hash_enabled(priv)) {
-		dev_dbg(dev, "Hashing support is not enabled\n");
-		return 0;
-	}
-
-	memset(&cls_cfg, 0, sizeof(cls_cfg));
-
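-	/* Build one key extraction rule for each requested RXH_ field */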
-	for (i = 0; i < ARRAY_SIZE(hash_fields); i++) {
-		struct dpkg_extract *key =
-			&cls_cfg.extracts[cls_cfg.num_extracts];
-
-		if (!(flags & hash_fields[i].rxnfc_field))
-			continue;
-
-		if (cls_cfg.num_extracts >= DPKG_MAX_NUM_OF_EXTRACTS) {
-			dev_err(dev, "error adding key extraction rule, too many rules?\n");
-			return -E2BIG;
-		}
-
-		key->type = DPKG_EXTRACT_FROM_HDR;
-		key->extract.from_hdr.prot = hash_fields[i].cls_prot;
-		key->extract.from_hdr.type = DPKG_FULL_FIELD;
-		key->extract.from_hdr.field = hash_fields[i].cls_field;
-		cls_cfg.num_extracts++;
-
-		priv->rx_hash_fields |= hash_fields[i].rxnfc_field;
-	}
-
-	dma_mem = kzalloc(DPAA2_CLASSIFIER_DMA_SIZE, GFP_KERNEL);
-	if (!dma_mem)
-		return -ENOMEM;
-
-	err = dpni_prepare_key_cfg(&cls_cfg, dma_mem);
-	if (err) {
-		dev_err(dev, "dpni_prepare_key_cfg error %d\n", err);
-		goto err_prep_key;
-	}
-
-	memset(&dist_cfg, 0, sizeof(dist_cfg));
-
-	/* Prepare for setting the rx dist */
-	dist_cfg.key_cfg_iova = dma_map_single(dev, dma_mem,
-					       DPAA2_CLASSIFIER_DMA_SIZE,
-					       DMA_TO_DEVICE);
-	if (dma_mapping_error(dev, dist_cfg.key_cfg_iova)) {
-		dev_err(dev, "DMA mapping failed\n");
-		err = -ENOMEM;
-		goto err_dma_map;
-	}
-
-	dist_cfg.dist_size = dpaa2_eth_queue_count(priv);
-	dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
-
-	err = dpni_set_rx_tc_dist(priv->mc_io, 0, priv->mc_token, 0, &dist_cfg);
-	dma_unmap_single(dev, dist_cfg.key_cfg_iova,
-			 DPAA2_CLASSIFIER_DMA_SIZE, DMA_TO_DEVICE);
-	if (err)
-		dev_err(dev, "dpni_set_rx_tc_dist() error %d\n", err);
-
-err_dma_map:
-err_prep_key:
-	kfree(dma_mem);
-	return err;
-}
-
-/* Bind the DPNI to its needed objects and resources: buffer pool, DPIOs,
- * frame queues and channels
- */
-static int bind_dpni(struct dpaa2_eth_priv *priv)
-{
-	struct net_device *net_dev = priv->net_dev;
-	struct device *dev = net_dev->dev.parent;
-	struct dpni_pools_cfg pools_params;
-	struct dpni_error_cfg err_cfg;
-	int err = 0;
-	int i;
-
-	pools_params.num_dpbp = 1;
-	pools_params.pools[0].dpbp_id = priv->dpbp_dev->obj_desc.id;
-	pools_params.pools[0].backup_pool = 0;
-	pools_params.pools[0].buffer_size = DPAA2_ETH_RX_BUF_SIZE;
-	err = dpni_set_pools(priv->mc_io, 0, priv->mc_token, &pools_params);
-	if (err) {
-		dev_err(dev, "dpni_set_pools() failed\n");
-		return err;
-	}
-
-	/* have the interface implicitly distribute traffic based on
-	 * the default hash key
-	 */
-	err = dpaa2_eth_set_hash(net_dev, DPAA2_RXH_DEFAULT);
-	if (err)
-		dev_err(dev, "Failed to configure hashing\n");
-
-	/* Configure handling of error frames */
-	err_cfg.errors = DPAA2_FAS_RX_ERR_MASK;
-	err_cfg.set_frame_annotation = 1;
-	err_cfg.error_action = DPNI_ERROR_ACTION_DISCARD;
-	err = dpni_set_errors_behavior(priv->mc_io, 0, priv->mc_token,
-				       &err_cfg);
-	if (err) {
-		dev_err(dev, "dpni_set_errors_behavior failed\n");
-		return err;
-	}
-
-	/* Configure Rx and Tx conf queues to generate CDANs */
-	for (i = 0; i < priv->num_fqs; i++) {
-		switch (priv->fq[i].type) {
-		case DPAA2_RX_FQ:
-			err = setup_rx_flow(priv, &priv->fq[i]);
-			break;
-		case DPAA2_TX_CONF_FQ:
-			err = setup_tx_flow(priv, &priv->fq[i]);
-			break;
-		default:
-			dev_err(dev, "Invalid FQ type %d\n", priv->fq[i].type);
-			return -EINVAL;
-		}
-		if (err)
-			return err;
-	}
-
-	err = dpni_get_qdid(priv->mc_io, 0, priv->mc_token,
-			    DPNI_QUEUE_TX, &priv->tx_qdid);
-	if (err) {
-		dev_err(dev, "dpni_get_qdid() failed\n");
-		return err;
-	}
-
-	return 0;
-}
-
-/* Allocate rings for storing incoming frame descriptors */
-static int alloc_rings(struct dpaa2_eth_priv *priv)
-{
-	struct net_device *net_dev = priv->net_dev;
-	struct device *dev = net_dev->dev.parent;
-	int i;
-
-	for (i = 0; i < priv->num_channels; i++) {
-		priv->channel[i]->store =
-			dpaa2_io_store_create(DPAA2_ETH_STORE_SIZE, dev);
-		if (!priv->channel[i]->store) {
-			netdev_err(net_dev, "dpaa2_io_store_create() failed\n");
-			goto err_ring;
-		}
-	}
-
-	return 0;
-
-err_ring:
-	for (i = 0; i < priv->num_channels; i++) {
-		if (!priv->channel[i]->store)
-			break;
-		dpaa2_io_store_destroy(priv->channel[i]->store);
-	}
-
-	return -ENOMEM;
-}
-
-static void free_rings(struct dpaa2_eth_priv *priv)
-{
-	int i;
-
-	for (i = 0; i < priv->num_channels; i++)
-		dpaa2_io_store_destroy(priv->channel[i]->store);
-}
-
-static int set_mac_addr(struct dpaa2_eth_priv *priv)
-{
-	struct net_device *net_dev = priv->net_dev;
-	struct device *dev = net_dev->dev.parent;
-	u8 mac_addr[ETH_ALEN], dpni_mac_addr[ETH_ALEN];
-	int err;
-
-	/* Get firmware address, if any */
-	err = dpni_get_port_mac_addr(priv->mc_io, 0, priv->mc_token, mac_addr);
-	if (err) {
-		dev_err(dev, "dpni_get_port_mac_addr() failed\n");
-		return err;
-	}
-
-	/* Get DPNI attributes address, if any */
-	err = dpni_get_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
-					dpni_mac_addr);
-	if (err) {
-		dev_err(dev, "dpni_get_primary_mac_addr() failed\n");
-		return err;
-	}
-
-	/* First check if firmware has any address configured by bootloader */
-	if (!is_zero_ether_addr(mac_addr)) {
-		/* If the DPMAC addr != DPNI addr, update it */
-		if (!ether_addr_equal(mac_addr, dpni_mac_addr)) {
-			err = dpni_set_primary_mac_addr(priv->mc_io, 0,
-							priv->mc_token,
-							mac_addr);
-			if (err) {
-				dev_err(dev, "dpni_set_primary_mac_addr() failed\n");
-				return err;
-			}
-		}
-		memcpy(net_dev->dev_addr, mac_addr, net_dev->addr_len);
-	} else if (is_zero_ether_addr(dpni_mac_addr)) {
-		/* No MAC address configured, fill in net_dev->dev_addr
-		 * with a random one
-		 */
-		eth_hw_addr_random(net_dev);
-		dev_dbg_once(dev, "device(s) have all-zero hwaddr, replaced with random\n");
-
-		err = dpni_set_primary_mac_addr(priv->mc_io, 0, priv->mc_token,
-						net_dev->dev_addr);
-		if (err) {
-			dev_err(dev, "dpni_set_primary_mac_addr() failed\n");
-			return err;
-		}
-
-		/* Override NET_ADDR_RANDOM set by eth_hw_addr_random(); for all
-		 * practical purposes, this will be our "permanent" mac address,
-		 * at least until the next reboot. This move will also permit
-		 * register_netdevice() to properly fill up net_dev->perm_addr.
-		 */
-		net_dev->addr_assign_type = NET_ADDR_PERM;
-	} else {
-		/* NET_ADDR_PERM is default, all we have to do is
-		 * fill in the device addr.
-		 */
-		memcpy(net_dev->dev_addr, dpni_mac_addr, net_dev->addr_len);
-	}
-
-	return 0;
-}
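For reviewers' convenience, the MAC address selection policy implemented above, in table form (a summary of the existing code, not additional logic):

  /* set_mac_addr() decision table:
   *
   *   firmware (DPMAC) addr   DPNI addr   result
   *   ---------------------   ---------   ----------------------------------
   *   non-zero                any         use firmware addr, sync DPNI to it
   *   zero                    non-zero    use DPNI addr
   *   zero                    zero        random addr, marked NET_ADDR_PERM
   */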
-
-static int netdev_init(struct net_device *net_dev)
-{
-	struct device *dev = net_dev->dev.parent;
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	u8 bcast_addr[ETH_ALEN];
-	u8 num_queues;
-	int err;
-
-	net_dev->netdev_ops = &dpaa2_eth_ops;
-
-	err = set_mac_addr(priv);
-	if (err)
-		return err;
-
-	/* Explicitly add the broadcast address to the MAC filtering table */
-	eth_broadcast_addr(bcast_addr);
-	err = dpni_add_mac_addr(priv->mc_io, 0, priv->mc_token, bcast_addr);
-	if (err) {
-		dev_err(dev, "dpni_add_mac_addr() failed\n");
-		return err;
-	}
-
-	/* Set MTU upper limit; lower limit is 68B (default value) */
-	net_dev->max_mtu = DPAA2_ETH_MAX_MTU;
-	err = dpni_set_max_frame_length(priv->mc_io, 0, priv->mc_token,
-					DPAA2_ETH_MFL);
-	if (err) {
-		dev_err(dev, "dpni_set_max_frame_length() failed\n");
-		return err;
-	}
-
-	/* Set actual number of queues in the net device */
-	num_queues = dpaa2_eth_queue_count(priv);
-	err = netif_set_real_num_tx_queues(net_dev, num_queues);
-	if (err) {
-		dev_err(dev, "netif_set_real_num_tx_queues() failed\n");
-		return err;
-	}
-	err = netif_set_real_num_rx_queues(net_dev, num_queues);
-	if (err) {
-		dev_err(dev, "netif_set_real_num_rx_queues() failed\n");
-		return err;
-	}
-
-	/* Our .ndo_init will be called as part of register_netdev() */
-	err = register_netdev(net_dev);
-	if (err < 0) {
-		dev_err(dev, "register_netdev() failed\n");
-		return err;
-	}
-
-	return 0;
-}
-
-static int poll_link_state(void *arg)
-{
-	struct dpaa2_eth_priv *priv = (struct dpaa2_eth_priv *)arg;
-	int err;
-
-	while (!kthread_should_stop()) {
-		err = link_state_update(priv);
-		if (unlikely(err))
-			return err;
-
-		msleep(DPAA2_ETH_LINK_STATE_REFRESH);
-	}
-
-	return 0;
-}
-
-static irqreturn_t dpni_irq0_handler_thread(int irq_num, void *arg)
-{
-	u32 status = ~0;
-	struct device *dev = (struct device *)arg;
-	struct fsl_mc_device *dpni_dev = to_fsl_mc_device(dev);
-	struct net_device *net_dev = dev_get_drvdata(dev);
-	int err;
-
-	err = dpni_get_irq_status(dpni_dev->mc_io, 0, dpni_dev->mc_handle,
-				  DPNI_IRQ_INDEX, &status);
-	if (unlikely(err)) {
-		netdev_err(net_dev, "Can't get irq status (err %d)\n", err);
-		return IRQ_HANDLED;
-	}
-
-	if (status & DPNI_IRQ_EVENT_LINK_CHANGED)
-		link_state_update(netdev_priv(net_dev));
-
-	return IRQ_HANDLED;
-}
-
-static int setup_irqs(struct fsl_mc_device *ls_dev)
-{
-	int err = 0;
-	struct fsl_mc_device_irq *irq;
-
-	err = fsl_mc_allocate_irqs(ls_dev);
-	if (err) {
-		dev_err(&ls_dev->dev, "MC irqs allocation failed\n");
-		return err;
-	}
-
-	irq = ls_dev->irqs[0];
-	err = devm_request_threaded_irq(&ls_dev->dev, irq->msi_desc->irq,
-					NULL, dpni_irq0_handler_thread,
-					IRQF_NO_SUSPEND | IRQF_ONESHOT,
-					dev_name(&ls_dev->dev), &ls_dev->dev);
-	if (err < 0) {
-		dev_err(&ls_dev->dev, "devm_request_threaded_irq(): %d\n", err);
-		goto free_mc_irq;
-	}
-
-	err = dpni_set_irq_mask(ls_dev->mc_io, 0, ls_dev->mc_handle,
-				DPNI_IRQ_INDEX, DPNI_IRQ_EVENT_LINK_CHANGED);
-	if (err < 0) {
-		dev_err(&ls_dev->dev, "dpni_set_irq_mask(): %d\n", err);
-		goto free_irq;
-	}
-
-	err = dpni_set_irq_enable(ls_dev->mc_io, 0, ls_dev->mc_handle,
-				  DPNI_IRQ_INDEX, 1);
-	if (err < 0) {
-		dev_err(&ls_dev->dev, "dpni_set_irq_enable(): %d\n", err);
-		goto free_irq;
-	}
-
-	return 0;
-
-free_irq:
-	devm_free_irq(&ls_dev->dev, irq->msi_desc->irq, &ls_dev->dev);
-free_mc_irq:
-	fsl_mc_free_irqs(ls_dev);
-
-	return err;
-}
-
-static void add_ch_napi(struct dpaa2_eth_priv *priv)
-{
-	int i;
-	struct dpaa2_eth_channel *ch;
-
-	for (i = 0; i < priv->num_channels; i++) {
-		ch = priv->channel[i];
-		/* NAPI weight *MUST* be a multiple of DPAA2_ETH_STORE_SIZE */
-		netif_napi_add(priv->net_dev, &ch->napi, dpaa2_eth_poll,
-			       NAPI_POLL_WEIGHT);
-	}
-}
-
-static void del_ch_napi(struct dpaa2_eth_priv *priv)
-{
-	int i;
-	struct dpaa2_eth_channel *ch;
-
-	for (i = 0; i < priv->num_channels; i++) {
-		ch = priv->channel[i];
-		netif_napi_del(&ch->napi);
-	}
-}
-
-static int dpaa2_eth_probe(struct fsl_mc_device *dpni_dev)
-{
-	struct device *dev;
-	struct net_device *net_dev = NULL;
-	struct dpaa2_eth_priv *priv = NULL;
-	int err = 0;
-
-	dev = &dpni_dev->dev;
-
-	/* Net device */
-	net_dev = alloc_etherdev_mq(sizeof(*priv), DPAA2_ETH_MAX_TX_QUEUES);
-	if (!net_dev) {
-		dev_err(dev, "alloc_etherdev_mq() failed\n");
-		return -ENOMEM;
-	}
-
-	SET_NETDEV_DEV(net_dev, dev);
-	dev_set_drvdata(dev, net_dev);
-
-	priv = netdev_priv(net_dev);
-	priv->net_dev = net_dev;
-
-	priv->iommu_domain = iommu_get_domain_for_dev(dev);
-
-	/* Obtain a MC portal */
-	err = fsl_mc_portal_allocate(dpni_dev, FSL_MC_IO_ATOMIC_CONTEXT_PORTAL,
-				     &priv->mc_io);
-	if (err) {
-		if (err == -ENXIO)
-			err = -EPROBE_DEFER;
-		else
-			dev_err(dev, "MC portal allocation failed\n");
-		goto err_portal_alloc;
-	}
-
-	/* MC objects initialization and configuration */
-	err = setup_dpni(dpni_dev);
-	if (err)
-		goto err_dpni_setup;
-
-	err = setup_dpio(priv);
-	if (err)
-		goto err_dpio_setup;
-
-	setup_fqs(priv);
-
-	err = setup_dpbp(priv);
-	if (err)
-		goto err_dpbp_setup;
-
-	err = bind_dpni(priv);
-	if (err)
-		goto err_bind;
-
-	/* Add a NAPI context for each channel */
-	add_ch_napi(priv);
-
-	/* Percpu statistics */
-	priv->percpu_stats = alloc_percpu(*priv->percpu_stats);
-	if (!priv->percpu_stats) {
-		dev_err(dev, "alloc_percpu(percpu_stats) failed\n");
-		err = -ENOMEM;
-		goto err_alloc_percpu_stats;
-	}
-	priv->percpu_extras = alloc_percpu(*priv->percpu_extras);
-	if (!priv->percpu_extras) {
-		dev_err(dev, "alloc_percpu(percpu_extras) failed\n");
-		err = -ENOMEM;
-		goto err_alloc_percpu_extras;
-	}
-
-	err = netdev_init(net_dev);
-	if (err)
-		goto err_netdev_init;
-
-	/* Configure checksum offload based on current interface flags */
-	err = set_rx_csum(priv, !!(net_dev->features & NETIF_F_RXCSUM));
-	if (err)
-		goto err_csum;
-
-	err = set_tx_csum(priv, !!(net_dev->features &
-				   (NETIF_F_IP_CSUM | NETIF_F_IPV6_CSUM)));
-	if (err)
-		goto err_csum;
-
-	err = alloc_rings(priv);
-	if (err)
-		goto err_alloc_rings;
-
-	net_dev->ethtool_ops = &dpaa2_ethtool_ops;
-
-	err = setup_irqs(dpni_dev);
-	if (err) {
-		netdev_warn(net_dev, "Failed to set link interrupt, fall back to polling\n");
-		priv->poll_thread = kthread_run(poll_link_state, priv,
-						"%s_poll_link", net_dev->name);
-		if (IS_ERR(priv->poll_thread)) {
-			netdev_err(net_dev, "Error starting polling thread\n");
-			goto err_poll_thread;
-		}
-		priv->do_link_poll = true;
-	}
-
-	dev_info(dev, "Probed interface %s\n", net_dev->name);
-	return 0;
-
-err_poll_thread:
-	free_rings(priv);
-err_alloc_rings:
-err_csum:
-	unregister_netdev(net_dev);
-err_netdev_init:
-	free_percpu(priv->percpu_extras);
-err_alloc_percpu_extras:
-	free_percpu(priv->percpu_stats);
-err_alloc_percpu_stats:
-	del_ch_napi(priv);
-err_bind:
-	free_dpbp(priv);
-err_dpbp_setup:
-	free_dpio(priv);
-err_dpio_setup:
-	free_dpni(priv);
-err_dpni_setup:
-	fsl_mc_portal_free(priv->mc_io);
-err_portal_alloc:
-	dev_set_drvdata(dev, NULL);
-	free_netdev(net_dev);
-
-	return err;
-}
-
-static int dpaa2_eth_remove(struct fsl_mc_device *ls_dev)
-{
-	struct device *dev;
-	struct net_device *net_dev;
-	struct dpaa2_eth_priv *priv;
-
-	dev = &ls_dev->dev;
-	net_dev = dev_get_drvdata(dev);
-	priv = netdev_priv(net_dev);
-
-	unregister_netdev(net_dev);
-
-	if (priv->do_link_poll)
-		kthread_stop(priv->poll_thread);
-	else
-		fsl_mc_free_irqs(ls_dev);
-
-	free_rings(priv);
-	free_percpu(priv->percpu_stats);
-	free_percpu(priv->percpu_extras);
-
-	del_ch_napi(priv);
-	free_dpbp(priv);
-	free_dpio(priv);
-	free_dpni(priv);
-
-	fsl_mc_portal_free(priv->mc_io);
-
-	dev_dbg(net_dev->dev.parent, "Removed interface %s\n", net_dev->name);
-
-	free_netdev(net_dev);
-
-	return 0;
-}
-
-static const struct fsl_mc_device_id dpaa2_eth_match_id_table[] = {
-	{
-		.vendor = FSL_MC_VENDOR_FREESCALE,
-		.obj_type = "dpni",
-	},
-	{ .vendor = 0x0 }
-};
-MODULE_DEVICE_TABLE(fslmc, dpaa2_eth_match_id_table);
-
-static struct fsl_mc_driver dpaa2_eth_driver = {
-	.driver = {
-		.name = KBUILD_MODNAME,
-		.owner = THIS_MODULE,
-	},
-	.probe = dpaa2_eth_probe,
-	.remove = dpaa2_eth_remove,
-	.match_id_table = dpaa2_eth_match_id_table
-};
-
-module_fsl_mc_driver(dpaa2_eth_driver);
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
deleted file mode 100644
index 1f86a78..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-eth.h
+++ /dev/null
@@ -1,412 +0,0 @@
-/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
-/* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- */
-
-#ifndef __DPAA2_ETH_H
-#define __DPAA2_ETH_H
-
-#include <linux/netdevice.h>
-#include <linux/if_vlan.h>
-#include <linux/fsl/mc.h>
-
-#include <soc/fsl/dpaa2-io.h>
-#include <soc/fsl/dpaa2-fd.h>
-#include "dpni.h"
-#include "dpni-cmd.h"
-
-#include "dpaa2-eth-trace.h"
-
-#define DPAA2_WRIOP_VERSION(x, y, z) ((x) << 10 | (y) << 5 | (z) << 0)
-
-#define DPAA2_ETH_STORE_SIZE		16
-
-/* Maximum number of scatter-gather entries in an ingress frame,
- * considering the maximum receive frame size is 64K
- */
-#define DPAA2_ETH_MAX_SG_ENTRIES	((64 * 1024) / DPAA2_ETH_RX_BUF_SIZE)
-
-/* Maximum acceptable MTU value. It is in direct relation with the hardware
- * enforced Max Frame Length (currently 10k).
- */
-#define DPAA2_ETH_MFL			(10 * 1024)
-#define DPAA2_ETH_MAX_MTU		(DPAA2_ETH_MFL - VLAN_ETH_HLEN)
-/* Convert L3 MTU to L2 MFL */
-#define DPAA2_ETH_L2_MAX_FRM(mtu)	((mtu) + VLAN_ETH_HLEN)
-
-/* Set the taildrop threshold (in bytes) to allow the enqueue of several jumbo
- * frames in the Rx queues (length of the current frame is not
- * taken into account when making the taildrop decision)
- */
-#define DPAA2_ETH_TAILDROP_THRESH	(64 * 1024)
-
-/* Buffer quota per queue. Must be large enough such that for minimum sized
- * frames taildrop kicks in before the bpool gets depleted, so we compute
- * how many 64B frames fit inside the taildrop threshold and add a margin
- * to accommodate the buffer refill delay.
- */
-#define DPAA2_ETH_MAX_FRAMES_PER_QUEUE	(DPAA2_ETH_TAILDROP_THRESH / 64)
-#define DPAA2_ETH_NUM_BUFS		(DPAA2_ETH_MAX_FRAMES_PER_QUEUE + 256)
-#define DPAA2_ETH_REFILL_THRESH		DPAA2_ETH_MAX_FRAMES_PER_QUEUE
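To make the buffer budget concrete, here is the arithmetic the sizing macros above work out to, given DPAA2_ETH_RX_BUF_SIZE = 2048 (a back-of-the-envelope check only, not new code in the patch):

  /*
   *   DPAA2_ETH_MAX_MTU              = 10240 - 18 (VLAN_ETH_HLEN) = 10222
   *   DPAA2_ETH_MAX_SG_ENTRIES       = 65536 / 2048               = 32
   *   DPAA2_ETH_MAX_FRAMES_PER_QUEUE = 65536 / 64                 = 1024
   *   DPAA2_ETH_NUM_BUFS             = 1024 + 256                 = 1280
   */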
-
-/* Maximum number of buffers that can be acquired/released through a single
- * QBMan command
- */
-#define DPAA2_ETH_BUFS_PER_CMD		7
-
-/* Hardware requires alignment for ingress/egress buffer addresses */
-#define DPAA2_ETH_TX_BUF_ALIGN		64
-
-#define DPAA2_ETH_RX_BUF_SIZE		2048
-#define DPAA2_ETH_SKB_SIZE \
-	(DPAA2_ETH_RX_BUF_SIZE + SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
-
-/* Hardware annotation area in RX/TX buffers */
-#define DPAA2_ETH_RX_HWA_SIZE		64
-#define DPAA2_ETH_TX_HWA_SIZE		128
-
-/* PTP nominal frequency 1GHz */
-#define DPAA2_PTP_CLK_PERIOD_NS		1
-
-/* Due to a limitation in WRIOP 1.0.0, the RX buffer data must be aligned
- * to 256B. For newer revisions, the requirement is only for 64B alignment
- */
-#define DPAA2_ETH_RX_BUF_ALIGN_REV1	256
-#define DPAA2_ETH_RX_BUF_ALIGN		64
-
-/* We are accommodating an skb backpointer and some S/G info
- * in the frame's software annotation. The hardware
- * options are either 0 or 64, so we choose the latter.
- */
-#define DPAA2_ETH_SWA_SIZE		64
-
-/* Must keep this struct smaller than DPAA2_ETH_SWA_SIZE */
-struct dpaa2_eth_swa {
-	struct sk_buff *skb;
-	struct scatterlist *scl;
-	int num_sg;
-	int sgt_size;
-};
-
-/* Annotation valid bits in FD FRC */
-#define DPAA2_FD_FRC_FASV		0x8000
-#define DPAA2_FD_FRC_FAEADV		0x4000
-#define DPAA2_FD_FRC_FAPRV		0x2000
-#define DPAA2_FD_FRC_FAIADV		0x1000
-#define DPAA2_FD_FRC_FASWOV		0x0800
-#define DPAA2_FD_FRC_FAICFDV		0x0400
-
-/* Error bits in FD CTRL */
-#define DPAA2_FD_RX_ERR_MASK		(FD_CTRL_SBE | FD_CTRL_FAERR)
-#define DPAA2_FD_TX_ERR_MASK		(FD_CTRL_UFD	| \
-					 FD_CTRL_SBE	| \
-					 FD_CTRL_FSE	| \
-					 FD_CTRL_FAERR)
-
-/* Annotation bits in FD CTRL */
-#define DPAA2_FD_CTRL_ASAL		0x00020000	/* ASAL = 128B */
-
-/* Frame annotation status */
-struct dpaa2_fas {
-	u8 reserved;
-	u8 ppid;
-	__le16 ifpid;
-	__le32 status;
-};
-
-/* Frame annotation status word is located in the first 8 bytes
- * of the buffer's hardware annotation area
- */
-#define DPAA2_FAS_OFFSET		0
-#define DPAA2_FAS_SIZE			(sizeof(struct dpaa2_fas))
-
-/* Timestamp is located in the next 8 bytes of the buffer's
- * hardware annotation area
- */
-#define DPAA2_TS_OFFSET			0x8
-
-/* Frame annotation egress action descriptor */
-#define DPAA2_FAEAD_OFFSET		0x58
-
-struct dpaa2_faead {
-	__le32 conf_fqid;
-	__le32 ctrl;
-};
-
-#define DPAA2_FAEAD_A2V			0x20000000
-#define DPAA2_FAEAD_UPDV		0x00001000
-#define DPAA2_FAEAD_UPD			0x00000010
-
-/* Accessors for the hardware annotation fields that we use */
-static inline void *dpaa2_get_hwa(void *buf_addr, bool swa)
-{
-	return buf_addr + (swa ? DPAA2_ETH_SWA_SIZE : 0);
-}
-
-static inline struct dpaa2_fas *dpaa2_get_fas(void *buf_addr, bool swa)
-{
-	return dpaa2_get_hwa(buf_addr, swa) + DPAA2_FAS_OFFSET;
-}
-
-static inline __le64 *dpaa2_get_ts(void *buf_addr, bool swa)
-{
-	return dpaa2_get_hwa(buf_addr, swa) + DPAA2_TS_OFFSET;
-}
-
-static inline struct dpaa2_faead *dpaa2_get_faead(void *buf_addr, bool swa)
-{
-	return dpaa2_get_hwa(buf_addr, swa) + DPAA2_FAEAD_OFFSET;
-}
-
-/* Error and status bits in the frame annotation status word */
-/* Debug frame, otherwise supposed to be discarded */
-#define DPAA2_FAS_DISC			0x80000000
-/* MACSEC frame */
-#define DPAA2_FAS_MS			0x40000000
-#define DPAA2_FAS_PTP			0x08000000
-/* Ethernet multicast frame */
-#define DPAA2_FAS_MC			0x04000000
-/* Ethernet broadcast frame */
-#define DPAA2_FAS_BC			0x02000000
-#define DPAA2_FAS_KSE			0x00040000
-#define DPAA2_FAS_EOFHE			0x00020000
-#define DPAA2_FAS_MNLE			0x00010000
-#define DPAA2_FAS_TIDE			0x00008000
-#define DPAA2_FAS_PIEE			0x00004000
-/* Frame length error */
-#define DPAA2_FAS_FLE			0x00002000
-/* Frame physical error */
-#define DPAA2_FAS_FPE			0x00001000
-#define DPAA2_FAS_PTE			0x00000080
-#define DPAA2_FAS_ISP			0x00000040
-#define DPAA2_FAS_PHE			0x00000020
-#define DPAA2_FAS_BLE			0x00000010
-/* L3 csum validation performed */
-#define DPAA2_FAS_L3CV			0x00000008
-/* L3 csum error */
-#define DPAA2_FAS_L3CE			0x00000004
-/* L4 csum validation performed */
-#define DPAA2_FAS_L4CV			0x00000002
-/* L4 csum error */
-#define DPAA2_FAS_L4CE			0x00000001
-/* Possible errors on the ingress path */
-#define DPAA2_FAS_RX_ERR_MASK		(DPAA2_FAS_KSE		| \
-					 DPAA2_FAS_EOFHE	| \
-					 DPAA2_FAS_MNLE		| \
-					 DPAA2_FAS_TIDE		| \
-					 DPAA2_FAS_PIEE		| \
-					 DPAA2_FAS_FLE		| \
-					 DPAA2_FAS_FPE		| \
-					 DPAA2_FAS_PTE		| \
-					 DPAA2_FAS_ISP		| \
-					 DPAA2_FAS_PHE		| \
-					 DPAA2_FAS_BLE		| \
-					 DPAA2_FAS_L3CE		| \
-					 DPAA2_FAS_L4CE)
-
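To show how the Rx error mask above is meant to be consumed on the ingress path, a minimal sketch in the driver's style; rx_frame_has_errors() is an illustrative helper, not a function from this patch:

  static bool rx_frame_has_errors(void *buf_addr, bool has_swa)
  {
          struct dpaa2_fas *fas = dpaa2_get_fas(buf_addr, has_swa);
          u32 status = le32_to_cpu(fas->status);

          /* any bit from the Rx error mask marks the frame as bad */
          return status & DPAA2_FAS_RX_ERR_MASK;
  }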
-/* Time in milliseconds between link state updates */
-#define DPAA2_ETH_LINK_STATE_REFRESH	1000
-
-/* Number of times to retry a frame enqueue before giving up.
- * Value determined empirically, in order to minimize the number
- * of frames dropped on Tx
- */
-#define DPAA2_ETH_ENQUEUE_RETRIES	10
-
-/* Driver statistics, other than those in struct rtnl_link_stats64.
- * These are usually collected per-CPU and aggregated by ethtool.
- */
-struct dpaa2_eth_drv_stats {
-	__u64	tx_conf_frames;
-	__u64	tx_conf_bytes;
-	__u64	tx_sg_frames;
-	__u64	tx_sg_bytes;
-	__u64	tx_reallocs;
-	__u64	rx_sg_frames;
-	__u64	rx_sg_bytes;
-	/* Enqueues retried due to portal busy */
-	__u64	tx_portal_busy;
-};
-
-/* Per-FQ statistics */
-struct dpaa2_eth_fq_stats {
-	/* Number of frames received on this queue */
-	__u64 frames;
-};
-
-/* Per-channel statistics */
-struct dpaa2_eth_ch_stats {
-	/* Volatile dequeues retried due to portal busy */
-	__u64 dequeue_portal_busy;
-	/* Number of CDANs; useful to estimate avg NAPI len */
-	__u64 cdan;
-	/* Number of frames received on queues from this channel */
-	__u64 frames;
-	/* Pull errors */
-	__u64 pull_err;
-};
-
-/* Maximum number of queues associated with a DPNI */
-#define DPAA2_ETH_MAX_RX_QUEUES		16
-#define DPAA2_ETH_MAX_TX_QUEUES		16
-#define DPAA2_ETH_MAX_QUEUES		(DPAA2_ETH_MAX_RX_QUEUES + \
-					DPAA2_ETH_MAX_TX_QUEUES)
-
-#define DPAA2_ETH_MAX_DPCONS		16
-
-enum dpaa2_eth_fq_type {
-	DPAA2_RX_FQ = 0,
-	DPAA2_TX_CONF_FQ,
-};
-
-struct dpaa2_eth_priv;
-
-struct dpaa2_eth_fq {
-	u32 fqid;
-	u32 tx_qdbin;
-	u16 flowid;
-	int target_cpu;
-	struct dpaa2_eth_channel *channel;
-	enum dpaa2_eth_fq_type type;
-
-	void (*consume)(struct dpaa2_eth_priv *,
-			struct dpaa2_eth_channel *,
-			const struct dpaa2_fd *,
-			struct napi_struct *,
-			u16 queue_id);
-	struct dpaa2_eth_fq_stats stats;
-};
-
-struct dpaa2_eth_channel {
-	struct dpaa2_io_notification_ctx nctx;
-	struct fsl_mc_device *dpcon;
-	int dpcon_id;
-	int ch_id;
-	struct napi_struct napi;
-	struct dpaa2_io *dpio;
-	struct dpaa2_io_store *store;
-	struct dpaa2_eth_priv *priv;
-	int buf_count;
-	struct dpaa2_eth_ch_stats stats;
-};
-
-struct dpaa2_eth_hash_fields {
-	u64 rxnfc_field;
-	enum net_prot cls_prot;
-	int cls_field;
-	int size;
-};
-
-/* Driver private data */
-struct dpaa2_eth_priv {
-	struct net_device *net_dev;
-
-	u8 num_fqs;
-	struct dpaa2_eth_fq fq[DPAA2_ETH_MAX_QUEUES];
-
-	u8 num_channels;
-	struct dpaa2_eth_channel *channel[DPAA2_ETH_MAX_DPCONS];
-
-	struct dpni_attr dpni_attrs;
-	u16 dpni_ver_major;
-	u16 dpni_ver_minor;
-	u16 tx_data_offset;
-
-	struct fsl_mc_device *dpbp_dev;
-	u16 bpid;
-	struct iommu_domain *iommu_domain;
-
-	bool tx_tstamp; /* Tx timestamping enabled */
-	bool rx_tstamp; /* Rx timestamping enabled */
-
-	u16 tx_qdid;
-	u16 rx_buf_align;
-	struct fsl_mc_io *mc_io;
-	/* Cores which have an affine DPIO/DPCON.
-	 * This is the cpu set on which Rx and Tx conf frames are processed
-	 */
-	struct cpumask dpio_cpumask;
-
-	/* Standard statistics */
-	struct rtnl_link_stats64 __percpu *percpu_stats;
-	/* Extra stats, in addition to the ones known by the kernel */
-	struct dpaa2_eth_drv_stats __percpu *percpu_extras;
-
-	u16 mc_token;
-
-	struct dpni_link_state link_state;
-	bool do_link_poll;
-	struct task_struct *poll_thread;
-
-	/* enabled ethtool hashing bits */
-	u64 rx_hash_fields;
-};
-
-#define DPAA2_RXH_SUPPORTED	(RXH_L2DA | RXH_VLAN | RXH_L3_PROTO \
-				| RXH_IP_SRC | RXH_IP_DST | RXH_L4_B_0_1 \
-				| RXH_L4_B_2_3)
-
-/* default Rx hash options, set during probing */
-#define DPAA2_RXH_DEFAULT	(RXH_L3_PROTO | RXH_IP_SRC | RXH_IP_DST | \
-				 RXH_L4_B_0_1 | RXH_L4_B_2_3)
-
-#define dpaa2_eth_hash_enabled(priv)	\
-	((priv)->dpni_attrs.num_queues > 1)
-
-/* Required by struct dpni_rx_tc_dist_cfg::key_cfg_iova */
-#define DPAA2_CLASSIFIER_DMA_SIZE 256
-
-extern const struct ethtool_ops dpaa2_ethtool_ops;
-extern int dpaa2_phc_index;
-
-static inline int dpaa2_eth_cmp_dpni_ver(struct dpaa2_eth_priv *priv,
-					 u16 ver_major, u16 ver_minor)
-{
-	if (priv->dpni_ver_major == ver_major)
-		return priv->dpni_ver_minor - ver_minor;
-	return priv->dpni_ver_major - ver_major;
-}
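The comparison helper above has strcmp()-like semantics: negative when the DPNI firmware is older than the given version, zero or positive otherwise. For example, the DPNI_DYNAMIC_LINK_SET check in dpaa2-ethtool.c reduces to this (hypothetical wrapper, shown only to illustrate the calling convention):

  static bool dpni_has_dynamic_link_set(struct dpaa2_eth_priv *priv)
  {
          /* true when the DPNI API version is at least 7.1 */
          return dpaa2_eth_cmp_dpni_ver(priv, 7, 1) >= 0;
  }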
-
-/* Hardware only sees DPAA2_ETH_RX_BUF_SIZE, but the skb built around
- * the buffer also needs space for its shared info struct, and we need
- * to allocate enough to accommodate hardware alignment restrictions
- */
-static inline unsigned int dpaa2_eth_buf_raw_size(struct dpaa2_eth_priv *priv)
-{
-	return DPAA2_ETH_SKB_SIZE + priv->rx_buf_align;
-}
-
-static inline
-unsigned int dpaa2_eth_needed_headroom(struct dpaa2_eth_priv *priv,
-				       struct sk_buff *skb)
-{
-	unsigned int headroom = DPAA2_ETH_SWA_SIZE;
-
-	/* For non-linear skbs we have no headroom requirement, as we build a
-	 * SG frame with a newly allocated SGT buffer
-	 */
-	if (skb_is_nonlinear(skb))
-		return 0;
-
-	/* If we have Tx timestamping, need 128B hardware annotation */
-	if (priv->tx_tstamp && skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP)
-		headroom += DPAA2_ETH_TX_HWA_SIZE;
-
-	return headroom;
-}
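Worked numbers for the helper above:

  /* dpaa2_eth_needed_headroom(), by case:
   *
   *   nonlinear skb:             0 B (a new SGT buffer is built instead)
   *   linear, no HW timestamp:   DPAA2_ETH_SWA_SIZE         =  64 B
   *   linear, SKBTX_HW_TSTAMP:   64 + DPAA2_ETH_TX_HWA_SIZE = 192 B
   */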
-
-/* Extra headroom space requested to hardware, in order to make sure there's
- * no realloc'ing in forwarding scenarios
- */
-static inline unsigned int dpaa2_eth_rx_head_room(struct dpaa2_eth_priv *priv)
-{
-	return priv->tx_data_offset + DPAA2_ETH_TX_BUF_ALIGN -
-	       DPAA2_ETH_RX_HWA_SIZE;
-}
-
-static inline int dpaa2_eth_queue_count(struct dpaa2_eth_priv *priv)
-{
-	return priv->dpni_attrs.num_queues;
-}
-
-#endif	/* __DPAA2_ETH_H */
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c b/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
deleted file mode 100644
index 8056a95..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpaa2-ethtool.c
+++ /dev/null
@@ -1,280 +0,0 @@
-// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
-/* Copyright 2014-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- */
-
-#include <linux/net_tstamp.h>
-
-#include "dpni.h"	/* DPNI_LINK_OPT_* */
-#include "dpaa2-eth.h"
-
-/* To be kept in sync with DPNI statistics */
-static char dpaa2_ethtool_stats[][ETH_GSTRING_LEN] = {
-	"[hw] rx frames",
-	"[hw] rx bytes",
-	"[hw] rx mcast frames",
-	"[hw] rx mcast bytes",
-	"[hw] rx bcast frames",
-	"[hw] rx bcast bytes",
-	"[hw] tx frames",
-	"[hw] tx bytes",
-	"[hw] tx mcast frames",
-	"[hw] tx mcast bytes",
-	"[hw] tx bcast frames",
-	"[hw] tx bcast bytes",
-	"[hw] rx filtered frames",
-	"[hw] rx discarded frames",
-	"[hw] rx nobuffer discards",
-	"[hw] tx discarded frames",
-	"[hw] tx confirmed frames",
-};
-
-#define DPAA2_ETH_NUM_STATS	ARRAY_SIZE(dpaa2_ethtool_stats)
-
-static char dpaa2_ethtool_extras[][ETH_GSTRING_LEN] = {
-	/* per-cpu stats */
-	"[drv] tx conf frames",
-	"[drv] tx conf bytes",
-	"[drv] tx sg frames",
-	"[drv] tx sg bytes",
-	"[drv] tx realloc frames",
-	"[drv] rx sg frames",
-	"[drv] rx sg bytes",
-	"[drv] enqueue portal busy",
-	/* Channel stats */
-	"[drv] dequeue portal busy",
-	"[drv] channel pull errors",
-	"[drv] cdan",
-};
-
-#define DPAA2_ETH_NUM_EXTRA_STATS	ARRAY_SIZE(dpaa2_ethtool_extras)
-
-static void dpaa2_eth_get_drvinfo(struct net_device *net_dev,
-				  struct ethtool_drvinfo *drvinfo)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-
-	strlcpy(drvinfo->driver, KBUILD_MODNAME, sizeof(drvinfo->driver));
-
-	snprintf(drvinfo->fw_version, sizeof(drvinfo->fw_version),
-		 "%u.%u", priv->dpni_ver_major, priv->dpni_ver_minor);
-
-	strlcpy(drvinfo->bus_info, dev_name(net_dev->dev.parent->parent),
-		sizeof(drvinfo->bus_info));
-}
-
-static int
-dpaa2_eth_get_link_ksettings(struct net_device *net_dev,
-			     struct ethtool_link_ksettings *link_settings)
-{
-	struct dpni_link_state state = {0};
-	int err = 0;
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-
-	err = dpni_get_link_state(priv->mc_io, 0, priv->mc_token, &state);
-	if (err) {
-		netdev_err(net_dev, "ERROR %d getting link state\n", err);
-		goto out;
-	}
-
-	/* At the moment, we have no way of interrogating the DPMAC
-	 * from the DPNI side - and for that matter there may exist
-	 * no DPMAC at all. So for now we just don't report anything
-	 * beyond the DPNI attributes.
-	 */
-	if (state.options & DPNI_LINK_OPT_AUTONEG)
-		link_settings->base.autoneg = AUTONEG_ENABLE;
-	if (!(state.options & DPNI_LINK_OPT_HALF_DUPLEX))
-		link_settings->base.duplex = DUPLEX_FULL;
-	link_settings->base.speed = state.rate;
-
-out:
-	return err;
-}
-
-#define DPNI_DYNAMIC_LINK_SET_VER_MAJOR		7
-#define DPNI_DYNAMIC_LINK_SET_VER_MINOR		1
-static int
-dpaa2_eth_set_link_ksettings(struct net_device *net_dev,
-			     const struct ethtool_link_ksettings *link_settings)
-{
-	struct dpni_link_cfg cfg = {0};
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	int err = 0;
-
-	/* With older MC versions, the DPNI must be down in order to change
-	 * link settings, so let the user know about it.
-	 */
-	if (dpaa2_eth_cmp_dpni_ver(priv, DPNI_DYNAMIC_LINK_SET_VER_MAJOR,
-				   DPNI_DYNAMIC_LINK_SET_VER_MINOR) < 0) {
-		if (netif_running(net_dev)) {
-			netdev_info(net_dev, "Interface must be brought down first.\n");
-			return -EACCES;
-		}
-	}
-
-	cfg.rate = link_settings->base.speed;
-	if (link_settings->base.autoneg == AUTONEG_ENABLE)
-		cfg.options |= DPNI_LINK_OPT_AUTONEG;
-	else
-		cfg.options &= ~DPNI_LINK_OPT_AUTONEG;
-	if (link_settings->base.duplex == DUPLEX_HALF)
-		cfg.options |= DPNI_LINK_OPT_HALF_DUPLEX;
-	else
-		cfg.options &= ~DPNI_LINK_OPT_HALF_DUPLEX;
-
-	err = dpni_set_link_cfg(priv->mc_io, 0, priv->mc_token, &cfg);
-	if (err)
-		/* ethtool will be loud enough if we return an error; no point
-		 * in putting our own error message on the console by default
-		 */
-		netdev_dbg(net_dev, "ERROR %d setting link cfg\n", err);
-
-	return err;
-}
-
-static void dpaa2_eth_get_strings(struct net_device *netdev, u32 stringset,
-				  u8 *data)
-{
-	u8 *p = data;
-	int i;
-
-	switch (stringset) {
-	case ETH_SS_STATS:
-		for (i = 0; i < DPAA2_ETH_NUM_STATS; i++) {
-			strlcpy(p, dpaa2_ethtool_stats[i], ETH_GSTRING_LEN);
-			p += ETH_GSTRING_LEN;
-		}
-		for (i = 0; i < DPAA2_ETH_NUM_EXTRA_STATS; i++) {
-			strlcpy(p, dpaa2_ethtool_extras[i], ETH_GSTRING_LEN);
-			p += ETH_GSTRING_LEN;
-		}
-		break;
-	}
-}
-
-static int dpaa2_eth_get_sset_count(struct net_device *net_dev, int sset)
-{
-	switch (sset) {
-	case ETH_SS_STATS: /* ethtool_get_stats(), ethtool_get_drvinfo() */
-		return DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS;
-	default:
-		return -EOPNOTSUPP;
-	}
-}
-
-/* Fill in hardware counters, as returned by the MC, plus the
- * driver-maintained software statistics.
- */
-static void dpaa2_eth_get_ethtool_stats(struct net_device *net_dev,
-					struct ethtool_stats *stats,
-					u64 *data)
-{
-	int i = 0;
-	int j, k, err;
-	int num_cnt;
-	union dpni_statistics dpni_stats;
-	u64 cdan = 0;
-	u64 portal_busy = 0, pull_err = 0;
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-	struct dpaa2_eth_drv_stats *extras;
-	struct dpaa2_eth_ch_stats *ch_stats;
-
-	memset(data, 0,
-	       sizeof(u64) * (DPAA2_ETH_NUM_STATS + DPAA2_ETH_NUM_EXTRA_STATS));
-
-	/* Fill in standard counters, from DPNI statistics */
-	for (j = 0; j <= 2; j++) {
-		err = dpni_get_statistics(priv->mc_io, 0, priv->mc_token,
-					  j, &dpni_stats);
-		if (err != 0)
-			netdev_warn(net_dev, "dpni_get_stats(%d) failed\n", j);
-		switch (j) {
-		case 0:
-			num_cnt = sizeof(dpni_stats.page_0) / sizeof(u64);
-			break;
-		case 1:
-			num_cnt = sizeof(dpni_stats.page_1) / sizeof(u64);
-			break;
-		case 2:
-			num_cnt = sizeof(dpni_stats.page_2) / sizeof(u64);
-			break;
-		}
-		for (k = 0; k < num_cnt; k++)
-			*(data + i++) = dpni_stats.raw.counter[k];
-	}
-
-	/* Fill in per-CPU extra stats */
-	for_each_online_cpu(k) {
-		extras = per_cpu_ptr(priv->percpu_extras, k);
-		for (j = 0; j < sizeof(*extras) / sizeof(__u64); j++)
-			*((__u64 *)data + i + j) += *((__u64 *)extras + j);
-	}
-	i += j;
-
-	for (j = 0; j < priv->num_channels; j++) {
-		ch_stats = &priv->channel[j]->stats;
-		cdan += ch_stats->cdan;
-		portal_busy += ch_stats->dequeue_portal_busy;
-		pull_err += ch_stats->pull_err;
-	}
-
-	*(data + i++) = portal_busy;
-	*(data + i++) = pull_err;
-	*(data + i++) = cdan;
-}
-
-static int dpaa2_eth_get_rxnfc(struct net_device *net_dev,
-			       struct ethtool_rxnfc *rxnfc, u32 *rule_locs)
-{
-	struct dpaa2_eth_priv *priv = netdev_priv(net_dev);
-
-	switch (rxnfc->cmd) {
-	case ETHTOOL_GRXFH:
-		/* we purposely ignore cmd->flow_type for now, because the
-		 * classifier only supports a single set of fields for all
-		 * protocols
-		 */
-		rxnfc->data = priv->rx_hash_fields;
-		break;
-	case ETHTOOL_GRXRINGS:
-		rxnfc->data = dpaa2_eth_queue_count(priv);
-		break;
-	default:
-		return -EOPNOTSUPP;
-	}
-
-	return 0;
-}
-
-int dpaa2_phc_index = -1;
-EXPORT_SYMBOL(dpaa2_phc_index);
-
-static int dpaa2_eth_get_ts_info(struct net_device *dev,
-				 struct ethtool_ts_info *info)
-{
-	info->so_timestamping = SOF_TIMESTAMPING_TX_HARDWARE |
-				SOF_TIMESTAMPING_RX_HARDWARE |
-				SOF_TIMESTAMPING_RAW_HARDWARE;
-
-	info->phc_index = dpaa2_phc_index;
-
-	info->tx_types = (1 << HWTSTAMP_TX_OFF) |
-			 (1 << HWTSTAMP_TX_ON);
-
-	info->rx_filters = (1 << HWTSTAMP_FILTER_NONE) |
-			   (1 << HWTSTAMP_FILTER_ALL);
-	return 0;
-}
-
-const struct ethtool_ops dpaa2_ethtool_ops = {
-	.get_drvinfo = dpaa2_eth_get_drvinfo,
-	.get_link = ethtool_op_get_link,
-	.get_link_ksettings = dpaa2_eth_get_link_ksettings,
-	.set_link_ksettings = dpaa2_eth_set_link_ksettings,
-	.get_sset_count = dpaa2_eth_get_sset_count,
-	.get_ethtool_stats = dpaa2_eth_get_ethtool_stats,
-	.get_strings = dpaa2_eth_get_strings,
-	.get_rxnfc = dpaa2_eth_get_rxnfc,
-	.get_ts_info = dpaa2_eth_get_ts_info,
-};
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpkg.h b/drivers/staging/fsl-dpaa2/ethernet/dpkg.h
deleted file mode 100644
index 6de613b1..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpkg.h
+++ /dev/null
@@ -1,480 +0,0 @@
-/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
-/* Copyright 2013-2015 Freescale Semiconductor Inc.
- */
-#ifndef __FSL_DPKG_H_
-#define __FSL_DPKG_H_
-
-#include <linux/types.h>
-
-/* Data Path Key Generator API
- * Contains initialization APIs and runtime APIs for the Key Generator
- */
-
-/** Key Generator properties */
-
-/**
- * Number of masks per key extraction
- */
-#define DPKG_NUM_OF_MASKS		4
-/**
- * Number of extractions per key profile
- */
-#define DPKG_MAX_NUM_OF_EXTRACTS	10
-
-/**
- * enum dpkg_extract_from_hdr_type - Selecting extraction by header types
- * @DPKG_FROM_HDR: Extract selected bytes from header, by offset
- * @DPKG_FROM_FIELD: Extract selected bytes from header, by offset from field
- * @DPKG_FULL_FIELD: Extract a full field
- */
-enum dpkg_extract_from_hdr_type {
-	DPKG_FROM_HDR = 0,
-	DPKG_FROM_FIELD = 1,
-	DPKG_FULL_FIELD = 2
-};
-
-/**
- * enum dpkg_extract_type - Enumeration for selecting extraction type
- * @DPKG_EXTRACT_FROM_HDR: Extract from the header
- * @DPKG_EXTRACT_FROM_DATA: Extract from data not in specific header
- * @DPKG_EXTRACT_FROM_PARSE: Extract from parser-result;
- *	e.g. can be used to extract header existence;
- *	please refer to 'Parse Result definition' section in the parser BG
- */
-enum dpkg_extract_type {
-	DPKG_EXTRACT_FROM_HDR = 0,
-	DPKG_EXTRACT_FROM_DATA = 1,
-	DPKG_EXTRACT_FROM_PARSE = 3
-};
-
-/**
- * struct dpkg_mask - A structure for defining a single extraction mask
- * @mask: Byte mask for the extracted content
- * @offset: Offset within the extracted content
- */
-struct dpkg_mask {
-	u8 mask;
-	u8 offset;
-};
-
-/* Protocol fields */
-
-/* Ethernet fields */
-#define NH_FLD_ETH_DA				BIT(0)
-#define NH_FLD_ETH_SA				BIT(1)
-#define NH_FLD_ETH_LENGTH			BIT(2)
-#define NH_FLD_ETH_TYPE				BIT(3)
-#define NH_FLD_ETH_FINAL_CKSUM			BIT(4)
-#define NH_FLD_ETH_PADDING			BIT(5)
-#define NH_FLD_ETH_ALL_FIELDS			(BIT(6) - 1)
-
-/* VLAN fields */
-#define NH_FLD_VLAN_VPRI			BIT(0)
-#define NH_FLD_VLAN_CFI				BIT(1)
-#define NH_FLD_VLAN_VID				BIT(2)
-#define NH_FLD_VLAN_LENGTH			BIT(3)
-#define NH_FLD_VLAN_TYPE			BIT(4)
-#define NH_FLD_VLAN_ALL_FIELDS			(BIT(5) - 1)
-
-#define NH_FLD_VLAN_TCI				(NH_FLD_VLAN_VPRI | \
-						 NH_FLD_VLAN_CFI | \
-						 NH_FLD_VLAN_VID)
-
-/* IP (generic) fields */
-#define NH_FLD_IP_VER				BIT(0)
-#define NH_FLD_IP_DSCP				BIT(2)
-#define NH_FLD_IP_ECN				BIT(3)
-#define NH_FLD_IP_PROTO				BIT(4)
-#define NH_FLD_IP_SRC				BIT(5)
-#define NH_FLD_IP_DST				BIT(6)
-#define NH_FLD_IP_TOS_TC			BIT(7)
-#define NH_FLD_IP_ID				BIT(8)
-#define NH_FLD_IP_ALL_FIELDS			(BIT(9) - 1)
-
-/* IPV4 fields */
-#define NH_FLD_IPV4_VER				BIT(0)
-#define NH_FLD_IPV4_HDR_LEN			BIT(1)
-#define NH_FLD_IPV4_TOS				BIT(2)
-#define NH_FLD_IPV4_TOTAL_LEN			BIT(3)
-#define NH_FLD_IPV4_ID				BIT(4)
-#define NH_FLD_IPV4_FLAG_D			BIT(5)
-#define NH_FLD_IPV4_FLAG_M			BIT(6)
-#define NH_FLD_IPV4_OFFSET			BIT(7)
-#define NH_FLD_IPV4_TTL				BIT(8)
-#define NH_FLD_IPV4_PROTO			BIT(9)
-#define NH_FLD_IPV4_CKSUM			BIT(10)
-#define NH_FLD_IPV4_SRC_IP			BIT(11)
-#define NH_FLD_IPV4_DST_IP			BIT(12)
-#define NH_FLD_IPV4_OPTS			BIT(13)
-#define NH_FLD_IPV4_OPTS_COUNT			BIT(14)
-#define NH_FLD_IPV4_ALL_FIELDS			(BIT(15) - 1)
-
-/* IPV6 fields */
-#define NH_FLD_IPV6_VER				BIT(0)
-#define NH_FLD_IPV6_TC				BIT(1)
-#define NH_FLD_IPV6_SRC_IP			BIT(2)
-#define NH_FLD_IPV6_DST_IP			BIT(3)
-#define NH_FLD_IPV6_NEXT_HDR			BIT(4)
-#define NH_FLD_IPV6_FL				BIT(5)
-#define NH_FLD_IPV6_HOP_LIMIT			BIT(6)
-#define NH_FLD_IPV6_ID				BIT(7)
-#define NH_FLD_IPV6_ALL_FIELDS			(BIT(8) - 1)
-
-/* ICMP fields */
-#define NH_FLD_ICMP_TYPE			BIT(0)
-#define NH_FLD_ICMP_CODE			BIT(1)
-#define NH_FLD_ICMP_CKSUM			BIT(2)
-#define NH_FLD_ICMP_ID				BIT(3)
-#define NH_FLD_ICMP_SQ_NUM			BIT(4)
-#define NH_FLD_ICMP_ALL_FIELDS			(BIT(5) - 1)
-
-/* IGMP fields */
-#define NH_FLD_IGMP_VERSION			BIT(0)
-#define NH_FLD_IGMP_TYPE			BIT(1)
-#define NH_FLD_IGMP_CKSUM			BIT(2)
-#define NH_FLD_IGMP_DATA			BIT(3)
-#define NH_FLD_IGMP_ALL_FIELDS			(BIT(4) - 1)
-
-/* TCP fields */
-#define NH_FLD_TCP_PORT_SRC			BIT(0)
-#define NH_FLD_TCP_PORT_DST			BIT(1)
-#define NH_FLD_TCP_SEQ				BIT(2)
-#define NH_FLD_TCP_ACK				BIT(3)
-#define NH_FLD_TCP_OFFSET			BIT(4)
-#define NH_FLD_TCP_FLAGS			BIT(5)
-#define NH_FLD_TCP_WINDOW			BIT(6)
-#define NH_FLD_TCP_CKSUM			BIT(7)
-#define NH_FLD_TCP_URGPTR			BIT(8)
-#define NH_FLD_TCP_OPTS				BIT(9)
-#define NH_FLD_TCP_OPTS_COUNT			BIT(10)
-#define NH_FLD_TCP_ALL_FIELDS			(BIT(11) - 1)
-
-/* UDP fields */
-#define NH_FLD_UDP_PORT_SRC			BIT(0)
-#define NH_FLD_UDP_PORT_DST			BIT(1)
-#define NH_FLD_UDP_LEN				BIT(2)
-#define NH_FLD_UDP_CKSUM			BIT(3)
-#define NH_FLD_UDP_ALL_FIELDS			(BIT(4) - 1)
-
-/* UDP-lite fields */
-#define NH_FLD_UDP_LITE_PORT_SRC		BIT(0)
-#define NH_FLD_UDP_LITE_PORT_DST		BIT(1)
-#define NH_FLD_UDP_LITE_ALL_FIELDS		(BIT(2) - 1)
-
-/* UDP-encap-ESP fields */
-#define NH_FLD_UDP_ENC_ESP_PORT_SRC		BIT(0)
-#define NH_FLD_UDP_ENC_ESP_PORT_DST		BIT(1)
-#define NH_FLD_UDP_ENC_ESP_LEN			BIT(2)
-#define NH_FLD_UDP_ENC_ESP_CKSUM		BIT(3)
-#define NH_FLD_UDP_ENC_ESP_SPI			BIT(4)
-#define NH_FLD_UDP_ENC_ESP_SEQUENCE_NUM		BIT(5)
-#define NH_FLD_UDP_ENC_ESP_ALL_FIELDS		(BIT(6) - 1)
-
-/* SCTP fields */
-#define NH_FLD_SCTP_PORT_SRC			BIT(0)
-#define NH_FLD_SCTP_PORT_DST			BIT(1)
-#define NH_FLD_SCTP_VER_TAG			BIT(2)
-#define NH_FLD_SCTP_CKSUM			BIT(3)
-#define NH_FLD_SCTP_ALL_FIELDS			(BIT(4) - 1)
-
-/* DCCP fields */
-#define NH_FLD_DCCP_PORT_SRC			BIT(0)
-#define NH_FLD_DCCP_PORT_DST			BIT(1)
-#define NH_FLD_DCCP_ALL_FIELDS			(BIT(2) - 1)
-
-/* IPHC fields */
-#define NH_FLD_IPHC_CID				BIT(0)
-#define NH_FLD_IPHC_CID_TYPE			BIT(1)
-#define NH_FLD_IPHC_HCINDEX			BIT(2)
-#define NH_FLD_IPHC_GEN				BIT(3)
-#define NH_FLD_IPHC_D_BIT			BIT(4)
-#define NH_FLD_IPHC_ALL_FIELDS			(BIT(5) - 1)
-
-/* SCTP ChunkData fields */
-#define NH_FLD_SCTP_CHUNK_DATA_TYPE		BIT(0)
-#define NH_FLD_SCTP_CHUNK_DATA_FLAGS		BIT(1)
-#define NH_FLD_SCTP_CHUNK_DATA_LENGTH		BIT(2)
-#define NH_FLD_SCTP_CHUNK_DATA_TSN		BIT(3)
-#define NH_FLD_SCTP_CHUNK_DATA_STREAM_ID	BIT(4)
-#define NH_FLD_SCTP_CHUNK_DATA_STREAM_SQN	BIT(5)
-#define NH_FLD_SCTP_CHUNK_DATA_PAYLOAD_PID	BIT(6)
-#define NH_FLD_SCTP_CHUNK_DATA_UNORDERED	BIT(7)
-#define NH_FLD_SCTP_CHUNK_DATA_BEGGINING	BIT(8)
-#define NH_FLD_SCTP_CHUNK_DATA_END		BIT(9)
-#define NH_FLD_SCTP_CHUNK_DATA_ALL_FIELDS	(BIT(10) - 1)
-
-/* L2TPV2 fields */
-#define NH_FLD_L2TPV2_TYPE_BIT			BIT(0)
-#define NH_FLD_L2TPV2_LENGTH_BIT		BIT(1)
-#define NH_FLD_L2TPV2_SEQUENCE_BIT		BIT(2)
-#define NH_FLD_L2TPV2_OFFSET_BIT		BIT(3)
-#define NH_FLD_L2TPV2_PRIORITY_BIT		BIT(4)
-#define NH_FLD_L2TPV2_VERSION			BIT(5)
-#define NH_FLD_L2TPV2_LEN			BIT(6)
-#define NH_FLD_L2TPV2_TUNNEL_ID			BIT(7)
-#define NH_FLD_L2TPV2_SESSION_ID		BIT(8)
-#define NH_FLD_L2TPV2_NS			BIT(9)
-#define NH_FLD_L2TPV2_NR			BIT(10)
-#define NH_FLD_L2TPV2_OFFSET_SIZE		BIT(11)
-#define NH_FLD_L2TPV2_FIRST_BYTE		BIT(12)
-#define NH_FLD_L2TPV2_ALL_FIELDS		(BIT(13) - 1)
-
-/* L2TPV3 fields */
-#define NH_FLD_L2TPV3_CTRL_TYPE_BIT		BIT(0)
-#define NH_FLD_L2TPV3_CTRL_LENGTH_BIT		BIT(1)
-#define NH_FLD_L2TPV3_CTRL_SEQUENCE_BIT		BIT(2)
-#define NH_FLD_L2TPV3_CTRL_VERSION		BIT(3)
-#define NH_FLD_L2TPV3_CTRL_LENGTH		BIT(4)
-#define NH_FLD_L2TPV3_CTRL_CONTROL		BIT(5)
-#define NH_FLD_L2TPV3_CTRL_SENT			BIT(6)
-#define NH_FLD_L2TPV3_CTRL_RECV			BIT(7)
-#define NH_FLD_L2TPV3_CTRL_FIRST_BYTE		BIT(8)
-#define NH_FLD_L2TPV3_CTRL_ALL_FIELDS		(BIT(9) - 1)
-
-#define NH_FLD_L2TPV3_SESS_TYPE_BIT		BIT(0)
-#define NH_FLD_L2TPV3_SESS_VERSION		BIT(1)
-#define NH_FLD_L2TPV3_SESS_ID			BIT(2)
-#define NH_FLD_L2TPV3_SESS_COOKIE		BIT(3)
-#define NH_FLD_L2TPV3_SESS_ALL_FIELDS		(BIT(4) - 1)
-
-/* PPP fields */
-#define NH_FLD_PPP_PID				BIT(0)
-#define NH_FLD_PPP_COMPRESSED			BIT(1)
-#define NH_FLD_PPP_ALL_FIELDS			(BIT(2) - 1)
-
-/* PPPoE fields */
-#define NH_FLD_PPPOE_VER			BIT(0)
-#define NH_FLD_PPPOE_TYPE			BIT(1)
-#define NH_FLD_PPPOE_CODE			BIT(2)
-#define NH_FLD_PPPOE_SID			BIT(3)
-#define NH_FLD_PPPOE_LEN			BIT(4)
-#define NH_FLD_PPPOE_SESSION			BIT(5)
-#define NH_FLD_PPPOE_PID			BIT(6)
-#define NH_FLD_PPPOE_ALL_FIELDS			(BIT(7) - 1)
-
-/* PPP-Mux fields */
-#define NH_FLD_PPPMUX_PID			BIT(0)
-#define NH_FLD_PPPMUX_CKSUM			BIT(1)
-#define NH_FLD_PPPMUX_COMPRESSED		BIT(2)
-#define NH_FLD_PPPMUX_ALL_FIELDS		(BIT(3) - 1)
-
-/* PPP-Mux sub-frame fields */
-#define NH_FLD_PPPMUX_SUBFRM_PFF		BIT(0)
-#define NH_FLD_PPPMUX_SUBFRM_LXT		BIT(1)
-#define NH_FLD_PPPMUX_SUBFRM_LEN		BIT(2)
-#define NH_FLD_PPPMUX_SUBFRM_PID		BIT(3)
-#define NH_FLD_PPPMUX_SUBFRM_USE_PID		BIT(4)
-#define NH_FLD_PPPMUX_SUBFRM_ALL_FIELDS		(BIT(5) - 1)
-
-/* LLC fields */
-#define NH_FLD_LLC_DSAP				BIT(0)
-#define NH_FLD_LLC_SSAP				BIT(1)
-#define NH_FLD_LLC_CTRL				BIT(2)
-#define NH_FLD_LLC_ALL_FIELDS			(BIT(3) - 1)
-
-/* NLPID fields */
-#define NH_FLD_NLPID_NLPID			BIT(0)
-#define NH_FLD_NLPID_ALL_FIELDS			(BIT(1) - 1)
-
-/* SNAP fields */
-#define NH_FLD_SNAP_OUI				BIT(0)
-#define NH_FLD_SNAP_PID				BIT(1)
-#define NH_FLD_SNAP_ALL_FIELDS			(BIT(2) - 1)
-
-/* LLC SNAP fields */
-#define NH_FLD_LLC_SNAP_TYPE			BIT(0)
-#define NH_FLD_LLC_SNAP_ALL_FIELDS		(BIT(1) - 1)
-
-/* ARP fields */
-#define NH_FLD_ARP_HTYPE			BIT(0)
-#define NH_FLD_ARP_PTYPE			BIT(1)
-#define NH_FLD_ARP_HLEN				BIT(2)
-#define NH_FLD_ARP_PLEN				BIT(3)
-#define NH_FLD_ARP_OPER				BIT(4)
-#define NH_FLD_ARP_SHA				BIT(5)
-#define NH_FLD_ARP_SPA				BIT(6)
-#define NH_FLD_ARP_THA				BIT(7)
-#define NH_FLD_ARP_TPA				BIT(8)
-#define NH_FLD_ARP_ALL_FIELDS			(BIT(9) - 1)
-
-/* RFC2684 fields */
-#define NH_FLD_RFC2684_LLC			BIT(0)
-#define NH_FLD_RFC2684_NLPID			BIT(1)
-#define NH_FLD_RFC2684_OUI			BIT(2)
-#define NH_FLD_RFC2684_PID			BIT(3)
-#define NH_FLD_RFC2684_VPN_OUI			BIT(4)
-#define NH_FLD_RFC2684_VPN_IDX			BIT(5)
-#define NH_FLD_RFC2684_ALL_FIELDS		(BIT(6) - 1)
-
-/* User defined fields */
-#define NH_FLD_USER_DEFINED_SRCPORT		BIT(0)
-#define NH_FLD_USER_DEFINED_PCDID		BIT(1)
-#define NH_FLD_USER_DEFINED_ALL_FIELDS		(BIT(2) - 1)
-
-/* Payload fields */
-#define NH_FLD_PAYLOAD_BUFFER			BIT(0)
-#define NH_FLD_PAYLOAD_SIZE			BIT(1)
-#define NH_FLD_MAX_FRM_SIZE			BIT(2)
-#define NH_FLD_MIN_FRM_SIZE			BIT(3)
-#define NH_FLD_PAYLOAD_TYPE			BIT(4)
-#define NH_FLD_FRAME_SIZE			BIT(5)
-#define NH_FLD_PAYLOAD_ALL_FIELDS		(BIT(6) - 1)
-
-/* GRE fields */
-#define NH_FLD_GRE_TYPE				BIT(0)
-#define NH_FLD_GRE_ALL_FIELDS			(BIT(1) - 1)
-
-/* MINENCAP fields */
-#define NH_FLD_MINENCAP_SRC_IP			BIT(0)
-#define NH_FLD_MINENCAP_DST_IP			BIT(1)
-#define NH_FLD_MINENCAP_TYPE			BIT(2)
-#define NH_FLD_MINENCAP_ALL_FIELDS		(BIT(3) - 1)
-
-/* IPSEC AH fields */
-#define NH_FLD_IPSEC_AH_SPI			BIT(0)
-#define NH_FLD_IPSEC_AH_NH			BIT(1)
-#define NH_FLD_IPSEC_AH_ALL_FIELDS		(BIT(2) - 1)
-
-/* IPSEC ESP fields */
-#define NH_FLD_IPSEC_ESP_SPI			BIT(0)
-#define NH_FLD_IPSEC_ESP_SEQUENCE_NUM		BIT(1)
-#define NH_FLD_IPSEC_ESP_ALL_FIELDS		(BIT(2) - 1)
-
-/* MPLS fields */
-#define NH_FLD_MPLS_LABEL_STACK			BIT(0)
-#define NH_FLD_MPLS_LABEL_STACK_ALL_FIELDS	(BIT(1) - 1)
-
-/* MACSEC fields */
-#define NH_FLD_MACSEC_SECTAG			BIT(0)
-#define NH_FLD_MACSEC_ALL_FIELDS		(BIT(1) - 1)
-
-/* GTP fields */
-#define NH_FLD_GTP_TEID				BIT(0)
-
-/* Supported protocols */
-enum net_prot {
-	NET_PROT_NONE = 0,
-	NET_PROT_PAYLOAD,
-	NET_PROT_ETH,
-	NET_PROT_VLAN,
-	NET_PROT_IPV4,
-	NET_PROT_IPV6,
-	NET_PROT_IP,
-	NET_PROT_TCP,
-	NET_PROT_UDP,
-	NET_PROT_UDP_LITE,
-	NET_PROT_IPHC,
-	NET_PROT_SCTP,
-	NET_PROT_SCTP_CHUNK_DATA,
-	NET_PROT_PPPOE,
-	NET_PROT_PPP,
-	NET_PROT_PPPMUX,
-	NET_PROT_PPPMUX_SUBFRM,
-	NET_PROT_L2TPV2,
-	NET_PROT_L2TPV3_CTRL,
-	NET_PROT_L2TPV3_SESS,
-	NET_PROT_LLC,
-	NET_PROT_LLC_SNAP,
-	NET_PROT_NLPID,
-	NET_PROT_SNAP,
-	NET_PROT_MPLS,
-	NET_PROT_IPSEC_AH,
-	NET_PROT_IPSEC_ESP,
-	NET_PROT_UDP_ENC_ESP, /* RFC 3948 */
-	NET_PROT_MACSEC,
-	NET_PROT_GRE,
-	NET_PROT_MINENCAP,
-	NET_PROT_DCCP,
-	NET_PROT_ICMP,
-	NET_PROT_IGMP,
-	NET_PROT_ARP,
-	NET_PROT_CAPWAP_DATA,
-	NET_PROT_CAPWAP_CTRL,
-	NET_PROT_RFC2684,
-	NET_PROT_ICMPV6,
-	NET_PROT_FCOE,
-	NET_PROT_FIP,
-	NET_PROT_ISCSI,
-	NET_PROT_GTP,
-	NET_PROT_USER_DEFINED_L2,
-	NET_PROT_USER_DEFINED_L3,
-	NET_PROT_USER_DEFINED_L4,
-	NET_PROT_USER_DEFINED_L5,
-	NET_PROT_USER_DEFINED_SHIM1,
-	NET_PROT_USER_DEFINED_SHIM2,
-
-	NET_PROT_DUMMY_LAST
-};
-
-/**
- * struct dpkg_extract - A structure for defining a single extraction
- * @type: Determines how the union below is interpreted:
- *	DPKG_EXTRACT_FROM_HDR: selects 'from_hdr';
- *	DPKG_EXTRACT_FROM_DATA: selects 'from_data';
- *	DPKG_EXTRACT_FROM_PARSE: selects 'from_parse'
- * @extract: Selects extraction method
- * @extract.from_hdr: Used when 'type = DPKG_EXTRACT_FROM_HDR'
- * @extract.from_data: Used when 'type = DPKG_EXTRACT_FROM_DATA'
- * @extract.from_parse:  Used when 'type = DPKG_EXTRACT_FROM_PARSE'
- * @extract.from_hdr.prot: Any of the supported headers
- * @extract.from_hdr.type: Defines the type of header extraction:
- *	DPKG_FROM_HDR: use size & offset below;
- *	DPKG_FROM_FIELD: use field, size and offset below;
- *	DPKG_FULL_FIELD: use field below
- * @extract.from_hdr.field: One of the supported fields (NH_FLD_)
- * @extract.from_hdr.size: Size in bytes
- * @extract.from_hdr.offset: Byte offset
- * @extract.from_hdr.hdr_index: Clear for cases not listed below;
- *	Used for protocols that may have more than a single
- *	header, 0 indicates an outer header;
- *	Supported protocols (possible values):
- *	NET_PROT_VLAN (0, HDR_INDEX_LAST);
- *	NET_PROT_MPLS (0, 1, HDR_INDEX_LAST);
- *	NET_PROT_IP (0, HDR_INDEX_LAST);
- *	NET_PROT_IPV4 (0, HDR_INDEX_LAST);
- *	NET_PROT_IPV6 (0, HDR_INDEX_LAST);
- * @extract.from_data.size: Size in bytes
- * @extract.from_data.offset: Byte offset
- * @extract.from_parse.size: Size in bytes
- * @extract.from_parse.offset: Byte offset
- * @num_of_byte_masks: Defines the number of valid entries in the array below;
- *		This is also the number of bytes to be used as masks
- * @masks: Masks parameters
- */
-struct dpkg_extract {
-	enum dpkg_extract_type type;
-	union {
-		struct {
-			enum net_prot			prot;
-			enum dpkg_extract_from_hdr_type type;
-			u32			field;
-			u8			size;
-			u8			offset;
-			u8			hdr_index;
-		} from_hdr;
-		struct {
-			u8 size;
-			u8 offset;
-		} from_data;
-		struct {
-			u8 size;
-			u8 offset;
-		} from_parse;
-	} extract;
-
-	u8		num_of_byte_masks;
-	struct dpkg_mask	masks[DPKG_NUM_OF_MASKS];
-};
-
-/**
- * struct dpkg_profile_cfg - A structure for defining a full Key Generation
- *				profile (rule)
- * @num_extracts: Defines the number of valid entries in the array below
- * @extracts: Array of required extractions
- */
-struct dpkg_profile_cfg {
-	u8 num_extracts;
-	struct dpkg_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
-};
-
-#endif /* __FSL_DPKG_H_ */
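For reviewers new to the key generator API, a sketch of how a caller might fill in a dpkg_profile_cfg, distributing traffic by IP source and destination address; the function name is illustrative, and the driver's actual Rx hash setup lives in dpaa2-eth.c:

  static void example_build_dist_key(struct dpkg_profile_cfg *kg_cfg)
  {
          memset(kg_cfg, 0, sizeof(*kg_cfg));
          kg_cfg->num_extracts = 2;

          /* first extraction: the full IP source address field */
          kg_cfg->extracts[0].type = DPKG_EXTRACT_FROM_HDR;
          kg_cfg->extracts[0].extract.from_hdr.prot = NET_PROT_IP;
          kg_cfg->extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
          kg_cfg->extracts[0].extract.from_hdr.field = NH_FLD_IP_SRC;

          /* second extraction: the full IP destination address field */
          kg_cfg->extracts[1].type = DPKG_EXTRACT_FROM_HDR;
          kg_cfg->extracts[1].extract.from_hdr.prot = NET_PROT_IP;
          kg_cfg->extracts[1].extract.from_hdr.type = DPKG_FULL_FIELD;
          kg_cfg->extracts[1].extract.from_hdr.field = NH_FLD_IP_DST;
  }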
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h b/drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
deleted file mode 100644
index 83698ab..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpni-cmd.h
+++ /dev/null
@@ -1,518 +0,0 @@
-/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
-/* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- */
-#ifndef _FSL_DPNI_CMD_H
-#define _FSL_DPNI_CMD_H
-
-#include "dpni.h"
-
-/* DPNI Version */
-#define DPNI_VER_MAJOR				7
-#define DPNI_VER_MINOR				0
-#define DPNI_CMD_BASE_VERSION			1
-#define DPNI_CMD_ID_OFFSET			4
-
-#define DPNI_CMD(id)	(((id) << DPNI_CMD_ID_OFFSET) | DPNI_CMD_BASE_VERSION)
-
-#define DPNI_CMDID_OPEN					DPNI_CMD(0x801)
-#define DPNI_CMDID_CLOSE				DPNI_CMD(0x800)
-#define DPNI_CMDID_CREATE				DPNI_CMD(0x901)
-#define DPNI_CMDID_DESTROY				DPNI_CMD(0x900)
-#define DPNI_CMDID_GET_API_VERSION			DPNI_CMD(0xa01)
-
-#define DPNI_CMDID_ENABLE				DPNI_CMD(0x002)
-#define DPNI_CMDID_DISABLE				DPNI_CMD(0x003)
-#define DPNI_CMDID_GET_ATTR				DPNI_CMD(0x004)
-#define DPNI_CMDID_RESET				DPNI_CMD(0x005)
-#define DPNI_CMDID_IS_ENABLED				DPNI_CMD(0x006)
-
-#define DPNI_CMDID_SET_IRQ				DPNI_CMD(0x010)
-#define DPNI_CMDID_GET_IRQ				DPNI_CMD(0x011)
-#define DPNI_CMDID_SET_IRQ_ENABLE			DPNI_CMD(0x012)
-#define DPNI_CMDID_GET_IRQ_ENABLE			DPNI_CMD(0x013)
-#define DPNI_CMDID_SET_IRQ_MASK				DPNI_CMD(0x014)
-#define DPNI_CMDID_GET_IRQ_MASK				DPNI_CMD(0x015)
-#define DPNI_CMDID_GET_IRQ_STATUS			DPNI_CMD(0x016)
-#define DPNI_CMDID_CLEAR_IRQ_STATUS			DPNI_CMD(0x017)
-
-#define DPNI_CMDID_SET_POOLS				DPNI_CMD(0x200)
-#define DPNI_CMDID_SET_ERRORS_BEHAVIOR			DPNI_CMD(0x20B)
-
-#define DPNI_CMDID_GET_QDID				DPNI_CMD(0x210)
-#define DPNI_CMDID_GET_TX_DATA_OFFSET			DPNI_CMD(0x212)
-#define DPNI_CMDID_GET_LINK_STATE			DPNI_CMD(0x215)
-#define DPNI_CMDID_SET_MAX_FRAME_LENGTH			DPNI_CMD(0x216)
-#define DPNI_CMDID_GET_MAX_FRAME_LENGTH			DPNI_CMD(0x217)
-#define DPNI_CMDID_SET_LINK_CFG				DPNI_CMD(0x21A)
-#define DPNI_CMDID_SET_TX_SHAPING			DPNI_CMD(0x21B)
-
-#define DPNI_CMDID_SET_MCAST_PROMISC			DPNI_CMD(0x220)
-#define DPNI_CMDID_GET_MCAST_PROMISC			DPNI_CMD(0x221)
-#define DPNI_CMDID_SET_UNICAST_PROMISC			DPNI_CMD(0x222)
-#define DPNI_CMDID_GET_UNICAST_PROMISC			DPNI_CMD(0x223)
-#define DPNI_CMDID_SET_PRIM_MAC				DPNI_CMD(0x224)
-#define DPNI_CMDID_GET_PRIM_MAC				DPNI_CMD(0x225)
-#define DPNI_CMDID_ADD_MAC_ADDR				DPNI_CMD(0x226)
-#define DPNI_CMDID_REMOVE_MAC_ADDR			DPNI_CMD(0x227)
-#define DPNI_CMDID_CLR_MAC_FILTERS			DPNI_CMD(0x228)
-
-#define DPNI_CMDID_SET_RX_TC_DIST			DPNI_CMD(0x235)
-
-#define DPNI_CMDID_ADD_FS_ENT				DPNI_CMD(0x244)
-#define DPNI_CMDID_REMOVE_FS_ENT			DPNI_CMD(0x245)
-#define DPNI_CMDID_CLR_FS_ENT				DPNI_CMD(0x246)
-
-#define DPNI_CMDID_GET_STATISTICS			DPNI_CMD(0x25D)
-#define DPNI_CMDID_GET_QUEUE				DPNI_CMD(0x25F)
-#define DPNI_CMDID_SET_QUEUE				DPNI_CMD(0x260)
-#define DPNI_CMDID_GET_TAILDROP				DPNI_CMD(0x261)
-#define DPNI_CMDID_SET_TAILDROP				DPNI_CMD(0x262)
-
-#define DPNI_CMDID_GET_PORT_MAC_ADDR			DPNI_CMD(0x263)
-
-#define DPNI_CMDID_GET_BUFFER_LAYOUT			DPNI_CMD(0x264)
-#define DPNI_CMDID_SET_BUFFER_LAYOUT			DPNI_CMD(0x265)
-
-#define DPNI_CMDID_SET_TX_CONFIRMATION_MODE		DPNI_CMD(0x266)
-#define DPNI_CMDID_SET_CONGESTION_NOTIFICATION		DPNI_CMD(0x267)
-#define DPNI_CMDID_GET_CONGESTION_NOTIFICATION		DPNI_CMD(0x268)
-#define DPNI_CMDID_SET_EARLY_DROP			DPNI_CMD(0x269)
-#define DPNI_CMDID_GET_EARLY_DROP			DPNI_CMD(0x26A)
-#define DPNI_CMDID_GET_OFFLOAD				DPNI_CMD(0x26B)
-#define DPNI_CMDID_SET_OFFLOAD				DPNI_CMD(0x26C)
-
-/* Macros for accessing command fields smaller than 1 byte */
-#define DPNI_MASK(field)	\
-	GENMASK(DPNI_##field##_SHIFT + DPNI_##field##_SIZE - 1, \
-		DPNI_##field##_SHIFT)
-
-#define dpni_set_field(var, field, val)	\
-	((var) |= (((val) << DPNI_##field##_SHIFT) & DPNI_MASK(field)))
-#define dpni_get_field(var, field)	\
-	(((var) & DPNI_MASK(field)) >> DPNI_##field##_SHIFT)
-
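A quick illustration of the field accessors above, using the DPNI_ENABLE field defined a few lines below (shift 0, size 1); the wrapper function is hypothetical:

  static u8 example_pack_enable(u8 on)
  {
          u8 flags = 0;

          /* expands to: flags |= ((on << 0) & GENMASK(0, 0)) */
          dpni_set_field(flags, ENABLE, on);

          /* reads back: (flags & GENMASK(0, 0)) >> 0 */
          return dpni_get_field(flags, ENABLE);
  }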
-struct dpni_cmd_open {
-	__le32 dpni_id;
-};
-
-#define DPNI_BACKUP_POOL(val, order)	(((val) & 0x1) << (order))
-struct dpni_cmd_set_pools {
-	/* cmd word 0 */
-	u8 num_dpbp;
-	u8 backup_pool_mask;
-	__le16 pad;
-	/* cmd word 0..4 */
-	__le32 dpbp_id[DPNI_MAX_DPBP];
-	/* cmd word 4..6 */
-	__le16 buffer_size[DPNI_MAX_DPBP];
-};
-
-/* The enable indication is always the least significant bit */
-#define DPNI_ENABLE_SHIFT		0
-#define DPNI_ENABLE_SIZE		1
-
-struct dpni_rsp_is_enabled {
-	u8 enabled;
-};
-
-struct dpni_rsp_get_irq {
-	/* response word 0 */
-	__le32 irq_val;
-	__le32 pad;
-	/* response word 1 */
-	__le64 irq_addr;
-	/* response word 2 */
-	__le32 irq_num;
-	__le32 type;
-};
-
-struct dpni_cmd_set_irq_enable {
-	u8 enable;
-	u8 pad[3];
-	u8 irq_index;
-};
-
-struct dpni_cmd_get_irq_enable {
-	__le32 pad;
-	u8 irq_index;
-};
-
-struct dpni_rsp_get_irq_enable {
-	u8 enabled;
-};
-
-struct dpni_cmd_set_irq_mask {
-	__le32 mask;
-	u8 irq_index;
-};
-
-struct dpni_cmd_get_irq_mask {
-	__le32 pad;
-	u8 irq_index;
-};
-
-struct dpni_rsp_get_irq_mask {
-	__le32 mask;
-};
-
-struct dpni_cmd_get_irq_status {
-	__le32 status;
-	u8 irq_index;
-};
-
-struct dpni_rsp_get_irq_status {
-	__le32 status;
-};
-
-struct dpni_cmd_clear_irq_status {
-	__le32 status;
-	u8 irq_index;
-};
-
-struct dpni_rsp_get_attr {
-	/* response word 0 */
-	__le32 options;
-	u8 num_queues;
-	u8 num_tcs;
-	u8 mac_filter_entries;
-	u8 pad0;
-	/* response word 1 */
-	u8 vlan_filter_entries;
-	u8 pad1;
-	u8 qos_entries;
-	u8 pad2;
-	__le16 fs_entries;
-	__le16 pad3;
-	/* response word 2 */
-	u8 qos_key_size;
-	u8 fs_key_size;
-	__le16 wriop_version;
-};
-
-#define DPNI_ERROR_ACTION_SHIFT		0
-#define DPNI_ERROR_ACTION_SIZE		4
-#define DPNI_FRAME_ANN_SHIFT		4
-#define DPNI_FRAME_ANN_SIZE		1
-
-struct dpni_cmd_set_errors_behavior {
-	__le32 errors;
-	/* from least significant bit: error_action:4, set_frame_annotation:1 */
-	u8 flags;
-};
-
-/* There are 3 separate commands for configuring Rx, Tx and Tx confirmation
- * buffer layouts, but they all share the same parameters.
- * If one of the functions changes, the structure below needs to be split.
- */
-
-#define DPNI_PASS_TS_SHIFT		0
-#define DPNI_PASS_TS_SIZE		1
-#define DPNI_PASS_PR_SHIFT		1
-#define DPNI_PASS_PR_SIZE		1
-#define DPNI_PASS_FS_SHIFT		2
-#define DPNI_PASS_FS_SIZE		1
-
-struct dpni_cmd_get_buffer_layout {
-	u8 qtype;
-};
-
-struct dpni_rsp_get_buffer_layout {
-	/* response word 0 */
-	u8 pad0[6];
-	/* from LSB: pass_timestamp:1, parser_result:1, frame_status:1 */
-	u8 flags;
-	u8 pad1;
-	/* response word 1 */
-	__le16 private_data_size;
-	__le16 data_align;
-	__le16 head_room;
-	__le16 tail_room;
-};
-
-struct dpni_cmd_set_buffer_layout {
-	/* cmd word 0 */
-	u8 qtype;
-	u8 pad0[3];
-	__le16 options;
-	/* from LSB: pass_timestamp:1, parser_result:1, frame_status:1 */
-	u8 flags;
-	u8 pad1;
-	/* cmd word 1 */
-	__le16 private_data_size;
-	__le16 data_align;
-	__le16 head_room;
-	__le16 tail_room;
-};
-
-struct dpni_cmd_set_offload {
-	u8 pad[3];
-	u8 dpni_offload;
-	__le32 config;
-};
-
-struct dpni_cmd_get_offload {
-	u8 pad[3];
-	u8 dpni_offload;
-};
-
-struct dpni_rsp_get_offload {
-	__le32 pad;
-	__le32 config;
-};
-
-struct dpni_cmd_get_qdid {
-	u8 qtype;
-};
-
-struct dpni_rsp_get_qdid {
-	__le16 qdid;
-};
-
-struct dpni_rsp_get_tx_data_offset {
-	__le16 data_offset;
-};
-
-struct dpni_cmd_get_statistics {
-	u8 page_number;
-};
-
-struct dpni_rsp_get_statistics {
-	__le64 counter[DPNI_STATISTICS_CNT];
-};
-
-struct dpni_cmd_set_link_cfg {
-	/* cmd word 0 */
-	__le64 pad0;
-	/* cmd word 1 */
-	__le32 rate;
-	__le32 pad1;
-	/* cmd word 2 */
-	__le64 options;
-};
-
-#define DPNI_LINK_STATE_SHIFT		0
-#define DPNI_LINK_STATE_SIZE		1
-
-struct dpni_rsp_get_link_state {
-	/* response word 0 */
-	__le32 pad0;
-	/* from LSB: up:1 */
-	u8 flags;
-	u8 pad1[3];
-	/* response word 1 */
-	__le32 rate;
-	__le32 pad2;
-	/* response word 2 */
-	__le64 options;
-};
-
-struct dpni_cmd_set_max_frame_length {
-	__le16 max_frame_length;
-};
-
-struct dpni_rsp_get_max_frame_length {
-	__le16 max_frame_length;
-};
-
-struct dpni_cmd_set_multicast_promisc {
-	u8 enable;
-};
-
-struct dpni_rsp_get_multicast_promisc {
-	u8 enabled;
-};
-
-struct dpni_cmd_set_unicast_promisc {
-	u8 enable;
-};
-
-struct dpni_rsp_get_unicast_promisc {
-	u8 enabled;
-};
-
-struct dpni_cmd_set_primary_mac_addr {
-	__le16 pad;
-	u8 mac_addr[6];
-};
-
-struct dpni_rsp_get_primary_mac_addr {
-	__le16 pad;
-	u8 mac_addr[6];
-};
-
-struct dpni_rsp_get_port_mac_addr {
-	__le16 pad;
-	u8 mac_addr[6];
-};
-
-struct dpni_cmd_add_mac_addr {
-	__le16 pad;
-	u8 mac_addr[6];
-};
-
-struct dpni_cmd_remove_mac_addr {
-	__le16 pad;
-	u8 mac_addr[6];
-};
-
-#define DPNI_UNICAST_FILTERS_SHIFT	0
-#define DPNI_UNICAST_FILTERS_SIZE	1
-#define DPNI_MULTICAST_FILTERS_SHIFT	1
-#define DPNI_MULTICAST_FILTERS_SIZE	1
-
-struct dpni_cmd_clear_mac_filters {
-	/* from LSB: unicast:1, multicast:1 */
-	u8 flags;
-};
-
-#define DPNI_DIST_MODE_SHIFT		0
-#define DPNI_DIST_MODE_SIZE		4
-#define DPNI_MISS_ACTION_SHIFT		4
-#define DPNI_MISS_ACTION_SIZE		4
-
-struct dpni_cmd_set_rx_tc_dist {
-	/* cmd word 0 */
-	__le16 dist_size;
-	u8 tc_id;
-	/* from LSB: dist_mode:4, miss_action:4 */
-	u8 flags;
-	__le16 pad0;
-	__le16 default_flow_id;
-	/* cmd word 1..5 */
-	__le64 pad1[5];
-	/* cmd word 6 */
-	__le64 key_cfg_iova;
-};
-
-/* dpni_set_rx_tc_dist extension (structure of the DMA-able memory at
- * key_cfg_iova)
- */
-struct dpni_mask_cfg {
-	u8 mask;
-	u8 offset;
-};
-
-#define DPNI_EFH_TYPE_SHIFT		0
-#define DPNI_EFH_TYPE_SIZE		4
-#define DPNI_EXTRACT_TYPE_SHIFT		0
-#define DPNI_EXTRACT_TYPE_SIZE		4
-
-struct dpni_dist_extract {
-	/* word 0 */
-	u8 prot;
-	/* EFH type stored in the 4 least significant bits */
-	u8 efh_type;
-	u8 size;
-	u8 offset;
-	__le32 field;
-	/* word 1 */
-	u8 hdr_index;
-	u8 constant;
-	u8 num_of_repeats;
-	u8 num_of_byte_masks;
-	/* Extraction type is stored in the 4 LSBs */
-	u8 extract_type;
-	u8 pad[3];
-	/* word 2 */
-	struct dpni_mask_cfg masks[4];
-};
-
-struct dpni_ext_set_rx_tc_dist {
-	/* extension word 0 */
-	u8 num_extracts;
-	u8 pad[7];
-	/* words 1..25 */
-	struct dpni_dist_extract extracts[DPKG_MAX_NUM_OF_EXTRACTS];
-};
-
-struct dpni_cmd_get_queue {
-	u8 qtype;
-	u8 tc;
-	u8 index;
-};
-
-#define DPNI_DEST_TYPE_SHIFT		0
-#define DPNI_DEST_TYPE_SIZE		4
-#define DPNI_STASH_CTRL_SHIFT		6
-#define DPNI_STASH_CTRL_SIZE		1
-#define DPNI_HOLD_ACTIVE_SHIFT		7
-#define DPNI_HOLD_ACTIVE_SIZE		1
-
-struct dpni_rsp_get_queue {
-	/* response word 0 */
-	__le64 pad0;
-	/* response word 1 */
-	__le32 dest_id;
-	__le16 pad1;
-	u8 dest_prio;
-	/* From LSB: dest_type:4, pad:2, flc_stash_ctrl:1, hold_active:1 */
-	u8 flags;
-	/* response word 2 */
-	__le64 flc;
-	/* response word 3 */
-	__le64 user_context;
-	/* response word 4 */
-	__le32 fqid;
-	__le16 qdbin;
-};
-
-struct dpni_cmd_set_queue {
-	/* cmd word 0 */
-	u8 qtype;
-	u8 tc;
-	u8 index;
-	u8 options;
-	__le32 pad0;
-	/* cmd word 1 */
-	__le32 dest_id;
-	__le16 pad1;
-	u8 dest_prio;
-	u8 flags;
-	/* cmd word 2 */
-	__le64 flc;
-	/* cmd word 3 */
-	__le64 user_context;
-};
-
-struct dpni_cmd_set_taildrop {
-	/* cmd word 0 */
-	u8 congestion_point;
-	u8 qtype;
-	u8 tc;
-	u8 index;
-	__le32 pad0;
-	/* cmd word 1 */
-	/* Only least significant bit is relevant */
-	u8 enable;
-	u8 pad1;
-	u8 units;
-	u8 pad2;
-	__le32 threshold;
-};
-
-struct dpni_cmd_get_taildrop {
-	u8 congestion_point;
-	u8 qtype;
-	u8 tc;
-	u8 index;
-};
-
-struct dpni_rsp_get_taildrop {
-	/* cmd word 0 */
-	__le64 pad0;
-	/* cmd word 1 */
-	/* only least significant bit is relevant */
-	u8 enable;
-	u8 pad1;
-	u8 units;
-	u8 pad2;
-	__le32 threshold;
-};
-
-struct dpni_rsp_get_api_version {
-	__le16 major;
-	__le16 minor;
-};
-
-#endif /* _FSL_DPNI_CMD_H */
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpni.c b/drivers/staging/fsl-dpaa2/ethernet/dpni.c
deleted file mode 100644
index d6ac267..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpni.c
+++ /dev/null
@@ -1,1600 +0,0 @@
-// SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause)
-/* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- */
-#include <linux/kernel.h>
-#include <linux/errno.h>
-#include <linux/fsl/mc.h>
-#include "dpni.h"
-#include "dpni-cmd.h"
-
-/**
- * dpni_prepare_key_cfg() - Prepare extract parameters
- * @cfg: Key Generation profile (rule) definition
- * @key_cfg_buf: Zeroed 256-byte buffer to be filled in and then mapped for DMA
- *
- * This function has to be called before the following functions:
- *	- dpni_set_rx_tc_dist()
- *	- dpni_set_qos_table()
- */
-int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg, u8 *key_cfg_buf)
-{
-	int i, j;
-	struct dpni_ext_set_rx_tc_dist *dpni_ext;
-	struct dpni_dist_extract *extr;
-
-	if (cfg->num_extracts > DPKG_MAX_NUM_OF_EXTRACTS)
-		return -EINVAL;
-
-	dpni_ext = (struct dpni_ext_set_rx_tc_dist *)key_cfg_buf;
-	dpni_ext->num_extracts = cfg->num_extracts;
-
-	for (i = 0; i < cfg->num_extracts; i++) {
-		extr = &dpni_ext->extracts[i];
-
-		switch (cfg->extracts[i].type) {
-		case DPKG_EXTRACT_FROM_HDR:
-			extr->prot = cfg->extracts[i].extract.from_hdr.prot;
-			dpni_set_field(extr->efh_type, EFH_TYPE,
-				       cfg->extracts[i].extract.from_hdr.type);
-			extr->size = cfg->extracts[i].extract.from_hdr.size;
-			extr->offset = cfg->extracts[i].extract.from_hdr.offset;
-			extr->field = cpu_to_le32(
-				cfg->extracts[i].extract.from_hdr.field);
-			extr->hdr_index =
-				cfg->extracts[i].extract.from_hdr.hdr_index;
-			break;
-		case DPKG_EXTRACT_FROM_DATA:
-			extr->size = cfg->extracts[i].extract.from_data.size;
-			extr->offset =
-				cfg->extracts[i].extract.from_data.offset;
-			break;
-		case DPKG_EXTRACT_FROM_PARSE:
-			extr->size = cfg->extracts[i].extract.from_parse.size;
-			extr->offset =
-				cfg->extracts[i].extract.from_parse.offset;
-			break;
-		default:
-			return -EINVAL;
-		}
-
-		extr->num_of_byte_masks = cfg->extracts[i].num_of_byte_masks;
-		dpni_set_field(extr->extract_type, EXTRACT_TYPE,
-			       cfg->extracts[i].type);
-
-		for (j = 0; j < DPKG_NUM_OF_MASKS; j++) {
-			extr->masks[j].mask = cfg->extracts[i].masks[j].mask;
-			extr->masks[j].offset =
-				cfg->extracts[i].masks[j].offset;
-		}
-	}
-
-	return 0;
-}
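
A minimal caller sketch (not part of the moved files) shows the intended flow:
build a dpkg profile, serialize it with dpni_prepare_key_cfg(), DMA-map the
buffer and pass the IOVA to dpni_set_rx_tc_dist(). It assumes the dpkg.h
definitions (DPKG_EXTRACT_FROM_HDR, DPKG_FULL_FIELD, NET_PROT_IP,
NH_FLD_IP_SRC), dpni_rx_tc_dist_cfg from dpni.h and the usual slab/DMA headers:

static int example_set_rx_hash(struct fsl_mc_io *mc_io, u16 token,
			       struct device *dev)
{
	struct dpkg_profile_cfg kg_cfg = { 0 };
	struct dpni_rx_tc_dist_cfg dist_cfg = { 0 };
	dma_addr_t iova;
	u8 *buf;
	int err;

	buf = kzalloc(256, GFP_KERNEL);		/* zeroed, per @key_cfg_buf */
	if (!buf)
		return -ENOMEM;

	/* hash Rx traffic of TC 0 on the IPv4 source address */
	kg_cfg.num_extracts = 1;
	kg_cfg.extracts[0].type = DPKG_EXTRACT_FROM_HDR;
	kg_cfg.extracts[0].extract.from_hdr.prot = NET_PROT_IP;
	kg_cfg.extracts[0].extract.from_hdr.type = DPKG_FULL_FIELD;
	kg_cfg.extracts[0].extract.from_hdr.field = NH_FLD_IP_SRC;

	err = dpni_prepare_key_cfg(&kg_cfg, buf);
	if (err)
		goto out;

	iova = dma_map_single(dev, buf, 256, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, iova)) {
		err = -ENOMEM;
		goto out;
	}

	dist_cfg.dist_size = 8;			/* example: 8 Rx queues */
	dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
	dist_cfg.key_cfg_iova = iova;
	err = dpni_set_rx_tc_dist(mc_io, 0, token, 0, &dist_cfg);

	dma_unmap_single(dev, iova, 256, DMA_TO_DEVICE);
out:
	kfree(buf);
	return err;
}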
-
-/**
- * dpni_open() - Open a control session for the specified object
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @dpni_id:	DPNI unique ID
- * @token:	Returned token; use in subsequent API calls
- *
- * This function can be used to open a control session for an
- * already created object; an object may have been declared in
- * the DPL or by calling the dpni_create() function.
- * This function returns a unique authentication token,
- * associated with the specific object ID and the specific MC
- * portal; this token must be used in all subsequent commands for
- * this specific object.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_open(struct fsl_mc_io *mc_io,
-	      u32 cmd_flags,
-	      int dpni_id,
-	      u16 *token)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_open *cmd_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_OPEN,
-					  cmd_flags,
-					  0);
-	cmd_params = (struct dpni_cmd_open *)cmd.params;
-	cmd_params->dpni_id = cpu_to_le32(dpni_id);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	*token = mc_cmd_hdr_read_token(&cmd);
-
-	return 0;
-}
-
-/**
- * dpni_close() - Close the control session of the object
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- *
- * After this function is called, no further operations are
- * allowed on the object without opening a new control session.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_close(struct fsl_mc_io *mc_io,
-	       u32 cmd_flags,
-	       u16 token)
-{
-	struct fsl_mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLOSE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
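
These two calls bracket every configuration sequence; a short open-use-close
sketch (dpni_get_attributes() is defined further down in this same file):

static int example_query_dpni(struct fsl_mc_io *mc_io, int dpni_id)
{
	struct dpni_attr attr = { 0 };
	u16 token;
	int err;

	err = dpni_open(mc_io, 0, dpni_id, &token);
	if (err)
		return err;

	/* every subsequent command authenticates with the token */
	err = dpni_get_attributes(mc_io, 0, token, &attr);

	/* close unconditionally so the MC session is not leaked */
	dpni_close(mc_io, 0, token);
	return err;
}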
-
-/**
- * dpni_set_pools() - Set buffer pools configuration
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @cfg:	Buffer pools configuration
- *
- * Mandatory for DPNI operation.
- * Warning: allowed only when the DPNI is disabled
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_pools(struct fsl_mc_io *mc_io,
-		   u32 cmd_flags,
-		   u16 token,
-		   const struct dpni_pools_cfg *cfg)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_pools *cmd_params;
-	int i;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_POOLS,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_pools *)cmd.params;
-	cmd_params->num_dpbp = cfg->num_dpbp;
-	for (i = 0; i < DPNI_MAX_DPBP; i++) {
-		cmd_params->dpbp_id[i] = cpu_to_le32(cfg->pools[i].dpbp_id);
-		cmd_params->buffer_size[i] =
-			cpu_to_le16(cfg->pools[i].buffer_size);
-		cmd_params->backup_pool_mask |=
-			DPNI_BACKUP_POOL(cfg->pools[i].backup_pool, i);
-	}
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
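
Since pool setup is mandatory, callers fill one entry per DPBP before enabling
the DPNI; a sketch with a single pool and an illustrative buffer size:

static int example_set_pools(struct fsl_mc_io *mc_io, u16 token, int dpbp_id)
{
	struct dpni_pools_cfg pools = { 0 };

	pools.num_dpbp = 1;
	pools.pools[0].dpbp_id = dpbp_id;
	pools.pools[0].buffer_size = 2048;	/* example value, in bytes */
	pools.pools[0].backup_pool = 0;

	/* allowed only while the DPNI is disabled */
	return dpni_set_pools(mc_io, 0, token, &pools);
}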
-
-/**
- * dpni_enable() - Enable the DPNI, allow sending and receiving frames.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_enable(struct fsl_mc_io *mc_io,
-		u32 cmd_flags,
-		u16 token)
-{
-	struct fsl_mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_ENABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_disable() - Disable the DPNI, stop sending and receiving frames.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_disable(struct fsl_mc_io *mc_io,
-		 u32 cmd_flags,
-		 u16 token)
-{
-	struct fsl_mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_DISABLE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_is_enabled() - Check if the DPNI is enabled.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @en:		Returns '1' if object is enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_is_enabled(struct fsl_mc_io *mc_io,
-		    u32 cmd_flags,
-		    u16 token,
-		    int *en)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_is_enabled *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_IS_ENABLED,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_is_enabled *)cmd.params;
-	*en = dpni_get_field(rsp_params->enabled, ENABLE);
-
-	return 0;
-}
-
-/**
- * dpni_reset() - Reset the DPNI, returns the object to initial state.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_reset(struct fsl_mc_io *mc_io,
-	       u32 cmd_flags,
-	       u16 token)
-{
-	struct fsl_mc_command cmd = { 0 };
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_RESET,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_set_irq_enable() - Set overall interrupt state.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @en:		Interrupt state - enable = 1, disable = 0
- *
- * Allows GPP software to control when interrupts are generated.
- * Each interrupt can have up to 32 causes. The enable/disable control applies
- * to the overall interrupt state; if the interrupt is disabled, none of its
- * causes can assert it.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_irq_enable(struct fsl_mc_io *mc_io,
-			u32 cmd_flags,
-			u16 token,
-			u8 irq_index,
-			u8 en)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_irq_enable *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_ENABLE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_irq_enable *)cmd.params;
-	dpni_set_field(cmd_params->enable, ENABLE, en);
-	cmd_params->irq_index = irq_index;
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_irq_enable() - Get overall interrupt state
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @en:		Returned interrupt state - enable = 1, disable = 0
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_irq_enable(struct fsl_mc_io *mc_io,
-			u32 cmd_flags,
-			u16 token,
-			u8 irq_index,
-			u8 *en)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_irq_enable *cmd_params;
-	struct dpni_rsp_get_irq_enable *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_ENABLE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_irq_enable *)cmd.params;
-	cmd_params->irq_index = irq_index;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_irq_enable *)cmd.params;
-	*en = dpni_get_field(rsp_params->enabled, ENABLE);
-
-	return 0;
-}
-
-/**
- * dpni_set_irq_mask() - Set interrupt mask.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @mask:	event mask to trigger interrupt;
- *			each bit:
- *				0 = ignore event
- *				1 = consider event for asserting IRQ
- *
- * Every interrupt can have up to 32 causes and the interrupt model supports
- * masking/unmasking each cause independently
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_irq_mask(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      u8 irq_index,
-		      u32 mask)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_irq_mask *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_IRQ_MASK,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_irq_mask *)cmd.params;
-	cmd_params->mask = cpu_to_le32(mask);
-	cmd_params->irq_index = irq_index;
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_irq_mask() - Get interrupt mask.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @mask:	Returned event mask to trigger interrupt
- *
- * Every interrupt can have up to 32 causes and the interrupt model supports
- * masking/unmasking each cause independently
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_irq_mask(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      u8 irq_index,
-		      u32 *mask)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_irq_mask *cmd_params;
-	struct dpni_rsp_get_irq_mask *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_MASK,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_irq_mask *)cmd.params;
-	cmd_params->irq_index = irq_index;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_irq_mask *)cmd.params;
-	*mask = le32_to_cpu(rsp_params->mask);
-
-	return 0;
-}
-
-/**
- * dpni_get_irq_status() - Get the current status of any pending interrupts.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @status:	Returned interrupts status - one bit per cause:
- *			0 = no interrupt pending
- *			1 = interrupt pending
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_irq_status(struct fsl_mc_io *mc_io,
-			u32 cmd_flags,
-			u16 token,
-			u8 irq_index,
-			u32 *status)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_irq_status *cmd_params;
-	struct dpni_rsp_get_irq_status *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_IRQ_STATUS,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_irq_status *)cmd.params;
-	cmd_params->status = cpu_to_le32(*status);
-	cmd_params->irq_index = irq_index;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_irq_status *)cmd.params;
-	*status = le32_to_cpu(rsp_params->status);
-
-	return 0;
-}
-
-/**
- * dpni_clear_irq_status() - Clear a pending interrupt's status
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @irq_index:	The interrupt index to configure
- * @status:	bits to clear (W1C) - one bit per cause:
- *			0 = don't change
- *			1 = clear status bit
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_clear_irq_status(struct fsl_mc_io *mc_io,
-			  u32 cmd_flags,
-			  u16 token,
-			  u8 irq_index,
-			  u32 status)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_clear_irq_status *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLEAR_IRQ_STATUS,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_clear_irq_status *)cmd.params;
-	cmd_params->irq_index = irq_index;
-	cmd_params->status = cpu_to_le32(status);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
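
Taken together, the IRQ helpers compose into a mask, enable, read, acknowledge
sequence; a sketch for the link state change cause, using DPNI_IRQ_INDEX and
DPNI_IRQ_EVENT_LINK_CHANGED as defined in dpni.h:

static int example_arm_link_irq(struct fsl_mc_io *mc_io, u16 token)
{
	int err;

	/* let only the link-changed cause assert the interrupt */
	err = dpni_set_irq_mask(mc_io, 0, token, DPNI_IRQ_INDEX,
				DPNI_IRQ_EVENT_LINK_CHANGED);
	if (err)
		return err;

	/* then turn on the overall interrupt state */
	return dpni_set_irq_enable(mc_io, 0, token, DPNI_IRQ_INDEX, 1);
}

static void example_handle_link_irq(struct fsl_mc_io *mc_io, u16 token)
{
	u32 status = 0;

	if (dpni_get_irq_status(mc_io, 0, token, DPNI_IRQ_INDEX, &status))
		return;

	if (status & DPNI_IRQ_EVENT_LINK_CHANGED)
		/* W1C: acknowledge exactly the cause that was handled */
		dpni_clear_irq_status(mc_io, 0, token, DPNI_IRQ_INDEX,
				      DPNI_IRQ_EVENT_LINK_CHANGED);
}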
-
-/**
- * dpni_get_attributes() - Retrieve DPNI attributes.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @attr:	Object's attributes
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_attributes(struct fsl_mc_io *mc_io,
-			u32 cmd_flags,
-			u16 token,
-			struct dpni_attr *attr)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_attr *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_ATTR,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_attr *)cmd.params;
-	attr->options = le32_to_cpu(rsp_params->options);
-	attr->num_queues = rsp_params->num_queues;
-	attr->num_tcs = rsp_params->num_tcs;
-	attr->mac_filter_entries = rsp_params->mac_filter_entries;
-	attr->vlan_filter_entries = rsp_params->vlan_filter_entries;
-	attr->qos_entries = rsp_params->qos_entries;
-	attr->fs_entries = le16_to_cpu(rsp_params->fs_entries);
-	attr->qos_key_size = rsp_params->qos_key_size;
-	attr->fs_key_size = rsp_params->fs_key_size;
-	attr->wriop_version = le16_to_cpu(rsp_params->wriop_version);
-
-	return 0;
-}
-
-/**
- * dpni_set_errors_behavior() - Set errors behavior
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @cfg:	Errors configuration
- *
- * This function may be called multiple times with different
- * error masks.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_errors_behavior(struct fsl_mc_io *mc_io,
-			     u32 cmd_flags,
-			     u16 token,
-			     struct dpni_error_cfg *cfg)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_errors_behavior *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_ERRORS_BEHAVIOR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_errors_behavior *)cmd.params;
-	cmd_params->errors = cpu_to_le32(cfg->errors);
-	dpni_set_field(cmd_params->flags, ERROR_ACTION, cfg->error_action);
-	dpni_set_field(cmd_params->flags, FRAME_ANN, cfg->set_frame_annotation);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_buffer_layout() - Retrieve buffer layout attributes.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @qtype:	Type of queue to retrieve configuration for
- * @layout:	Returns buffer layout attributes
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_buffer_layout(struct fsl_mc_io *mc_io,
-			   u32 cmd_flags,
-			   u16 token,
-			   enum dpni_queue_type qtype,
-			   struct dpni_buffer_layout *layout)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_buffer_layout *cmd_params;
-	struct dpni_rsp_get_buffer_layout *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_BUFFER_LAYOUT,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_buffer_layout *)cmd.params;
-	cmd_params->qtype = qtype;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_buffer_layout *)cmd.params;
-	layout->pass_timestamp = dpni_get_field(rsp_params->flags, PASS_TS);
-	layout->pass_parser_result = dpni_get_field(rsp_params->flags, PASS_PR);
-	layout->pass_frame_status = dpni_get_field(rsp_params->flags, PASS_FS);
-	layout->private_data_size = le16_to_cpu(rsp_params->private_data_size);
-	layout->data_align = le16_to_cpu(rsp_params->data_align);
-	layout->data_head_room = le16_to_cpu(rsp_params->head_room);
-	layout->data_tail_room = le16_to_cpu(rsp_params->tail_room);
-
-	return 0;
-}
-
-/**
- * dpni_set_buffer_layout() - Set buffer layout configuration.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @qtype:	Type of queue this configuration applies to
- * @layout:	Buffer layout configuration
- *
- * Return:	'0' on Success; Error code otherwise.
- *
- * @warning	Allowed only when DPNI is disabled
- */
-int dpni_set_buffer_layout(struct fsl_mc_io *mc_io,
-			   u32 cmd_flags,
-			   u16 token,
-			   enum dpni_queue_type qtype,
-			   const struct dpni_buffer_layout *layout)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_buffer_layout *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_BUFFER_LAYOUT,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_buffer_layout *)cmd.params;
-	cmd_params->qtype = qtype;
-	cmd_params->options = cpu_to_le16(layout->options);
-	dpni_set_field(cmd_params->flags, PASS_TS, layout->pass_timestamp);
-	dpni_set_field(cmd_params->flags, PASS_PR, layout->pass_parser_result);
-	dpni_set_field(cmd_params->flags, PASS_FS, layout->pass_frame_status);
-	cmd_params->private_data_size = cpu_to_le16(layout->private_data_size);
-	cmd_params->data_align = cpu_to_le16(layout->data_align);
-	cmd_params->head_room = cpu_to_le16(layout->data_head_room);
-	cmd_params->tail_room = cpu_to_le16(layout->data_tail_room);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
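
The options mask selects which layout fields take effect; a sketch that
requests frame status and an illustrative 64-byte headroom on the Rx path,
using the DPNI_BUF_LAYOUT_OPT_* flags from dpni.h:

static int example_rx_buf_layout(struct fsl_mc_io *mc_io, u16 token)
{
	struct dpni_buffer_layout layout = { 0 };

	layout.options = DPNI_BUF_LAYOUT_OPT_FRAME_STATUS |
			 DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM;
	layout.pass_frame_status = 1;
	layout.data_head_room = 64;	/* example value, in bytes */

	/* allowed only while the DPNI is disabled */
	return dpni_set_buffer_layout(mc_io, 0, token, DPNI_QUEUE_RX,
				      &layout);
}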
-
-/**
- * dpni_set_offload() - Set DPNI offload configuration.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @type:	Type of DPNI offload
- * @config:	Offload configuration.
- *		For checksum offloads, a non-zero value enables the offload
- *
- * Return:     '0' on Success; Error code otherwise.
- *
- * @warning    Allowed only when DPNI is disabled
- */
-int dpni_set_offload(struct fsl_mc_io *mc_io,
-		     u32 cmd_flags,
-		     u16 token,
-		     enum dpni_offload type,
-		     u32 config)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_offload *cmd_params;
-
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_OFFLOAD,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_offload *)cmd.params;
-	cmd_params->dpni_offload = type;
-	cmd_params->config = cpu_to_le32(config);
-
-	return mc_send_command(mc_io, &cmd);
-}
-
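-/**
- * dpni_get_offload() - Get DPNI offload configuration
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @type:	Type of DPNI offload
- * @config:	Returned offload configuration.
- *		For checksum offloads, a non-zero value indicates the offload
- *		is enabled.
- *
- * Return:	'0' on Success; Error code otherwise.
- */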
-int dpni_get_offload(struct fsl_mc_io *mc_io,
-		     u32 cmd_flags,
-		     u16 token,
-		     enum dpni_offload type,
-		     u32 *config)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_offload *cmd_params;
-	struct dpni_rsp_get_offload *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_OFFLOAD,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_offload *)cmd.params;
-	cmd_params->dpni_offload = type;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_offload *)cmd.params;
-	*config = le32_to_cpu(rsp_params->config);
-
-	return 0;
-}
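
A non-zero config value enables a checksum offload; e.g. turning on Rx L3
checksum validation and reading the setting back:

static int example_enable_rx_l3_csum(struct fsl_mc_io *mc_io, u16 token)
{
	u32 enabled = 0;
	int err;

	err = dpni_set_offload(mc_io, 0, token, DPNI_OFF_RX_L3_CSUM, 1);
	if (err)
		return err;

	/* optional read-back; 'enabled' should now be non-zero */
	return dpni_get_offload(mc_io, 0, token, DPNI_OFF_RX_L3_CSUM,
				&enabled);
}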
-
-/**
- * dpni_get_qdid() - Get the Queuing Destination ID (QDID) that should be used
- *			for enqueue operations
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @qtype:	Type of queue to receive QDID for
- * @qdid:	Returned virtual QDID value that should be used as an argument
- *			in all enqueue operations
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_qdid(struct fsl_mc_io *mc_io,
-		  u32 cmd_flags,
-		  u16 token,
-		  enum dpni_queue_type qtype,
-		  u16 *qdid)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_qdid *cmd_params;
-	struct dpni_rsp_get_qdid *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QDID,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_qdid *)cmd.params;
-	cmd_params->qtype = qtype;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_qdid *)cmd.params;
-	*qdid = le16_to_cpu(rsp_params->qdid);
-
-	return 0;
-}
-
-/**
- * dpni_get_tx_data_offset() - Get the Tx data offset (from start of buffer)
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @data_offset: Tx data offset (from start of buffer)
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_tx_data_offset(struct fsl_mc_io *mc_io,
-			    u32 cmd_flags,
-			    u16 token,
-			    u16 *data_offset)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_tx_data_offset *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TX_DATA_OFFSET,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_tx_data_offset *)cmd.params;
-	*data_offset = le16_to_cpu(rsp_params->data_offset);
-
-	return 0;
-}
-
-/**
- * dpni_set_link_cfg() - set the link configuration.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @cfg:	Link configuration
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_link_cfg(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      const struct dpni_link_cfg *cfg)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_link_cfg *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_LINK_CFG,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_link_cfg *)cmd.params;
-	cmd_params->rate = cpu_to_le32(cfg->rate);
-	cmd_params->options = cpu_to_le64(cfg->options);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_link_state() - Return the link state (either up or down)
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @state:	Returned link state
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_link_state(struct fsl_mc_io *mc_io,
-			u32 cmd_flags,
-			u16 token,
-			struct dpni_link_state *state)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_link_state *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_LINK_STATE,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_link_state *)cmd.params;
-	state->up = dpni_get_field(rsp_params->flags, LINK_STATE);
-	state->rate = le32_to_cpu(rsp_params->rate);
-	state->options = le64_to_cpu(rsp_params->options);
-
-	return 0;
-}
-
-/**
- * dpni_set_max_frame_length() - Set the maximum received frame length.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @max_frame_length:	Maximum received frame length (in
- *				bytes); frame is discarded if its
- *				length exceeds this value
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_max_frame_length(struct fsl_mc_io *mc_io,
-			      u32 cmd_flags,
-			      u16 token,
-			      u16 max_frame_length)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_max_frame_length *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MAX_FRAME_LENGTH,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_max_frame_length *)cmd.params;
-	cmd_params->max_frame_length = cpu_to_le16(max_frame_length);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_max_frame_length() - Get the maximum received frame length.
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @max_frame_length:	Maximum received frame length (in
- *				bytes); frame is discarded if its
- *				length exceeds this value
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_max_frame_length(struct fsl_mc_io *mc_io,
-			      u32 cmd_flags,
-			      u16 token,
-			      u16 *max_frame_length)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_max_frame_length *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MAX_FRAME_LENGTH,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_max_frame_length *)cmd.params;
-	*max_frame_length = le16_to_cpu(rsp_params->max_frame_length);
-
-	return 0;
-}
-
-/**
- * dpni_set_multicast_promisc() - Enable/disable multicast promiscuous mode
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @en:		Set to '1' to enable; '0' to disable
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
-			       u32 cmd_flags,
-			       u16 token,
-			       int en)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_multicast_promisc *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_MCAST_PROMISC,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_multicast_promisc *)cmd.params;
-	dpni_set_field(cmd_params->enable, ENABLE, en);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_multicast_promisc() - Get multicast promiscuous mode
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @en:		Returns '1' if enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
-			       u32 cmd_flags,
-			       u16 token,
-			       int *en)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_multicast_promisc *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_MCAST_PROMISC,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_multicast_promisc *)cmd.params;
-	*en = dpni_get_field(rsp_params->enabled, ENABLE);
-
-	return 0;
-}
-
-/**
- * dpni_set_unicast_promisc() - Enable/disable unicast promiscuous mode
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @en:		Set to '1' to enable; '0' to disable
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_unicast_promisc(struct fsl_mc_io *mc_io,
-			     u32 cmd_flags,
-			     u16 token,
-			     int en)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_unicast_promisc *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_UNICAST_PROMISC,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_unicast_promisc *)cmd.params;
-	dpni_set_field(cmd_params->enable, ENABLE, en);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_unicast_promisc() - Get unicast promiscuous mode
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @en:		Returns '1' if enabled; '0' otherwise
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_unicast_promisc(struct fsl_mc_io *mc_io,
-			     u32 cmd_flags,
-			     u16 token,
-			     int *en)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_unicast_promisc *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_UNICAST_PROMISC,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_unicast_promisc *)cmd.params;
-	*en = dpni_get_field(rsp_params->enabled, ENABLE);
-
-	return 0;
-}
-
-/**
- * dpni_set_primary_mac_addr() - Set the primary MAC address
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @mac_addr:	MAC address to set as primary address
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
-			      u32 cmd_flags,
-			      u16 token,
-			      const u8 mac_addr[6])
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_primary_mac_addr *cmd_params;
-	int i;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_PRIM_MAC,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_primary_mac_addr *)cmd.params;
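-	/* the MC command expects the MAC address bytes in reverse order */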
-	for (i = 0; i < 6; i++)
-		cmd_params->mac_addr[i] = mac_addr[5 - i];
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_primary_mac_addr() - Get the primary MAC address
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @mac_addr:	Returned MAC address
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_primary_mac_addr(struct fsl_mc_io *mc_io,
-			      u32 cmd_flags,
-			      u16 token,
-			      u8 mac_addr[6])
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_primary_mac_addr *rsp_params;
-	int i, err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PRIM_MAC,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_primary_mac_addr *)cmd.params;
-	for (i = 0; i < 6; i++)
-		mac_addr[5 - i] = rsp_params->mac_addr[i];
-
-	return 0;
-}
-
-/**
- * dpni_get_port_mac_addr() - Retrieve MAC address associated to the physical
- *			port the DPNI is attached to
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @mac_addr:	MAC address of the physical port, if any, otherwise 0
- *
- * The primary MAC address is not cleared by this operation.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_port_mac_addr(struct fsl_mc_io *mc_io,
-			   u32 cmd_flags,
-			   u16 token,
-			   u8 mac_addr[6])
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_rsp_get_port_mac_addr *rsp_params;
-	int i, err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_PORT_MAC_ADDR,
-					  cmd_flags,
-					  token);
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_port_mac_addr *)cmd.params;
-	for (i = 0; i < 6; i++)
-		mac_addr[5 - i] = rsp_params->mac_addr[i];
-
-	return 0;
-}
-
-/**
- * dpni_add_mac_addr() - Add MAC address filter
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @mac_addr:	MAC address to add
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_add_mac_addr(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      const u8 mac_addr[6])
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_add_mac_addr *cmd_params;
-	int i;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_ADD_MAC_ADDR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_add_mac_addr *)cmd.params;
-	for (i = 0; i < 6; i++)
-		cmd_params->mac_addr[i] = mac_addr[5 - i];
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_remove_mac_addr() - Remove MAC address filter
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @mac_addr:	MAC address to remove
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_remove_mac_addr(struct fsl_mc_io *mc_io,
-			 u32 cmd_flags,
-			 u16 token,
-			 const u8 mac_addr[6])
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_remove_mac_addr *cmd_params;
-	int i;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_REMOVE_MAC_ADDR,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_remove_mac_addr *)cmd.params;
-	for (i = 0; i < 6; i++)
-		cmd_params->mac_addr[i] = mac_addr[5 - i];
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_clear_mac_filters() - Clear all unicast and/or multicast MAC filters
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @unicast:	Set to '1' to clear unicast addresses
- * @multicast:	Set to '1' to clear multicast addresses
- *
- * The primary MAC address is not cleared by this operation.
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_clear_mac_filters(struct fsl_mc_io *mc_io,
-			   u32 cmd_flags,
-			   u16 token,
-			   int unicast,
-			   int multicast)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_clear_mac_filters *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_CLR_MAC_FILTERS,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_clear_mac_filters *)cmd.params;
-	dpni_set_field(cmd_params->flags, UNICAST_FILTERS, unicast);
-	dpni_set_field(cmd_params->flags, MULTICAST_FILTERS, multicast);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_set_rx_tc_dist() - Set Rx traffic class distribution configuration
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @tc_id:	Traffic class selection (0-7)
- * @cfg:	Traffic class distribution configuration
- *
- * Warning: if 'dist_mode != DPNI_DIST_MODE_NONE', call dpni_prepare_key_cfg()
- *			first to prepare the key_cfg_iova parameter
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_rx_tc_dist(struct fsl_mc_io *mc_io,
-			u32 cmd_flags,
-			u16 token,
-			u8 tc_id,
-			const struct dpni_rx_tc_dist_cfg *cfg)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_rx_tc_dist *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_RX_TC_DIST,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_rx_tc_dist *)cmd.params;
-	cmd_params->dist_size = cpu_to_le16(cfg->dist_size);
-	cmd_params->tc_id = tc_id;
-	dpni_set_field(cmd_params->flags, DIST_MODE, cfg->dist_mode);
-	dpni_set_field(cmd_params->flags, MISS_ACTION, cfg->fs_cfg.miss_action);
-	cmd_params->default_flow_id = cpu_to_le16(cfg->fs_cfg.default_flow_id);
-	cmd_params->key_cfg_iova = cpu_to_le64(cfg->key_cfg_iova);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_set_queue() - Set queue parameters
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @qtype:	Type of queue - all queue types are supported, although
- *		the command is ignored for Tx
- * @tc:		Traffic class, in range 0 to NUM_TCS - 1
- * @index:	Selects the specific queue out of the set allocated for the
- *		same TC. Value must be in range 0 to NUM_QUEUES - 1
- * @options:	A combination of DPNI_QUEUE_OPT_ values that control what
- *		configuration options are set on the queue
- * @queue:	Queue structure
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_queue(struct fsl_mc_io *mc_io,
-		   u32 cmd_flags,
-		   u16 token,
-		   enum dpni_queue_type qtype,
-		   u8 tc,
-		   u8 index,
-		   u8 options,
-		   const struct dpni_queue *queue)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_queue *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_QUEUE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_queue *)cmd.params;
-	cmd_params->qtype = qtype;
-	cmd_params->tc = tc;
-	cmd_params->index = index;
-	cmd_params->options = options;
-	cmd_params->dest_id = cpu_to_le32(queue->destination.id);
-	cmd_params->dest_prio = queue->destination.priority;
-	dpni_set_field(cmd_params->flags, DEST_TYPE, queue->destination.type);
-	dpni_set_field(cmd_params->flags, STASH_CTRL, queue->flc.stash_control);
-	dpni_set_field(cmd_params->flags, HOLD_ACTIVE,
-		       queue->destination.hold_active);
-	cmd_params->flc = cpu_to_le64(queue->flc.value);
-	cmd_params->user_context = cpu_to_le64(queue->user_context);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
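
A common use is pointing an Rx queue at a DPCON channel and attaching a driver
cookie; a sketch, assuming DPNI_DEST_DPCON and the DPNI_QUEUE_OPT_* option
flags from dpni.h:

static int example_bind_rx_queue(struct fsl_mc_io *mc_io, u16 token,
				 int dpcon_id, u64 drv_ctx)
{
	struct dpni_queue queue = { 0 };

	queue.destination.id = dpcon_id;
	queue.destination.type = DPNI_DEST_DPCON;
	queue.destination.priority = 1;
	queue.user_context = drv_ctx;	/* returned with each dequeue */

	return dpni_set_queue(mc_io, 0, token, DPNI_QUEUE_RX, 0, 0,
			      DPNI_QUEUE_OPT_DEST | DPNI_QUEUE_OPT_USER_CTX,
			      &queue);
}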
-
-/**
- * dpni_get_queue() - Get queue parameters
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @qtype:	Type of queue - all queue types are supported
- * @tc:		Traffic class, in range 0 to NUM_TCS - 1
- * @index:	Selects the specific queue out of the set allocated for the
- *		same TC. Value must be in range 0 to NUM_QUEUES - 1
- * @queue:	Queue configuration structure
- * @qid:	Queue identification
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_queue(struct fsl_mc_io *mc_io,
-		   u32 cmd_flags,
-		   u16 token,
-		   enum dpni_queue_type qtype,
-		   u8 tc,
-		   u8 index,
-		   struct dpni_queue *queue,
-		   struct dpni_queue_id *qid)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_queue *cmd_params;
-	struct dpni_rsp_get_queue *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_QUEUE,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_queue *)cmd.params;
-	cmd_params->qtype = qtype;
-	cmd_params->tc = tc;
-	cmd_params->index = index;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_queue *)cmd.params;
-	queue->destination.id = le32_to_cpu(rsp_params->dest_id);
-	queue->destination.priority = rsp_params->dest_prio;
-	queue->destination.type = dpni_get_field(rsp_params->flags,
-						 DEST_TYPE);
-	queue->flc.stash_control = dpni_get_field(rsp_params->flags,
-						  STASH_CTRL);
-	queue->destination.hold_active = dpni_get_field(rsp_params->flags,
-							HOLD_ACTIVE);
-	queue->flc.value = le64_to_cpu(rsp_params->flc);
-	queue->user_context = le64_to_cpu(rsp_params->user_context);
-	qid->fqid = le32_to_cpu(rsp_params->fqid);
-	qid->qdbin = le16_to_cpu(rsp_params->qdbin);
-
-	return 0;
-}
-
-/**
- * dpni_get_statistics() - Get DPNI statistics
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @page:	Selects the statistics page to retrieve, see
- *		DPNI_GET_STATISTICS output. Pages are numbered 0 to 2.
- * @stat:	Structure containing the statistics
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_statistics(struct fsl_mc_io *mc_io,
-			u32 cmd_flags,
-			u16 token,
-			u8 page,
-			union dpni_statistics *stat)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_statistics *cmd_params;
-	struct dpni_rsp_get_statistics *rsp_params;
-	int i, err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_STATISTICS,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_statistics *)cmd.params;
-	cmd_params->page_number = page;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_statistics *)cmd.params;
-	for (i = 0; i < DPNI_STATISTICS_CNT; i++)
-		stat->raw.counter[i] = le64_to_cpu(rsp_params->counter[i]);
-
-	return 0;
-}
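
Each page holds up to DPNI_STATISTICS_CNT 64-bit counters, overlaid by union
dpni_statistics from dpni.h; e.g. reading the ingress frame count from page 0:

static u64 example_ingress_frames(struct fsl_mc_io *mc_io, u16 token)
{
	union dpni_statistics stats;

	if (dpni_get_statistics(mc_io, 0, token, 0, &stats))
		return 0;

	return stats.page_0.ingress_all_frames;
}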
-
-/**
- * dpni_set_taildrop() - Set taildrop per queue or TC
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @cg_point:	Congestion point
- * @qtype:	Queue type on which the taildrop is configured.
- *		Only Rx queues are supported for now
- * @tc:		Traffic class to apply this taildrop to
- * @index:	Index of the queue if the DPNI supports multiple queues for
- *		traffic distribution. Ignored if CONGESTION_POINT is not 0.
- * @taildrop:	Taildrop structure
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_set_taildrop(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      enum dpni_congestion_point cg_point,
-		      enum dpni_queue_type qtype,
-		      u8 tc,
-		      u8 index,
-		      struct dpni_taildrop *taildrop)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_set_taildrop *cmd_params;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_SET_TAILDROP,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_set_taildrop *)cmd.params;
-	cmd_params->congestion_point = cg_point;
-	cmd_params->qtype = qtype;
-	cmd_params->tc = tc;
-	cmd_params->index = index;
-	dpni_set_field(cmd_params->enable, ENABLE, taildrop->enable);
-	cmd_params->units = taildrop->units;
-	cmd_params->threshold = cpu_to_le32(taildrop->threshold);
-
-	/* send command to mc */
-	return mc_send_command(mc_io, &cmd);
-}
-
-/**
- * dpni_get_taildrop() - Get taildrop information
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @token:	Token of DPNI object
- * @cg_point:	Congestion point
- * @qtype:	Queue type on which the taildrop is configured.
- *		Only Rx queues are supported for now
- * @tc:		Traffic class to apply this taildrop to
- * @index:	Index of the queue if the DPNI supports multiple queues for
- *		traffic distribution. Ignored if CONGESTION_POINT is not 0.
- * @taildrop:	Taildrop structure
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_taildrop(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      enum dpni_congestion_point cg_point,
-		      enum dpni_queue_type qtype,
-		      u8 tc,
-		      u8 index,
-		      struct dpni_taildrop *taildrop)
-{
-	struct fsl_mc_command cmd = { 0 };
-	struct dpni_cmd_get_taildrop *cmd_params;
-	struct dpni_rsp_get_taildrop *rsp_params;
-	int err;
-
-	/* prepare command */
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_TAILDROP,
-					  cmd_flags,
-					  token);
-	cmd_params = (struct dpni_cmd_get_taildrop *)cmd.params;
-	cmd_params->congestion_point = cg_point;
-	cmd_params->qtype = qtype;
-	cmd_params->tc = tc;
-	cmd_params->index = index;
-
-	/* send command to mc */
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	/* retrieve response parameters */
-	rsp_params = (struct dpni_rsp_get_taildrop *)cmd.params;
-	taildrop->enable = dpni_get_field(rsp_params->enable, ENABLE);
-	taildrop->units = rsp_params->units;
-	taildrop->threshold = le32_to_cpu(rsp_params->threshold);
-
-	return 0;
-}
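
A sketch that arms a byte-based taildrop on one Rx queue; DPNI_CP_QUEUE and
DPNI_CONGESTION_UNIT_BYTES are assumed from dpni.h, and the threshold is
illustrative:

static int example_set_rx_taildrop(struct fsl_mc_io *mc_io, u16 token,
				   u8 queue_index)
{
	struct dpni_taildrop td = { 0 };

	td.enable = 1;
	td.units = DPNI_CONGESTION_UNIT_BYTES;
	td.threshold = 64 * 1024;

	return dpni_set_taildrop(mc_io, 0, token, DPNI_CP_QUEUE,
				 DPNI_QUEUE_RX, 0, queue_index, &td);
}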
-
-/**
- * dpni_get_api_version() - Get Data Path Network Interface API version
- * @mc_io:	Pointer to MC portal's I/O object
- * @cmd_flags:	Command flags; one or more of 'MC_CMD_FLAG_'
- * @major_ver:	Major version of data path network interface API
- * @minor_ver:	Minor version of data path network interface API
- *
- * Return:	'0' on Success; Error code otherwise.
- */
-int dpni_get_api_version(struct fsl_mc_io *mc_io,
-			 u32 cmd_flags,
-			 u16 *major_ver,
-			 u16 *minor_ver)
-{
-	struct dpni_rsp_get_api_version *rsp_params;
-	struct fsl_mc_command cmd = { 0 };
-	int err;
-
-	cmd.header = mc_encode_cmd_header(DPNI_CMDID_GET_API_VERSION,
-					  cmd_flags, 0);
-
-	err = mc_send_command(mc_io, &cmd);
-	if (err)
-		return err;
-
-	rsp_params = (struct dpni_rsp_get_api_version *)cmd.params;
-	*major_ver = le16_to_cpu(rsp_params->major);
-	*minor_ver = le16_to_cpu(rsp_params->minor);
-
-	return 0;
-}
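
Drivers typically gate their probe on a minimum firmware API; a sketch (the
7.0 minimum below is illustrative, not a documented requirement):

static int example_check_api_version(struct fsl_mc_io *mc_io)
{
	u16 major = 0, minor = 0;
	int err;

	err = dpni_get_api_version(mc_io, 0, &major, &minor);
	if (err)
		return err;

	if (major < 7)
		return -EOPNOTSUPP;

	return 0;
}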
diff --git a/drivers/staging/fsl-dpaa2/ethernet/dpni.h b/drivers/staging/fsl-dpaa2/ethernet/dpni.h
deleted file mode 100644
index b378a00..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/dpni.h
+++ /dev/null
@@ -1,824 +0,0 @@
-/* SPDX-License-Identifier: (GPL-2.0+ OR BSD-3-Clause) */
-/* Copyright 2013-2016 Freescale Semiconductor Inc.
- * Copyright 2016 NXP
- */
-#ifndef __FSL_DPNI_H
-#define __FSL_DPNI_H
-
-#include "dpkg.h"
-
-struct fsl_mc_io;
-
-/**
- * Data Path Network Interface API
- * Contains initialization APIs and runtime control APIs for DPNI
- */
-
-/** General DPNI macros */
-
-/**
- * Maximum number of traffic classes
- */
-#define DPNI_MAX_TC				8
-/**
- * Maximum number of buffer pools per DPNI
- */
-#define DPNI_MAX_DPBP				8
-
-/**
- * All traffic classes considered; see dpni_set_queue()
- */
-#define DPNI_ALL_TCS				(u8)(-1)
-/**
- * All flows within traffic class considered; see dpni_set_queue()
- */
-#define DPNI_ALL_TC_FLOWS			(u16)(-1)
-/**
- * Generate new flow ID; see dpni_set_queue()
- */
-#define DPNI_NEW_FLOW_ID			(u16)(-1)
-
-/**
- * Tx traffic is always released to a buffer pool on transmit; there are no
- * resources allocated to have the frames confirmed back to the source after
- * transmission.
- */
-#define DPNI_OPT_TX_FRM_RELEASE			0x000001
-/**
- * Disables support for MAC address filtering for addresses other than primary
- * MAC address. This affects both unicast and multicast. Promiscuous mode can
- * still be enabled/disabled for both unicast and multicast. If promiscuous mode
- * is disabled, only traffic matching the primary MAC address will be accepted.
- */
-#define DPNI_OPT_NO_MAC_FILTER			0x000002
-/**
- * Allocate policers for this DPNI. They can be used to rate-limit traffic on
- * a per traffic class (TC) basis.
- */
-#define DPNI_OPT_HAS_POLICING			0x000004
-/**
- * Congestion can be managed in several ways: letting the buffer pool deplete
- * on ingress, applying taildrop on each queue, or using congestion groups for
- * sets of queues. If set, a single congestion group is configured across all
- * TCs. If reset, a congestion group is allocated for each TC. Only relevant
- * if the DPNI has multiple traffic classes.
- */
-#define DPNI_OPT_SHARED_CONGESTION		0x000008
-/**
- * Enables TCAM for Flow Steering and QoS look-ups. If not specified, all
- * look-ups are exact match. Note that TCAM is not available on LS1088 and its
- * variants. Setting this bit on these SoCs will trigger an error.
- */
-#define DPNI_OPT_HAS_KEY_MASKING		0x000010
-/**
- * Disables the flow steering table.
- */
-#define DPNI_OPT_NO_FS				0x000020
-
-int dpni_open(struct fsl_mc_io	*mc_io,
-	      u32		cmd_flags,
-	      int		dpni_id,
-	      u16		*token);
-
-int dpni_close(struct fsl_mc_io	*mc_io,
-	       u32		cmd_flags,
-	       u16		token);
-
-/**
- * struct dpni_pools_cfg - Structure representing buffer pools configuration
- * @num_dpbp: Number of DPBPs
- * @pools: Array of buffer pools parameters; The number of valid entries
- *	must match 'num_dpbp' value
- * @pools.dpbp_id: DPBP object ID
- * @pools.buffer_size: Buffer size
- * @pools.backup_pool: Backup pool
- */
-struct dpni_pools_cfg {
-	u8		num_dpbp;
-	struct {
-		int	dpbp_id;
-		u16	buffer_size;
-		int	backup_pool;
-	} pools[DPNI_MAX_DPBP];
-};
-
-int dpni_set_pools(struct fsl_mc_io		*mc_io,
-		   u32				cmd_flags,
-		   u16				token,
-		   const struct dpni_pools_cfg	*cfg);
-
-int dpni_enable(struct fsl_mc_io	*mc_io,
-		u32			cmd_flags,
-		u16			token);
-
-int dpni_disable(struct fsl_mc_io	*mc_io,
-		 u32			cmd_flags,
-		 u16			token);
-
-int dpni_is_enabled(struct fsl_mc_io	*mc_io,
-		    u32			cmd_flags,
-		    u16			token,
-		    int			*en);
-
-int dpni_reset(struct fsl_mc_io	*mc_io,
-	       u32		cmd_flags,
-	       u16		token);
-
-/**
- * DPNI IRQ Index and Events
- */
-
-/**
- * IRQ index
- */
-#define DPNI_IRQ_INDEX				0
-/**
- * IRQ event - indicates a change in link state
- */
-#define DPNI_IRQ_EVENT_LINK_CHANGED		0x00000001
-
-int dpni_set_irq_enable(struct fsl_mc_io	*mc_io,
-			u32			cmd_flags,
-			u16			token,
-			u8			irq_index,
-			u8			en);
-
-int dpni_get_irq_enable(struct fsl_mc_io	*mc_io,
-			u32			cmd_flags,
-			u16			token,
-			u8			irq_index,
-			u8			*en);
-
-int dpni_set_irq_mask(struct fsl_mc_io	*mc_io,
-		      u32		cmd_flags,
-		      u16		token,
-		      u8		irq_index,
-		      u32		mask);
-
-int dpni_get_irq_mask(struct fsl_mc_io	*mc_io,
-		      u32		cmd_flags,
-		      u16		token,
-		      u8		irq_index,
-		      u32		*mask);
-
-int dpni_get_irq_status(struct fsl_mc_io	*mc_io,
-			u32			cmd_flags,
-			u16			token,
-			u8			irq_index,
-			u32			*status);
-
-int dpni_clear_irq_status(struct fsl_mc_io	*mc_io,
-			  u32			cmd_flags,
-			  u16			token,
-			  u8			irq_index,
-			  u32			status);
-
-/**
- * struct dpni_attr - Structure representing DPNI attributes
- * @options: Any combination of the following options:
- *		DPNI_OPT_TX_FRM_RELEASE
- *		DPNI_OPT_NO_MAC_FILTER
- *		DPNI_OPT_HAS_POLICING
- *		DPNI_OPT_SHARED_CONGESTION
- *		DPNI_OPT_HAS_KEY_MASKING
- *		DPNI_OPT_NO_FS
- * @num_queues: Number of Tx and Rx queues used for traffic distribution.
- * @num_tcs: Number of traffic classes (TCs), reserved for the DPNI.
- * @mac_filter_entries: Number of entries in the MAC address filtering table.
- * @vlan_filter_entries: Number of entries in the VLAN address filtering table.
- * @qos_entries: Number of entries in the QoS classification table.
- * @fs_entries: Number of entries in the flow steering table.
- * @qos_key_size: Size, in bytes, of the QoS look-up key. Defining a key larger
- *		than this when adding QoS entries will result in an error.
- * @fs_key_size: Size, in bytes, of the flow steering look-up key. Defining a
- *		key larger than this when composing the hash + FS key will
- *		result in an error.
- * @wriop_version: Version of WRIOP HW block. The 3 version values are stored
- *		on 6, 5, 5 bits respectively.
- */
-struct dpni_attr {
-	u32 options;
-	u8 num_queues;
-	u8 num_tcs;
-	u8 mac_filter_entries;
-	u8 vlan_filter_entries;
-	u8 qos_entries;
-	u16 fs_entries;
-	u8 qos_key_size;
-	u8 fs_key_size;
-	u16 wriop_version;
-};
-
-int dpni_get_attributes(struct fsl_mc_io	*mc_io,
-			u32			cmd_flags,
-			u16			token,
-			struct dpni_attr	*attr);
-
-/**
- * DPNI errors
- */
-
-/**
- * Extract out of frame header error
- */
-#define DPNI_ERROR_EOFHE	0x00020000
-/**
- * Frame length error
- */
-#define DPNI_ERROR_FLE		0x00002000
-/**
- * Frame physical error
- */
-#define DPNI_ERROR_FPE		0x00001000
-/**
- * Parsing header error
- */
-#define DPNI_ERROR_PHE		0x00000020
-/**
- * Parser L3 checksum error
- */
-#define DPNI_ERROR_L3CE		0x00000004
-/**
- * Parser L4 checksum error
- */
-#define DPNI_ERROR_L4CE		0x00000001
-
-/**
- * enum dpni_error_action - Defines DPNI behavior for errors
- * @DPNI_ERROR_ACTION_DISCARD: Discard the frame
- * @DPNI_ERROR_ACTION_CONTINUE: Continue with the normal flow
- * @DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE: Send the frame to the error queue
- */
-enum dpni_error_action {
-	DPNI_ERROR_ACTION_DISCARD = 0,
-	DPNI_ERROR_ACTION_CONTINUE = 1,
-	DPNI_ERROR_ACTION_SEND_TO_ERROR_QUEUE = 2
-};
-
-/**
- * struct dpni_error_cfg - Structure representing DPNI errors treatment
- * @errors: Errors mask; use 'DPNI_ERROR_<X>' values
- * @error_action: The desired action for the errors mask
- * @set_frame_annotation: Set to '1' to mark the errors in frame annotation
- *		status (FAS); relevant only for the non-discard action
- */
-struct dpni_error_cfg {
-	u32			errors;
-	enum dpni_error_action	error_action;
-	int			set_frame_annotation;
-};
-
-int dpni_set_errors_behavior(struct fsl_mc_io		*mc_io,
-			     u32			cmd_flags,
-			     u16			token,
-			     struct dpni_error_cfg	*cfg);
-
-/**
- * DPNI buffer layout modification options
- */
-
-/**
- * Select to modify the time-stamp setting
- */
-#define DPNI_BUF_LAYOUT_OPT_TIMESTAMP		0x00000001
-/**
- * Select to modify the parser-result setting; not applicable for Tx
- */
-#define DPNI_BUF_LAYOUT_OPT_PARSER_RESULT	0x00000002
-/**
- * Select to modify the frame-status setting
- */
-#define DPNI_BUF_LAYOUT_OPT_FRAME_STATUS	0x00000004
-/**
- * Select to modify the private-data-size setting
- */
-#define DPNI_BUF_LAYOUT_OPT_PRIVATE_DATA_SIZE	0x00000008
-/**
- * Select to modify the data-alignment setting
- */
-#define DPNI_BUF_LAYOUT_OPT_DATA_ALIGN		0x00000010
-/**
- * Select to modify the data-head-room setting
- */
-#define DPNI_BUF_LAYOUT_OPT_DATA_HEAD_ROOM	0x00000020
-/**
- * Select to modify the data-tail-room setting
- */
-#define DPNI_BUF_LAYOUT_OPT_DATA_TAIL_ROOM	0x00000040
-
-/**
- * struct dpni_buffer_layout - Structure representing DPNI buffer layout
- * @options: Flags representing the suggested modifications to the buffer
- *		layout; Use any combination of 'DPNI_BUF_LAYOUT_OPT_<X>' flags
- * @pass_timestamp: Pass timestamp value
- * @pass_parser_result: Pass parser results
- * @pass_frame_status: Pass frame status
- * @private_data_size: Size kept for private data (in bytes)
- * @data_align: Data alignment
- * @data_head_room: Data head room
- * @data_tail_room: Data tail room
- */
-struct dpni_buffer_layout {
-	u32	options;
-	int	pass_timestamp;
-	int	pass_parser_result;
-	int	pass_frame_status;
-	u16	private_data_size;
-	u16	data_align;
-	u16	data_head_room;
-	u16	data_tail_room;
-};
-
-/**
- * enum dpni_queue_type - Identifies a type of queue targeted by the command
- * @DPNI_QUEUE_RX: Rx queue
- * @DPNI_QUEUE_TX: Tx queue
- * @DPNI_QUEUE_TX_CONFIRM: Tx confirmation queue
- * @DPNI_QUEUE_RX_ERR: Rx error queue
- */
-enum dpni_queue_type {
-	DPNI_QUEUE_RX,
-	DPNI_QUEUE_TX,
-	DPNI_QUEUE_TX_CONFIRM,
-	DPNI_QUEUE_RX_ERR,
-};
-
-int dpni_get_buffer_layout(struct fsl_mc_io		*mc_io,
-			   u32				cmd_flags,
-			   u16				token,
-			   enum dpni_queue_type		qtype,
-			   struct dpni_buffer_layout	*layout);
-
-int dpni_set_buffer_layout(struct fsl_mc_io		   *mc_io,
-			   u32				   cmd_flags,
-			   u16				   token,
-			   enum dpni_queue_type		   qtype,
-			   const struct dpni_buffer_layout *layout);
-
-/**
- * enum dpni_offload - Identifies a type of offload targeted by the command
- * @DPNI_OFF_RX_L3_CSUM: Rx L3 checksum validation
- * @DPNI_OFF_RX_L4_CSUM: Rx L4 checksum validation
- * @DPNI_OFF_TX_L3_CSUM: Tx L3 checksum generation
- * @DPNI_OFF_TX_L4_CSUM: Tx L4 checksum generation
- */
-enum dpni_offload {
-	DPNI_OFF_RX_L3_CSUM,
-	DPNI_OFF_RX_L4_CSUM,
-	DPNI_OFF_TX_L3_CSUM,
-	DPNI_OFF_TX_L4_CSUM,
-};
-
-int dpni_set_offload(struct fsl_mc_io	*mc_io,
-		     u32		cmd_flags,
-		     u16		token,
-		     enum dpni_offload	type,
-		     u32		config);
-
-int dpni_get_offload(struct fsl_mc_io	*mc_io,
-		     u32		cmd_flags,
-		     u16		token,
-		     enum dpni_offload	type,
-		     u32		*config);
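-
-/*
- * Usage sketch, not part of the original API: enable Rx L3 and L4 checksum
- * validation. For these offload types 'config' acts as a boolean enable.
- */
-static inline int example_enable_rx_csum(struct fsl_mc_io *mc_io, u16 token)
-{
-	int err;
-
-	err = dpni_set_offload(mc_io, 0, token, DPNI_OFF_RX_L3_CSUM, 1);
-	if (err)
-		return err;
-
-	return dpni_set_offload(mc_io, 0, token, DPNI_OFF_RX_L4_CSUM, 1);
-}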
-
-int dpni_get_qdid(struct fsl_mc_io	*mc_io,
-		  u32			cmd_flags,
-		  u16			token,
-		  enum dpni_queue_type	qtype,
-		  u16			*qdid);
-
-int dpni_get_tx_data_offset(struct fsl_mc_io	*mc_io,
-			    u32			cmd_flags,
-			    u16			token,
-			    u16			*data_offset);
-
-#define DPNI_STATISTICS_CNT		7
-
-/**
- * union dpni_statistics - Union describing the DPNI statistics
- * @page_0: Page_0 statistics structure
- * @page_0.ingress_all_frames: Ingress frame count
- * @page_0.ingress_all_bytes: Ingress byte count
- * @page_0.ingress_multicast_frames: Ingress multicast frame count
- * @page_0.ingress_multicast_bytes: Ingress multicast byte count
- * @page_0.ingress_broadcast_frames: Ingress broadcast frame count
- * @page_0.ingress_broadcast_bytes: Ingress broadcast byte count
- * @page_1: Page_1 statistics structure
- * @page_1.egress_all_frames: Egress frame count
- * @page_1.egress_all_bytes: Egress byte count
- * @page_1.egress_multicast_frames: Egress multicast frame count
- * @page_1.egress_multicast_bytes: Egress multicast byte count
- * @page_1.egress_broadcast_frames: Egress broadcast frame count
- * @page_1.egress_broadcast_bytes: Egress broadcast byte count
- * @page_2: Page_2 statistics structure
- * @page_2.ingress_filtered_frames: Ingress filtered frame count
- * @page_2.ingress_discarded_frames: Ingress discarded frame count
- * @page_2.ingress_nobuffer_discards: Ingress discarded frame count due to
- *	lack of buffers
- * @page_2.egress_discarded_frames: Egress discarded frame count
- * @page_2.egress_confirmed_frames: Egress confirmed frame count
- * @raw: raw statistics structure, used to index counters
- */
-union dpni_statistics {
-	struct {
-		u64 ingress_all_frames;
-		u64 ingress_all_bytes;
-		u64 ingress_multicast_frames;
-		u64 ingress_multicast_bytes;
-		u64 ingress_broadcast_frames;
-		u64 ingress_broadcast_bytes;
-	} page_0;
-	struct {
-		u64 egress_all_frames;
-		u64 egress_all_bytes;
-		u64 egress_multicast_frames;
-		u64 egress_multicast_bytes;
-		u64 egress_broadcast_frames;
-		u64 egress_broadcast_bytes;
-	} page_1;
-	struct {
-		u64 ingress_filtered_frames;
-		u64 ingress_discarded_frames;
-		u64 ingress_nobuffer_discards;
-		u64 egress_discarded_frames;
-		u64 egress_confirmed_frames;
-	} page_2;
-	struct {
-		u64 counter[DPNI_STATISTICS_CNT];
-	} raw;
-};
-
-int dpni_get_statistics(struct fsl_mc_io	*mc_io,
-			u32			cmd_flags,
-			u16			token,
-			u8			page,
-			union dpni_statistics	*stat);
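-
-/*
- * Usage sketch, not part of the original API: read the page 0 counters and
- * pick out the total ingress frame count.
- */
-static inline int example_read_rx_frames(struct fsl_mc_io *mc_io, u16 token,
-					 u64 *rx_frames)
-{
-	union dpni_statistics stats;
-	int err;
-
-	err = dpni_get_statistics(mc_io, 0, token, 0, &stats);
-	if (err)
-		return err;
-
-	*rx_frames = stats.page_0.ingress_all_frames;
-	return 0;
-}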
-
-/**
- * Enable auto-negotiation
- */
-#define DPNI_LINK_OPT_AUTONEG		0x0000000000000001ULL
-/**
- * Enable half-duplex mode
- */
-#define DPNI_LINK_OPT_HALF_DUPLEX	0x0000000000000002ULL
-/**
- * Enable pause frames
- */
-#define DPNI_LINK_OPT_PAUSE		0x0000000000000004ULL
-/**
- * Enable asymmetric pause frames
- */
-#define DPNI_LINK_OPT_ASYM_PAUSE	0x0000000000000008ULL
-
-/**
- * struct dpni_link_cfg - Structure representing DPNI link configuration
- * @rate: Rate
- * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
- */
-struct dpni_link_cfg {
-	u32 rate;
-	u64 options;
-};
-
-int dpni_set_link_cfg(struct fsl_mc_io			*mc_io,
-		      u32				cmd_flags,
-		      u16				token,
-		      const struct dpni_link_cfg	*cfg);
-
-/**
- * struct dpni_link_state - Structure representing DPNI link state
- * @rate: Rate
- * @options: Mask of available options; use 'DPNI_LINK_OPT_<X>' values
- * @up: Link state; '0' for down, '1' for up
- */
-struct dpni_link_state {
-	u32	rate;
-	u64	options;
-	int	up;
-};
-
-int dpni_get_link_state(struct fsl_mc_io	*mc_io,
-			u32			cmd_flags,
-			u16			token,
-			struct dpni_link_state	*state);
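-
-/*
- * Usage sketch, not part of the original API: query the link and report
- * whether it is up, e.g. before toggling the net device carrier state.
- */
-static inline int example_link_is_up(struct fsl_mc_io *mc_io, u16 token)
-{
-	struct dpni_link_state state = { 0 };
-	int err;
-
-	err = dpni_get_link_state(mc_io, 0, token, &state);
-	if (err)
-		return err;
-
-	return state.up;
-}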
-
-int dpni_set_max_frame_length(struct fsl_mc_io	*mc_io,
-			      u32		cmd_flags,
-			      u16		token,
-			      u16		max_frame_length);
-
-int dpni_get_max_frame_length(struct fsl_mc_io	*mc_io,
-			      u32		cmd_flags,
-			      u16		token,
-			      u16		*max_frame_length);
-
-int dpni_set_multicast_promisc(struct fsl_mc_io *mc_io,
-			       u32		cmd_flags,
-			       u16		token,
-			       int		en);
-
-int dpni_get_multicast_promisc(struct fsl_mc_io *mc_io,
-			       u32		cmd_flags,
-			       u16		token,
-			       int		*en);
-
-int dpni_set_unicast_promisc(struct fsl_mc_io	*mc_io,
-			     u32		cmd_flags,
-			     u16		token,
-			     int		en);
-
-int dpni_get_unicast_promisc(struct fsl_mc_io	*mc_io,
-			     u32		cmd_flags,
-			     u16		token,
-			     int		*en);
-
-int dpni_set_primary_mac_addr(struct fsl_mc_io *mc_io,
-			      u32		cmd_flags,
-			      u16		token,
-			      const u8		mac_addr[6]);
-
-int dpni_get_primary_mac_addr(struct fsl_mc_io	*mc_io,
-			      u32		cmd_flags,
-			      u16		token,
-			      u8		mac_addr[6]);
-
-int dpni_get_port_mac_addr(struct fsl_mc_io	*mc_io,
-			   u32			cmd_flags,
-			   u16			token,
-			   u8			mac_addr[6]);
-
-int dpni_add_mac_addr(struct fsl_mc_io	*mc_io,
-		      u32		cmd_flags,
-		      u16		token,
-		      const u8		mac_addr[6]);
-
-int dpni_remove_mac_addr(struct fsl_mc_io	*mc_io,
-			 u32			cmd_flags,
-			 u16			token,
-			 const u8		mac_addr[6]);
-
-int dpni_clear_mac_filters(struct fsl_mc_io	*mc_io,
-			   u32			cmd_flags,
-			   u16			token,
-			   int			unicast,
-			   int			multicast);
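-
-/*
- * Usage sketch, not part of the original API: program the primary MAC
- * address. The address below is illustrative (Freescale OUI 00:04:9f).
- */
-static inline int example_set_mac(struct fsl_mc_io *mc_io, u16 token)
-{
-	const u8 mac[6] = { 0x00, 0x04, 0x9f, 0x01, 0x02, 0x03 };
-
-	return dpni_set_primary_mac_addr(mc_io, 0, token, mac);
-}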
-
-/**
- * enum dpni_dist_mode - DPNI distribution mode
- * @DPNI_DIST_MODE_NONE: No distribution
- * @DPNI_DIST_MODE_HASH: Use hash distribution; only relevant if
- *		the 'DPNI_OPT_DIST_HASH' option was set at DPNI creation
- * @DPNI_DIST_MODE_FS:  Use explicit flow steering; only relevant if
- *	 the 'DPNI_OPT_DIST_FS' option was set at DPNI creation
- */
-enum dpni_dist_mode {
-	DPNI_DIST_MODE_NONE = 0,
-	DPNI_DIST_MODE_HASH = 1,
-	DPNI_DIST_MODE_FS = 2
-};
-
-/**
- * enum dpni_fs_miss_action - DPNI Flow Steering miss action
- * @DPNI_FS_MISS_DROP: In case of no-match, drop the frame
- * @DPNI_FS_MISS_EXPLICIT_FLOWID: In case of no-match, use explicit flow-id
- * @DPNI_FS_MISS_HASH: In case of no-match, distribute using hash
- */
-enum dpni_fs_miss_action {
-	DPNI_FS_MISS_DROP = 0,
-	DPNI_FS_MISS_EXPLICIT_FLOWID = 1,
-	DPNI_FS_MISS_HASH = 2
-};
-
-/**
- * struct dpni_fs_tbl_cfg - Flow Steering table configuration
- * @miss_action: Miss action selection
- * @default_flow_id: Used when 'miss_action = DPNI_FS_MISS_EXPLICIT_FLOWID'
- */
-struct dpni_fs_tbl_cfg {
-	enum dpni_fs_miss_action	miss_action;
-	u16				default_flow_id;
-};
-
-int dpni_prepare_key_cfg(const struct dpkg_profile_cfg *cfg,
-			 u8 *key_cfg_buf);
-
-/**
- * struct dpni_rx_tc_dist_cfg - Rx traffic class distribution configuration
- * @dist_size: Set the distribution size;
- *	supported values: 1,2,3,4,6,7,8,12,14,16,24,28,32,48,56,64,96,
- *	112,128,192,224,256,384,448,512,768,896,1024
- * @dist_mode: Distribution mode
- * @key_cfg_iova: I/O virtual address of 256 bytes DMA-able memory filled with
- *		the extractions to be used for the distribution key, prepared by
- *		calling dpni_prepare_key_cfg(); relevant only when
- *		'dist_mode != DPNI_DIST_MODE_NONE', otherwise it can be '0'
- * @fs_cfg: Flow Steering table configuration; only relevant if
- *		'dist_mode = DPNI_DIST_MODE_FS'
- */
-struct dpni_rx_tc_dist_cfg {
-	u16			dist_size;
-	enum dpni_dist_mode	dist_mode;
-	u64			key_cfg_iova;
-	struct dpni_fs_tbl_cfg	fs_cfg;
-};
-
-int dpni_set_rx_tc_dist(struct fsl_mc_io			*mc_io,
-			u32					cmd_flags,
-			u16					token,
-			u8					tc_id,
-			const struct dpni_rx_tc_dist_cfg	*cfg);
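-
-/*
- * Usage sketch, not part of the original API: enable hash distribution
- * across 8 queues on traffic class 0. Assumes 'dev' is the DPNI's struct
- * device, 'cfg' holds the desired dpkg extractions, and that linux/slab.h
- * and linux/dma-mapping.h are available; the 256-byte buffer size matches
- * the key_cfg_iova requirement documented above.
- */
-static inline int example_set_hash(struct fsl_mc_io *mc_io, u16 token,
-				   struct device *dev,
-				   const struct dpkg_profile_cfg *cfg)
-{
-	struct dpni_rx_tc_dist_cfg dist_cfg = { 0 };
-	u8 *key_buf;
-	int err;
-
-	key_buf = kzalloc(256, GFP_KERNEL);
-	if (!key_buf)
-		return -ENOMEM;
-
-	err = dpni_prepare_key_cfg(cfg, key_buf);
-	if (err)
-		goto free_buf;
-
-	dist_cfg.key_cfg_iova = dma_map_single(dev, key_buf, 256,
-					       DMA_TO_DEVICE);
-	if (dma_mapping_error(dev, dist_cfg.key_cfg_iova)) {
-		err = -ENOMEM;
-		goto free_buf;
-	}
-
-	dist_cfg.dist_size = 8;
-	dist_cfg.dist_mode = DPNI_DIST_MODE_HASH;
-
-	err = dpni_set_rx_tc_dist(mc_io, 0, token, 0, &dist_cfg);
-	dma_unmap_single(dev, dist_cfg.key_cfg_iova, 256, DMA_TO_DEVICE);
-free_buf:
-	kfree(key_buf);
-	return err;
-}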
-
-/**
- * enum dpni_dest - DPNI destination types
- * @DPNI_DEST_NONE: Unassigned destination; The queue is set in parked mode and
- *		does not generate FQDAN notifications; user is expected to
- *		dequeue from the queue based on polling or other user-defined
- *		method
- * @DPNI_DEST_DPIO: The queue is set in schedule mode and generates FQDAN
- *		notifications to the specified DPIO; user is expected to dequeue
- *		from the queue only after notification is received
- * @DPNI_DEST_DPCON: The queue is set in schedule mode and does not generate
- *		FQDAN notifications, but is connected to the specified DPCON
- *		object; user is expected to dequeue from the DPCON channel
- */
-enum dpni_dest {
-	DPNI_DEST_NONE = 0,
-	DPNI_DEST_DPIO = 1,
-	DPNI_DEST_DPCON = 2
-};
-
-/**
- * struct dpni_queue - Queue structure
- * @destination: Destination structure
- * @destination.id: ID of the destination, only relevant if DEST_TYPE is > 0.
- *	Identifies either a DPIO or a DPCON object.
- *	Not relevant for Tx queues.
- * @destination.type:	May be one of the following:
- *	0 - No destination, queue can be manually
- *		queried, but will not push traffic or
- *		notifications to a DPIO;
- *	1 - The destination is a DPIO. When traffic
- *		becomes available in the queue a FQDAN
- *		(FQ data available notification) will be
- *		generated to selected DPIO;
- *	2 - The destination is a DPCON. The queue is
- *		associated with a DPCON object for the
- *		purpose of scheduling between multiple
- *		queues. The DPCON may be independently
- *		configured to generate notifications.
- *		Not relevant for Tx queues.
- * @destination.hold_active: Hold active, keeps the queue scheduled in a
- *	DPIO for longer during dequeue to reduce the spread of traffic.
- *	Only relevant if queues are not affined to a single DPIO.
- * @user_context: User data, presented to the user along with any frames
- *	from this queue. Not relevant for Tx queues.
- * @flc: FD Flow Context structure
- * @flc.value: Default FLC value for traffic dequeued from
- *      this queue.  Please check description of FD
- *      structure for more information.
- *      Note that FLC values set using dpni_add_fs_entry,
- *      if any, take precedence over values per queue.
- * @flc.stash_control: Boolean, indicates whether the 6 least
- *      significant bits are used for stash control.  If set, the 6
- *      least significant bits in value are interpreted as follows:
- *      - bits 0-1: indicates the number of 64 byte units of context
- *      that are stashed.  FLC value is interpreted as a memory address
- *      in this case, excluding the 6 LS bits.
- *      - bits 2-3: indicates the number of 64 byte units of frame
- *      annotation to be stashed.  Annotation is placed at FD[ADDR].
- *      - bits 4-5: indicates the number of 64 byte units of frame
- *      data to be stashed.  Frame data is placed at FD[ADDR] +
- *      FD[OFFSET].
- *      For more details check the Frame Descriptor section in the
- *      hardware documentation.
- */
-struct dpni_queue {
-	struct {
-		u16 id;
-		enum dpni_dest type;
-		char hold_active;
-		u8 priority;
-	} destination;
-	u64 user_context;
-	struct {
-		u64 value;
-		char stash_control;
-	} flc;
-};
-
-/**
- * struct dpni_queue_id - Queue identification, used for enqueue commands
- *			or queue control
- * @fqid: FQID used for enqueueing to and/or configuring this specific FQ
- * @qdbin: Queueing bin, used to enqueue using QDID, QDBIN, QPRI. Only relevant
- *		for Tx queues.
- */
-struct dpni_queue_id {
-	u32 fqid;
-	u16 qdbin;
-};
-
-/**
- * Select to modify the user's context associated with the queue
- */
-#define DPNI_QUEUE_OPT_USER_CTX		0x00000001
-/**
- * Select to modify the queue's destination
- */
-#define DPNI_QUEUE_OPT_DEST		0x00000002
-/**
- * Select to modify the FD FLC parameters
- */
-#define DPNI_QUEUE_OPT_FLC		0x00000004
-/**
- * Select to modify the queue's hold-active mode
- */
-#define DPNI_QUEUE_OPT_HOLD_ACTIVE	0x00000008
-
-int dpni_set_queue(struct fsl_mc_io	*mc_io,
-		   u32			cmd_flags,
-		   u16			token,
-		   enum dpni_queue_type	qtype,
-		   u8			tc,
-		   u8			index,
-		   u8			options,
-		   const struct dpni_queue *queue);
-
-int dpni_get_queue(struct fsl_mc_io	*mc_io,
-		   u32			cmd_flags,
-		   u16			token,
-		   enum dpni_queue_type	qtype,
-		   u8			tc,
-		   u8			index,
-		   struct dpni_queue	*queue,
-		   struct dpni_queue_id	*qid);
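-
-/*
- * Usage sketch, not part of the original API: point Rx queue 0 of traffic
- * class 0 at a DPIO and attach a driver cookie to it. The DPIO id and the
- * context value are illustrative.
- */
-static inline int example_setup_rx_queue(struct fsl_mc_io *mc_io, u16 token,
-					 u16 dpio_id, u64 drv_ctx)
-{
-	struct dpni_queue queue = { 0 };
-
-	queue.destination.id = dpio_id;
-	queue.destination.type = DPNI_DEST_DPIO;
-	queue.destination.priority = 1;
-	queue.user_context = drv_ctx;
-
-	return dpni_set_queue(mc_io, 0, token, DPNI_QUEUE_RX, 0, 0,
-			      DPNI_QUEUE_OPT_USER_CTX | DPNI_QUEUE_OPT_DEST,
-			      &queue);
-}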
-
-/**
- * enum dpni_congestion_unit - DPNI congestion units
- * @DPNI_CONGESTION_UNIT_BYTES: byte units
- * @DPNI_CONGESTION_UNIT_FRAMES: frame units
- */
-enum dpni_congestion_unit {
-	DPNI_CONGESTION_UNIT_BYTES = 0,
-	DPNI_CONGESTION_UNIT_FRAMES
-};
-
-/**
- * enum dpni_congestion_point - Congestion point types
- * @DPNI_CP_QUEUE: Set taildrop per queue, identified by QUEUE_TYPE, TC and
- *		QUEUE_INDEX
- * @DPNI_CP_GROUP: Set taildrop per queue group. Depending on options used to
- *		define the DPNI this can be either per TC (default) or per
- *		interface (DPNI_OPT_SHARED_CONGESTION set at DPNI create).
- *		QUEUE_INDEX is ignored if this type is used.
- */
-enum dpni_congestion_point {
-	DPNI_CP_QUEUE,
-	DPNI_CP_GROUP,
-};
-
-/**
- * struct dpni_taildrop - Structure representing the taildrop
- * @enable:	Indicates whether the taildrop is active or not.
- * @units:	Indicates the unit of THRESHOLD. Queue taildrop only supports
- *		byte units, so this field is ignored and assumed to be 0 if
- *		CONGESTION_POINT is 0.
- * @threshold:	Threshold value, in units identified by UNITS field. Value 0
- *		cannot be used as a valid taildrop threshold, THRESHOLD must
- *		be > 0 if the taildrop is enabled.
- */
-struct dpni_taildrop {
-	char enable;
-	enum dpni_congestion_unit units;
-	u32 threshold;
-};
-
-int dpni_set_taildrop(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      enum dpni_congestion_point cg_point,
-		      enum dpni_queue_type q_type,
-		      u8 tc,
-		      u8 q_index,
-		      struct dpni_taildrop *taildrop);
-
-int dpni_get_taildrop(struct fsl_mc_io *mc_io,
-		      u32 cmd_flags,
-		      u16 token,
-		      enum dpni_congestion_point cg_point,
-		      enum dpni_queue_type q_type,
-		      u8 tc,
-		      u8 q_index,
-		      struct dpni_taildrop *taildrop);
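-
-/*
- * Usage sketch, not part of the original API: enable a 64 KiB byte-unit
- * taildrop threshold on a single Rx queue. The threshold is illustrative.
- */
-static inline int example_enable_taildrop(struct fsl_mc_io *mc_io, u16 token,
-					  u8 tc, u8 q_index)
-{
-	struct dpni_taildrop td = {
-		.enable = 1,
-		.units = DPNI_CONGESTION_UNIT_BYTES,
-		.threshold = 64 * 1024,
-	};
-
-	return dpni_set_taildrop(mc_io, 0, token, DPNI_CP_QUEUE,
-				 DPNI_QUEUE_RX, tc, q_index, &td);
-}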
-
-/**
- * struct dpni_rule_cfg - Rule configuration for table lookup
- * @key_iova: I/O virtual address of the key (must be in DMA-able memory)
- * @mask_iova: I/O virtual address of the mask (must be in DMA-able memory)
- * @key_size: key and mask size (in bytes)
- */
-struct dpni_rule_cfg {
-	u64	key_iova;
-	u64	mask_iova;
-	u8	key_size;
-};
-
-int dpni_get_api_version(struct fsl_mc_io *mc_io,
-			 u32 cmd_flags,
-			 u16 *major_ver,
-			 u16 *minor_ver);
-
-#endif /* __FSL_DPNI_H */
diff --git a/drivers/staging/fsl-dpaa2/ethernet/ethernet-driver.rst b/drivers/staging/fsl-dpaa2/ethernet/ethernet-driver.rst
deleted file mode 100644
index 90ec940..0000000
--- a/drivers/staging/fsl-dpaa2/ethernet/ethernet-driver.rst
+++ /dev/null
@@ -1,185 +0,0 @@
-.. SPDX-License-Identifier: GPL-2.0
-.. include:: <isonum.txt>
-
-===============================
-DPAA2 Ethernet driver
-===============================
-
-:Copyright: |copy| 2017-2018 NXP
-
-This file provides documentation for the Freescale DPAA2 Ethernet driver.
-
-Supported Platforms
-===================
-This driver provides networking support for Freescale DPAA2 SoCs, e.g.
-LS2080A, LS2088A, LS1088A.
-
-
-Architecture Overview
-=====================
-Unlike regular NICs, in the DPAA2 architecture there is no single hardware block
-representing network interfaces; instead, several separate hardware resources
-work together to provide the networking functionality:
-
-- network interfaces
-- queues, channels
-- buffer pools
-- MAC/PHY
-
-All hardware resources are allocated and configured through the Management
-Complex (MC) portals. MC abstracts most of these resources as DPAA2 objects
-and exposes ABIs through which they can be configured and controlled. A few
-hardware resources, like queues, do not have a corresponding MC object and
-are treated as internal resources of other objects.
-
-For a more detailed description of the DPAA2 architecture and its object
-abstractions see *Documentation/networking/dpaa2/overview.rst*.
-
-Each Linux net device is built on top of a Datapath Network Interface (DPNI)
-object and uses Buffer Pools (DPBPs), I/O Portals (DPIOs) and Concentrators
-(DPCONs).
-
-Configuration interface::
-
-                 -----------------------
-                | DPAA2 Ethernet Driver |
-                 -----------------------
-                     .      .      .
-                     .      .      .
-             . . . . .      .      . . . . . .
-             .              .                .
-             .              .                .
-         ----------     ----------      -----------
-        | DPBP API |   | DPNI API |    | DPCON API |
-         ----------     ----------      -----------
-             .              .                .             software
-    =======  .  ==========  .  ============  .  ===================
-             .              .                .             hardware
-         ------------------------------------------
-        |            MC hardware portals           |
-         ------------------------------------------
-             .              .                .
-             .              .                .
-          ------         ------            -------
-         | DPBP |       | DPNI |          | DPCON |
-          ------         ------            -------
-
-The DPNIs are network interfaces without a direct one-to-one mapping to PHYs.
-DPBPs represent hardware buffer pools. Packet I/O is performed in the context
-of DPCON objects, using DPIO portals for managing and communicating with the
-hardware resources.
-
-Datapath (I/O) interface::
-
-         -----------------------------------------------
-        |           DPAA2 Ethernet Driver               |
-         -----------------------------------------------
-          |          ^        ^         |            |
-          |          |        |         |            |
-   enqueue|   dequeue|   data |  dequeue|       seed |
-    (Tx)  | (Rx, TxC)|  avail.|  request|     buffers|
-          |          |  notify|         |            |
-          |          |        |         |            |
-          V          |        |         V            V
-         -----------------------------------------------
-        |                 DPIO Driver                   |
-         -----------------------------------------------
-          |          |        |         |            |          software
-          |          |        |         |            |  ================
-          |          |        |         |            |          hardware
-         -----------------------------------------------
-        |               I/O hardware portals            |
-         -----------------------------------------------
-          |          ^        ^         |            |
-          |          |        |         |            |
-          |          |        |         V            |
-          V          |    ================           V
-        ----------------------           |      -------------
- queues  ----------------------          |     | Buffer pool |
-          ----------------------         |      -------------
-                   =======================
-                                Channel
-
-Datapath I/O (DPIO) portals provide enqueue and dequeue services, data
-availability notifications and buffer pool management. DPIOs are shared between
-all DPAA2 objects (and implicitly all DPAA2 kernel drivers) that work with data
-frames, but must be affine to the CPUs for the purpose of traffic distribution.
-
-Frames are transmitted and received through hardware frame queues, which can be
-grouped in channels for the purpose of hardware scheduling. The Ethernet driver
-enqueues TX frames on egress queues and, after transmission is complete, a TX
-confirmation frame is sent back to the CPU.
-
-When frames are available on ingress queues, a data availability notification
-is sent to the CPU; notifications are raised per channel, so even if multiple
-queues in the same channel have available frames, only one notification is sent.
-After a channel fires a notification, it must be explicitly rearmed, as
-sketched below.
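-
-The following sketch (not taken verbatim from the driver) shows this pattern
-in a NAPI poll routine, assuming the dpaa2_io_service_rearm() helper exported
-by the dpio driver and a hypothetical per-channel context::
-
-    /* Hypothetical per-channel context; the names are illustrative */
-    struct example_channel {
-        struct dpaa2_io_notification_ctx nctx;
-        struct napi_struct napi;
-    };
-
-    static int example_poll(struct napi_struct *napi, int budget)
-    {
-        struct example_channel *ch;
-        int cleaned;
-
-        ch = container_of(napi, struct example_channel, napi);
-        cleaned = example_consume_frames(ch, budget); /* hypothetical */
-
-        if (cleaned < budget) {
-            napi_complete_done(napi, cleaned);
-            /* notifications stay disabled until the channel is rearmed */
-            dpaa2_io_service_rearm(NULL, &ch->nctx);
-        }
-
-        return cleaned;
-    }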
-
-Each network interface can have multiple Rx, Tx and confirmation queues affined
-to CPUs, and one channel (DPCON) for each CPU that services at least one queue.
-DPCONs are used to distribute ingress traffic to different CPUs via the cores'
-affine DPIOs.
-
-Hardware buffer pools store ingress frame data. Each network interface has a
-privately owned buffer pool, which it seeds with kernel-allocated buffers.
-
-
-DPNIs are decoupled from PHYs; a DPNI can be connected to a PHY through a DPMAC
-object or to another DPNI through an internal link, but the connection is
-managed by MC and completely transparent to the Ethernet driver.
-
-::
-
-     ---------     ---------     ---------
-    | eth if1 |   | eth if2 |   | eth ifn |
-     ---------     ---------     ---------
-          .           .          .
-          .           .          .
-          .           .          .
-         ---------------------------
-        |   DPAA2 Ethernet Driver   |
-         ---------------------------
-          .           .          .
-          .           .          .
-          .           .          .
-       ------      ------      ------            -------
-      | DPNI |    | DPNI |    | DPNI |          | DPMAC |----+
-       ------      ------      ------            -------     |
-         |           |           |                  |        |
-         |           |           |                  |      -----
-          ===========             ==================      | PHY |
-                                                           -----
-
-Creating a Network Interface
-============================
-A net device is created for each DPNI object probed on the MC bus. Each DPNI has
-a number of properties which determine the network interface configuration
-options and associated hardware resources.
-
-DPNI objects (and the other DPAA2 objects needed for a network interface) can
-be added to a container on the MC bus in one of two ways: statically, through
-a Datapath Layout Binary file (DPL) that is parsed by MC at boot time; or
-dynamically, created at runtime via the DPAA2 object APIs.
-
-
-Features & Offloads
-===================
-Hardware checksum offloading is supported for TCP and UDP over IPv4/6 frames.
-The checksum offloads can be independently configured on RX and TX through
-ethtool.
-
-Hardware offload of unicast and multicast MAC filtering is supported on the
-ingress path and permanently enabled.
-
-Scatter-gather frames are supported on both RX and TX paths. On TX, SG support
-is configurable via ethtool; on RX it is always enabled.
-
-The DPAA2 hardware can process jumbo Ethernet frames of up to 10K bytes.
-
-The Ethernet driver defines a static flow hashing scheme that distributes
-traffic based on a 5-tuple key: src IP, dst IP, IP proto, L4 src port,
-L4 dst port. No user configuration is supported for now.
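-
-For reference, the 5-tuple corresponds to a dpkg extraction profile along the
-following lines (a sketch assuming the field definitions from dpkg.h and the
-fsl-mc net.h header)::
-
-    struct dpkg_profile_cfg cfg = { 0 };
-
-    /* { header protocol, field } pairs making up the hash key */
-    static const struct { int prot; int field; } hash_fields[] = {
-        { NET_PROT_IP,  NH_FLD_IP_SRC },
-        { NET_PROT_IP,  NH_FLD_IP_DST },
-        { NET_PROT_IP,  NH_FLD_IP_PROTO },
-        { NET_PROT_UDP, NH_FLD_UDP_PORT_SRC },
-        { NET_PROT_UDP, NH_FLD_UDP_PORT_DST },
-    };
-    int i;
-
-    for (i = 0; i < ARRAY_SIZE(hash_fields); i++) {
-        cfg.extracts[i].type = DPKG_EXTRACT_FROM_HDR;
-        cfg.extracts[i].extract.from_hdr.prot = hash_fields[i].prot;
-        cfg.extracts[i].extract.from_hdr.type = DPKG_FULL_FIELD;
-        cfg.extracts[i].extract.from_hdr.field = hash_fields[i].field;
-        cfg.num_extracts++;
-    }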
-
-Hardware-specific statistics for the network interface, as well as some
-non-standard driver stats, can be consulted through the ethtool -S option.
-- 
2.7.4
