Message-Id: <20070918.151528.84360712.davem@davemloft.net>
Date: Tue, 18 Sep 2007 15:15:28 -0700 (PDT)
From: David Miller <davem@...emloft.net>
To: netdev@...r.kernel.org
CC: Ariel.Hendel@....com, greg.onufer@....com, jeff@...zik.org
Subject: [PATCH]: Preliminary release of Sun Neptune driver
This is a preliminary posting of my Sun Neptune driver against the
net-2.6.24 tree.
The TODO list is very long, but it's mostly small things; the basic
substance of the driver is all there.
Things which are known to be broken:
1) The Niagara-2 frontend is not written yet; I have recently
received the hardware necessary to code that part up.
2) Jumbo MTU. I have to add some code to adjust the RBR sizes when
the MTU goes above 1500; it should be about an afternoon of work.
3) Need to add an API for adding TCAM entries and modifying other
state. However, the driver does have full multicast and
alternate MAC support.
4) Need an API for adding user classification entries, and also
modifying the flow hashing selections.
I anticipate that #3 and #4 will use a new ethtool interface of
some sort, as suggested by Jeff.
Things I know I will change:
1) All the "#if 1" code blocks will disappear or be changed into
controllable logging statements. They are there now to catch
non-fastpath interrupt events etc. for initial driver debugging.
2) All the "#if 0" code blocks will either have users added or
be removed.
Things which I know are not tested:
1) BMAC and bcm5464r support
I simply lack the hardware which can make use of the 10/100Mbit PHY
support.
The locking also needs to be audited, and I plan to add a lockless
interrupt quiesce scheme like the one we use in the tg3 driver. On
the topic of interrupts, I also need to investigate whether it's too
rude to try to use so many MSI-X vectors for a single device, and in
particular whether that can consume all of the MSIs available on
some machines. Currently the driver just tries to acquire as many
as it can.
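
For illustration, a capped variant of that allocation loop might
look like the sketch below. The cap value, the helper name, and the
np->pdev member are all made up here; the only real API used is
pci_enable_msix(), which returns the number of vectors actually
available when it cannot grant the full request:

	/* Sketch only, not in the driver: cap the number of MSI-X
	 * vectors we request instead of grabbing everything.
	 * NIU_MSIX_CAP_SKETCH is an illustrative constant.
	 */
	#define NIU_MSIX_CAP_SKETCH	8

	static int niu_acquire_msix_sketch(struct niu *np,
					   struct msix_entry *entries,
					   int num)
	{
		if (num > NIU_MSIX_CAP_SKETCH)
			num = NIU_MSIX_CAP_SKETCH;

		while (num > 0) {
			int i, err;

			for (i = 0; i < num; i++)
				entries[i].entry = i;

			err = pci_enable_msix(np->pdev, entries, num);
			if (err == 0)
				return num;	/* got all of them */
			if (err < 0)
				return err;	/* hard failure */
			num = err;	/* retry with what's available */
		}
		return -ENOSPC;
	}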
Finally, I need to performance tune this driver. In particular I
am setting the MARK bit on every TX descriptor. I have to do this
because the chip provides no way to interrupt the host when the
TX queue drains completely. So if we get some packets before we
hit the threshold for setting the MARK bit, and then get no future
packets, those packet buffers won't be released, which is totally
illegal because those buffers are charged against sockets, and the
sockets cannot be released until the packet buffers get freed.
My initial idea is to use some kind of timer when we have packets
pending in the TX queue but no MARK bits set yet. As soon as we
set the first MARK bit, we kill the timer. Otherwise the timer
fires and we process the TX completion queue.
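
Roughly what I have in mind is sketched below; the mark_pending
flag, the reclaim_timer field, the rp->np back pointer, and the
timer period are all invented for illustration:

	/* Sketch: fallback timer that reclaims TX buffers when no
	 * MARK interrupt is coming.  Field and helper names here
	 * are illustrative only.
	 */
	static void niu_tx_reclaim_timer(unsigned long __opaque)
	{
		struct tx_ring_info *rp = (struct tx_ring_info *) __opaque;

		/* No MARK bit was ever set on the pending packets,
		 * so process the TX completion state by hand.
		 */
		niu_tx_work(rp->np, rp);
	}

	/* TX path: packets queued, but no MARK bit set yet. */
	if (!rp->mark_pending)
		mod_timer(&rp->reclaim_timer, jiffies + (HZ / 10));

	/* And once a descriptor does get the MARK bit: */
	del_timer(&rp->reclaim_timer);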
On the performance side there is also the issue of how much I
should pre-pull from the page vector data into the linear packet
buffer. Basically this is amortizing all the pskb_may_pull()
calls the stack is going to do anyway. It might be possible to
check out the classification result to figure out how much would
be an optimal pre-pull.
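
As a strawman, the classification result already gives us the
packet type, so the pre-pull could be as simple as the sketch
below (niu_prepull_len() is a made-up helper, and it assumes
linux/tcp.h and linux/udp.h are included):

	/* Sketch: derive a pull length from the RCR packet type so
	 * the stack's later pskb_may_pull() calls all hit the
	 * linear area.
	 */
	static unsigned int niu_prepull_len(int ptype, unsigned int len)
	{
		unsigned int pull = ETH_HLEN + sizeof(struct iphdr);

		if (ptype == RCR_PKT_TYPE_TCP)
			pull += sizeof(struct tcphdr);
		else if (ptype == RCR_PKT_TYPE_UDP)
			pull += sizeof(struct udphdr);

		return min(pull, len);
	}

niu_process_rx_pkt() would then pass that to __pskb_pull_tail()
instead of a fixed NIU_RXPULL_MAX.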
We could also sprinkle some prefetches around the RX path.
The NAPI handling could be significantly improved, especially when
we know that the MSI-X vector points only to RX DMA events of a
specific channel. We could avoid all the silly tests on data
structures, and we would only have to read the V0 and V1 LDG status
values.
Anyways, enough yapping; I should save my fingers for implementing
all of the stuff above :-) This posting is just so people can take
an initial look at the driver.
commit 123f82e3a952726b28c8fb0d6c10d6100e56ff0e
Author: David S. Miller <davem@...set.davemloft.net>
Date: Tue Sep 18 14:58:50 2007 -0700
[NIU]: Add Sun Neptune ethernet driver.
Signed-off-by: David S. Miller <davem@...emloft.net>
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index fd284a9..8a1d21f 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -2638,6 +2638,13 @@ config NETXEN_NIC
help
This enables the support for NetXen's Gigabit Ethernet card.
+config NIU
+ tristate "Sun Neptune 10Gbit Ethernet support"
+ depends on PCI
+ help
+ This enables support for cards based upon Sun's
+ Neptune chipset.
+
config PASEMI_MAC
tristate "PA Semi 1/10Gbit MAC"
depends on PPC64 && PCI
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index 2ab33e8..2706d4d 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -241,3 +241,4 @@ obj-$(CONFIG_NETCONSOLE) += netconsole.o
obj-$(CONFIG_FS_ENET) += fs_enet/
obj-$(CONFIG_NETXEN_NIC) += netxen/
+obj-$(CONFIG_NIU) += niu.o
diff --git a/drivers/net/niu.c b/drivers/net/niu.c
new file mode 100644
index 0000000..6dfcfac
--- /dev/null
+++ b/drivers/net/niu.c
@@ -0,0 +1,7406 @@
+/* niu.c: Neptune ethernet driver.
+ *
+ * Copyright (C) 2007 David S. Miller (davem@...emloft.net)
+ */
+
+#include <linux/module.h>
+#include <linux/init.h>
+#include <linux/pci.h>
+#include <linux/dma-mapping.h>
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+#include <linux/etherdevice.h>
+#include <linux/platform_device.h>
+#include <linux/delay.h>
+#include <linux/bitops.h>
+#include <linux/mii.h>
+#include <linux/if_ether.h>
+#include <linux/if_vlan.h>
+#include <linux/ip.h>
+#include <linux/in.h>
+#include <linux/ipv6.h>
+#include <linux/log2.h>
+#include <linux/jiffies.h>
+#include <linux/crc32.h>
+
+#include <net/flow.h>
+
+#include "niu.h"
+
+#define DRV_MODULE_NAME "niu"
+#define PFX DRV_MODULE_NAME ": "
+#define DRV_MODULE_VERSION "0.06"
+#define DRV_MODULE_RELDATE "September 18, 2007"
+
+static char version[] __devinitdata =
+ DRV_MODULE_NAME ".c:v" DRV_MODULE_VERSION " (" DRV_MODULE_RELDATE ")\n";
+
+MODULE_AUTHOR("David S. Miller (davem@...emloft.net)");
+MODULE_DESCRIPTION("NIU ethernet driver");
+MODULE_LICENSE("GPL");
+MODULE_VERSION(DRV_MODULE_VERSION);
+
+#ifndef DMA_44BIT_MASK
+#define DMA_44BIT_MASK 0x00000fffffffffffULL
+#endif
+
+#ifndef PCI_DEVICE_ID_SUN_NEPTUNE
+#define PCI_DEVICE_ID_SUN_NEPTUNE 0xabcd
+#endif
+
+static struct pci_device_id niu_pci_tbl[] = {
+ {PCI_DEVICE(PCI_VENDOR_ID_SUN, PCI_DEVICE_ID_SUN_NEPTUNE),
+ .driver_data = 0xff},
+ {}
+};
+
+MODULE_DEVICE_TABLE(pci, niu_pci_tbl);
+
+#define NIU_TX_TIMEOUT (5 * HZ)
+
+#define nr64(reg) readq(np->regs + (reg))
+#define nw64(val, reg) writeq((val), np->regs + (reg))
+
+#define nr64_mac(reg) readq(np->mac_regs + (reg))
+#define nw64_mac(val, reg) writeq((val), np->mac_regs + (reg))
+
+#define nr64_ipp(reg) readq(np->regs + np->ipp_off + (reg))
+#define nw64_ipp(val, reg) writeq((val), np->regs + np->ipp_off + (reg))
+
+#define nr64_pcs(reg) readq(np->regs + np->pcs_off + (reg))
+#define nw64_pcs(val, reg) writeq((val), np->regs + np->pcs_off + (reg))
+
+#define nr64_xpcs(reg) readq(np->regs + np->xpcs_off + (reg))
+#define nw64_xpcs(val, reg) writeq((val), np->regs + np->xpcs_off + (reg))
+
+static int niu_debug;
+#define NIU_DEBUG_INTERRUPT 0x00000001
+#define NIU_DEBUG_TX_WORK 0x00000002
+#define NIU_DEBUG_RX_WORK 0x00000004
+#define NIU_DEBUG_POLL 0x00000008
+#define NIU_DEBUG_PROBE 0x00010000
+#define NIU_DEBUG_MDIO 0x00020000
+#define NIU_DEBUG_MII 0x00040000
+#define NIU_DEBUG_INIT_HW 0x00080000
+#define NIU_DEBUG_STOP_HW 0x00100000
+
+module_param(niu_debug, int, 0);
+MODULE_PARM_DESC(niu_debug,
+"NIU bitmapped debugging message enable value:\n"
+" 0x00000001 Log interrupt events\n"
+" 0x00000002 Log TX work\n"
+" 0x00000004 Log RX work\n"
+" 0x00000008 Log NAPI poll\n"
+" 0x00010000 Log device probe events\n"
+" 0x00020000 Log MDIO reads and writes\n"
+" 0x00040000 Log MII reads and writes\n"
+" 0x00080000 Log HW initialization\n"
+" 0x00100000 Log HW shutdown\n"
+);
+
+#define niudbg(TYPE, f, a...) \
+do { if (niu_debug & NIU_DEBUG_##TYPE) \
+ printk(KERN_ERR PFX f, ## a); \
+} while (0)
+
+#define niu_lock_parent(np, flags) \
+ spin_lock_irqsave(&np->parent->lock, flags)
+#define niu_unlock_parent(np, flags) \
+ spin_unlock_irqrestore(&np->parent->lock, flags)
+
+static int niu_wait_bits_clear_mac(struct niu *np, unsigned long reg, u64 bits,
+ int limit, int delay)
+{
+ BUILD_BUG_ON(limit <= 0 || delay < 0);
+ while (--limit >= 0) {
+ u64 val = nr64_mac(reg);
+
+ if (!(val & bits))
+ break;
+ udelay(delay);
+ }
+ if (limit < 0)
+ return -ENODEV;
+ return 0;
+}
+
+static int niu_set_and_wait_clear_mac(struct niu *np, unsigned long reg,
+ u64 bits, int limit, int delay,
+ const char *reg_name)
+{
+ int err;
+
+ nw64_mac(bits, reg);
+ err = niu_wait_bits_clear_mac(np, reg, bits, limit, delay);
+ if (err)
+ printk(KERN_ERR PFX "%s: bits (%llx) of register %s "
+ "would not clear, val[%llx]\n",
+ np->dev->name, (unsigned long long) bits, reg_name,
+ (unsigned long long) nr64_mac(reg));
+ return err;
+}
+
+static int niu_wait_bits_clear_ipp(struct niu *np, unsigned long reg, u64 bits,
+ int limit, int delay)
+{
+ BUILD_BUG_ON(limit <= 0 || delay < 0);
+ while (--limit >= 0) {
+ u64 val = nr64_ipp(reg);
+
+ if (!(val & bits))
+ break;
+ udelay(delay);
+ }
+ if (limit < 0)
+ return -ENODEV;
+ return 0;
+}
+
+static int niu_set_and_wait_clear_ipp(struct niu *np, unsigned long reg,
+ u64 bits, int limit, int delay,
+ const char *reg_name)
+{
+ int err;
+ u64 val;
+
+ val = nr64_ipp(reg);
+ val |= bits;
+ nw64_ipp(val, reg);
+
+ err = niu_wait_bits_clear_ipp(np, reg, bits, limit, delay);
+ if (err)
+ printk(KERN_ERR PFX "%s: bits (%llx) of register %s "
+ "would not clear, val[%llx]\n",
+ np->dev->name, (unsigned long long) bits, reg_name,
+ (unsigned long long) nr64_ipp(reg));
+ return err;
+}
+
+static int niu_wait_bits_clear(struct niu *np, unsigned long reg, u64 bits,
+ int limit, int delay)
+{
+ BUILD_BUG_ON(limit <= 0 || delay < 0);
+ while (--limit >= 0) {
+ u64 val = nr64(reg);
+
+ if (!(val & bits))
+ break;
+ udelay(delay);
+ }
+ if (limit < 0)
+ return -ENODEV;
+ return 0;
+}
+
+static int niu_set_and_wait_clear(struct niu *np, unsigned long reg,
+ u64 bits, int limit, int delay,
+ const char *reg_name)
+{
+ int err;
+
+ nw64(bits, reg);
+ err = niu_wait_bits_clear(np, reg, bits, limit, delay);
+ if (err)
+ printk(KERN_ERR PFX "%s: bits (%llx) of register %s "
+ "would not clear, val[%llx]\n",
+ np->dev->name, (unsigned long long) bits, reg_name,
+ (unsigned long long) nr64(reg));
+ return err;
+}
+
+static void niu_ldg_rearm(struct niu *np, struct niu_ldg *lp, int on)
+{
+ u64 val = (u64) lp->timer;
+
+ if (on)
+ val |= LDG_IMGMT_ARM;
+
+ nw64(val, LDG_IMGMT(lp->ldg_num));
+}
+
+static int niu_ldn_irq_enable(struct niu *np, int ldn, int on)
+{
+ unsigned long mask_reg, bits;
+ u64 val;
+
+ if (ldn < 0 || ldn > LDN_MAX)
+ return -EINVAL;
+
+ if (ldn < 64) {
+ mask_reg = LD_IM0(ldn);
+ bits = LD_IM0_MASK;
+ } else {
+ mask_reg = LD_IM1(ldn - 64);
+ bits = LD_IM1_MASK;
+ }
+
+ val = nr64(mask_reg);
+ if (on)
+ val &= ~bits;
+ else
+ val |= bits;
+ nw64(val, mask_reg);
+
+ return 0;
+}
+
+static int niu_enable_ldn_in_ldg(struct niu *np, struct niu_ldg *lp, int on)
+{
+ struct niu_parent *parent = np->parent;
+ int i;
+
+ for (i = 0; i <= LDN_MAX; i++) {
+ int err;
+
+ if (parent->ldg_map[i] != lp->ldg_num)
+ continue;
+
+ err = niu_ldn_irq_enable(np, i, on);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+static int niu_enable_interrupts(struct niu *np, int on)
+{
+ int i;
+
+ for (i = 0; i < np->num_ldg; i++) {
+ struct niu_ldg *lp = &np->ldg[i];
+ int err;
+
+ err = niu_enable_ldn_in_ldg(np, lp, on);
+ if (err)
+ return err;
+ }
+ for (i = 0; i < np->num_ldg; i++)
+ niu_ldg_rearm(np, &np->ldg[i], on);
+
+ return 0;
+}
+
+static u32 phy_encode(u32 type, int port)
+{
+ return (type << (port * 2));
+}
+
+static u32 phy_decode(u32 val, int port)
+{
+ return (val >> (port * 2)) & PORT_TYPE_MASK;
+}
+
+static int mdio_wait(struct niu *np)
+{
+ int limit = 1000;
+ u64 val;
+
+ while (--limit > 0) {
+ val = nr64(MIF_FRAME_OUTPUT);
+ if ((val >> MIF_FRAME_OUTPUT_TA_SHIFT) & 0x1)
+ return val & MIF_FRAME_OUTPUT_DATA;
+
+ udelay(10);
+ }
+
+ return -ENODEV;
+}
+
+static int mdio_read(struct niu *np, int port, int dev, int reg)
+{
+ int err;
+
+ nw64(MDIO_ADDR_OP(port, dev, reg), MIF_FRAME_OUTPUT);
+ err = mdio_wait(np);
+ if (err < 0)
+ return err;
+
+ nw64(MDIO_READ_OP(port, dev), MIF_FRAME_OUTPUT);
+ return mdio_wait(np);
+}
+
+static int mdio_write(struct niu *np, int port, int dev, int reg, int data)
+{
+ int err;
+
+ nw64(MDIO_ADDR_OP(port, dev, reg), MIF_FRAME_OUTPUT);
+ err = mdio_wait(np);
+ if (err < 0)
+ return err;
+
+ nw64(MDIO_WRITE_OP(port, dev, data), MIF_FRAME_OUTPUT);
+ err = mdio_wait(np);
+ if (err < 0)
+ return err;
+
+ return 0;
+}
+
+static int mii_read(struct niu *np, int port, int reg)
+{
+ nw64(MII_READ_OP(port, reg), MIF_FRAME_OUTPUT);
+ return mdio_wait(np);
+}
+
+static int mii_write(struct niu *np, int port, int reg, int data)
+{
+ int err;
+
+ nw64(MII_WRITE_OP(port, reg, data), MIF_FRAME_OUTPUT);
+ err = mdio_wait(np);
+ if (err < 0)
+ return err;
+
+ return 0;
+}
+
+static int esr2_set_tx_cfg(struct niu *np, unsigned long channel, u32 val)
+{
+ int err;
+
+ err = mdio_write(np, np->port, NIU_ESR2_DEV_ADDR,
+ ESR2_TI_PLL_TX_CFG_L(channel),
+ val & 0xffff);
+ if (!err)
+ err = mdio_write(np, np->port, NIU_ESR2_DEV_ADDR,
+ ESR2_TI_PLL_TX_CFG_H(channel),
+ val >> 16);
+ return err;
+}
+
+static int esr2_set_rx_cfg(struct niu *np, unsigned long channel, u32 val)
+{
+ int err;
+
+ err = mdio_write(np, np->port, NIU_ESR2_DEV_ADDR,
+ ESR2_TI_PLL_RX_CFG_L(channel),
+ val & 0xffff);
+ if (!err)
+ err = mdio_write(np, np->port, NIU_ESR2_DEV_ADDR,
+ ESR2_TI_PLL_RX_CFG_H(channel),
+ val >> 16);
+ return err;
+}
+
+/* Mode is always 10G fiber. */
+static int serdes_init_niu(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+ u32 tx_cfg, rx_cfg;
+ unsigned long i;
+
+ tx_cfg = (PLL_TX_CFG_ENTX | PLL_TX_CFG_SWING_1375MV);
+ rx_cfg = (PLL_RX_CFG_ENRX | PLL_RX_CFG_TERM_0P8VDDT |
+ PLL_RX_CFG_ALIGN_ENA | PLL_RX_CFG_LOS_LTHRESH |
+ PLL_RX_CFG_EQ_LP_ADAPTIVE);
+
+ if (lp->loopback_mode == LOOPBACK_PHY) {
+ u16 test_cfg = PLL_TEST_CFG_LOOPBACK_CML_DIS;
+
+ mdio_write(np, np->port, NIU_ESR2_DEV_ADDR,
+ ESR2_TI_PLL_TEST_CFG_L, test_cfg);
+
+ tx_cfg |= PLL_TX_CFG_ENTEST;
+ rx_cfg |= PLL_RX_CFG_ENTEST;
+ }
+
+ for (i = 0; i < 4; i++) {
+ int err = esr2_set_tx_cfg(np, i, tx_cfg);
+ if (err)
+ return err;
+ }
+
+ for (i = 0; i < 4; i++) {
+ int err = esr2_set_rx_cfg(np, i, rx_cfg);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int esr_read_rxtx_ctrl(struct niu *np, unsigned long chan, u32 *val)
+{
+ int err;
+
+ err = mdio_read(np, np->port, NIU_ESR_DEV_ADDR, ESR_RXTX_CTRL_L(chan));
+ if (err >= 0) {
+ *val = (err & 0xffff);
+ err = mdio_read(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_CTRL_H(chan));
+ if (err >= 0)
+ *val |= ((err & 0xffff) << 16);
+ err = 0;
+ }
+ return err;
+}
+
+static int esr_read_glue0(struct niu *np, unsigned long chan, u32 *val)
+{
+ int err;
+
+ err = mdio_read(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_GLUE_CTRL0_L(chan));
+ if (err >= 0) {
+ *val = (err & 0xffff);
+ err = mdio_read(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_GLUE_CTRL0_H(chan));
+ if (err >= 0) {
+ *val |= ((err & 0xffff) << 16);
+ err = 0;
+ }
+ }
+ return err;
+}
+
+static int esr_read_reset(struct niu *np, u32 *val)
+{
+ int err;
+
+ err = mdio_read(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_RESET_CTRL_L);
+ if (err >= 0) {
+ *val = (err & 0xffff);
+ err = mdio_read(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_RESET_CTRL_H);
+ if (err >= 0) {
+ *val |= ((err & 0xffff) << 16);
+ err = 0;
+ }
+ }
+ return err;
+}
+
+static int esr_write_rxtx_ctrl(struct niu *np, unsigned long chan, u32 val)
+{
+ int err;
+
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_CTRL_L(chan), val & 0xffff);
+ if (!err)
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_CTRL_H(chan), (val >> 16));
+ return err;
+}
+
+static int esr_write_glue0(struct niu *np, unsigned long chan, u32 val)
+{
+ int err;
+
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_GLUE_CTRL0_L(chan), val & 0xffff);
+ if (!err)
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_GLUE_CTRL0_H(chan), (val >> 16));
+ return err;
+}
+
+static int esr_reset(struct niu *np)
+{
+ u32 reset;
+ int err;
+
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_RESET_CTRL_L, 0x0000);
+ if (err)
+ return err;
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_RESET_CTRL_H, 0xffff);
+ if (err)
+ return err;
+ udelay(200);
+
+
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_RESET_CTRL_L, 0xffff);
+ if (err)
+ return err;
+ udelay(200);
+
+ err = mdio_write(np, np->port, NIU_ESR_DEV_ADDR,
+ ESR_RXTX_RESET_CTRL_H, 0x0000);
+ if (err)
+ return err;
+ udelay(200);
+
+ err = esr_read_reset(np, &reset);
+ if (err)
+ return err;
+ if (reset != 0) {
+ printk(KERN_ERR PFX "Port %u ESR_RESET did not clear [%08x]\n",
+ np->port, reset);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int serdes_init_10g(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+ unsigned long ctrl_reg, test_cfg_reg, i;
+ u64 ctrl_val, test_cfg_val, sig, mask, val;
+ int err;
+
+ switch (np->port) {
+ case 0:
+ ctrl_reg = ENET_SERDES_0_CTRL_CFG;
+ test_cfg_reg = ENET_SERDES_0_TEST_CFG;
+ break;
+ case 1:
+ ctrl_reg = ENET_SERDES_1_CTRL_CFG;
+ test_cfg_reg = ENET_SERDES_1_TEST_CFG;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ ctrl_val = (ENET_SERDES_CTRL_SDET_0 |
+ ENET_SERDES_CTRL_SDET_1 |
+ ENET_SERDES_CTRL_SDET_2 |
+ ENET_SERDES_CTRL_SDET_3 |
+ (0x5 << ENET_SERDES_CTRL_EMPH_0_SHIFT) |
+ (0x5 << ENET_SERDES_CTRL_EMPH_1_SHIFT) |
+ (0x5 << ENET_SERDES_CTRL_EMPH_2_SHIFT) |
+ (0x5 << ENET_SERDES_CTRL_EMPH_3_SHIFT) |
+ (0x1 << ENET_SERDES_CTRL_LADJ_0_SHIFT) |
+ (0x1 << ENET_SERDES_CTRL_LADJ_1_SHIFT) |
+ (0x1 << ENET_SERDES_CTRL_LADJ_2_SHIFT) |
+ (0x1 << ENET_SERDES_CTRL_LADJ_3_SHIFT));
+ test_cfg_val = 0;
+
+ if (lp->loopback_mode == LOOPBACK_PHY) {
+ test_cfg_val |= ((ENET_TEST_MD_PAD_LOOPBACK <<
+ ENET_SERDES_TEST_MD_0_SHIFT) |
+ (ENET_TEST_MD_PAD_LOOPBACK <<
+ ENET_SERDES_TEST_MD_1_SHIFT) |
+ (ENET_TEST_MD_PAD_LOOPBACK <<
+ ENET_SERDES_TEST_MD_2_SHIFT) |
+ (ENET_TEST_MD_PAD_LOOPBACK <<
+ ENET_SERDES_TEST_MD_3_SHIFT));
+ }
+
+ nw64(ctrl_val, ctrl_reg);
+ nw64(test_cfg_val, test_cfg_reg);
+
+ for (i = 0; i < 4; i++) {
+ u32 rxtx_ctrl, glue0;
+ int err;
+
+ err = esr_read_rxtx_ctrl(np, i, &rxtx_ctrl);
+ if (err)
+ return err;
+ err = esr_read_glue0(np, i, &glue0);
+ if (err)
+ return err;
+
+ rxtx_ctrl &= ~(ESR_RXTX_CTRL_VMUXLO);
+ rxtx_ctrl |= (ESR_RXTX_CTRL_ENSTRETCH |
+ (2 << ESR_RXTX_CTRL_VMUXLO_SHIFT));
+
+ glue0 &= ~(ESR_GLUE_CTRL0_SRATE |
+ ESR_GLUE_CTRL0_THCNT |
+ ESR_GLUE_CTRL0_BLTIME);
+ glue0 |= (ESR_GLUE_CTRL0_RXLOSENAB |
+ (0xf << ESR_GLUE_CTRL0_SRATE_SHIFT) |
+ (0xff << ESR_GLUE_CTRL0_THCNT_SHIFT) |
+ (BLTIME_300_CYCLES <<
+ ESR_GLUE_CTRL0_BLTIME_SHIFT));
+
+ err = esr_write_rxtx_ctrl(np, i, rxtx_ctrl);
+ if (err)
+ return err;
+ err = esr_write_glue0(np, i, glue0);
+ if (err)
+ return err;
+ }
+
+ err = esr_reset(np);
+ if (err)
+ return err;
+
+ sig = nr64(ESR_INT_SIGNALS);
+ switch (np->port) {
+ case 0:
+ mask = ESR_INT_SIGNALS_P0_BITS;
+ val = (ESR_INT_SRDY0_P0 |
+ ESR_INT_DET0_P0 |
+ ESR_INT_XSRDY_P0 |
+ ESR_INT_XDP_P0_CH3 |
+ ESR_INT_XDP_P0_CH2 |
+ ESR_INT_XDP_P0_CH1 |
+ ESR_INT_XDP_P0_CH0);
+ break;
+
+ case 1:
+ mask = ESR_INT_SIGNALS_P1_BITS;
+ val = (ESR_INT_SRDY0_P1 |
+ ESR_INT_DET0_P1 |
+ ESR_INT_XSRDY_P1 |
+ ESR_INT_XDP_P1_CH3 |
+ ESR_INT_XDP_P1_CH2 |
+ ESR_INT_XDP_P1_CH1 |
+ ESR_INT_XDP_P1_CH0);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ if ((sig & mask) != val) {
+ printk(KERN_ERR PFX "Port %u signal bits [%08lx] are not "
+ "[%08lx]\n", np->port, (sig & mask), val);
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int serdes_init_1g(struct niu *np)
+{
+ u64 val;
+
+ val = nr64(ENET_SERDES_1_PLL_CFG);
+ val &= ~ENET_SERDES_PLL_FBDIV2;
+ switch (np->port) {
+ case 0:
+ val |= ENET_SERDES_PLL_HRATE0;
+ break;
+ case 1:
+ val |= ENET_SERDES_PLL_HRATE1;
+ break;
+ case 2:
+ val |= ENET_SERDES_PLL_HRATE2;
+ break;
+ case 3:
+ val |= ENET_SERDES_PLL_HRATE3;
+ break;
+ default:
+ return -EINVAL;
+ }
+ nw64(val, ENET_SERDES_1_PLL_CFG);
+
+ return 0;
+}
+
+static int bcm8704_reset(struct niu *np)
+{
+ int err, limit;
+
+ err = mdio_read(np, np->phy_addr,
+ BCM8704_PHYXS_DEV_ADDR, MII_BMCR);
+ if (err < 0)
+ return err;
+ err |= BMCR_RESET;
+ err = mdio_write(np, np->phy_addr, BCM8704_PHYXS_DEV_ADDR,
+ MII_BMCR, err);
+ if (err)
+ return err;
+
+ limit = 1000;
+ while (--limit >= 0) {
+ err = mdio_read(np, np->phy_addr,
+ BCM8704_PHYXS_DEV_ADDR, MII_BMCR);
+ if (err < 0)
+ return err;
+ if (!(err & BMCR_RESET))
+ break;
+ }
+ if (limit < 0) {
+ printk(KERN_ERR PFX "Port %u PHY will not reset "
+ "(bmcr=%04x)\n", np->port, (err & 0xffff));
+ return -ENODEV;
+ }
+ return 0;
+}
+
+/* When written, certain PHY registers need to be read back twice
+ * in order for the bits to settle properly.
+ */
+static int bcm8704_user_dev3_readback(struct niu *np, int reg)
+{
+ int err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR, reg);
+ if (err < 0)
+ return err;
+ err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR, reg);
+ if (err < 0)
+ return err;
+ return 0;
+}
+
+static int bcm8704_init_user_dev3(struct niu *np)
+{
+ int err;
+
+ err = mdio_write(np, np->phy_addr,
+ BCM8704_USER_DEV3_ADDR, BCM8704_USER_CONTROL,
+ (USER_CONTROL_OPTXRST_LVL |
+ USER_CONTROL_OPBIASFLT_LVL |
+ USER_CONTROL_OBTMPFLT_LVL |
+ USER_CONTROL_OPPRFLT_LVL |
+ USER_CONTROL_OPTXFLT_LVL |
+ USER_CONTROL_OPRXLOS_LVL |
+ USER_CONTROL_OPRXFLT_LVL |
+ USER_CONTROL_OPTXON_LVL |
+ (0x3f << USER_CONTROL_RES1_SHIFT)));
+ if (err)
+ return err;
+
+ err = mdio_write(np, np->phy_addr,
+ BCM8704_USER_DEV3_ADDR, BCM8704_USER_PMD_TX_CONTROL,
+ (USER_PMD_TX_CTL_XFP_CLKEN |
+ (1 << USER_PMD_TX_CTL_TX_DAC_TXD_SH) |
+ (2 << USER_PMD_TX_CTL_TX_DAC_TXCK_SH) |
+ USER_PMD_TX_CTL_TSCK_LPWREN));
+ if (err)
+ return err;
+
+ err = bcm8704_user_dev3_readback(np, BCM8704_USER_CONTROL);
+ if (err)
+ return err;
+ err = bcm8704_user_dev3_readback(np, BCM8704_USER_PMD_TX_CONTROL);
+ if (err)
+ return err;
+
+ err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR,
+ BCM8704_USER_OPT_DIGITAL_CTRL);
+ if (err < 0)
+ return err;
+ err &= ~USER_ODIG_CTRL_GPIOS;
+ err |= (0x3 << USER_ODIG_CTRL_GPIOS_SHIFT);
+ err = mdio_write(np, np->phy_addr, BCM8704_USER_DEV3_ADDR,
+ BCM8704_USER_OPT_DIGITAL_CTRL, err);
+ if (err)
+ return err;
+
+ /* udelay() cannot handle a full second, use mdelay() instead. */
+ mdelay(1000);
+
+ return 0;
+}
+
+static int xcvr_init_10g(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+ u16 analog_stat0, tx_alarm_status;
+ int err;
+ u64 val;
+
+ val = nr64_mac(XMAC_CONFIG);
+ val &= ~XMAC_CONFIG_LED_POLARITY;
+ val |= XMAC_CONFIG_FORCE_LED_ON;
+ nw64_mac(val, XMAC_CONFIG);
+
+ /* XXX shared resource, lock parent XXX */
+ val = nr64(MIF_CONFIG);
+ val |= MIF_CONFIG_INDIRECT_MODE;
+ nw64(val, MIF_CONFIG);
+
+ err = bcm8704_reset(np);
+ if (err)
+ return err;
+
+ err = bcm8704_init_user_dev3(np);
+ if (err)
+ return err;
+
+ err = mdio_read(np, np->phy_addr, BCM8704_PCS_DEV_ADDR,
+ MII_BMCR);
+ if (err < 0)
+ return err;
+ err &= ~BMCR_LOOPBACK;
+
+ if (lp->loopback_mode == LOOPBACK_MAC)
+ err |= BMCR_LOOPBACK;
+
+ err = mdio_write(np, np->phy_addr, BCM8704_PCS_DEV_ADDR,
+ MII_BMCR, err);
+ if (err)
+ return err;
+
+#if 1
+ err = mdio_read(np, np->phy_addr, BCM8704_PMA_PMD_DEV_ADDR,
+ MII_STAT1000);
+ if (err < 0)
+ return err;
+ printk(KERN_ERR PFX "Port %u PMA_PMD(MII_STAT1000) [%04x]\n",
+ np->port, err);
+
+ err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR, 0x20);
+ if (err < 0)
+ return err;
+ printk(KERN_ERR PFX "Port %u USER_DEV3(0x20) [%04x]\n",
+ np->port, err);
+
+ err = mdio_read(np, np->phy_addr, BCM8704_PHYXS_DEV_ADDR,
+ MII_NWAYTEST);
+ if (err < 0)
+ return err;
+ printk(KERN_ERR PFX "Port %u PHYXS(MII_NWAYTEST) [%04x]\n",
+ np->port, err);
+#endif
+
+ /* XXX dig this out it might not be so useful XXX */
+ err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR,
+ BCM8704_USER_ANALOG_STATUS0);
+ if (err < 0)
+ return err;
+ err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR,
+ BCM8704_USER_ANALOG_STATUS0);
+ if (err < 0)
+ return err;
+ analog_stat0 = err;
+
+ err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR,
+ BCM8704_USER_TX_ALARM_STATUS);
+ if (err < 0)
+ return err;
+ err = mdio_read(np, np->phy_addr, BCM8704_USER_DEV3_ADDR,
+ BCM8704_USER_TX_ALARM_STATUS);
+ if (err < 0)
+ return err;
+ tx_alarm_status = err;
+
+ if (analog_stat0 != 0x03fc) {
+ if ((analog_stat0 == 0x43bc) && (tx_alarm_status != 0)) {
+ printk(KERN_ERR PFX "Port %u cable not connected "
+ "or bad cable.\n", np->port);
+ } else if (analog_stat0 == 0x639c) {
+ printk(KERN_ERR PFX "Port %u optical module is bad "
+ "or missing.\n", np->port);
+ }
+ }
+
+ return 0;
+}
+
+static int mii_reset(struct niu *np)
+{
+ int limit, err;
+
+ err = mii_write(np, np->phy_addr, MII_BMCR, BMCR_RESET);
+
+ limit = 1000;
+ while (--limit >= 0) {
+ err = mii_read(np, np->phy_addr, MII_BMCR);
+ if (err < 0)
+ return err;
+ if (!(err & BMCR_RESET))
+ break;
+ }
+ if (limit < 0) {
+ printk(KERN_ERR PFX "Port %u MII would not reset, "
+ "bmcr[%04x]\n", np->port, err);
+ return -ENODEV;
+ }
+
+ err = mii_write(np, np->phy_addr, MII_BMCR, 0);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int mii_init_common(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+ u16 bmcr, bmsr, adv, estat;
+ int err;
+
+ err = mii_reset(np);
+ if (err)
+ return err;
+
+ err = mii_read(np, np->phy_addr, MII_BMSR);
+ if (err < 0)
+ return err;
+ bmsr = err;
+
+ estat = 0;
+ if (bmsr & BMSR_ESTATEN) {
+ err = mii_read(np, np->phy_addr, MII_ESTATUS);
+ if (err < 0)
+ return err;
+ estat = err;
+ }
+
+ bmcr = 0;
+
+ if (lp->loopback_mode == LOOPBACK_MAC) {
+ bmcr |= BMCR_LOOPBACK;
+ bmcr &= ~BMCR_ANENABLE;
+ if (lp->active_speed == SPEED_1000)
+ bmcr |= BMCR_SPEED1000;
+ if (lp->active_duplex == DUPLEX_FULL)
+ bmcr |= BMCR_FULLDPLX;
+ } else {
+ bmcr &= ~BMCR_LOOPBACK;
+ }
+ if (lp->loopback_mode == LOOPBACK_PHY) {
+ u16 aux;
+
+ aux = (BCM5464R_AUX_CTL_EXT_LB |
+ BCM5464R_AUX_CTL_WRITE_1);
+ err = mii_write(np, np->phy_addr, BCM5464R_AUX_CTL, aux);
+ if (err)
+ return err;
+ }
+
+ /* XXX configurable XXX */
+ adv = 0;
+ if (bmsr & BMSR_10HALF)
+ adv |= ADVERTISE_10HALF;
+ if (bmsr & BMSR_10FULL)
+ adv |= ADVERTISE_10FULL;
+ if (bmsr & BMSR_100HALF)
+ adv |= ADVERTISE_100HALF;
+ if (bmsr & BMSR_100FULL)
+ adv |= ADVERTISE_100FULL;
+ if (bmsr & BMSR_100BASE4)
+ adv |= ADVERTISE_100BASE4;
+ adv |= (ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
+ err = mii_write(np, np->phy_addr, MII_ADVERTISE, adv);
+ if (err)
+ return err;
+
+ if (bmsr & BMSR_ESTATEN) {
+ u16 ctrl1000;
+
+ ctrl1000 = (BCM5464R_CTRL1000_AS_MASTER |
+ BCM5464R_CTRL1000_ENABLE_AS_MASTER);
+ if (estat & ESTATUS_1000_TFULL)
+ ctrl1000 |= ADVERTISE_1000FULL;
+ if (estat & ESTATUS_1000_THALF)
+ ctrl1000 |= ADVERTISE_1000HALF;
+ err = mii_write(np, np->phy_addr, MII_CTRL1000, ctrl1000);
+ if (err)
+ return err;
+ }
+ bmcr |= (BMCR_ANENABLE | BMCR_ANRESTART);
+
+ err = mii_write(np, np->phy_addr, MII_BMCR, bmcr);
+ if (err)
+ return err;
+
+ err = mii_read(np, np->phy_addr, MII_BMCR);
+ if (err < 0)
+ return err;
+ err = mii_read(np, np->phy_addr, MII_BMSR);
+ if (err < 0)
+ return err;
+#if 1
+ printk(KERN_ERR PFX "Port %u after MII init bmcr[%04x] bmsr[%04x]\n",
+ np->port, bmcr, bmsr);
+#endif
+
+ return 0;
+}
+
+static int xcvr_init_1g(struct niu *np)
+{
+ u64 val;
+
+ /* XXX shared resource, lock parent XXX */
+ val = nr64(MIF_CONFIG);
+ val &= ~MIF_CONFIG_INDIRECT_MODE;
+ nw64(val, MIF_CONFIG);
+
+ return mii_init_common(np);
+}
+
+static int niu_xcvr_init(struct niu *np)
+{
+ const struct niu_phy_ops *ops = np->phy_ops;
+ int err;
+
+ err = 0;
+ if (ops->xcvr_init)
+ err = ops->xcvr_init(np);
+
+ return err;
+}
+
+static int niu_serdes_init(struct niu *np)
+{
+ const struct niu_phy_ops *ops = np->phy_ops;
+ int err;
+
+ err = 0;
+ if (ops->serdes_init)
+ err = ops->serdes_init(np);
+
+ return err;
+}
+
+static void niu_init_xif(struct niu *);
+
+static int niu_link_status_common(struct niu *np, int link_up)
+{
+ struct niu_link_config *lp = &np->link_config;
+ struct net_device *dev = np->dev;
+ unsigned long flags;
+
+ if (!netif_carrier_ok(dev) && link_up) {
+ printk(KERN_INFO PFX "%s: Link is up at %s, %s duplex\n",
+ dev->name,
+ (lp->active_speed == SPEED_10000 ?
+ "10Gb/sec" :
+ (lp->active_speed == SPEED_1000 ?
+ "1Gb/sec" :
+ (lp->active_speed == SPEED_100 ?
+ "100Mbit/sec" : "10Mbit/sec"))),
+ (lp->active_duplex == DUPLEX_FULL ?
+ "full" : "half"));
+
+ spin_lock_irqsave(&np->lock, flags);
+ niu_init_xif(np);
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ netif_carrier_on(dev);
+ } else if (netif_carrier_ok(dev) && !link_up) {
+ printk(KERN_WARNING PFX "%s: Link is down\n", dev->name);
+ netif_carrier_off(dev);
+ }
+
+ return 0;
+}
+
+static int link_status_10g(struct niu *np, int *link_up_p)
+{
+ unsigned long flags;
+ int err, link_up;
+
+ if (np->link_config.loopback_mode != LOOPBACK_DISABLED)
+ return -EINVAL;
+
+ link_up = 0;
+
+ spin_lock_irqsave(&np->lock, flags);
+
+ err = mdio_read(np, np->phy_addr, BCM8704_PMA_PMD_DEV_ADDR,
+ BCM8704_PMD_RCV_SIGDET);
+ if (err < 0)
+ goto out;
+ if (!(err & PMD_RCV_SIGDET_GLOBAL)) {
+ err = 0;
+ goto out;
+ }
+
+ err = mdio_read(np, np->phy_addr, BCM8704_PCS_DEV_ADDR,
+ BCM8704_PCS_10G_R_STATUS);
+ if (err < 0)
+ goto out;
+ if (!(err & PCS_10G_R_STATUS_BLK_LOCK)) {
+ err = 0;
+ goto out;
+ }
+
+ err = mdio_read(np, np->phy_addr, BCM8704_PHYXS_DEV_ADDR,
+ BCM8704_PHYXS_XGXS_LANE_STAT);
+ if (err < 0)
+ goto out;
+
+ if (err != (PHYXS_XGXS_LANE_STAT_ALINGED |
+ PHYXS_XGXS_LANE_STAT_MAGIC |
+ PHYXS_XGXS_LANE_STAT_LANE3 |
+ PHYXS_XGXS_LANE_STAT_LANE2 |
+ PHYXS_XGXS_LANE_STAT_LANE1 |
+ PHYXS_XGXS_LANE_STAT_LANE0)) {
+ err = 0;
+ goto out;
+ }
+
+ link_up = 1;
+ np->link_config.active_speed = SPEED_10000;
+ np->link_config.active_duplex = DUPLEX_FULL;
+ err = 0;
+
+out:
+ /* Always drop the lock, even when an MDIO access fails. */
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ *link_up_p = link_up;
+ return err;
+}
+
+static int link_status_1g(struct niu *np, int *link_up_p)
+{
+ u16 current_speed, bmsr;
+ unsigned long flags;
+ u8 current_duplex;
+ int err, link_up;
+
+ if (np->link_config.loopback_mode != LOOPBACK_DISABLED)
+ return -EINVAL;
+
+ link_up = 0;
+ current_speed = SPEED_INVALID;
+ current_duplex = DUPLEX_INVALID;
+
+ spin_lock_irqsave(&np->lock, flags);
+
+ err = mii_read(np, np->phy_addr, MII_BMSR);
+ if (err < 0)
+ goto out;
+
+ bmsr = err;
+ if (bmsr & BMSR_LSTATUS) {
+ u16 adv, lpa, common, estat;
+
+ err = mii_read(np, np->phy_addr, MII_ADVERTISE);
+ if (err < 0)
+ goto out;
+ adv = err;
+
+ err = mii_read(np, np->phy_addr, MII_LPA);
+ if (err < 0)
+ goto out;
+ lpa = err;
+
+ common = adv & lpa;
+
+ err = mii_read(np, np->phy_addr, MII_ESTATUS);
+ if (err < 0)
+ goto out;
+ estat = err;
+
+ if (estat & (ESTATUS_1000_TFULL | ESTATUS_1000_THALF)) {
+ current_speed = SPEED_1000;
+ if (estat & ESTATUS_1000_TFULL)
+ current_duplex = DUPLEX_FULL;
+ else
+ current_duplex = DUPLEX_HALF;
+ } else {
+ if (common & ADVERTISE_100BASE4) {
+ current_speed = SPEED_100;
+ current_duplex = DUPLEX_HALF;
+ } else if (common & ADVERTISE_100FULL) {
+ current_speed = SPEED_100;
+ current_duplex = DUPLEX_FULL;
+ } else if (common & ADVERTISE_100HALF) {
+ current_speed = SPEED_100;
+ current_duplex = DUPLEX_HALF;
+ } else if (common & ADVERTISE_10FULL) {
+ current_speed = SPEED_10;
+ current_duplex = DUPLEX_FULL;
+ } else if (common & ADVERTISE_10HALF) {
+ current_speed = SPEED_10;
+ current_duplex = DUPLEX_HALF;
+ } else
+ goto out;
+ }
+ link_up = 1;
+ }
+
+out:
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ *link_up_p = link_up;
+ return 0;
+}
+
+static int niu_link_status(struct niu *np, int *link_up_p)
+{
+ const struct niu_phy_ops *ops = np->phy_ops;
+ int err;
+
+ err = 0;
+ if (ops->link_status)
+ err = ops->link_status(np, link_up_p);
+
+ return err;
+}
+
+static void niu_timer(unsigned long __opaque)
+{
+ struct niu *np = (struct niu *) __opaque;
+ unsigned long off;
+ int err, link_up;
+
+ err = niu_link_status(np, &link_up);
+ if (!err)
+ niu_link_status_common(np, link_up);
+
+ if (netif_carrier_ok(np->dev))
+ off = 5 * HZ;
+ else
+ off = 1 * HZ;
+ np->timer.expires = jiffies + off;
+
+ add_timer(&np->timer);
+}
+
+static const struct niu_phy_ops phy_ops_10g_fiber_niu = {
+ .serdes_init = serdes_init_niu,
+ .xcvr_init = xcvr_init_10g,
+ .link_status = link_status_10g,
+};
+
+static const struct niu_phy_ops phy_ops_10g_fiber = {
+ .serdes_init = serdes_init_10g,
+ .xcvr_init = xcvr_init_10g,
+ .link_status = link_status_10g,
+};
+
+static const struct niu_phy_ops phy_ops_10g_copper = {
+ .serdes_init = serdes_init_10g,
+ .link_status = link_status_10g, /* XXX */
+};
+
+static const struct niu_phy_ops phy_ops_1g_fiber = {
+ .serdes_init = serdes_init_1g,
+ .xcvr_init = xcvr_init_1g,
+ .link_status = link_status_1g,
+};
+
+static const struct niu_phy_ops phy_ops_1g_copper = {
+ .xcvr_init = xcvr_init_1g,
+ .link_status = link_status_1g,
+};
+
+struct niu_phy_template {
+ const struct niu_phy_ops *ops;
+ u32 phy_addr_base;
+};
+
+static const struct niu_phy_template phy_template_niu = {
+ .ops = &phy_ops_10g_fiber_niu,
+ .phy_addr_base = 16,
+};
+
+static const struct niu_phy_template phy_template_10g_fiber = {
+ .ops = &phy_ops_10g_fiber,
+ .phy_addr_base = 8,
+};
+
+static const struct niu_phy_template phy_template_10g_copper = {
+ .ops = &phy_ops_10g_copper,
+ .phy_addr_base = 10,
+};
+
+static const struct niu_phy_template phy_template_1g_fiber = {
+ .ops = &phy_ops_1g_fiber,
+ .phy_addr_base = 0,
+};
+
+static const struct niu_phy_template phy_template_1g_copper = {
+ .ops = &phy_ops_1g_copper,
+ .phy_addr_base = 0,
+};
+
+static int niu_determine_phy_disposition(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ u8 plat_type = parent->plat_type;
+ const struct niu_phy_template *tp;
+ u32 phy_addr_off = 0;
+
+ if (plat_type == PLAT_TYPE_NIU) {
+ tp = &phy_template_niu;
+ phy_addr_off += np->port;
+ } else {
+ switch (np->flags & (NIU_FLAGS_10G | NIU_FLAGS_FIBER)) {
+ case 0:
+ /* 1G copper */
+ tp = &phy_template_1g_copper;
+ if (plat_type == PLAT_TYPE_VF_P0)
+ phy_addr_off = 10;
+ else if (plat_type == PLAT_TYPE_VF_P1)
+ phy_addr_off = 26;
+
+ phy_addr_off += (np->port ^ 0x3);
+ break;
+
+ case NIU_FLAGS_10G:
+ /* 10G copper */
+ tp = &phy_template_10g_copper;
+ break;
+
+ case NIU_FLAGS_FIBER:
+ /* 1G fiber */
+ tp = &phy_template_1g_fiber;
+ break;
+
+ case NIU_FLAGS_10G | NIU_FLAGS_FIBER:
+ /* 10G fiber */
+ tp = &phy_template_10g_fiber;
+ if (plat_type == PLAT_TYPE_VF_P0 ||
+ plat_type == PLAT_TYPE_VF_P1)
+ phy_addr_off = 8;
+ phy_addr_off += np->port;
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ }
+
+ np->phy_ops = tp->ops;
+ np->phy_addr = tp->phy_addr_base + phy_addr_off;
+
+ return 0;
+}
+
+static int niu_init_link(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ int err, ignore;
+
+ if (parent->plat_type == PLAT_TYPE_NIU) {
+ err = niu_xcvr_init(np);
+ if (err)
+ return err;
+ msleep(200);
+ }
+ err = niu_serdes_init(np);
+ if (err)
+ return err;
+ msleep(200);
+ err = niu_xcvr_init(np);
+ if (!err)
+ niu_link_status(np, &ignore);
+ return 0;
+}
+
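+/* The unicast MAC address is programmed as three 16-bit chunks,
+ * with ADDR2 holding the most significant bytes.
+ */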
+static void niu_set_primary_mac(struct niu *np, unsigned char *addr)
+{
+ u16 reg0 = addr[4] << 8 | addr[5];
+ u16 reg1 = addr[2] << 8 | addr[3];
+ u16 reg2 = addr[0] << 8 | addr[1];
+
+ if (np->flags & NIU_FLAGS_XMAC) {
+ nw64_mac(reg0, XMAC_ADDR0);
+ nw64_mac(reg1, XMAC_ADDR1);
+ nw64_mac(reg2, XMAC_ADDR2);
+ } else {
+ nw64_mac(reg0, BMAC_ADDR0);
+ nw64_mac(reg1, BMAC_ADDR1);
+ nw64_mac(reg2, BMAC_ADDR2);
+ }
+}
+
+static int niu_num_alt_addr(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ return XMAC_NUM_ALT_ADDR;
+ else
+ return BMAC_NUM_ALT_ADDR;
+}
+
+static int niu_set_alt_mac(struct niu *np, int index, unsigned char *addr)
+{
+ u16 reg0 = addr[4] << 8 | addr[5];
+ u16 reg1 = addr[2] << 8 | addr[3];
+ u16 reg2 = addr[0] << 8 | addr[1];
+
+ if (index >= niu_num_alt_addr(np))
+ return -EINVAL;
+
+ if (np->flags & NIU_FLAGS_XMAC) {
+ nw64_mac(reg0, XMAC_ALT_ADDR0(index));
+ nw64_mac(reg1, XMAC_ALT_ADDR1(index));
+ nw64_mac(reg2, XMAC_ALT_ADDR2(index));
+ } else {
+ nw64_mac(reg0, BMAC_ALT_ADDR0(index));
+ nw64_mac(reg1, BMAC_ALT_ADDR1(index));
+ nw64_mac(reg2, BMAC_ALT_ADDR2(index));
+ }
+
+ return 0;
+}
+
+static int niu_enable_alt_mac(struct niu *np, int index, int on)
+{
+ unsigned long reg;
+ u64 val, mask;
+
+ if (index >= niu_num_alt_addr(np))
+ return -EINVAL;
+
+ if (np->flags & NIU_FLAGS_XMAC)
+ reg = XMAC_ADDR_CMPEN;
+ else
+ reg = BMAC_ADDR_CMPEN;
+
+ mask = 1 << index;
+
+ val = nr64_mac(reg);
+ if (on)
+ val |= mask;
+ else
+ val &= ~mask;
+ nw64_mac(val, reg);
+
+ return 0;
+}
+
+static void __set_rdc_table_num_hw(struct niu *np, unsigned long reg,
+ int num, int mac_pref)
+{
+ u64 val = nr64_mac(reg);
+ val &= ~(HOST_INFO_MACRDCTBLN | HOST_INFO_MPR);
+ val |= num;
+ if (mac_pref)
+ val |= HOST_INFO_MPR;
+ nw64_mac(val, reg);
+}
+
+static int __set_rdc_table_num(struct niu *np,
+ int xmac_index, int bmac_index,
+ int rdc_table_num, int mac_pref)
+{
+ unsigned long reg;
+
+ if (rdc_table_num & ~HOST_INFO_MACRDCTBLN)
+ return -EINVAL;
+ if (np->flags & NIU_FLAGS_XMAC)
+ reg = XMAC_HOST_INFO(xmac_index);
+ else
+ reg = BMAC_HOST_INFO(bmac_index);
+ __set_rdc_table_num_hw(np, reg, rdc_table_num, mac_pref);
+ return 0;
+}
+
+static int niu_set_primary_mac_rdc_table(struct niu *np, int table_num,
+ int mac_pref)
+{
+ return __set_rdc_table_num(np, 17, 0, table_num, mac_pref);
+}
+
+static int niu_set_multicast_mac_rdc_table(struct niu *np, int table_num,
+ int mac_pref)
+{
+ return __set_rdc_table_num(np, 16, 8, table_num, mac_pref);
+}
+
+static int niu_set_alt_mac_rdc_table(struct niu *np, int idx, int table_num,
+ int mac_pref)
+{
+ if (idx >= niu_num_alt_addr(np))
+ return -EINVAL;
+ return __set_rdc_table_num(np, idx, idx + 1, table_num, mac_pref);
+}
+
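+/* Each VLAN table entry carries a parity bit for the port 0/1
+ * fields (the low byte) and another for the port 2/3 fields.
+ */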
+static u64 vlan_entry_set_parity(u64 reg_val)
+{
+ u64 port01_mask;
+ u64 port23_mask;
+
+ port01_mask = 0x00ff;
+ port23_mask = 0xff00;
+
+ if (hweight64(reg_val & port01_mask) & 1)
+ reg_val |= ENET_VLAN_TBL_PARITY0;
+ else
+ reg_val &= ~ENET_VLAN_TBL_PARITY0;
+
+ if (hweight64(reg_val & port23_mask) & 1)
+ reg_val |= ENET_VLAN_TBL_PARITY1;
+ else
+ reg_val &= ~ENET_VLAN_TBL_PARITY1;
+
+ return reg_val;
+}
+
+static void vlan_tbl_write(struct niu *np, unsigned long index,
+ int port, int vpr, int rdc_table)
+{
+ u64 reg_val = nr64(ENET_VLAN_TBL(index));
+
+ reg_val &= ~((ENET_VLAN_TBL_VPR |
+ ENET_VLAN_TBL_VLANRDCTBLN) <<
+ ENET_VLAN_TBL_SHIFT(port));
+ if (vpr)
+ reg_val |= (ENET_VLAN_TBL_VPR <<
+ ENET_VLAN_TBL_SHIFT(port));
+ reg_val |= (rdc_table << ENET_VLAN_TBL_SHIFT(port));
+
+ reg_val = vlan_entry_set_parity(reg_val);
+
+ nw64(reg_val, ENET_VLAN_TBL(index));
+}
+
+static void vlan_tbl_clear(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < ENET_VLAN_TBL_NUM_ENTRIES; i++)
+ nw64(0, ENET_VLAN_TBL(i));
+}
+
+static int tcam_wait_bit(struct niu *np, u64 bit)
+{
+ int limit = 1000;
+
+ while (--limit > 0) {
+ if (nr64(TCAM_CTL) & bit)
+ break;
+ udelay(1);
+ }
+ if (limit <= 0)
+ return -ENODEV;
+
+ return 0;
+}
+
+static int tcam_flush(struct niu *np, int index)
+{
+ nw64(0x00, TCAM_KEY_0);
+ nw64(0xff, TCAM_KEY_MASK_0);
+ nw64((TCAM_CTL_RWC_TCAM_WRITE | index), TCAM_CTL);
+
+ return tcam_wait_bit(np, TCAM_CTL_STAT);
+}
+
+#if 0
+static int tcam_read(struct niu *np, int index,
+ u64 *key, u64 *mask)
+{
+ int err;
+
+ nw64((TCAM_CTL_RWC_TCAM_READ | index), TCAM_CTL);
+ err = tcam_wait_bit(np, TCAM_CTL_STAT);
+ if (!err) {
+ key[0] = nr64(TCAM_KEY_0);
+ key[1] = nr64(TCAM_KEY_1);
+ key[2] = nr64(TCAM_KEY_2);
+ key[3] = nr64(TCAM_KEY_3);
+ mask[0] = nr64(TCAM_KEY_MASK_0);
+ mask[1] = nr64(TCAM_KEY_MASK_1);
+ mask[2] = nr64(TCAM_KEY_MASK_2);
+ mask[3] = nr64(TCAM_KEY_MASK_3);
+ }
+ return err;
+}
+#endif
+
+static int tcam_write(struct niu *np, int index,
+ u64 *key, u64 *mask)
+{
+ nw64(key[0], TCAM_KEY_0);
+ nw64(key[1], TCAM_KEY_1);
+ nw64(key[2], TCAM_KEY_2);
+ nw64(key[3], TCAM_KEY_3);
+ nw64(mask[0], TCAM_KEY_MASK_0);
+ nw64(mask[1], TCAM_KEY_MASK_1);
+ nw64(mask[2], TCAM_KEY_MASK_2);
+ nw64(mask[3], TCAM_KEY_MASK_3);
+ nw64((TCAM_CTL_RWC_TCAM_WRITE | index), TCAM_CTL);
+
+ return tcam_wait_bit(np, TCAM_CTL_STAT);
+}
+
+#if 0
+static int tcam_assoc_read(struct niu *np, int index, u64 *data)
+{
+ int err;
+
+ nw64((TCAM_CTL_RWC_RAM_READ | index), TCAM_CTL);
+ err = tcam_wait_bit(np, TCAM_CTL_STAT);
+ if (!err)
+ *data = nr64(TCAM_KEY_1);
+
+ return err;
+}
+#endif
+
+static int tcam_assoc_write(struct niu *np, int index, u64 assoc_data)
+{
+ nw64(assoc_data, TCAM_KEY_1);
+ nw64((TCAM_CTL_RWC_RAM_WRITE | index), TCAM_CTL);
+
+ return tcam_wait_bit(np, TCAM_CTL_STAT);
+}
+
+static void tcam_enable(struct niu *np, int on)
+{
+ u64 val = nr64(FFLP_CFG_1);
+
+ if (on)
+ val &= ~FFLP_CFG_1_TCAM_DIS;
+ else
+ val |= FFLP_CFG_1_TCAM_DIS;
+ nw64(val, FFLP_CFG_1);
+}
+
+static void tcam_set_lat_and_ratio(struct niu *np, u64 latency, u64 ratio)
+{
+ u64 val = nr64(FFLP_CFG_1);
+
+ val &= ~(FFLP_CFG_1_FFLPINITDONE |
+ FFLP_CFG_1_CAMLAT |
+ FFLP_CFG_1_CAMRATIO);
+ val |= (latency << FFLP_CFG_1_CAMLAT_SHIFT);
+ val |= (ratio << FFLP_CFG_1_CAMRATIO_SHIFT);
+ nw64(val, FFLP_CFG_1);
+
+ val = nr64(FFLP_CFG_1);
+ val |= FFLP_CFG_1_FFLPINITDONE;
+ nw64(val, FFLP_CFG_1);
+}
+
+static int tcam_user_eth_class_enable(struct niu *np, unsigned long class,
+ int on)
+{
+ unsigned long reg;
+ u64 val;
+
+ if (class < CLASS_CODE_ETHERTYPE1 ||
+ class > CLASS_CODE_ETHERTYPE2)
+ return -EINVAL;
+
+ reg = L2_CLS(class - CLASS_CODE_ETHERTYPE1);
+ val = nr64(reg);
+ if (on)
+ val |= L2_CLS_VLD;
+ else
+ val &= ~L2_CLS_VLD;
+ nw64(val, reg);
+
+ return 0;
+}
+
+#if 0
+static int tcam_user_eth_class_set(struct niu *np, unsigned long class,
+ u64 ether_type)
+{
+ unsigned long reg;
+ u64 val;
+
+ if (class < CLASS_CODE_ETHERTYPE1 ||
+ class > CLASS_CODE_ETHERTYPE2 ||
+ (ether_type & ~(u64)0xffff) != 0)
+ return -EINVAL;
+
+ reg = L2_CLS(class - CLASS_CODE_ETHERTYPE1);
+ val = nr64(reg);
+ val &= ~L2_CLS_ETYPE;
+ val |= (ether_type << L2_CLS_ETYPE_SHIFT);
+ nw64(val, reg);
+
+ return 0;
+}
+#endif
+
+static int tcam_user_ip_class_enable(struct niu *np, unsigned long class,
+ int on)
+{
+ unsigned long reg;
+ u64 val;
+
+ if (class < CLASS_CODE_USER_PROG1 ||
+ class > CLASS_CODE_USER_PROG4)
+ return -EINVAL;
+
+ reg = L3_CLS(class - CLASS_CODE_USER_PROG1);
+ val = nr64(reg);
+ if (on)
+ val |= L3_CLS_VALID;
+ else
+ val &= ~L3_CLS_VALID;
+ nw64(val, reg);
+
+ return 0;
+}
+
+#if 0
+static int tcam_user_ip_class_set(struct niu *np, unsigned long class,
+ int ipv6, u64 protocol_id,
+ u64 tos_mask, u64 tos_val)
+{
+ unsigned long reg;
+ u64 val;
+
+ if (class < CLASS_CODE_USER_PROG1 ||
+ class > CLASS_CODE_USER_PROG4 ||
+ (protocol_id & ~(u64)0xff) != 0 ||
+ (tos_mask & ~(u64)0xff) != 0 ||
+ (tos_val & ~(u64)0xff) != 0)
+ return -EINVAL;
+
+ reg = L3_CLS(class - CLASS_CODE_USER_PROG1);
+ val = nr64(reg);
+ val &= ~(L3_CLS_IPVER | L3_CLS_PID |
+ L3_CLS_TOSMASK | L3_CLS_TOS);
+ if (ipv6)
+ val |= L3_CLS_IPVER;
+ val |= (protocol_id << L3_CLS_PID_SHIFT);
+ val |= (tos_mask << L3_CLS_TOSMASK_SHIFT);
+ val |= (tos_val << L3_CLS_TOS_SHIFT);
+ nw64(val, reg);
+
+ return 0;
+}
+#endif
+
+static int tcam_early_init(struct niu *np)
+{
+ unsigned long i;
+ int err;
+
+ tcam_enable(np, 0);
+ tcam_set_lat_and_ratio(np,
+ DEFAULT_TCAM_LATENCY,
+ DEFAULT_TCAM_ACCESS_RATIO);
+ for (i = CLASS_CODE_ETHERTYPE1; i <= CLASS_CODE_ETHERTYPE2; i++) {
+ err = tcam_user_eth_class_enable(np, i, 0);
+ if (err)
+ return err;
+ }
+ for (i = CLASS_CODE_USER_PROG1; i <= CLASS_CODE_USER_PROG4; i++) {
+ err = tcam_user_ip_class_enable(np, i, 0);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int tcam_flush_all(struct niu *np)
+{
+ unsigned long i;
+
+ for (i = 0; i < np->parent->tcam_num_entries; i++) {
+ int err = tcam_flush(np, i);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+static u64 hash_addr_regval(unsigned long index, unsigned long num_entries)
+{
+ return ((u64)index | (num_entries == 1 ?
+ HASH_TBL_ADDR_AUTOINC : 0));
+}
+
+#if 0
+static int hash_read(struct niu *np, unsigned long partition,
+ unsigned long index, unsigned long num_entries,
+ u64 *data)
+{
+ u64 val = hash_addr_regval(index, num_entries);
+ unsigned long i;
+
+ if (partition >= FCRAM_NUM_PARTITIONS ||
+ index + num_entries > FCRAM_SIZE)
+ return -EINVAL;
+
+ nw64(val, HASH_TBL_ADDR(partition));
+ for (i = 0; i < num_entries; i++)
+ data[i] = nr64(HASH_TBL_DATA(partition));
+
+ return 0;
+}
+#endif
+
+static int hash_write(struct niu *np, unsigned long partition,
+ unsigned long index, unsigned long num_entries,
+ u64 *data)
+{
+ u64 val = hash_addr_regval(index, num_entries);
+ unsigned long i;
+
+ if (partition >= FCRAM_NUM_PARTITIONS ||
+ index + (num_entries * 8) > FCRAM_SIZE)
+ return -EINVAL;
+
+ nw64(val, HASH_TBL_ADDR(partition));
+ for (i = 0; i < num_entries; i++)
+ nw64(data[i], HASH_TBL_DATA(partition));
+
+ return 0;
+}
+
+static void fflp_reset(struct niu *np)
+{
+ u64 val;
+
+ nw64(FFLP_CFG_1_PIO_FIO_RST, FFLP_CFG_1);
+ udelay(10);
+ nw64(0, FFLP_CFG_1);
+
+ val = FFLP_CFG_1_FCRAMOUTDR_NORMAL | FFLP_CFG_1_FFLPINITDONE;
+ nw64(val, FFLP_CFG_1);
+}
+
+static void fflp_set_timings(struct niu *np)
+{
+ u64 val = nr64(FFLP_CFG_1);
+
+ val &= ~FFLP_CFG_1_FFLPINITDONE;
+ val |= (DEFAULT_FCRAMRATIO << FFLP_CFG_1_FCRAMRATIO_SHIFT);
+ nw64(val, FFLP_CFG_1);
+
+ val = nr64(FFLP_CFG_1);
+ val |= FFLP_CFG_1_FFLPINITDONE;
+ nw64(val, FFLP_CFG_1);
+
+ val = nr64(FCRAM_REF_TMR);
+ val &= ~(FCRAM_REF_TMR_MAX | FCRAM_REF_TMR_MIN);
+ val |= (DEFAULT_FCRAM_REFRESH_MAX << FCRAM_REF_TMR_MAX_SHIFT);
+ val |= (DEFAULT_FCRAM_REFRESH_MIN << FCRAM_REF_TMR_MIN_SHIFT);
+ nw64(val, FCRAM_REF_TMR);
+}
+
+static int fflp_set_partition(struct niu *np, u64 partition,
+ u64 mask, u64 base, int enable)
+{
+ unsigned long reg;
+ u64 val;
+
+ if (partition >= FCRAM_NUM_PARTITIONS ||
+ (mask & ~(u64)0x1f) != 0 ||
+ (base & ~(u64)0x1f) != 0)
+ return -EINVAL;
+
+ reg = FLW_PRT_SEL(partition);
+
+ val = nr64(reg);
+ val &= ~(FLW_PRT_SEL_EXT | FLW_PRT_SEL_MASK | FLW_PRT_SEL_BASE);
+ val |= (mask << FLW_PRT_SEL_MASK_SHIFT);
+ val |= (base << FLW_PRT_SEL_BASE_SHIFT);
+ if (enable)
+ val |= FLW_PRT_SEL_EXT;
+ nw64(val, reg);
+
+ return 0;
+}
+
+static int fflp_disable_all_partitions(struct niu *np)
+{
+ unsigned long i;
+
+ for (i = 0; i < FCRAM_NUM_PARTITIONS; i++) {
+ int err = fflp_set_partition(np, i, 0, 0, 0);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+static void fflp_llcsnap_enable(struct niu *np, int on)
+{
+ u64 val = nr64(FFLP_CFG_1);
+
+ if (on)
+ val |= FFLP_CFG_1_LLCSNAP;
+ else
+ val &= ~FFLP_CFG_1_LLCSNAP;
+ nw64(val, FFLP_CFG_1);
+}
+
+static void fflp_errors_enable(struct niu *np, int on)
+{
+ u64 val = nr64(FFLP_CFG_1);
+
+ if (on)
+ val &= ~FFLP_CFG_1_ERRORDIS;
+ else
+ val |= FFLP_CFG_1_ERRORDIS;
+ nw64(val, FFLP_CFG_1);
+}
+
+static int fflp_hash_clear(struct niu *np)
+{
+ struct fcram_hash_ipv4 ent;
+ unsigned long i;
+
+ /* IPV4 hash entry with valid bit clear, rest is don't care. */
+ memset(&ent, 0, sizeof(ent));
+ ent.header = HASH_HEADER_EXT;
+
+ for (i = 0; i < FCRAM_SIZE; i += sizeof(ent)) {
+ int err = hash_write(np, 0, i, 1, (u64 *) &ent);
+ if (err)
+ return err;
+ }
+ return 0;
+}
+
+static int fflp_early_init(struct niu *np)
+{
+ struct niu_parent *parent;
+ unsigned long flags;
+ int err;
+
+ niu_lock_parent(np, flags);
+
+ parent = np->parent;
+ err = 0;
+ if (!(parent->flags & PARENT_FLGS_CLS_HWINIT)) {
+ niudbg(PROBE, "fflp_early_init: Initting hw on port %u\n",
+ np->port);
+ if (np->parent->plat_type == PLAT_TYPE_ATLAS) {
+ fflp_reset(np);
+ fflp_set_timings(np);
+ err = fflp_disable_all_partitions(np);
+ if (err) {
+ niudbg(PROBE, "fflp_disable_all_partitions "
+ "failed, err=%d\n", err);
+ goto out;
+ }
+ }
+
+ err = tcam_early_init(np);
+ if (err) {
+ niudbg(PROBE, "tcam_early_init failed, err=%d\n",
+ err);
+ goto out;
+ }
+ fflp_llcsnap_enable(np, 1);
+ fflp_errors_enable(np, 0);
+ nw64(0, H1POLY);
+ nw64(0, H2POLY);
+
+ err = tcam_flush_all(np);
+ if (err) {
+ niudbg(PROBE, "tcam_flush_all failed, err=%d\n",
+ err);
+ goto out;
+ }
+ if (np->parent->plat_type == PLAT_TYPE_ATLAS) {
+ err = fflp_hash_clear(np);
+ if (err) {
+ niudbg(PROBE, "fflp_hash_clear failed, "
+ "err=%d\n", err);
+ goto out;
+ }
+ }
+
+ vlan_tbl_clear(np);
+
+ niudbg(PROBE, "fflp_early_init: Success\n");
+ parent->flags |= PARENT_FLGS_CLS_HWINIT;
+ }
+out:
+ niu_unlock_parent(np, flags);
+ return err;
+}
+
+static int niu_set_flow_key(struct niu *np, unsigned long class_code, u64 key)
+{
+ if (class_code < CLASS_CODE_USER_PROG1 ||
+ class_code > CLASS_CODE_SCTP_IPV6)
+ return -EINVAL;
+
+ nw64(key, FLOW_KEY(class_code - CLASS_CODE_USER_PROG1));
+ return 0;
+}
+
+static int niu_set_tcam_key(struct niu *np, unsigned long class_code, u64 key)
+{
+ if (class_code < CLASS_CODE_USER_PROG1 ||
+ class_code > CLASS_CODE_SCTP_IPV6)
+ return -EINVAL;
+
+ nw64(key, TCAM_KEY(class_code - CLASS_CODE_USER_PROG1));
+ return 0;
+}
+
+static void niu_rx_skb_append(struct sk_buff *skb, struct page *page,
+ u32 offset, u32 size)
+{
+ int i = skb_shinfo(skb)->nr_frags;
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+ frag->page = page;
+ frag->page_offset = offset;
+ frag->size = size;
+
+ skb->len += size;
+ skb->data_len += size;
+ skb->truesize += size;
+
+ skb_shinfo(skb)->nr_frags = i + 1;
+#if 0
+ {
+ unsigned char *data = page_address(page) + offset;
+ int q;
+
+ printk(KERN_ERR PFX "SKB append off(%u) size(%u) len(%u) dlen(%u) HDR [ ",
+ offset, size, skb->len, skb->data_len);
+ for (q = 0; q < 14; q++)
+ printk("%02x", data[q]);
+ printk(" ]\n");
+ }
+#endif
+}
+
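+/* Outstanding RX pages are kept in a small hash keyed by DMA
+ * address; pages chain through the otherwise-unused page->mapping
+ * field and record their base DMA address in page->index.
+ */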
+static unsigned int niu_hash_rxaddr(struct rx_ring_info *rp, u64 a)
+{
+ a >>= PAGE_SHIFT;
+ a ^= (a >> ilog2(MAX_RBR_RING_SIZE));
+
+ return (a & (MAX_RBR_RING_SIZE - 1));
+}
+
+static struct page *niu_find_rxpage(struct rx_ring_info *rp, u64 addr,
+ struct page ***link)
+{
+ unsigned int h = niu_hash_rxaddr(rp, addr);
+ struct page *p, **pp;
+
+ addr &= PAGE_MASK;
+ pp = &rp->rxhash[h];
+ for (; (p = *pp) != NULL; pp = (struct page **) &p->mapping) {
+ if (p->index == addr) {
+ *link = pp;
+ break;
+ }
+ }
+
+ return p;
+}
+
+static void niu_hash_page(struct rx_ring_info *rp, struct page *page, u64 base)
+{
+ unsigned int h = niu_hash_rxaddr(rp, base);
+
+ page->index = base;
+ page->mapping = (struct address_space *) rp->rxhash[h];
+ rp->rxhash[h] = page;
+}
+
+static int niu_rbr_add_page(struct niu *np, struct rx_ring_info *rp,
+ gfp_t mask, int start_index)
+{
+ struct page *page;
+ u64 addr;
+ int i;
+
+ page = alloc_page(mask);
+ if (!page)
+ return -ENOMEM;
+
+ addr = np->ops->map_page(np->device, page, 0,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+
+ niu_hash_page(rp, page, addr);
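+ /* Each RBR block in the page holds a reference, so take the
+ * extra references up front; the page is freed only when every
+ * block has been consumed.
+ */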
+ if (rp->rbr_blocks_per_page > 1)
+ atomic_add(rp->rbr_blocks_per_page - 1,
+ &compound_head(page)->_count);
+
+ for (i = 0; i < rp->rbr_blocks_per_page; i++) {
+ __le32 *rbr = &rp->rbr[start_index + i];
+
+ *rbr = cpu_to_le32(addr >> RBR_DESCR_ADDR_SHIFT);
+ addr += rp->rbr_block_size;
+ }
+
+ return 0;
+}
+
+static void niu_rbr_refill(struct niu *np, struct rx_ring_info *rp, gfp_t mask)
+{
+ int index = rp->rbr_index;
+
+ rp->rbr_pending++;
+ if ((rp->rbr_pending % rp->rbr_blocks_per_page) == 0) {
+ int err = niu_rbr_add_page(np, rp, mask, index);
+
+ if (unlikely(err)) {
+ rp->rbr_pending--;
+ return;
+ }
+
+ rp->rbr_index += rp->rbr_blocks_per_page;
+ BUG_ON(rp->rbr_index > rp->rbr_table_size);
+ if (rp->rbr_index == rp->rbr_table_size)
+ rp->rbr_index = 0;
+
+ if (rp->rbr_pending >= rp->rbr_kick_thresh) {
+ nw64(rp->rbr_pending, RBR_KICK(rp->rx_channel));
+ rp->rbr_pending = 0;
+ }
+ }
+}
+
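+/* We could not build an skb for this packet, so walk its RCR
+ * entries, release the buffers, and account a dropped packet.
+ */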
+static int niu_rx_pkt_ignore(struct niu *np, struct rx_ring_info *rp)
+{
+ unsigned int index = rp->rcr_index;
+ int num_rcr = 0;
+
+ rp->rx_dropped++;
+ while (1) {
+ struct page *page, **link;
+ u64 addr, val;
+ u32 rcr_size;
+
+ num_rcr++;
+
+ val = le64_to_cpup(&rp->rcr[index]);
+ addr = (val & RCR_ENTRY_PKT_BUF_ADDR) <<
+ RCR_ENTRY_PKT_BUF_ADDR_SHIFT;
+ page = niu_find_rxpage(rp, addr, &link);
+ BUG_ON(!page);
+
+ rcr_size = rp->rbr_sizes[(val & RCR_ENTRY_PKTBUFSZ) >>
+ RCR_ENTRY_PKTBUFSZ_SHIFT];
+ if ((page->index + PAGE_SIZE) - rcr_size == addr) {
+ *link = (struct page *) page->mapping;
+ np->ops->unmap_page(np->device, page->index,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ page->index = 0;
+ page->mapping = NULL;
+ __free_page(page);
+ rp->rbr_refill_pending++;
+ }
+
+ index = NEXT_RCR(rp, index);
+ if (!(val & RCR_ENTRY_MULTI))
+ break;
+
+ }
+ rp->rcr_index = index;
+
+ return num_rcr;
+}
+
+static int niu_process_rx_pkt(struct niu *np, struct rx_ring_info *rp)
+{
+ unsigned int index = rp->rcr_index;
+ struct sk_buff *skb;
+ int len, num_rcr;
+
+ skb = netdev_alloc_skb(np->dev, RX_SKB_ALLOC_SIZE);
+ if (unlikely(!skb))
+ return niu_rx_pkt_ignore(np, rp);
+
+ num_rcr = 0;
+ while (1) {
+ struct page *page, **link;
+ u32 rcr_size, append_size;
+ u64 addr, val, off;
+
+ num_rcr++;
+
+ val = le64_to_cpup(&rp->rcr[index]);
+
+ len = (val & RCR_ENTRY_L2_LEN) >>
+ RCR_ENTRY_L2_LEN_SHIFT;
+ len -= ETH_FCS_LEN;
+
+ addr = (val & RCR_ENTRY_PKT_BUF_ADDR) <<
+ RCR_ENTRY_PKT_BUF_ADDR_SHIFT;
+ page = niu_find_rxpage(rp, addr, &link);
+ BUG_ON(!page);
+
+ rcr_size = rp->rbr_sizes[(val & RCR_ENTRY_PKTBUFSZ) >>
+ RCR_ENTRY_PKTBUFSZ_SHIFT];
+
+ off = addr & ~PAGE_MASK;
+ append_size = rcr_size;
+ if (num_rcr == 1) {
+ int ptype;
+
+ off += 2;
+ append_size -= 2;
+
+ ptype = (val >> RCR_ENTRY_PKT_TYPE_SHIFT);
+ if ((ptype == RCR_PKT_TYPE_TCP ||
+ ptype == RCR_PKT_TYPE_UDP) &&
+ !(val & (RCR_ENTRY_NOPORT |
+ RCR_ENTRY_ERROR)))
+ skb->ip_summed = CHECKSUM_UNNECESSARY;
+ else
+ skb->ip_summed = CHECKSUM_NONE;
+ }
+ if (!(val & RCR_ENTRY_MULTI))
+ append_size = len - skb->len;
+
+ niu_rx_skb_append(skb, page, off, append_size);
+ if ((page->index + rp->rbr_block_size) - rcr_size == addr) {
+ *link = (struct page *) page->mapping;
+ np->ops->unmap_page(np->device, page->index,
+ PAGE_SIZE, DMA_FROM_DEVICE);
+ page->index = 0;
+ page->mapping = NULL;
+ rp->rbr_refill_pending++;
+ } else
+ get_page(page);
+
+ index = NEXT_RCR(rp, index);
+ if (!(val & RCR_ENTRY_MULTI))
+ break;
+
+ }
+ rp->rcr_index = index;
+
+ skb_reserve(skb, NET_IP_ALIGN);
+ __pskb_pull_tail(skb, min(len, NIU_RXPULL_MAX));
+
+ rp->rx_packets++;
+ rp->rx_bytes += skb->len;
+
+ skb->protocol = eth_type_trans(skb, np->dev);
+ netif_receive_skb(skb);
+
+ return num_rcr;
+}
+
+static int niu_rbr_fill(struct niu *np, struct rx_ring_info *rp, gfp_t mask)
+{
+ int blocks_per_page = rp->rbr_blocks_per_page;
+ int err, index = rp->rbr_index;
+
+ err = 0;
+ while (index < (rp->rbr_table_size - blocks_per_page)) {
+ err = niu_rbr_add_page(np, rp, mask, index);
+ if (err)
+ break;
+
+ index += blocks_per_page;
+ }
+
+ rp->rbr_index = index;
+ return err;
+}
+
+static void niu_rbr_free(struct niu *np, struct rx_ring_info *rp)
+{
+ int i;
+
+ for (i = 0; i < MAX_RBR_RING_SIZE; i++) {
+ struct page *page;
+
+ page = rp->rxhash[i];
+ while (page) {
+ struct page *next = (struct page *) page->mapping;
+ u64 base = page->index;
+
+ np->ops->unmap_page(np->device, base, PAGE_SIZE,
+ DMA_FROM_DEVICE);
+ page->index = 0;
+ page->mapping = NULL;
+
+ __free_page(page);
+
+ page = next;
+ }
+ }
+
+ for (i = 0; i < rp->rbr_table_size; i++)
+ rp->rbr[i] = cpu_to_le32(0);
+ rp->rbr_index = 0;
+}
+
+static int release_tx_packet(struct niu *np, struct tx_ring_info *rp, int idx)
+{
+ struct tx_buff_info *tb = &rp->tx_buffs[idx];
+ struct sk_buff *skb = tb->skb;
+ struct tx_pkt_hdr *tp;
+ u64 tx_flags;
+ int i;
+
+ BUG_ON(!skb);
+
+ tp = (struct tx_pkt_hdr *) skb->data;
+ tx_flags = le64_to_cpup(&tp->flags);
+
+ rp->tx_packets++;
+ rp->tx_bytes += (((tx_flags & TXHDR_LEN) >> TXHDR_LEN_SHIFT) -
+ ((tx_flags & TXHDR_PAD) / 2));
+
+ np->ops->unmap_single(np->device, tb->mapping,
+ skb_headlen(skb), DMA_TO_DEVICE);
+
+ tb->skb = NULL;
+ idx = NEXT_TX(rp, idx);
+
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ tb = &rp->tx_buffs[idx];
+
+ BUG_ON(tb->skb != NULL);
+ np->ops->unmap_page(np->device, tb->mapping,
+ skb_shinfo(skb)->frags[i].size,
+ DMA_TO_DEVICE);
+ idx = NEXT_TX(rp, idx);
+ }
+
+ dev_kfree_skb(skb);
+
+ return idx;
+}
+
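+/* Wake the queue only once a quarter of the ring is free again,
+ * to avoid ping-ponging between stopped and running states.
+ */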
+#define NIU_TX_WAKEUP_THRESH(rp) ((rp)->pending / 4)
+
+static void niu_tx_work(struct niu *np, struct tx_ring_info *rp)
+{
+ u64 hdl, cs;
+ int head, cons;
+
+ cs = rp->tx_cs;
+ if (unlikely(!(cs & (TX_CS_MK | TX_CS_MMK))))
+ goto out;
+
+ hdl = rp->tx_hdl;
+ head = (hdl & TX_RING_HDL_HEAD) >> TX_RING_HDL_HEAD_SHIFT;
+ cons = rp->cons;
+
+ niudbg(TX_WORK, "%s: niu_tx_work() head[%d] cons[%d]\n",
+ np->dev->name, head, cons);
+
+ while (cons != head)
+ cons = release_tx_packet(np, rp, cons);
+
+ rp->cons = cons;
+ smp_mb();
+
+out:
+ if (unlikely(netif_queue_stopped(np->dev) &&
+ (niu_tx_avail(rp) > NIU_TX_WAKEUP_THRESH(rp)))) {
+ netif_tx_lock(np->dev);
+ if (netif_queue_stopped(np->dev) &&
+ (niu_tx_avail(rp) > NIU_TX_WAKEUP_THRESH(rp)))
+ netif_wake_queue(np->dev);
+ netif_tx_unlock(np->dev);
+ }
+}
+
+static int niu_rx_work(struct niu *np, struct rx_ring_info *rp, int budget)
+{
+ int qlen, rcr_done = 0, work_done = 0;
+ struct rxdma_mailbox *mbox = rp->mbox;
+ u64 stat;
+
+#if 1
+ stat = nr64(RX_DMA_CTL_STAT(rp->rx_channel));
+ qlen = nr64(RCRSTAT_A(rp->rx_channel)) & RCRSTAT_A_QLEN;
+#else
+ stat = le64_to_cpup(&mbox->rx_dma_ctl_stat);
+ qlen = (le64_to_cpup(&mbox->rcrstat_a) & RCRSTAT_A_QLEN);
+#endif
+ mbox->rx_dma_ctl_stat = 0;
+ mbox->rcrstat_a = 0;
+
+ niudbg(RX_WORK, "%s: niu_rx_work(chan[%d]), stat[%lx] qlen=%d\n",
+ np->dev->name, rp->rx_channel, stat, qlen);
+
+ rcr_done = work_done = 0;
+ qlen = min(qlen, budget);
+ while (work_done < qlen) {
+ rcr_done += niu_process_rx_pkt(np, rp);
+ work_done++;
+ }
+
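+	/* niu_process_rx_pkt() counts fully consumed pages in
+	 * rbr_refill_pending; once enough have drained, refill the
+	 * RBR in one batch.
+	 */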
+ if (rp->rbr_refill_pending >= rp->rbr_kick_thresh) {
+ unsigned int i;
+
+ for (i = 0; i < rp->rbr_refill_pending; i++)
+ niu_rbr_refill(np, rp, GFP_ATOMIC);
+ rp->rbr_refill_pending = 0;
+ }
+
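+	/* Report back to the chip how many packets and RCR entries
+	 * were processed.
+	 */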
+ stat = (RX_DMA_CTL_STAT_MEX |
+ ((u64)work_done << RX_DMA_CTL_STAT_PKTREAD_SHIFT) |
+ ((u64)rcr_done << RX_DMA_CTL_STAT_PTRREAD_SHIFT));
+
+ nw64(stat, RX_DMA_CTL_STAT(rp->rx_channel));
+
+ return work_done;
+}
+
+static int niu_poll_core(struct niu *np, struct niu_ldg *lp, int budget)
+{
+ u64 v0 = lp->v0;
+ u32 tx_vec = (v0 >> 32);
+ u32 rx_vec = (v0 & 0xffffffff);
+ int i, work_done = 0;
+
+ niudbg(POLL, "%s: niu_poll_core() v0[%016lx]\n",
+ np->dev->name, v0);
+
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+ if (tx_vec & (1 << rp->tx_channel))
+ niu_tx_work(np, rp);
+ nw64(0, LD_IM0(LDN_TXDMA(rp->tx_channel)));
+ }
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ if (rx_vec & (1 << rp->rx_channel)) {
+ int this_work_done;
+
+ this_work_done = niu_rx_work(np, rp,
+ budget);
+
+ budget -= this_work_done;
+ work_done += this_work_done;
+ }
+ nw64(0, LD_IM0(LDN_RXDMA(rp->rx_channel)));
+ }
+
+ return work_done;
+}
+
+static int niu_poll(struct napi_struct *napi, int budget)
+{
+ struct niu_ldg *lp = container_of(napi, struct niu_ldg, napi);
+ struct niu *np = lp->np;
+ int work_done;
+
+ work_done = niu_poll_core(np, lp, budget);
+
+ if (work_done < budget) {
+ netif_rx_complete(np->dev, napi);
+ niu_ldg_rearm(np, lp, 1);
+ }
+ return work_done;
+}
+
+static void niu_log_rxchan_errors(struct niu *np, struct rx_ring_info *rp,
+ u64 stat)
+{
+ printk(KERN_ERR PFX "%s: RX channel %u errors ( ",
+ np->dev->name, rp->rx_channel);
+
+ if (stat & RX_DMA_CTL_STAT_RBR_TMOUT)
+ printk("RBR_TMOUT ");
+ if (stat & RX_DMA_CTL_STAT_RSP_CNT_ERR)
+ printk("RSP_CNT ");
+ if (stat & RX_DMA_CTL_STAT_BYTE_EN_BUS)
+ printk("BYTE_EN_BUS ");
+ if (stat & RX_DMA_CTL_STAT_RSP_DAT_ERR)
+ printk("RSP_DAT ");
+ if (stat & RX_DMA_CTL_STAT_RCR_ACK_ERR)
+ printk("RCR_ACK ");
+ if (stat & RX_DMA_CTL_STAT_RCR_SHA_PAR)
+ printk("RCR_SHA_PAR ");
+ if (stat & RX_DMA_CTL_STAT_RBR_PRE_PAR)
+ printk("RBR_PRE_PAR ");
+ if (stat & RX_DMA_CTL_STAT_CONFIG_ERR)
+ printk("CONFIG ");
+ if (stat & RX_DMA_CTL_STAT_RCRINCON)
+ printk("RCRINCON ");
+ if (stat & RX_DMA_CTL_STAT_RCRFULL)
+ printk("RCRFULL ");
+ if (stat & RX_DMA_CTL_STAT_RBRFULL)
+ printk("RBRFULL ");
+ if (stat & RX_DMA_CTL_STAT_RBRLOGPAGE)
+ printk("RBRLOGPAGE ");
+ if (stat & RX_DMA_CTL_STAT_CFIGLOGPAGE)
+ printk("CFIGLOGPAGE ");
+ if (stat & RX_DMA_CTL_STAT_DC_FIFO_ERR)
+ printk("DC_FIDO ");
+
+ printk(")\n");
+}
+
+static int niu_rx_error(struct niu *np, struct rx_ring_info *rp)
+{
+ u64 stat = nr64(RX_DMA_CTL_STAT(rp->rx_channel));
+ int err = 0;
+
+ printk(KERN_ERR PFX "%s: RX channel %u error, stat[%lx]\n",
+ np->dev->name, rp->rx_channel, stat);
+
+ niu_log_rxchan_errors(np, rp, stat);
+
+ if (stat & (RX_DMA_CTL_STAT_CHAN_FATAL |
+ RX_DMA_CTL_STAT_PORT_FATAL))
+ err = -EINVAL;
+
+ nw64(stat & RX_DMA_CTL_WRITE_CLEAR_ERRS,
+ RX_DMA_CTL_STAT(rp->rx_channel));
+
+ return err;
+}
+
+static void niu_log_txchan_errors(struct niu *np, struct tx_ring_info *rp,
+ u64 cs)
+{
+ printk(KERN_ERR PFX "%s: TX channel %u errors ( ",
+ np->dev->name, rp->tx_channel);
+
+ if (cs & TX_CS_MBOX_ERR)
+ printk("MBOX ");
+ if (cs & TX_CS_PKT_SIZE_ERR)
+ printk("PKT_SIZE ");
+ if (cs & TX_CS_TX_RING_OFLOW)
+ printk("TX_RING_OFLOW ");
+ if (cs & TX_CS_PREF_BUF_PAR_ERR)
+ printk("PREF_BUF_PAR ");
+ if (cs & TX_CS_NACK_PREF)
+ printk("NACK_PREF ");
+ if (cs & TX_CS_NACK_PKT_RD)
+ printk("NACK_PKT_RD ");
+ if (cs & TX_CS_CONF_PART_ERR)
+ printk("CONF_PART ");
+ if (cs & TX_CS_PKT_PRT_ERR)
+ printk("PKT_PTR ");
+
+ printk(")\n");
+}
+
+static int niu_tx_error(struct niu *np, struct tx_ring_info *rp)
+{
+ u64 cs, logh, logl;
+
+ cs = nr64(TX_CS(rp->tx_channel));
+ logh = nr64(TX_RNG_ERR_LOGH(rp->tx_channel));
+ logl = nr64(TX_RNG_ERR_LOGL(rp->tx_channel));
+
+ printk(KERN_ERR PFX "%s: TX channel %u error, "
+ "cs[%lx] logh[%lx] logl[%lx]\n",
+ np->dev->name, rp->tx_channel, cs, logh, logl);
+
+ niu_log_txchan_errors(np, rp, cs);
+
+ return -ENODEV;
+}
+
+static int niu_mif_interrupt(struct niu *np)
+{
+ u64 mif_status = nr64(MIF_STATUS);
+ int phy_mdint = 0;
+
+ if (np->flags & NIU_FLAGS_XMAC) {
+ u64 xrxmac_stat = nr64_mac(XRXMAC_STATUS);
+
+ if (xrxmac_stat & XRXMAC_STATUS_PHY_MDINT)
+ phy_mdint = 1;
+ }
+
+ printk(KERN_ERR PFX "%s: MIF interrupt, "
+ "stat[%lx] phy_mdint(%d)\n",
+ np->dev->name, mif_status, phy_mdint);
+
+ return -ENODEV;
+}
+
+static void niu_xmac_interrupt(struct niu *np)
+{
+ struct niu_xmac_stats *mp = &np->mac_stats.xmac;
+ u64 val;
+
+#if 1
+ printk(KERN_ERR PFX "%s: XMAC interrupt\n", np->dev->name);
+#endif
+
+ val = nr64_mac(XTXMAC_STATUS);
+ if (val & XTXMAC_STATUS_FRAME_CNT_EXP)
+ mp->tx_frames += TXMAC_FRM_CNT_COUNT;
+ if (val & XTXMAC_STATUS_BYTE_CNT_EXP)
+ mp->tx_bytes += TXMAC_BYTE_CNT_COUNT;
+ if (val & XTXMAC_STATUS_TXFIFO_XFR_ERR)
+ mp->tx_fifo_errors++;
+ if (val & XTXMAC_STATUS_TXMAC_OFLOW)
+ mp->tx_overflow_errors++;
+ if (val & XTXMAC_STATUS_MAX_PSIZE_ERR)
+ mp->tx_max_pkt_size_errors++;
+ if (val & XTXMAC_STATUS_TXMAC_UFLOW)
+ mp->tx_underflow_errors++;
+
+ val = nr64_mac(XRXMAC_STATUS);
+ if (val & XRXMAC_STATUS_LCL_FLT_STATUS)
+ mp->rx_local_faults++;
+ if (val & XRXMAC_STATUS_RFLT_DET)
+ mp->rx_remote_faults++;
+ if (val & XRXMAC_STATUS_LFLT_CNT_EXP)
+ mp->rx_link_faults += LINK_FAULT_CNT_COUNT;
+ if (val & XRXMAC_STATUS_ALIGNERR_CNT_EXP)
+ mp->rx_align_errors += RXMAC_ALIGN_ERR_CNT_COUNT;
+ if (val & XRXMAC_STATUS_RXFRAG_CNT_EXP)
+ mp->rx_frags += RXMAC_FRAG_CNT_COUNT;
+ if (val & XRXMAC_STATUS_RXMULTF_CNT_EXP)
+ mp->rx_mcasts += RXMAC_MC_FRM_CNT_COUNT;
+	if (val & XRXMAC_STATUS_RXBCAST_CNT_EXP)
+		mp->rx_bcasts += RXMAC_BC_FRM_CNT_COUNT;
+ if (val & XRXMAC_STATUS_RXHIST1_CNT_EXP)
+ mp->rx_hist_cnt1 += RXMAC_HIST_CNT1_COUNT;
+ if (val & XRXMAC_STATUS_RXHIST2_CNT_EXP)
+ mp->rx_hist_cnt2 += RXMAC_HIST_CNT2_COUNT;
+ if (val & XRXMAC_STATUS_RXHIST3_CNT_EXP)
+ mp->rx_hist_cnt3 += RXMAC_HIST_CNT3_COUNT;
+ if (val & XRXMAC_STATUS_RXHIST4_CNT_EXP)
+ mp->rx_hist_cnt4 += RXMAC_HIST_CNT4_COUNT;
+ if (val & XRXMAC_STATUS_RXHIST5_CNT_EXP)
+ mp->rx_hist_cnt5 += RXMAC_HIST_CNT5_COUNT;
+ if (val & XRXMAC_STATUS_RXHIST6_CNT_EXP)
+ mp->rx_hist_cnt6 += RXMAC_HIST_CNT6_COUNT;
+ if (val & XRXMAC_STATUS_RXHIST7_CNT_EXP)
+ mp->rx_hist_cnt7 += RXMAC_HIST_CNT7_COUNT;
+ if (val & XRXMAC_STAT_MSK_RXOCTET_CNT_EXP)
+ mp->rx_octets += RXMAC_BT_CNT_COUNT;
+ if (val & XRXMAC_STATUS_CVIOLERR_CNT_EXP)
+ mp->rx_code_violations += RXMAC_CD_VIO_CNT_COUNT;
+ if (val & XRXMAC_STATUS_LENERR_CNT_EXP)
+ mp->rx_len_errors += RXMAC_MPSZER_CNT_COUNT;
+ if (val & XRXMAC_STATUS_CRCERR_CNT_EXP)
+ mp->rx_crc_errors += RXMAC_CRC_ER_CNT_COUNT;
+ if (val & XRXMAC_STATUS_RXUFLOW)
+ mp->rx_underflows++;
+ if (val & XRXMAC_STATUS_RXOFLOW)
+ mp->rx_overflows++;
+
+ val = nr64_mac(XMAC_FC_STAT);
+ if (val & XMAC_FC_STAT_TX_MAC_NPAUSE)
+ mp->pause_off_state++;
+ if (val & XMAC_FC_STAT_TX_MAC_PAUSE)
+ mp->pause_on_state++;
+ if (val & XMAC_FC_STAT_RX_MAC_RPAUSE)
+ mp->pause_received++;
+}
+
+static void niu_bmac_interrupt(struct niu *np)
+{
+ struct niu_bmac_stats *mp = &np->mac_stats.bmac;
+ u64 val;
+
+#if 1
+ printk(KERN_ERR PFX "%s: BMAC interrupt\n", np->dev->name);
+#endif
+
+ val = nr64_mac(BTXMAC_STATUS);
+ if (val & BTXMAC_STATUS_UNDERRUN)
+ mp->tx_underflow_errors++;
+ if (val & BTXMAC_STATUS_MAX_PKT_ERR)
+ mp->tx_max_pkt_size_errors++;
+ if (val & BTXMAC_STATUS_BYTE_CNT_EXP)
+ mp->tx_bytes += BTXMAC_BYTE_CNT_COUNT;
+ if (val & BTXMAC_STATUS_FRAME_CNT_EXP)
+ mp->tx_frames += BTXMAC_FRM_CNT_COUNT;
+
+ val = nr64_mac(BRXMAC_STATUS);
+ if (val & BRXMAC_STATUS_OVERFLOW)
+ mp->rx_overflows++;
+ if (val & BRXMAC_STATUS_FRAME_CNT_EXP)
+ mp->rx_frames += BRXMAC_FRAME_CNT_COUNT;
+ if (val & BRXMAC_STATUS_ALIGN_ERR_EXP)
+ mp->rx_align_errors += BRXMAC_ALIGN_ERR_CNT_COUNT;
+ if (val & BRXMAC_STATUS_CRC_ERR_EXP)
+ mp->rx_crc_errors += BRXMAC_ALIGN_ERR_CNT_COUNT;
+ if (val & BRXMAC_STATUS_LEN_ERR_EXP)
+ mp->rx_len_errors += BRXMAC_CODE_VIOL_ERR_CNT_COUNT;
+
+ val = nr64_mac(BMAC_CTRL_STATUS);
+ if (val & BMAC_CTRL_STATUS_NOPAUSE)
+ mp->pause_off_state++;
+ if (val & BMAC_CTRL_STATUS_PAUSE)
+ mp->pause_on_state++;
+ if (val & BMAC_CTRL_STATUS_PAUSE_RECV)
+ mp->pause_received++;
+}
+
+static int niu_mac_interrupt(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_xmac_interrupt(np);
+ else
+ niu_bmac_interrupt(np);
+
+ return 0;
+}
+
+static void niu_log_device_error(struct niu *np, u64 stat)
+{
+ printk(KERN_ERR PFX "%s: Core device errors ( ",
+ np->dev->name);
+
+ if (stat & SYS_ERR_MASK_META2)
+ printk("META2 ");
+ if (stat & SYS_ERR_MASK_META1)
+ printk("META1 ");
+ if (stat & SYS_ERR_MASK_PEU)
+ printk("PEU ");
+ if (stat & SYS_ERR_MASK_TXC)
+ printk("TXC ");
+ if (stat & SYS_ERR_MASK_RDMC)
+ printk("RDMC ");
+ if (stat & SYS_ERR_MASK_TDMC)
+ printk("TDMC ");
+ if (stat & SYS_ERR_MASK_ZCP)
+ printk("ZCP ");
+ if (stat & SYS_ERR_MASK_FFLP)
+ printk("FFLP ");
+ if (stat & SYS_ERR_MASK_IPP)
+ printk("IPP ");
+ if (stat & SYS_ERR_MASK_MAC)
+ printk("MAC ");
+ if (stat & SYS_ERR_MASK_SMX)
+ printk("SMX ");
+
+ printk(")\n");
+}
+
+static int niu_device_error(struct niu *np)
+{
+ u64 stat = nr64(SYS_ERR_STAT);
+
+ printk(KERN_ERR PFX "%s: Core device error, stat[%lx]\n",
+ np->dev->name, stat);
+
+ niu_log_device_error(np, stat);
+
+ return -ENODEV;
+}
+
+static int niu_slowpath_interrupt(struct niu *np, struct niu_ldg *lp)
+{
+ u64 v0 = lp->v0;
+ u64 v1 = lp->v1;
+ u64 v2 = lp->v2;
+ int i, err = 0;
+
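+	/* LDSV1 carries the error events: RX channel errors in the
+	 * low 32 bits, TX channel errors above them.
+	 */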
+	if (v1 & 0x00000000ffffffffULL) {
+ u32 rx_vec = (v1 & 0xffffffff);
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ if (rx_vec & (1 << rp->rx_channel)) {
+ int r = niu_rx_error(np, rp);
+ if (r)
+ err = r;
+ }
+ }
+ }
+	if (v1 & 0x7fffffff00000000ULL) {
+ u32 tx_vec = (v1 >> 32) & 0x7fffffff;
+
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ if (tx_vec & (1 << rp->tx_channel)) {
+ int r = niu_tx_error(np, rp);
+ if (r)
+ err = r;
+ }
+ }
+ }
+	if ((v0 | v1) & 0x8000000000000000ULL) {
+ int r = niu_mif_interrupt(np);
+ if (r)
+ err = r;
+ }
+ if (v2) {
+ if (v2 & 0x01ef) {
+ int r = niu_mac_interrupt(np);
+ if (r)
+ err = r;
+ }
+ if (v2 & 0x0210) {
+ int r = niu_device_error(np);
+ if (r)
+ err = r;
+ }
+ }
+
+ if (err)
+ niu_enable_interrupts(np, 0);
+
+	return err;
+}
+
+static void niu_rxchan_intr(struct niu *np, struct rx_ring_info *rp,
+ int ldn)
+{
+ struct rxdma_mailbox *mbox = rp->mbox;
+ u64 stat_write, stat = le64_to_cpup(&mbox->rx_dma_ctl_stat);
+
+ stat_write = (RX_DMA_CTL_STAT_RCRTHRES |
+ RX_DMA_CTL_STAT_RCRTO);
+ nw64(stat_write, RX_DMA_CTL_STAT(rp->rx_channel));
+
+ niudbg(INTERRUPT, "%s: rxchan_intr stat[%lx]\n",
+ np->dev->name, stat);
+}
+
+static void niu_txchan_intr(struct niu *np, struct tx_ring_info *rp,
+ int ldn)
+{
+ rp->tx_cs = nr64(TX_CS(rp->tx_channel));
+ rp->tx_hdl = nr64(TX_RING_HDL(rp->tx_channel));
+
+ niudbg(INTERRUPT, "%s: txchan_intr cs/hdl[%lx:%lx]\n",
+ np->dev->name,
+ rp->tx_cs, rp->tx_hdl);
+}
+
+static void __niu_fastpath_interrupt(struct niu *np, int ldg, u64 v0)
+{
+ struct niu_parent *parent = np->parent;
+ u32 rx_vec, tx_vec;
+ int i;
+
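+	/* As in niu_poll_core(), the upper half of LDSV0 holds TX DMA
+	 * events and the lower half RX DMA events.
+	 */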
+ tx_vec = (v0 >> 32);
+ rx_vec = (v0 & 0xffffffff);
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+ int ldn = LDN_RXDMA(rp->rx_channel);
+
+ if (parent->ldg_map[ldn] != ldg)
+ continue;
+
+ nw64(LD_IM0_MASK, LD_IM0(ldn));
+ if (rx_vec & (1 << rp->rx_channel))
+ niu_rxchan_intr(np, rp, ldn);
+ }
+
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+ int ldn = LDN_TXDMA(rp->tx_channel);
+
+ if (parent->ldg_map[ldn] != ldg)
+ continue;
+
+ nw64(LD_IM0_MASK, LD_IM0(ldn));
+ if (tx_vec & (1 << rp->tx_channel))
+ niu_txchan_intr(np, rp, ldn);
+ }
+}
+
+static void niu_schedule_napi(struct niu *np, struct niu_ldg *lp,
+ u64 v0, u64 v1, u64 v2)
+{
+ if (likely(netif_rx_schedule_prep(np->dev, &lp->napi))) {
+ lp->v0 = v0;
+ lp->v1 = v1;
+ lp->v2 = v2;
+ __niu_fastpath_interrupt(np, lp->ldg_num, v0);
+ __netif_rx_schedule(np->dev, &lp->napi);
+ }
+}
+
+static irqreturn_t niu_interrupt(int irq, void *dev_id)
+{
+ struct niu_ldg *lp = dev_id;
+ struct niu *np = lp->np;
+ int ldg = lp->ldg_num;
+ unsigned long flags;
+ u64 v0, v1, v2;
+
+ if (niu_debug & NIU_DEBUG_INTERRUPT)
+ printk(KERN_ERR PFX "niu_interrupt() ldg[%p](%d) ",
+ lp, ldg);
+
+ spin_lock_irqsave(&np->lock, flags);
+
+ v0 = nr64(LDSV0(ldg));
+ v1 = nr64(LDSV1(ldg));
+ v2 = nr64(LDSV2(ldg));
+
+ if (niu_debug & NIU_DEBUG_INTERRUPT)
+ printk("v0[%lx] v1[%lx] v2[%lx]\n", v0, v1, v2);
+
+ if (unlikely(!v0 && !v1 && !v2)) {
+ spin_unlock_irqrestore(&np->lock, flags);
+ return IRQ_NONE;
+ }
+
+ if (unlikely((v0 & ((u64)1 << LDN_MIF)) || v1 || v2)) {
+ int err = niu_slowpath_interrupt(np, lp);
+ if (err)
+ goto out;
+ }
+ if (likely(v0 & ~((u64)1 << LDN_MIF)))
+ niu_schedule_napi(np, lp, v0, v1, v2);
+ else
+ niu_ldg_rearm(np, lp, 1);
+out:
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ return IRQ_HANDLED;
+}
+
+static void niu_free_rx_ring_info(struct niu *np, struct rx_ring_info *rp)
+{
+ if (rp->mbox) {
+ np->ops->free_coherent(np->device,
+ sizeof(struct rxdma_mailbox),
+ rp->mbox, rp->mbox_dma);
+ rp->mbox = NULL;
+ }
+ if (rp->rcr) {
+ np->ops->free_coherent(np->device,
+ MAX_RCR_RING_SIZE * sizeof(__le64),
+ rp->rcr, rp->rcr_dma);
+ rp->rcr = NULL;
+ rp->rcr_table_size = 0;
+ rp->rcr_index = 0;
+ }
+ if (rp->rbr) {
+ niu_rbr_free(np, rp);
+
+ np->ops->free_coherent(np->device,
+ MAX_RBR_RING_SIZE * sizeof(__le32),
+ rp->rbr, rp->rbr_dma);
+ rp->rbr = NULL;
+ rp->rbr_table_size = 0;
+ rp->rbr_index = 0;
+ }
+ kfree(rp->rxhash);
+ rp->rxhash = NULL;
+}
+
+static void niu_free_tx_ring_info(struct niu *np, struct tx_ring_info *rp)
+{
+ if (rp->mbox) {
+ np->ops->free_coherent(np->device,
+ sizeof(struct txdma_mailbox),
+ rp->mbox, rp->mbox_dma);
+ rp->mbox = NULL;
+ }
+ if (rp->descr) {
+ int i;
+
+ for (i = 0; i < MAX_TX_RING_SIZE; i++) {
+ if (rp->tx_buffs[i].skb)
+ (void) release_tx_packet(np, rp, i);
+ }
+
+ np->ops->free_coherent(np->device,
+ MAX_TX_RING_SIZE * sizeof(__le64),
+ rp->descr, rp->descr_dma);
+ rp->descr = NULL;
+ rp->pending = 0;
+ rp->prod = 0;
+ rp->cons = 0;
+ rp->wrap_bit = 0;
+ }
+}
+
+static void niu_free_channels(struct niu *np)
+{
+ int i;
+
+ if (np->rx_rings) {
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ niu_free_rx_ring_info(np, rp);
+ }
+ kfree(np->rx_rings);
+ np->rx_rings = NULL;
+ np->num_rx_rings = 0;
+ }
+
+ if (np->tx_rings) {
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ niu_free_tx_ring_info(np, rp);
+ }
+ kfree(np->tx_rings);
+ np->tx_rings = NULL;
+ np->num_tx_rings = 0;
+ }
+}
+
+static int niu_alloc_rx_ring_info(struct niu *np,
+ struct rx_ring_info *rp)
+{
+ BUILD_BUG_ON(sizeof(struct rxdma_mailbox) != 64);
+
+ rp->rxhash = kzalloc(MAX_RBR_RING_SIZE * sizeof(struct page *),
+ GFP_KERNEL);
+ if (!rp->rxhash)
+ return -ENOMEM;
+
+ rp->mbox = np->ops->alloc_coherent(np->device,
+ sizeof(struct rxdma_mailbox),
+ &rp->mbox_dma, GFP_KERNEL);
+ if (!rp->mbox)
+ return -ENOMEM;
+ if ((unsigned long)rp->mbox & (64UL - 1)) {
+ printk(KERN_ERR PFX "%s: Coherent alloc gives misaligned "
+ "RXDMA mailbox %p\n", np->dev->name, rp->mbox);
+ return -EINVAL;
+ }
+
+ rp->rcr = np->ops->alloc_coherent(np->device,
+ MAX_RCR_RING_SIZE * sizeof(__le64),
+ &rp->rcr_dma, GFP_KERNEL);
+ if (!rp->rcr)
+ return -ENOMEM;
+ if ((unsigned long)rp->rcr & (64UL - 1)) {
+ printk(KERN_ERR PFX "%s: Coherent alloc gives misaligned "
+ "RXDMA RCR table %p\n", np->dev->name, rp->rcr);
+ return -EINVAL;
+ }
+ rp->rcr_table_size = MAX_RCR_RING_SIZE;
+ rp->rcr_index = 0;
+
+ rp->rbr = np->ops->alloc_coherent(np->device,
+ MAX_RBR_RING_SIZE * sizeof(__le32),
+ &rp->rbr_dma, GFP_KERNEL);
+ if (!rp->rbr)
+ return -ENOMEM;
+ if ((unsigned long)rp->rbr & (64UL - 1)) {
+ printk(KERN_ERR PFX "%s: Coherent alloc gives misaligned "
+ "RXDMA RBR table %p\n", np->dev->name, rp->rbr);
+ return -EINVAL;
+ }
+ rp->rbr_table_size = MAX_RBR_RING_SIZE;
+ rp->rbr_index = 0;
+ rp->rbr_pending = 0;
+
+ return 0;
+}
+
+static int niu_alloc_tx_ring_info(struct niu *np,
+ struct tx_ring_info *rp)
+{
+ BUILD_BUG_ON(sizeof(struct txdma_mailbox) != 64);
+
+ rp->mbox = np->ops->alloc_coherent(np->device,
+ sizeof(struct txdma_mailbox),
+ &rp->mbox_dma, GFP_KERNEL);
+ if (!rp->mbox)
+ return -ENOMEM;
+ if ((unsigned long)rp->mbox & (64UL - 1)) {
+ printk(KERN_ERR PFX "%s: Coherent alloc gives misaligned "
+ "TXDMA mailbox %p\n", np->dev->name, rp->mbox);
+ return -EINVAL;
+ }
+
+ rp->descr = np->ops->alloc_coherent(np->device,
+ MAX_TX_RING_SIZE * sizeof(__le64),
+ &rp->descr_dma, GFP_KERNEL);
+ if (!rp->descr)
+ return -ENOMEM;
+ if ((unsigned long)rp->descr & (64UL - 1)) {
+ printk(KERN_ERR PFX "%s: Coherent alloc gives misaligned "
+ "TXDMA descr table %p\n", np->dev->name, rp->descr);
+ return -EINVAL;
+ }
+
+ rp->pending = MAX_TX_RING_SIZE;
+ rp->prod = 0;
+ rp->cons = 0;
+ rp->wrap_bit = 0;
+ rp->max_burst = 1530;
+
+ return 0;
+}
+
+static int niu_alloc_channels(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ int first_rx_channel, first_tx_channel;
+ int i, port, err;
+
+ port = np->port;
+ first_rx_channel = first_tx_channel = 0;
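+	/* Channels are assigned to ports in order, so skip past the
+	 * channels owned by lower numbered ports.
+	 */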
+ for (i = 0; i < port; i++) {
+ first_rx_channel += parent->rxchan_per_port[i];
+ first_tx_channel += parent->txchan_per_port[i];
+ }
+
+ np->num_rx_rings = parent->rxchan_per_port[port];
+ np->num_tx_rings = parent->txchan_per_port[port];
+
+ np->rx_rings = kzalloc(np->num_rx_rings * sizeof(struct rx_ring_info),
+ GFP_KERNEL);
+ err = -ENOMEM;
+ if (!np->rx_rings)
+ goto out_err;
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ rp->np = np;
+ rp->rx_channel = first_rx_channel + i;
+
+ err = niu_alloc_rx_ring_info(np, rp);
+ if (err)
+ goto out_err;
+
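+		/* Use the whole page as a single RBR block when
+		 * PAGE_SIZE is one of the sizes the chip supports
+		 * directly, else pack several blocks of the largest
+		 * supported size that divides PAGE_SIZE into each page.
+		 */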
+ switch (PAGE_SIZE) {
+ case 4 * 1024:
+ case 8 * 1024:
+ case 16 * 1024:
+ case 32 * 1024:
+ rp->rbr_block_size = PAGE_SIZE;
+ rp->rbr_blocks_per_page = 1;
+ break;
+
+ default: {
+			u16 bs;
+ if (PAGE_SIZE % (32 * 1024) == 0)
+ bs = 32 * 1024;
+ else if (PAGE_SIZE % (16 * 1024) == 0)
+ bs = 16 * 1024;
+ else if (PAGE_SIZE % (8 * 1024) == 0)
+ bs = 8 * 1024;
+ else if (PAGE_SIZE % (4 * 1024) == 0)
+ bs = 4 * 1024;
+ else
+ BUG();
+ rp->rbr_block_size = bs;
+ rp->rbr_blocks_per_page = PAGE_SIZE / bs;
+ }
+ }
+
+ rp->rbr_sizes[0] = 256;
+ rp->rbr_sizes[1] = 1024;
+ rp->rbr_sizes[2] = 2048;
+ rp->rbr_sizes[3] = rp->rbr_block_size;
+
+ /* XXX better defaults, configurable, etc... XXX */
+ rp->nonsyn_window = 64;
+ rp->nonsyn_threshold = rp->rcr_table_size - 64;
+ rp->syn_window = 64;
+ rp->syn_threshold = rp->rcr_table_size - 64;
+ rp->rcr_pkt_threshold = 8;
+ rp->rcr_timeout = 16;
+ rp->rbr_kick_thresh = RBR_REFILL_MIN;
+ if (rp->rbr_kick_thresh < rp->rbr_blocks_per_page)
+ rp->rbr_kick_thresh = rp->rbr_blocks_per_page;
+
+ err = niu_rbr_fill(np, rp, GFP_KERNEL);
+ if (err)
+ return err;
+ }
+
+ np->tx_rings = kzalloc(np->num_tx_rings * sizeof(struct tx_ring_info),
+ GFP_KERNEL);
+ err = -ENOMEM;
+ if (!np->tx_rings)
+ goto out_err;
+
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ rp->np = np;
+ rp->tx_channel = first_tx_channel + i;
+
+ err = niu_alloc_tx_ring_info(np, rp);
+ if (err)
+ goto out_err;
+ }
+
+ return 0;
+
+out_err:
+ niu_free_channels(np);
+ return err;
+}
+
+static int niu_tx_cs_sng_poll(struct niu *np, int channel)
+{
+ int limit = 1000;
+
+ while (--limit > 0) {
+ u64 val = nr64(TX_CS(channel));
+ if (val & TX_CS_SNG_STATE)
+ return 0;
+ }
+ return -ENODEV;
+}
+
+static int niu_tx_channel_stop(struct niu *np, int channel)
+{
+ u64 val = nr64(TX_CS(channel));
+
+ val |= TX_CS_STOP_N_GO;
+ nw64(val, TX_CS(channel));
+
+ return niu_tx_cs_sng_poll(np, channel);
+}
+
+static int niu_tx_cs_reset_poll(struct niu *np, int channel)
+{
+ int limit = 1000;
+
+ while (--limit > 0) {
+ u64 val = nr64(TX_CS(channel));
+ if (!(val & TX_CS_RST))
+ return 0;
+ }
+ return -ENODEV;
+}
+
+static int niu_tx_channel_reset(struct niu *np, int channel)
+{
+ u64 val = nr64(TX_CS(channel));
+ int err;
+
+ val |= TX_CS_RST;
+ nw64(val, TX_CS(channel));
+
+ err = niu_tx_cs_reset_poll(np, channel);
+ if (!err)
+ nw64(0, TX_RING_KICK(channel));
+
+ return err;
+}
+
+static int niu_tx_channel_lpage_init(struct niu *np, int channel)
+{
+ u64 val;
+
+ nw64(0, TX_LOG_MASK1(channel));
+ nw64(0, TX_LOG_VAL1(channel));
+ nw64(0, TX_LOG_MASK2(channel));
+ nw64(0, TX_LOG_VAL2(channel));
+ nw64(0, TX_LOG_PAGE_RELO1(channel));
+ nw64(0, TX_LOG_PAGE_RELO2(channel));
+ nw64(0, TX_LOG_PAGE_HDL(channel));
+
+ val = (u64)np->port << TX_LOG_PAGE_VLD_FUNC_SHIFT;
+ val |= (TX_LOG_PAGE_VLD_PAGE0 | TX_LOG_PAGE_VLD_PAGE1);
+ nw64(val, TX_LOG_PAGE_VLD(channel));
+
+ /* XXX TXDMA 32bit mode? XXX */
+
+ return 0;
+}
+
+static void niu_txc_enable_port(struct niu *np, int on)
+{
+ unsigned long flags;
+ u64 val, mask;
+
+ niu_lock_parent(np, flags);
+ val = nr64(TXC_CONTROL);
+ mask = (u64)1 << np->port;
+ if (on) {
+ val |= TXC_CONTROL_ENABLE | mask;
+ } else {
+ val &= ~mask;
+ if ((val & ~TXC_CONTROL_ENABLE) == 0)
+ val &= ~TXC_CONTROL_ENABLE;
+ }
+ nw64(val, TXC_CONTROL);
+ niu_unlock_parent(np, flags);
+}
+
+static void niu_txc_set_imask(struct niu *np, u64 imask)
+{
+ unsigned long flags;
+ u64 val;
+
+ niu_lock_parent(np, flags);
+ val = nr64(TXC_INT_MASK);
+ val &= ~TXC_INT_MASK_VAL(np->port);
+	val |= (imask << TXC_INT_MASK_VAL_SHIFT(np->port));
+	nw64(val, TXC_INT_MASK);
+ niu_unlock_parent(np, flags);
+}
+
+static void niu_txc_port_dma_enable(struct niu *np, int on)
+{
+ u64 val = 0;
+
+ if (on) {
+ int i;
+
+ for (i = 0; i < np->num_tx_rings; i++)
+ val |= (1 << np->tx_rings[i].tx_channel);
+ }
+ nw64(val, TXC_PORT_DMA(np->port));
+}
+
+static int niu_init_one_tx_channel(struct niu *np, struct tx_ring_info *rp)
+{
+ int err, channel = rp->tx_channel;
+ u64 val, ring_len;
+
+ err = niu_tx_channel_stop(np, channel);
+ if (err)
+ return err;
+
+ err = niu_tx_channel_reset(np, channel);
+ if (err)
+ return err;
+
+ err = niu_tx_channel_lpage_init(np, channel);
+ if (err)
+ return err;
+
+ nw64(rp->max_burst, TXC_DMA_MAX(channel));
+ nw64(0, TX_ENT_MSK(channel));
+
+ if (rp->descr_dma & ~(TX_RNG_CFIG_STADDR_BASE |
+ TX_RNG_CFIG_STADDR)) {
+ printk(KERN_ERR PFX "%s: TX ring channel %d "
+ "DMA addr (%llx) is not aligned.\n",
+ np->dev->name, channel,
+ (unsigned long long) rp->descr_dma);
+ return -EINVAL;
+ }
+
+	/* The length field in TX_RNG_CFIG is measured in 64-byte
+	 * blocks.  Each TX descriptor is 8 bytes, so a ring of
+	 * rp->pending descriptors occupies (rp->pending * 8) / 64,
+	 * i.e. rp->pending / 8, of those blocks.
+	 */
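+	/* For example, a 256 entry ring is 2048 bytes, which is
+	 * 32 blocks, and 256 / 8 == 32.
+	 */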
+ ring_len = (rp->pending / 8);
+
+ val = ((ring_len << TX_RNG_CFIG_LEN_SHIFT) |
+ rp->descr_dma);
+ nw64(val, TX_RNG_CFIG(channel));
+
+ if (((rp->mbox_dma >> 32) & ~TXDMA_MBH_MBADDR) ||
+ ((u32)rp->mbox_dma & ~TXDMA_MBL_MBADDR)) {
+ printk(KERN_ERR PFX "%s: TX ring channel %d "
+ "MBOX addr (%llx) is has illegal bits.\n",
+ np->dev->name, channel,
+ (unsigned long long) rp->mbox_dma);
+ return -EINVAL;
+ }
+ nw64(rp->mbox_dma >> 32, TXDMA_MBH(channel));
+ nw64(rp->mbox_dma & TXDMA_MBL_MBADDR, TXDMA_MBL(channel));
+
+ nw64(0, TX_CS(channel));
+
+ return 0;
+}
+
+static void niu_init_rdc_groups(struct niu *np)
+{
+ struct niu_rdc_tables *tp = &np->parent->rdc_group_cfg[np->port];
+ int i, first_table_num = tp->first_table_num;
+
+ for (i = 0; i < tp->num_tables; i++) {
+ struct rdc_table *tbl = &tp->tables[i];
+ int this_table = first_table_num + i;
+ int slot;
+
+ for (slot = 0; slot < NIU_RDC_TABLE_SLOTS; slot++)
+ nw64(tbl->rxdma_channel[slot],
+ RDC_TBL(this_table, slot));
+ }
+
+ nw64(np->parent->rdc_default[np->port], DEF_RDC(np->port));
+}
+
+static void niu_init_drr_weight(struct niu *np)
+{
+ int type = phy_decode(np->parent->port_phy, np->port);
+ u64 val;
+
+ switch (type) {
+ case PORT_TYPE_10G:
+ val = PT_DRR_WEIGHT_DEFAULT_10G;
+ break;
+
+ case PORT_TYPE_1G:
+ default:
+ val = PT_DRR_WEIGHT_DEFAULT_1G;
+ break;
+ }
+ nw64(val, PT_DRR_WT(np->port));
+}
+
+static int niu_init_hostinfo(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ struct niu_rdc_tables *tp = &parent->rdc_group_cfg[np->port];
+ int i, err, num_alt = niu_num_alt_addr(np);
+ int first_rdc_table = tp->first_table_num;
+
+ err = niu_set_primary_mac_rdc_table(np, first_rdc_table, 1);
+ if (err)
+ return err;
+
+ err = niu_set_multicast_mac_rdc_table(np, first_rdc_table, 1);
+ if (err)
+ return err;
+
+ for (i = 0; i < num_alt; i++) {
+ err = niu_set_alt_mac_rdc_table(np, i, first_rdc_table, 1);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int niu_rx_channel_reset(struct niu *np, int channel)
+{
+ return niu_set_and_wait_clear(np, RXDMA_CFIG1(channel),
+ RXDMA_CFIG1_RST, 1000, 10,
+ "RXDMA_CFIG1");
+}
+
+static int niu_rx_channel_lpage_init(struct niu *np, int channel)
+{
+ u64 val;
+
+ nw64(0, RX_LOG_MASK1(channel));
+ nw64(0, RX_LOG_VAL1(channel));
+ nw64(0, RX_LOG_MASK2(channel));
+ nw64(0, RX_LOG_VAL2(channel));
+ nw64(0, RX_LOG_PAGE_RELO1(channel));
+ nw64(0, RX_LOG_PAGE_RELO2(channel));
+ nw64(0, RX_LOG_PAGE_HDL(channel));
+
+ val = (u64)np->port << RX_LOG_PAGE_VLD_FUNC_SHIFT;
+ val |= (RX_LOG_PAGE_VLD_PAGE0 | RX_LOG_PAGE_VLD_PAGE1);
+ nw64(val, RX_LOG_PAGE_VLD(channel));
+
+ return 0;
+}
+
+static void niu_rx_channel_wred_init(struct niu *np, struct rx_ring_info *rp)
+{
+ u64 val;
+
+ val = (((u64)rp->nonsyn_window << RDC_RED_PARA_WIN_SHIFT) |
+ ((u64)rp->nonsyn_threshold << RDC_RED_PARA_THRE_SHIFT) |
+ ((u64)rp->syn_window << RDC_RED_PARA_WIN_SYN_SHIFT) |
+ ((u64)rp->syn_threshold << RDC_RED_PARA_THRE_SYN_SHIFT));
+ nw64(val, RDC_RED_PARA(rp->rx_channel));
+}
+
+static int niu_compute_rbr_cfig_b(struct rx_ring_info *rp, u64 *ret)
+{
+ u64 val = 0;
+
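+	/* Encode the RBR block size plus the three smaller buffer
+	 * sizes into RBR_CFIG_B, marking each buffer size field
+	 * valid as we go.
+	 */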
+ switch (rp->rbr_block_size) {
+ case 4 * 1024:
+ val |= (RBR_BLKSIZE_4K << RBR_CFIG_B_BLKSIZE_SHIFT);
+ break;
+ case 8 * 1024:
+ val |= (RBR_BLKSIZE_8K << RBR_CFIG_B_BLKSIZE_SHIFT);
+ break;
+ case 16 * 1024:
+ val |= (RBR_BLKSIZE_16K << RBR_CFIG_B_BLKSIZE_SHIFT);
+ break;
+ case 32 * 1024:
+ val |= (RBR_BLKSIZE_32K << RBR_CFIG_B_BLKSIZE_SHIFT);
+ break;
+ default:
+ return -EINVAL;
+ }
+ val |= RBR_CFIG_B_VLD2;
+ switch (rp->rbr_sizes[2]) {
+ case 2 * 1024:
+ val |= (RBR_BUFSZ2_2K << RBR_CFIG_B_BUFSZ2_SHIFT);
+ break;
+ case 4 * 1024:
+ val |= (RBR_BUFSZ2_4K << RBR_CFIG_B_BUFSZ2_SHIFT);
+ break;
+ case 8 * 1024:
+ val |= (RBR_BUFSZ2_8K << RBR_CFIG_B_BUFSZ2_SHIFT);
+ break;
+ case 16 * 1024:
+ val |= (RBR_BUFSZ2_16K << RBR_CFIG_B_BUFSZ2_SHIFT);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ val |= RBR_CFIG_B_VLD1;
+ switch (rp->rbr_sizes[1]) {
+ case 1 * 1024:
+ val |= (RBR_BUFSZ1_1K << RBR_CFIG_B_BUFSZ1_SHIFT);
+ break;
+ case 2 * 1024:
+ val |= (RBR_BUFSZ1_2K << RBR_CFIG_B_BUFSZ1_SHIFT);
+ break;
+ case 4 * 1024:
+ val |= (RBR_BUFSZ1_4K << RBR_CFIG_B_BUFSZ1_SHIFT);
+ break;
+ case 8 * 1024:
+ val |= (RBR_BUFSZ1_8K << RBR_CFIG_B_BUFSZ1_SHIFT);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+ val |= RBR_CFIG_B_VLD0;
+ switch (rp->rbr_sizes[0]) {
+ case 256:
+ val |= (RBR_BUFSZ0_256 << RBR_CFIG_B_BUFSZ0_SHIFT);
+ break;
+ case 512:
+ val |= (RBR_BUFSZ0_512 << RBR_CFIG_B_BUFSZ0_SHIFT);
+ break;
+ case 1 * 1024:
+ val |= (RBR_BUFSZ0_1K << RBR_CFIG_B_BUFSZ0_SHIFT);
+ break;
+ case 2 * 1024:
+ val |= (RBR_BUFSZ0_2K << RBR_CFIG_B_BUFSZ0_SHIFT);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ *ret = val;
+ return 0;
+}
+
+static int niu_enable_rx_channel(struct niu *np, int channel, int on)
+{
+ u64 val = nr64(RXDMA_CFIG1(channel));
+ int limit;
+
+ if (on)
+ val |= RXDMA_CFIG1_EN;
+ else
+ val &= ~RXDMA_CFIG1_EN;
+ nw64(val, RXDMA_CFIG1(channel));
+
+ limit = 1000;
+ while (--limit > 0) {
+ if (nr64(RXDMA_CFIG1(channel)) & RXDMA_CFIG1_QST)
+ break;
+ udelay(10);
+ }
+ if (limit <= 0)
+ return -ENODEV;
+ return 0;
+}
+
+static int niu_init_one_rx_channel(struct niu *np, struct rx_ring_info *rp)
+{
+ int err, channel = rp->rx_channel;
+ u64 val;
+
+ err = niu_rx_channel_reset(np, channel);
+ if (err)
+ return err;
+
+ err = niu_rx_channel_lpage_init(np, channel);
+ if (err)
+ return err;
+
+ niu_rx_channel_wred_init(np, rp);
+
+ nw64(RX_DMA_ENT_MSK_RBR_EMPTY, RX_DMA_ENT_MSK(channel));
+ nw64(RX_DMA_CTL_STAT_MEX |
+ RX_DMA_CTL_STAT_RCRTHRES |
+ RX_DMA_CTL_STAT_RCRTO |
+ RX_DMA_CTL_STAT_RBR_EMPTY, RX_DMA_CTL_STAT(channel));
+ nw64(rp->mbox_dma >> 32, RXDMA_CFIG1(channel));
+ nw64((rp->mbox_dma & 0x00000000ffffffc0), RXDMA_CFIG2(channel));
+ nw64(((u64)rp->rbr_table_size << RBR_CFIG_A_LEN_SHIFT) |
+ (rp->rbr_dma & (RBR_CFIG_A_STADDR_BASE | RBR_CFIG_A_STADDR)),
+ RBR_CFIG_A(channel));
+ err = niu_compute_rbr_cfig_b(rp, &val);
+ if (err)
+ return err;
+ nw64(val, RBR_CFIG_B(channel));
+ nw64(((u64)rp->rcr_table_size << RCRCFIG_A_LEN_SHIFT) |
+ (rp->rcr_dma & (RCRCFIG_A_STADDR_BASE | RCRCFIG_A_STADDR)),
+ RCRCFIG_A(channel));
+ nw64(((u64)rp->rcr_pkt_threshold << RCRCFIG_B_PTHRES_SHIFT) |
+ RCRCFIG_B_ENTOUT |
+ ((u64)rp->rcr_timeout << RCRCFIG_B_TIMEOUT_SHIFT),
+ RCRCFIG_B(channel));
+
+ err = niu_enable_rx_channel(np, channel, 1);
+ if (err)
+ return err;
+
+ nw64(rp->rbr_index, RBR_KICK(channel));
+
+ val = nr64(RX_DMA_CTL_STAT(channel));
+ val |= RX_DMA_CTL_STAT_RBR_EMPTY;
+ nw64(val, RX_DMA_CTL_STAT(channel));
+
+ return 0;
+}
+
+static int niu_init_rx_channels(struct niu *np)
+{
+ unsigned long flags;
+ u64 seed = jiffies_64;
+ int err, i;
+
+ niu_lock_parent(np, flags);
+ nw64(np->parent->rxdma_clock_divider, RX_DMA_CK_DIV);
+ nw64(RED_RAN_INIT_OPMODE | (seed & RED_RAN_INIT_VAL), RED_RAN_INIT);
+ niu_unlock_parent(np, flags);
+
+ /* XXX RXDMA 32bit mode? XXX */
+
+ niu_init_rdc_groups(np);
+ niu_init_drr_weight(np);
+
+ err = niu_init_hostinfo(np);
+ if (err)
+ return err;
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ err = niu_init_one_rx_channel(np, rp);
+ if (err)
+ return err;
+ }
+
+ return 0;
+}
+
+static int niu_set_ip_frag_rule(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ struct niu_classifier *cp = &np->clas;
+ struct niu_tcam_entry *tp;
+ int index, err;
+
+ /* XXX fix this allocation scheme XXX */
+ index = cp->tcam_index;
+ tp = &parent->tcam[index];
+
+ /* Note that the noport bit is the same in both ipv4 and
+ * ipv6 format TCAM entries.
+ */
+ memset(tp, 0, sizeof(*tp));
+ tp->key[1] = TCAM_V4KEY1_NOPORT;
+ tp->key_mask[1] = TCAM_V4KEY1_NOPORT;
+ tp->assoc_data = (TCAM_ASSOCDATA_TRES_USE_OFFSET |
+ ((u64)0 << TCAM_ASSOCDATA_OFFSET_SHIFT));
+ err = tcam_write(np, index, tp->key, tp->key_mask);
+ if (err)
+ return err;
+ err = tcam_assoc_write(np, index, tp->assoc_data);
+ if (err)
+ return err;
+
+ return 0;
+}
+
+static int niu_init_classifier_hw(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ struct niu_classifier *cp = &np->clas;
+ int i, err;
+
+ nw64(cp->h1_init, H1POLY);
+ nw64(cp->h2_init, H2POLY);
+
+ err = niu_init_hostinfo(np);
+ if (err)
+ return err;
+
+ for (i = 0; i < ENET_VLAN_TBL_NUM_ENTRIES; i++) {
+ struct niu_vlan_rdc *vp = &cp->vlan_mappings[i];
+
+ vlan_tbl_write(np, i, np->port,
+ vp->vlan_pref, vp->rdc_num);
+ }
+
+ for (i = 0; i < cp->num_alt_mac_mappings; i++) {
+ struct niu_altmac_rdc *ap = &cp->alt_mac_mappings[i];
+
+ err = niu_set_alt_mac_rdc_table(np, ap->alt_mac_num,
+ ap->rdc_num, ap->mac_pref);
+ if (err)
+ return err;
+ }
+
+ for (i = CLASS_CODE_USER_PROG1; i <= CLASS_CODE_SCTP_IPV6; i++) {
+ int index = i - CLASS_CODE_USER_PROG1;
+
+ err = niu_set_tcam_key(np, i, parent->tcam_key[index]);
+ if (err)
+ return err;
+ err = niu_set_flow_key(np, i, parent->flow_key[index]);
+ if (err)
+ return err;
+ }
+
+ err = niu_set_ip_frag_rule(np);
+ if (err)
+ return err;
+
+ tcam_enable(np, 1);
+
+ return 0;
+}
+
+static int niu_zcp_write(struct niu *np, int index, u64 *data)
+{
+ nw64(data[0], ZCP_RAM_DATA0);
+ nw64(data[1], ZCP_RAM_DATA1);
+ nw64(data[2], ZCP_RAM_DATA2);
+ nw64(data[3], ZCP_RAM_DATA3);
+ nw64(data[4], ZCP_RAM_DATA4);
+ nw64(ZCP_RAM_BE_VAL, ZCP_RAM_BE);
+ nw64((ZCP_RAM_ACC_WRITE |
+ (0 << ZCP_RAM_ACC_ZFCID_SHIFT) |
+ (ZCP_RAM_SEL_CFIFO(np->port) << ZCP_RAM_ACC_RAM_SEL_SHIFT)),
+ ZCP_RAM_ACC);
+
+ return niu_wait_bits_clear(np, ZCP_RAM_ACC, ZCP_RAM_ACC_BUSY,
+ 1000, 100);
+}
+
+static int niu_zcp_read(struct niu *np, int index, u64 *data)
+{
+ int err;
+
+ err = niu_wait_bits_clear(np, ZCP_RAM_ACC, ZCP_RAM_ACC_BUSY,
+ 1000, 100);
+ if (err) {
+ printk(KERN_ERR PFX "%s: ZCP read busy won't clear, "
+ "ZCP_RAM_ACC[%lx]\n", np->dev->name,
+ nr64(ZCP_RAM_ACC));
+ return err;
+ }
+
+ nw64((ZCP_RAM_ACC_READ |
+ (0 << ZCP_RAM_ACC_ZFCID_SHIFT) |
+ (ZCP_RAM_SEL_CFIFO(np->port) << ZCP_RAM_ACC_RAM_SEL_SHIFT)),
+ ZCP_RAM_ACC);
+
+ err = niu_wait_bits_clear(np, ZCP_RAM_ACC, ZCP_RAM_ACC_BUSY,
+ 1000, 100);
+ if (err) {
+ printk(KERN_ERR PFX "%s: ZCP read busy2 won't clear, "
+ "ZCP_RAM_ACC[%lx]\n", np->dev->name,
+ nr64(ZCP_RAM_ACC));
+ return err;
+ }
+
+ data[0] = nr64(ZCP_RAM_DATA0);
+ data[1] = nr64(ZCP_RAM_DATA1);
+ data[2] = nr64(ZCP_RAM_DATA2);
+ data[3] = nr64(ZCP_RAM_DATA3);
+ data[4] = nr64(ZCP_RAM_DATA4);
+
+ return 0;
+}
+
+static void niu_zcp_cfifo_reset(struct niu *np)
+{
+ u64 val = nr64(RESET_CFIFO);
+
+ val |= RESET_CFIFO_RST(np->port);
+ nw64(val, RESET_CFIFO);
+ udelay(10);
+
+ val &= ~RESET_CFIFO_RST(np->port);
+ nw64(val, RESET_CFIFO);
+}
+
+static int niu_init_zcp(struct niu *np)
+{
+ u64 data[5], rbuf[5];
+ int i, max, err;
+
+ if (np->parent->plat_type == PLAT_TYPE_ATLAS) {
+ if (np->port == 0 || np->port == 1)
+ max = ATLAS_P0_P1_CFIFO_ENTRIES;
+ else
+ max = ATLAS_P2_P3_CFIFO_ENTRIES;
+ } else
+ max = NIU_CFIFO_ENTRIES;
+
+ data[0] = 0;
+ data[1] = 0;
+ data[2] = 0;
+ data[3] = 0;
+ data[4] = 0;
+
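+	/* Clear out every CFIFO entry for this port. */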
+ for (i = 0; i < max; i++) {
+ err = niu_zcp_write(np, i, data);
+ if (err)
+ return err;
+ err = niu_zcp_read(np, i, rbuf);
+ if (err)
+ return err;
+ }
+
+ niu_zcp_cfifo_reset(np);
+ nw64(0, CFIFO_ECC(np->port));
+ nw64(ZCP_INT_STAT_ALL, ZCP_INT_STAT);
+ (void) nr64(ZCP_INT_STAT);
+ nw64(ZCP_INT_MASK_ALL, ZCP_INT_MASK);
+
+ return 0;
+}
+
+static void niu_ipp_write(struct niu *np, int index, u64 *data)
+{
+ u64 val = nr64_ipp(IPP_CFIG);
+
+ nw64_ipp(val | IPP_CFIG_DFIFO_PIO_W, IPP_CFIG);
+ nw64_ipp(index, IPP_DFIFO_WR_PTR);
+ nw64_ipp(data[0], IPP_DFIFO_WR0);
+ nw64_ipp(data[1], IPP_DFIFO_WR1);
+ nw64_ipp(data[2], IPP_DFIFO_WR2);
+ nw64_ipp(data[3], IPP_DFIFO_WR3);
+ nw64_ipp(data[4], IPP_DFIFO_WR4);
+ nw64_ipp(val & ~IPP_CFIG_DFIFO_PIO_W, IPP_CFIG);
+}
+
+static void niu_ipp_read(struct niu *np, int index, u64 *data)
+{
+ nw64_ipp(index, IPP_DFIFO_RD_PTR);
+ data[0] = nr64_ipp(IPP_DFIFO_RD0);
+ data[1] = nr64_ipp(IPP_DFIFO_RD1);
+ data[2] = nr64_ipp(IPP_DFIFO_RD2);
+ data[3] = nr64_ipp(IPP_DFIFO_RD3);
+ data[4] = nr64_ipp(IPP_DFIFO_RD4);
+}
+
+static int niu_ipp_reset(struct niu *np)
+{
+ return niu_set_and_wait_clear_ipp(np, IPP_CFIG, IPP_CFIG_SOFT_RST,
+ 1000, 100, "IPP_CFIG");
+}
+
+static int niu_init_ipp(struct niu *np)
+{
+ u64 data[5], rbuf[5], val;
+ int i, max, err;
+
+ if (np->parent->plat_type == PLAT_TYPE_ATLAS) {
+ if (np->port == 0 || np->port == 1)
+ max = ATLAS_P0_P1_DFIFO_ENTRIES;
+ else
+ max = ATLAS_P2_P3_DFIFO_ENTRIES;
+ } else
+ max = NIU_DFIFO_ENTRIES;
+
+ data[0] = 0;
+ data[1] = 0;
+ data[2] = 0;
+ data[3] = 0;
+ data[4] = 0;
+
+ for (i = 0; i < max; i++) {
+ niu_ipp_write(np, i, data);
+ niu_ipp_read(np, i, rbuf);
+ }
+
+ (void) nr64_ipp(IPP_INT_STAT);
+ (void) nr64_ipp(IPP_INT_STAT);
+
+ err = niu_ipp_reset(np);
+ if (err)
+ return err;
+
+ (void) nr64_ipp(IPP_PKT_DIS);
+ (void) nr64_ipp(IPP_BAD_CS_CNT);
+ (void) nr64_ipp(IPP_ECC);
+
+ (void) nr64_ipp(IPP_INT_STAT);
+
+ nw64_ipp(~IPP_MSK_ALL, IPP_MSK);
+
+ val = nr64_ipp(IPP_CFIG);
+ val &= ~IPP_CFIG_IP_MAX_PKT;
+ val |= (IPP_CFIG_IPP_ENABLE |
+ IPP_CFIG_DFIFO_ECC_EN |
+ IPP_CFIG_DROP_BAD_CRC |
+ IPP_CFIG_CKSUM_EN);
+ nw64_ipp(val, IPP_CFIG);
+
+ val = nr64_ipp(IPP_CFIG);
+ val &= ~IPP_CFIG_IP_MAX_PKT;
+ val |= (0x1ffff << IPP_CFIG_IP_MAX_PKT_SHIFT);
+ nw64_ipp(val, IPP_CFIG);
+
+ return 0;
+}
+
+static void niu_init_xif_xmac(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+ u64 val;
+
+ val = nr64_mac(XMAC_CONFIG);
+
+ if ((np->flags & NIU_FLAGS_10G) != 0 &&
+ (np->flags & NIU_FLAGS_FIBER) != 0) {
+ if (netif_carrier_ok(np->dev)) {
+ val |= XMAC_CONFIG_LED_POLARITY;
+ val &= ~XMAC_CONFIG_FORCE_LED_ON;
+ } else {
+ val |= XMAC_CONFIG_FORCE_LED_ON;
+ val &= ~XMAC_CONFIG_LED_POLARITY;
+ }
+ }
+
+ val &= ~XMAC_CONFIG_SEL_POR_CLK_SRC;
+
+ val |= XMAC_CONFIG_TX_OUTPUT_EN;
+
+ if (lp->loopback_mode == LOOPBACK_MAC) {
+ val &= ~XMAC_CONFIG_SEL_POR_CLK_SRC;
+ val |= XMAC_CONFIG_LOOPBACK;
+ } else {
+ val &= ~XMAC_CONFIG_LOOPBACK;
+ }
+
+ if (np->flags & NIU_FLAGS_10G) {
+ val &= ~XMAC_CONFIG_LFS_DISABLE;
+ } else {
+ val |= XMAC_CONFIG_LFS_DISABLE;
+ if (!(np->flags & NIU_FLAGS_FIBER))
+ val |= XMAC_CONFIG_1G_PCS_BYPASS;
+ else
+ val &= ~XMAC_CONFIG_1G_PCS_BYPASS;
+ }
+
+ val &= ~XMAC_CONFIG_10G_XPCS_BYPASS;
+
+ if (lp->active_speed == SPEED_100)
+ val |= XMAC_CONFIG_SEL_CLK_25MHZ;
+ else
+ val &= ~XMAC_CONFIG_SEL_CLK_25MHZ;
+
+ nw64_mac(val, XMAC_CONFIG);
+
+ val = nr64_mac(XMAC_CONFIG);
+ val &= ~XMAC_CONFIG_MODE_MASK;
+ if (np->flags & NIU_FLAGS_10G) {
+ val |= XMAC_CONFIG_MODE_XGMII;
+ } else {
+ if (lp->active_speed == SPEED_100)
+ val |= XMAC_CONFIG_MODE_MII;
+ else
+ val |= XMAC_CONFIG_MODE_GMII;
+ }
+
+ nw64_mac(val, XMAC_CONFIG);
+}
+
+static void niu_init_xif_bmac(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+ u64 val;
+
+ val = BMAC_XIF_CONFIG_TX_OUTPUT_EN;
+
+ if (lp->loopback_mode == LOOPBACK_MAC)
+ val |= BMAC_XIF_CONFIG_MII_LOOPBACK;
+ else
+ val &= ~BMAC_XIF_CONFIG_MII_LOOPBACK;
+
+ if (lp->active_speed == SPEED_1000)
+ val |= BMAC_XIF_CONFIG_GMII_MODE;
+ else
+ val &= ~BMAC_XIF_CONFIG_GMII_MODE;
+
+ val &= ~(BMAC_XIF_CONFIG_LINK_LED |
+ BMAC_XIF_CONFIG_LED_POLARITY);
+
+ if (!(np->flags & NIU_FLAGS_10G) &&
+ !(np->flags & NIU_FLAGS_FIBER) &&
+ lp->active_speed == SPEED_100)
+ val |= BMAC_XIF_CONFIG_25MHZ_CLOCK;
+ else
+ val &= ~BMAC_XIF_CONFIG_25MHZ_CLOCK;
+
+ nw64_mac(val, BMAC_XIF_CONFIG);
+}
+
+static void niu_init_xif(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_init_xif_xmac(np);
+ else
+ niu_init_xif_bmac(np);
+}
+
+static void niu_pcs_mii_reset(struct niu *np)
+{
+ u64 val = nr64_pcs(PCS_MII_CTL);
+ val |= PCS_MII_CTL_RST;
+ nw64_pcs(val, PCS_MII_CTL);
+}
+
+static void niu_xpcs_reset(struct niu *np)
+{
+ u64 val = nr64_xpcs(XPCS_CONTROL1);
+ val |= XPCS_CONTROL1_RESET;
+ nw64_xpcs(val, XPCS_CONTROL1);
+}
+
+static int niu_init_pcs(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+ u64 val;
+
+ switch (np->flags & (NIU_FLAGS_10G | NIU_FLAGS_FIBER)) {
+ case NIU_FLAGS_FIBER:
+ /* 1G fiber */
+ nw64_pcs(PCS_CONF_MASK | PCS_CONF_ENABLE, PCS_CONF);
+ nw64_pcs(0, PCS_DPATH_MODE);
+ niu_pcs_mii_reset(np);
+ break;
+
+ case NIU_FLAGS_10G:
+ case NIU_FLAGS_10G | NIU_FLAGS_FIBER:
+ if (!(np->flags & NIU_FLAGS_XMAC))
+ return -EINVAL;
+
+ /* 10G copper or fiber */
+ val = nr64_mac(XMAC_CONFIG);
+ val &= ~XMAC_CONFIG_10G_XPCS_BYPASS;
+ nw64_mac(val, XMAC_CONFIG);
+
+ niu_xpcs_reset(np);
+
+ val = nr64_xpcs(XPCS_CONTROL1);
+ if (lp->loopback_mode == LOOPBACK_PHY)
+ val |= XPCS_CONTROL1_LOOPBACK;
+ else
+ val &= ~XPCS_CONTROL1_LOOPBACK;
+ nw64_xpcs(val, XPCS_CONTROL1);
+
+ nw64_xpcs(0, XPCS_DESKEW_ERR_CNT);
+ (void) nr64_xpcs(XPCS_SYMERR_CNT01);
+ (void) nr64_xpcs(XPCS_SYMERR_CNT23);
+ break;
+
+ case 0:
+ /* 1G copper */
+ nw64_pcs(PCS_DPATH_MODE_MII, PCS_DPATH_MODE);
+ niu_pcs_mii_reset(np);
+ break;
+
+ default:
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+static int niu_reset_tx_xmac(struct niu *np)
+{
+ return niu_set_and_wait_clear_mac(np, XTXMAC_SW_RST,
+ (XTXMAC_SW_RST_REG_RS |
+ XTXMAC_SW_RST_SOFT_RST),
+ 1000, 100, "XTXMAC_SW_RST");
+}
+
+static int niu_reset_tx_bmac(struct niu *np)
+{
+ int limit;
+
+ nw64_mac(BTXMAC_SW_RST_RESET, BTXMAC_SW_RST);
+ limit = 1000;
+ while (--limit >= 0) {
+ if (!(nr64_mac(BTXMAC_SW_RST) & BTXMAC_SW_RST_RESET))
+ break;
+ udelay(100);
+ }
+ if (limit < 0) {
+ printk(KERN_ERR PFX "Port %u TX BMAC would not reset, "
+ "BTXMAC_SW_RST[%lx]\n",
+ np->port, nr64_mac(BTXMAC_SW_RST));
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int niu_reset_tx_mac(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ return niu_reset_tx_xmac(np);
+ else
+ return niu_reset_tx_bmac(np);
+}
+
+static void niu_init_tx_xmac(struct niu *np, u64 min, u64 max)
+{
+ u64 val;
+
+ val = nr64_mac(XMAC_MIN);
+ val &= ~(XMAC_MIN_TX_MIN_PKT_SIZE |
+ XMAC_MIN_RX_MIN_PKT_SIZE);
+ val |= (min << XMAC_MIN_RX_MIN_PKT_SIZE_SHFT);
+ val |= (min << XMAC_MIN_TX_MIN_PKT_SIZE_SHFT);
+ nw64_mac(val, XMAC_MIN);
+
+ nw64_mac(max, XMAC_MAX);
+
+ nw64_mac(~0, XTXMAC_STAT_MSK);
+
+ val = nr64_mac(XMAC_IPG);
+ if (np->flags & NIU_FLAGS_10G) {
+ val &= ~XMAC_IPG_IPG_XGMII;
+ val |= (IPG_12_15_XGMII << XMAC_IPG_IPG_XGMII_SHIFT);
+ } else {
+ val &= ~XMAC_IPG_IPG_MII_GMII;
+ val |= (IPG_12_MII_GMII << XMAC_IPG_IPG_MII_GMII_SHIFT);
+ }
+ nw64_mac(val, XMAC_IPG);
+
+ val = nr64_mac(XMAC_CONFIG);
+ val &= ~(XMAC_CONFIG_ALWAYS_NO_CRC |
+ XMAC_CONFIG_STRETCH_MODE |
+ XMAC_CONFIG_VAR_MIN_IPG_EN |
+ XMAC_CONFIG_TX_ENABLE);
+ nw64_mac(val, XMAC_CONFIG);
+
+ nw64_mac(0, TXMAC_FRM_CNT);
+ nw64_mac(0, TXMAC_BYTE_CNT);
+}
+
+static void niu_init_tx_bmac(struct niu *np, u64 min, u64 max)
+{
+ u64 val;
+
+ nw64_mac(min, BMAC_MIN_FRAME);
+ nw64_mac(max, BMAC_MAX_FRAME);
+
+ nw64_mac(~0, BTXMAC_STATUS_MASK);
+ nw64_mac(0x8808, BMAC_CTRL_TYPE);
+ nw64_mac(7, BMAC_PREAMBLE_SIZE);
+
+ val = nr64_mac(BTXMAC_CONFIG);
+ val &= ~(BTXMAC_CONFIG_FCS_DISABLE |
+ BTXMAC_CONFIG_ENABLE);
+ nw64_mac(val, BTXMAC_CONFIG);
+}
+
+static void niu_init_tx_mac(struct niu *np)
+{
+ u64 min, max;
+
+ min = 64;
+ if (0 /* jumbo */)
+ max = 9216;
+ else
+ max = 1522;
+
+ /* The XMAC_MIN register only accepts values for TX min which
+ * have the low 3 bits cleared.
+ */
+ BUILD_BUG_ON(min & 0x7);
+
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_init_tx_xmac(np, min, max);
+ else
+ niu_init_tx_bmac(np, min, max);
+}
+
+static int niu_reset_rx_xmac(struct niu *np)
+{
+ int limit;
+
+ nw64_mac(XRXMAC_SW_RST_REG_RS | XRXMAC_SW_RST_SOFT_RST,
+ XRXMAC_SW_RST);
+ limit = 1000;
+ while (--limit >= 0) {
+ if (!(nr64_mac(XRXMAC_SW_RST) & (XRXMAC_SW_RST_REG_RS |
+ XRXMAC_SW_RST_SOFT_RST)))
+ break;
+ udelay(100);
+ }
+ if (limit < 0) {
+ printk(KERN_ERR PFX "Port %u RX XMAC would not reset, "
+ "XRXMAC_SW_RST[%lx]\n",
+ np->port, nr64_mac(XRXMAC_SW_RST));
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int niu_reset_rx_bmac(struct niu *np)
+{
+ int limit;
+
+ nw64_mac(BRXMAC_SW_RST_RESET, BRXMAC_SW_RST);
+ limit = 1000;
+ while (--limit >= 0) {
+ if (!(nr64_mac(BRXMAC_SW_RST) & BRXMAC_SW_RST_RESET))
+ break;
+ udelay(100);
+ }
+ if (limit < 0) {
+ printk(KERN_ERR PFX "Port %u RX BMAC would not reset, "
+ "BRXMAC_SW_RST[%lx]\n",
+ np->port, nr64_mac(BRXMAC_SW_RST));
+ return -ENODEV;
+ }
+
+ return 0;
+}
+
+static int niu_reset_rx_mac(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ return niu_reset_rx_xmac(np);
+ else
+ return niu_reset_rx_bmac(np);
+}
+
+static void niu_init_rx_xmac(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ struct niu_rdc_tables *tp = &parent->rdc_group_cfg[np->port];
+ int first_rdc_table = tp->first_table_num;
+ unsigned long i;
+ u64 val;
+
+ nw64_mac(0, XMAC_ADD_FILT0);
+ nw64_mac(0, XMAC_ADD_FILT1);
+ nw64_mac(0, XMAC_ADD_FILT2);
+ nw64_mac(0, XMAC_ADD_FILT12_MASK);
+ nw64_mac(0, XMAC_ADD_FILT00_MASK);
+ for (i = 0; i < MAC_NUM_HASH; i++)
+ nw64_mac(0, XMAC_HASH_TBL(i));
+ nw64_mac(~0, XRXMAC_STAT_MSK);
+ niu_set_primary_mac_rdc_table(np, first_rdc_table, 1);
+ niu_set_multicast_mac_rdc_table(np, first_rdc_table, 1);
+
+ val = nr64_mac(XMAC_CONFIG);
+ val &= ~(XMAC_CONFIG_RX_MAC_ENABLE |
+ XMAC_CONFIG_PROMISCUOUS |
+ XMAC_CONFIG_PROMISC_GROUP |
+ XMAC_CONFIG_ERR_CHK_DIS |
+ XMAC_CONFIG_RX_CRC_CHK_DIS |
+ XMAC_CONFIG_RESERVED_MULTICAST |
+ XMAC_CONFIG_RX_CODEV_CHK_DIS |
+ XMAC_CONFIG_ADDR_FILTER_EN |
+ XMAC_CONFIG_RCV_PAUSE_ENABLE |
+ XMAC_CONFIG_STRIP_CRC |
+ XMAC_CONFIG_PASS_FLOW_CTRL |
+ XMAC_CONFIG_MAC2IPP_PKT_CNT_EN);
+ val |= (XMAC_CONFIG_HASH_FILTER_EN);
+ nw64_mac(val, XMAC_CONFIG);
+
+ nw64_mac(0, RXMAC_BT_CNT);
+ nw64_mac(0, RXMAC_BC_FRM_CNT);
+ nw64_mac(0, RXMAC_MC_FRM_CNT);
+ nw64_mac(0, RXMAC_FRAG_CNT);
+ nw64_mac(0, RXMAC_HIST_CNT1);
+ nw64_mac(0, RXMAC_HIST_CNT2);
+ nw64_mac(0, RXMAC_HIST_CNT3);
+ nw64_mac(0, RXMAC_HIST_CNT4);
+ nw64_mac(0, RXMAC_HIST_CNT5);
+ nw64_mac(0, RXMAC_HIST_CNT6);
+ nw64_mac(0, RXMAC_HIST_CNT7);
+ nw64_mac(0, RXMAC_MPSZER_CNT);
+ nw64_mac(0, RXMAC_CRC_ER_CNT);
+ nw64_mac(0, RXMAC_CD_VIO_CNT);
+ nw64_mac(0, LINK_FAULT_CNT);
+}
+
+static void niu_init_rx_bmac(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ struct niu_rdc_tables *tp = &parent->rdc_group_cfg[np->port];
+ int first_rdc_table = tp->first_table_num;
+ unsigned long i;
+ u64 val;
+
+ nw64_mac(0, BMAC_ADD_FILT0);
+ nw64_mac(0, BMAC_ADD_FILT1);
+ nw64_mac(0, BMAC_ADD_FILT2);
+ nw64_mac(0, BMAC_ADD_FILT12_MASK);
+ nw64_mac(0, BMAC_ADD_FILT00_MASK);
+ for (i = 0; i < MAC_NUM_HASH; i++)
+ nw64_mac(0, BMAC_HASH_TBL(i));
+ niu_set_primary_mac_rdc_table(np, first_rdc_table, 1);
+ niu_set_multicast_mac_rdc_table(np, first_rdc_table, 1);
+ nw64_mac(~0, BRXMAC_STATUS_MASK);
+
+ val = nr64_mac(BRXMAC_CONFIG);
+ val &= ~(BRXMAC_CONFIG_ENABLE |
+ BRXMAC_CONFIG_STRIP_PAD |
+ BRXMAC_CONFIG_STRIP_FCS |
+ BRXMAC_CONFIG_PROMISC |
+ BRXMAC_CONFIG_PROMISC_GRP |
+ BRXMAC_CONFIG_ADDR_FILT_EN |
+ BRXMAC_CONFIG_DISCARD_DIS);
+ val |= (BRXMAC_CONFIG_HASH_FILT_EN);
+ nw64_mac(val, BRXMAC_CONFIG);
+
+ val = nr64_mac(BMAC_ADDR_CMPEN);
+ val |= BMAC_ADDR_CMPEN_EN0;
+ nw64_mac(val, BMAC_ADDR_CMPEN);
+}
+
+static void niu_init_rx_mac(struct niu *np)
+{
+ niu_set_primary_mac(np, np->dev->dev_addr);
+
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_init_rx_xmac(np);
+ else
+ niu_init_rx_bmac(np);
+}
+
+static void niu_enable_tx_xmac(struct niu *np, int on)
+{
+ u64 val = nr64_mac(XMAC_CONFIG);
+
+ if (on)
+ val |= XMAC_CONFIG_TX_ENABLE;
+ else
+ val &= ~XMAC_CONFIG_TX_ENABLE;
+ nw64_mac(val, XMAC_CONFIG);
+}
+
+static void niu_enable_tx_bmac(struct niu *np, int on)
+{
+ u64 val = nr64_mac(BTXMAC_CONFIG);
+
+ if (on)
+ val |= BTXMAC_CONFIG_ENABLE;
+ else
+ val &= ~BTXMAC_CONFIG_ENABLE;
+ nw64_mac(val, BTXMAC_CONFIG);
+}
+
+static void niu_enable_tx_mac(struct niu *np, int on)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_enable_tx_xmac(np, on);
+ else
+ niu_enable_tx_bmac(np, on);
+}
+
+static void niu_enable_rx_xmac(struct niu *np, int on)
+{
+ u64 val = nr64_mac(XMAC_CONFIG);
+
+ val &= ~(XMAC_CONFIG_HASH_FILTER_EN |
+ XMAC_CONFIG_PROMISCUOUS);
+
+ if (np->flags & NIU_FLAGS_MCAST)
+ val |= XMAC_CONFIG_HASH_FILTER_EN;
+ if (np->flags & NIU_FLAGS_PROMISC)
+ val |= XMAC_CONFIG_PROMISCUOUS;
+
+ if (on)
+ val |= XMAC_CONFIG_RX_MAC_ENABLE;
+ else
+ val &= ~XMAC_CONFIG_RX_MAC_ENABLE;
+ nw64_mac(val, XMAC_CONFIG);
+}
+
+static void niu_enable_rx_bmac(struct niu *np, int on)
+{
+ u64 val = nr64_mac(BRXMAC_CONFIG);
+
+ val &= ~(BRXMAC_CONFIG_HASH_FILT_EN |
+ BRXMAC_CONFIG_PROMISC);
+
+ if (np->flags & NIU_FLAGS_MCAST)
+ val |= BRXMAC_CONFIG_HASH_FILT_EN;
+ if (np->flags & NIU_FLAGS_PROMISC)
+ val |= BRXMAC_CONFIG_PROMISC;
+
+ if (on)
+ val |= BRXMAC_CONFIG_ENABLE;
+ else
+ val &= ~BRXMAC_CONFIG_ENABLE;
+ nw64_mac(val, BRXMAC_CONFIG);
+}
+
+static void niu_enable_rx_mac(struct niu *np, int on)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_enable_rx_xmac(np, on);
+ else
+ niu_enable_rx_bmac(np, on);
+}
+
+static int niu_init_mac(struct niu *np)
+{
+ int err;
+
+ niu_init_xif(np);
+ err = niu_init_pcs(np);
+ if (err)
+ return err;
+ err = niu_reset_tx_mac(np);
+ if (err)
+ return err;
+ niu_init_tx_mac(np);
+ err = niu_reset_rx_mac(np);
+ if (err)
+ return err;
+ niu_init_rx_mac(np);
+
+ niu_enable_tx_mac(np, 1);
+ niu_enable_rx_mac(np, 1);
+
+ return 0;
+}
+
+static void niu_stop_one_tx_channel(struct niu *np, struct tx_ring_info *rp)
+{
+ (void) niu_tx_channel_stop(np, rp->tx_channel);
+}
+
+static void niu_stop_tx_channels(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ niu_stop_one_tx_channel(np, rp);
+ }
+}
+
+static void niu_reset_one_tx_channel(struct niu *np, struct tx_ring_info *rp)
+{
+ (void) niu_tx_channel_reset(np, rp->tx_channel);
+}
+
+static void niu_reset_tx_channels(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ niu_reset_one_tx_channel(np, rp);
+ }
+}
+
+static void niu_stop_one_rx_channel(struct niu *np, struct rx_ring_info *rp)
+{
+ (void) niu_enable_rx_channel(np, rp->rx_channel, 0);
+}
+
+static void niu_stop_rx_channels(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ niu_stop_one_rx_channel(np, rp);
+ }
+}
+
+static void niu_reset_one_rx_channel(struct niu *np, struct rx_ring_info *rp)
+{
+ int channel = rp->rx_channel;
+
+ (void) niu_rx_channel_reset(np, channel);
+ nw64(RX_DMA_ENT_MSK_ALL, RX_DMA_ENT_MSK(channel));
+ nw64(0, RX_DMA_CTL_STAT(channel));
+ (void) niu_enable_rx_channel(np, channel, 0);
+}
+
+static void niu_reset_rx_channels(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ niu_reset_one_rx_channel(np, rp);
+ }
+}
+
+static void niu_disable_ipp(struct niu *np)
+{
+ u64 rd, wr, val;
+ int limit;
+
+ rd = nr64_ipp(IPP_DFIFO_RD_PTR);
+ wr = nr64_ipp(IPP_DFIFO_WR_PTR);
+ limit = 100;
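+	/* Wait for the DFIFO read and write pointers to meet, which
+	 * indicates that the unit has drained.
+	 */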
+ while (--limit >= 0 && (rd != wr)) {
+ rd = nr64_ipp(IPP_DFIFO_RD_PTR);
+ wr = nr64_ipp(IPP_DFIFO_WR_PTR);
+ }
+ if (limit < 0 &&
+ (rd != 0 && wr != 1)) {
+ printk(KERN_ERR PFX "%s: IPP would not quiesce, "
+ "rd_ptr[%lx] wr_ptr[%lx]\n",
+ np->dev->name,
+ nr64_ipp(IPP_DFIFO_RD_PTR),
+ nr64_ipp(IPP_DFIFO_WR_PTR));
+ }
+
+ val = nr64_ipp(IPP_CFIG);
+ val &= ~(IPP_CFIG_IPP_ENABLE |
+ IPP_CFIG_DFIFO_ECC_EN |
+ IPP_CFIG_DROP_BAD_CRC |
+ IPP_CFIG_CKSUM_EN);
+ nw64_ipp(val, IPP_CFIG);
+
+ (void) niu_ipp_reset(np);
+}
+
+static int niu_init_hw(struct niu *np)
+{
+ int i, err;
+
+ niudbg(INIT_HW, "%s: Initialize TXC\n", np->dev->name);
+ niu_txc_enable_port(np, 1);
+ niu_txc_port_dma_enable(np, 1);
+ niu_txc_set_imask(np, 0);
+
+ niudbg(INIT_HW, "%s: Initialize TX channels\n", np->dev->name);
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ err = niu_init_one_tx_channel(np, rp);
+ if (err)
+ return err;
+ }
+
+ niudbg(INIT_HW, "%s: Initialize RX channels\n", np->dev->name);
+ err = niu_init_rx_channels(np);
+ if (err)
+ goto out_uninit_tx_channels;
+
+ niudbg(INIT_HW, "%s: Initialize classifier\n", np->dev->name);
+ err = niu_init_classifier_hw(np);
+ if (err)
+ goto out_uninit_rx_channels;
+
+ niudbg(INIT_HW, "%s: Initialize ZCP\n", np->dev->name);
+ err = niu_init_zcp(np);
+ if (err)
+ goto out_uninit_rx_channels;
+
+ niudbg(INIT_HW, "%s: Initialize IPP\n", np->dev->name);
+ err = niu_init_ipp(np);
+ if (err)
+ goto out_uninit_rx_channels;
+
+ niudbg(INIT_HW, "%s: Initialize MAC\n", np->dev->name);
+ err = niu_init_mac(np);
+ if (err)
+ goto out_uninit_ipp;
+
+ return 0;
+
+out_uninit_ipp:
+ niudbg(INIT_HW, "%s: Uninit IPP\n", np->dev->name);
+ niu_disable_ipp(np);
+
+out_uninit_rx_channels:
+ niudbg(INIT_HW, "%s: Uninit RX channels\n", np->dev->name);
+ niu_stop_rx_channels(np);
+ niu_reset_rx_channels(np);
+
+out_uninit_tx_channels:
+ niudbg(INIT_HW, "%s: Uninit TX channels\n", np->dev->name);
+ niu_stop_tx_channels(np);
+ niu_reset_tx_channels(np);
+
+ return err;
+}
+
+static void niu_stop_hw(struct niu *np)
+{
+ niudbg(STOP_HW, "%s: Disable interrupts\n", np->dev->name);
+ niu_enable_interrupts(np, 0);
+
+ niudbg(STOP_HW, "%s: Disable RX MAC\n", np->dev->name);
+ niu_enable_rx_mac(np, 0);
+
+ niudbg(STOP_HW, "%s: Disable IPP\n", np->dev->name);
+ niu_disable_ipp(np);
+
+ niudbg(STOP_HW, "%s: Stop TX channels\n", np->dev->name);
+ niu_stop_tx_channels(np);
+
+ niudbg(STOP_HW, "%s: Stop RX channels\n", np->dev->name);
+ niu_stop_rx_channels(np);
+
+ niudbg(STOP_HW, "%s: Reset TX channels\n", np->dev->name);
+ niu_reset_tx_channels(np);
+
+ niudbg(STOP_HW, "%s: Reset RX channels\n", np->dev->name);
+ niu_reset_rx_channels(np);
+}
+
+static int niu_request_irq(struct niu *np)
+{
+ int i, j, err;
+
+ err = 0;
+ for (i = 0; i < np->num_ldg; i++) {
+ struct niu_ldg *lp = &np->ldg[i];
+
+ err = request_irq(lp->irq, niu_interrupt,
+ IRQF_SHARED | IRQF_SAMPLE_RANDOM,
+ np->dev->name, lp);
+ if (err)
+ goto out_free_irqs;
+ }
+
+ return 0;
+
+out_free_irqs:
+ for (j = 0; j < i; j++) {
+ struct niu_ldg *lp = &np->ldg[j];
+
+ free_irq(lp->irq, lp);
+ }
+ return err;
+}
+
+static void niu_free_irq(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < np->num_ldg; i++) {
+ struct niu_ldg *lp = &np->ldg[i];
+
+ free_irq(lp->irq, lp);
+ }
+}
+
+static void niu_enable_napi(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < np->num_ldg; i++)
+ napi_enable(&np->ldg[i].napi);
+}
+
+static void niu_disable_napi(struct niu *np)
+{
+ int i;
+
+ for (i = 0; i < np->num_ldg; i++)
+ napi_disable(&np->ldg[i].napi);
+}
+
+static int niu_open(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+ int err;
+
+ netif_carrier_off(dev);
+
+ err = niu_alloc_channels(np);
+ if (err)
+ goto out_err;
+
+ err = niu_enable_interrupts(np, 0);
+ if (err)
+ goto out_free_channels;
+
+ err = niu_request_irq(np);
+ if (err)
+ goto out_free_channels;
+
+ niu_enable_napi(np);
+
+ spin_lock_irq(&np->lock);
+
+ err = niu_init_hw(np);
+ if (!err) {
+ init_timer(&np->timer);
+ np->timer.expires = jiffies + HZ;
+ np->timer.data = (unsigned long) np;
+ np->timer.function = niu_timer;
+
+ err = niu_enable_interrupts(np, 1);
+ if (err)
+ niu_stop_hw(np);
+ }
+
+ spin_unlock_irq(&np->lock);
+
+ if (err) {
+ niu_disable_napi(np);
+ goto out_free_irq;
+ }
+
+ netif_start_queue(dev);
+
+ if (np->link_config.loopback_mode != LOOPBACK_DISABLED)
+ netif_carrier_on(dev);
+
+ add_timer(&np->timer);
+
+ return 0;
+
+out_free_irq:
+ niu_free_irq(np);
+
+out_free_channels:
+ niu_free_channels(np);
+
+out_err:
+ return err;
+}
+
+static int niu_close(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+
+ cancel_work_sync(&np->reset_task);
+
+ niu_disable_napi(np);
+ netif_stop_queue(dev);
+
+ del_timer_sync(&np->timer);
+
+#if 1
+ {
+ int i;
+
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+ unsigned int chan = rp->rx_channel;
+ printk(KERN_ERR PFX "%s: chan[%u] "
+ "RX_DMA_CTL_STAT[%016lx] RBR_STAT[%lx]\n",
+ np->dev->name, chan,
+ nr64(RX_DMA_CTL_STAT(chan)),
+ nr64(RBR_STAT(chan)));
+ printk(KERN_ERR PFX "%s: chan[%u] "
+ "RCRSTAT A[%lx] B[%lx] C[%lx]\n",
+ np->dev->name, chan,
+ nr64(RCRSTAT_A(chan)),
+ nr64(RCRSTAT_B(chan)),
+ nr64(RCRSTAT_C(chan)));
+ printk(KERN_ERR PFX "%s: chan[%u] "
+ "RXDMA_CFIG1[%lx] RXDMA_CFIG2[%lx]\n",
+ np->dev->name, chan,
+ nr64(RXDMA_CFIG1(chan)),
+ nr64(RXDMA_CFIG2(chan)));
+ }
+ }
+#endif
+ spin_lock_irq(&np->lock);
+
+ niu_stop_hw(np);
+
+ spin_unlock_irq(&np->lock);
+
+ niu_free_irq(np);
+
+ niu_free_channels(np);
+ return 0;
+}
+
+static void niu_sync_xmac_stats(struct niu *np)
+{
+ struct niu_xmac_stats *mp = &np->mac_stats.xmac;
+
+ mp->tx_frames += nr64_mac(TXMAC_FRM_CNT);
+ mp->tx_bytes += nr64_mac(TXMAC_BYTE_CNT);
+
+ mp->rx_link_faults += nr64_mac(LINK_FAULT_CNT);
+ mp->rx_align_errors += nr64_mac(RXMAC_ALIGN_ERR_CNT);
+ mp->rx_frags += nr64_mac(RXMAC_FRAG_CNT);
+ mp->rx_mcasts += nr64_mac(RXMAC_MC_FRM_CNT);
+ mp->rx_bcasts += nr64_mac(RXMAC_BC_FRM_CNT);
+ mp->rx_hist_cnt1 += nr64_mac(RXMAC_HIST_CNT1);
+ mp->rx_hist_cnt2 += nr64_mac(RXMAC_HIST_CNT2);
+ mp->rx_hist_cnt3 += nr64_mac(RXMAC_HIST_CNT3);
+ mp->rx_hist_cnt4 += nr64_mac(RXMAC_HIST_CNT4);
+ mp->rx_hist_cnt5 += nr64_mac(RXMAC_HIST_CNT5);
+ mp->rx_hist_cnt6 += nr64_mac(RXMAC_HIST_CNT6);
+ mp->rx_hist_cnt7 += nr64_mac(RXMAC_HIST_CNT7);
+ mp->rx_octets += nr64_mac(RXMAC_BT_CNT);
+ mp->rx_code_violations += nr64_mac(RXMAC_CD_VIO_CNT);
+ mp->rx_len_errors += nr64_mac(RXMAC_MPSZER_CNT);
+ mp->rx_crc_errors += nr64_mac(RXMAC_CRC_ER_CNT);
+}
+
+static void niu_sync_bmac_stats(struct niu *np)
+{
+ struct niu_bmac_stats *mp = &np->mac_stats.bmac;
+
+ mp->tx_bytes += nr64_mac(BTXMAC_BYTE_CNT);
+ mp->tx_frames += nr64_mac(BTXMAC_FRM_CNT);
+
+ mp->rx_frames += nr64_mac(BRXMAC_FRAME_CNT);
+ mp->rx_align_errors += nr64_mac(BRXMAC_ALIGN_ERR_CNT);
+ /* XXX This reads the alignment error counter a second time; a
+ * XXX CRC error counter register is almost certainly what was
+ * XXX intended here.
+ */
+ mp->rx_crc_errors += nr64_mac(BRXMAC_ALIGN_ERR_CNT);
+ mp->rx_len_errors += nr64_mac(BRXMAC_CODE_VIOL_ERR_CNT);
+}
+
+static void niu_sync_mac_stats(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_sync_xmac_stats(np);
+ else
+ niu_sync_bmac_stats(np);
+}
+
+static void niu_get_rx_stats(struct niu *np)
+{
+ unsigned long pkts, dropped, errors, bytes;
+ int i;
+
+ pkts = dropped = errors = bytes = 0;
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ pkts += rp->rx_packets;
+ bytes += rp->rx_bytes;
+ dropped += rp->rx_dropped;
+ errors += rp->rx_errors;
+ }
+ np->net_stats.rx_packets = pkts;
+ np->net_stats.rx_bytes = bytes;
+ np->net_stats.rx_dropped = dropped;
+ np->net_stats.rx_errors = errors;
+}
+
+static void niu_get_tx_stats(struct niu *np)
+{
+ unsigned long pkts, errors, bytes;
+ int i;
+
+ pkts = errors = bytes = 0;
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ pkts += rp->tx_packets;
+ bytes += rp->tx_bytes;
+ errors += rp->tx_errors;
+ }
+ np->net_stats.tx_packets = pkts;
+ np->net_stats.tx_bytes = bytes;
+ np->net_stats.tx_errors = errors;
+}
+
+static struct net_device_stats *niu_get_stats(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+
+ niu_get_rx_stats(np);
+ niu_get_tx_stats(np);
+
+ return &np->net_stats;
+}
+
+static void niu_load_hash_xmac(struct niu *np, u16 *hash)
+{
+ int i;
+
+ for (i = 0; i < 16; i++)
+ nw64_mac(hash[i], XMAC_HASH_TBL(i));
+}
+
+static void niu_load_hash_bmac(struct niu *np, u16 *hash)
+{
+ int i;
+
+ for (i = 0; i < 16; i++)
+ nw64_mac(hash[i], BMAC_HASH_TBL(i));
+}
+
+static void niu_load_hash(struct niu *np, u16 *hash)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ niu_load_hash_xmac(np, hash);
+ else
+ niu_load_hash_bmac(np, hash);
+}
+
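+ /* The multicast filter is a 256-bit hash table stored as sixteen
+ * 16-bit registers. The top 8 bits of the little-endian CRC-32 of
+ * the address select the bit: the upper nibble picks the register
+ * and the lower nibble the bit within it, MSB first. A CRC byte
+ * of 0x12, for example, sets bit (15 - 2) of hash[1].
+ */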
+static void niu_set_rx_mode(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+ int i, alt_cnt, err;
+ struct dev_addr_list *addr;
+ unsigned long flags;
+ u16 hash[16] = { 0, };
+
+ spin_lock_irqsave(&np->lock, flags);
+ niu_enable_rx_mac(np, 0);
+
+ np->flags &= ~(NIU_FLAGS_MCAST | NIU_FLAGS_PROMISC);
+ if (dev->flags & IFF_PROMISC)
+ np->flags |= NIU_FLAGS_PROMISC;
+ if ((dev->flags & IFF_ALLMULTI) || (dev->mc_count > 0))
+ np->flags |= NIU_FLAGS_MCAST;
+
+ alt_cnt = dev->uc_count;
+ if (alt_cnt > niu_num_alt_addr(np)) {
+ alt_cnt = 0;
+ np->flags |= NIU_FLAGS_PROMISC;
+ }
+
+ if (alt_cnt) {
+ int index = 0;
+
+ for (addr = dev->uc_list; addr; addr = addr->next) {
+ err = niu_set_alt_mac(np, index,
+ addr->da_addr);
+ if (err)
+ printk(KERN_WARNING PFX "%s: Error %d "
+ "adding alt mac %d\n",
+ dev->name, err, index);
+ err = niu_enable_alt_mac(np, index, 1);
+ if (err)
+ printk(KERN_WARNING PFX "%s: Error %d "
+ "enabling alt mac %d\n",
+ dev->name, err, index);
+
+ index++;
+ }
+ } else {
+ for (i = 0; i < niu_num_alt_addr(np); i++) {
+ err = niu_enable_alt_mac(np, i, 0);
+ if (err)
+ printk(KERN_WARNING PFX "%s: Error %d "
+ "disabling alt mac %d\n",
+ dev->name, err, i);
+ }
+ }
+ if (dev->flags & IFF_ALLMULTI) {
+ for (i = 0; i < 16; i++)
+ hash[i] = 0xffff;
+ } else if (dev->mc_count > 0) {
+ for (addr = dev->mc_list; addr; addr = addr->next) {
+ u32 crc = ether_crc_le(ETH_ALEN, addr->da_addr);
+
+ crc >>= 24;
+ hash[crc >> 4] |= (1 << (15 - (crc & 0xf)));
+ }
+ }
+
+ if (np->flags & NIU_FLAGS_MCAST)
+ niu_load_hash(np, hash);
+
+ niu_enable_rx_mac(np, 1);
+ spin_unlock_irqrestore(&np->lock, flags);
+}
+
+static int niu_set_mac_addr(struct net_device *dev, void *p)
+{
+ struct niu *np = netdev_priv(dev);
+ struct sockaddr *addr = p;
+ unsigned long flags;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EINVAL;
+
+ memcpy(dev->dev_addr, addr->sa_data, ETH_ALEN);
+
+ if (!netif_running(dev))
+ return 0;
+
+ spin_lock_irqsave(&np->lock, flags);
+ niu_enable_rx_mac(np, 0);
+ niu_set_primary_mac(np, dev->dev_addr);
+ niu_enable_rx_mac(np, 1);
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ return 0;
+}
+
+static int niu_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
+{
+ return -EOPNOTSUPP;
+}
+
+static void niu_netif_stop(struct niu *np)
+{
+ np->dev->trans_start = jiffies; /* prevent tx timeout */
+
+ niu_disable_napi(np);
+
+ netif_tx_disable(np->dev);
+}
+
+static void niu_netif_start(struct niu *np)
+{
+ /* NOTE: unconditional netif_wake_queue is only appropriate
+ * so long as all callers are assured to have free tx slots
+ * (such as after niu_init_hw).
+ */
+ netif_wake_queue(np->dev);
+
+ niu_enable_napi(np);
+
+ niu_enable_interrupts(np, 1);
+}
+
+static void niu_reset_task(struct work_struct *work)
+{
+ struct niu *np = container_of(work, struct niu, reset_task);
+ unsigned long flags;
+ int err;
+
+ spin_lock_irqsave(&np->lock, flags);
+ if (!netif_running(np->dev)) {
+ spin_unlock_irqrestore(&np->lock, flags);
+ return;
+ }
+
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ del_timer_sync(&np->timer);
+
+ niu_netif_stop(np);
+
+ spin_lock_irqsave(&np->lock, flags);
+
+ niu_stop_hw(np);
+
+ err = niu_init_hw(np);
+ if (!err) {
+ np->timer.expires = jiffies + HZ;
+ add_timer(&np->timer);
+ niu_netif_start(np);
+ }
+
+ spin_unlock_irqrestore(&np->lock, flags);
+}
+
+static void niu_tx_timeout(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+
+ printk(KERN_ERR PFX "%s: Transmit timed out, resetting\n",
+ dev->name);
+
+ schedule_work(&np->reset_task);
+}
+
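+ /* Each TX descriptor is a single little-endian 64-bit word
+ * packing the control bits (SOP, MARK), the number of descriptors
+ * used by the packet (only meaningful on the SOP descriptor), the
+ * transfer length, and the DMA address masked by TX_DESC_SAD.
+ */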
+static void niu_set_txd(struct tx_ring_info *rp, int index,
+ u64 mapping, u64 len, u64 mark,
+ u64 n_frags)
+{
+ __le64 *desc = &rp->descr[index];
+
+ *desc = cpu_to_le64(mark |
+ (n_frags << TX_DESC_NUM_PTR_SHIFT) |
+ (len << TX_DESC_TR_LEN_SHIFT) |
+ (mapping & TX_DESC_SAD));
+}
+
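+ /* Build the flags word of the software tx_pkt_hdr that gets
+ * prepended to every outgoing frame. The L3 start, L4 checksum
+ * start and checksum stuff offsets are measured from the end of
+ * the header (pad included) and encoded in 16-bit units, hence
+ * the divisions by two. Inner ethertypes below 1536 are flagged
+ * as LLC frames.
+ */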
+static __u64 niu_compute_tx_flags(struct sk_buff *skb, struct ethhdr *ehdr,
+ u64 pad_bytes, u64 len)
+{
+ u16 eth_proto, eth_proto_inner;
+ u64 csum_bits, l3off, ihl, ret;
+ u8 ip_proto;
+ int ipv6;
+
+ eth_proto = be16_to_cpu(ehdr->h_proto);
+ eth_proto_inner = eth_proto;
+ if (eth_proto == ETH_P_8021Q) {
+ struct vlan_ethhdr *vp = (struct vlan_ethhdr *) ehdr;
+ __be16 val = vp->h_vlan_encapsulated_proto;
+
+ eth_proto_inner = be16_to_cpu(val);
+ }
+
+ ipv6 = ihl = 0;
+ switch (skb->protocol) {
+ case __constant_htons(ETH_P_IP):
+ ip_proto = ip_hdr(skb)->protocol;
+ ihl = ip_hdr(skb)->ihl;
+ break;
+ case __constant_htons(ETH_P_IPV6):
+ ip_proto = ipv6_hdr(skb)->nexthdr;
+ ihl = (40 >> 2);
+ ipv6 = 1;
+ break;
+ default:
+ ip_proto = ihl = 0;
+ break;
+ }
+
+ csum_bits = TXHDR_CSUM_NONE;
+ if (skb->ip_summed == CHECKSUM_PARTIAL) {
+ __u64 start, stuff;
+
+ csum_bits = (ip_proto == IPPROTO_TCP ?
+ TXHDR_CSUM_TCP :
+ (ip_proto == IPPROTO_UDP ?
+ TXHDR_CSUM_UDP : TXHDR_CSUM_SCTP));
+
+ start = skb_transport_offset(skb) -
+ (pad_bytes + sizeof(struct tx_pkt_hdr));
+ stuff = start + skb->csum_offset;
+
+ csum_bits |= (start / 2) << TXHDR_L4START_SHIFT;
+ csum_bits |= (stuff / 2) << TXHDR_L4STUFF_SHIFT;
+ }
+
+ l3off = skb_network_offset(skb) -
+ (pad_bytes + sizeof(struct tx_pkt_hdr));
+
+ ret = (((pad_bytes / 2) << TXHDR_PAD_SHIFT) |
+ (len << TXHDR_LEN_SHIFT) |
+ ((l3off / 2) << TXHDR_L3START_SHIFT) |
+ (ihl << TXHDR_IHL_SHIFT) |
+ ((eth_proto_inner < 1536) ? TXHDR_LLC : 0) |
+ ((eth_proto == ETH_P_8021Q) ? TXHDR_VLAN : 0) |
+ (ipv6 ? TXHDR_IP_VER : 0) |
+ csum_bits);
+
+ return ret;
+}
+
+static struct tx_ring_info *tx_ring_select(struct niu *np, struct sk_buff *skb)
+{
+ return &np->tx_rings[0];
+}
+
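+ /* The hard transmit path: reserve headroom for a tx_pkt_hdr plus
+ * up to 15 alignment bytes, push the header so the descriptor
+ * start stays 16-byte aligned, then post one descriptor per
+ * fragment. Only the first descriptor carries SOP and MARK; the
+ * kick register takes the new producer index shifted left by 3,
+ * with the wrap bit toggled on every ring wrap-around.
+ */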
+static int niu_start_xmit(struct sk_buff *skb, struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+ unsigned long align, headroom;
+ struct tx_ring_info *rp;
+ struct tx_pkt_hdr *tp;
+ struct ethhdr *ehdr;
+ unsigned int len;
+ u64 mapping;
+ int prod, i;
+
+ rp = tx_ring_select(np, skb);
+
+ if (niu_tx_avail(rp) <= (skb_shinfo(skb)->nr_frags + 1)) {
+ netif_stop_queue(dev);
+ printk(KERN_ERR PFX "%s: BUG! Tx ring full when "
+ "queue awake!\n", dev->name);
+ rp->tx_errors++;
+ return NETDEV_TX_BUSY;
+ }
+
+ if (skb->len < ETH_ZLEN) {
+ unsigned int pad_bytes = ETH_ZLEN - skb->len;
+
+ if (skb_pad(skb, pad_bytes))
+ goto out;
+ skb_put(skb, pad_bytes);
+ }
+
+ len = sizeof(struct tx_pkt_hdr) + 15;
+ if (skb_headroom(skb) < len) {
+ struct sk_buff *skb_new;
+
+#if 0
+ printk(KERN_ERR PFX "%s: XMIT not enough headroom (%d)\n",
+ np->dev->name, skb_headroom(skb));
+#endif
+ skb_new = skb_realloc_headroom(skb, len);
+ if (!skb_new)
+ goto out_drop;
+ kfree_skb(skb);
+ skb = skb_new;
+ }
+
+ align = ((unsigned long) skb->data & (16 - 1));
+ headroom = align + sizeof(struct tx_pkt_hdr);
+
+ ehdr = (struct ethhdr *) skb->data;
+ tp = (struct tx_pkt_hdr *) skb_push(skb, headroom);
+
+ len = skb->len - sizeof(struct tx_pkt_hdr);
+ tp->flags = cpu_to_le64(niu_compute_tx_flags(skb, ehdr, align, len));
+ tp->resv = 0;
+
+ len = skb_headlen(skb);
+ mapping = np->ops->map_single(np->device, skb->data,
+ len, DMA_TO_DEVICE);
+
+ prod = rp->prod;
+ rp->tx_buffs[prod].skb = skb;
+ rp->tx_buffs[prod].mapping = mapping;
+
+ niu_set_txd(rp, prod, mapping, len,
+ (TX_DESC_SOP | TX_DESC_MARK),
+ skb_shinfo(skb)->nr_frags + 1);
+
+ prod = NEXT_TX(rp, prod);
+
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
+ skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
+
+ len = frag->size;
+ mapping = np->ops->map_page(np->device, frag->page,
+ frag->page_offset, len,
+ DMA_TO_DEVICE);
+
+ rp->tx_buffs[prod].skb = NULL;
+ rp->tx_buffs[prod].mapping = mapping;
+
+ niu_set_txd(rp, prod, mapping, len, 0, 0);
+
+ prod = NEXT_TX(rp, prod);
+ }
+
+ if (prod < rp->prod)
+ rp->wrap_bit ^= TX_RING_KICK_WRAP;
+ rp->prod = prod;
+
+ nw64(rp->wrap_bit | (prod << 3), TX_RING_KICK(rp->tx_channel));
+
+ dev->trans_start = jiffies;
+
+out:
+ return NETDEV_TX_OK;
+
+out_drop:
+ rp->tx_errors++;
+ kfree_skb(skb);
+ goto out;
+}
+
+static int niu_change_mtu(struct net_device *dev, int new_mtu)
+{
+ if (new_mtu < 68 || new_mtu > ETH_DATA_LEN)
+ return -EINVAL;
+ dev->mtu = new_mtu;
+ return 0;
+}
+
+static void niu_get_drvinfo(struct net_device *dev,
+ struct ethtool_drvinfo *info)
+{
+ struct niu *np = netdev_priv(dev);
+ struct niu_vpd *vpd = &np->vpd;
+
+ strcpy(info->driver, DRV_MODULE_NAME);
+ strcpy(info->version, DRV_MODULE_VERSION);
+ sprintf(info->fw_version, "%d.%d",
+ vpd->fcode_major, vpd->fcode_minor);
+ if (np->parent->plat_type == PLAT_TYPE_ATLAS)
+ strcpy(info->bus_info, pci_name(np->pdev));
+}
+
+static int niu_get_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ struct niu *np = netdev_priv(dev);
+ struct niu_link_config *lp;
+
+ lp = &np->link_config;
+
+ memset(cmd, 0, sizeof(*cmd));
+ cmd->phy_address = np->phy_addr;
+ cmd->supported = lp->supported;
+ cmd->advertising = lp->advertising;
+ cmd->autoneg = lp->autoneg;
+ cmd->speed = lp->active_speed;
+ cmd->duplex = lp->active_duplex;
+
+ return 0;
+}
+
+static int niu_set_settings(struct net_device *dev, struct ethtool_cmd *cmd)
+{
+ return -EINVAL;
+}
+
+static u32 niu_get_msglevel(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+ return np->msg_enable;
+}
+
+static void niu_set_msglevel(struct net_device *dev, u32 value)
+{
+ struct niu *np = netdev_priv(dev);
+ np->msg_enable = value;
+}
+
+static int niu_get_eeprom_len(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+
+ return np->eeprom_len;
+}
+
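+ /* ethtool EEPROM read. The SPROM is only addressable as 32-bit
+ * words through ESPC_NCR, so an unaligned head and a short tail
+ * are assembled from partial word reads around the aligned bulk
+ * copy loop.
+ */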
+static int niu_get_eeprom(struct net_device *dev, struct ethtool_eeprom *eeprom, u8 *data)
+{
+ struct niu *np = netdev_priv(dev);
+ u32 offset, len, val;
+
+ offset = eeprom->offset;
+ len = eeprom->len;
+
+ if (offset + len < offset)
+ return -EINVAL;
+ if (offset >= np->eeprom_len)
+ return -EINVAL;
+ if (offset + len > np->eeprom_len)
+ len = eeprom->len = np->eeprom_len - offset;
+
+ if (offset & 3) {
+ u32 b_offset, b_count;
+
+ b_offset = offset & 3;
+ b_count = 4 - b_offset;
+ if (b_count > len)
+ b_count = len;
+
+ val = nr64(ESPC_NCR((offset - b_offset) / 4));
+ memcpy(data, ((char *)&val) + b_offset, b_count);
+ data += b_count;
+ len -= b_count;
+ offset += b_count;
+ }
+ while (len >= 4) {
+ val = nr64(ESPC_NCR(offset / 4));
+ memcpy(data, &val, 4);
+ data += 4;
+ len -= 4;
+ offset += 4;
+ }
+ if (len) {
+ val = nr64(ESPC_NCR(offset / 4));
+ memcpy(data, &val, len);
+ }
+ return 0;
+}
+
+static const struct {
+ const char string[ETH_GSTRING_LEN];
+} niu_xmac_stat_keys[] = {
+ { "tx_frames" },
+ { "tx_bytes" },
+ { "tx_fifo_errors" },
+ { "tx_overflow_errors" },
+ { "tx_max_pkt_size_errors" },
+ { "tx_underflow_errors" },
+ { "rx_local_faults" },
+ { "rx_remote_faults" },
+ { "rx_link_faults" },
+ { "rx_align_errors" },
+ { "rx_frags" },
+ { "rx_mcasts" },
+ { "rx_bcasts" },
+ { "rx_hist_cnt1" },
+ { "rx_hist_cnt2" },
+ { "rx_hist_cnt3" },
+ { "rx_hist_cnt4" },
+ { "rx_hist_cnt5" },
+ { "rx_hist_cnt6" },
+ { "rx_hist_cnt7" },
+ { "rx_octets" },
+ { "rx_code_violations" },
+ { "rx_len_errors" },
+ { "rx_crc_errors" },
+ { "rx_underflows" },
+ { "rx_overflows" },
+ { "pause_off_state" },
+ { "pause_on_state" },
+ { "pause_received" },
+};
+
+#define NUM_XMAC_STAT_KEYS ARRAY_SIZE(niu_xmac_stat_keys)
+
+static const struct {
+ const char string[ETH_GSTRING_LEN];
+} niu_bmac_stat_keys[] = {
+ { "tx_underflow_errors" },
+ { "tx_max_pkt_size_errors" },
+ { "tx_bytes" },
+ { "tx_frames" },
+ { "rx_overflows" },
+ { "rx_frames" },
+ { "rx_align_errors" },
+ { "rx_crc_errors" },
+ { "rx_len_errors" },
+ { "pause_off_state" },
+ { "pause_on_state" },
+ { "pause_received" },
+};
+
+#define NUM_BMAC_STAT_KEYS ARRAY_SIZE(niu_bmac_stat_keys)
+
+static const struct {
+ const char string[ETH_GSTRING_LEN];
+} niu_rxchan_stat_keys[] = {
+ { "rx_channel" },
+ { "rx_packets" },
+ { "rx_bytes" },
+ { "rx_dropped" },
+ { "rx_errors" },
+};
+
+#define NUM_RXCHAN_STAT_KEYS ARRAY_SIZE(niu_rxchan_stat_keys)
+
+static const struct {
+ const char string[ETH_GSTRING_LEN];
+} niu_txchan_stat_keys[] = {
+ { "tx_channel" },
+ { "tx_packets" },
+ { "tx_bytes" },
+ { "tx_errors" },
+};
+
+#define NUM_TXCHAN_STAT_KEYS ARRAY_SIZE(niu_txchan_stat_keys)
+
+static void niu_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+ struct niu *np = netdev_priv(dev);
+ int i;
+
+ if (stringset != ETH_SS_STATS)
+ return;
+
+ if (np->flags & NIU_FLAGS_XMAC) {
+ memcpy(data, niu_xmac_stat_keys, sizeof(niu_xmac_stat_keys));
+ data += sizeof(niu_xmac_stat_keys);
+ } else {
+ memcpy(data, niu_bmac_stat_keys, sizeof(niu_bmac_stat_keys));
+ data += sizeof(niu_bmac_stat_keys);
+ }
+ for (i = 0; i < np->num_rx_rings; i++) {
+ memcpy(data, niu_rxchan_stat_keys, sizeof(niu_rxchan_stat_keys));
+ data += sizeof(niu_rxchan_stat_keys);
+ }
+ for (i = 0; i < np->num_tx_rings; i++) {
+ memcpy(data, niu_txchan_stat_keys, sizeof(niu_txchan_stat_keys));
+ data += sizeof(niu_txchan_stat_keys);
+ }
+}
+
+static int niu_get_stats_count(struct net_device *dev)
+{
+ struct niu *np = netdev_priv(dev);
+
+ return ((np->flags & NIU_FLAGS_XMAC ?
+ NUM_XMAC_STAT_KEYS :
+ NUM_BMAC_STAT_KEYS) +
+ (np->num_rx_rings * NUM_RXCHAN_STAT_KEYS) +
+ (np->num_tx_rings * NUM_TXCHAN_STAT_KEYS));
+}
+
+static void niu_get_ethtool_stats(struct net_device *dev, struct ethtool_stats *stats, u64 *data)
+{
+ struct niu *np = netdev_priv(dev);
+ int i;
+
+ niu_sync_mac_stats(np);
+ if (np->flags & NIU_FLAGS_XMAC) {
+ memcpy(data, &np->mac_stats.xmac, sizeof(struct niu_xmac_stats));
+ data += (sizeof(struct niu_xmac_stats) / sizeof(u64));
+ } else {
+ memcpy(data, &np->mac_stats.bmac, sizeof(struct niu_bmac_stats));
+ data += (sizeof(struct niu_bmac_stats) / sizeof(u64));
+ }
+ for (i = 0; i < np->num_rx_rings; i++) {
+ struct rx_ring_info *rp = &np->rx_rings[i];
+
+ data[0] = rp->rx_channel;
+ data[1] = rp->rx_packets;
+ data[2] = rp->rx_bytes;
+ data[3] = rp->rx_dropped;
+ data[4] = rp->rx_errors;
+ data += 5;
+ }
+ for (i = 0; i < np->num_tx_rings; i++) {
+ struct tx_ring_info *rp = &np->tx_rings[i];
+
+ data[0] = rp->tx_channel;
+ data[1] = rp->tx_packets;
+ data[2] = rp->tx_bytes;
+ data[3] = rp->tx_errors;
+ data += 4;
+ }
+}
+
+static u64 niu_led_state_save(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ return nr64_mac(XMAC_CONFIG);
+ else
+ return nr64_mac(BMAC_XIF_CONFIG);
+}
+
+static void niu_led_state_restore(struct niu *np, u64 val)
+{
+ if (np->flags & NIU_FLAGS_XMAC)
+ nw64_mac(val, XMAC_CONFIG);
+ else
+ nw64_mac(val, BMAC_XIF_CONFIG);
+}
+
+static void niu_force_led(struct niu *np, int on)
+{
+ u64 val, reg, bit;
+
+ if (np->flags & NIU_FLAGS_XMAC) {
+ reg = XMAC_CONFIG;
+ bit = XMAC_CONFIG_FORCE_LED_ON;
+ } else {
+ reg = BMAC_XIF_CONFIG;
+ bit = BMAC_XIF_CONFIG_LINK_LED;
+ }
+
+ val = nr64_mac(reg);
+ if (on)
+ val |= bit;
+ else
+ val &= ~bit;
+ nw64_mac(val, reg);
+}
+
+static int niu_phys_id(struct net_device *dev, u32 data)
+{
+ struct niu *np = netdev_priv(dev);
+ u64 orig_led_state;
+ int i;
+
+ if (!netif_running(dev))
+ return -EAGAIN;
+
+ if (data == 0)
+ data = 2;
+
+ orig_led_state = niu_led_state_save(np);
+ for (i = 0; i < (data * 2); i++) {
+ int on = ((i % 2) == 0);
+
+ niu_force_led(np, on);
+
+ if (msleep_interruptible(500))
+ break;
+ }
+ niu_led_state_restore(np, orig_led_state);
+
+ return 0;
+}
+
+static const struct ethtool_ops niu_ethtool_ops = {
+ .get_drvinfo = niu_get_drvinfo,
+ .get_link = ethtool_op_get_link,
+ .get_msglevel = niu_get_msglevel,
+ .set_msglevel = niu_set_msglevel,
+ .get_eeprom_len = niu_get_eeprom_len,
+ .get_eeprom = niu_get_eeprom,
+ .get_settings = niu_get_settings,
+ .set_settings = niu_set_settings,
+ .get_strings = niu_get_strings,
+ .get_stats_count = niu_get_stats_count,
+ .get_ethtool_stats = niu_get_ethtool_stats,
+ .phys_id = niu_phys_id,
+};
+
+static int niu_ldg_assign_ldn(struct niu *np, struct niu_parent *parent,
+ int ldg, int ldn)
+{
+ if (ldg < NIU_LDG_MIN || ldg > NIU_LDG_MAX)
+ return -EINVAL;
+ if (ldn < 0 || ldn > LDN_MAX)
+ return -EINVAL;
+
+ parent->ldg_map[ldn] = ldg;
+ nw64(ldg, LDG_NUM(ldn));
+
+ return 0;
+}
+
+static int niu_set_ldg_timer_res(struct niu *np, int res)
+{
+ if (res < 0 || res > LDG_TIMER_RES_VAL)
+ return -EINVAL;
+
+ nw64(res, LDG_TIMER_RES);
+
+ return 0;
+}
+
+static int niu_set_ldg_sid(struct niu *np, int ldg, int func, int vector)
+{
+ if ((ldg < NIU_LDG_MIN || ldg > NIU_LDG_MAX) ||
+ (func < 0 || func > 3) ||
+ (vector < 0 || vector > 0x1f))
+ return -EINVAL;
+
+ nw64((func << SID_FUNC_SHIFT) | vector, SID(ldg));
+
+ return 0;
+}
+
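+ /* Read one EEPROM byte via the ESPC PIO interface: post the read
+ * frame, then poll up to 64 times at 5us for READ_END. The
+ * command is issued twice; presumably the first pass primes the
+ * serial EEPROM and only the second returns stable data. Returns
+ * the data byte or a negative errno.
+ */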
+static int __devinit niu_pci_eeprom_read(struct niu *np, u32 addr)
+{
+ u64 frame, frame_base = (ESPC_PIO_STAT_READ_START |
+ (addr << ESPC_PIO_STAT_ADDR_SHIFT));
+ int limit;
+
+ if (addr > (ESPC_PIO_STAT_ADDR >> ESPC_PIO_STAT_ADDR_SHIFT))
+ return -EINVAL;
+
+ frame = frame_base;
+ nw64(frame, ESPC_PIO_STAT);
+ limit = 64;
+ do {
+ udelay(5);
+ frame = nr64(ESPC_PIO_STAT);
+ if (frame & ESPC_PIO_STAT_READ_END)
+ break;
+ } while (limit--);
+ if (!(frame & ESPC_PIO_STAT_READ_END)) {
+ printk(KERN_ERR PFX "EEPROM read timeout frame[%lx]\n",
+ frame);
+ return -ENODEV;
+ }
+
+ frame = frame_base;
+ nw64(frame, ESPC_PIO_STAT);
+ limit = 64;
+ do {
+ udelay(5);
+ frame = nr64(ESPC_PIO_STAT);
+ if (frame & ESPC_PIO_STAT_READ_END)
+ break;
+ } while (limit--);
+ if (!(frame & ESPC_PIO_STAT_READ_END)) {
+ printk(KERN_ERR PFX "EEPROM read timeout frame[%lx]\n",
+ frame);
+ return -ENODEV;
+ }
+
+ frame = nr64(ESPC_PIO_STAT);
+ return (frame & ESPC_PIO_STAT_DATA) >> ESPC_PIO_STAT_DATA_SHIFT;
+}
+
+static int __devinit niu_pci_eeprom_read16(struct niu *np, u32 off)
+{
+ int err = niu_pci_eeprom_read(np, off);
+ u16 val;
+
+ if (err < 0)
+ return err;
+ val = (err << 8);
+ err = niu_pci_eeprom_read(np, off + 1);
+ if (err < 0)
+ return err;
+ val |= (err & 0xff);
+
+ return val;
+}
+
+static int __devinit niu_pci_eeprom_read16_swp(struct niu *np, u32 off)
+{
+ int err = niu_pci_eeprom_read(np, off);
+ u16 val;
+
+ if (err < 0)
+ return err;
+ val = (err & 0xff);
+ err = niu_pci_eeprom_read(np, off + 1);
+ if (err < 0)
+ return err;
+ val |= (err & 0xff) << 8;
+
+ return val;
+}
+
+static int __devinit niu_pci_vpd_get_propname(struct niu *np,
+ u32 off,
+ char *namebuf,
+ int namebuf_len)
+{
+ int i;
+
+ for (i = 0; i < namebuf_len; i++) {
+ int err = niu_pci_eeprom_read(np, off + i);
+ if (err < 0)
+ return err;
+ *namebuf++ = err;
+ if (!err)
+ break;
+ }
+ if (i >= namebuf_len)
+ return -EINVAL;
+
+ return i + 1;
+}
+
+static void __devinit niu_vpd_parse_version(struct niu *np)
+{
+ struct niu_vpd *vpd = &np->vpd;
+ int len = strlen(vpd->version) + 1;
+ const char *s = vpd->version;
+ int i;
+
+ for (i = 0; i < len - 5; i++) {
+ if (!strncmp(s + i, "FCode ", 5))
+ break;
+ }
+ if (i >= len - 5)
+ return;
+
+ s += i + 5;
+ sscanf(s, "%d.%d", &vpd->fcode_major, &vpd->fcode_minor);
+
+ niudbg(PROBE, "VPD_SCAN: FCODE major(%d) minor(%d)\n",
+ vpd->fcode_major, vpd->fcode_minor);
+ if (vpd->fcode_major > NIU_VPD_MIN_MAJOR ||
+ (vpd->fcode_major == NIU_VPD_MIN_MAJOR &&
+ vpd->fcode_minor >= NIU_VPD_MIN_MINOR))
+ np->flags |= NIU_FLAGS_VPD_VALID;
+}
+
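+ /* VPD property records, as parsed below: a 3-byte header whose
+ * byte at +2 gives the record length, then instance, two bytes
+ * that are skipped here, type, the property length, and a
+ * NUL-terminated property name followed by the payload. The scan
+ * returns 1 once every property of interest has been captured.
+ */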
+/* ESPC_PIO_EN_ENABLE must be set */
+static int __devinit niu_pci_vpd_scan_props(struct niu *np, u32 start, u32 end)
+{
+ unsigned int found_mask = 0;
+#define FOUND_MASK_MODEL 0x00000001
+#define FOUND_MASK_BMODEL 0x00000002
+#define FOUND_MASK_VERS 0x00000004
+#define FOUND_MASK_MAC 0x00000008
+#define FOUND_MASK_NMAC 0x00000010
+#define FOUND_MASK_PHY 0x00000020
+#define FOUND_MASK_ALL 0x0000003f
+
+ niudbg(PROBE, "VPD_SCAN: start[%x] end[%x]\n",
+ start, end);
+ while (start < end) {
+ int len, err, instance, type, prop_len;
+ char namebuf[64];
+ u8 *prop_buf;
+ int max_len;
+
+ if (found_mask == FOUND_MASK_ALL) {
+ niu_vpd_parse_version(np);
+ return 1;
+ }
+
+ err = niu_pci_eeprom_read(np, start + 2);
+ if (err < 0)
+ return err;
+ len = err;
+ start += 3;
+
+ instance = niu_pci_eeprom_read(np, start);
+ type = niu_pci_eeprom_read(np, start + 3);
+ prop_len = niu_pci_eeprom_read(np, start + 4);
+ err = niu_pci_vpd_get_propname(np, start + 5, namebuf, 64);
+ if (err < 0)
+ return err;
+
+ prop_buf = NULL;
+ max_len = 0;
+ if (!strcmp(namebuf, "model")) {
+ prop_buf = np->vpd.model;
+ max_len = NIU_VPD_MODEL_MAX;
+ found_mask |= FOUND_MASK_MODEL;
+ } else if (!strcmp(namebuf, "board-model")) {
+ prop_buf = np->vpd.board_model;
+ max_len = NIU_VPD_BD_MODEL_MAX;
+ found_mask |= FOUND_MASK_BMODEL;
+ } else if (!strcmp(namebuf, "version")) {
+ prop_buf = np->vpd.version;
+ max_len = NIU_VPD_VERSION_MAX;
+ found_mask |= FOUND_MASK_VERS;
+ } else if (!strcmp(namebuf, "local-mac-address")) {
+ prop_buf = np->vpd.local_mac;
+ max_len = ETH_ALEN;
+ found_mask |= FOUND_MASK_MAC;
+ } else if (!strcmp(namebuf, "num-mac-addresses")) {
+ prop_buf = &np->vpd.mac_num;
+ max_len = 1;
+ found_mask |= FOUND_MASK_NMAC;
+ } else if (!strcmp(namebuf, "phy-type")) {
+ prop_buf = np->vpd.phy_type;
+ max_len = NIU_VPD_PHY_TYPE_MAX;
+ found_mask |= FOUND_MASK_PHY;
+ }
+
+ if (max_len && prop_len > max_len) {
+ printk(KERN_ERR PFX "Property '%s' length (%d) is "
+ "too long.\n", namebuf, prop_len);
+ return -EINVAL;
+ }
+
+ if (prop_buf) {
+ u32 off = start + 5 + err;
+ int i;
+
+ niudbg(PROBE, "VPD_SCAN: Reading in property [%s] "
+ "len[%d]\n", namebuf, prop_len);
+ for (i = 0; i < prop_len; i++)
+ *prop_buf++ = niu_pci_eeprom_read(np, off + i);
+ }
+
+ start += len;
+ }
+
+ return 0;
+}
+
+/* ESPC_PIO_EN_ENABLE must be set */
+static void __devinit niu_pci_vpd_fetch(struct niu *np, u32 start)
+{
+ u32 offset;
+ int err;
+
+ err = niu_pci_eeprom_read16_swp(np, start + 1);
+ if (err < 0)
+ return;
+
+ offset = err + 3;
+
+ while (start + offset < ESPC_EEPROM_SIZE) {
+ u32 here = start + offset;
+ u32 end;
+
+ err = niu_pci_eeprom_read(np, here);
+ if (err != 0x90)
+ return;
+
+ err = niu_pci_eeprom_read16_swp(np, here + 1);
+ if (err < 0)
+ return;
+
+ here = start + offset + 3;
+ end = start + offset + err;
+
+ offset += err;
+
+ err = niu_pci_vpd_scan_props(np, here, end);
+ if (err < 0 || err == 1)
+ return;
+ }
+}
+
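+ /* Locate the VPD data by walking the chain of PCI expansion ROM
+ * images: verify the 0x55aa ROM signature, follow the pointer at
+ * +23 to the "PCIR" data structure, and skip non-OBP images
+ * (code type != 0x01) using the length byte in 512-byte units.
+ * For the OBP image, the 16-bit pointer at PCIR offset 8 (the
+ * VPD pointer in older PCI specs) should land on an
+ * identifier-string tag (0x82), which is where the properties
+ * live.
+ */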
+/* ESPC_PIO_EN_ENABLE must be set */
+static u32 __devinit niu_pci_vpd_offset(struct niu *np)
+{
+ u32 start = 0, end = ESPC_EEPROM_SIZE, ret;
+ int err;
+
+ while (start < end) {
+ ret = start;
+
+ /* ROM header signature? */
+ err = niu_pci_eeprom_read16(np, start + 0);
+ if (err != 0x55aa)
+ return 0;
+
+ /* Apply offset to PCI data structure. */
+ err = niu_pci_eeprom_read16(np, start + 23);
+ if (err < 0)
+ return 0;
+ start += err;
+
+ /* Check for "PCIR" signature. */
+ err = niu_pci_eeprom_read16(np, start + 0);
+ if (err != 0x5043)
+ return 0;
+ err = niu_pci_eeprom_read16(np, start + 2);
+ if (err != 0x4952)
+ return 0;
+
+ /* Check for OBP image type. */
+ err = niu_pci_eeprom_read(np, start + 20);
+ if (err < 0)
+ return 0;
+ if (err != 0x01) {
+ err = niu_pci_eeprom_read(np, ret + 2);
+ if (err < 0)
+ return 0;
+
+ start = ret + (err * 512);
+ continue;
+ }
+
+ err = niu_pci_eeprom_read16_swp(np, start + 8);
+ if (err < 0)
+ return err;
+ ret += err;
+
+ err = niu_pci_eeprom_read(np, ret + 0);
+ if (err != 0x82)
+ return 0;
+
+ return ret;
+ }
+
+ return 0;
+}
+
+static void __devinit niu_pci_vpd_validate(struct niu *np)
+{
+ struct net_device *dev = np->dev;
+ struct niu_vpd *vpd = &np->vpd;
+ u8 val8;
+
+ if (!is_valid_ether_addr(&vpd->local_mac[0])) {
+ printk(KERN_ERR PFX "VPD MAC invalid, "
+ "falling back to SPROM.\n");
+
+ np->flags &= ~NIU_FLAGS_VPD_VALID;
+ return;
+ }
+
+ if (!strcmp(np->vpd.phy_type, "mif")) {
+ /* 1G copper, MII */
+ np->flags &= ~(NIU_FLAGS_FIBER |
+ NIU_FLAGS_10G);
+ np->mac_xcvr = MAC_XCVR_MII;
+ } else if (!strcmp(np->vpd.phy_type, "xgf")) {
+ /* 10G fiber, XPCS */
+ np->flags |= (NIU_FLAGS_10G |
+ NIU_FLAGS_FIBER);
+ np->mac_xcvr = MAC_XCVR_XPCS;
+ } else if (!strcmp(np->vpd.phy_type, "pcs")) {
+ /* 1G fiber, PCS */
+ np->flags &= ~NIU_FLAGS_10G;
+ np->flags |= NIU_FLAGS_FIBER;
+ np->mac_xcvr = MAC_XCVR_PCS;
+ } else if (!strcmp(np->vpd.phy_type, "xgc")) {
+ /* 10G copper, XPCS */
+ np->flags |= NIU_FLAGS_10G;
+ np->flags &= ~NIU_FLAGS_FIBER;
+ np->mac_xcvr = MAC_XCVR_XPCS;
+ } else {
+ printk(KERN_ERR PFX "Illegal phy string [%s].\n",
+ np->vpd.phy_type);
+ printk(KERN_ERR PFX "Falling back to SPROM.\n");
+ np->flags &= ~NIU_FLAGS_VPD_VALID;
+ return;
+ }
+
+ memcpy(dev->perm_addr, vpd->local_mac, ETH_ALEN);
+
+ val8 = dev->perm_addr[5];
+ dev->perm_addr[5] += np->port;
+ if (dev->perm_addr[5] < val8)
+ dev->perm_addr[4]++;
+
+ memcpy(dev->dev_addr, dev->perm_addr, dev->addr_len);
+}
+
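+ /* Fall back to the on-board SPROM when the VPD is absent or
+ * invalid. The image length register gives the size in 32-bit
+ * words, the byte-wise sum of the image must end in 0xab, the
+ * MAC address is split across two registers, and the model
+ * strings are packed four bytes per word with the first
+ * character in the high byte. As with VPD, the port index is
+ * added into the low MAC byte with carry into the next one.
+ */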
+static int __devinit niu_pci_probe_sprom(struct niu *np)
+{
+ struct net_device *dev = np->dev;
+ int len, i;
+ u64 val, sum;
+ u8 val8;
+
+ val = (nr64(ESPC_VER_IMGSZ) & ESPC_VER_IMGSZ_IMGSZ);
+ val >>= ESPC_VER_IMGSZ_IMGSZ_SHIFT;
+ len = val / 4;
+
+ np->eeprom_len = len;
+
+ niudbg(PROBE, "SPROM: Image size %lu\n", val);
+
+ sum = 0;
+ for (i = 0; i < len; i++) {
+ val = nr64(ESPC_NCR(i));
+ sum += (val >> 0) & 0xff;
+ sum += (val >> 8) & 0xff;
+ sum += (val >> 16) & 0xff;
+ sum += (val >> 24) & 0xff;
+ }
+ niudbg(PROBE, "SPROM: Checksum %x\n", (int)(sum & 0xff));
+ if ((sum & 0xff) != 0xab) {
+ printk(KERN_ERR PFX "Bad SPROM checksum "
+ "(%x, should be 0xab)\n", (int) (sum & 0xff));
+ return -EINVAL;
+ }
+
+ val = nr64(ESPC_PHY_TYPE);
+ switch (np->port) {
+ case 0:
+ val = (val & ESPC_PHY_TYPE_PORT0) >>
+ ESPC_PHY_TYPE_PORT0_SHIFT;
+ break;
+ case 1:
+ val = (val & ESPC_PHY_TYPE_PORT1) >>
+ ESPC_PHY_TYPE_PORT1_SHIFT;
+ break;
+ case 2:
+ val = (val & ESPC_PHY_TYPE_PORT2) >>
+ ESPC_PHY_TYPE_PORT2_SHIFT;
+ break;
+ case 3:
+ val = (val & ESPC_PHY_TYPE_PORT3) >>
+ ESPC_PHY_TYPE_PORT3_SHIFT;
+ break;
+ default:
+ printk(KERN_ERR PFX "Bogus port number %u\n",
+ np->port);
+ return -EINVAL;
+ }
+ niudbg(PROBE, "SPROM: PHY type %lx\n", val);
+
+ switch (val) {
+ case ESPC_PHY_TYPE_1G_COPPER:
+ /* 1G copper, MII */
+ np->flags &= ~(NIU_FLAGS_FIBER |
+ NIU_FLAGS_10G);
+ np->mac_xcvr = MAC_XCVR_MII;
+ break;
+
+ case ESPC_PHY_TYPE_1G_FIBER:
+ /* 1G fiber, PCS */
+ np->flags &= ~NIU_FLAGS_10G;
+ np->flags |= NIU_FLAGS_FIBER;
+ np->mac_xcvr = MAC_XCVR_PCS;
+ break;
+
+ case ESPC_PHY_TYPE_10G_COPPER:
+ /* 10G copper, XPCS */
+ np->flags |= NIU_FLAGS_10G;
+ np->flags &= ~NIU_FLAGS_FIBER;
+ np->mac_xcvr = MAC_XCVR_XPCS;
+ break;
+
+ case ESPC_PHY_TYPE_10G_FIBER:
+ /* 10G fiber, XPCS */
+ np->flags |= (NIU_FLAGS_10G |
+ NIU_FLAGS_FIBER);
+ np->mac_xcvr = MAC_XCVR_XPCS;
+ break;
+
+ default:
+ printk(KERN_ERR PFX "Bogus SPROM phy type %lu\n", val);
+ return -EINVAL;
+ }
+
+ val = nr64(ESPC_MAC_ADDR0);
+ niudbg(PROBE, "SPROM: MAC_ADDR0[%08lx]\n", val);
+ dev->perm_addr[0] = (val >> 0) & 0xff;
+ dev->perm_addr[1] = (val >> 8) & 0xff;
+ dev->perm_addr[2] = (val >> 16) & 0xff;
+ dev->perm_addr[3] = (val >> 24) & 0xff;
+
+ val = nr64(ESPC_MAC_ADDR1);
+ niudbg(PROBE, "SPROM: MAC_ADDR1[%08lx]\n", val);
+ dev->perm_addr[4] = (val >> 0) & 0xff;
+ dev->perm_addr[5] = (val >> 8) & 0xff;
+
+ if (!is_valid_ether_addr(&dev->perm_addr[0])) {
+ printk(KERN_ERR PFX "SPROM MAC address invalid\n");
+ printk(KERN_ERR PFX "[ \n");
+ for (i = 0; i < 6; i++)
+ printk("%02x ", dev->perm_addr[i]);
+ printk("]\n");
+ return -EINVAL;
+ }
+
+ val8 = dev->perm_addr[5];
+ dev->perm_addr[5] += np->port;
+ if (dev->perm_addr[5] < val8)
+ dev->perm_addr[4]++;
+
+ memcpy(dev->dev_addr, dev->perm_addr, dev->addr_len);
+
+ val = nr64(ESPC_MOD_STR_LEN);
+ niudbg(PROBE, "SPROM: MOD_STR_LEN[%lu]\n", val);
+ if (val > 8 * 4)
+ return -EINVAL;
+
+ for (i = 0; i < val; i += 4) {
+ u64 tmp = nr64(ESPC_NCR(5 + (i / 4)));
+
+ np->vpd.model[i + 3] = (tmp >> 0) & 0xff;
+ np->vpd.model[i + 2] = (tmp >> 8) & 0xff;
+ np->vpd.model[i + 1] = (tmp >> 16) & 0xff;
+ np->vpd.model[i + 0] = (tmp >> 24) & 0xff;
+ }
+ np->vpd.model[val] = '\0';
+
+ val = nr64(ESPC_BD_MOD_STR_LEN);
+ niudbg(PROBE, "SPROM: BD_MOD_STR_LEN[%lu]\n", val);
+ if (val > 4 * 4)
+ return -EINVAL;
+
+ for (i = 0; i < val; i += 4) {
+ u64 tmp = nr64(ESPC_NCR(14 + (i / 4)));
+
+ np->vpd.board_model[i + 3] = (tmp >> 0) & 0xff;
+ np->vpd.board_model[i + 2] = (tmp >> 8) & 0xff;
+ np->vpd.board_model[i + 1] = (tmp >> 16) & 0xff;
+ np->vpd.board_model[i + 0] = (tmp >> 24) & 0xff;
+ }
+ np->vpd.board_model[val] = '\0';
+
+ np->vpd.mac_num =
+ nr64(ESPC_NUM_PORTS_MACS) & ESPC_NUM_PORTS_MACS_VAL;
+ niudbg(PROBE, "SPROM: NUM_PORTS_MACS[%d]\n",
+ np->vpd.mac_num);
+
+ return 0;
+}
+
+static int __devinit niu_get_and_validate_port(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+
+ if (np->port <= 1)
+ np->flags |= NIU_FLAGS_XMAC;
+
+ if (!parent->num_ports) {
+ parent->num_ports = nr64(ESPC_NUM_PORTS_MACS) &
+ ESPC_NUM_PORTS_MACS_VAL;
+ }
+
+ niudbg(PROBE, "niu_get_and_validate_port: port[%d] num_ports[%d]\n",
+ np->port, parent->num_ports);
+ if (np->port >= parent->num_ports)
+ return -ENODEV;
+
+ return 0;
+}
+
+static int __devinit phy_record(struct niu_parent *parent,
+ struct phy_probe_info *p,
+ int dev_id_1, int dev_id_2, __u8 phy_port,
+ int type)
+{
+ __u32 id = (dev_id_1 << 16) | dev_id_2;
+ __u8 idx;
+
+ if (dev_id_1 < 0 || dev_id_2 < 0)
+ return 0;
+ if (type == PHY_TYPE_PMA_PMD || type == PHY_TYPE_PCS) {
+ if ((id & NIU_PHY_ID_MASK) != NIU_PHY_ID_BCM8704)
+ return 0;
+ } else {
+ if ((id & NIU_PHY_ID_MASK) != NIU_PHY_ID_BCM5464R)
+ return 0;
+ }
+
+ printk(KERN_INFO PFX "niu%d: Found PHY %08x type %s at port %u\n",
+ parent->index, id,
+ (type == PHY_TYPE_PMA_PMD ?
+ "PMA/PMD" :
+ (type == PHY_TYPE_PCS ?
+ "PCS" : "MII")),
+ phy_port);
+
+ if (p->cur[type] >= NIU_MAX_PORTS) {
+ printk(KERN_ERR PFX "Too many PHY ports.\n");
+ return -EINVAL;
+ }
+ idx = p->cur[type];
+ p->phy_id[type][idx] = id;
+ p->phy_port[type][idx] = phy_port;
+ p->cur[type] = idx + 1;
+ return 0;
+}
+
+static int __devinit port_has_10g(struct phy_probe_info *p, int port)
+{
+ int i;
+
+ for (i = 0; i < p->cur[PHY_TYPE_PMA_PMD]; i++) {
+ if (p->phy_port[PHY_TYPE_PMA_PMD][i] == port)
+ return 1;
+ }
+ for (i = 0; i < p->cur[PHY_TYPE_PCS]; i++) {
+ if (p->phy_port[PHY_TYPE_PCS][i] == port)
+ return 1;
+ }
+
+ return 0;
+}
+
+static int __devinit count_10g_ports(struct phy_probe_info *p, int *lowest)
+{
+ int port, cnt;
+
+ cnt = 0;
+ *lowest = 32;
+ for (port = 8; port < 32; port++) {
+ if (port_has_10g(p, port)) {
+ if (!cnt)
+ *lowest = port;
+ cnt++;
+ }
+ }
+
+ return cnt;
+}
+
+static int __devinit count_1g_ports(struct phy_probe_info *p, int *lowest)
+{
+ *lowest = 32;
+ if (p->cur[PHY_TYPE_MII])
+ *lowest = p->phy_port[PHY_TYPE_MII][0];
+
+ return p->cur[PHY_TYPE_MII];
+}
+
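+ /* Partition the RX and TX DMA channels among the physical ports.
+ * A homogeneous port mix splits the channels evenly; otherwise
+ * each 1G port gets a fixed small share (1/8 of the RX channels,
+ * 1/6 of the TX channels) and the 10G ports divide the
+ * remainder. Overcommit is clamped back to one channel per port.
+ */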
+static void __devinit niu_divide_channels(struct niu_parent *parent,
+ int num_10g, int num_1g)
+{
+ int num_ports = parent->num_ports;
+ int rx_chans_per_10g, rx_chans_per_1g;
+ int tx_chans_per_10g, tx_chans_per_1g;
+ int i, tot_rx, tot_tx;
+
+ if (!num_10g || !num_1g) {
+ rx_chans_per_10g = rx_chans_per_1g =
+ (NIU_NUM_RXCHAN / num_ports);
+ tx_chans_per_10g = tx_chans_per_1g =
+ (NIU_NUM_TXCHAN / num_ports);
+ } else {
+ rx_chans_per_1g = NIU_NUM_RXCHAN / 8;
+ rx_chans_per_10g = (NIU_NUM_RXCHAN -
+ (rx_chans_per_1g * num_1g)) /
+ num_10g;
+
+ tx_chans_per_1g = NIU_NUM_TXCHAN / 6;
+ tx_chans_per_10g = (NIU_NUM_TXCHAN -
+ (tx_chans_per_1g * num_1g)) /
+ num_10g;
+ }
+
+ tot_rx = tot_tx = 0;
+ for (i = 0; i < num_ports; i++) {
+ int type = phy_decode(parent->port_phy, i);
+
+ if (type == PORT_TYPE_10G) {
+ parent->rxchan_per_port[i] = rx_chans_per_10g;
+ parent->txchan_per_port[i] = tx_chans_per_10g;
+ } else {
+ parent->rxchan_per_port[i] = rx_chans_per_1g;
+ parent->txchan_per_port[i] = tx_chans_per_1g;
+ }
+ printk(KERN_INFO PFX "niu%d: Port %u [%u RX chans] "
+ "[%u TX chans]\n",
+ parent->index, i,
+ parent->rxchan_per_port[i],
+ parent->txchan_per_port[i]);
+ tot_rx += parent->rxchan_per_port[i];
+ tot_tx += parent->txchan_per_port[i];
+ }
+
+ if (tot_rx > NIU_NUM_RXCHAN) {
+ printk(KERN_ERR PFX "niu%d: Too many RX channels (%d), "
+ "resetting to one per port.\n",
+ parent->index, tot_rx);
+ for (i = 0; i < num_ports; i++)
+ parent->rxchan_per_port[i] = 1;
+ }
+ if (tot_tx > NIU_NUM_TXCHAN) {
+ printk(KERN_ERR PFX "niu%d: Too many TX channels (%d), "
+ "resetting to one per port.\n",
+ parent->index, tot_tx);
+ for (i = 0; i < num_ports; i++)
+ parent->txchan_per_port[i] = 1;
+ }
+ if (tot_rx < NIU_NUM_RXCHAN || tot_tx < NIU_NUM_TXCHAN) {
+ printk(KERN_WARNING PFX "niu%d: Driver bug, wasted channels, "
+ "RX[%d] TX[%d]\n",
+ parent->index, tot_rx, tot_tx);
+ }
+}
+
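+ /* Give each port a contiguous run of RDC tables and fill every
+ * table slot round-robin with that port's RX channels, so any
+ * hash into a table always lands on a channel the port owns.
+ * The port's first channel doubles as its default RDC.
+ */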
+static void __devinit niu_divide_rdc_groups(struct niu_parent *parent,
+ int num_10g, int num_1g)
+{
+ int i, num_ports = parent->num_ports;
+ int rdc_group, rdc_groups_per_port;
+ int rdc_channel_base;
+
+ rdc_group = 0;
+ rdc_groups_per_port = NIU_NUM_RDC_TABLES / num_ports;
+
+ rdc_channel_base = 0;
+
+ for (i = 0; i < num_ports; i++) {
+ struct niu_rdc_tables *tp = &parent->rdc_group_cfg[i];
+ int grp, num_channels = parent->rxchan_per_port[i];
+ int this_channel_offset;
+
+ tp->first_table_num = rdc_group;
+ tp->num_tables = rdc_groups_per_port;
+ this_channel_offset = 0;
+ for (grp = 0; grp < tp->num_tables; grp++) {
+ struct rdc_table *rt = &tp->tables[grp];
+ int slot;
+
+ printk(KERN_INFO PFX "niu%d: Port %d RDC tbl(%d) [ ",
+ parent->index, i, tp->first_table_num + grp);
+ for (slot = 0; slot < NIU_RDC_TABLE_SLOTS; slot++) {
+ rt->rxdma_channel[slot] =
+ rdc_channel_base + this_channel_offset;
+
+ printk("%d ", rt->rxdma_channel[slot]);
+
+ if (++this_channel_offset == num_channels)
+ this_channel_offset = 0;
+ }
+ printk("]\n");
+ }
+
+ parent->rdc_default[i] = rdc_channel_base;
+
+ rdc_channel_base += num_channels;
+ rdc_group += rdc_groups_per_port;
+ }
+}
+
+static int __devinit fill_phy_probe_info(struct niu *np,
+ struct niu_parent *parent,
+ struct phy_probe_info *info)
+{
+ unsigned long flags;
+ int port, err;
+
+ memset(info, 0, sizeof(*info));
+
+ /* Port 0 to 7 are reserved for onboard Serdes, probe the rest. */
+ niu_lock_parent(np, flags);
+ err = 0;
+ for (port = 8; port < 32; port++) {
+ int dev_id_1, dev_id_2;
+
+ dev_id_1 = mdio_read(np, port,
+ NIU_PMA_PMD_DEV_ADDR, MII_PHYSID1);
+ dev_id_2 = mdio_read(np, port,
+ NIU_PMA_PMD_DEV_ADDR, MII_PHYSID2);
+ err = phy_record(parent, info, dev_id_1, dev_id_2, port,
+ PHY_TYPE_PMA_PMD);
+ if (err)
+ break;
+ dev_id_1 = mdio_read(np, port,
+ NIU_PCS_DEV_ADDR, MII_PHYSID1);
+ dev_id_2 = mdio_read(np, port,
+ NIU_PCS_DEV_ADDR, MII_PHYSID2);
+ err = phy_record(parent, info, dev_id_1, dev_id_2, port,
+ PHY_TYPE_PCS);
+ if (err)
+ break;
+ dev_id_1 = mii_read(np, port, MII_PHYSID1);
+ dev_id_2 = mii_read(np, port, MII_PHYSID2);
+ err = phy_record(parent, info, dev_id_1, dev_id_2, port,
+ PHY_TYPE_MII);
+ if (err)
+ break;
+ }
+ niu_unlock_parent(np, flags);
+
+ return err;
+}
+
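+ /* Derive the per-port PHY layout from the probe results. The
+ * switch key packs the 10G port count into the high nibble and
+ * the 1G count into the low one, so e.g. 0x22 is two 10G plus
+ * two 1G ports. The 2+4, 1+4 and 0+4 combinations only occur on
+ * PLAT_TYPE_NIU systems, where the lowest 1G MDIO port number
+ * (10 or 26) identifies the VF_P0 or VF_P1 board variant.
+ */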
+static int __devinit walk_phys(struct niu *np, struct niu_parent *parent)
+{
+ struct phy_probe_info *info = &parent->phy_probe_info;
+ int lowest_10g, lowest_1g;
+ int num_10g, num_1g;
+ __u32 val;
+ int err;
+
+ err = fill_phy_probe_info(np, parent, info);
+ if (err)
+ return err;
+
+ num_10g = count_10g_ports(info, &lowest_10g);
+ num_1g = count_1g_ports(info, &lowest_1g);
+
+ if (num_10g + num_1g != parent->num_ports) {
+ printk(KERN_ERR PFX "Bad port config 10[%d] 1[%d] "
+ "n_ports[%u]\n", num_10g, num_1g, parent->num_ports);
+ return -EINVAL;
+ }
+
+ switch ((num_10g << 4) | num_1g) {
+ case 0x24:
+ if (parent->plat_type != PLAT_TYPE_NIU)
+ goto expected_niu;
+
+ if (lowest_1g == 10)
+ parent->plat_type = PLAT_TYPE_VF_P0;
+ else if (lowest_1g == 26)
+ parent->plat_type = PLAT_TYPE_VF_P1;
+ else
+ goto unknown_vg_1g_port;
+
+ /* fallthru */
+ case 0x22:
+ val = (phy_encode(PORT_TYPE_10G, 0) |
+ phy_encode(PORT_TYPE_10G, 1) |
+ phy_encode(PORT_TYPE_1G, 2) |
+ phy_encode(PORT_TYPE_1G, 3));
+ break;
+
+ case 0x20:
+ val = (phy_encode(PORT_TYPE_10G, 0) |
+ phy_encode(PORT_TYPE_10G, 1));
+ break;
+
+ case 0x14:
+ if (parent->plat_type != PLAT_TYPE_NIU)
+ goto expected_niu;
+
+ if (lowest_1g == 10)
+ parent->plat_type = PLAT_TYPE_VF_P0;
+ else if (lowest_1g == 26)
+ parent->plat_type = PLAT_TYPE_VF_P1;
+ else
+ goto unknown_vg_1g_port;
+
+ /* fallthru */
+ case 0x13:
+ if ((lowest_10g & 0x7) == 0)
+ val = (phy_encode(PORT_TYPE_10G, 0) |
+ phy_encode(PORT_TYPE_1G, 1) |
+ phy_encode(PORT_TYPE_1G, 2) |
+ phy_encode(PORT_TYPE_1G, 3));
+ else
+ val = (phy_encode(PORT_TYPE_1G, 0) |
+ phy_encode(PORT_TYPE_10G, 1) |
+ phy_encode(PORT_TYPE_1G, 2) |
+ phy_encode(PORT_TYPE_1G, 3));
+ break;
+
+ case 0x04:
+ if (parent->plat_type != PLAT_TYPE_NIU)
+ goto expected_niu;
+
+ if (lowest_1g == 10)
+ parent->plat_type = PLAT_TYPE_VF_P0;
+ else if (lowest_1g == 26)
+ parent->plat_type = PLAT_TYPE_VF_P1;
+ else
+ goto unknown_vg_1g_port;
+
+ val = (phy_encode(PORT_TYPE_1G, 0) |
+ phy_encode(PORT_TYPE_1G, 1) |
+ phy_encode(PORT_TYPE_1G, 2) |
+ phy_encode(PORT_TYPE_1G, 3));
+ break;
+
+ default:
+ printk(KERN_ERR PFX "Unsupported port config "
+ "10G[%d] 1G[%d]\n",
+ num_10g, num_1g);
+ return -EINVAL;
+ }
+
+ parent->port_phy = val;
+
+ niu_divide_channels(parent, num_10g, num_1g);
+ niu_divide_rdc_groups(parent, num_10g, num_1g);
+
+ return 0;
+
+expected_niu:
+ printk(KERN_ERR PFX "Found 10g[%d] 1g[%d] on non-NIU plat_type[%u]\n",
+ num_10g, num_1g, parent->plat_type);
+ return -EINVAL;
+
+unknown_vg_1g_port:
+ printk(KERN_ERR PFX "Cannot identify platform type, 1gport=%d\n",
+ lowest_1g);
+ return -EINVAL;
+}
+
+static int __devinit niu_probe_ports(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ int err, i;
+
+ niudbg(PROBE, "niu_probe_ports(): port_phy[%08x]\n",
+ parent->port_phy);
+
+ if (parent->port_phy == PORT_PHY_UNKNOWN) {
+ err = walk_phys(np, parent);
+ if (err)
+ return err;
+
+ niu_set_ldg_timer_res(np, 2);
+ for (i = 0; i <= LDN_MAX; i++)
+ niu_ldn_irq_enable(np, i, 0);
+ }
+
+ if (parent->port_phy == PORT_PHY_INVALID)
+ return -EINVAL;
+
+ return 0;
+}
+
+static int __devinit niu_classifier_swstate_init(struct niu *np)
+{
+ struct niu_classifier *cp = &np->clas;
+
+ niudbg(PROBE, "niu_classifier_swstate_init: num_tcam(%d)\n",
+ np->parent->tcam_num_entries);
+
+ cp->tcam_index = (u16) np->port;
+ cp->h1_init = 0xffffffff;
+ cp->h2_init = 0xffff;
+
+ return fflp_early_init(np);
+}
+
+static void __devinit niu_link_config_init(struct niu *np)
+{
+ struct niu_link_config *lp = &np->link_config;
+
+ lp->advertising = (ADVERTISED_10baseT_Half |
+ ADVERTISED_10baseT_Full |
+ ADVERTISED_100baseT_Half |
+ ADVERTISED_100baseT_Full |
+ ADVERTISED_1000baseT_Half |
+ ADVERTISED_1000baseT_Full |
+ ADVERTISED_10000baseT_Full |
+ ADVERTISED_Autoneg);
+ lp->speed = lp->active_speed = SPEED_INVALID;
+ lp->duplex = lp->active_duplex = DUPLEX_INVALID;
+#if 0
+ lp->loopback_mode = LOOPBACK_MAC;
+ lp->active_speed = SPEED_10000;
+ lp->active_duplex = DUPLEX_FULL;
+#else
+ lp->loopback_mode = LOOPBACK_DISABLED;
+#endif
+}
+
+static int __devinit niu_init_mac_ipp_pcs_base(struct niu *np)
+{
+ switch (np->port) {
+ case 0:
+ np->mac_regs = np->regs + XMAC_PORT0_OFF;
+ np->ipp_off = 0x00000;
+ np->pcs_off = 0x04000;
+ np->xpcs_off = 0x02000;
+ break;
+
+ case 1:
+ np->mac_regs = np->regs + XMAC_PORT1_OFF;
+ np->ipp_off = 0x08000;
+ np->pcs_off = 0x0a000;
+ np->xpcs_off = 0x08000;
+ break;
+
+ case 2:
+ np->mac_regs = np->regs + BMAC_PORT2_OFF;
+ np->ipp_off = 0x04000;
+ np->pcs_off = 0x0e000;
+ np->xpcs_off = ~0UL;
+ break;
+
+ case 3:
+ np->mac_regs = np->regs + BMAC_PORT3_OFF;
+ np->ipp_off = 0x0c000;
+ np->pcs_off = 0x12000;
+ np->xpcs_off = ~0UL;
+ break;
+
+ default:
+ printk(KERN_ERR PFX "Port %u is invalid, cannot "
+ "compute MAC block offset.\n", np->port);
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
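+ /* Try to switch to MSI-X, asking for one vector per RX channel,
+ * one per TX channel, and one more for everything else (MAC,
+ * MIF, errors). When pci_enable_msix() can only grant fewer
+ * vectors it returns the available count, so retry with that
+ * smaller allocation before giving up and staying on INTx.
+ */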
+static void __devinit niu_try_msix(struct niu *np)
+{
+ struct msix_entry msi_vec[NIU_NUM_LDG];
+ struct niu_parent *parent = np->parent;
+ struct pci_dev *pdev = np->pdev;
+ int i, num_irqs, err;
+
+ num_irqs = (parent->rxchan_per_port[np->port] +
+ parent->txchan_per_port[np->port] +
+ 1);
+ BUG_ON(num_irqs > (NIU_NUM_LDG / parent->num_ports));
+
+retry:
+ for (i = 0; i < num_irqs; i++) {
+ msi_vec[i].vector = 0;
+ msi_vec[i].entry = i;
+ }
+
+ err = pci_enable_msix(pdev, msi_vec, num_irqs);
+ if (err < 0) {
+ np->flags &= ~NIU_FLAGS_MSIX;
+ return;
+ }
+ if (err > 0) {
+ num_irqs = err;
+ goto retry;
+ }
+
+ np->flags |= NIU_FLAGS_MSIX;
+ for (i = 0; i < num_irqs; i++)
+ np->ldg[i].irq = msi_vec[i].vector;
+ np->num_ldg = num_irqs;
+}
+
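+ /* Set up the logical device groups. Each port owns a contiguous
+ * slice of the NIU_NUM_LDG groups; the RX and TX DMA channels
+ * are spread round-robin over however many LDGs (and thus MSI-X
+ * vectors) were actually obtained. Port 0 additionally takes the
+ * MIF and device-error logical devices, and every port gets its
+ * own MAC logical device last.
+ */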
+static int __devinit niu_ldg_init(struct niu *np)
+{
+ struct niu_parent *parent = np->parent;
+ int first_chan, num_chan;
+ int i, err, ldg_rotor;
+ __u8 first_ldg, port;
+
+ np->num_ldg = 1;
+ np->ldg[0].irq = np->dev->irq;
+ if (parent->plat_type == PLAT_TYPE_ATLAS)
+ niu_try_msix(np);
+
+ port = np->port;
+ first_ldg = (NIU_NUM_LDG / parent->num_ports) * port;
+ for (i = 0; i < np->num_ldg; i++) {
+ struct niu_ldg *lp = &np->ldg[i];
+
+ netif_napi_add(np->dev, &lp->napi, niu_poll, 64);
+
+ lp->np = np;
+ lp->ldg_num = first_ldg + i;
+ lp->timer = 2; /* XXX */
+
+ err = niu_set_ldg_sid(np, lp->ldg_num, port, i);
+ if (err)
+ return err;
+ }
+
+ ldg_rotor = 0;
+
+ first_chan = 0;
+ for (i = 0; i < port; i++)
+ first_chan += parent->rxchan_per_port[i];
+ num_chan = parent->rxchan_per_port[port];
+
+ for (i = first_chan; i < (first_chan + num_chan); i++) {
+ err = niu_ldg_assign_ldn(np, parent,
+ first_ldg + ldg_rotor,
+ LDN_RXDMA(i));
+ if (err)
+ return err;
+ ldg_rotor++;
+ if (ldg_rotor == np->num_ldg)
+ ldg_rotor = 0;
+ }
+
+ first_chan = 0;
+ for (i = 0; i < port; i++)
+ first_chan += parent->txchan_per_port[i];
+ num_chan = parent->txchan_per_port[port];
+ for (i = first_chan; i < (first_chan + num_chan); i++) {
+ err = niu_ldg_assign_ldn(np, parent,
+ first_ldg + ldg_rotor,
+ LDN_TXDMA(i));
+ if (err)
+ return err;
+ ldg_rotor++;
+ if (ldg_rotor == np->num_ldg)
+ ldg_rotor = 0;
+ }
+
+ if (port == 0) {
+ err = niu_ldg_assign_ldn(np, parent,
+ first_ldg + ldg_rotor,
+ LDN_MIF);
+ if (err)
+ return err;
+ err = niu_ldg_assign_ldn(np, parent,
+ first_ldg + ldg_rotor,
+ LDN_DEVICE_ERROR);
+ if (err)
+ return err;
+ }
+
+ return niu_ldg_assign_ldn(np, parent,
+ first_ldg + ldg_rotor,
+ LDN_MAC(port));
+}
+
+static void __devexit niu_ldg_free(struct niu *np)
+{
+ if (np->flags & NIU_FLAGS_MSIX)
+ pci_disable_msix(np->pdev);
+}
+
+static int __devinit niu_get_invariants(struct niu *np)
+{
+ u32 offset;
+ int err;
+
+ err = niu_get_and_validate_port(np);
+ if (err)
+ return err;
+
+ err = niu_init_mac_ipp_pcs_base(np);
+ if (err)
+ return err;
+
+ if (np->parent->plat_type == PLAT_TYPE_ATLAS) {
+ nw64(ESPC_PIO_EN_ENABLE, ESPC_PIO_EN);
+ offset = niu_pci_vpd_offset(np);
+ niudbg(PROBE, "niu_get_invariants: VPD offset [%08x]\n",
+ offset);
+ if (offset)
+ niu_pci_vpd_fetch(np, offset);
+ nw64(0, ESPC_PIO_EN);
+
+ if (np->flags & NIU_FLAGS_VPD_VALID)
+ niu_pci_vpd_validate(np);
+
+ if (!(np->flags & NIU_FLAGS_VPD_VALID)) {
+ err = niu_pci_probe_sprom(np);
+ if (err)
+ return err;
+ }
+ } else {
+ /* XXX N2/VF invariant probing... XXX */
+ }
+ err = niu_probe_ports(np);
+ if (err)
+ return err;
+
+ err = niu_ldg_init(np);
+ if (err)
+ return err;
+
+ niu_classifier_swstate_init(np);
+ niu_link_config_init(np);
+
+ return niu_determine_phy_disposition(np);
+}
+
+static LIST_HEAD(niu_parent_list);
+static DEFINE_MUTEX(niu_parent_lock);
+static int niu_parent_index;
+
+static ssize_t show_port_phy(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct platform_device *plat_dev = to_platform_device(dev);
+ struct niu_parent *p = plat_dev->dev.platform_data;
+ __u32 port_phy = p->port_phy;
+ char *orig_buf = buf;
+ int i;
+
+ if (port_phy == PORT_PHY_UNKNOWN ||
+ port_phy == PORT_PHY_INVALID)
+ return 0;
+
+ for (i = 0; i < p->num_ports; i++) {
+ const char *type_str;
+ int type;
+
+ type = phy_decode(port_phy, i);
+ if (type == PORT_TYPE_10G)
+ type_str = "10G";
+ else
+ type_str = "1G";
+ buf += sprintf(buf,
+ (i == 0) ? "%s" : " %s",
+ type_str);
+ }
+ buf += sprintf(buf, "\n");
+ return buf - orig_buf;
+}
+
+static ssize_t show_plat_type(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct platform_device *plat_dev = to_platform_device(dev);
+ struct niu_parent *p = plat_dev->dev.platform_data;
+ const char *type_str;
+
+ switch (p->plat_type) {
+ case PLAT_TYPE_ATLAS:
+ type_str = "atlas";
+ break;
+ case PLAT_TYPE_NIU:
+ type_str = "niu";
+ break;
+ case PLAT_TYPE_VF_P0:
+ type_str = "vf_p0";
+ break;
+ case PLAT_TYPE_VF_P1:
+ type_str = "vf_p1";
+ break;
+ default:
+ type_str = "unknown";
+ break;
+ }
+
+ return sprintf(buf, "%s\n", type_str);
+}
+
+static ssize_t __show_chan_per_port(struct device *dev,
+ struct device_attribute *attr, char *buf,
+ int rx)
+{
+ struct platform_device *plat_dev = to_platform_device(dev);
+ struct niu_parent *p = plat_dev->dev.platform_data;
+ char *orig_buf = buf;
+ __u8 *arr;
+ int i;
+
+ arr = (rx ? p->rxchan_per_port : p->txchan_per_port);
+
+ for (i = 0; i < p->num_ports; i++) {
+ buf += sprintf(buf,
+ (i == 0) ? "%d" : " %d",
+ arr[i]);
+ }
+ buf += sprintf(buf, "\n");
+
+ return buf - orig_buf;
+}
+
+static ssize_t show_rxchan_per_port(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return __show_chan_per_port(dev, attr, buf, 1);
+}
+
+static ssize_t show_txchan_per_port(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ return __show_chan_per_port(dev, attr, buf, 0);
+}
+
+static ssize_t show_num_ports(struct device *dev,
+ struct device_attribute *attr, char *buf)
+{
+ struct platform_device *plat_dev = to_platform_device(dev);
+ struct niu_parent *p = plat_dev->dev.platform_data;
+
+ return sprintf(buf, "%d\n", p->num_ports);
+}
+
+static struct device_attribute niu_parent_attributes[] = {
+ __ATTR(port_phy, S_IRUGO, show_port_phy, NULL),
+ __ATTR(plat_type, S_IRUGO, show_plat_type, NULL),
+ __ATTR(rxchan_per_port, S_IRUGO, show_rxchan_per_port, NULL),
+ __ATTR(txchan_per_port, S_IRUGO, show_txchan_per_port, NULL),
+ __ATTR(num_ports, S_IRUGO, show_num_ports, NULL),
+ {}
+};
+
+static struct niu_parent * __devinit niu_new_parent(struct niu *np,
+ union niu_parent_id *id,
+ __u8 ptype)
+{
+ struct platform_device *plat_dev;
+ struct niu_parent *p;
+ int i;
+
+ niudbg(PROBE, "niu_new_parent: Creating new parent.\n");
+
+ plat_dev = platform_device_register_simple("niu", niu_parent_index,
+ NULL, 0);
+ if (IS_ERR(plat_dev))
+ return NULL;
+
+ for (i = 0; attr_name(niu_parent_attributes[i]); i++) {
+ int err = device_create_file(&plat_dev->dev,
+ &niu_parent_attributes[i]);
+ if (err)
+ goto fail_unregister;
+ }
+
+ p = kzalloc(sizeof(*p), GFP_KERNEL);
+ if (!p)
+ goto fail_unregister;
+
+ p->index = niu_parent_index++;
+
+ plat_dev->dev.platform_data = p;
+ p->plat_dev = plat_dev;
+
+ memcpy(&p->id, id, sizeof(*id));
+ p->plat_type = ptype;
+ INIT_LIST_HEAD(&p->list);
+ atomic_set(&p->refcnt, 0);
+ list_add(&p->list, &niu_parent_list);
+ spin_lock_init(&p->lock);
+
+ p->rxdma_clock_divider = 7500;
+
+ p->tcam_num_entries = NIU_PCI_TCAM_ENTRIES;
+ if (p->plat_type != PLAT_TYPE_ATLAS)
+ p->tcam_num_entries = NIU_NONPCI_TCAM_ENTRIES;
+
+ for (i = CLASS_CODE_USER_PROG1; i <= CLASS_CODE_SCTP_IPV6; i++) {
+ int index = i - CLASS_CODE_USER_PROG1;
+
+ p->tcam_key[index] = TCAM_KEY_TSEL;
+ p->flow_key[index] = (FLOW_KEY_IPSA |
+ FLOW_KEY_IPDA |
+ FLOW_KEY_PROTO |
+ (FLOW_KEY_L4_BYTE12 <<
+ FLOW_KEY_L4_0_SHIFT) |
+ (FLOW_KEY_L4_BYTE12 <<
+ FLOW_KEY_L4_1_SHIFT));
+ }
+
+ for (i = 0; i < LDN_MAX + 1; i++)
+ p->ldg_map[i] = LDG_INVALID;
+
+ return p;
+
+fail_unregister:
+ platform_device_unregister(plat_dev);
+ return NULL;
+}
+
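+ /* Look up, or create, the shared parent for this device. All
+ * PCI functions of one chip match the same parent via the
+ * domain/bus/device id, so chip-wide state such as the channel
+ * and RDC assignments lives in exactly one place. Each port
+ * takes a reference and a "portN" sysfs link under the parent's
+ * platform device.
+ */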
+static struct niu_parent * __devinit niu_get_parent(struct niu *np,
+ union niu_parent_id *id,
+ __u8 ptype)
+{
+ struct niu_parent *p, *tmp;
+ int port = np->port;
+
+ niudbg(PROBE, "niu_get_parent: platform_type[%u] port[%u]\n",
+ ptype, port);
+
+ mutex_lock(&niu_parent_lock);
+ p = NULL;
+ list_for_each_entry(tmp, &niu_parent_list, list) {
+ if (!memcmp(id, &tmp->id, sizeof(*id))) {
+ p = tmp;
+ break;
+ }
+ }
+ if (!p)
+ p = niu_new_parent(np, id, ptype);
+
+ if (p) {
+ char port_name[6];
+ int err;
+
+ sprintf(port_name, "port%d", port);
+ err = sysfs_create_link(&p->plat_dev->dev.kobj,
+ &np->device->kobj,
+ port_name);
+ if (!err) {
+ p->ports[port] = np;
+ atomic_inc(&p->refcnt);
+ }
+ }
+ mutex_unlock(&niu_parent_lock);
+
+ return p;
+}
+
+static void niu_put_parent(struct niu *np, __u8 port)
+{
+ struct niu_parent *p = np->parent;
+ char port_name[6];
+
+ BUG_ON(!p || p->ports[port] != np);
+
+ niudbg(PROBE, "niu_put_parent: port[%u]\n", port);
+
+ sprintf(port_name, "port%d", port);
+
+ mutex_lock(&niu_parent_lock);
+
+ sysfs_remove_link(&p->plat_dev->dev.kobj, port_name);
+
+ p->ports[port] = NULL;
+ np->parent = NULL;
+
+ if (atomic_dec_and_test(&p->refcnt)) {
+ list_del(&p->list);
+ platform_device_unregister(p->plat_dev);
+ }
+
+ mutex_unlock(&niu_parent_lock);
+}
+
+static void *niu_pci_alloc_coherent(struct device *dev, size_t size,
+ u64 *handle, gfp_t flag)
+{
+ dma_addr_t dh;
+ void *ret;
+
+ ret = dma_alloc_coherent(dev, size, &dh, flag);
+ if (ret)
+ *handle = dh;
+ return ret;
+}
+
+static void niu_pci_free_coherent(struct device *dev, size_t size,
+ void *cpu_addr, u64 handle)
+{
+ dma_free_coherent(dev, size, cpu_addr, handle);
+}
+
+static u64 niu_pci_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction direction)
+{
+ return dma_map_page(dev, page, offset, size, direction);
+}
+
+static void niu_pci_unmap_page(struct device *dev, u64 dma_address,
+ size_t size, enum dma_data_direction direction)
+{
+ dma_unmap_page(dev, dma_address, size, direction);
+}
+
+static u64 niu_pci_map_single(struct device *dev, void *cpu_addr,
+ size_t size,
+ enum dma_data_direction direction)
+{
+ return dma_map_single(dev, cpu_addr, size, direction);
+}
+
+static void niu_pci_unmap_single(struct device *dev, u64 dma_address,
+ size_t size,
+ enum dma_data_direction direction)
+{
+ dma_unmap_single(dev, dma_address, size, direction);
+}
+
+static const struct niu_ops niu_pci_ops = {
+ .alloc_coherent = niu_pci_alloc_coherent,
+ .free_coherent = niu_pci_free_coherent,
+ .map_page = niu_pci_map_page,
+ .unmap_page = niu_pci_unmap_page,
+ .map_single = niu_pci_map_single,
+ .unmap_single = niu_pci_unmap_single,
+};
+
+static int __devinit niu_pci_init_one(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ static int niu_version_printed;
+ unsigned long niureg_base, niureg_len;
+ union niu_parent_id parent_id;
+ struct net_device *dev;
+ struct niu *np;
+ int err, i, pos;
+ u64 dma_mask;
+ u16 val16;
+
+ if (niu_version_printed++ == 0)
+ printk(KERN_INFO "%s", version);
+
+ err = pci_enable_device(pdev);
+ if (err) {
+ printk(KERN_ERR PFX "Cannot enable PCI device, "
+ "aborting.\n");
+ return err;
+ }
+
+ if (!(pci_resource_flags(pdev, 0) & IORESOURCE_MEM) ||
+ !(pci_resource_flags(pdev, 2) & IORESOURCE_MEM)) {
+ printk(KERN_ERR PFX "Cannot find proper PCI device "
+ "base addresses, aborting.\n");
+ err = -ENODEV;
+ goto err_out_disable_pdev;
+ }
+
+ err = pci_request_regions(pdev, DRV_MODULE_NAME);
+ if (err) {
+ printk(KERN_ERR PFX "Cannot obtain PCI resources, "
+ "aborting.\n");
+ goto err_out_disable_pdev;
+ }
+
+ pos = pci_find_capability(pdev, PCI_CAP_ID_EXP);
+ if (pos <= 0) {
+ printk(KERN_ERR PFX "Cannot find PCI Express capability, "
+ "aborting.\n");
+ err = -ENODEV;
+ goto err_out_free_res;
+ }
+
+ dev = alloc_etherdev(sizeof(*np));
+ if (!dev) {
+ printk(KERN_ERR PFX "Etherdev alloc failed, aborting.\n");
+ err = -ENOMEM;
+ goto err_out_free_res;
+ }
+
+ SET_NETDEV_DEV(dev, &pdev->dev);
+
+ np = netdev_priv(dev);
+ np->dev = dev;
+ np->pdev = pdev;
+ np->device = &pdev->dev;
+ np->ops = &niu_pci_ops;
+
+ spin_lock_init(&np->lock);
+ INIT_WORK(&np->reset_task, niu_reset_task);
+
+ np->port = PCI_FUNC(pdev->devfn);
+
+ memset(&parent_id, 0, sizeof(parent_id));
+ parent_id.pci.domain = pci_domain_nr(pdev->bus);
+ parent_id.pci.bus = pdev->bus->number;
+ parent_id.pci.device = PCI_SLOT(pdev->devfn);
+
+ np->parent = niu_get_parent(np, &parent_id,
+ PLAT_TYPE_ATLAS);
+ if (!np->parent) {
+ err = -ENOMEM;
+ goto err_out_free_dev;
+ }
+
+ pci_read_config_word(pdev, pos + PCI_EXP_DEVCTL, &val16);
+ val16 &= ~PCI_EXP_DEVCTL_NOSNOOP_EN;
+ val16 |= (PCI_EXP_DEVCTL_CERE |
+ PCI_EXP_DEVCTL_NFERE |
+ PCI_EXP_DEVCTL_FERE |
+ PCI_EXP_DEVCTL_URRE |
+ PCI_EXP_DEVCTL_RELAX_EN);
+ pci_write_config_word(pdev, pos + PCI_EXP_DEVCTL, val16);
+
+ dma_mask = DMA_44BIT_MASK;
+ err = pci_set_dma_mask(pdev, dma_mask);
+ if (!err) {
+ dev->features |= NETIF_F_HIGHDMA;
+ err = pci_set_consistent_dma_mask(pdev, dma_mask);
+ if (err) {
+ printk(KERN_ERR PFX "Unable to obtain 44 bit "
+ "DMA for consistent allocations, "
+ "aborting.\n");
+ goto err_out_release_parent;
+ }
+ }
+ if (err || dma_mask == DMA_32BIT_MASK) {
+ err = pci_set_dma_mask(pdev, DMA_32BIT_MASK);
+ if (err) {
+ printk(KERN_ERR PFX "No usable DMA configuration, "
+ "aborting.\n");
+ goto err_out_release_parent;
+ }
+ }
+
+ dev->features |= (NETIF_F_SG | NETIF_F_HW_CSUM);
+
+ niureg_base = pci_resource_start(pdev, 0);
+ niureg_len = pci_resource_len(pdev, 0);
+
+ np->regs = ioremap_nocache(niureg_base, niureg_len);
+ if (!np->regs) {
+ printk(KERN_ERR PFX "Cannot map device registers, "
+ "aborting.\n");
+ err = -ENOMEM;
+ goto err_out_release_parent;
+ }
+
+ pci_set_master(pdev);
+ pci_save_state(pdev);
+
+ dev->open = niu_open;
+ dev->stop = niu_close;
+ dev->get_stats = niu_get_stats;
+ dev->set_multicast_list = niu_set_rx_mode;
+ dev->set_mac_address = niu_set_mac_addr;
+ dev->do_ioctl = niu_ioctl;
+ dev->tx_timeout = niu_tx_timeout;
+ dev->hard_start_xmit = niu_start_xmit;
+ dev->ethtool_ops = &niu_ethtool_ops;
+ dev->watchdog_timeo = NIU_TX_TIMEOUT;
+ dev->change_mtu = niu_change_mtu;
+ dev->irq = pdev->irq;
+
+ err = niu_get_invariants(np);
+ if (err) {
+ if (err != -ENODEV)
+ printk(KERN_ERR PFX "Problem fetching invariants of chip, "
+ "aborting.\n");
+ goto err_out_iounmap;
+ }
+
+ err = niu_init_link(np);
+ if (err) {
+ printk(KERN_ERR PFX "Problem initializing link, err=%d\n",
+ err);
+ goto err_out_iounmap;
+ }
+
+ err = register_netdev(dev);
+ if (err) {
+ printk(KERN_ERR PFX "Cannot register net device, "
+ "aborting.\n");
+ goto err_out_iounmap;
+ }
+
+ pci_set_drvdata(pdev, dev);
+
+ printk(KERN_INFO "%s: NIU Ethernet ", dev->name);
+ for (i = 0; i < 6; i++)
+ printk("%2.2x%c", dev->dev_addr[i],
+ i == 5 ? '\n' : ':');
+
+ printk(KERN_INFO "%s: Port type[%s] mode[%s:%s] XCVR[%s]\n",
+ dev->name,
+ (np->flags & NIU_FLAGS_XMAC ? "XMAC" : "BMAC"),
+ (np->flags & NIU_FLAGS_10G ? "10G" : "1G"),
+ (np->flags & NIU_FLAGS_FIBER ? "FIBER" : "COPPER"),
+ (np->mac_xcvr == MAC_XCVR_MII ? "MII" :
+ (np->mac_xcvr == MAC_XCVR_PCS ? "PCS" : "XPCS")));
+
+ printk(KERN_INFO "%s: model[%s] board[%s] phy[%s]\n",
+ dev->name,
+ np->vpd.model, np->vpd.board_model, np->vpd.phy_type);
+
+ printk(KERN_INFO "%s: version[%s]\n",
+ dev->name, np->vpd.version);
+
+ return 0;
+
+err_out_iounmap:
+ if (np->regs) {
+ iounmap(np->regs);
+ np->regs = NULL;
+ }
+
+err_out_release_parent:
+ niu_put_parent(np, PCI_FUNC(pdev->devfn));
+
+err_out_free_dev:
+ free_netdev(dev);
+
+err_out_free_res:
+ pci_release_regions(pdev);
+
+err_out_disable_pdev:
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ return err;
+}
+
+static void __devexit niu_pci_remove_one(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+
+ if (dev) {
+ struct niu *np = netdev_priv(dev);
+
+ unregister_netdev(dev);
+ if (np->regs) {
+ iounmap(np->regs);
+ np->regs = NULL;
+ }
+
+ niu_ldg_free(np);
+
+ niu_put_parent(np, PCI_FUNC(pdev->devfn));
+
+ free_netdev(dev);
+ pci_release_regions(pdev);
+ pci_disable_device(pdev);
+ pci_set_drvdata(pdev, NULL);
+ }
+}
+
+static int niu_suspend(struct pci_dev *pdev, pm_message_t state)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct niu *np = netdev_priv(dev);
+ unsigned long flags;
+
+ if (!netif_running(dev))
+ return 0;
+
+ flush_scheduled_work();
+ niu_netif_stop(np);
+
+ del_timer_sync(&np->timer);
+
+ spin_lock_irqsave(&np->lock, flags);
+ niu_enable_interrupts(np, 0);
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ netif_device_detach(dev);
+
+ spin_lock_irqsave(&np->lock, flags);
+ niu_stop_hw(np);
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ pci_save_state(pdev);
+
+ return 0;
+}
+
+static int niu_resume(struct pci_dev *pdev)
+{
+ struct net_device *dev = pci_get_drvdata(pdev);
+ struct niu *np = netdev_priv(dev);
+ unsigned long flags;
+ int err;
+
+ if (!netif_running(dev))
+ return 0;
+
+ pci_restore_state(pdev);
+
+ netif_device_attach(dev);
+
+ spin_lock_irqsave(&np->lock, flags);
+
+ err = niu_init_hw(np);
+ if (!err) {
+ np->timer.expires = jiffies + HZ;
+ add_timer(&np->timer);
+ niu_netif_start(np);
+ }
+
+ spin_unlock_irqrestore(&np->lock, flags);
+
+ return err;
+}
+
+static struct pci_driver niu_driver = {
+ .name = DRV_MODULE_NAME,
+ .id_table = niu_pci_tbl,
+ .probe = niu_pci_init_one,
+ .remove = __devexit_p(niu_pci_remove_one),
+ .suspend = niu_suspend,
+ .resume = niu_resume,
+};
+
+#if 0
+static void *niu_phys_alloc_coherent(struct device *dev, size_t size,
+ u64 *dma_addr, gfp_t flag)
+{
+ unsigned long order = get_order(size);
+ unsigned long page = __get_free_pages(flag, order);
+
+ if (page == 0UL)
+ return NULL;
+ memset((char *)page, 0, PAGE_SIZE << order);
+ *dma_addr = __pa(page);
+
+ return (void *) page;
+}
+
+static void niu_phys_free_coherent(struct device *dev, size_t size,
+ void *cpu_addr, u64 handle)
+{
+ /* XXX Implement me XXX */
+}
+
+static u64 niu_phys_map_page(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction direction)
+{
+ /* XXX Implement me XXX */
+}
+
+static void niu_phys_unmap_page(struct device *dev, u64 dma_address,
+ size_t size, enum dma_data_direction direction)
+{
+ /* XXX Implement me XXX */
+}
+
+static u64 niu_phys_map_single(struct device *dev, void *cpu_addr,
+ size_t size,
+ enum dma_data_direction direction)
+{
+ /* XXX Implement me XXX */
+}
+
+static void niu_phys_unmap_single(struct device *dev, u64 dma_address,
+ size_t size,
+ enum dma_data_direction direction)
+{
+ /* XXX Implement me XXX */
+}
+
+static const struct niu_ops niu_phys_ops = {
+ .alloc_coherent = niu_phys_alloc_coherent,
+ .free_coherent = niu_phys_free_coherent,
+ .map_page = niu_phys_map_page,
+ .unmap_page = niu_phys_unmap_page,
+ .map_single = niu_phys_map_single,
+ .unmap_single = niu_phys_unmap_single,
+};
+#endif
+
+static int __init niu_init(void)
+{
+ BUILD_BUG_ON((PAGE_SIZE < 4 * 1024) ||
+ ((PAGE_SIZE > 32 * 1024) &&
+ ((PAGE_SIZE % (32 * 1024)) != 0 &&
+ (PAGE_SIZE % (16 * 1024)) != 0 &&
+ (PAGE_SIZE % (8 * 1024)) != 0 &&
+ (PAGE_SIZE % (4 * 1024)) != 0)));
+
+ return pci_register_driver(&niu_driver);
+}
+
+static void __exit niu_exit(void)
+{
+ pci_unregister_driver(&niu_driver);
+}
+
+module_init(niu_init);
+module_exit(niu_exit);
diff --git a/drivers/net/niu.h b/drivers/net/niu.h
new file mode 100644
index 0000000..f676dac
--- /dev/null
+++ b/drivers/net/niu.h
@@ -0,0 +1,3208 @@
+/* niu.h: Definitions for Neptune ethernet driver.
+ *
+ * Copyright (C) 2007 David S. Miller (davem@...emloft.net)
+ */
+
+#ifndef _NIU_H
+#define _NIU_H
+
+#define PIO 0x000000UL
+#define FZC_PIO 0x080000UL
+#define FZC_MAC 0x180000UL
+#define FZC_IPP 0x280000UL
+#define FFLP 0x300000UL
+#define FZC_FFLP 0x380000UL
+#define PIO_VADDR 0x400000UL
+#define ZCP 0x500000UL
+#define FZC_ZCP 0x580000UL
+#define DMC 0x600000UL
+#define FZC_DMC 0x680000UL
+#define TXC 0x700000UL
+#define FZC_TXC 0x780000UL
+#define PIO_LDSV 0x800000UL
+#define PIO_PIO_LDGIM 0x900000UL
+#define PIO_IMASK0 0xa00000UL
+#define PIO_IMASK1 0xb00000UL
+#define FZC_PROM 0xc80000UL
+#define FZC_PIM 0xd80000UL
+
+#define LDSV0(LDG) (PIO_LDSV + 0x00000UL + (LDG) * 0x2000UL)
+#define LDSV1(LDG) (PIO_LDSV + 0x00008UL + (LDG) * 0x2000UL)
+#define LDSV2(LDG) (PIO_LDSV + 0x00010UL + (LDG) * 0x2000UL)
+
+#define LDG_IMGMT(LDG) (PIO_LDSV + 0x00018UL + (LDG) * 0x2000UL)
+#define LDG_IMGMT_ARM 0x0000000080000000
+#define LDG_IMGMT_TIMER 0x000000000000003f
+
+#define LD_IM0(IDX) (PIO_IMASK0 + 0x00000UL + (IDX) * 0x2000UL)
+#define LD_IM0_MASK 0x0000000000000003
+
+#define LD_IM1(IDX) (PIO_IMASK1 + 0x00000UL + (IDX) * 0x2000UL)
+#define LD_IM1_MASK 0x0000000000000003
+
+#define LDG_TIMER_RES (FZC_PIO + 0x00008UL)
+#define LDG_TIMER_RES_VAL 0x00000000000fffff
+
+#define DIRTY_TID_CTL (FZC_PIO + 0x00010UL)
+#define DIRTY_TID_CTL_NPTHRED 0x00000000003f0000
+#define DIRTY_TID_CTL_RDTHRED 0x00000000000003f0
+#define DIRTY_TID_CTL_DTIDCLR 0x0000000000000002
+#define DIRTY_TID_CTL_DTIDENAB 0x0000000000000001
+
+#define DIRTY_TID_STAT (FZC_PIO + 0x00018UL)
+#define DIRTY_TID_STAT_NPWSTAT 0x0000000000003f00
+#define DIRTY_TID_STAT_RDSTAT 0x000000000000003f
+
+#define RST_CTL (FZC_PIO + 0x00038UL)
+#define RST_CTL_MAC_RST3 0x0000000000400000
+#define RST_CTL_MAC_RST2 0x0000000000200000
+#define RST_CTL_MAC_RST1 0x0000000000100000
+#define RST_CTL_MAC_RST0 0x0000000000080000
+#define RST_CTL_ACK_TO_EN 0x0000000000000800
+#define RST_CTL_ACK_TO_VAL 0x00000000000007fe
+
+#define SMX_CFIG_DAT (FZC_PIO + 0x00040UL)
+#define SMX_CFIG_DAT_RAS_DET 0x0000000080000000
+#define SMX_CFIG_DAT_RAS_INJ 0x0000000040000000
+#define SMX_CFIG_DAT_XACT_TO 0x000000000fffffff
+
+#define SMX_INT_STAT (FZC_PIO + 0x00048UL)
+#define SMX_INT_STAT_STAT 0x00000000ffffffff
+
+#define SMX_CTL (FZC_PIO + 0x00050UL)
+#define SMX_CTL_CTL 0x00000000ffffffff
+
+#define SMX_DBG_VEC (FZC_PIO + 0x00058UL)
+#define SMX_DBG_VEC_VEC 0x00000000ffffffff
+
+#define PIO_DBG_SEL (FZC_PIO + 0x00060UL)
+#define PIO_DBG_SEL_SEL 0x000000000000003f
+
+#define PIO_TRAIN_VEC (FZC_PIO + 0x00068UL)
+#define PIO_TRAIN_VEC_VEC 0x00000000ffffffff
+
+#define PIO_ARB_CTL (FZC_PIO + 0x00070UL)
+#define PIO_ARB_CTL_CTL 0x00000000ffffffff
+
+#define PIO_ARB_DBG_VEC (FZC_PIO + 0x00078UL)
+#define PIO_ARB_DBG_VEC_VEC 0x00000000ffffffff
+
+#define SYS_ERR_MASK (FZC_PIO + 0x00090UL)
+#define SYS_ERR_MASK_META2 0x0000000000000400
+#define SYS_ERR_MASK_META1 0x0000000000000200
+#define SYS_ERR_MASK_PEU 0x0000000000000100
+#define SYS_ERR_MASK_TXC 0x0000000000000080
+#define SYS_ERR_MASK_RDMC 0x0000000000000040
+#define SYS_ERR_MASK_TDMC 0x0000000000000020
+#define SYS_ERR_MASK_ZCP 0x0000000000000010
+#define SYS_ERR_MASK_FFLP 0x0000000000000008
+#define SYS_ERR_MASK_IPP 0x0000000000000004
+#define SYS_ERR_MASK_MAC 0x0000000000000002
+#define SYS_ERR_MASK_SMX 0x0000000000000001
+
+#define SYS_ERR_STAT (FZC_PIO + 0x00098UL)
+#define SYS_ERR_STAT_META2 0x0000000000000400
+#define SYS_ERR_STAT_META1 0x0000000000000200
+#define SYS_ERR_STAT_PEU 0x0000000000000100
+#define SYS_ERR_STAT_TXC 0x0000000000000080
+#define SYS_ERR_STAT_RDMC 0x0000000000000040
+#define SYS_ERR_STAT_TDMC 0x0000000000000020
+#define SYS_ERR_STAT_ZCP 0x0000000000000010
+#define SYS_ERR_STAT_FFLP 0x0000000000000008
+#define SYS_ERR_STAT_IPP 0x0000000000000004
+#define SYS_ERR_STAT_MAC 0x0000000000000002
+#define SYS_ERR_STAT_SMX 0x0000000000000001
+
+#define SID(LDG) (FZC_PIO + 0x10200UL + (LDG) * 8UL)
+#define SID_FUNC 0x0000000000000060
+#define SID_FUNC_SHIFT 5
+#define SID_VECTOR 0x000000000000001f
+#define SID_VECTOR_SHIFT 0
+
+#define LDG_NUM(LDN) (FZC_PIO + 0x20000UL + (LDN) * 8UL)
+
+#define XMAC_PORT0_OFF (FZC_MAC + 0x000000)
+#define XMAC_PORT1_OFF (FZC_MAC + 0x006000)
+#define BMAC_PORT2_OFF (FZC_MAC + 0x00c000)
+#define BMAC_PORT3_OFF (FZC_MAC + 0x010000)
+
+/* XMAC registers, offset from np->mac_regs */
+
+#define XTXMAC_SW_RST 0x00000UL
+#define XTXMAC_SW_RST_REG_RS 0x0000000000000002
+#define XTXMAC_SW_RST_SOFT_RST 0x0000000000000001
+
+#define XRXMAC_SW_RST 0x00008UL
+#define XRXMAC_SW_RST_REG_RS 0x0000000000000002
+#define XRXMAC_SW_RST_SOFT_RST 0x0000000000000001
+
+#define XTXMAC_STATUS 0x00020UL
+#define XTXMAC_STATUS_FRAME_CNT_EXP 0x0000000000000800
+#define XTXMAC_STATUS_BYTE_CNT_EXP 0x0000000000000400
+#define XTXMAC_STATUS_TXFIFO_XFR_ERR 0x0000000000000010
+#define XTXMAC_STATUS_TXMAC_OFLOW 0x0000000000000008
+#define XTXMAC_STATUS_MAX_PSIZE_ERR 0x0000000000000004
+#define XTXMAC_STATUS_TXMAC_UFLOW 0x0000000000000002
+#define XTXMAC_STATUS_FRAME_XMITED 0x0000000000000001
+
+#define XRXMAC_STATUS 0x00028UL
+#define XRXMAC_STATUS_RXHIST7_CNT_EXP 0x0000000000100000
+#define XRXMAC_STATUS_LCL_FLT_STATUS 0x0000000000080000
+#define XRXMAC_STATUS_RFLT_DET 0x0000000000040000
+#define XRXMAC_STATUS_LFLT_CNT_EXP 0x0000000000020000
+#define XRXMAC_STATUS_PHY_MDINT 0x0000000000010000
+#define XRXMAC_STATUS_ALIGNERR_CNT_EXP 0x0000000000010000
+#define XRXMAC_STATUS_RXFRAG_CNT_EXP 0x0000000000008000
+#define XRXMAC_STATUS_RXMULTF_CNT_EXP 0x0000000000004000
+#define XRXMAC_STATUS_RXBCAST_CNT_EXP 0x0000000000002000
+#define XRXMAC_STATUS_RXHIST6_CNT_EXP 0x0000000000001000
+#define XRXMAC_STATUS_RXHIST5_CNT_EXP 0x0000000000000800
+#define XRXMAC_STATUS_RXHIST4_CNT_EXP 0x0000000000000400
+#define XRXMAC_STATUS_RXHIST3_CNT_EXP 0x0000000000000200
+#define XRXMAC_STATUS_RXHIST2_CNT_EXP 0x0000000000000100
+#define XRXMAC_STATUS_RXHIST1_CNT_EXP 0x0000000000000080
+#define XRXMAC_STATUS_RXOCTET_CNT_EXP 0x0000000000000040
+#define XRXMAC_STATUS_CVIOLERR_CNT_EXP 0x0000000000000020
+#define XRXMAC_STATUS_LENERR_CNT_EXP 0x0000000000000010
+#define XRXMAC_STATUS_CRCERR_CNT_EXP 0x0000000000000008
+#define XRXMAC_STATUS_RXUFLOW 0x0000000000000004
+#define XRXMAC_STATUS_RXOFLOW 0x0000000000000002
+#define XRXMAC_STATUS_FRAME_RCVD 0x0000000000000001
+
+#define XMAC_FC_STAT 0x00030UL
+#define XMAC_FC_STAT_RX_RCV_PAUSE_TIME 0x00000000ffff0000
+#define XMAC_FC_STAT_TX_MAC_NPAUSE 0x0000000000000004
+#define XMAC_FC_STAT_TX_MAC_PAUSE 0x0000000000000002
+#define XMAC_FC_STAT_RX_MAC_RPAUSE 0x0000000000000001
+
+#define XTXMAC_STAT_MSK 0x00040UL
+#define XTXMAC_STAT_MSK_FRAME_CNT_EXP 0x0000000000000800
+#define XTXMAC_STAT_MSK_BYTE_CNT_EXP 0x0000000000000400
+#define XTXMAC_STAT_MSK_TXFIFO_XFR_ERR 0x0000000000000010
+#define XTXMAC_STAT_MSK_TXMAC_OFLOW 0x0000000000000008
+#define XTXMAC_STAT_MSK_MAX_PSIZE_ERR 0x0000000000000004
+#define XTXMAC_STAT_MSK_TXMAC_UFLOW 0x0000000000000002
+#define XTXMAC_STAT_MSK_FRAME_XMITED 0x0000000000000001
+
+#define XRXMAC_STAT_MSK 0x00048UL
+#define XRXMAC_STAT_MSK_LCL_FLT_STAT_MSK 0x0000000000080000
+#define XRXMAC_STAT_MSK_RFLT_DET 0x0000000000040000
+#define XRXMAC_STAT_MSK_LFLT_CNT_EXP 0x0000000000020000
+#define XRXMAC_STAT_MSK_PHY_MDINT 0x0000000000010000
+#define XRXMAC_STAT_MSK_RXFRAG_CNT_EXP 0x0000000000008000
+#define XRXMAC_STAT_MSK_RXMULTF_CNT_EXP 0x0000000000004000
+#define XRXMAC_STAT_MSK_RXBCAST_CNT_EXP 0x0000000000002000
+#define XRXMAC_STAT_MSK_RXHIST6_CNT_EXP 0x0000000000001000
+#define XRXMAC_STAT_MSK_RXHIST5_CNT_EXP 0x0000000000000800
+#define XRXMAC_STAT_MSK_RXHIST4_CNT_EXP 0x0000000000000400
+#define XRXMAC_STAT_MSK_RXHIST3_CNT_EXP 0x0000000000000200
+#define XRXMAC_STAT_MSK_RXHIST2_CNT_EXP 0x0000000000000100
+#define XRXMAC_STAT_MSK_RXHIST1_CNT_EXP 0x0000000000000080
+#define XRXMAC_STAT_MSK_RXOCTET_CNT_EXP 0x0000000000000040
+#define XRXMAC_STAT_MSK_CVIOLERR_CNT_EXP 0x0000000000000020
+#define XRXMAC_STAT_MSK_LENERR_CNT_EXP 0x0000000000000010
+#define XRXMAC_STAT_MSK_CRCERR_CNT_EXP 0x0000000000000008
+#define XRXMAC_STAT_MSK_RXUFLOW_CNT_EXP 0x0000000000000004
+#define XRXMAC_STAT_MSK_RXOFLOW_CNT_EXP 0x0000000000000002
+#define XRXMAC_STAT_MSK_FRAME_RCVD 0x0000000000000001
+
+#define XMAC_FC_MSK 0x00050UL
+#define XMAC_FC_MSK_TX_MAC_NPAUSE 0x0000000000000004
+#define XMAC_FC_MSK_TX_MAC_PAUSE 0x0000000000000002
+#define XMAC_FC_MSK_RX_MAC_RPAUSE 0x0000000000000001
+
+#define XMAC_CONFIG 0x00060UL
+#define XMAC_CONFIG_SEL_CLK_25MHZ 0x0000000080000000
+#define XMAC_CONFIG_1G_PCS_BYPASS 0x0000000040000000
+#define XMAC_CONFIG_10G_XPCS_BYPASS 0x0000000020000000
+#define XMAC_CONFIG_MODE_MASK 0x0000000018000000
+#define XMAC_CONFIG_MODE_XGMII 0x0000000000000000
+#define XMAC_CONFIG_MODE_GMII 0x0000000008000000
+#define XMAC_CONFIG_MODE_MII 0x0000000010000000
+#define XMAC_CONFIG_LFS_DISABLE 0x0000000004000000
+#define XMAC_CONFIG_LOOPBACK 0x0000000002000000
+#define XMAC_CONFIG_TX_OUTPUT_EN 0x0000000001000000
+#define XMAC_CONFIG_SEL_POR_CLK_SRC 0x0000000000800000
+#define XMAC_CONFIG_LED_POLARITY 0x0000000000400000
+#define XMAC_CONFIG_FORCE_LED_ON 0x0000000000200000
+#define XMAC_CONFIG_PASS_FLOW_CTRL 0x0000000000100000
+#define XMAC_CONFIG_RCV_PAUSE_ENABLE 0x0000000000080000
+#define XMAC_CONFIG_MAC2IPP_PKT_CNT_EN 0x0000000000040000
+#define XMAC_CONFIG_STRIP_CRC 0x0000000000020000
+#define XMAC_CONFIG_ADDR_FILTER_EN 0x0000000000010000
+#define XMAC_CONFIG_HASH_FILTER_EN 0x0000000000008000
+#define XMAC_CONFIG_RX_CODEV_CHK_DIS 0x0000000000004000
+#define XMAC_CONFIG_RESERVED_MULTICAST 0x0000000000002000
+#define XMAC_CONFIG_RX_CRC_CHK_DIS 0x0000000000001000
+#define XMAC_CONFIG_ERR_CHK_DIS 0x0000000000000800
+#define XMAC_CONFIG_PROMISC_GROUP 0x0000000000000400
+#define XMAC_CONFIG_PROMISCUOUS 0x0000000000000200
+#define XMAC_CONFIG_RX_MAC_ENABLE 0x0000000000000100
+#define XMAC_CONFIG_WARNING_MSG_EN 0x0000000000000080
+#define XMAC_CONFIG_ALWAYS_NO_CRC 0x0000000000000008
+#define XMAC_CONFIG_VAR_MIN_IPG_EN 0x0000000000000004
+#define XMAC_CONFIG_STRETCH_MODE 0x0000000000000002
+#define XMAC_CONFIG_TX_ENABLE 0x0000000000000001
+
+#define XMAC_IPG 0x00080UL
+#define XMAC_IPG_STRETCH_CONST 0x0000000000e00000
+#define XMAC_IPG_STRETCH_CONST_SHIFT 21
+#define XMAC_IPG_STRETCH_RATIO 0x00000000001f0000
+#define XMAC_IPG_STRETCH_RATIO_SHIFT 16
+#define XMAC_IPG_IPG_MII_GMII 0x000000000000ff00
+#define XMAC_IPG_IPG_MII_GMII_SHIFT 8
+#define XMAC_IPG_IPG_XGMII 0x0000000000000007
+#define XMAC_IPG_IPG_XGMII_SHIFT 0
+
+#define IPG_12_15_XGMII 3
+#define IPG_16_19_XGMII 4
+#define IPG_20_23_XGMII 5
+#define IPG_12_MII_GMII 10
+#define IPG_13_MII_GMII 11
+#define IPG_14_MII_GMII 12
+#define IPG_15_MII_GMII 13
+#define IPG_16_MII_GMII 14
+
+#define XMAC_MIN 0x00088UL
+#define XMAC_MIN_RX_MIN_PKT_SIZE 0x000000003ff00000
+#define XMAC_MIN_RX_MIN_PKT_SIZE_SHFT 20
+#define XMAC_MIN_SLOT_TIME 0x000000000003fc00
+#define XMAC_MIN_SLOT_TIME_SHFT 10
+#define XMAC_MIN_TX_MIN_PKT_SIZE 0x00000000000003ff
+#define XMAC_MIN_TX_MIN_PKT_SIZE_SHFT 0
+
+#define XMAC_MAX 0x00090UL
+#define XMAC_MAX_FRAME_SIZE 0x0000000000003fff
+#define XMAC_MAX_FRAME_SIZE_SHFT 0
+
+#define XMAC_ADDR0 0x000a0UL
+#define XMAC_ADDR0_ADDR0 0x000000000000ffff
+
+#define XMAC_ADDR1 0x000a8UL
+#define XMAC_ADDR1_ADDR1 0x000000000000ffff
+
+#define XMAC_ADDR2 0x000b0UL
+#define XMAC_ADDR2_ADDR2 0x000000000000ffff
+
+#define XMAC_ADDR_CMPEN 0x00208UL
+#define XMAC_ADDR_CMPEN_EN15 0x0000000000008000
+#define XMAC_ADDR_CMPEN_EN14 0x0000000000004000
+#define XMAC_ADDR_CMPEN_EN13 0x0000000000002000
+#define XMAC_ADDR_CMPEN_EN12 0x0000000000001000
+#define XMAC_ADDR_CMPEN_EN11 0x0000000000000800
+#define XMAC_ADDR_CMPEN_EN10 0x0000000000000400
+#define XMAC_ADDR_CMPEN_EN9 0x0000000000000200
+#define XMAC_ADDR_CMPEN_EN8 0x0000000000000100
+#define XMAC_ADDR_CMPEN_EN7 0x0000000000000080
+#define XMAC_ADDR_CMPEN_EN6 0x0000000000000040
+#define XMAC_ADDR_CMPEN_EN5 0x0000000000000020
+#define XMAC_ADDR_CMPEN_EN4 0x0000000000000010
+#define XMAC_ADDR_CMPEN_EN3 0x0000000000000008
+#define XMAC_ADDR_CMPEN_EN2 0x0000000000000004
+#define XMAC_ADDR_CMPEN_EN1 0x0000000000000002
+#define XMAC_ADDR_CMPEN_EN0 0x0000000000000001
+
+#define XMAC_NUM_ALT_ADDR 16
+
+#define XMAC_ALT_ADDR0(NUM) (0x00218UL + (NUM)*0x18UL)
+#define XMAC_ALT_ADDR0_ADDR0 0x000000000000ffff
+
+#define XMAC_ALT_ADDR1(NUM) (0x00220UL + (NUM)*0x18UL)
+#define XMAC_ALT_ADDR1_ADDR1 0x000000000000ffff
+
+#define XMAC_ALT_ADDR2(NUM) (0x00228UL + (NUM)*0x18UL)
+#define XMAC_ALT_ADDR2_ADDR2 0x000000000000ffff
+
+#define XMAC_ADD_FILT0 0x00818UL
+#define XMAC_ADD_FILT0_FILT0 0x000000000000ffff
+
+#define XMAC_ADD_FILT1 0x00820UL
+#define XMAC_ADD_FILT1_FILT1 0x000000000000ffff
+
+#define XMAC_ADD_FILT2 0x00828UL
+#define XMAC_ADD_FILT2_FILT2 0x000000000000ffff
+
+#define XMAC_ADD_FILT12_MASK 0x00830UL
+#define XMAC_ADD_FILT12_MASK_VAL 0x00000000000000ff
+
+#define XMAC_ADD_FILT00_MASK 0x00838UL
+#define XMAC_ADD_FILT00_MASK_VAL 0x000000000000ffff
+
+#define XMAC_HASH_TBL(NUM) (0x00840UL + (NUM) * 0x8UL)
+#define XMAC_HASH_TBL_VAL 0x000000000000ffff
+
+#define XMAC_NUM_HOST_INFO 20
+
+#define XMAC_HOST_INFO(NUM) (0x00900UL + (NUM) * 0x8UL)
+
+#define XMAC_PA_DATA0 0x00b80UL
+#define XMAC_PA_DATA0_VAL 0x00000000ffffffff
+
+#define XMAC_PA_DATA1 0x00b88UL
+#define XMAC_PA_DATA1_VAL 0x00000000ffffffff
+
+#define XMAC_DEBUG_SEL 0x00b90UL
+#define XMAC_DEBUG_SEL_XMAC 0x0000000000000078
+#define XMAC_DEBUG_SEL_MAC 0x0000000000000007
+
+#define XMAC_TRAIN_VEC 0x00b98UL
+#define XMAC_TRAIN_VEC_VAL 0x00000000ffffffff
+
+#define RXMAC_BT_CNT 0x00100UL
+#define RXMAC_BT_CNT_COUNT 0x00000000ffffffff
+
+#define RXMAC_BC_FRM_CNT 0x00108UL
+#define RXMAC_BC_FRM_CNT_COUNT 0x00000000001fffff
+
+#define RXMAC_MC_FRM_CNT 0x00110UL
+#define RXMAC_MC_FRM_CNT_COUNT 0x00000000001fffff
+
+#define RXMAC_FRAG_CNT 0x00118UL
+#define RXMAC_FRAG_CNT_COUNT 0x00000000001fffff
+
+#define RXMAC_HIST_CNT1 0x00120UL
+#define RXMAC_HIST_CNT1_COUNT 0x00000000001fffff
+
+#define RXMAC_HIST_CNT2 0x00128UL
+#define RXMAC_HIST_CNT2_COUNT 0x00000000001fffff
+
+#define RXMAC_HIST_CNT3 0x00130UL
+#define RXMAC_HIST_CNT3_COUNT 0x00000000000fffff
+
+#define RXMAC_HIST_CNT4 0x00138UL
+#define RXMAC_HIST_CNT4_COUNT 0x000000000007ffff
+
+#define RXMAC_HIST_CNT5 0x00140UL
+#define RXMAC_HIST_CNT5_COUNT 0x000000000003ffff
+
+#define RXMAC_HIST_CNT6 0x00148UL
+#define RXMAC_HIST_CNT6_COUNT 0x000000000000ffff
+
+#define RXMAC_MPSZER_CNT 0x00150UL
+#define RXMAC_MPSZER_CNT_COUNT 0x00000000000000ff
+
+#define RXMAC_CRC_ER_CNT 0x00158UL
+#define RXMAC_CRC_ER_CNT_COUNT 0x00000000000000ff
+
+#define RXMAC_CD_VIO_CNT 0x00160UL
+#define RXMAC_CD_VIO_CNT_COUNT 0x00000000000000ff
+
+#define RXMAC_ALIGN_ERR_CNT 0x00168UL
+#define RXMAC_ALIGN_ERR_CNT_COUNT 0x00000000000000ff
+
+#define TXMAC_FRM_CNT 0x00170UL
+#define TXMAC_FRM_CNT_COUNT 0x00000000ffffffff
+
+#define TXMAC_BYTE_CNT 0x00178UL
+#define TXMAC_BYTE_CNT_COUNT 0x00000000ffffffff
+
+#define LINK_FAULT_CNT 0x00180UL
+#define LINK_FAULT_CNT_COUNT 0x00000000000000ff
+
+#define RXMAC_HIST_CNT7 0x00188UL
+#define RXMAC_HIST_CNT7_COUNT 0x0000000007ffffff
+
+#define XMAC_SM_REG 0x001a8UL
+#define XMAC_SM_REG_STATE 0x00000000ffffffff
+
+#define XMAC_INTER1 0x001b0UL
+#define XMAC_INTERN1_SIGNALS1 0x00000000ffffffff
+
+#define XMAC_INTER2 0x001b8UL
+#define XMAC_INTERN2_SIGNALS2 0x00000000ffffffff
+
+/* BMAC registers, offset from np->mac_regs */
+
+#define BTXMAC_SW_RST 0x00000UL
+#define BTXMAC_SW_RST_RESET 0x0000000000000001
+
+#define BRXMAC_SW_RST 0x00008UL
+#define BRXMAC_SW_RST_RESET 0x0000000000000001
+
+#define BMAC_SEND_PAUSE 0x00010UL
+#define BMAC_SEND_PAUSE_SEND 0x0000000000010000
+#define BMAC_SEND_PAUSE_TIME 0x000000000000ffff
+
+#define BTXMAC_STATUS 0x00020UL
+#define BTXMAC_STATUS_XMIT 0x0000000000000001
+#define BTXMAC_STATUS_UNDERRUN 0x0000000000000002
+#define BTXMAC_STATUS_MAX_PKT_ERR 0x0000000000000004
+#define BTXMAC_STATUS_BYTE_CNT_EXP 0x0000000000000400
+#define BTXMAC_STATUS_FRAME_CNT_EXP 0x0000000000000800
+
+#define BRXMAC_STATUS 0x00028UL
+#define BRXMAC_STATUS_RX_PKT 0x0000000000000001
+#define BRXMAC_STATUS_OVERFLOW 0x0000000000000002
+#define BRXMAC_STATUS_FRAME_CNT_EXP 0x0000000000000004
+#define BRXMAC_STATUS_ALIGN_ERR_EXP 0x0000000000000008
+#define BRXMAC_STATUS_CRC_ERR_EXP 0x0000000000000010
+#define BRXMAC_STATUS_LEN_ERR_EXP 0x0000000000000020
+
+#define BMAC_CTRL_STATUS 0x00030UL
+#define BMAC_CTRL_STATUS_PAUSE_RECV 0x0000000000000001
+#define BMAC_CTRL_STATUS_PAUSE 0x0000000000000002
+#define BMAC_CTRL_STATUS_NOPAUSE 0x0000000000000004
+#define BMAC_CTRL_STATUS_TIME 0x00000000ffff0000
+#define BMAC_CTRL_STATUS_TIME_SHIFT 16
+
+#define BTXMAC_STATUS_MASK 0x00040UL
+#define BRXMAC_STATUS_MASK 0x00048UL
+#define BMAC_CTRL_STATUS_MASK 0x00050UL
+
+#define BTXMAC_CONFIG 0x00060UL
+#define BTXMAC_CONFIG_ENABLE 0x0000000000000001
+#define BTXMAC_CONFIG_FCS_DISABLE 0x0000000000000002
+
+#define BRXMAC_CONFIG 0x00068UL
+#define BRXMAC_CONFIG_DISCARD_DIS 0x0000000000000080
+#define BRXMAC_CONFIG_ADDR_FILT_EN 0x0000000000000040
+#define BRXMAC_CONFIG_HASH_FILT_EN 0x0000000000000020
+#define BRXMAC_CONFIG_PROMISC_GRP 0x0000000000000010
+#define BRXMAC_CONFIG_PROMISC 0x0000000000000008
+#define BRXMAC_CONFIG_STRIP_FCS 0x0000000000000004
+#define BRXMAC_CONFIG_STRIP_PAD 0x0000000000000002
+#define BRXMAC_CONFIG_ENABLE 0x0000000000000001
+
+#define BMAC_CTRL_CONFIG 0x00070UL
+#define BMAC_CTRL_CONFIG_TX_PAUSE_EN 0x0000000000000001
+#define BMAC_CTRL_CONFIG_RX_PAUSE_EN 0x0000000000000002
+#define BMAC_CTRL_CONFIG_PASS_CTRL 0x0000000000000004
+
+#define BMAC_XIF_CONFIG 0x00078UL
+#define BMAC_XIF_CONFIG_TX_OUTPUT_EN 0x0000000000000001
+#define BMAC_XIF_CONFIG_MII_LOOPBACK 0x0000000000000002
+#define BMAC_XIF_CONFIG_GMII_MODE 0x0000000000000008
+#define BMAC_XIF_CONFIG_LINK_LED 0x0000000000000020
+#define BMAC_XIF_CONFIG_LED_POLARITY 0x0000000000000040
+#define BMAC_XIF_CONFIG_25MHZ_CLOCK 0x0000000000000080
+
+#define BMAC_MIN_FRAME 0x000a0UL
+#define BMAC_MIN_FRAME_VAL 0x00000000000003ff
+
+#define BMAC_MAX_FRAME 0x000a8UL
+#define BMAC_MAX_FRAME_MAX_BURST 0x000000003fff0000
+#define BMAC_MAX_FRAME_MAX_BURST_SHIFT 16
+#define BMAC_MAX_FRAME_MAX_FRAME 0x0000000000003fff
+#define BMAC_MAX_FRAME_MAX_FRAME_SHIFT 0
+
+#define BMAC_PREAMBLE_SIZE 0x000b0UL
+#define BMAC_PREAMBLE_SIZE_VAL 0x00000000000003ff
+
+#define BMAC_CTRL_TYPE 0x000c8UL
+
+#define BMAC_ADDR0 0x00100UL
+#define BMAC_ADDR0_ADDR0 0x000000000000ffff
+
+#define BMAC_ADDR1 0x00108UL
+#define BMAC_ADDR1_ADDR1 0x000000000000ffff
+
+#define BMAC_ADDR2 0x00110UL
+#define BMAC_ADDR2_ADDR2 0x000000000000ffff
+
+#define BMAC_NUM_ALT_ADDR 7
+
+#define BMAC_ALT_ADDR0(NUM) (0x00118UL + (NUM)*0x18UL)
+#define BMAC_ALT_ADDR0_ADDR0 0x000000000000ffff
+
+#define BMAC_ALT_ADDR1(NUM) (0x00120UL + (NUM)*0x18UL)
+#define BMAC_ALT_ADDR1_ADDR1 0x000000000000ffff
+
+#define BMAC_ALT_ADDR2(NUM) (0x00128UL + (NUM)*0x18UL)
+#define BMAC_ALT_ADDR2_ADDR2 0x000000000000ffff
+
+#define BMAC_FC_ADDR0 0x00268UL
+#define BMAC_FC_ADDR0_ADDR0 0x000000000000ffff
+
+#define BMAC_FC_ADDR1 0x00270UL
+#define BMAC_FC_ADDR1_ADDR1 0x000000000000ffff
+
+#define BMAC_FC_ADDR2 0x00278UL
+#define BMAC_FC_ADDR2_ADDR2 0x000000000000ffff
+
+#define BMAC_ADD_FILT0 0x00298UL
+#define BMAC_ADD_FILT0_FILT0 0x000000000000ffff
+
+#define BMAC_ADD_FILT1 0x002a0UL
+#define BMAC_ADD_FILT1_FILT1 0x000000000000ffff
+
+#define BMAC_ADD_FILT2 0x002a8UL
+#define BMAC_ADD_FILT2_FILT2 0x000000000000ffff
+
+#define BMAC_ADD_FILT12_MASK 0x002b0UL
+#define BMAC_ADD_FILT12_MASK_VAL 0x00000000000000ff
+
+#define BMAC_ADD_FILT00_MASK 0x002b8UL
+#define BMAC_ADD_FILT00_MASK_VAL 0x000000000000ffff
+
+#define BMAC_HASH_TBL(NUM) (0x002c0UL + (NUM) * 0x8UL)
+#define BMAC_HASH_TBL_VAL 0x000000000000ffff
+
+#define BRXMAC_FRAME_CNT 0x00370
+#define BRXMAC_FRAME_CNT_COUNT 0x000000000000ffff
+
+#define BRXMAC_MAX_LEN_ERR_CNT 0x00378
+
+#define BRXMAC_ALIGN_ERR_CNT 0x00380
+#define BRXMAC_ALIGN_ERR_CNT_COUNT 0x000000000000ffff
+
+#define BRXMAC_CRC_ERR_CNT 0x00388
+#define BRXMAC_CRC_ERR_CNT_COUNT 0x000000000000ffff
+
+#define BRXMAC_CODE_VIOL_ERR_CNT 0x00390
+#define BRXMAC_CODE_VIOL_ERR_CNT_COUNT 0x000000000000ffff
+
+#define BMAC_STATE_MACHINE 0x003a0
+
+#define BMAC_ADDR_CMPEN 0x003f8UL
+#define BMAC_ADDR_CMPEN_EN15 0x0000000000008000
+#define BMAC_ADDR_CMPEN_EN14 0x0000000000004000
+#define BMAC_ADDR_CMPEN_EN13 0x0000000000002000
+#define BMAC_ADDR_CMPEN_EN12 0x0000000000001000
+#define BMAC_ADDR_CMPEN_EN11 0x0000000000000800
+#define BMAC_ADDR_CMPEN_EN10 0x0000000000000400
+#define BMAC_ADDR_CMPEN_EN9 0x0000000000000200
+#define BMAC_ADDR_CMPEN_EN8 0x0000000000000100
+#define BMAC_ADDR_CMPEN_EN7 0x0000000000000080
+#define BMAC_ADDR_CMPEN_EN6 0x0000000000000040
+#define BMAC_ADDR_CMPEN_EN5 0x0000000000000020
+#define BMAC_ADDR_CMPEN_EN4 0x0000000000000010
+#define BMAC_ADDR_CMPEN_EN3 0x0000000000000008
+#define BMAC_ADDR_CMPEN_EN2 0x0000000000000004
+#define BMAC_ADDR_CMPEN_EN1 0x0000000000000002
+#define BMAC_ADDR_CMPEN_EN0 0x0000000000000001
+
+#define BMAC_NUM_HOST_INFO 9
+
+#define BMAC_HOST_INFO(NUM) (0x00400UL + (NUM) * 0x8UL)
+
+#define BTXMAC_BYTE_CNT 0x00448UL
+#define BTXMAC_BYTE_CNT_COUNT 0x00000000ffffffff
+
+#define BTXMAC_FRM_CNT 0x00450UL
+#define BTXMAC_FRM_CNT_COUNT 0x00000000ffffffff
+
+#define BRXMAC_BYTE_CNT 0x00458UL
+#define BRXMAC_BYTE_CNT_COUNT 0x00000000ffffffff
+
+#define HOST_INFO_MPR 0x0000000000000100
+#define HOST_INFO_MACRDCTBLN 0x0000000000000007
+
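+/* A sketch (illustrative, not from the chip spec) of pointing a host
+ * info entry at an RDC table; the nr64_mac()/nw64_mac() accessor names
+ * and the locals are assumptions here:
+ *
+ * u64 val = nr64_mac(XMAC_HOST_INFO(slot));
+ * val &= ~(HOST_INFO_MACRDCTBLN | HOST_INFO_MPR);
+ * val |= (table_num & HOST_INFO_MACRDCTBLN);
+ * if (mac_pref)
+ * val |= HOST_INFO_MPR;
+ * nw64_mac(XMAC_HOST_INFO(slot), val);
+ */
+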
+/* XPCS registers, offset from np->regs + np->xpcs_off */
+
+#define XPCS_CONTROL1 (FZC_MAC + 0x00000UL)
+#define XPCS_CONTROL1_RESET 0x0000000000008000
+#define XPCS_CONTROL1_LOOPBACK 0x0000000000004000
+#define XPCS_CONTROL1_SPEED_SELECT3 0x0000000000002000
+#define XPCS_CONTROL1_CSR_LOW_PWR 0x0000000000000800
+#define XPCS_CONTROL1_CSR_SPEED1 0x0000000000000040
+#define XPCS_CONTROL1_CSR_SPEED0 0x000000000000003c
+
+#define XPCS_STATUS1 (FZC_MAC + 0x00008UL)
+#define XPCS_STATUS1_CSR_FAULT 0x0000000000000080
+#define XPCS_STATUS1_CSR_RXLNK_STAT 0x0000000000000004
+#define XPCS_STATUS1_CSR_LPWR_ABLE 0x0000000000000002
+
+#define XPCS_DEVICE_IDENTIFIER (FZC_MAC + 0x00010UL)
+#define XPCS_DEVICE_IDENTIFIER_VAL 0x00000000ffffffff
+
+#define XPCS_SPEED_ABILITY (FZC_MAC + 0x00018UL)
+#define XPCS_SPEED_ABILITY_10GIG 0x0000000000000001
+
+#define XPCS_DEV_IN_PKG (FZC_MAC + 0x00020UL)
+#define XPCS_DEV_IN_PKG_CSR_VEND2 0x0000000080000000
+#define XPCS_DEV_IN_PKG_CSR_VEND1 0x0000000040000000
+#define XPCS_DEV_IN_PKG_DTE_XS 0x0000000000000020
+#define XPCS_DEV_IN_PKG_PHY_XS 0x0000000000000010
+#define XPCS_DEV_IN_PKG_PCS 0x0000000000000008
+#define XPCS_DEV_IN_PKG_WIS 0x0000000000000004
+#define XPCS_DEV_IN_PKG_PMD_PMA 0x0000000000000002
+#define XPCS_DEV_IN_PKG_CLS22 0x0000000000000001
+
+#define XPCS_CONTROL2 (FZC_MAC + 0x00028UL)
+#define XPCS_CONTROL2_CSR_PSC_SEL 0x0000000000000003
+
+#define XPCS_STATUS2 (FZC_MAC + 0x00030UL)
+#define XPCS_STATUS2_CSR_DEV_PRES 0x000000000000c000
+#define XPCS_STATUS2_CSR_TX_FAULT 0x0000000000000800
+#define XPCS_STATUS2_CSR_RCV_FAULT 0x0000000000000400
+#define XPCS_STATUS2_TEN_GBASE_W 0x0000000000000004
+#define XPCS_STATUS2_TEN_GBASE_X 0x0000000000000002
+#define XPCS_STATUS2_TEN_GBASE_R 0x0000000000000001
+
+#define XPCS_PKG_ID (FZC_MAC + 0x00038UL)
+#define XPCS_PKG_ID_VAL 0x00000000ffffffff
+
+#define XPCS_STATUS(IDX) (FZC_MAC + 0x00040UL)
+#define XPCS_STATUS_CSR_LANE_ALIGN 0x0000000000001000
+#define XPCS_STATUS_CSR_PATTEST_CAP 0x0000000000000800
+#define XPCS_STATUS_CSR_LANE3_SYNC 0x0000000000000008
+#define XPCS_STATUS_CSR_LANE2_SYNC 0x0000000000000004
+#define XPCS_STATUS_CSR_LANE1_SYNC 0x0000000000000002
+#define XPCS_STATUS_CSR_LANE0_SYNC 0x0000000000000001
+
+#define XPCS_TEST_CONTROL (FZC_MAC + 0x00048UL)
+#define XPCS_TEST_CONTROL_TXTST_EN 0x0000000000000004
+#define XPCS_TEST_CONTROL_TPAT_SEL 0x0000000000000003
+
+#define XPCS_CFG_VENDOR1 (FZC_MAC + 0x00050UL)
+#define XPCS_CFG_VENDOR1_DBG_IOTST 0x0000000000000080
+#define XPCS_CFG_VENDOR1_DBG_SEL 0x0000000000000078
+#define XPCS_CFG_VENDOR1_BYPASS_DET 0x0000000000000004
+#define XPCS_CFG_VENDOR1_TXBUF_EN 0x0000000000000002
+#define XPCS_CFG_VENDOR1_XPCS_EN 0x0000000000000001
+
+#define XPCS_DIAG_VENDOR2 (FZC_MAC + 0x00058UL)
+#define XPCS_DIAG_VENDOR2_SSM_LANE3 0x0000000001e00000
+#define XPCS_DIAG_VENDOR2_SSM_LANE2 0x00000000001e0000
+#define XPCS_DIAG_VENDOR2_SSM_LANE1 0x000000000001e000
+#define XPCS_DIAG_VENDOR2_SSM_LANE0 0x0000000000001e00
+#define XPCS_DIAG_VENDOR2_EBUF_SM 0x00000000000001fe
+#define XPCS_DIAG_VENDOR2_RCV_SM 0x0000000000000001
+
+#define XPCS_MASK1 (FZC_MAC + 0x00060UL)
+#define XPCS_MASK1_FAULT_MASK 0x0000000000000080
+#define XPCS_MASK1_RXALIGN_STAT_MSK 0x0000000000000004
+
+#define XPCS_PKT_COUNT (FZC_MAC + 0x00068UL)
+#define XPCS_PKT_COUNT_TX 0x00000000ffff0000
+#define XPCS_PKT_COUNT_RX 0x000000000000ffff
+
+#define XPCS_TX_SM (FZC_MAC + 0x00070UL)
+#define XPCS_TX_SM_VAL 0x000000000000000f
+
+#define XPCS_DESKEW_ERR_CNT (FZC_MAC + 0x00078UL)
+#define XPCS_DESKEW_ERR_CNT_VAL 0x00000000000000ff
+
+#define XPCS_SYMERR_CNT01 (FZC_MAC + 0x00080UL)
+#define XPCS_SYMERR_CNT01_LANE1 0x00000000ffff0000
+#define XPCS_SYMERR_CNT01_LANE0 0x000000000000ffff
+
+#define XPCS_SYMERR_CNT23 (FZC_MAC + 0x00088UL)
+#define XPCS_SYMERR_CNT23_LANE3 0x00000000ffff0000
+#define XPCS_SYMERR_CNT23_LANE2 0x000000000000ffff
+
+#define XPCS_TRAINING_VECTOR (FZC_MAC + 0x00090UL)
+#define XPCS_TRAINING_VECTOR_VAL 0x00000000ffffffff
+
+/* PCS registers, offset from np->regs + np->pcs_off */
+
+#define PCS_MII_CTL (FZC_MAC + 0x00000UL)
+#define PCS_MII_CTL_RST 0x0000000000008000
+#define PCS_MII_CTL_10_100_SPEED 0x0000000000002000
+#define PCS_MII_AUTONEG_EN 0x0000000000001000
+#define PCS_MII_PWR_DOWN 0x0000000000000800
+#define PCS_MII_ISOLATE 0x0000000000000400
+#define PCS_MII_AUTONEG_RESTART 0x0000000000000200
+#define PCS_MII_DUPLEX 0x0000000000000100
+#define PCS_MII_COLL_TEST 0x0000000000000080
+#define PCS_MII_1000MB_SPEED 0x0000000000000040
+
+#define PCS_MII_STAT (FZC_MAC + 0x00008UL)
+#define PCS_MII_STAT_EXT_STATUS 0x0000000000000100
+#define PCS_MII_STAT_AUTONEG_DONE 0x0000000000000020
+#define PCS_MII_STAT_REMOTE_FAULT 0x0000000000000010
+#define PCS_MII_STAT_AUTONEG_ABLE 0x0000000000000008
+#define PCS_MII_STAT_LINK_STATUS 0x0000000000000004
+#define PCS_MII_STAT_JABBER_DET 0x0000000000000002
+#define PCS_MII_STAT_EXT_CAP 0x0000000000000001
+
+#define PCS_MII_ADV (FZC_MAC + 0x00010UL)
+#define PCS_MII_ADV_NEXT_PAGE 0x0000000000008000
+#define PCS_MII_ADV_ACK 0x0000000000004000
+#define PCS_MII_ADV_REMOTE_FAULT 0x0000000000003000
+#define PCS_MII_ADV_ASM_DIR 0x0000000000000100
+#define PCS_MII_ADV_PAUSE 0x0000000000000080
+#define PCS_MII_ADV_HALF_DUPLEX 0x0000000000000040
+#define PCS_MII_ADV_FULL_DUPLEX 0x0000000000000020
+
+#define PCS_MII_PARTNER (FZC_MAC + 0x00018UL)
+#define PCS_MII_PARTNER_NEXT_PAGE 0x0000000000008000
+#define PCS_MII_PARTNER_ACK 0x0000000000004000
+#define PCS_MII_PARTNER_REMOTE_FAULT 0x0000000000002000
+#define PCS_MII_PARTNER_PAUSE 0x0000000000000180
+#define PCS_MII_PARTNER_HALF_DUPLEX 0x0000000000000040
+#define PCS_MII_PARTNER_FULL_DUPLEX 0x0000000000000020
+
+#define PCS_CONF (FZC_MAC + 0x00020UL)
+#define PCS_CONF_MASK 0x0000000000000040
+#define PCS_CONF_10MS_TMR_OVERRIDE 0x0000000000000020
+#define PCS_CONF_JITTER_STUDY 0x0000000000000018
+#define PCS_CONF_SIGDET_ACTIVE_LOW 0x0000000000000004
+#define PCS_CONF_SIGDET_OVERRIDE 0x0000000000000002
+#define PCS_CONF_ENABLE 0x0000000000000001
+
+#define PCS_STATE (FZC_MAC + 0x00028UL)
+#define PCS_STATE_D_PARTNER_FAIL 0x0000000020000000
+#define PCS_STATE_D_WAIT_C_CODES_ACK 0x0000000010000000
+#define PCS_STATE_D_SYNC_LOSS 0x0000000008000000
+#define PCS_STATE_D_NO_GOOD_C_CODES 0x0000000004000000
+#define PCS_STATE_D_SERDES 0x0000000002000000
+#define PCS_STATE_D_BREAKLINK_C_CODES 0x0000000001000000
+#define PCS_STATE_L_SIGDET 0x0000000000400000
+#define PCS_STATE_L_SYNC_LOSS 0x0000000000200000
+#define PCS_STATE_L_C_CODES 0x0000000000100000
+#define PCS_STATE_LINK_CFG_STATE 0x000000000001e000
+#define PCS_STATE_SEQ_DET_STATE 0x0000000000001800
+#define PCS_STATE_WORD_SYNC_STATE 0x0000000000000700
+#define PCS_STATE_NO_IDLE 0x000000000000000f
+
+#define PCS_INTERRUPT (FZC_MAC + 0x00030UL)
+#define PCS_INTERRUPT_LSTATUS 0x0000000000000004
+
+#define PCS_DPATH_MODE (FZC_MAC + 0x000a0UL)
+#define PCS_DPATH_MODE_PCS 0x0000000000000000
+#define PCS_DPATH_MODE_MII 0x0000000000000002
+#define PCS_DPATH_MODE_LINKUP_F_ENAB 0x0000000000000001
+
+#define PCS_PKT_CNT (FZC_MAC + 0x000c0UL)
+#define PCS_PKT_CNT_RX 0x0000000007ff0000
+#define PCS_PKT_CNT_TX 0x00000000000007ff
+
+#define MIF_BB_MDC (FZC_MAC + 0x16000UL)
+#define MIF_BB_MDC_CLK 0x0000000000000001
+
+#define MIF_BB_MDO (FZC_MAC + 0x16008UL)
+#define MIF_BB_MDO_DAT 0x0000000000000001
+
+#define MIF_BB_MDO_EN (FZC_MAC + 0x16010UL)
+#define MIF_BB_MDO_EN_VAL 0x0000000000000001
+
+#define MIF_FRAME_OUTPUT (FZC_MAC + 0x16018UL)
+#define MIF_FRAME_OUTPUT_ST 0x00000000c0000000
+#define MIF_FRAME_OUTPUT_ST_SHIFT 30
+#define MIF_FRAME_OUTPUT_OP_ADDR 0x0000000000000000
+#define MIF_FRAME_OUTPUT_OP_WRITE 0x0000000010000000
+#define MIF_FRAME_OUTPUT_OP_READ_INC 0x0000000020000000
+#define MIF_FRAME_OUTPUT_OP_READ 0x0000000030000000
+#define MIF_FRAME_OUTPUT_OP_SHIFT 28
+#define MIF_FRAME_OUTPUT_PORT 0x000000000f800000
+#define MIF_FRAME_OUTPUT_PORT_SHIFT 23
+#define MIF_FRAME_OUTPUT_REG 0x00000000007c0000
+#define MIF_FRAME_OUTPUT_REG_SHIFT 18
+#define MIF_FRAME_OUTPUT_TA 0x0000000000030000
+#define MIF_FRAME_OUTPUT_TA_SHIFT 16
+#define MIF_FRAME_OUTPUT_DATA 0x000000000000ffff
+#define MIF_FRAME_OUTPUT_DATA_SHIFT 0
+
+#define MDIO_ADDR_OP(port, dev, reg) \
+ ((0 << MIF_FRAME_OUTPUT_ST_SHIFT) | \
+ MIF_FRAME_OUTPUT_OP_ADDR | \
+ ((port) << MIF_FRAME_OUTPUT_PORT_SHIFT) | \
+ ((dev) << MIF_FRAME_OUTPUT_REG_SHIFT) | \
+ (0x2 << MIF_FRAME_OUTPUT_TA_SHIFT) | \
+ ((reg) << MIF_FRAME_OUTPUT_DATA_SHIFT))
+
+#define MDIO_READ_OP(port, dev) \
+ ((0 << MIF_FRAME_OUTPUT_ST_SHIFT) | \
+ MIF_FRAME_OUTPUT_OP_READ | \
+ ((port) << MIF_FRAME_OUTPUT_PORT_SHIFT) | \
+ ((dev) << MIF_FRAME_OUTPUT_REG_SHIFT) | \
+ (0x2 << MIF_FRAME_OUTPUT_TA_SHIFT))
+
+#define MDIO_WRITE_OP(port, dev, data) \
+ ((0 << MIF_FRAME_OUTPUT_ST_SHIFT) | \
+ MIF_FRAME_OUTPUT_OP_WRITE | \
+ ((port) << MIF_FRAME_OUTPUT_PORT_SHIFT) | \
+ ((dev) << MIF_FRAME_OUTPUT_REG_SHIFT) | \
+ (0x2 << MIF_FRAME_OUTPUT_TA_SHIFT) | \
+ ((data) << MIF_FRAME_OUTPUT_DATA_SHIFT))
+
+#define MII_READ_OP(port, reg) \
+ ((1 << MIF_FRAME_OUTPUT_ST_SHIFT) | \
+ (2 << MIF_FRAME_OUTPUT_OP_SHIFT) | \
+ ((port) << MIF_FRAME_OUTPUT_PORT_SHIFT) | \
+ ((reg) << MIF_FRAME_OUTPUT_REG_SHIFT) | \
+ (0x2 << MIF_FRAME_OUTPUT_TA_SHIFT))
+
+#define MII_WRITE_OP(port, reg, data) \
+ ((1 << MIF_FRAME_OUTPUT_ST_SHIFT) | \
+ (1 << MIF_FRAME_OUTPUT_OP_SHIFT) | \
+ ((port) << MIF_FRAME_OUTPUT_PORT_SHIFT) | \
+ ((reg) << MIF_FRAME_OUTPUT_REG_SHIFT) | \
+ (0x2 << MIF_FRAME_OUTPUT_TA_SHIFT) | \
+ ((data) << MIF_FRAME_OUTPUT_DATA_SHIFT))
+
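+/* For illustration, a clause-45 register read composed from the frame
+ * macros above might run as follows; the nw64()/nr64() accessors and
+ * the mdio_wait() completion helper (poll the TA bits, then return the
+ * DATA field) are assumed names, not definitions from this header:
+ *
+ * nw64(MIF_FRAME_OUTPUT, MDIO_ADDR_OP(port, dev, reg));
+ * if (mdio_wait(np) < 0)
+ * return -ENODEV;
+ * nw64(MIF_FRAME_OUTPUT, MDIO_READ_OP(port, dev));
+ * return mdio_wait(np);
+ *
+ * where a successful mdio_wait() yields the 16-bit DATA field of the
+ * completed frame.
+ */
+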
+#define MIF_CONFIG (FZC_MAC + 0x16020UL)
+#define MIF_CONFIG_ATCA_GE 0x0000000000010000
+#define MIF_CONFIG_INDIRECT_MODE 0x0000000000008000
+#define MIF_CONFIG_POLL_PRT_PHYADDR 0x0000000000003c00
+#define MIF_CONFIG_POLL_DEV_REG_ADDR 0x00000000000003e0
+#define MIF_CONFIG_BB_MODE 0x0000000000000010
+#define MIF_CONFIG_POLL_EN 0x0000000000000008
+#define MIF_CONFIG_BB_SER_SEL 0x0000000000000006
+#define MIF_CONFIG_MANUAL_MODE 0x0000000000000001
+
+#define MIF_POLL_STATUS (FZC_MAC + 0x16028UL)
+#define MIF_POLL_STATUS_DATA 0x00000000ffff0000
+#define MIF_POLL_STATUS_STAT 0x000000000000ffff
+
+#define MIF_POLL_MASK (FZC_MAC + 0x16030UL)
+#define MIF_POLL_MASK_VAL 0x000000000000ffff
+
+#define MIF_SM (FZC_MAC + 0x16038UL)
+#define MIF_SM_PORT_ADDR 0x00000000001f0000
+#define MIF_SM_MDI_1 0x0000000000004000
+#define MIF_SM_MDI_0 0x0000000000002400
+#define MIF_SM_MDCLK 0x0000000000001000
+#define MIF_SM_MDO_EN 0x0000000000000800
+#define MIF_SM_MDO 0x0000000000000400
+#define MIF_SM_MDI 0x0000000000000200
+#define MIF_SM_CTL 0x00000000000001c0
+#define MIF_SM_EX 0x000000000000003f
+
+#define MIF_STATUS (FZC_MAC + 0x16040UL)
+#define MIF_STATUS_MDINT1 0x0000000000000020
+#define MIF_STATUS_MDINT0 0x0000000000000010
+
+#define MIF_MASK (FZC_MAC + 0x16048UL)
+#define MIF_MASK_MDINT1 0x0000000000000020
+#define MIF_MASK_MDINT0 0x0000000000000010
+#define MIF_MASK_PEU_ERR 0x0000000000000008
+#define MIF_MASK_YC 0x0000000000000004
+#define MIF_MASK_XGE_ERR0 0x0000000000000002
+#define MIF_MASK_MIF_INIT_DONE 0x0000000000000001
+
+#define ENET_SERDES_RESET (FZC_MAC + 0x14000UL)
+#define ENET_SERDES_RESET_1 0x0000000000000002
+#define ENET_SERDES_RESET_0 0x0000000000000001
+
+#define ENET_SERDES_CFG (FZC_MAC + 0x14008UL)
+#define ENET_SERDES_BE_LOOPBACK 0x0000000000000002
+#define ENET_SERDES_CFG_FORCE_RDY 0x0000000000000001
+
+#define ENET_SERDES_0_PLL_CFG (FZC_MAC + 0x14010UL)
+#define ENET_SERDES_PLL_FBDIV0 0x0000000000000001
+#define ENET_SERDES_PLL_FBDIV1 0x0000000000000002
+#define ENET_SERDES_PLL_FBDIV2 0x0000000000000004
+#define ENET_SERDES_PLL_HRATE0 0x0000000000000008
+#define ENET_SERDES_PLL_HRATE1 0x0000000000000010
+#define ENET_SERDES_PLL_HRATE2 0x0000000000000020
+#define ENET_SERDES_PLL_HRATE3 0x0000000000000040
+
+#define ENET_SERDES_0_CTRL_CFG (FZC_MAC + 0x14018UL)
+#define ENET_SERDES_CTRL_SDET_0 0x0000000000000001
+#define ENET_SERDES_CTRL_SDET_1 0x0000000000000002
+#define ENET_SERDES_CTRL_SDET_2 0x0000000000000004
+#define ENET_SERDES_CTRL_SDET_3 0x0000000000000008
+#define ENET_SERDES_CTRL_EMPH_0 0x0000000000000070
+#define ENET_SERDES_CTRL_EMPH_0_SHIFT 4
+#define ENET_SERDES_CTRL_EMPH_1 0x0000000000000380
+#define ENET_SERDES_CTRL_EMPH_1_SHIFT 7
+#define ENET_SERDES_CTRL_EMPH_2 0x0000000000001c00
+#define ENET_SERDES_CTRL_EMPH_2_SHIFT 10
+#define ENET_SERDES_CTRL_EMPH_3 0x000000000000e000
+#define ENET_SERDES_CTRL_EMPH_3_SHIFT 13
+#define ENET_SERDES_CTRL_LADJ_0 0x0000000000070000
+#define ENET_SERDES_CTRL_LADJ_0_SHIFT 16
+#define ENET_SERDES_CTRL_LADJ_1 0x0000000000380000
+#define ENET_SERDES_CTRL_LADJ_1_SHIFT 19
+#define ENET_SERDES_CTRL_LADJ_2 0x0000000001c00000
+#define ENET_SERDES_CTRL_LADJ_2_SHIFT 22
+#define ENET_SERDES_CTRL_LADJ_3 0x000000000e000000
+#define ENET_SERDES_CTRL_LADJ_3_SHIFT 25
+#define ENET_SERDES_CTRL_RXITERM_0 0x0000000010000000
+#define ENET_SERDES_CTRL_RXITERM_1 0x0000000020000000
+#define ENET_SERDES_CTRL_RXITERM_2 0x0000000040000000
+#define ENET_SERDES_CTRL_RXITERM_3 0x0000000080000000
+
+#define ENET_SERDES_0_TEST_CFG (FZC_MAC + 0x14020UL)
+#define ENET_SERDES_TEST_MD_0 0x0000000000000003
+#define ENET_SERDES_TEST_MD_0_SHIFT 0
+#define ENET_SERDES_TEST_MD_1 0x000000000000000c
+#define ENET_SERDES_TEST_MD_1_SHIFT 2
+#define ENET_SERDES_TEST_MD_2 0x0000000000000030
+#define ENET_SERDES_TEST_MD_2_SHIFT 4
+#define ENET_SERDES_TEST_MD_3 0x00000000000000c0
+#define ENET_SERDES_TEST_MD_3_SHIFT 6
+
+#define ENET_TEST_MD_NO_LOOPBACK 0x0
+#define ENET_TEST_MD_EWRAP 0x1
+#define ENET_TEST_MD_PAD_LOOPBACK 0x2
+#define ENET_TEST_MD_REV_LOOPBACK 0x3
+
+#define ENET_SERDES_1_PLL_CFG (FZC_MAC + 0x14028UL)
+#define ENET_SERDES_1_CTRL_CFG (FZC_MAC + 0x14030UL)
+#define ENET_SERDES_1_TEST_CFG (FZC_MAC + 0x14038UL)
+
+#define ENET_RGMII_CFG_REG (FZC_MAC + 0x14040UL)
+
+#define ESR_INT_SIGNALS (FZC_MAC + 0x14800UL)
+#define ESR_INT_SIGNALS_ALL 0x00000000ffffffff
+#define ESR_INT_SIGNALS_P0_BITS 0x0000000033e0000f
+#define ESR_INT_SIGNALS_P1_BITS 0x000000000c1f00f0
+#define ESR_INT_SRDY0_P0 0x0000000020000000
+#define ESR_INT_DET0_P0 0x0000000010000000
+#define ESR_INT_SRDY0_P1 0x0000000008000000
+#define ESR_INT_DET0_P1 0x0000000004000000
+#define ESR_INT_XSRDY_P0 0x0000000002000000
+#define ESR_INT_XDP_P0_CH3 0x0000000001000000
+#define ESR_INT_XDP_P0_CH2 0x0000000000800000
+#define ESR_INT_XDP_P0_CH1 0x0000000000400000
+#define ESR_INT_XDP_P0_CH0 0x0000000000200000
+#define ESR_INT_XSRDY_P1 0x0000000000100000
+#define ESR_INT_XDP_P1_CH3 0x0000000000080000
+#define ESR_INT_XDP_P1_CH2 0x0000000000040000
+#define ESR_INT_XDP_P1_CH1 0x0000000000020000
+#define ESR_INT_XDP_P1_CH0 0x0000000000010000
+#define ESR_INT_SLOSS_P1_CH3 0x0000000000000080
+#define ESR_INT_SLOSS_P1_CH2 0x0000000000000040
+#define ESR_INT_SLOSS_P1_CH1 0x0000000000000020
+#define ESR_INT_SLOSS_P1_CH0 0x0000000000000010
+#define ESR_INT_SLOSS_P0_CH3 0x0000000000000008
+#define ESR_INT_SLOSS_P0_CH2 0x0000000000000004
+#define ESR_INT_SLOSS_P0_CH1 0x0000000000000002
+#define ESR_INT_SLOSS_P0_CH0 0x0000000000000001
+
+#define ESR_DEBUG_SEL (FZC_MAC + 0x14808UL)
+#define ESR_DEBUG_SEL_VAL 0x000000000000003f
+
+/* SerDes registers behind MIF */
+#define NIU_ESR_DEV_ADDR 0x1e
+#define ESR_BASE 0x0000
+
+#define ESR_RXTX_COMM_CTRL_L (ESR_BASE + 0x0000)
+#define ESR_RXTX_COMM_CTRL_H (ESR_BASE + 0x0001)
+
+#define ESR_RXTX_RESET_CTRL_L (ESR_BASE + 0x0002)
+#define ESR_RXTX_RESET_CTRL_H (ESR_BASE + 0x0003)
+
+#define ESR_RX_POWER_CTRL_L (ESR_BASE + 0x0004)
+#define ESR_RX_POWER_CTRL_H (ESR_BASE + 0x0005)
+
+#define ESR_TX_POWER_CTRL_L (ESR_BASE + 0x0006)
+#define ESR_TX_POWER_CTRL_H (ESR_BASE + 0x0007)
+
+#define ESR_MISC_POWER_CTRL_L (ESR_BASE + 0x0008)
+#define ESR_MISC_POWER_CTRL_H (ESR_BASE + 0x0009)
+
+#define ESR_RXTX_CTRL_L(CHAN) (ESR_BASE + 0x0080 + (CHAN) * 0x10)
+#define ESR_RXTX_CTRL_H(CHAN) (ESR_BASE + 0x0081 + (CHAN) * 0x10)
+#define ESR_RXTX_CTRL_BIASCNTL 0x80000000
+#define ESR_RXTX_CTRL_RESV1 0x7c000000
+#define ESR_RXTX_CTRL_TDENFIFO 0x02000000
+#define ESR_RXTX_CTRL_TDWS20 0x01000000
+#define ESR_RXTX_CTRL_VMUXLO 0x00c00000
+#define ESR_RXTX_CTRL_VMUXLO_SHIFT 22
+#define ESR_RXTX_CTRL_VPULSELO 0x00300000
+#define ESR_RXTX_CTRL_VPULSELO_SHIFT 20
+#define ESR_RXTX_CTRL_RESV2 0x000f0000
+#define ESR_RXTX_CTRL_RESV3 0x0000c000
+#define ESR_RXTX_CTRL_RXPRESWIN 0x00003000
+#define ESR_RXTX_CTRL_RXPRESWIN_SHIFT 12
+#define ESR_RXTX_CTRL_RESV4 0x00000800
+#define ESR_RXTX_CTRL_RISEFALL 0x00000700
+#define ESR_RXTX_CTRL_RISEFALL_SHIFT 8
+#define ESR_RXTX_CTRL_RESV5 0x000000fe
+#define ESR_RXTX_CTRL_ENSTRETCH 0x00000001
+
+#define ESR_RXTX_TUNING_L(CHAN) (ESR_BASE + 0x0082 + (CHAN) * 0x10)
+#define ESR_RXTX_TUNING_H(CHAN) (ESR_BASE + 0x0083 + (CHAN) * 0x10)
+
+#define ESR_RX_SYNCCHAR_L(CHAN) (ESR_BASE + 0x0084 + (CHAN) * 0x10)
+#define ESR_RX_SYNCCHAR_H(CHAN) (ESR_BASE + 0x0085 + (CHAN) * 0x10)
+
+#define ESR_RXTX_TEST_L(CHAN) (ESR_BASE + 0x0086 + (CHAN) * 0x10)
+#define ESR_RXTX_TEST_H(CHAN) (ESR_BASE + 0x0087 + (CHAN) * 0x10)
+
+#define ESR_GLUE_CTRL0_L(CHAN) (ESR_BASE + 0x0088 + (CHAN) * 0x10)
+#define ESR_GLUE_CTRL0_H(CHAN) (ESR_BASE + 0x0089 + (CHAN) * 0x10)
+#define ESR_GLUE_CTRL0_RESV1 0xf8000000
+#define ESR_GLUE_CTRL0_BLTIME 0x07000000
+#define ESR_GLUE_CTRL0_BLTIME_SHIFT 24
+#define ESR_GLUE_CTRL0_RESV2 0x00ff0000
+#define ESR_GLUE_CTRL0_RXLOS_TEST 0x00008000
+#define ESR_GLUE_CTRL0_RESV3 0x00004000
+#define ESR_GLUE_CTRL0_RXLOSENAB 0x00002000
+#define ESR_GLUE_CTRL0_FASTRESYNC 0x00001000
+#define ESR_GLUE_CTRL0_SRATE 0x00000f00
+#define ESR_GLUE_CTRL0_SRATE_SHIFT 8
+#define ESR_GLUE_CTRL0_THCNT 0x000000ff
+#define ESR_GLUE_CTRL0_THCNT_SHIFT 0
+
+#define BLTIME_64_CYCLES 0
+#define BLTIME_128_CYCLES 1
+#define BLTIME_256_CYCLES 2
+#define BLTIME_300_CYCLES 3
+#define BLTIME_384_CYCLES 4
+#define BLTIME_512_CYCLES 5
+#define BLTIME_1024_CYCLES 6
+#define BLTIME_2048_CYCLES 7
+
+#define ESR_GLUE_CTRL1_L(CHAN) (ESR_BASE + 0x008a + (CHAN) * 0x10)
+#define ESR_GLUE_CTRL1_H(CHAN) (ESR_BASE + 0x008b + (CHAN) * 0x10)
+#define ESR_RXTX_TUNING1_L(CHAN) (ESR_BASE + 0x00c2 + (CHAN) * 0x10)
+#define ESR_RXTX_TUNING1_H(CHAN) (ESR_BASE + 0x00c3 + (CHAN) * 0x10)
+#define ESR_RXTX_TUNING2_L(CHAN) (ESR_BASE + 0x0102 + (CHAN) * 0x10)
+#define ESR_RXTX_TUNING2_H(CHAN) (ESR_BASE + 0x0103 + (CHAN) * 0x10)
+#define ESR_RXTX_TUNING3_L(CHAN) (ESR_BASE + 0x0142 + (CHAN) * 0x10)
+#define ESR_RXTX_TUNING3_H(CHAN) (ESR_BASE + 0x0143 + (CHAN) * 0x10)
+
+#define NIU_ESR2_DEV_ADDR 0x1e
+#define ESR2_BASE 0x8000
+
+#define ESR2_TI_PLL_CFG_L (ESR2_BASE + 0x000)
+#define ESR2_TI_PLL_CFG_H (ESR2_BASE + 0x001)
+#define PLL_CFG_STD 0x00000c00
+#define PLL_CFG_STD_SHIFT 10
+#define PLL_CFG_LD 0x00000300
+#define PLL_CFG_LD_SHIFT 8
+#define PLL_CFG_MPY 0x0000001e
+#define PLL_CFG_MPY_SHIFT 1
+#define PLL_CFG_ENPLL 0x00000001
+
+#define ESR2_TI_PLL_STS_L (ESR2_BASE + 0x002)
+#define ESR2_TI_PLL_STS_H (ESR2_BASE + 0x003)
+#define PLL_STS_LOCK 0x00000001
+
+#define ESR2_TI_PLL_TEST_CFG_L (ESR2_BASE + 0x004)
+#define ESR2_TI_PLL_TEST_CFG_H (ESR2_BASE + 0x005)
+#define PLL_TEST_INVPATT 0x00004000
+#define PLL_TEST_RATE 0x00003000
+#define PLL_TEST_RATE_SHIFT 12
+#define PLL_TEST_CFG_ENBSAC 0x00000400
+#define PLL_TEST_CFG_ENBSRX 0x00000200
+#define PLL_TEST_CFG_ENBSTX 0x00000100
+#define PLL_TEST_CFG_LOOPBACK_PAD 0x00000040
+#define PLL_TEST_CFG_LOOPBACK_CML_DIS 0x00000080
+#define PLL_TEST_CFG_LOOPBACK_CML_EN 0x000000c0
+#define PLL_TEST_CFG_CLKBYP 0x00000030
+#define PLL_TEST_CFG_CLKBYP_SHIFT 4
+#define PLL_TEST_CFG_EN_RXPATT 0x00000008
+#define PLL_TEST_CFG_EN_TXPATT 0x00000004
+#define PLL_TEST_CFG_TPATT 0x00000003
+#define PLL_TEST_CFG_TPATT_SHIFT 0
+
+#define ESR2_TI_PLL_TX_CFG_L(CHAN) (ESR2_BASE + 0x100 + (CHAN) * 4)
+#define ESR2_TI_PLL_TX_CFG_H(CHAN) (ESR2_BASE + 0x101 + (CHAN) * 4)
+#define PLL_TX_CFG_RDTCT 0x00600000
+#define PLL_TX_CFG_RDTCT_SHIFT 21
+#define PLL_TX_CFG_ENIDL 0x00100000
+#define PLL_TX_CFG_BSTX 0x00020000
+#define PLL_TX_CFG_ENFTP 0x00010000
+#define PLL_TX_CFG_DE 0x0000f000
+#define PLL_TX_CFG_DE_SHIFT 12
+#define PLL_TX_CFG_SWING_125MV 0x00000000
+#define PLL_TX_CFG_SWING_250MV 0x00000200
+#define PLL_TX_CFG_SWING_500MV 0x00000400
+#define PLL_TX_CFG_SWING_625MV 0x00000600
+#define PLL_TX_CFG_SWING_750MV 0x00000800
+#define PLL_TX_CFG_SWING_1000MV 0x00000a00
+#define PLL_TX_CFG_SWING_1250MV 0x00000c00
+#define PLL_TX_CFG_SWING_1375MV 0x00000e00
+#define PLL_TX_CFG_CM 0x00000100
+#define PLL_TX_CFG_INVPAIR 0x00000080
+#define PLL_TX_CFG_RATE 0x00000060
+#define PLL_TX_CFG_RATE_SHIFT 5
+#define PLL_TX_CFG_BUSWIDTH 0x0000001c
+#define PLL_TX_CFG_BUSWIDTH_SHIFT 2
+#define PLL_TX_CFG_ENTEST 0x00000002
+#define PLL_TX_CFG_ENTX 0x00000001
+
+#define ESR2_TI_PLL_TX_STS_L(CHAN) (ESR2_BASE + 0x102 + (CHAN) * 4)
+#define ESR2_TI_PLL_TX_STS_H(CHAN) (ESR2_BASE + 0x103 + (CHAN) * 4)
+#define PLL_TX_STS_RDTCTIP 0x00000002
+#define PLL_TX_STS_TESTFAIL 0x00000001
+
+#define ESR2_TI_PLL_RX_CFG_L(CHAN) (ESR2_BASE + 0x120 + (CHAN) * 4)
+#define ESR2_TI_PLL_RX_CFG_H(CHAN) (ESR2_BASE + 0x121 + (CHAN) * 4)
+#define PLL_RX_CFG_BSINRXN 0x02000000
+#define PLL_RX_CFG_BSINRXP 0x01000000
+#define PLL_RX_CFG_EQ_MAX_LF 0x00000000
+#define PLL_RX_CFG_EQ_LP_ADAPTIVE 0x00080000
+#define PLL_RX_CFG_EQ_LP_1084MHZ 0x00400000
+#define PLL_RX_CFG_EQ_LP_805MHZ 0x00480000
+#define PLL_RX_CFG_EQ_LP_573MHZ 0x00500000
+#define PLL_RX_CFG_EQ_LP_402MHZ 0x00580000
+#define PLL_RX_CFG_EQ_LP_304MHZ 0x00600000
+#define PLL_RX_CFG_EQ_LP_216MHZ 0x00680000
+#define PLL_RX_CFG_EQ_LP_156MHZ 0x00700000
+#define PLL_RX_CFG_EQ_LP_135MHZ 0x00780000
+#define PLL_RX_CFG_EQ_SHIFT 19
+#define PLL_RX_CFG_CDR 0x00070000
+#define PLL_RX_CFG_CDR_SHIFT 16
+#define PLL_RX_CFG_LOS_DIS 0x00000000
+#define PLL_RX_CFG_LOS_HTHRESH 0x00004000
+#define PLL_RX_CFG_LOS_LTHRESH 0x00008000
+#define PLL_RX_CFG_ALIGN_DIS 0x00000000
+#define PLL_RX_CFG_ALIGN_ENA 0x00001000
+#define PLL_RX_CFG_ALIGN_JOG 0x00002000
+#define PLL_RX_CFG_TERM_VDDT 0x00000000
+#define PLL_RX_CFG_TERM_0P8VDDT 0x00000100
+#define PLL_RX_CFG_TERM_FLOAT 0x00000300
+#define PLL_RX_CFG_INVPAIR 0x00000080
+#define PLL_RX_CFG_RATE 0x00000060
+#define PLL_RX_CFG_RATE_SHIFT 5
+#define PLL_RX_CFG_BUSWIDTH 0x0000001c
+#define PLL_RX_CFG_BUSWIDTH_SHIFT 2
+#define PLL_RX_CFG_ENTEST 0x00000002
+#define PLL_RX_CFG_ENRX 0x00000001
+
+#define ESR2_TI_PLL_RX_STS_L(CHAN) (ESR2_BASE + 0x122 + (CHAN) * 4)
+#define ESR2_TI_PLL_RX_STS_H(CHAN) (ESR2_BASE + 0x123 + (CHAN) * 4)
+#define PLL_RX_STS_CRCIDTCT 0x00000200
+#define PLL_RX_STS_CWDTCT 0x00000100
+#define PLL_RX_STS_BSRXN 0x00000020
+#define PLL_RX_STS_BSRXP 0x00000010
+#define PLL_RX_STS_LOSDTCT 0x00000008
+#define PLL_RX_STS_ODDCG 0x00000004
+#define PLL_RX_STS_SYNC 0x00000002
+#define PLL_RX_STS_TESTFAIL 0x00000001
+
+#define ENET_VLAN_TBL(IDX) (FZC_FFLP + 0x00000UL + (IDX) * 8UL)
+#define ENET_VLAN_TBL_PARITY1 0x0000000000020000
+#define ENET_VLAN_TBL_PARITY0 0x0000000000010000
+#define ENET_VLAN_TBL_VPR 0x0000000000000008
+#define ENET_VLAN_TBL_VLANRDCTBLN 0x0000000000000007
+#define ENET_VLAN_TBL_SHIFT(PORT) ((PORT) * 4)
+
+#define ENET_VLAN_TBL_NUM_ENTRIES 4096
+
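+/* Sketch of updating one port's nibble in a VLAN table entry, assuming
+ * the usual nr64()/nw64() accessors; a real update would also have to
+ * recompute the PARITY0/PARITY1 bits, which is omitted here:
+ *
+ * u64 val = nr64(ENET_VLAN_TBL(vid));
+ * val &= ~((ENET_VLAN_TBL_VPR | ENET_VLAN_TBL_VLANRDCTBLN) <<
+ * ENET_VLAN_TBL_SHIFT(port));
+ * if (vpr)
+ * val |= ENET_VLAN_TBL_VPR << ENET_VLAN_TBL_SHIFT(port);
+ * val |= (rdc_tbl & ENET_VLAN_TBL_VLANRDCTBLN) <<
+ * ENET_VLAN_TBL_SHIFT(port);
+ * nw64(ENET_VLAN_TBL(vid), val);
+ */
+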
+#define FFLP_VLAN_PAR_ERR (FZC_FFLP + 0x0800UL)
+#define FFLP_VLAN_PAR_ERR_ERR 0x0000000080000000
+#define FFLP_VLAN_PAR_ERR_M_ERR 0x0000000040000000
+#define FFLP_VLAN_PAR_ERR_ADDR 0x000000003ffc0000
+#define FFLP_VLAN_PAR_ERR_DATA 0x000000000003ffff
+
+#define L2_CLS(IDX) (FZC_FFLP + 0x20000UL + (IDX) * 8UL)
+#define L2_CLS_VLD 0x0000000000010000
+#define L2_CLS_ETYPE 0x000000000000ffff
+#define L2_CLS_ETYPE_SHIFT 0
+
+#define L3_CLS(IDX) (FZC_FFLP + 0x20010UL + (IDX) * 8UL)
+#define L3_CLS_VALID 0x0000000002000000
+#define L3_CLS_IPVER 0x0000000001000000
+#define L3_CLS_PID 0x0000000000ff0000
+#define L3_CLS_PID_SHIFT 16
+#define L3_CLS_TOSMASK 0x000000000000ff00
+#define L3_CLS_TOSMASK_SHIFT 8
+#define L3_CLS_TOS 0x00000000000000ff
+#define L3_CLS_TOS_SHIFT 0
+
+#define TCAM_KEY(IDX) (FZC_FFLP + 0x20030UL + (IDX) * 8UL)
+#define TCAM_KEY_DISC 0x0000000000000008
+#define TCAM_KEY_TSEL 0x0000000000000004
+#define TCAM_KEY_IPADDR 0x0000000000000001
+
+#define TCAM_KEY_0 (FZC_FFLP + 0x20090UL)
+#define TCAM_KEY_0_KEY 0x00000000000000ff /* bits 192-199 */
+
+#define TCAM_KEY_1 (FZC_FFLP + 0x20098UL)
+#define TCAM_KEY_1_KEY 0xffffffffffffffff /* bits 128-191 */
+
+#define TCAM_KEY_2 (FZC_FFLP + 0x200a0UL)
+#define TCAM_KEY_2_KEY 0xffffffffffffffff /* bits 64-127 */
+
+#define TCAM_KEY_3 (FZC_FFLP + 0x200a8UL)
+#define TCAM_KEY_3_KEY 0xffffffffffffffff /* bits 0-63 */
+
+#define TCAM_KEY_MASK_0 (FZC_FFLP + 0x200b0UL)
+#define TCAM_KEY_MASK_0_KEY_SEL 0x00000000000000ff /* bits 192-199 */
+
+#define TCAM_KEY_MASK_1 (FZC_FFLP + 0x200b8UL)
+#define TCAM_KEY_MASK_1_KEY_SEL 0xffffffffffffffff /* bits 128-191 */
+
+#define TCAM_KEY_MASK_2 (FZC_FFLP + 0x200c0UL)
+#define TCAM_KEY_MASK_2_KEY_SEL 0xffffffffffffffff /* bits 64-127 */
+
+#define TCAM_KEY_MASK_3 (FZC_FFLP + 0x200c8UL)
+#define TCAM_KEY_MASK_3_KEY_SEL 0xffffffffffffffff /* bits 0-63 */
+
+#define TCAM_CTL (FZC_FFLP + 0x200d0UL)
+#define TCAM_CTL_RWC 0x00000000001c0000
+#define TCAM_CTL_RWC_TCAM_WRITE 0x0000000000000000
+#define TCAM_CTL_RWC_TCAM_READ 0x0000000000040000
+#define TCAM_CTL_RWC_TCAM_COMPARE 0x0000000000080000
+#define TCAM_CTL_RWC_RAM_WRITE 0x0000000000100000
+#define TCAM_CTL_RWC_RAM_READ 0x0000000000140000
+#define TCAM_CTL_STAT 0x0000000000020000
+#define TCAM_CTL_MATCH 0x0000000000010000
+#define TCAM_CTL_LOC 0x00000000000003ff
+
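+/* A minimal sketch of writing a TCAM entry with the registers above
+ * (nw64() assumed; completion is signalled by TCAM_CTL_STAT):
+ *
+ * nw64(TCAM_KEY_0, key[0]); nw64(TCAM_KEY_MASK_0, mask[0]);
+ * nw64(TCAM_KEY_1, key[1]); nw64(TCAM_KEY_MASK_1, mask[1]);
+ * nw64(TCAM_KEY_2, key[2]); nw64(TCAM_KEY_MASK_2, mask[2]);
+ * nw64(TCAM_KEY_3, key[3]); nw64(TCAM_KEY_MASK_3, mask[3]);
+ * nw64(TCAM_CTL, TCAM_CTL_RWC_TCAM_WRITE | (index & TCAM_CTL_LOC));
+ * ... then poll TCAM_CTL until TCAM_CTL_STAT is set ...
+ */
+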
+#define TCAM_ERR (FZC_FFLP + 0x200d8UL)
+#define TCAM_ERR_ERR 0x0000000080000000
+#define TCAM_ERR_P_ECC 0x0000000040000000
+#define TCAM_ERR_MULT 0x0000000020000000
+#define TCAM_ERR_ADDR 0x0000000000ff0000
+#define TCAM_ERR_SYNDROME 0x000000000000ffff
+
+#define HASH_LOOKUP_ERR_LOG1 (FZC_FFLP + 0x200e0UL)
+#define HASH_LOOKUP_ERR_LOG1_ERR 0x0000000000000008
+#define HASH_LOOKUP_ERR_LOG1_MULT_LK 0x0000000000000004
+#define HASH_LOOKUP_ERR_LOG1_CU 0x0000000000000002
+#define HASH_LOOKUP_ERR_LOG1_MULT_BIT 0x0000000000000001
+
+#define HASH_LOOKUP_ERR_LOG2 (FZC_FFLP + 0x200e8UL)
+#define HASH_LOOKUP_ERR_LOG2_H1 0x000000007ffff800
+#define HASH_LOOKUP_ERR_LOG2_SUBAREA 0x0000000000000700
+#define HASH_LOOKUP_ERR_LOG2_SYNDROME 0x00000000000000ff
+
+#define FFLP_CFG_1 (FZC_FFLP + 0x20100UL)
+#define FFLP_CFG_1_TCAM_DIS 0x0000000004000000
+#define FFLP_CFG_1_PIO_DBG_SEL 0x0000000003800000
+#define FFLP_CFG_1_PIO_FIO_RST 0x0000000000400000
+#define FFLP_CFG_1_PIO_FIO_LAT 0x0000000000300000
+#define FFLP_CFG_1_CAMLAT 0x00000000000f0000
+#define FFLP_CFG_1_CAMLAT_SHIFT 16
+#define FFLP_CFG_1_CAMRATIO 0x000000000000f000
+#define FFLP_CFG_1_CAMRATIO_SHIFT 12
+#define FFLP_CFG_1_FCRAMRATIO 0x0000000000000f00
+#define FFLP_CFG_1_FCRAMRATIO_SHIFT 8
+#define FFLP_CFG_1_FCRAMOUTDR_MASK 0x00000000000000f0
+#define FFLP_CFG_1_FCRAMOUTDR_NORMAL 0x0000000000000000
+#define FFLP_CFG_1_FCRAMOUTDR_STRONG 0x0000000000000050
+#define FFLP_CFG_1_FCRAMOUTDR_WEAK 0x00000000000000a0
+#define FFLP_CFG_1_FCRAMQS 0x0000000000000008
+#define FFLP_CFG_1_ERRORDIS 0x0000000000000004
+#define FFLP_CFG_1_FFLPINITDONE 0x0000000000000002
+#define FFLP_CFG_1_LLCSNAP 0x0000000000000001
+
+#define DEFAULT_FCRAMRATIO 10
+
+#define DEFAULT_TCAM_LATENCY 4
+#define DEFAULT_TCAM_ACCESS_RATIO 10
+
+#define TCP_CFLAG_MSK (FZC_FFLP + 0x20108UL)
+#define TCP_CFLAG_MSK_MASK 0x0000000000000fff
+
+#define FCRAM_REF_TMR (FZC_FFLP + 0x20110UL)
+#define FCRAM_REF_TMR_MAX 0x00000000ffff0000
+#define FCRAM_REF_TMR_MAX_SHIFT 16
+#define FCRAM_REF_TMR_MIN 0x000000000000ffff
+#define FCRAM_REF_TMR_MIN_SHIFT 0
+
+#define DEFAULT_FCRAM_REFRESH_MAX 512
+#define DEFAULT_FCRAM_REFRESH_MIN 512
+
+#define FCRAM_FIO_ADDR (FZC_FFLP + 0x20118UL)
+#define FCRAM_FIO_ADDR_ADDR 0x00000000000000ff
+
+#define FCRAM_FIO_DAT (FZC_FFLP + 0x20120UL)
+#define FCRAM_FIO_DAT_DATA 0x000000000000ffff
+
+#define FCRAM_ERR_TST0 (FZC_FFLP + 0x20128UL)
+#define FCRAM_ERR_TST0_SYND 0x00000000000000ff
+
+#define FCRAM_ERR_TST1 (FZC_FFLP + 0x20130UL)
+#define FCRAM_ERR_TST1_DAT 0x00000000ffffffff
+
+#define FCRAM_ERR_TST2 (FZC_FFLP + 0x20138UL)
+#define FCRAM_ERR_TST2_DAT 0x00000000ffffffff
+
+#define FFLP_ERR_MASK (FZC_FFLP + 0x20140UL)
+#define FFLP_ERR_MASK_HSH_TBL_DAT 0x00000000000007f8
+#define FFLP_ERR_MASK_HSH_TBL_LKUP 0x0000000000000004
+#define FFLP_ERR_MASK_TCAM 0x0000000000000002
+#define FFLP_ERR_MASK_VLAN 0x0000000000000001
+
+#define FFLP_DBG_TRAIN_VCT (FZC_FFLP + 0x20148UL)
+#define FFLP_DBG_TRAIN_VCT_VECTOR 0x00000000ffffffff
+
+#define FCRAM_PHY_RD_LAT (FZC_FFLP + 0x20150UL)
+#define FCRAM_PHY_RD_LAT_LAT 0x00000000000000ff
+
+/* Ethernet TCAM format */
+#define TCAM_ETHKEY0_RESV1 0xffffffffffffff00
+#define TCAM_ETHKEY0_CLASS_CODE 0x00000000000000f8
+#define TCAM_ETHKEY0_CLASS_CODE_SHIFT 3
+#define TCAM_ETHKEY0_RESV2 0x0000000000000007
+#define TCAM_ETHKEY1_FRAME_BYTE0_7(NUM) (0xff << ((7 - NUM) * 8))
+#define TCAM_ETHKEY2_FRAME_BYTE8 0xff00000000000000
+#define TCAM_ETHKEY2_FRAME_BYTE8_SHIFT 56
+#define TCAM_ETHKEY2_FRAME_BYTE9 0x00ff000000000000
+#define TCAM_ETHKEY2_FRAME_BYTE9_SHIFT 48
+#define TCAM_ETHKEY2_FRAME_BYTE10 0x0000ff0000000000
+#define TCAM_ETHKEY2_FRAME_BYTE10_SHIFT 40
+#define TCAM_ETHKEY2_FRAME_RESV 0x000000ffffffffff
+#define TCAM_ETHKEY3_FRAME_RESV 0xffffffffffffffff
+
+/* IPV4 TCAM format */
+#define TCAM_V4KEY0_RESV1 0xffffffffffffff00
+#define TCAM_V4KEY0_CLASS_CODE 0x00000000000000f8
+#define TCAM_V4KEY0_CLASS_CODE_SHIFT 3
+#define TCAM_V4KEY0_RESV2 0x0000000000000007
+#define TCAM_V4KEY1_L2RDCNUM 0xf800000000000000
+#define TCAM_V4KEY1_L2RDCNUM_SHIFT 59
+#define TCAM_V4KEY1_NOPORT 0x0400000000000000
+#define TCAM_V4KEY1_RESV 0x03ffffffffffffff
+#define TCAM_V4KEY2_RESV 0xffff000000000000
+#define TCAM_V4KEY2_TOS 0x0000ff0000000000
+#define TCAM_V4KEY2_TOS_SHIFT 40
+#define TCAM_V4KEY2_PROTO 0x000000ff00000000
+#define TCAM_V4KEY2_PROTO_SHIFT 32
+#define TCAM_V4KEY2_PORT_SPI 0x00000000ffffffff
+#define TCAM_V4KEY2_PORT_SPI_SHIFT 0
+#define TCAM_V4KEY3_SADDR 0xffffffff00000000
+#define TCAM_V4KEY3_SADDR_SHIFT 32
+#define TCAM_V4KEY3_DADDR 0x00000000ffffffff
+#define TCAM_V4KEY3_DADDR_SHIFT 0
+
+/* IPV6 TCAM format */
+#define TCAM_V6KEY0_RESV1 0xffffffffffffff00
+#define TCAM_V6KEY0_CLASS_CODE 0x00000000000000f8
+#define TCAM_V6KEY0_CLASS_CODE_SHIFT 3
+#define TCAM_V6KEY0_RESV2 0x0000000000000007
+#define TCAM_V6KEY1_L2RDCNUM 0xf800000000000000
+#define TCAM_V6KEY1_L2RDCNUM_SHIFT 59
+#define TCAM_V6KEY1_NOPORT 0x0400000000000000
+#define TCAM_V6KEY1_RESV 0x03ff000000000000
+#define TCAM_V6KEY1_TOS 0x0000ff0000000000
+#define TCAM_V6KEY1_TOS_SHIFT 40
+#define TCAM_V6KEY1_NEXT_HDR 0x000000ff00000000
+#define TCAM_V6KEY1_NEXT_HDR_SHIFT 32
+#define TCAM_V6KEY1_PORT_SPI 0x00000000ffffffff
+#define TCAM_V6KEY1_PORT_SPI_SHIFT 0
+#define TCAM_V6KEY2_ADDR_HIGH 0xffffffffffffffff
+#define TCAM_V6KEY3_ADDR_LOW 0xffffffffffffffff
+
+#define TCAM_ASSOCDATA_SYNDROME 0x000003fffc000000
+#define TCAM_ASSOCDATA_SYNDROME_SHIFT 26
+#define TCAM_ASSOCDATA_ZFID 0x0000000003ffc000
+#define TCAM_ASSOCDATA_ZFID_SHIFT 14
+#define TCAM_ASSOCDATA_V4_ECC_OK 0x0000000000002000
+#define TCAM_ASSOCDATA_DISC 0x0000000000001000
+#define TCAM_ASSOCDATA_TRES_MASK 0x0000000000000c00
+#define TCAM_ASSOCDATA_TRES_USE_L2RDC 0x0000000000000000
+#define TCAM_ASSOCDATA_TRES_USE_OFFSET 0x0000000000000400
+#define TCAM_ASSOCDATA_TRES_OVR_RDC 0x0000000000000800
+#define TCAM_ASSOCDATA_TRES_OVR_RDC_OFF 0x0000000000000c00
+#define TCAM_ASSOCDATA_RDCTBL 0x0000000000000380
+#define TCAM_ASSOCDATA_RDCTBL_SHIFT 7
+#define TCAM_ASSOCDATA_OFFSET 0x000000000000007c
+#define TCAM_ASSOCDATA_OFFSET_SHIFT 2
+#define TCAM_ASSOCDATA_ZFVLD 0x0000000000000002
+#define TCAM_ASSOCDATA_AGE 0x0000000000000001
+
+#define FLOW_KEY(IDX) (FZC_FFLP + 0x40000UL + (IDX) * 8UL)
+#define FLOW_KEY_PORT 0x0000000000000200
+#define FLOW_KEY_L2DA 0x0000000000000100
+#define FLOW_KEY_VLAN 0x0000000000000080
+#define FLOW_KEY_IPSA 0x0000000000000040
+#define FLOW_KEY_IPDA 0x0000000000000020
+#define FLOW_KEY_PROTO 0x0000000000000010
+#define FLOW_KEY_L4_0 0x000000000000000c
+#define FLOW_KEY_L4_0_SHIFT 2
+#define FLOW_KEY_L4_1 0x0000000000000003
+#define FLOW_KEY_L4_1_SHIFT 0
+
+#define FLOW_KEY_L4_NONE 0x0
+#define FLOW_KEY_L4_RESV 0x1
+#define FLOW_KEY_L4_BYTE12 0x2
+#define FLOW_KEY_L4_BYTE56 0x3
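+
+/* Illustrative sketch only: a TCP-style flow key hashing on both IP
+ * addresses, the protocol field, and the L4_BYTE12 selection (presumably
+ * the first two bytes of the L4 header) for both L4 fields.
+ */
+#define EXAMPLE_FLOW_KEY_TCP (FLOW_KEY_IPSA | FLOW_KEY_IPDA | \
+ FLOW_KEY_PROTO | \
+ (FLOW_KEY_L4_BYTE12 << FLOW_KEY_L4_0_SHIFT) | \
+ (FLOW_KEY_L4_BYTE12 << FLOW_KEY_L4_1_SHIFT))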
+
+#define H1POLY (FZC_FFLP + 0x40060UL)
+#define H1POLY_INITVAL 0x00000000ffffffff
+
+#define H2POLY (FZC_FFLP + 0x40068UL)
+#define H2POLY_INITVAL 0x000000000000ffff
+
+#define FLW_PRT_SEL(IDX) (FZC_FFLP + 0x40070UL + (IDX) * 8UL)
+#define FLW_PRT_SEL_EXT 0x0000000000010000
+#define FLW_PRT_SEL_MASK 0x0000000000001f00
+#define FLW_PRT_SEL_MASK_SHIFT 8
+#define FLW_PRT_SEL_BASE 0x000000000000001f
+#define FLW_PRT_SEL_BASE_SHIFT 0
+
+#define HASH_TBL_ADDR(IDX) (FFLP + 0x00000UL + (IDX) * 8192UL)
+#define HASH_TBL_ADDR_AUTOINC 0x0000000000800000
+#define HASH_TBL_ADDR_ADDR 0x00000000007fffff
+
+#define HASH_TBL_DATA(IDX) (FFLP + 0x00008UL + (IDX) * 8192UL)
+#define HASH_TBL_DATA_DATA 0xffffffffffffffff
+
+/* FCRAM hash table entries are up to 8 64-bit words in size.
+ * The layout of each entry is determined by the settings in the
+ * first word, which is the header.
+ *
+ * The indexing is controllable per partition (there is one partition
+ * per RDC group, thus a total of eight) using the BASE and MASK fields
+ * of FLW_PRT_SEL above.
+ */
+#define FCRAM_SIZE 0x800000
+#define FCRAM_NUM_PARTITIONS 8
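+
+/* Illustrative sketch only: composing a FLW_PRT_SEL value that maps a
+ * window of the flow hash space onto one partition; the caller writes
+ * the result to FLW_PRT_SEL(partition) with the driver's register
+ * accessor.
+ */
+static inline u64 niu_example_flw_prt_sel(u64 base, u64 mask, int enable)
+{
+ u64 val;
+
+ val = ((mask << FLW_PRT_SEL_MASK_SHIFT) & FLW_PRT_SEL_MASK) |
+ ((base << FLW_PRT_SEL_BASE_SHIFT) & FLW_PRT_SEL_BASE);
+ if (enable)
+ val |= FLW_PRT_SEL_EXT;
+ return val;
+}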
+
+/* Generic HASH entry header, used for all non-optimized formats. */
+#define HASH_HEADER_FMT 0x8000000000000000
+#define HASH_HEADER_EXT 0x4000000000000000
+#define HASH_HEADER_VALID 0x2000000000000000
+#define HASH_HEADER_RESVD 0x1000000000000000
+#define HASH_HEADER_L2_DADDR 0x0ffffffffffff000
+#define HASH_HEADER_L2_DADDR_SHIFT 12
+#define HASH_HEADER_VLAN 0x0000000000000fff
+#define HASH_HEADER_VLAN_SHIFT 0
+
+/* Optimized format, just a header with a special layout defined below.
+ * Set FMT and EXT both to zero to indicate this layout is being used.
+ */
+#define HASH_OPT_HEADER_FMT 0x8000000000000000
+#define HASH_OPT_HEADER_EXT 0x4000000000000000
+#define HASH_OPT_HEADER_VALID 0x2000000000000000
+#define HASH_OPT_HEADER_RDCOFF 0x1f00000000000000
+#define HASH_OPT_HEADER_RDCOFF_SHIFT 56
+#define HASH_OPT_HEADER_HASH2 0x00ffff0000000000
+#define HASH_OPT_HEADER_HASH2_SHIFT 40
+#define HASH_OPT_HEADER_RESVD 0x000000ff00000000
+#define HASH_OPT_HEADER_USERINFO 0x00000000ffffffff
+#define HASH_OPT_HEADER_USERINFO_SHIFT 0
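+
+/* Illustrative sketch only: composing an optimized-format header word.
+ * FMT and EXT are left zero, as required for this layout.
+ */
+static inline u64 niu_example_opt_header(u64 rdcoff, u64 hash2, u64 userinfo)
+{
+ return (HASH_OPT_HEADER_VALID |
+ ((rdcoff << HASH_OPT_HEADER_RDCOFF_SHIFT) & HASH_OPT_HEADER_RDCOFF) |
+ ((hash2 << HASH_OPT_HEADER_HASH2_SHIFT) & HASH_OPT_HEADER_HASH2) |
+ (userinfo & HASH_OPT_HEADER_USERINFO));
+}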
+
+/* Port and protocol word used for ipv4 and ipv6 layouts. */
+#define HASH_PORT_DPORT 0xffff000000000000
+#define HASH_PORT_DPORT_SHIFT 48
+#define HASH_PORT_SPORT 0x0000ffff00000000
+#define HASH_PORT_SPORT_SHIFT 32
+#define HASH_PORT_PROTO 0x00000000ff000000
+#define HASH_PORT_PROTO_SHIFT 24
+#define HASH_PORT_PORT_OFF 0x0000000000c00000
+#define HASH_PORT_PORT_OFF_SHIFT 22
+#define HASH_PORT_PORT_RESV 0x00000000003fffff
+
+/* Action word used for ipv4 and ipv6 layouts. */
+#define HASH_ACTION_RESV1 0xe000000000000000
+#define HASH_ACTION_RDCOFF 0x1f00000000000000
+#define HASH_ACTION_RDCOFF_SHIFT 56
+#define HASH_ACTION_ZFVALID 0x0080000000000000
+#define HASH_ACTION_RESV2 0x0070000000000000
+#define HASH_ACTION_ZFID 0x000fff0000000000
+#define HASH_ACTION_ZFID_SHIFT 40
+#define HASH_ACTION_RESV3 0x000000ff00000000
+#define HASH_ACTION_USERINFO 0x00000000ffffffff
+#define HASH_ACTION_USERINFO_SHIFT 0
+
+/* IPV4 address word. Addresses are in network endian. */
+#define HASH_IP4ADDR_SADDR 0xffffffff00000000
+#define HASH_IP4ADDR_SADDR_SHIFT 32
+#define HASH_IP4ADDR_DADDR 0x00000000ffffffff
+#define HASH_IP4ADDR_DADDR_SHIFT 0
+
+/* IPV6 address layout is 4 words, first two are saddr, next two
+ * are daddr. Addresses are in network endian.
+ */
+
+struct fcram_hash_opt {
+ u64 header;
+};
+
+/* EXT=1, FMT=0 */
+struct fcram_hash_ipv4 {
+ u64 header;
+ u64 addrs;
+ u64 ports;
+ u64 action;
+};
+
+/* EXT=1, FMT=1 */
+struct fcram_hash_ipv6 {
+ u64 header;
+ u64 addrs[4];
+ u64 ports;
+ u64 action;
+};
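+
+/* Illustrative sketch only: filling an IPV4 hash entry using the word
+ * layouts above.  Callers are assumed to pass fields already converted
+ * to the byte order the hardware expects.
+ */
+static inline void niu_example_fill_ipv4(struct fcram_hash_ipv4 *ent,
+ u32 saddr, u32 daddr, u16 sport, u16 dport, u8 proto)
+{
+ ent->header = HASH_HEADER_EXT | HASH_HEADER_VALID; /* FMT=0 */
+ ent->addrs = (((u64) saddr << HASH_IP4ADDR_SADDR_SHIFT) |
+ ((u64) daddr << HASH_IP4ADDR_DADDR_SHIFT));
+ ent->ports = (((u64) dport << HASH_PORT_DPORT_SHIFT) |
+ ((u64) sport << HASH_PORT_SPORT_SHIFT) |
+ ((u64) proto << HASH_PORT_PROTO_SHIFT));
+ ent->action = 0;
+}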
+
+#define HASH_TBL_DATA_LOG(IDX) (FFLP + 0x00010UL + (IDX) * 8192UL)
+#define HASH_TBL_DATA_LOG_ERR 0x0000000080000000
+#define HASH_TBL_DATA_LOG_ADDR 0x000000007fffff00
+#define HASH_TBL_DATA_LOG_SYNDROME 0x00000000000000ff
+
+#define RX_DMA_CK_DIV (FZC_DMC + 0x00000UL)
+#define RX_DMA_CK_DIV_CNT 0x000000000000ffff
+
+#define DEF_RDC(IDX) (FZC_DMC + 0x00008UL + (IDX) * 0x8UL)
+#define DEF_RDC_VAL 0x000000000000001f
+
+#define PT_DRR_WT(IDX) (FZC_DMC + 0x00028UL + (IDX) * 0x8UL)
+#define PT_DRR_WT_VAL 0x000000000000ffff
+
+#define PT_DRR_WEIGHT_DEFAULT_10G 0x0400
+#define PT_DRR_WEIGHT_DEFAULT_1G 0x0066
+
+#define PT_USE(IDX) (FZC_DMC + 0x00048UL + (IDX) * 0x8UL)
+#define PT_USE_CNT 0x00000000000fffff
+
+#define RED_RAN_INIT (FZC_DMC + 0x00068UL)
+#define RED_RAN_INIT_OPMODE 0x0000000000010000
+#define RED_RAN_INIT_VAL 0x000000000000ffff
+
+#define RX_ADDR_MD (FZC_DMC + 0x00070UL)
+#define RX_ADDR_MD_DBG_PT_MUX_SEL 0x000000000000000c
+#define RX_ADDR_MD_RAM_ACC 0x0000000000000002
+#define RX_ADDR_MD_MODE32 0x0000000000000001
+
+#define RDMC_PRE_PAR_ERR (FZC_DMC + 0x00078UL)
+#define RDMC_PRE_PAR_ERR_ERR 0x0000000000008000
+#define RDMC_PRE_PAR_ERR_MERR 0x0000000000004000
+#define RDMC_PRE_PAR_ERR_ADDR 0x00000000000000ff
+
+#define RDMC_SHA_PAR_ERR (FZC_DMC + 0x00080UL)
+#define RDMC_SHA_PAR_ERR_ERR 0x0000000000008000
+#define RDMC_SHA_PAR_ERR_MERR 0x0000000000004000
+#define RDMC_SHA_PAR_ERR_ADDR 0x00000000000000ff
+
+#define RDMC_MEM_ADDR (FZC_DMC + 0x00088UL)
+#define RDMC_MEM_ADDR_PRE_SHAD 0x0000000000000100
+#define RDMC_MEM_ADDR_ADDR 0x00000000000000ff
+
+#define RDMC_MEM_DAT0 (FZC_DMC + 0x00090UL)
+#define RDMC_MEM_DAT0_DATA 0x00000000ffffffff /* bits 31:0 */
+
+#define RDMC_MEM_DAT1 (FZC_DMC + 0x00098UL)
+#define RDMC_MEM_DAT1_DATA 0x00000000ffffffff /* bits 63:32 */
+
+#define RDMC_MEM_DAT2 (FZC_DMC + 0x000a0UL)
+#define RDMC_MEM_DAT2_DATA 0x00000000ffffffff /* bits 95:64 */
+
+#define RDMC_MEM_DAT3 (FZC_DMC + 0x000a8UL)
+#define RDMC_MEM_DAT3_DATA 0x00000000ffffffff /* bits 127:96 */
+
+#define RDMC_MEM_DAT4 (FZC_DMC + 0x000b0UL)
+#define RDMC_MEM_DAT4_DATA 0x00000000000fffff /* bits 147:128 */
+
+#define RX_CTL_DAT_FIFO_STAT (FZC_DMC + 0x000b8UL)
+#define RX_CTL_DAT_FIFO_STAT_ID_MISMATCH 0x0000000000000100
+#define RX_CTL_DAT_FIFO_STAT_ZCP_EOP_ERR 0x00000000000000f0
+#define RX_CTL_DAT_FIFO_STAT_IPP_EOP_ERR 0x000000000000000f
+
+#define RX_CTL_DAT_FIFO_MASK (FZC_DMC + 0x000c0UL)
+#define RX_CTL_DAT_FIFO_MASK_ID_MISMATCH 0x0000000000000100
+#define RX_CTL_DAT_FIFO_MASK_ZCP_EOP_ERR 0x00000000000000f0
+#define RX_CTL_DAT_FIFO_MASK_IPP_EOP_ERR 0x000000000000000f
+
+#define RDMC_TRAINING_VECTOR (FZC_DMC + 0x000c8UL)
+#define RDMC_TRAINING_VECTOR_TRAINING_VECTOR 0x00000000ffffffff
+
+#define RX_CTL_DAT_FIFO_STAT_DBG (FZC_DMC + 0x000d0UL)
+#define RX_CTL_DAT_FIFO_STAT_DBG_ID_MISMATCH 0x0000000000000100
+#define RX_CTL_DAT_FIFO_STAT_DBG_ZCP_EOP_ERR 0x00000000000000f0
+#define RX_CTL_DAT_FIFO_STAT_DBG_IPP_EOP_ERR 0x000000000000000f
+
+#define RDC_TBL(TBL,SLOT) (FZC_ZCP + 0x10000UL + \
+ (TBL) * (8UL * 16UL) + \
+ (SLOT) * 8UL)
+#define RDC_TBL_RDC 0x000000000000000f
+
+#define RX_LOG_PAGE_VLD(IDX) (FZC_DMC + 0x20000UL + (IDX) * 0x40UL)
+#define RX_LOG_PAGE_VLD_FUNC 0x000000000000000c
+#define RX_LOG_PAGE_VLD_FUNC_SHIFT 2
+#define RX_LOG_PAGE_VLD_PAGE1 0x0000000000000002
+#define RX_LOG_PAGE_VLD_PAGE0 0x0000000000000001
+
+#define RX_LOG_MASK1(IDX) (FZC_DMC + 0x20008UL + (IDX) * 0x40UL)
+#define RX_LOG_MASK1_MASK 0x00000000ffffffff
+
+#define RX_LOG_VAL1(IDX) (FZC_DMC + 0x20010UL + (IDX) * 0x40UL)
+#define RX_LOG_VAL1_VALUE 0x00000000ffffffff
+
+#define RX_LOG_MASK2(IDX) (FZC_DMC + 0x20018UL + (IDX) * 0x40UL)
+#define RX_LOG_MASK2_MASK 0x00000000ffffffff
+
+#define RX_LOG_VAL2(IDX) (FZC_DMC + 0x20020UL + (IDX) * 0x40UL)
+#define RX_LOG_VAL2_VALUE 0x00000000ffffffff
+
+#define RX_LOG_PAGE_RELO1(IDX) (FZC_DMC + 0x20028UL + (IDX) * 0x40UL)
+#define RX_LOG_PAGE_RELO1_RELO 0x00000000ffffffff
+
+#define RX_LOG_PAGE_RELO2(IDX) (FZC_DMC + 0x20030UL + (IDX) * 0x40UL)
+#define RX_LOG_PAGE_RELO2_RELO 0x00000000ffffffff
+
+#define RX_LOG_PAGE_HDL(IDX) (FZC_DMC + 0x20038UL + (IDX) * 0x40UL)
+#define RX_LOG_PAGE_HDL_HANDLE 0x00000000000fffff
+
+#define TX_LOG_PAGE_VLD(IDX) (FZC_DMC + 0x40000UL + (IDX) * 0x200UL)
+#define TX_LOG_PAGE_VLD_FUNC 0x000000000000000c
+#define TX_LOG_PAGE_VLD_FUNC_SHIFT 2
+#define TX_LOG_PAGE_VLD_PAGE1 0x0000000000000002
+#define TX_LOG_PAGE_VLD_PAGE0 0x0000000000000001
+
+#define TX_LOG_MASK1(IDX) (FZC_DMC + 0x40008UL + (IDX) * 0x200UL)
+#define TX_LOG_MASK1_MASK 0x00000000ffffffff
+
+#define TX_LOG_VAL1(IDX) (FZC_DMC + 0x40010UL + (IDX) * 0x200UL)
+#define TX_LOG_VAL1_VALUE 0x00000000ffffffff
+
+#define TX_LOG_MASK2(IDX) (FZC_DMC + 0x40018UL + (IDX) * 0x200UL)
+#define TX_LOG_MASK2_MASK 0x00000000ffffffff
+
+#define TX_LOG_VAL2(IDX) (FZC_DMC + 0x40020UL + (IDX) * 0x200UL)
+#define TX_LOG_VAL2_VALUE 0x00000000ffffffff
+
+#define TX_LOG_PAGE_RELO1(IDX) (FZC_DMC + 0x40028UL + (IDX) * 0x200UL)
+#define TX_LOG_PAGE_RELO1_RELO 0x00000000ffffffff
+
+#define TX_LOG_PAGE_RELO2(IDX) (FZC_DMC + 0x40030UL + (IDX) * 0x200UL)
+#define TX_LOG_PAGE_RELO2_RELO 0x00000000ffffffff
+
+#define TX_LOG_PAGE_HDL(IDX) (FZC_DMC + 0x40038UL + (IDX) * 0x200UL)
+#define TX_LOG_PAGE_HDL_HANDLE 0x00000000000fffff
+
+#define TX_ADDR_MD (FZC_DMC + 0x45000UL)
+#define TX_ADDR_MD_MODE32 0x0000000000000001
+
+#define RDC_RED_PARA(IDX) (FZC_DMC + 0x30000UL + (IDX) * 0x40UL)
+#define RDC_RED_PARA_THRE_SYN 0x00000000fff00000
+#define RDC_RED_PARA_THRE_SYN_SHIFT 20
+#define RDC_RED_PARA_WIN_SYN 0x00000000000f0000
+#define RDC_RED_PARA_WIN_SYN_SHIFT 16
+#define RDC_RED_PARA_THRE 0x000000000000fff0
+#define RDC_RED_PARA_THRE_SHIFT 4
+#define RDC_RED_PARA_WIN 0x000000000000000f
+#define RDC_RED_PARA_WIN_SHIFT 0
+
+#define RED_DIS_CNT(IDX) (FZC_DMC + 0x30008UL + (IDX) * 0x40UL)
+#define RED_DIS_CNT_OFLOW 0x0000000000010000
+#define RED_DIS_CNT_COUNT 0x000000000000ffff
+
+#define IPP_CFIG (FZC_IPP + 0x00000UL)
+#define IPP_CFIG_SOFT_RST 0x0000000080000000
+#define IPP_CFIG_IP_MAX_PKT 0x0000000001ffff00
+#define IPP_CFIG_IP_MAX_PKT_SHIFT 8
+#define IPP_CFIG_FFLP_CS_PIO_W 0x0000000000000080
+#define IPP_CFIG_PFIFO_PIO_W 0x0000000000000040
+#define IPP_CFIG_DFIFO_PIO_W 0x0000000000000020
+#define IPP_CFIG_CKSUM_EN 0x0000000000000010
+#define IPP_CFIG_DROP_BAD_CRC 0x0000000000000008
+#define IPP_CFIG_DFIFO_ECC_EN 0x0000000000000004
+#define IPP_CFIG_DEBUG_BUS_OUT_EN 0x0000000000000002
+#define IPP_CFIG_IPP_ENABLE 0x0000000000000001
+
+#define IPP_PKT_DIS (FZC_IPP + 0x00020UL)
+#define IPP_PKT_DIS_COUNT 0x0000000000003fff
+
+#define IPP_BAD_CS_CNT (FZC_IPP + 0x00028UL)
+#define IPP_BAD_CS_CNT_COUNT 0x0000000000003fff
+
+#define IPP_ECC (FZC_IPP + 0x00030UL)
+#define IPP_ECC_COUNT 0x00000000000000ff
+
+#define IPP_INT_STAT (FZC_IPP + 0x00040UL)
+#define IPP_INT_STAT_SOP_MISS 0x0000000080000000
+#define IPP_INT_STAT_EOP_MISS 0x0000000040000000
+#define IPP_INT_STAT_DFIFO_UE 0x0000000030000000
+#define IPP_INT_STAT_DFIFO_CE 0x000000000c000000
+#define IPP_INT_STAT_DFIFO_ECC 0x0000000003000000
+#define IPP_INT_STAT_DFIFO_ECC_IDX 0x00000000007ff000
+#define IPP_INT_STAT_PFIFO_PERR 0x0000000000000800
+#define IPP_INT_STAT_ECC_ERR_MAX 0x0000000000000400
+#define IPP_INT_STAT_PFIFO_ERR_IDX 0x00000000000003f0
+#define IPP_INT_STAT_PFIFO_OVER 0x0000000000000008
+#define IPP_INT_STAT_PFIFO_UND 0x0000000000000004
+#define IPP_INT_STAT_BAD_CS_MX 0x0000000000000002
+#define IPP_INT_STAT_PKT_DIS_MX 0x0000000000000001
+#define IPP_INT_STAT_ALL 0x00000000ff7fffff
+
+#define IPP_MSK (FZC_IPP + 0x00048UL)
+#define IPP_MSK_ECC_ERR_MX 0x0000000000000080
+#define IPP_MSK_DFIFO_EOP_SOP 0x0000000000000040
+#define IPP_MSK_DFIFO_UC 0x0000000000000020
+#define IPP_MSK_PFIFO_PAR 0x0000000000000010
+#define IPP_MSK_PFIFO_OVER 0x0000000000000008
+#define IPP_MSK_PFIFO_UND 0x0000000000000004
+#define IPP_MSK_BAD_CS 0x0000000000000002
+#define IPP_MSK_PKT_DIS_CNT 0x0000000000000001
+#define IPP_MSK_ALL 0x00000000000000ff
+
+#define IPP_PFIFO_RD0 (FZC_IPP + 0x00060UL)
+#define IPP_PFIFO_RD0_DATA 0x00000000ffffffff /* bits 31:0 */
+
+#define IPP_PFIFO_RD1 (FZC_IPP + 0x00068UL)
+#define IPP_PFIFO_RD1_DATA 0x00000000ffffffff /* bits 63:32 */
+
+#define IPP_PFIFO_RD2 (FZC_IPP + 0x00070UL)
+#define IPP_PFIFO_RD2_DATA 0x00000000ffffffff /* bits 95:64 */
+
+#define IPP_PFIFO_RD3 (FZC_IPP + 0x00078UL)
+#define IPP_PFIFO_RD3_DATA 0x00000000ffffffff /* bits 127:96 */
+
+#define IPP_PFIFO_RD4 (FZC_IPP + 0x00080UL)
+#define IPP_PFIFO_RD4_DATA 0x00000000ffffffff /* bits 145:128 */
+
+#define IPP_PFIFO_WR0 (FZC_IPP + 0x00088UL)
+#define IPP_PFIFO_WR0_DATA 0x00000000ffffffff /* bits 31:0 */
+
+#define IPP_PFIFO_WR1 (FZC_IPP + 0x00090UL)
+#define IPP_PFIFO_WR1_DATA 0x00000000ffffffff /* bits 63:32 */
+
+#define IPP_PFIFO_WR2 (FZC_IPP + 0x00098UL)
+#define IPP_PFIFO_WR2_DATA 0x00000000ffffffff /* bits 95:64 */
+
+#define IPP_PFIFO_WR3 (FZC_IPP + 0x000a0UL)
+#define IPP_PFIFO_WR3_DATA 0x00000000ffffffff /* bits 127:96 */
+
+#define IPP_PFIFO_WR4 (FZC_IPP + 0x000a8UL)
+#define IPP_PFIFO_WR4_DATA 0x00000000ffffffff /* bits 145:128 */
+
+#define IPP_PFIFO_RD_PTR (FZC_IPP + 0x000b0UL)
+#define IPP_PFIFO_RD_PTR_PTR 0x000000000000003f
+
+#define IPP_PFIFO_WR_PTR (FZC_IPP + 0x000b8UL)
+#define IPP_PFIFO_WR_PTR_PTR 0x000000000000007f
+
+#define IPP_DFIFO_RD0 (FZC_IPP + 0x000c0UL)
+#define IPP_DFIFO_RD0_DATA 0x00000000ffffffff /* bits 31:0 */
+
+#define IPP_DFIFO_RD1 (FZC_IPP + 0x000c8UL)
+#define IPP_DFIFO_RD1_DATA 0x00000000ffffffff /* bits 63:32 */
+
+#define IPP_DFIFO_RD2 (FZC_IPP + 0x000d0UL)
+#define IPP_DFIFO_RD2_DATA 0x00000000ffffffff /* bits 95:64 */
+
+#define IPP_DFIFO_RD3 (FZC_IPP + 0x000d8UL)
+#define IPP_DFIFO_RD3_DATA 0x00000000ffffffff /* bits 127:96 */
+
+#define IPP_DFIFO_RD4 (FZC_IPP + 0x000e0UL)
+#define IPP_DFIFO_RD4_DATA 0x00000000ffffffff /* bits 145:128 */
+
+#define IPP_DFIFO_WR0 (FZC_IPP + 0x000e8UL)
+#define IPP_DFIFO_WR0_DATA 0x00000000ffffffff /* bits 31:0 */
+
+#define IPP_DFIFO_WR1 (FZC_IPP + 0x000f0UL)
+#define IPP_DFIFO_WR1_DATA 0x00000000ffffffff /* bits 63:32 */
+
+#define IPP_DFIFO_WR2 (FZC_IPP + 0x000f8UL)
+#define IPP_DFIFO_WR2_DATA 0x00000000ffffffff /* bits 95:64 */
+
+#define IPP_DFIFO_WR3 (FZC_IPP + 0x00100UL)
+#define IPP_DFIFO_WR3_DATA 0x00000000ffffffff /* bits 127:96 */
+
+#define IPP_DFIFO_WR4 (FZC_IPP + 0x00108UL)
+#define IPP_DFIFO_WR4_DATA 0x00000000ffffffff /* bits 145:128 */
+
+#define IPP_DFIFO_RD_PTR (FZC_IPP + 0x00110UL)
+#define IPP_DFIFO_RD_PTR_PTR 0x0000000000000fff
+
+#define IPP_DFIFO_WR_PTR (FZC_IPP + 0x00118UL)
+#define IPP_DFIFO_WR_PTR_PTR 0x0000000000000fff
+
+#define IPP_SM (FZC_IPP + 0x00120UL)
+#define IPP_SM_SM 0x00000000ffffffff
+
+#define IPP_CS_STAT (FZC_IPP + 0x00128UL)
+#define IPP_CS_STAT_BCYC_CNT 0x00000000ff000000
+#define IPP_CS_STAT_IP_LEN 0x0000000000fff000
+#define IPP_CS_STAT_CS_FAIL 0x0000000000000800
+#define IPP_CS_STAT_TERM 0x0000000000000400
+#define IPP_CS_STAT_BAD_NUM 0x0000000000000200
+#define IPP_CS_STAT_CS_STATE 0x00000000000001ff
+
+#define IPP_FFLP_CS_INFO (FZC_IPP + 0x00130UL)
+#define IPP_FFLP_CS_INFO_PKT_ID 0x0000000000003c00
+#define IPP_FFLP_CS_INFO_L4_PROTO 0x0000000000000300
+#define IPP_FFLP_CS_INFO_V4_HD_LEN 0x00000000000000f0
+#define IPP_FFLP_CS_INFO_L3_VER 0x000000000000000c
+#define IPP_FFLP_CS_INFO_L2_OP 0x0000000000000003
+
+#define IPP_DBG_SEL (FZC_IPP + 0x00138UL)
+#define IPP_DBG_SEL_SEL 0x000000000000000f
+
+#define IPP_DFIFO_ECC_SYND (FZC_IPP + 0x00140UL)
+#define IPP_DFIFO_ECC_SYND_SYND 0x000000000000ffff
+
+#define IPP_DFIFO_EOP_RD_PTR (FZC_IPP + 0x00148UL)
+#define IPP_DFIFO_EOP_RD_PTR_PTR 0x0000000000000fff
+
+#define IPP_ECC_CTL (FZC_IPP + 0x00150UL)
+#define IPP_ECC_CTL_DIS_DBL 0x0000000080000000
+#define IPP_ECC_CTL_COR_DBL 0x0000000000020000
+#define IPP_ECC_CTL_COR_SNG 0x0000000000010000
+#define IPP_ECC_CTL_COR_ALL 0x0000000000000400
+#define IPP_ECC_CTL_COR_1 0x0000000000000100
+#define IPP_ECC_CTL_COR_LST 0x0000000000000004
+#define IPP_ECC_CTL_COR_SND 0x0000000000000002
+#define IPP_ECC_CTL_COR_FSR 0x0000000000000001
+
+#define NIU_DFIFO_ENTRIES 1024
+#define ATLAS_P0_P1_DFIFO_ENTRIES 2048
+#define ATLAS_P2_P3_DFIFO_ENTRIES 1024
+
+#define ZCP_CFIG (FZC_ZCP + 0x00000UL)
+#define ZCP_CFIG_ZCP_32BIT_MODE 0x0000000001000000
+#define ZCP_CFIG_ZCP_DEBUG_SEL 0x0000000000ff0000
+#define ZCP_CFIG_DMA_TH 0x000000000000ffe0
+#define ZCP_CFIG_ECC_CHK_DIS 0x0000000000000010
+#define ZCP_CFIG_PAR_CHK_DIS 0x0000000000000008
+#define ZCP_CFIG_DIS_BUFF_RSP_IF 0x0000000000000004
+#define ZCP_CFIG_DIS_BUFF_REQ_IF 0x0000000000000002
+#define ZCP_CFIG_ZC_ENABLE 0x0000000000000001
+
+#define ZCP_INT_STAT (FZC_ZCP + 0x00008UL)
+#define ZCP_INT_STAT_RRFIFO_UNDERRUN 0x0000000000008000
+#define ZCP_INT_STAT_RRFIFO_OVERRUN 0x0000000000004000
+#define ZCP_INT_STAT_RSPFIFO_UNCOR_ERR 0x0000000000001000
+#define ZCP_INT_STAT_BUFFER_OVERFLOW 0x0000000000000800
+#define ZCP_INT_STAT_STAT_TBL_PERR 0x0000000000000400
+#define ZCP_INT_STAT_DYN_TBL_PERR 0x0000000000000200
+#define ZCP_INT_STAT_BUF_TBL_PERR 0x0000000000000100
+#define ZCP_INT_STAT_TT_PROGRAM_ERR 0x0000000000000080
+#define ZCP_INT_STAT_RSP_TT_INDEX_ERR 0x0000000000000040
+#define ZCP_INT_STAT_SLV_TT_INDEX_ERR 0x0000000000000020
+#define ZCP_INT_STAT_ZCP_TT_INDEX_ERR 0x0000000000000010
+#define ZCP_INT_STAT_CFIFO_ECC3 0x0000000000000008
+#define ZCP_INT_STAT_CFIFO_ECC2 0x0000000000000004
+#define ZCP_INT_STAT_CFIFO_ECC1 0x0000000000000002
+#define ZCP_INT_STAT_CFIFO_ECC0 0x0000000000000001
+#define ZCP_INT_STAT_ALL 0x000000000000ffff
+
+#define ZCP_INT_MASK (FZC_ZCP + 0x00010UL)
+#define ZCP_INT_MASK_RRFIFO_UNDERRUN 0x0000000000008000
+#define ZCP_INT_MASK_RRFIFO_OVERRUN 0x0000000000004000
+#define ZCP_INT_MASK_LOJ 0x0000000000002000
+#define ZCP_INT_MASK_RSPFIFO_UNCOR_ERR 0x0000000000001000
+#define ZCP_INT_MASK_BUFFER_OVERFLOW 0x0000000000000800
+#define ZCP_INT_MASK_STAT_TBL_PERR 0x0000000000000400
+#define ZCP_INT_MASK_DYN_TBL_PERR 0x0000000000000200
+#define ZCP_INT_MASK_BUF_TBL_PERR 0x0000000000000100
+#define ZCP_INT_MASK_TT_PROGRAM_ERR 0x0000000000000080
+#define ZCP_INT_MASK_RSP_TT_INDEX_ERR 0x0000000000000040
+#define ZCP_INT_MASK_SLV_TT_INDEX_ERR 0x0000000000000020
+#define ZCP_INT_MASK_ZCP_TT_INDEX_ERR 0x0000000000000010
+#define ZCP_INT_MASK_CFIFO_ECC3 0x0000000000000008
+#define ZCP_INT_MASK_CFIFO_ECC2 0x0000000000000004
+#define ZCP_INT_MASK_CFIFO_ECC1 0x0000000000000002
+#define ZCP_INT_MASK_CFIFO_ECC0 0x0000000000000001
+#define ZCP_INT_MASK_ALL 0x000000000000ffff
+
+#define BAM4BUF (FZC_ZCP + 0x00018UL)
+#define BAM4BUF_LOJ 0x0000000080000000
+#define BAM4BUF_EN_CK 0x0000000040000000
+#define BAM4BUF_IDX_END0 0x000000003ff00000
+#define BAM4BUF_IDX_ST0 0x00000000000ffc00
+#define BAM4BUF_OFFSET0 0x00000000000003ff
+
+#define BAM8BUF (FZC_ZCP + 0x00020UL)
+#define BAM8BUF_LOJ 0x0000000080000000
+#define BAM8BUF_EN_CK 0x0000000040000000
+#define BAM8BUF_IDX_END1 0x000000003ff00000
+#define BAM8BUF_IDX_ST1 0x00000000000ffc00
+#define BAM8BUF_OFFSET1 0x00000000000003ff
+
+#define BAM16BUF (FZC_ZCP + 0x00028UL)
+#define BAM16BUF_LOJ 0x0000000080000000
+#define BAM16BUF_EN_CK 0x0000000040000000
+#define BAM16BUF_IDX_END2 0x000000003ff00000
+#define BAM16BUF_IDX_ST2 0x00000000000ffc00
+#define BAM16BUF_OFFSET2 0x00000000000003ff
+
+#define BAM32BUF (FZC_ZCP + 0x00030UL)
+#define BAM32BUF_LOJ 0x0000000080000000
+#define BAM32BUF_EN_CK 0x0000000040000000
+#define BAM32BUF_IDX_END3 0x000000003ff00000
+#define BAM32BUF_IDX_ST3 0x00000000000ffc00
+#define BAM32BUF_OFFSET3 0x00000000000003ff
+
+#define DST4BUF (FZC_ZCP + 0x00038UL)
+#define DST4BUF_DS_OFFSET0 0x00000000000003ff
+
+#define DST8BUF (FZC_ZCP + 0x00040UL)
+#define DST8BUF_DS_OFFSET1 0x00000000000003ff
+
+#define DST16BUF (FZC_ZCP + 0x00048UL)
+#define DST16BUF_DS_OFFSET2 0x00000000000003ff
+
+#define DST32BUF (FZC_ZCP + 0x00050UL)
+#define DST32BUF_DS_OFFSET3 0x00000000000003ff
+
+#define ZCP_RAM_DATA0 (FZC_ZCP + 0x00058UL)
+#define ZCP_RAM_DATA0_DAT0 0x00000000ffffffff
+
+#define ZCP_RAM_DATA1 (FZC_ZCP + 0x00060UL)
+#define ZCP_RAM_DATA1_DAT1 0x00000000ffffffff
+
+#define ZCP_RAM_DATA2 (FZC_ZCP + 0x00068UL)
+#define ZCP_RAM_DATA2_DAT2 0x00000000ffffffff
+
+#define ZCP_RAM_DATA3 (FZC_ZCP + 0x00070UL)
+#define ZCP_RAM_DATA3_DAT3 0x00000000ffffffff
+
+#define ZCP_RAM_DATA4 (FZC_ZCP + 0x00078UL)
+#define ZCP_RAM_DATA4_DAT4 0x00000000000000ff
+
+#define ZCP_RAM_BE (FZC_ZCP + 0x00080UL)
+#define ZCP_RAM_BE_VAL 0x000000000001ffff
+
+#define ZCP_RAM_ACC (FZC_ZCP + 0x00088UL)
+#define ZCP_RAM_ACC_BUSY 0x0000000080000000
+#define ZCP_RAM_ACC_READ 0x0000000040000000
+#define ZCP_RAM_ACC_WRITE 0x0000000000000000
+#define ZCP_RAM_ACC_LOJ 0x0000000020000000
+#define ZCP_RAM_ACC_ZFCID 0x000000001ffe0000
+#define ZCP_RAM_ACC_ZFCID_SHIFT 17
+#define ZCP_RAM_ACC_RAM_SEL 0x000000000001f000
+#define ZCP_RAM_ACC_RAM_SEL_SHIFT 12
+#define ZCP_RAM_ACC_CFIFOADDR 0x0000000000000fff
+#define ZCP_RAM_ACC_CFIFOADDR_SHIFT 0
+
+#define ZCP_RAM_SEL_BAM(INDEX) (0x00 + (INDEX))
+#define ZCP_RAM_SEL_TT_STATIC 0x08
+#define ZCP_RAM_SEL_TT_DYNAMIC 0x09
+#define ZCP_RAM_SEL_CFIFO(PORT) (0x10 + (PORT))
+
+#define NIU_CFIFO_ENTRIES 1024
+#define ATLAS_P0_P1_CFIFO_ENTRIES 2048
+#define ATLAS_P2_P3_CFIFO_ENTRIES 1024
+
+#define CHK_BIT_DATA (FZC_ZCP + 0x00090UL)
+#define CHK_BIT_DATA_DATA 0x000000000000ffff
+
+#define RESET_CFIFO (FZC_ZCP + 0x00098UL)
+#define RESET_CFIFO_RST(PORT) (0x1 << (PORT))
+
+#define CFIFO_ECC(PORT) (FZC_ZCP + 0x000a0UL + (PORT) * 8UL)
+#define CFIFO_ECC_DIS_DBLBIT_ERR 0x0000000080000000
+#define CFIFO_ECC_DBLBIT_ERR 0x0000000000020000
+#define CFIFO_ECC_SINGLEBIT_ERR 0x0000000000010000
+#define CFIFO_ECC_ALL_PKT 0x0000000000000400
+#define CFIFO_ECC_LAST_LINE 0x0000000000000004
+#define CFIFO_ECC_2ND_LINE 0x0000000000000002
+#define CFIFO_ECC_1ST_LINE 0x0000000000000001
+
+#define ZCP_TRAINING_VECTOR (FZC_ZCP + 0x000c0UL)
+#define ZCP_TRAINING_VECTOR_VECTOR 0x00000000ffffffff
+
+#define ZCP_STATE_MACHINE (FZC_ZCP + 0x000c8UL)
+#define ZCP_STATE_MACHINE_SM 0x00000000ffffffff
+
+/* Same bits as ZCP_INT_STAT */
+#define ZCP_INT_STAT_TEST (FZC_ZCP + 0x00108UL)
+
+#define RXDMA_CFIG1(IDX) (DMC + 0x00000UL + (IDX) * 0x200UL)
+#define RXDMA_CFIG1_EN 0x0000000080000000
+#define RXDMA_CFIG1_RST 0x0000000040000000
+#define RXDMA_CFIG1_QST 0x0000000020000000
+#define RXDMA_CFIG1_MBADDR_H 0x0000000000000fff /* mboxaddr 43:32 */
+
+#define RXDMA_CFIG2(IDX) (DMC + 0x00008UL + (IDX) * 0x200UL)
+#define RXDMA_CFIG2_MBADDR_L 0x00000000ffffffc0 /* mboxaddr 31:6 */
+#define RXDMA_CFIG2_OFFSET 0x0000000000000006
+#define RXDMA_CFIG2_OFFSET_SHIFT 1
+#define RXDMA_CFIG2_FULL_HDR 0x0000000000000001
+
+#define RBR_CFIG_A(IDX) (DMC + 0x00010UL + (IDX) * 0x200UL)
+#define RBR_CFIG_A_LEN 0xffff000000000000
+#define RBR_CFIG_A_LEN_SHIFT 48
+#define RBR_CFIG_A_STADDR_BASE 0x00000ffffffc0000
+#define RBR_CFIG_A_STADDR 0x000000000003ffc0
+
+#define RBR_CFIG_B(IDX) (DMC + 0x00018UL + (IDX) * 0x200UL)
+#define RBR_CFIG_B_BLKSIZE 0x0000000003000000
+#define RBR_CFIG_B_BLKSIZE_SHIFT 24
+#define RBR_CFIG_B_VLD2 0x0000000000800000
+#define RBR_CFIG_B_BUFSZ2 0x0000000000030000
+#define RBR_CFIG_B_BUFSZ2_SHIFT 16
+#define RBR_CFIG_B_VLD1 0x0000000000008000
+#define RBR_CFIG_B_BUFSZ1 0x0000000000000300
+#define RBR_CFIG_B_BUFSZ1_SHIFT 8
+#define RBR_CFIG_B_VLD0 0x0000000000000080
+#define RBR_CFIG_B_BUFSZ0 0x0000000000000003
+#define RBR_CFIG_B_BUFSZ0_SHIFT 0
+
+#define RBR_BLKSIZE_4K 0x0
+#define RBR_BLKSIZE_8K 0x1
+#define RBR_BLKSIZE_16K 0x2
+#define RBR_BLKSIZE_32K 0x3
+#define RBR_BUFSZ2_2K 0x0
+#define RBR_BUFSZ2_4K 0x1
+#define RBR_BUFSZ2_8K 0x2
+#define RBR_BUFSZ2_16K 0x3
+#define RBR_BUFSZ1_1K 0x0
+#define RBR_BUFSZ1_2K 0x1
+#define RBR_BUFSZ1_4K 0x2
+#define RBR_BUFSZ1_8K 0x3
+#define RBR_BUFSZ0_256 0x0
+#define RBR_BUFSZ0_512 0x1
+#define RBR_BUFSZ0_1K 0x2
+#define RBR_BUFSZ0_2K 0x3
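+
+/* Illustrative sketch only: an RBR_CFIG_B value for 8K RBR blocks
+ * carved into valid 8K, 4K and 1K packet buffers.
+ */
+static inline u64 niu_example_rbr_cfig_b(void)
+{
+ return (((u64) RBR_BLKSIZE_8K << RBR_CFIG_B_BLKSIZE_SHIFT) |
+ RBR_CFIG_B_VLD2 |
+ ((u64) RBR_BUFSZ2_8K << RBR_CFIG_B_BUFSZ2_SHIFT) |
+ RBR_CFIG_B_VLD1 |
+ ((u64) RBR_BUFSZ1_4K << RBR_CFIG_B_BUFSZ1_SHIFT) |
+ RBR_CFIG_B_VLD0 |
+ ((u64) RBR_BUFSZ0_1K << RBR_CFIG_B_BUFSZ0_SHIFT));
+}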
+
+#define RBR_KICK(IDX) (DMC + 0x00020UL + (IDX) * 0x200UL)
+#define RBR_KICK_BKADD 0x000000000000ffff
+
+#define RBR_STAT(IDX) (DMC + 0x00028UL + (IDX) * 0x200UL)
+#define RBR_STAT_QLEN 0x000000000000ffff
+
+#define RBR_HDH(IDX) (DMC + 0x00030UL + (IDX) * 0x200UL)
+#define RBR_HDH_HEAD_H 0x0000000000000fff
+
+#define RBR_HDL(IDX) (DMC + 0x00038UL + (IDX) * 0x200UL)
+#define RBR_HDL_HEAD_L 0x00000000fffffffc
+
+#define RCRCFIG_A(IDX) (DMC + 0x00040UL + (IDX) * 0x200UL)
+#define RCRCFIG_A_LEN 0xffff000000000000
+#define RCRCFIG_A_LEN_SHIFT 48
+#define RCRCFIG_A_STADDR_BASE 0x00000ffffff80000
+#define RCRCFIG_A_STADDR 0x000000000007ffc0
+
+#define RCRCFIG_B(IDX) (DMC + 0x00048UL + (IDX) * 0x200UL)
+#define RCRCFIG_B_PTHRES 0x00000000ffff0000
+#define RCRCFIG_B_PTHRES_SHIFT 16
+#define RCRCFIG_B_ENTOUT 0x0000000000008000
+#define RCRCFIG_B_TIMEOUT 0x000000000000003f
+#define RCRCFIG_B_TIMEOUT_SHIFT 0
+
+#define RCRSTAT_A(IDX) (DMC + 0x00050UL + (IDX) * 0x200UL)
+#define RCRSTAT_A_QLEN 0x000000000000ffff
+
+#define RCRSTAT_B(IDX) (DMC + 0x00058UL + (IDX) * 0x200UL)
+#define RCRSTAT_B_TIPTR_H 0x0000000000000fff
+
+#define RCRSTAT_C(IDX) (DMC + 0x00060UL + (IDX) * 0x200UL)
+#define RCRSTAT_C_TIPTR_L 0x00000000fffffff8
+
+#define RX_DMA_CTL_STAT(IDX) (DMC + 0x00070UL + (IDX) * 0x200UL)
+#define RX_DMA_CTL_STAT_RBR_TMOUT 0x0020000000000000
+#define RX_DMA_CTL_STAT_RSP_CNT_ERR 0x0010000000000000
+#define RX_DMA_CTL_STAT_BYTE_EN_BUS 0x0008000000000000
+#define RX_DMA_CTL_STAT_RSP_DAT_ERR 0x0004000000000000
+#define RX_DMA_CTL_STAT_RCR_ACK_ERR 0x0002000000000000
+#define RX_DMA_CTL_STAT_DC_FIFO_ERR 0x0001000000000000
+#define RX_DMA_CTL_STAT_MEX 0x0000800000000000
+#define RX_DMA_CTL_STAT_RCRTHRES 0x0000400000000000
+#define RX_DMA_CTL_STAT_RCRTO 0x0000200000000000
+#define RX_DMA_CTL_STAT_RCR_SHA_PAR 0x0000100000000000
+#define RX_DMA_CTL_STAT_RBR_PRE_PAR 0x0000080000000000
+#define RX_DMA_CTL_STAT_PORT_DROP_PKT 0x0000040000000000
+#define RX_DMA_CTL_STAT_WRED_DROP 0x0000020000000000
+#define RX_DMA_CTL_STAT_RBR_PRE_EMTY 0x0000010000000000
+#define RX_DMA_CTL_STAT_RCRSHADOW_FULL 0x0000008000000000
+#define RX_DMA_CTL_STAT_CONFIG_ERR 0x0000004000000000
+#define RX_DMA_CTL_STAT_RCRINCON 0x0000002000000000
+#define RX_DMA_CTL_STAT_RCRFULL 0x0000001000000000
+#define RX_DMA_CTL_STAT_RBR_EMPTY 0x0000000800000000
+#define RX_DMA_CTL_STAT_RBRFULL 0x0000000400000000
+#define RX_DMA_CTL_STAT_RBRLOGPAGE 0x0000000200000000
+#define RX_DMA_CTL_STAT_CFIGLOGPAGE 0x0000000100000000
+#define RX_DMA_CTL_STAT_PTRREAD 0x00000000ffff0000
+#define RX_DMA_CTL_STAT_PTRREAD_SHIFT 16
+#define RX_DMA_CTL_STAT_PKTREAD 0x000000000000ffff
+#define RX_DMA_CTL_STAT_PKTREAD_SHIFT 0
+
+#define RX_DMA_CTL_STAT_CHAN_FATAL (RX_DMA_CTL_STAT_RBR_TMOUT | \
+ RX_DMA_CTL_STAT_RSP_CNT_ERR | \
+ RX_DMA_CTL_STAT_BYTE_EN_BUS | \
+ RX_DMA_CTL_STAT_RSP_DAT_ERR | \
+ RX_DMA_CTL_STAT_RCR_ACK_ERR | \
+ RX_DMA_CTL_STAT_RCR_SHA_PAR | \
+ RX_DMA_CTL_STAT_RBR_PRE_PAR | \
+ RX_DMA_CTL_STAT_CONFIG_ERR | \
+ RX_DMA_CTL_STAT_RCRINCON | \
+ RX_DMA_CTL_STAT_RCRFULL | \
+ RX_DMA_CTL_STAT_RBRFULL | \
+ RX_DMA_CTL_STAT_RBRLOGPAGE | \
+ RX_DMA_CTL_STAT_CFIGLOGPAGE)
+
+#define RX_DMA_CTL_STAT_PORT_FATAL (RX_DMA_CTL_STAT_DC_FIFO_ERR)
+
+#define RX_DMA_CTL_WRITE_CLEAR_ERRS (RX_DMA_CTL_STAT_RBR_EMPTY | \
+ RX_DMA_CTL_STAT_RCRSHADOW_FULL | \
+ RX_DMA_CTL_STAT_RBR_PRE_EMTY | \
+ RX_DMA_CTL_STAT_WRED_DROP | \
+ RX_DMA_CTL_STAT_PORT_DROP_PKT | \
+ RX_DMA_CTL_STAT_RCRTO | \
+ RX_DMA_CTL_STAT_RCRTHRES | \
+ RX_DMA_CTL_STAT_DC_FIFO_ERR)
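+
+/* Illustrative sketch only: classifying a RX_DMA_CTL_STAT value the way
+ * the composed masks above suggest -- fatal bits require a channel (or
+ * port) reset, the rest are write-1-clear and can simply be acked by
+ * writing *ack back to the register.
+ */
+static inline int niu_example_rx_stat_fatal(u64 stat, u64 *ack)
+{
+ *ack = stat & RX_DMA_CTL_WRITE_CLEAR_ERRS;
+ return (stat & (RX_DMA_CTL_STAT_CHAN_FATAL |
+ RX_DMA_CTL_STAT_PORT_FATAL)) != 0;
+}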
+
+#define RCR_FLSH(IDX) (DMC + 0x00078UL + (IDX) * 0x200UL)
+#define RCR_FLSH_FLSH 0x0000000000000001
+
+#define RXMISC(IDX) (DMC + 0x00090UL + (IDX) * 0x200UL)
+#define RXMISC_OFLOW 0x0000000000010000
+#define RXMISC_COUNT 0x000000000000ffff
+
+#define RX_DMA_CTL_STAT_DBG(IDX) (DMC + 0x00098UL + (IDX) * 0x200UL)
+#define RX_DMA_CTL_STAT_DBG_RBR_TMOUT 0x0020000000000000
+#define RX_DMA_CTL_STAT_DBG_RSP_CNT_ERR 0x0010000000000000
+#define RX_DMA_CTL_STAT_DBG_BYTE_EN_BUS 0x0008000000000000
+#define RX_DMA_CTL_STAT_DBG_RSP_DAT_ERR 0x0004000000000000
+#define RX_DMA_CTL_STAT_DBG_RCR_ACK_ERR 0x0002000000000000
+#define RX_DMA_CTL_STAT_DBG_DC_FIFO_ERR 0x0001000000000000
+#define RX_DMA_CTL_STAT_DBG_MEX 0x0000800000000000
+#define RX_DMA_CTL_STAT_DBG_RCRTHRES 0x0000400000000000
+#define RX_DMA_CTL_STAT_DBG_RCRTO 0x0000200000000000
+#define RX_DMA_CTL_STAT_DBG_RCR_SHA_PAR 0x0000100000000000
+#define RX_DMA_CTL_STAT_DBG_RBR_PRE_PAR 0x0000080000000000
+#define RX_DMA_CTL_STAT_DBG_PORT_DROP_PKT 0x0000040000000000
+#define RX_DMA_CTL_STAT_DBG_WRED_DROP 0x0000020000000000
+#define RX_DMA_CTL_STAT_DBG_RBR_PRE_EMTY 0x0000010000000000
+#define RX_DMA_CTL_STAT_DBG_RCRSHADOW_FULL 0x0000008000000000
+#define RX_DMA_CTL_STAT_DBG_CONFIG_ERR 0x0000004000000000
+#define RX_DMA_CTL_STAT_DBG_RCRINCON 0x0000002000000000
+#define RX_DMA_CTL_STAT_DBG_RCRFULL 0x0000001000000000
+#define RX_DMA_CTL_STAT_DBG_RBR_EMPTY 0x0000000800000000
+#define RX_DMA_CTL_STAT_DBG_RBRFULL 0x0000000400000000
+#define RX_DMA_CTL_STAT_DBG_RBRLOGPAGE 0x0000000200000000
+#define RX_DMA_CTL_STAT_DBG_CFIGLOGPAGE 0x0000000100000000
+#define RX_DMA_CTL_STAT_DBG_PTRREAD 0x00000000ffff0000
+#define RX_DMA_CTL_STAT_DBG_PKTREAD 0x000000000000ffff
+
+#define RX_DMA_ENT_MSK(IDX) (DMC + 0x00068UL + (IDX) * 0x200UL)
+#define RX_DMA_ENT_MSK_RBR_TMOUT 0x0000000000200000
+#define RX_DMA_ENT_MSK_RSP_CNT_ERR 0x0000000000100000
+#define RX_DMA_ENT_MSK_BYTE_EN_BUS 0x0000000000080000
+#define RX_DMA_ENT_MSK_RSP_DAT_ERR 0x0000000000040000
+#define RX_DMA_ENT_MSK_RCR_ACK_ERR 0x0000000000020000
+#define RX_DMA_ENT_MSK_DC_FIFO_ERR 0x0000000000010000
+#define RX_DMA_ENT_MSK_RCRTHRES 0x0000000000004000
+#define RX_DMA_ENT_MSK_RCRTO 0x0000000000002000
+#define RX_DMA_ENT_MSK_RCR_SHA_PAR 0x0000000000001000
+#define RX_DMA_ENT_MSK_RBR_PRE_PAR 0x0000000000000800
+#define RX_DMA_ENT_MSK_PORT_DROP_PKT 0x0000000000000400
+#define RX_DMA_ENT_MSK_WRED_DROP 0x0000000000000200
+#define RX_DMA_ENT_MSK_RBR_PRE_EMTY 0x0000000000000100
+#define RX_DMA_ENT_MSK_RCR_SHADOW_FULL 0x0000000000000080
+#define RX_DMA_ENT_MSK_CONFIG_ERR 0x0000000000000040
+#define RX_DMA_ENT_MSK_RCRINCON 0x0000000000000020
+#define RX_DMA_ENT_MSK_RCRFULL 0x0000000000000010
+#define RX_DMA_ENT_MSK_RBR_EMPTY 0x0000000000000008
+#define RX_DMA_ENT_MSK_RBRFULL 0x0000000000000004
+#define RX_DMA_ENT_MSK_RBRLOGPAGE 0x0000000000000002
+#define RX_DMA_ENT_MSK_CFIGLOGPAGE 0x0000000000000001
+#define RX_DMA_ENT_MSK_ALL 0x00000000003f7fff
+
+#define TX_RNG_CFIG(IDX) (DMC + 0x40000UL + (IDX) * 0x200UL)
+#define TX_RNG_CFIG_LEN 0x1fff000000000000
+#define TX_RNG_CFIG_LEN_SHIFT 48
+#define TX_RNG_CFIG_STADDR_BASE 0x00000ffffff80000
+#define TX_RNG_CFIG_STADDR 0x000000000007ffc0
+
+#define TX_RING_HDL(IDX) (DMC + 0x40010UL + (IDX) * 0x200UL)
+#define TX_RING_HDL_WRAP 0x0000000000080000
+#define TX_RING_HDL_HEAD 0x000000000007fff8
+#define TX_RING_HDL_HEAD_SHIFT 3
+
+#define TX_RING_KICK(IDX) (DMC + 0x40018UL + (IDX) * 0x200UL)
+#define TX_RING_KICK_WRAP 0x0000000000080000
+#define TX_RING_KICK_TAIL 0x000000000007fff8
+
+#define TX_ENT_MSK(IDX) (DMC + 0x40020UL + (IDX) * 0x200UL)
+#define TX_ENT_MSK_MK 0x0000000000008000
+#define TX_ENT_MSK_MBOX_ERR 0x0000000000000080
+#define TX_ENT_MSK_PKT_SIZE_ERR 0x0000000000000040
+#define TX_ENT_MSK_TX_RING_OFLOW 0x0000000000000020
+#define TX_ENT_MSK_PREF_BUF_ECC_ERR 0x0000000000000010
+#define TX_ENT_MSK_NACK_PREF 0x0000000000000008
+#define TX_ENT_MSK_NACK_PKT_RD 0x0000000000000004
+#define TX_ENT_MSK_CONF_PART_ERR 0x0000000000000002
+#define TX_ENT_MSK_PKT_PRT_ERR 0x0000000000000001
+
+#define TX_CS(IDX) (DMC + 0x40028UL + (IDX)*0x200UL)
+#define TX_CS_PKT_CNT 0x0fff000000000000
+#define TX_CS_LASTMARK 0x00000fff00000000
+#define TX_CS_RST 0x0000000080000000
+#define TX_CS_RST_STATE 0x0000000040000000
+#define TX_CS_MB 0x0000000020000000
+#define TX_CS_STOP_N_GO 0x0000000010000000
+#define TX_CS_SNG_STATE 0x0000000008000000
+#define TX_CS_MK 0x0000000000008000
+#define TX_CS_MMK 0x0000000000004000
+#define TX_CS_MBOX_ERR 0x0000000000000080
+#define TX_CS_PKT_SIZE_ERR 0x0000000000000040
+#define TX_CS_TX_RING_OFLOW 0x0000000000000020
+#define TX_CS_PREF_BUF_PAR_ERR 0x0000000000000010
+#define TX_CS_NACK_PREF 0x0000000000000008
+#define TX_CS_NACK_PKT_RD 0x0000000000000004
+#define TX_CS_CONF_PART_ERR 0x0000000000000002
+#define TX_CS_PKT_PRT_ERR 0x0000000000000001
+
+#define TXDMA_MBH(IDX) (DMC + 0x40030UL + (IDX) * 0x200UL)
+#define TXDMA_MBH_MBADDR 0x0000000000000fff
+
+#define TXDMA_MBL(IDX) (DMC + 0x40038UL + (IDX) * 0x200UL)
+#define TXDMA_MBL_MBADDR 0x00000000ffffffc0
+
+#define TX_DMA_PRE_ST(IDX) (DMC + 0x40040UL + (IDX) * 0x200UL)
+#define TX_DMA_PRE_ST_SHADOW_HD 0x000000000007ffff
+
+#define TX_RNG_ERR_LOGH(IDX) (DMC + 0x40048UL + (IDX) * 0x200UL)
+#define TX_RNG_ERR_LOGH_ERR 0x0000000080000000
+#define TX_RNG_ERR_LOGH_MERR 0x0000000040000000
+#define TX_RNG_ERR_LOGH_ERRCODE 0x0000000038000000
+#define TX_RNG_ERR_LOGH_ERRADDR 0x0000000000000fff
+
+#define TX_RNG_ERR_LOGL(IDX) (DMC + 0x40050UL + (IDX) * 0x200UL)
+#define TX_RNG_ERR_LOGL_ERRADDR 0x00000000ffffffff
+
+#define TDMC_INTR_DBG(IDX) (DMC + 0x40060UL + (IDX) * 0x200UL)
+#define TDMC_INTR_DBG_MK 0x0000000000008000
+#define TDMC_INTR_DBG_MBOX_ERR 0x0000000000000080
+#define TDMC_INTR_DBG_PKT_SIZE_ERR 0x0000000000000040
+#define TDMC_INTR_DBG_TX_RING_OFLOW 0x0000000000000020
+#define TDMC_INTR_DBG_PREF_BUF_PAR_ERR 0x0000000000000010
+#define TDMC_INTR_DBG_NACK_PREF 0x0000000000000008
+#define TDMC_INTR_DBG_NACK_PKT_RD 0x0000000000000004
+#define TDMC_INTR_DBG_CONF_PART_ERR 0x0000000000000002
+#define TDMC_INTR_DBG_PKT_PART_ERR 0x0000000000000001
+
+#define TX_CS_DBG(IDX) (DMC + 0x40068UL + (IDX) * 0x200UL)
+#define TX_CS_DBG_PKT_CNT 0x0fff000000000000
+
+#define TDMC_INJ_PAR_ERR(IDX) (DMC + 0x45040UL + (IDX) * 0x200UL)
+#define TDMC_INJ_PAR_ERR_VAL 0x000000000000ffff
+
+#define TDMC_DBG_SEL(IDX) (DMC + 0x45080UL + (IDX) * 0x200UL)
+#define TDMC_DBG_SEL_DBG_SEL 0x000000000000003f
+
+#define TDMC_TRAINING_VECTOR(IDX) (DMC + 0x45088UL + (IDX) * 0x200UL)
+#define TDMC_TRAINING_VECTOR_VEC 0x00000000ffffffff
+
+#define TXC_DMA_MAX(CHAN) (FZC_TXC + 0x00000UL + (CHAN)*0x1000UL)
+#define TXC_DMA_MAX_LEN(CHAN) (FZC_TXC + 0x00008UL + (CHAN)*0x1000UL)
+
+#define TXC_CONTROL (FZC_TXC + 0x20000UL)
+#define TXC_CONTROL_ENABLE 0x0000000000000010
+#define TXC_CONTROL_PORT_ENABLE(X) (1 << (X))
+
+#define TXC_TRAINING_VEC (FZC_TXC + 0x20008UL)
+#define TXC_TRAINING_VEC_MASK 0x00000000ffffffff
+
+#define TXC_DEBUG (FZC_TXC + 0x20010UL)
+#define TXC_DEBUG_SELECT 0x000000000000003f
+
+#define TXC_MAX_REORDER (FZC_TXC + 0x20018UL)
+#define TXC_MAX_REORDER_PORT3 0x000000000f000000
+#define TXC_MAX_REORDER_PORT2 0x00000000000f0000
+#define TXC_MAX_REORDER_PORT1 0x0000000000000f00
+#define TXC_MAX_REORDER_PORT0 0x000000000000000f
+
+#define TXC_PORT_CTL(PORT) (FZC_TXC + 0x20020UL + (PORT)*0x100UL)
+#define TXC_PORT_CTL_CLR_ALL_STAT 0x0000000000000001
+
+#define TXC_PKT_STUFFED(PORT) (FZC_TXC + 0x20030UL + (PORT)*0x100UL)
+#define TXC_PKT_STUFFED_PP_REORDER 0x00000000ffff0000
+#define TXC_PKT_STUFFED_PP_PACKETASSY 0x000000000000ffff
+
+#define TXC_PKT_XMIT(PORT) (FZC_TXC + 0x20038UL + (PORT)*0x100UL)
+#define TXC_PKT_XMIT_BYTES 0x00000000ffff0000
+#define TXC_PKT_XMIT_PKTS 0x000000000000ffff
+
+#define TXC_ROECC_CTL(PORT) (FZC_TXC + 0x20040UL + (PORT)*0x100UL)
+#define TXC_ROECC_CTL_DISABLE_UE 0x0000000080000000
+#define TXC_ROECC_CTL_DBL_BIT_ERR 0x0000000000020000
+#define TXC_ROECC_CTL_SNGL_BIT_ERR 0x0000000000010000
+#define TXC_ROECC_CTL_ALL_PKTS 0x0000000000000400
+#define TXC_ROECC_CTL_ALT_PKTS 0x0000000000000200
+#define TXC_ROECC_CTL_ONE_PKT_ONLY 0x0000000000000100
+#define TXC_ROECC_CTL_LST_PKT_LINE 0x0000000000000004
+#define TXC_ROECC_CTL_2ND_PKT_LINE 0x0000000000000002
+#define TXC_ROECC_CTL_1ST_PKT_LINE 0x0000000000000001
+
+#define TXC_ROECC_ST(PORT) (FZC_TXC + 0x20048UL + (PORT)*0x100UL)
+#define TXC_ROECC_CLR_ST 0x0000000080000000
+#define TXC_ROECC_CE 0x0000000000020000
+#define TXC_ROECC_UE 0x0000000000010000
+#define TXC_ROECC_ST_ECC_ADDR 0x00000000000003ff
+
+#define TXC_RO_DATA0(PORT) (FZC_TXC + 0x20050UL + (PORT)*0x100UL)
+#define TXC_RO_DATA0_DATA0 0x00000000ffffffff /* bits 31:0 */
+
+#define TXC_RO_DATA1(PORT) (FZC_TXC + 0x20058UL + (PORT)*0x100UL)
+#define TXC_RO_DATA1_DATA1 0x00000000ffffffff /* bits 63:32 */
+
+#define TXC_RO_DATA2(PORT) (FZC_TXC + 0x20060UL + (PORT)*0x100UL)
+#define TXC_RO_DATA2_DATA2 0x00000000ffffffff /* bits 95:64 */
+
+#define TXC_RO_DATA3(PORT) (FZC_TXC + 0x20068UL + (PORT)*0x100UL)
+#define TXC_RO_DATA3_DATA3 0x00000000ffffffff /* bits 127:96 */
+
+#define TXC_RO_DATA4(PORT) (FZC_TXC + 0x20070UL + (PORT)*0x100UL)
+#define TXC_RO_DATA4_DATA4 0x0000000000ffffff /* bits 151:128 */
+
+#define TXC_SFECC_CTL(PORT) (FZC_TXC + 0x20078UL + (PORT)*0x100UL)
+#define TXC_SFECC_CTL_DISABLE_UE 0x0000000080000000
+#define TXC_SFECC_CTL_DBL_BIT_ERR 0x0000000000020000
+#define TXC_SFECC_CTL_SNGL_BIT_ERR 0x0000000000010000
+#define TXC_SFECC_CTL_ALL_PKTS 0x0000000000000400
+#define TXC_SFECC_CTL_ALT_PKTS 0x0000000000000200
+#define TXC_SFECC_CTL_ONE_PKT_ONLY 0x0000000000000100
+#define TXC_SFECC_CTL_LST_PKT_LINE 0x0000000000000004
+#define TXC_SFECC_CTL_2ND_PKT_LINE 0x0000000000000002
+#define TXC_SFECC_CTL_1ST_PKT_LINE 0x0000000000000001
+
+#define TXC_SFECC_ST(PORT) (FZC_TXC + 0x20080UL + (PORT)*0x100UL)
+#define TXC_SFECC_ST_CLR_ST 0x0000000080000000
+#define TXC_SFECC_ST_CE 0x0000000000020000
+#define TXC_SFECC_ST_UE 0x0000000000010000
+#define TXC_SFECC_ST_ECC_ADDR 0x00000000000003ff
+
+#define TXC_SF_DATA0(PORT) (FZC_TXC + 0x20088UL + (PORT)*0x100UL)
+#define TXC_SF_DATA0_DATA0 0x00000000ffffffff /* bits 31:0 */
+
+#define TXC_SF_DATA1(PORT) (FZC_TXC + 0x20090UL + (PORT)*0x100UL)
+#define TXC_SF_DATA1_DATA1 0x00000000ffffffff /* bits 63:32 */
+
+#define TXC_SF_DATA2(PORT) (FZC_TXC + 0x20098UL + (PORT)*0x100UL)
+#define TXC_SF_DATA2_DATA2 0x00000000ffffffff /* bits 95:64 */
+
+#define TXC_SF_DATA3(PORT) (FZC_TXC + 0x200a0UL + (PORT)*0x100UL)
+#define TXC_SF_DATA3_DATA3 0x00000000ffffffff /* bits 127:96 */
+
+#define TXC_SF_DATA4(PORT) (FZC_TXC + 0x200a8UL + (PORT)*0x100UL)
+#define TXC_SF_DATA4_DATA4 0x0000000000ffffff /* bits 151:128 */
+
+#define TXC_RO_TIDS(PORT) (FZC_TXC + 0x200b0UL + (PORT)*0x100UL)
+#define TXC_RO_TIDS_IN_USE 0x00000000ffffffff
+
+#define TXC_RO_STATE0(PORT) (FZC_TXC + 0x200b8UL + (PORT)*0x100UL)
+#define TXC_RO_STATE0_DUPLICATE_TID 0x00000000ffffffff
+
+#define TXC_RO_STATE1(PORT) (FZC_TXC + 0x200c0UL + (PORT)*0x100UL)
+#define TXC_RO_STATE1_UNUSED_TID 0x00000000ffffffff
+
+#define TXC_RO_STATE2(PORT) (FZC_TXC + 0x200c8UL + (PORT)*0x100UL)
+#define TXC_RO_STATE2_TRANS_TIMEOUT 0x00000000ffffffff
+
+#define TXC_RO_STATE3(PORT) (FZC_TXC + 0x200d0UL + (PORT)*0x100UL)
+#define TXC_RO_STATE3_ENAB_SPC_WMARK 0x0000000080000000
+#define TXC_RO_STATE3_RO_SPC_WMARK 0x000000007fe00000
+#define TXC_RO_STATE3_ROFIFO_SPC_AVAIL 0x00000000001ff800
+#define TXC_RO_STATE3_ENAB_RO_WMARK 0x0000000000000100
+#define TXC_RO_STATE3_HIGH_RO_USED 0x00000000000000f0
+#define TXC_RO_STATE3_NUM_RO_USED 0x000000000000000f
+
+#define TXC_RO_CTL(PORT) (FZC_TXC + 0x200d8UL + (PORT)*0x100UL)
+#define TXC_RO_CTL_CLR_FAIL_STATE 0x0000000080000000
+#define TXC_RO_CTL_RO_ADDR 0x000000000f000000
+#define TXC_RO_CTL_ADDR_FAILED 0x0000000000400000
+#define TXC_RO_CTL_DMA_FAILED 0x0000000000200000
+#define TXC_RO_CTL_LEN_FAILED 0x0000000000100000
+#define TXC_RO_CTL_CAPT_ADDR_FAILED 0x0000000000040000
+#define TXC_RO_CTL_CAPT_DMA_FAILED 0x0000000000020000
+#define TXC_RO_CTL_CAPT_LEN_FAILED 0x0000000000010000
+#define TXC_RO_CTL_RO_STATE_RD_DONE 0x0000000000000080
+#define TXC_RO_CTL_RO_STATE_WR_DONE 0x0000000000000040
+#define TXC_RO_CTL_RO_STATE_RD 0x0000000000000020
+#define TXC_RO_CTL_RO_STATE_WR 0x0000000000000010
+#define TXC_RO_CTL_RO_STATE_ADDR 0x000000000000000f
+
+#define TXC_RO_ST_DATA0(PORT) (FZC_TXC + 0x200e0UL + (PORT)*0x100UL)
+#define TXC_RO_ST_DATA0_DATA0 0x00000000ffffffff
+
+#define TXC_RO_ST_DATA1(PORT) (FZC_TXC + 0x200e8UL + (PORT)*0x100UL)
+#define TXC_RO_ST_DATA1_DATA1 0x00000000ffffffff
+
+#define TXC_RO_ST_DATA2(PORT) (FZC_TXC + 0x200f0UL + (PORT)*0x100UL)
+#define TXC_RO_ST_DATA2_DATA2 0x00000000ffffffff
+
+#define TXC_RO_ST_DATA3(PORT) (FZC_TXC + 0x200f8UL + (PORT)*0x100UL)
+#define TXC_RO_ST_DATA3_DATA3 0x00000000ffffffff
+
+#define TXC_PORT_PACKET_REQ(PORT) (FZC_TXC + 0x20100UL + (PORT)*0x100UL)
+#define TXC_PORT_PACKET_REQ_GATHER_REQ 0x00000000f0000000
+#define TXC_PORT_PACKET_REQ_PKT_REQ 0x000000000fff0000
+#define TXC_PORT_PACKET_REQ_PERR_ABRT 0x000000000000ffff
+
+/* Same bits as TXC_INT_STAT */
+#define TXC_INT_STAT_DBG (FZC_TXC + 0x20420UL)
+
+#define TXC_INT_STAT (FZC_TXC + 0x20428UL)
+#define TXC_INT_STAT_VAL_SHIFT(PORT) ((PORT) * 8)
+#define TXC_INT_STAT_VAL(PORT) (0x3f << TXC_INT_STAT_VAL_SHIFT(PORT))
+#define TXC_INT_STAT_SF_CE(PORT) (0x01 << TXC_INT_STAT_VAL_SHIFT(PORT))
+#define TXC_INT_STAT_SF_UE(PORT) (0x02 << TXC_INT_STAT_VAL_SHIFT(PORT))
+#define TXC_INT_STAT_RO_CE(PORT) (0x04 << TXC_INT_STAT_VAL_SHIFT(PORT))
+#define TXC_INT_STAT_RO_UE(PORT) (0x08 << TXC_INT_STAT_VAL_SHIFT(PORT))
+#define TXC_INT_STAT_REORDER_ERR(PORT) (0x10 << TXC_INT_STAT_VAL_SHIFT(PORT))
+#define TXC_INT_STAT_PKTASM_DEAD(PORT) (0x20 << TXC_INT_STAT_VAL_SHIFT(PORT))
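+
+/* Illustrative sketch only: extracting one port's six status bits from
+ * the shared TXC interrupt status word.
+ */
+static inline unsigned int niu_example_txc_port_stat(u64 stat, int port)
+{
+ return (stat & TXC_INT_STAT_VAL(port)) >> TXC_INT_STAT_VAL_SHIFT(port);
+}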
+
+#define TXC_INT_MASK (FZC_TXC + 0x20430UL)
+#define TXC_INT_MASK_VAL_SHIFT(PORT) ((PORT) * 8)
+#define TXC_INT_MASK_VAL(PORT) (0x3f << TXC_INT_STAT_VAL_SHIFT(PORT))
+
+#define TXC_INT_MASK_SF_CE 0x01
+#define TXC_INT_MASK_SF_UE 0x02
+#define TXC_INT_MASK_RO_CE 0x04
+#define TXC_INT_MASK_RO_UE 0x08
+#define TXC_INT_MASK_REORDER_ERR 0x10
+#define TXC_INT_MASK_PKTASM_DEAD 0x20
+#define TXC_INT_MASK_ALL 0x3f
+
+#define TXC_PORT_DMA(IDX) (FZC_TXC + 0x20028UL + (IDX)*0x100UL)
+
+#define ESPC_PIO_EN (FZC_PROM + 0x40000UL)
+#define ESPC_PIO_EN_ENABLE 0x0000000000000001
+
+#define ESPC_PIO_STAT (FZC_PROM + 0x40008UL)
+#define ESPC_PIO_STAT_READ_START 0x0000000080000000
+#define ESPC_PIO_STAT_READ_END 0x0000000040000000
+#define ESPC_PIO_STAT_WRITE_INIT 0x0000000020000000
+#define ESPC_PIO_STAT_WRITE_END 0x0000000010000000
+#define ESPC_PIO_STAT_ADDR 0x0000000003ffff00
+#define ESPC_PIO_STAT_ADDR_SHIFT 8
+#define ESPC_PIO_STAT_DATA 0x00000000000000ff
+#define ESPC_PIO_STAT_DATA_SHIFT 0
+
+#define ESPC_NCR(IDX) (FZC_PROM + 0x40020UL + (IDX)*0x8UL)
+#define ESPC_NCR_VAL 0x00000000ffffffff
+
+#define ESPC_MAC_ADDR0 ESPC_NCR(0)
+#define ESPC_MAC_ADDR1 ESPC_NCR(1)
+#define ESPC_NUM_PORTS_MACS ESPC_NCR(2)
+#define ESPC_NUM_PORTS_MACS_VAL 0x00000000000000ff
+#define ESPC_MOD_STR_LEN ESPC_NCR(4)
+#define ESPC_MOD_STR_1 ESPC_NCR(5)
+#define ESPC_MOD_STR_2 ESPC_NCR(6)
+#define ESPC_MOD_STR_3 ESPC_NCR(7)
+#define ESPC_MOD_STR_4 ESPC_NCR(8)
+#define ESPC_MOD_STR_5 ESPC_NCR(9)
+#define ESPC_MOD_STR_6 ESPC_NCR(10)
+#define ESPC_MOD_STR_7 ESPC_NCR(11)
+#define ESPC_MOD_STR_8 ESPC_NCR(12)
+#define ESPC_BD_MOD_STR_LEN ESPC_NCR(13)
+#define ESPC_BD_MOD_STR_1 ESPC_NCR(14)
+#define ESPC_BD_MOD_STR_2 ESPC_NCR(15)
+#define ESPC_BD_MOD_STR_3 ESPC_NCR(16)
+#define ESPC_BD_MOD_STR_4 ESPC_NCR(17)
+
+#define ESPC_PHY_TYPE ESPC_NCR(18)
+#define ESPC_PHY_TYPE_PORT0 0x00000000ff000000
+#define ESPC_PHY_TYPE_PORT0_SHIFT 24
+#define ESPC_PHY_TYPE_PORT1 0x0000000000ff0000
+#define ESPC_PHY_TYPE_PORT1_SHIFT 16
+#define ESPC_PHY_TYPE_PORT2 0x000000000000ff00
+#define ESPC_PHY_TYPE_PORT2_SHIFT 8
+#define ESPC_PHY_TYPE_PORT3 0x00000000000000ff
+#define ESPC_PHY_TYPE_PORT3_SHIFT 0
+
+#define ESPC_PHY_TYPE_1G_COPPER 3
+#define ESPC_PHY_TYPE_1G_FIBER 2
+#define ESPC_PHY_TYPE_10G_COPPER 1
+#define ESPC_PHY_TYPE_10G_FIBER 0
+
+#define ESPC_MAX_FM_SZ ESPC_NCR(19)
+
+#define ESPC_INTR_NUM ESPC_NCR(20)
+#define ESPC_INTR_NUM_PORT0 0x00000000ff000000
+#define ESPC_INTR_NUM_PORT1 0x0000000000ff0000
+#define ESPC_INTR_NUM_PORT2 0x000000000000ff00
+#define ESPC_INTR_NUM_PORT3 0x00000000000000ff
+
+#define ESPC_VER_IMGSZ ESPC_NCR(21)
+#define ESPC_VER_IMGSZ_IMGSZ 0x00000000ffff0000
+#define ESPC_VER_IMGSZ_IMGSZ_SHIFT 16
+#define ESPC_VER_IMGSZ_VER 0x000000000000ffff
+#define ESPC_VER_IMGSZ_VER_SHIFT 0
+
+#define ESPC_CHKSUM ESPC_NCR(22)
+#define ESPC_CHKSUM_SUM 0x00000000000000ff
+
+#define ESPC_EEPROM_SIZE 0x100000
+
+#define CLASS_CODE_UNRECOG 0x00
+#define CLASS_CODE_DUMMY1 0x01
+#define CLASS_CODE_ETHERTYPE1 0x02
+#define CLASS_CODE_ETHERTYPE2 0x03
+#define CLASS_CODE_USER_PROG1 0x04
+#define CLASS_CODE_USER_PROG2 0x05
+#define CLASS_CODE_USER_PROG3 0x06
+#define CLASS_CODE_USER_PROG4 0x07
+#define CLASS_CODE_TCP_IPV4 0x08
+#define CLASS_CODE_UDP_IPV4 0x09
+#define CLASS_CODE_AH_ESP_IPV4 0x0a
+#define CLASS_CODE_SCTP_IPV4 0x0b
+#define CLASS_CODE_TCP_IPV6 0x0c
+#define CLASS_CODE_UDP_IPV6 0x0d
+#define CLASS_CODE_AH_ESP_IPV6 0x0e
+#define CLASS_CODE_SCTP_IPV6 0x0f
+#define CLASS_CODE_ARP 0x10
+#define CLASS_CODE_RARP 0x11
+#define CLASS_CODE_DUMMY2 0x12
+#define CLASS_CODE_DUMMY3 0x13
+#define CLASS_CODE_DUMMY4 0x14
+#define CLASS_CODE_DUMMY5 0x15
+#define CLASS_CODE_DUMMY6 0x16
+#define CLASS_CODE_DUMMY7 0x17
+#define CLASS_CODE_DUMMY8 0x18
+#define CLASS_CODE_DUMMY9 0x19
+#define CLASS_CODE_DUMMY10 0x1a
+#define CLASS_CODE_DUMMY11 0x1b
+#define CLASS_CODE_DUMMY12 0x1c
+#define CLASS_CODE_DUMMY13 0x1d
+#define CLASS_CODE_DUMMY14 0x1e
+#define CLASS_CODE_DUMMY15 0x1f
+
+/* Logical devices and device groups */
+#define LDN_RXDMA(CHAN) (0 + (CHAN))
+#define LDN_RESV1(OFF) (16 + (OFF))
+#define LDN_TXDMA(CHAN) (32 + (CHAN))
+#define LDN_RESV2(OFF) (56 + (OFF))
+#define LDN_MIF 63
+#define LDN_MAC(PORT) (64 + (PORT))
+#define LDN_DEVICE_ERROR 68
+#define LDN_MAX LDN_DEVICE_ERROR
+
+#define NIU_LDG_MIN 0
+#define NIU_LDG_MAX 63
+#define NIU_NUM_LDG 64
+#define LDG_INVALID 0xff
+
+/* PHY stuff */
+#define NIU_PMA_PMD_DEV_ADDR 1
+#define NIU_PCS_DEV_ADDR 3
+
+#define NIU_PHY_ID_MASK 0xfffff0f0
+#define NIU_PHY_ID_BCM8704 0x00206030
+#define NIU_PHY_ID_BCM5464R 0x002060b0
+
+#define BCM8704_PMA_PMD_DEV_ADDR 1
+#define BCM8704_PCS_DEV_ADDR 2
+#define BCM8704_USER_DEV3_ADDR 3
+#define BCM8704_PHYXS_DEV_ADDR 4
+#define BCM8704_USER_DEV4_ADDR 4
+
+#define BCM8704_PMD_RCV_SIGDET 0x000a
+#define PMD_RCV_SIGDET_LANE3 0x0010
+#define PMD_RCV_SIGDET_LANE2 0x0008
+#define PMD_RCV_SIGDET_LANE1 0x0004
+#define PMD_RCV_SIGDET_LANE0 0x0002
+#define PMD_RCV_SIGDET_GLOBAL 0x0001
+
+#define BCM8704_PCS_10G_R_STATUS 0x0020
+#define PCS_10G_R_STATUS_LINKSTAT 0x1000
+#define PCS_10G_R_STATUS_PRBS31_ABLE 0x0004
+#define PCS_10G_R_STATUS_HI_BER 0x0002
+#define PCS_10G_R_STATUS_BLK_LOCK 0x0001
+
+#define BCM8704_USER_CONTROL 0xc800
+#define USER_CONTROL_OPTXENB_LVL 0x8000
+#define USER_CONTROL_OPTXRST_LVL 0x4000
+#define USER_CONTROL_OPBIASFLT_LVL 0x2000
+#define USER_CONTROL_OBTMPFLT_LVL 0x1000
+#define USER_CONTROL_OPPRFLT_LVL 0x0800
+#define USER_CONTROL_OPTXFLT_LVL 0x0400
+#define USER_CONTROL_OPRXLOS_LVL 0x0200
+#define USER_CONTROL_OPRXFLT_LVL 0x0100
+#define USER_CONTROL_OPTXON_LVL 0x0080
+#define USER_CONTROL_RES1 0x007f
+#define USER_CONTROL_RES1_SHIFT 0
+
+#define BCM8704_USER_ANALOG_CLK 0xc801
+#define BCM8704_USER_PMD_RX_CONTROL 0xc802
+
+#define BCM8704_USER_PMD_TX_CONTROL 0xc803
+#define USER_PMD_TX_CTL_RES1 0xfe00
+#define USER_PMD_TX_CTL_XFP_CLKEN 0x0100
+#define USER_PMD_TX_CTL_TX_DAC_TXD 0x00c0
+#define USER_PMD_TX_CTL_TX_DAC_TXD_SH 6
+#define USER_PMD_TX_CTL_TX_DAC_TXCK 0x0030
+#define USER_PMD_TX_CTL_TX_DAC_TXCK_SH 4
+#define USER_PMD_TX_CTL_TSD_LPWREN 0x0008
+#define USER_PMD_TX_CTL_TSCK_LPWREN 0x0004
+#define USER_PMD_TX_CTL_CMU_LPWREN 0x0002
+#define USER_PMD_TX_CTL_SFIFORST 0x0001
+
+#define BCM8704_USER_ANALOG_STATUS0 0xc804
+#define BCM8704_USER_OPT_DIGITAL_CTRL 0xc808
+#define BCM8704_USER_TX_ALARM_STATUS 0x9004
+
+#define USER_ODIG_CTRL_FMODE 0x8000
+#define USER_ODIG_CTRL_TX_PDOWN 0x4000
+#define USER_ODIG_CTRL_RX_PDOWN 0x2000
+#define USER_ODIG_CTRL_EFILT_EN 0x1000
+#define USER_ODIG_CTRL_OPT_RST 0x0800
+#define USER_ODIG_CTRL_PCS_TIB 0x0400
+#define USER_ODIG_CTRL_PCS_RI 0x0200
+#define USER_ODIG_CTRL_RESV1 0x0180
+#define USER_ODIG_CTRL_GPIOS 0x0060
+#define USER_ODIG_CTRL_GPIOS_SHIFT 5
+#define USER_ODIG_CTRL_RESV2 0x0010
+#define USER_ODIG_CTRL_LB_ERR_DIS 0x0008
+#define USER_ODIG_CTRL_RESV3 0x0006
+#define USER_ODIG_CTRL_TXONOFF_PD_DIS 0x0001
+
+#define BCM8704_PHYXS_XGXS_LANE_STAT 0x0018
+#define PHYXS_XGXS_LANE_STAT_ALINGED 0x1000
+#define PHYXS_XGXS_LANE_STAT_PATTEST 0x0800
+#define PHYXS_XGXS_LANE_STAT_MAGIC 0x0400
+#define PHYXS_XGXS_LANE_STAT_LANE3 0x0008
+#define PHYXS_XGXS_LANE_STAT_LANE2 0x0004
+#define PHYXS_XGXS_LANE_STAT_LANE1 0x0002
+#define PHYXS_XGXS_LANE_STAT_LANE0 0x0001
+
+#define BCM5464R_AUX_CTL 24
+#define BCM5464R_AUX_CTL_EXT_LB 0x8000
+#define BCM5464R_AUX_CTL_EXT_PLEN 0x4000
+#define BCM5464R_AUX_CTL_ER1000 0x3000
+#define BCM5464R_AUX_CTL_ER1000_SHIFT 12
+#define BCM5464R_AUX_CTL_RESV1 0x0800
+#define BCM5464R_AUX_CTL_WRITE_1 0x0400
+#define BCM5464R_AUX_CTL_RESV2 0x0300
+#define BCM5464R_AUX_CTL_PRESP_DIS 0x0080
+#define BCM5464R_AUX_CTL_RESV3 0x0040
+#define BCM5464R_AUX_CTL_ER100 0x0030
+#define BCM5464R_AUX_CTL_ER100_SHIFT 4
+#define BCM5464R_AUX_CTL_DIAG_MODE 0x0008
+#define BCM5464R_AUX_CTL_SR_SEL 0x0007
+#define BCM5464R_AUX_CTL_SR_SEL_SHIFT 0
+
+#define BCM5464R_CTRL1000_AS_MASTER 0x0800
+#define BCM5464R_CTRL1000_ENABLE_AS_MASTER 0x1000
+
+#define RCR_ENTRY_MULTI 0x8000000000000000
+#define RCR_ENTRY_PKT_TYPE 0x6000000000000000
+#define RCR_ENTRY_PKT_TYPE_SHIFT 61
+#define RCR_ENTRY_ZERO_COPY 0x1000000000000000
+#define RCR_ENTRY_NOPORT 0x0800000000000000
+#define RCR_ENTRY_PROMISC 0x0400000000000000
+#define RCR_ENTRY_ERROR 0x0380000000000000
+#define RCR_ENTRY_DCF_ERR 0x0040000000000000
+#define RCR_ENTRY_L2_LEN 0x003fff0000000000
+#define RCR_ENTRY_L2_LEN_SHIFT 40
+#define RCR_ENTRY_PKTBUFSZ 0x000000c000000000
+#define RCR_ENTRY_PKTBUFSZ_SHIFT 38
+#define RCR_ENTRY_PKT_BUF_ADDR 0x0000003fffffffff /* bits 43:6 */
+#define RCR_ENTRY_PKT_BUF_ADDR_SHIFT 6
+
+#define RCR_PKT_TYPE_OTHER 0x0
+#define RCR_PKT_TYPE_TCP 0x1
+#define RCR_PKT_TYPE_UDP 0x2
+#define RCR_PKT_TYPE_SCTP 0x3
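+
+/* Illustrative sketch only: pulling the L2 length and DMA buffer
+ * address out of one 64-bit RCR entry.  The address field carries bits
+ * 43:6 of the buffer address, hence the shift back up by 6.
+ */
+static inline void niu_example_parse_rcr(u64 ent, u32 *l2_len, u64 *addr)
+{
+ *l2_len = (ent & RCR_ENTRY_L2_LEN) >> RCR_ENTRY_L2_LEN_SHIFT;
+ *addr = (ent & RCR_ENTRY_PKT_BUF_ADDR) << RCR_ENTRY_PKT_BUF_ADDR_SHIFT;
+}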
+
+#define NIU_RXPULL_MAX 96
+
+struct rx_pkt_hdr0 {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ __u8 inputport:2,
+ maccheck:1,
+ class:5;
+ __u8 vlan:1,
+ llcsnap:1,
+ noport:1,
+ badip:1,
+ tcamhit:1,
+ tres:2,
+ tzfvld:1;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ __u8 class:5,
+ maccheck:1,
+ inputport:2;
+ __u8 tzfvld:1,
+ tres:2,
+ tcamhit:1,
+ badip:1,
+ noport:1,
+ llcsnap:1,
+ vlan:1;
+#endif
+};
+
+struct rx_pkt_hdr1 {
+ __u8 hwrsvd1;
+ __u8 tcammatch;
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ __u8 hwrsvd2:2,
+ hashit:1,
+ exact:1,
+ hzfvld:1,
+ hashsidx:3;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ __u8 hashsidx:3,
+ hzfvld:1,
+ exact:1,
+ hashit:1,
+ hwrsvd2:2;
+#endif
+ __u8 zcrsvd;
+
+ /* Bits 11:8 of zero copy flow ID. */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ __u8 hwrsvd3:4, zflowid0:4;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ __u8 zflowid0:4, hwrsvd3:4;
+#endif
+
+ /* Bits 7:0 of zero copy flow ID. */
+ __u8 zflowid1;
+
+ /* Bits 15:8 of hash value, H2. */
+ __u8 hashval2_0;
+
+ /* Bits 7:0 of hash value, H2. */
+ __u8 hashval2_1;
+
+ /* Bits 19:16 of hash value, H1. */
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ __u8 hwrsvd4:4, hashval1_0:4;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ __u8 hashval1_0:4, hwrsvd4:4;
+#endif
+
+ /* Bits 15:8 of hash value, H1. */
+ __u8 hashval1_1;
+
+ /* Bits 7:0 of hash value, H1. */
+ __u8 hashval1_2;
+
+ __u8 usrdata_0; /* Bits 39:32 of user data. */
+ __u8 usrdata_1; /* Bits 31:24 of user data. */
+ __u8 usrdata_2; /* Bits 23:16 of user data. */
+ __u8 usrdata_3; /* Bits 15:8 of user data. */
+ __u8 usrdata_4; /* Bits 7:0 of user data. */
+};
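+
+/* Illustrative sketch only: reassembling the 20-bit H1 and 16-bit H2
+ * hash values from the byte-split fields above.
+ */
+static inline void niu_example_rx_hashes(const struct rx_pkt_hdr1 *hp,
+ u32 *h1, u16 *h2)
+{
+ *h1 = ((u32) hp->hashval1_0 << 16) |
+ ((u32) hp->hashval1_1 << 8) |
+ hp->hashval1_2;
+ *h2 = ((u16) hp->hashval2_0 << 8) | hp->hashval2_1;
+}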
+
+struct tx_dma_mbox {
+ __u64 tx_dma_pre_st;
+ __u64 tx_cs;
+ __u64 tx_ring_kick;
+ __u64 tx_ring_hdl;
+ __u64 resv1;
+ __u32 tx_rng_err_logl;
+ __u32 tx_rng_err_logh;
+ __u64 resv2;
+ __u64 resv3;
+};
+
+struct tx_pkt_hdr {
+ __le64 flags;
+#define TXHDR_PAD 0x0000000000000007
+#define TXHDR_PAD_SHIFT 0
+#define TXHDR_LEN 0x000000003fff0000
+#define TXHDR_LEN_SHIFT 16
+#define TXHDR_L4STUFF 0x0000003f00000000
+#define TXHDR_L4STUFF_SHIFT 32
+#define TXHDR_L4START 0x00003f0000000000
+#define TXHDR_L4START_SHIFT 40
+#define TXHDR_L3START 0x000f000000000000
+#define TXHDR_L3START_SHIFT 48
+#define TXHDR_IHL 0x00f0000000000000
+#define TXHDR_IHL_SHIFT 52
+#define TXHDR_VLAN 0x0100000000000000
+#define TXHDR_LLC 0x0200000000000000
+#define TXHDR_IP_VER 0x2000000000000000
+#define TXHDR_CSUM_NONE 0x0000000000000000
+#define TXHDR_CSUM_TCP 0x4000000000000000
+#define TXHDR_CSUM_UDP 0x8000000000000000
+#define TXHDR_CSUM_SCTP 0xc000000000000000
+ __le64 resv;
+};
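+
+/* Illustrative sketch only: composing the flags word for a TCP frame
+ * with hardware checksumming.  The caller is assumed to supply the
+ * offsets already scaled to the units the hardware expects.
+ */
+static inline __le64 niu_example_tx_flags(u64 pad, u64 len, u64 l3start,
+ u64 ihl, u64 l4start, u64 l4stuff)
+{
+ return cpu_to_le64(TXHDR_CSUM_TCP |
+ (pad << TXHDR_PAD_SHIFT) |
+ (len << TXHDR_LEN_SHIFT) |
+ (l3start << TXHDR_L3START_SHIFT) |
+ (ihl << TXHDR_IHL_SHIFT) |
+ (l4start << TXHDR_L4START_SHIFT) |
+ (l4stuff << TXHDR_L4STUFF_SHIFT));
+}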
+
+#define TX_DESC_SOP 0x8000000000000000
+#define TX_DESC_MARK 0x4000000000000000
+#define TX_DESC_NUM_PTR 0x3c00000000000000
+#define TX_DESC_NUM_PTR_SHIFT 58
+#define TX_DESC_TR_LEN 0x01fff00000000000
+#define TX_DESC_TR_LEN_SHIFT 44
+#define TX_DESC_SAD 0x00000fffffffffff
+#define TX_DESC_SAD_SHIFT 0
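+
+/* Illustrative sketch only: building the first (SOP) descriptor of a
+ * frame, optionally setting MARK to request a completion update.
+ */
+static inline __le64 niu_example_tx_desc(int mark, u64 nptrs, u64 len,
+ u64 dma_addr)
+{
+ u64 d = TX_DESC_SOP |
+ (mark ? TX_DESC_MARK : 0) |
+ ((nptrs << TX_DESC_NUM_PTR_SHIFT) & TX_DESC_NUM_PTR) |
+ ((len << TX_DESC_TR_LEN_SHIFT) & TX_DESC_TR_LEN) |
+ (dma_addr & TX_DESC_SAD);
+
+ return cpu_to_le64(d);
+}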
+
+struct tx_buff_info {
+ struct sk_buff *skb;
+ u64 mapping;
+};
+
+struct txdma_mailbox {
+ __le64 tx_dma_pre_st;
+ __le64 tx_cs;
+ __le64 tx_ring_kick;
+ __le64 tx_ring_hdl;
+ __le64 resv1;
+ __le32 tx_rng_err_logl;
+ __le32 tx_rng_err_logh;
+ __le64 resv2[2];
+} __attribute__((aligned(64)));
+
+#define MAX_TX_RING_SIZE 256
+
+struct tx_ring_info {
+ struct tx_buff_info tx_buffs[MAX_TX_RING_SIZE];
+ struct niu *np;
+ u64 tx_cs;
+ u64 tx_hdl;
+ int pending;
+ int prod;
+ int cons;
+ int wrap_bit;
+ int tx_channel;
+ struct txdma_mailbox *mbox;
+ __le64 *descr;
+
+ u64 tx_packets;
+ u64 tx_bytes;
+ u64 tx_errors;
+
+ u64 mbox_dma;
+ u64 descr_dma;
+ int max_burst;
+};
+
+#define NEXT_TX(tp, index) \
+ (((index) + 1) < (tp)->pending ? ((index) + 1) : 0)
+
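+/* Free descriptor slots in the TX ring; assumes the ring holds
+ * MAX_TX_RING_SIZE entries so the index difference can be masked.
+ */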
+static inline u32 niu_tx_avail(struct tx_ring_info *tp)
+{
+ return (tp->pending -
+ ((tp->prod - tp->cons) & (MAX_TX_RING_SIZE - 1)));
+}
+
+struct rxdma_mailbox {
+ __le64 rx_dma_ctl_stat;
+ __le64 rbr_stat;
+ __le32 rbr_hdl;
+ __le32 rbr_hdh;
+ __le64 resv1;
+ __le32 rcrstat_c;
+ __le32 rcrstat_b;
+ __le64 rcrstat_a;
+ __le64 resv2[2];
+} __attribute__((aligned(64)));
+
+#define MAX_RCR_RING_SIZE 256
+#define MAX_RBR_RING_SIZE 128
+
+#define RBR_REFILL_MIN 16
+
+#define RX_SKB_ALLOC_SIZE (128 + NET_IP_ALIGN)
+
+struct rx_ring_info {
+ struct niu *np;
+ int rx_channel;
+ __u16 rbr_block_size;
+ __u16 rbr_blocks_per_page;
+ __u16 rbr_sizes[4];
+ unsigned int rcr_index;
+ unsigned int rcr_table_size;
+ unsigned int rbr_index;
+ unsigned int rbr_pending;
+ unsigned int rbr_refill_pending;
+ unsigned int rbr_kick_thresh;
+ unsigned int rbr_table_size;
+ struct page **rxhash;
+ struct rxdma_mailbox *mbox;
+ __le64 *rcr;
+ __le32 *rbr;
+#define RBR_DESCR_ADDR_SHIFT 12
+
+ u64 rx_packets;
+ u64 rx_bytes;
+ u64 rx_dropped;
+ u64 rx_errors;
+
+ u64 mbox_dma;
+ u64 rcr_dma;
+ u64 rbr_dma;
+
+ /* WRED */
+ int nonsyn_window;
+ int nonsyn_threshold;
+ int syn_window;
+ int syn_threshold;
+
+ /* interrupt mitigation */
+ int rcr_pkt_threshold;
+ int rcr_timeout;
+};
+
+#define NEXT_RCR(rp, index) \
+ (((index) + 1) < (rp)->rcr_table_size ? ((index) + 1) : 0)
+#define NEXT_RBR(rp, index) \
+ (((index) + 1) < (rp)->rbr_table_size ? ((index) + 1) : 0)
+
+#define NIU_MAX_PORTS 4
+#define NIU_NUM_RXCHAN 16
+#define NIU_NUM_TXCHAN 24
+#define MAC_NUM_HASH 16
+
+#define NIU_VPD_MIN_MAJOR 3
+#define NIU_VPD_MIN_MINOR 4
+
+#define NIU_VPD_MODEL_MAX 32
+#define NIU_VPD_BD_MODEL_MAX 16
+#define NIU_VPD_VERSION_MAX 64
+#define NIU_VPD_PHY_TYPE_MAX 8
+
+struct niu_vpd {
+ char model[NIU_VPD_MODEL_MAX];
+ char board_model[NIU_VPD_BD_MODEL_MAX];
+ char version[NIU_VPD_VERSION_MAX];
+ char phy_type[NIU_VPD_PHY_TYPE_MAX];
+ __u8 mac_num;
+ __u8 __pad;
+ __u8 local_mac[6];
+ int fcode_major;
+ int fcode_minor;
+};
+
+struct niu_altmac_rdc {
+ __u8 alt_mac_num;
+ __u8 rdc_num;
+ __u8 mac_pref;
+};
+
+struct niu_vlan_rdc {
+ u8 rdc_num;
+ u8 vlan_pref;
+};
+
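+/* Software view of the classifier: alternate-MAC and VLAN to RDC-table
+ * mappings, the next free TCAM slot, and the flow-hash initial values.
+ */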
+struct niu_classifier {
+ struct niu_altmac_rdc alt_mac_mappings[16];
+ struct niu_vlan_rdc vlan_mappings[ENET_VLAN_TBL_NUM_ENTRIES];
+
+ u16 tcam_index;
+ u16 num_alt_mac_mappings;
+
+ u32 h1_init;
+ u16 h2_init;
+};
+
+#define NIU_NUM_RDC_TABLES 8
+#define NIU_RDC_TABLE_SLOTS 16
+
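+/* An RDC group table: each slot names an RX DMA channel, and the
+ * classifier result indexes into it to spread flows across channels.
+ */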
+struct rdc_table {
+ u8 rxdma_channel[NIU_RDC_TABLE_SLOTS];
+};
+
+struct niu_rdc_tables {
+ struct rdc_table tables[NIU_NUM_RDC_TABLES];
+ int first_table_num;
+ int num_tables;
+};
+
+#define PHY_TYPE_PMA_PMD 0
+#define PHY_TYPE_PCS 1
+#define PHY_TYPE_MII 2
+#define PHY_TYPE_MAX 3
+
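+/* Results of PHY probing, per PHY type and port, plus the sysfs
+ * attributes that export them.
+ */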
+struct phy_probe_info {
+ u32 phy_id[PHY_TYPE_MAX][NIU_MAX_PORTS];
+ u8 phy_port[PHY_TYPE_MAX][NIU_MAX_PORTS];
+ u8 cur[PHY_TYPE_MAX];
+
+ struct device_attribute phy_port_attrs[PHY_TYPE_MAX * NIU_MAX_PORTS];
+ struct device_attribute phy_type_attrs[PHY_TYPE_MAX * NIU_MAX_PORTS];
+ struct device_attribute phy_id_attrs[PHY_TYPE_MAX * NIU_MAX_PORTS];
+};
+
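+/* Shadow copy of one TCAM entry: key, mask, and associated data. */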
+struct niu_tcam_entry {
+ u64 key[4];
+ u64 key_mask[4];
+ u64 assoc_data;
+};
+
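+/* Identifies the parent device by PCI address or OF device node,
+ * depending on how the chip was probed.
+ */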
+struct device_node;
+union niu_parent_id {
+ struct {
+ int domain;
+ int bus;
+ int device;
+ } pci;
+ struct device_node *of;
+};
+
+struct niu;
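+/* State shared by all ports of one chip: channel-to-port assignments,
+ * RDC group configuration, the LDG map, and the shadow classifier and
+ * TCAM state.
+ */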
+struct niu_parent {
+ struct platform_device *plat_dev;
+ int index;
+
+ union niu_parent_id id;
+
+ struct niu *ports[NIU_MAX_PORTS];
+
+ atomic_t refcnt;
+ struct list_head list;
+
+ spinlock_t lock;
+
+ u32 flags;
+#define PARENT_FLGS_CLS_HWINIT 0x00000001
+
+ u32 port_phy;
+#define PORT_PHY_UNKNOWN 0x00000000
+#define PORT_PHY_INVALID 0xffffffff
+#define PORT_TYPE_10G 0x01
+#define PORT_TYPE_1G 0x02
+#define PORT_TYPE_MASK 0x03
+
+ u8 rxchan_per_port[NIU_MAX_PORTS];
+ u8 txchan_per_port[NIU_MAX_PORTS];
+
+ struct niu_rdc_tables rdc_group_cfg[NIU_MAX_PORTS];
+ u8 rdc_default[NIU_MAX_PORTS];
+
+ u8 ldg_map[LDN_MAX + 1];
+
+ u8 plat_type;
+#define PLAT_TYPE_INVALID 0x00
+#define PLAT_TYPE_ATLAS 0x01
+#define PLAT_TYPE_NIU 0x02
+#define PLAT_TYPE_VF_P0 0x03
+#define PLAT_TYPE_VF_P1 0x04
+
+ u8 num_ports;
+
+ u16 tcam_num_entries;
+#define NIU_PCI_TCAM_ENTRIES 256
+#define NIU_NONPCI_TCAM_ENTRIES 128
+#define NIU_TCAM_ENTRIES_MAX 256
+
+ int rxdma_clock_divider;
+
+ struct phy_probe_info phy_probe_info;
+
+ struct niu_tcam_entry tcam[NIU_TCAM_ENTRIES_MAX];
+ u64 l2_cls[2];
+ u64 l3_cls[4];
+ u64 tcam_key[12];
+ u64 flow_key[12];
+};
+
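+/* DMA primitives indirected through an ops vector, presumably so a
+ * non-PCI frontend (e.g. a Niagara-2 path) can supply its own
+ * implementations while sharing the rest of the driver.
+ */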
+struct niu_ops {
+ void *(*alloc_coherent)(struct device *dev, size_t size,
+ u64 *handle, gfp_t flag);
+ void (*free_coherent)(struct device *dev, size_t size,
+ void *cpu_addr, u64 handle);
+ u64 (*map_page)(struct device *dev, struct page *page,
+ unsigned long offset, size_t size,
+ enum dma_data_direction direction);
+ void (*unmap_page)(struct device *dev, u64 dma_address,
+ size_t size, enum dma_data_direction direction);
+ u64 (*map_single)(struct device *dev, void *cpu_addr,
+ size_t size,
+ enum dma_data_direction direction);
+ void (*unmap_single)(struct device *dev, u64 dma_address,
+ size_t size, enum dma_data_direction direction);
+};
+
+struct niu_link_config {
+ /* Describes what we're trying to get. */
+ u32 advertising;
+ u32 supported;
+ u16 speed;
+ u8 duplex;
+ u8 autoneg;
+
+ /* Describes what we actually have. */
+ u16 active_speed;
+ u8 active_duplex;
+#define SPEED_INVALID 0xffff
+#define DUPLEX_INVALID 0xff
+#define AUTONEG_INVALID 0xff
+
+ u8 loopback_mode;
+#define LOOPBACK_DISABLED 0x00
+#define LOOPBACK_PHY 0x01
+#define LOOPBACK_MAC 0x02
+};
+
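+/* One logical device group (LDG): an interrupt source with its own
+ * NAPI context, timer setting, and latched v0/v1/v2 status words.
+ */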
+struct niu_ldg {
+ struct napi_struct napi;
+ struct niu *np;
+ u8 ldg_num;
+ u8 timer;
+ u64 v0, v1, v2;
+ unsigned int irq;
+};
+
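+/* Counters mirrored from the XMAC hardware statistics registers. */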
+struct niu_xmac_stats {
+ u64 tx_frames;
+ u64 tx_bytes;
+ u64 tx_fifo_errors;
+ u64 tx_overflow_errors;
+ u64 tx_max_pkt_size_errors;
+ u64 tx_underflow_errors;
+
+ u64 rx_local_faults;
+ u64 rx_remote_faults;
+ u64 rx_link_faults;
+ u64 rx_align_errors;
+ u64 rx_frags;
+ u64 rx_mcasts;
+ u64 rx_bcasts;
+ u64 rx_hist_cnt1;
+ u64 rx_hist_cnt2;
+ u64 rx_hist_cnt3;
+ u64 rx_hist_cnt4;
+ u64 rx_hist_cnt5;
+ u64 rx_hist_cnt6;
+ u64 rx_hist_cnt7;
+ u64 rx_octets;
+ u64 rx_code_violations;
+ u64 rx_len_errors;
+ u64 rx_crc_errors;
+ u64 rx_underflows;
+ u64 rx_overflows;
+
+ u64 pause_off_state;
+ u64 pause_on_state;
+ u64 pause_received;
+};
+
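+/* Counters mirrored from the BMAC hardware statistics registers. */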
+struct niu_bmac_stats {
+ u64 tx_underflow_errors;
+ u64 tx_max_pkt_size_errors;
+ u64 tx_bytes;
+ u64 tx_frames;
+
+ u64 rx_overflows;
+ u64 rx_frames;
+ u64 rx_align_errors;
+ u64 rx_crc_errors;
+ u64 rx_len_errors;
+
+ u64 pause_off_state;
+ u64 pause_on_state;
+ u64 pause_received;
+};
+
+union niu_mac_stats {
+ struct niu_xmac_stats xmac;
+ struct niu_bmac_stats bmac;
+};
+
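+/* Per-PHY-type method table: SERDES bring-up, transceiver init, and
+ * link state readout.
+ */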
+struct niu_phy_ops {
+ int (*serdes_init)(struct niu *np);
+ int (*xcvr_init)(struct niu *np);
+ int (*link_status)(struct niu *np, int *link_up_p);
+};
+
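+/* Per-port driver private state, hung off the net_device. */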
+struct niu {
+ void __iomem *regs;
+ struct net_device *dev;
+ struct pci_dev *pdev;
+ struct device *device;
+ struct niu_parent *parent;
+
+ u32 flags;
+#define NIU_FLAGS_MSIX 0x00400000 /* MSI-X in use */
+#define NIU_FLAGS_MCAST 0x00200000 /* multicast filter enabled */
+#define NIU_FLAGS_PROMISC 0x00100000 /* PROMISC enabled */
+#define NIU_FLAGS_VPD_VALID 0x00080000 /* VPD has valid version */
+#define NIU_FLAGS_10G 0x00040000 /* 0=1G 1=10G */
+#define NIU_FLAGS_FIBER 0x00020000 /* 0=COPPER 1=FIBER */
+#define NIU_FLAGS_XMAC 0x00010000 /* 0=BMAC 1=XMAC */
+
+ u32 msg_enable;
+
+ /* Protects hw programming, and ring state. */
+ spinlock_t lock;
+
+ const struct niu_ops *ops;
+ struct net_device_stats net_stats;
+ union niu_mac_stats mac_stats;
+
+ struct rx_ring_info *rx_rings;
+ struct tx_ring_info *tx_rings;
+ int num_rx_rings;
+ int num_tx_rings;
+
+ struct niu_ldg ldg[NIU_NUM_LDG];
+ int num_ldg;
+
+ void __iomem *mac_regs;
+ unsigned long ipp_off;
+ unsigned long pcs_off;
+ unsigned long xpcs_off;
+
+ struct timer_list timer;
+ const struct niu_phy_ops *phy_ops;
+ int phy_addr;
+
+ struct niu_link_config link_config;
+
+ struct work_struct reset_task;
+
+ u8 port;
+ u8 mac_xcvr;
+#define MAC_XCVR_MII 1
+#define MAC_XCVR_PCS 2
+#define MAC_XCVR_XPCS 3
+
+ struct niu_classifier clas;
+
+ struct niu_vpd vpd;
+ u32 eeprom_len;
+};
+
+#endif /* _NIU_H */