Message-ID: <1330480210-30470-2-git-send-email-rodrigue@qca.qualcomm.com>
Date: Tue, 28 Feb 2012 17:50:10 -0800
From: "Luis R. Rodriguez" <rodrigue@....qualcomm.com>
To: <davem@...emloft.net>, <netdev@...r.kernel.org>,
<linux-kernel@...r.kernel.org>, <mcgrof@...jolero.org>
CC: <qca-linux-team@...lcomm.com>, <nic-devel@...lcomm.com>,
<kgiori@....qualcomm.com>, <chris.snook@...il.com>,
<mathieu@....qualcomm.com>, <bryanh@...cinc.com>,
"Luis R. Rodriguez" <rodrigue@....qualcomm.com>,
Stevent Li <steventl@....qualcomm.com>,
"Hao-Ran Liu (Joseph Liu)" <hao-ran.liu@...onical.com>,
Cloud Ren <cjren@....qualcomm.com>,
Joe Perches <joe@...ches.com>
Subject: [PATCH] net: add new QCA alx ethernet driver which supersedes atl1c
This driver is intended to replace the atl1c driver and adds
support for two new chipsets. Qualcomm Atheros (QCA) is committing
to fixing all bugs found in this driver upstream. Test results show
this driver performs better than atl1c on all supported chipsets,
closes the gap between TX and RX throughput (they now match), and
has also been verified with ASPM enabled on all supported chipsets.
This driver and patch have also addressed all sparse and checkpatch
warnings.
This driver is also permissively licensed, enabling developers of
other OSes, such as FreeBSD, to cherry-pick this driver and port it
to their OS.
Both the atl1c and alx drivers support the following chipsets:
1969:1063 - AR8131 Gigabit Ethernet
1969:1062 - AR8132 Fast Ethernet (10/100 Mbit/s)
1969:2062 - AR8152 v2.0 Fast Ethernet
1969:2060 - AR8152 v1.1 Fast Ethernet
1969:1073 - AR8151 v1.0 Gigabit Ethernet
1969:1083 - AR8151 v2.0 Gigabit Ethernet
But alx also supports these two new chipsets:
1969:1091 - AR8161 Gigabit Ethernet
1969:1090 - AR8162 Fast Ethernet
We leave the atl1c driver in place for now but mark it as
deprecated in favor of alx. Linux distributions should
consider using alx moving forward, and any issues found with
the alx driver will be proactively addressed and tracked by
assigned QCA engineers.
For more detail, including a graph of throughput in comparison
to atl1c, see the new alx driver home page:
https://www.linuxfoundation.org/collaborate/workgroups/networking/alx
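As a quick way to relate the PCI ID tables above to actual hardware, here is
a small illustrative shell helper (hypothetical, not part of this patch) that
maps a 1969:xxxx ID, as printed by `lspci -nn`, to the chip name listed above:

```shell
# Hypothetical helper, not part of the driver: map a PCI vendor:device
# ID from the tables in this patch to its chip name.
lookup_chip() {
    case "$1" in
        1969:1063) echo "AR8131 Gigabit Ethernet" ;;
        1969:1062) echo "AR8132 Fast Ethernet" ;;
        1969:2062) echo "AR8152 v2.0 Fast Ethernet" ;;
        1969:2060) echo "AR8152 v1.1 Fast Ethernet" ;;
        1969:1073) echo "AR8151 v1.0 Gigabit Ethernet" ;;
        1969:1083) echo "AR8151 v2.0 Gigabit Ethernet" ;;
        1969:1091) echo "AR8161 Gigabit Ethernet (alx only)" ;;
        1969:1090) echo "AR8162 Fast Ethernet (alx only)" ;;
        *)         echo "not handled by alx" ;;
    esac
}

lookup_chip 1969:1091   # AR8161, supported only by alx, not atl1c
```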
Signed-off-by: Stevent Li <steventl@....qualcomm.com>
Signed-off-by: Hao-Ran Liu (Joseph Liu) <hao-ran.liu@...onical.com>
Signed-off-by: Cloud Ren <cjren@....qualcomm.com>
Signed-off-by: Joe Perches <joe@...ches.com>
Signed-off-by: Luis R. Rodriguez <rodrigue@....qualcomm.com>
---
MAINTAINERS | 11 +
drivers/net/ethernet/atheros/Kconfig | 42 +-
drivers/net/ethernet/atheros/Makefile | 1 +
drivers/net/ethernet/atheros/alx/Makefile | 3 +
drivers/net/ethernet/atheros/alx/alc_cb.c | 912 ++++++
drivers/net/ethernet/atheros/alx/alc_hw.c | 1087 +++++++
drivers/net/ethernet/atheros/alx/alc_hw.h | 1324 ++++++++
drivers/net/ethernet/atheros/alx/alf_cb.c | 1187 +++++++
drivers/net/ethernet/atheros/alx/alf_hw.c | 918 ++++++
drivers/net/ethernet/atheros/alx/alf_hw.h | 2098 +++++++++++++
drivers/net/ethernet/atheros/alx/alx.h | 670 ++++
drivers/net/ethernet/atheros/alx/alx_ethtool.c | 519 ++++
drivers/net/ethernet/atheros/alx/alx_hwcom.h | 187 ++
drivers/net/ethernet/atheros/alx/alx_main.c | 3899 ++++++++++++++++++++++++
drivers/net/ethernet/atheros/alx/alx_sw.h | 493 +++
15 files changed, 13350 insertions(+), 1 deletions(-)
create mode 100644 drivers/net/ethernet/atheros/alx/Makefile
create mode 100644 drivers/net/ethernet/atheros/alx/alc_cb.c
create mode 100644 drivers/net/ethernet/atheros/alx/alc_hw.c
create mode 100644 drivers/net/ethernet/atheros/alx/alc_hw.h
create mode 100644 drivers/net/ethernet/atheros/alx/alf_cb.c
create mode 100644 drivers/net/ethernet/atheros/alx/alf_hw.c
create mode 100644 drivers/net/ethernet/atheros/alx/alf_hw.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx_ethtool.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_hwcom.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx_main.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_sw.h
diff --git a/MAINTAINERS b/MAINTAINERS
index c9759ca..e4ef2c3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1300,6 +1300,17 @@ W: http://atl1.sourceforge.net
S: Maintained
F: drivers/net/ethernet/atheros/
+ALX ETHERNET DRIVERS
+M: Cloud Ren <cjren@....qualcomm.com>
+M: Stevent Li <steventl@....qualcomm.com>
+M: Wu Ken <kenw@....qualcomm.com>
+M: David Liu <dwliu@....qualcomm.com>
+L: netdev@...r.kernel.org
+L: nic-devel@...lcomm.com
+W: http://wireless.kernel.org/en/users/Drivers/ethernet/alx
+S: Supported
+F: drivers/net/ethernet/atheros/alx/
+
ATM
M: Chas Williams <chas@....nrl.navy.mil>
L: linux-atm-general@...ts.sourceforge.net (moderated for non-subscribers)
diff --git a/drivers/net/ethernet/atheros/Kconfig b/drivers/net/ethernet/atheros/Kconfig
index 1ed886d..a1cfc98 100644
--- a/drivers/net/ethernet/atheros/Kconfig
+++ b/drivers/net/ethernet/atheros/Kconfig
@@ -56,15 +56,55 @@ config ATL1E
will be called atl1e.
config ATL1C
- tristate "Atheros L1C Gigabit Ethernet support (EXPERIMENTAL)"
+ tristate "Atheros L1C Gigabit Ethernet support (DEPRECATED)"
depends on PCI && EXPERIMENTAL
select CRC32
select NET_CORE
select MII
---help---
This driver supports the Atheros L1C gigabit ethernet adapter.
+ This driver is deprecated in favor of the alx (CONFIG_ALX) driver.
+ This driver supports the following chipsets:
+
+ 1969:1063 - AR8131 Gigabit Ethernet
+ 1969:1062 - AR8132 Fast Ethernet (10/100 Mbit/s)
+ 1969:2062 - AR8152 v2.0 Fast Ethernet
+ 1969:2060 - AR8152 v1.1 Fast Ethernet
+ 1969:1073 - AR8151 v1.0 Gigabit Ethernet
+ 1969:1083 - AR8151 v2.0 Gigabit Ethernet
To compile this driver as a module, choose M here. The module
will be called atl1c.
+config ALX
+ tristate "Atheros ALX Gigabit Ethernet support"
+ depends on PCI
+ select CRC32
+ select NET_CORE
+ select MII
+ ---help---
+ This driver supports the Atheros L1C/L1D/L1F gigabit ethernet
+ adapters. The alx driver is intended to completely replace the
+ atl1c driver, with proper support and commitment from Qualcomm
+ Atheros (QCA). Both atl1c and alx support the following chipsets:
+
+ 1969:1063 - AR8131 Gigabit Ethernet
+ 1969:1062 - AR8132 Fast Ethernet (10/100 Mbit/s)
+ 1969:2062 - AR8152 v2.0 Fast Ethernet
+ 1969:2060 - AR8152 v1.1 Fast Ethernet
+ 1969:1073 - AR8151 v1.0 Gigabit Ethernet
+ 1969:1083 - AR8151 v2.0 Gigabit Ethernet
+
+ Only alx supports the following chipsets:
+
+ 1969:1091 - AR8161
+ 1969:1090 - AR8162
+
+ For more information see:
+
+ https://www.linuxfoundation.org/collaborate/workgroups/networking/alx
+
+ To compile this driver as a module, choose M here. The module
+ will be called alx.
+
endif # NET_VENDOR_ATHEROS
diff --git a/drivers/net/ethernet/atheros/Makefile b/drivers/net/ethernet/atheros/Makefile
index e7e76fb..5cf1c65 100644
--- a/drivers/net/ethernet/atheros/Makefile
+++ b/drivers/net/ethernet/atheros/Makefile
@@ -6,3 +6,4 @@ obj-$(CONFIG_ATL1) += atlx/
obj-$(CONFIG_ATL2) += atlx/
obj-$(CONFIG_ATL1E) += atl1e/
obj-$(CONFIG_ATL1C) += atl1c/
+obj-$(CONFIG_ALX) += alx/
diff --git a/drivers/net/ethernet/atheros/alx/Makefile b/drivers/net/ethernet/atheros/alx/Makefile
new file mode 100644
index 0000000..9f607d3
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/Makefile
@@ -0,0 +1,3 @@
+obj-$(CONFIG_ALX) += alx.o
+alx-objs := alx_main.o alx_ethtool.o alc_cb.o alc_hw.o alf_cb.o alf_hw.o
+ccflags-y += -D__CHECK_ENDIAN__
diff --git a/drivers/net/ethernet/atheros/alx/alc_cb.c b/drivers/net/ethernet/atheros/alx/alc_cb.c
new file mode 100644
index 0000000..8c42c3b
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alc_cb.c
@@ -0,0 +1,912 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *
+ */
+
+#include <linux/pci_regs.h>
+#include <linux/mii.h>
+
+#include "alc_hw.h"
+
+
+/* NIC */
+static int alc_identify_nic(struct alx_hw *hw)
+{
+ return 0;
+}
+
+
+/* PHY */
+static int alc_read_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 *phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ if (l1c_read_phy(hw, false, ALX_MDIO_DEV_TYPE_NORM, false, reg_addr,
+ phy_data)) {
+ alx_hw_err(hw, "failed to read phy reg\n");
+ retval = -EINVAL;
+ }
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+
+static int alc_write_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ if (l1c_write_phy(hw, false, ALX_MDIO_DEV_TYPE_NORM, false, reg_addr,
+ phy_data)) {
+ alx_hw_err(hw, "failed to write phy reg\n");
+ retval = -EINVAL;
+ }
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+
+static int alc_init_phy(struct alx_hw *hw)
+{
+ u16 phy_id[2];
+ int retval;
+
+ spin_lock_init(&hw->mdio_lock);
+
+ retval = alc_read_phy_reg(hw, MII_PHYSID1, &phy_id[0]);
+ if (retval)
+ return retval;
+ retval = alc_read_phy_reg(hw, MII_PHYSID2, &phy_id[1]);
+ if (retval)
+ return retval;
+
+ memcpy(&hw->phy_id, phy_id, sizeof(hw->phy_id));
+
+ hw->autoneg_advertised = ALX_LINK_SPEED_1GB_FULL |
+ ALX_LINK_SPEED_10_HALF |
+ ALX_LINK_SPEED_10_FULL |
+ ALX_LINK_SPEED_100_HALF |
+ ALX_LINK_SPEED_100_FULL;
+ return retval;
+}
+
+
+static int alc_reset_phy(struct alx_hw *hw)
+{
+ bool pws_en, az_en, ptp_en;
+ int retval = 0;
+
+ pws_en = az_en = ptp_en = false;
+ CLI_HW_FLAG(PWSAVE_EN);
+ CLI_HW_FLAG(AZ_EN);
+ CLI_HW_FLAG(PTP_EN);
+
+ if (CHK_HW_FLAG(PWSAVE_CAP)) {
+ pws_en = true;
+ SET_HW_FLAG(PWSAVE_EN);
+ }
+
+ if (CHK_HW_FLAG(AZ_CAP)) {
+ az_en = true;
+ SET_HW_FLAG(AZ_EN);
+ }
+
+ if (CHK_HW_FLAG(PTP_CAP)) {
+ ptp_en = true;
+ SET_HW_FLAG(PTP_EN);
+ }
+
+ alx_hw_info(hw, "reset PHY, pws = %d, az = %d, ptp = %d\n",
+ pws_en, az_en, ptp_en);
+
+ if (l1c_reset_phy(hw, pws_en, az_en, ptp_en)) {
+ alx_hw_err(hw, "failed to reset phy\n");
+ retval = -EINVAL;
+ }
+
+ return retval;
+}
+
+
+/* LINK */
+static int alc_setup_phy_link(struct alx_hw *hw, u32 speed, bool autoneg,
+ bool fc)
+{
+ u8 link_cap = 0;
+ int retval = 0;
+
+ alx_hw_info(hw, "speed = 0x%x, autoneg = %d\n", speed, autoneg);
+ if (speed & ALX_LINK_SPEED_1GB_FULL)
+ link_cap |= LX_LC_1000F;
+
+ if (speed & ALX_LINK_SPEED_100_FULL)
+ link_cap |= LX_LC_100F;
+
+ if (speed & ALX_LINK_SPEED_100_HALF)
+ link_cap |= LX_LC_100H;
+
+ if (speed & ALX_LINK_SPEED_10_FULL)
+ link_cap |= LX_LC_10F;
+
+ if (speed & ALX_LINK_SPEED_10_HALF)
+ link_cap |= LX_LC_10H;
+
+ if (l1c_init_phy_spdfc(hw, autoneg, link_cap, fc)) {
+ alx_hw_err(hw, "failed to init phy speed and fc\n");
+ retval = -EINVAL;
+ }
+
+ return retval;
+}
+
+
+static int alc_setup_phy_link_speed(struct alx_hw *hw, u32 speed,
+ bool autoneg, bool fc)
+{
+ /*
+ * Clear autoneg_advertised and set new values based on input link
+ * speed.
+ */
+ hw->autoneg_advertised = 0;
+
+ if (speed & ALX_LINK_SPEED_1GB_FULL)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_1GB_FULL;
+
+ if (speed & ALX_LINK_SPEED_100_FULL)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_100_FULL;
+
+ if (speed & ALX_LINK_SPEED_100_HALF)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_100_HALF;
+
+ if (speed & ALX_LINK_SPEED_10_FULL)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_10_FULL;
+
+ if (speed & ALX_LINK_SPEED_10_HALF)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_10_HALF;
+
+ return alc_setup_phy_link(hw, hw->autoneg_advertised,
+ autoneg, fc);
+}
+
+
+static int alc_check_phy_link(struct alx_hw *hw, u32 *speed, bool *link_up)
+{
+ u16 bmsr, giga;
+ int retval;
+
+ alc_read_phy_reg(hw, MII_BMSR, &bmsr);
+ retval = alc_read_phy_reg(hw, MII_BMSR, &bmsr);
+ if (retval)
+ return retval;
+
+ if (!(bmsr & BMSR_LSTATUS)) {
+ *link_up = false;
+ *speed = ALX_LINK_SPEED_UNKNOWN;
+ return 0;
+ }
+ *link_up = true;
+
+ /* Read PHY Specific Status Register (17) */
+ retval = alc_read_phy_reg(hw, L1C_MII_GIGA_PSSR, &giga);
+ if (retval)
+ return retval;
+
+
+ if (!(giga & L1C_GIGA_PSSR_SPD_DPLX_RESOLVED)) {
+ alx_hw_err(hw, "speed/duplex not resolved\n");
+ return -EINVAL;
+ }
+
+ switch (giga & L1C_GIGA_PSSR_SPEED) {
+ case L1C_GIGA_PSSR_1000MBS:
+ if (giga & L1C_GIGA_PSSR_DPLX)
+ *speed = ALX_LINK_SPEED_1GB_FULL;
+ else
+ alx_hw_err(hw, "1000M half is invalid\n");
+ break;
+ case L1C_GIGA_PSSR_100MBS:
+ if (giga & L1C_GIGA_PSSR_DPLX)
+ *speed = ALX_LINK_SPEED_100_FULL;
+ else
+ *speed = ALX_LINK_SPEED_100_HALF;
+ break;
+ case L1C_GIGA_PSSR_10MBS:
+ if (giga & L1C_GIGA_PSSR_DPLX)
+ *speed = ALX_LINK_SPEED_10_FULL;
+ else
+ *speed = ALX_LINK_SPEED_10_HALF;
+ break;
+ default:
+ *speed = ALX_LINK_SPEED_UNKNOWN;
+ retval = -EINVAL;
+ break;
+ }
+
+ return retval;
+}
+
+
+/*
+ * 1. stop_mac
+ * 2. reset mac & dma by reg1400(MASTER)
+ * 3. control speed/duplex, hash-alg
+ * 4. clock switch setting
+ */
+static int alc_reset_mac(struct alx_hw *hw)
+{
+ int retval = 0;
+
+ if (l1c_reset_mac(hw)) {
+ alx_hw_err(hw, "failed to reset mac\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alc_start_mac(struct alx_hw *hw)
+{
+ u16 en_ctrl = 0;
+ int retval = 0;
+
+ /* set link speed param */
+ switch (hw->link_speed) {
+ case ALX_LINK_SPEED_1GB_FULL:
+ en_ctrl |= LX_MACSPEED_1000;
+ /* fall through */
+ case ALX_LINK_SPEED_100_FULL:
+ case ALX_LINK_SPEED_10_FULL:
+ en_ctrl |= LX_MACDUPLEX_FULL;
+ break;
+ }
+
+ /* set fc param*/
+ switch (hw->cur_fc_mode) {
+ case alx_fc_full:
+ en_ctrl |= LX_FC_RXEN; /* Flow Control RX Enable */
+ en_ctrl |= LX_FC_TXEN; /* Flow Control TX Enable */
+ break;
+ case alx_fc_rx_pause:
+ en_ctrl |= LX_FC_RXEN; /* Flow Control RX Enable */
+ break;
+ case alx_fc_tx_pause:
+ en_ctrl |= LX_FC_TXEN; /* Flow Control TX Enable */
+ break;
+ default:
+ break;
+ }
+
+ if (hw->fc_single_pause)
+ en_ctrl |= LX_SINGLE_PAUSE;
+
+
+ en_ctrl |= LX_FLT_DIRECT; /* RX Enable; and TX Always Enable */
+ en_ctrl |= LX_FLT_BROADCAST; /* RX Broadcast Enable */
+ en_ctrl |= LX_ADD_FCS;
+
+ if (CHK_HW_FLAG(VLANSTRIP_EN))
+ en_ctrl |= LX_VLAN_STRIP;
+
+ if (CHK_HW_FLAG(PROMISC_EN))
+ en_ctrl |= LX_FLT_PROMISC;
+
+ if (CHK_HW_FLAG(MULTIALL_EN))
+ en_ctrl |= LX_FLT_MULTI_ALL;
+
+ if (CHK_HW_FLAG(LOOPBACK_EN))
+ en_ctrl |= LX_LOOPBACK;
+
+ if (l1c_enable_mac(hw, true, en_ctrl)) {
+ alx_hw_err(hw, "failed to start mac\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+/*
+ * 1. stop RXQ (reg15A0) and TXQ (reg1590)
+ * 2. stop MAC (reg1480)
+ */
+static int alc_stop_mac(struct alx_hw *hw)
+{
+ int retval = 0;
+
+ if (l1c_enable_mac(hw, false, 0)) {
+ alx_hw_err(hw, "failed to stop mac\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alc_config_mac(struct alx_hw *hw, u16 rxbuf_sz, u16 rx_qnum,
+ u16 rxring_sz, u16 tx_qnum, u16 txring_sz)
+{
+ u8 *addr;
+
+ u32 txmem_hi, txmem_lo[4];
+
+ u32 rxmem_hi, rfdmem_lo, rrdmem_lo;
+
+ u16 smb_timer, mtu_with_eth, int_mod;
+ bool hash_legacy;
+
+ int i;
+ int retval = 0;
+
+ addr = hw->mac_addr;
+
+ txmem_hi = ALX_DMA_ADDR_HI(hw->tpdma[0]);
+ for (i = 0; i < tx_qnum; i++)
+ txmem_lo[i] = ALX_DMA_ADDR_LO(hw->tpdma[i]);
+
+
+ rxmem_hi = ALX_DMA_ADDR_HI(hw->rfdma[0]);
+ rfdmem_lo = ALX_DMA_ADDR_LO(hw->rfdma[0]);
+ rrdmem_lo = ALX_DMA_ADDR_LO(hw->rrdma[0]);
+
+
+ smb_timer = (u16)hw->smb_timer;
+ mtu_with_eth = hw->mtu + ALX_ETH_LENGTH_OF_HEADER;
+ int_mod = hw->imt;
+
+ hash_legacy = true;
+
+ if (l1c_init_mac(hw, addr, txmem_hi, txmem_lo, tx_qnum, txring_sz,
+ rxmem_hi, rfdmem_lo, rrdmem_lo, rxring_sz, rxbuf_sz,
+ smb_timer, mtu_with_eth, int_mod, hash_legacy)) {
+ alx_hw_err(hw, "failed to config mac\n");
+ retval = -EINVAL;
+ }
+
+ return retval;
+}
+
+
+/**
+ * alc_get_mac_addr
+ * @hw: pointer to hardware structure
+ **/
+static int alc_get_mac_addr(struct alx_hw *hw, u8 *addr)
+{
+ int retval = 0;
+
+ if (l1c_get_perm_macaddr(hw, addr)) {
+ alx_hw_err(hw, "failed to get permanent mac address\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alc_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ int retval = 0;
+
+ if (!CHK_HW_FLAG(L0S_CAP))
+ l0s_en = false;
+
+ if (l0s_en)
+ SET_HW_FLAG(L0S_EN);
+ else
+ CLI_HW_FLAG(L0S_EN);
+
+
+ if (!CHK_HW_FLAG(L1_CAP))
+ l1_en = false;
+
+ if (l1_en)
+ SET_HW_FLAG(L1_EN);
+ else
+ CLI_HW_FLAG(L1_EN);
+
+ if (l1c_reset_pcie(hw, l0s_en, l1_en)) {
+ alx_hw_err(hw, "failed to reset pcie\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alc_config_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ u8 link_stat;
+ int retval = 0;
+
+ if (!CHK_HW_FLAG(L0S_CAP))
+ l0s_en = false;
+
+ if (l0s_en)
+ SET_HW_FLAG(L0S_EN);
+ else
+ CLI_HW_FLAG(L0S_EN);
+
+ if (!CHK_HW_FLAG(L1_CAP))
+ l1_en = false;
+
+ if (l1_en)
+ SET_HW_FLAG(L1_EN);
+ else
+ CLI_HW_FLAG(L1_EN);
+
+ link_stat = hw->link_up ? LX_LC_ALL : 0;
+ if (l1c_enable_aspm(hw, l0s_en, l1_en, link_stat)) {
+ alx_hw_err(hw, "failed to enable aspm\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alc_config_wol(struct alx_hw *hw, u32 wufc)
+{
+ u32 wol = 0;
+
+ /* turn on magic packet event */
+ if (wufc & ALX_WOL_MAGIC) {
+ wol |= L1C_WOL0_MAGIC_EN | L1C_WOL0_PME_MAGIC_EN;
+ if (hw->mac_type == alx_mac_l2cb_v1 &&
+ hw->pci_revid == ALX_REV_ID_AR8152_V1_1) {
+ wol |= L1C_WOL0_PATTERN_EN | L1C_WOL0_PME_PATTERN_EN;
+ }
+ /* a magic packet may be a broadcast, multicast or
+ * unicast frame; this is handled in l1c_powersaving
+ */
+ }
+
+ /* turn on link up event */
+ if (wufc & ALX_WOL_PHY) {
+ wol |= L1C_WOL0_LINK_EN | L1C_WOL0_PME_LINK;
+ /* only link up can wake up */
+ alc_write_phy_reg(hw, L1C_MII_IER, L1C_IER_LINK_UP);
+ }
+
+ alx_mem_w32(hw, L1C_WOL0, wol);
+ return 0;
+}
+
+
+static int alc_config_mac_ctrl(struct alx_hw *hw)
+{
+ u32 mac;
+
+ alx_mem_r32(hw, L1C_MAC_CTRL, &mac);
+
+ /* enable/disable VLAN tag stripping */
+ if (CHK_HW_FLAG(VLANSTRIP_EN))
+ mac |= L1C_MAC_CTRL_VLANSTRIP;
+ else
+ mac &= ~L1C_MAC_CTRL_VLANSTRIP;
+
+ if (CHK_HW_FLAG(PROMISC_EN))
+ mac |= L1C_MAC_CTRL_PROMISC_EN;
+ else
+ mac &= ~L1C_MAC_CTRL_PROMISC_EN;
+
+ if (CHK_HW_FLAG(MULTIALL_EN))
+ mac |= L1C_MAC_CTRL_MULTIALL_EN;
+ else
+ mac &= ~L1C_MAC_CTRL_MULTIALL_EN;
+
+ if (CHK_HW_FLAG(LOOPBACK_EN))
+ mac |= L1C_MAC_CTRL_LPBACK_EN;
+ else
+ mac &= ~L1C_MAC_CTRL_LPBACK_EN;
+
+ alx_mem_w32(hw, L1C_MAC_CTRL, mac);
+ return 0;
+}
+
+
+static int alc_config_pow_save(struct alx_hw *hw, u32 speed, bool wol_en,
+ bool tx_en, bool rx_en, bool pws_en)
+{
+ u8 wire_spd = LX_LC_10H;
+ int retval = 0;
+
+ switch (speed) {
+ case ALX_LINK_SPEED_UNKNOWN:
+ case ALX_LINK_SPEED_10_HALF:
+ wire_spd = LX_LC_10H;
+ break;
+ case ALX_LINK_SPEED_10_FULL:
+ wire_spd = LX_LC_10F;
+ break;
+ case ALX_LINK_SPEED_100_HALF:
+ wire_spd = LX_LC_100H;
+ break;
+ case ALX_LINK_SPEED_100_FULL:
+ wire_spd = LX_LC_100F;
+ break;
+ case ALX_LINK_SPEED_1GB_FULL:
+ wire_spd = LX_LC_1000F;
+ break;
+ }
+
+ if (l1c_powersaving(hw, wire_spd, wol_en, tx_en, rx_en, pws_en)) {
+ alx_hw_err(hw, "failed to set power saving\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+/* RAR, Multicast, VLAN */
+static int alc_set_mac_addr(struct alx_hw *hw, u8 *addr)
+{
+ u32 sta;
+
+ /*
+ * For example, for MAC address 00-0B-6A-F6-00-DC:
+ * STAD0 <- 6AF600DC, STAD1 <- 000B.
+ */
+
+ /* low dword */
+ sta = (((u32)addr[2]) << 24) | (((u32)addr[3]) << 16) |
+ (((u32)addr[4]) << 8) | (((u32)addr[5]));
+ alx_mem_w32(hw, L1C_STAD0, sta);
+
+ /* high dword */
+ sta = (((u32)addr[0]) << 8) | (((u32)addr[1]));
+ alx_mem_w32(hw, L1C_STAD1, sta);
+ return 0;
+}
+
+
+static int alc_set_mc_addr(struct alx_hw *hw, u8 *addr)
+{
+ u32 crc32, bit, reg, mta;
+
+ /*
+ * Compute the hash-table bit for a multicast address:
+ * 1. compute the 32-bit CRC of the address
+ * 2. the CRC is bit-reversed (MSB to LSB)
+ */
+ crc32 = ALX_ETH_CRC(addr, ALX_ETH_LENGTH_OF_ADDRESS);
+
+ /*
+ * The hash table is an array of 2 32-bit registers,
+ * treated as an array of 64 bits: we want to set bit
+ * BitArray[hash_value]. Figure out which register the
+ * bit is in, read it, OR in the new bit, then write
+ * the value back. The register is selected by the most
+ * significant bit of the hash value, and the bit within
+ * that register by the next 5 bits.
+ */
+ reg = (crc32 >> 31) & 0x1;
+ bit = (crc32 >> 26) & 0x1F;
+
+ alx_mem_r32(hw, L1C_HASH_TBL0 + (reg<<2), &mta);
+ mta |= (0x1 << bit);
+ alx_mem_w32(hw, L1C_HASH_TBL0 + (reg<<2), mta);
+ return 0;
+}
+
+
+static int alc_clear_mc_addr(struct alx_hw *hw)
+{
+ alx_mem_w32(hw, L1C_HASH_TBL0, 0);
+ alx_mem_w32(hw, L1C_HASH_TBL1, 0);
+ return 0;
+}
+
+
+/* RTX */
+static int alc_config_tx(struct alx_hw *hw)
+{
+ return 0;
+}
+
+
+/* INTR */
+static int alc_ack_phy_intr(struct alx_hw *hw)
+{
+ u16 isr;
+ return alc_read_phy_reg(hw, L1C_MII_ISR, &isr);
+}
+
+
+static int alc_enable_legacy_intr(struct alx_hw *hw)
+{
+ alx_mem_w32(hw, L1C_ISR, ~((u32) L1C_ISR_DIS));
+ alx_mem_w32(hw, L1C_IMR, hw->intr_mask);
+ return 0;
+}
+
+
+static int alc_disable_legacy_intr(struct alx_hw *hw)
+{
+ alx_mem_w32(hw, L1C_ISR, L1C_ISR_DIS);
+ alx_mem_w32(hw, L1C_IMR, 0);
+ alx_mem_flush(hw);
+ return 0;
+}
+
+
+/*
+ * NV Ram
+ */
+static int alc_check_nvram(struct alx_hw *hw, bool *exist)
+{
+ *exist = false;
+ return 0;
+}
+
+
+static int alc_read_nvram(struct alx_hw *hw, u16 offset, u32 *data)
+{
+ int i;
+ u32 ectrl1, ectrl2, edata;
+ int retval = 0;
+
+ if (offset & 0x3)
+ return -EINVAL; /* address not dword-aligned */
+
+ alx_mem_r32(hw, L1C_EFUSE_CTRL2, &ectrl2);
+ if (!(ectrl2 & L1C_EFUSE_CTRL2_CLK_EN))
+ alx_mem_w32(hw, L1C_EFUSE_CTRL2, ectrl2|L1C_EFUSE_CTRL2_CLK_EN);
+
+ alx_mem_w32(hw, L1C_EFUSE_DATA, 0);
+ ectrl1 = FIELDL(L1C_EFUSE_CTRL_ADDR, offset);
+ alx_mem_w32(hw, L1C_EFUSE_CTRL, ectrl1);
+
+ for (i = 0; i < 10; i++) {
+ udelay(100);
+ alx_mem_r32(hw, L1C_EFUSE_CTRL, &ectrl1);
+ if (ectrl1 & L1C_EFUSE_CTRL_FLAG)
+ break;
+ }
+ if (ectrl1 & L1C_EFUSE_CTRL_FLAG) {
+ alx_mem_r32(hw, L1C_EFUSE_CTRL, &ectrl1);
+ alx_mem_r32(hw, L1C_EFUSE_DATA, &edata);
+ *data = LX_SWAP_DW((ectrl1 << 16) | (edata >> 16));
+ return retval;
+ }
+
+ if (!(ectrl2 & L1C_EFUSE_CTRL2_CLK_EN))
+ alx_mem_w32(hw, L1C_EFUSE_CTRL2, ectrl2);
+
+ return retval;
+}
+
+
+static int alc_write_nvram(struct alx_hw *hw, u16 offset, u32 data)
+{
+ return 0;
+}
+
+
+/* fc */
+static int alc_get_fc_mode(struct alx_hw *hw, enum alx_fc_mode *mode)
+{
+ u16 bmsr, giga;
+ int i;
+ int retval = 0;
+
+ for (i = 0; i < ALX_MAX_SETUP_LNK_CYCLE; i++) {
+ alc_read_phy_reg(hw, MII_BMSR, &bmsr);
+ alc_read_phy_reg(hw, MII_BMSR, &bmsr);
+ if (bmsr & BMSR_LSTATUS) {
+ /* Read phy Specific Status Register (17) */
+ retval = alc_read_phy_reg(hw, L1C_MII_GIGA_PSSR, &giga);
+ if (retval)
+ return retval;
+
+ if (!(giga & L1C_GIGA_PSSR_SPD_DPLX_RESOLVED)) {
+ alx_hw_err(hw,
+ "speed/duplex not resolved\n");
+ return -EINVAL;
+ }
+
+ if ((giga & L1C_GIGA_PSSR_FC_TXEN) &&
+ (giga & L1C_GIGA_PSSR_FC_RXEN)) {
+ *mode = alx_fc_full;
+ } else if (giga & L1C_GIGA_PSSR_FC_TXEN) {
+ *mode = alx_fc_tx_pause;
+ } else if (giga & L1C_GIGA_PSSR_FC_RXEN) {
+ *mode = alx_fc_rx_pause;
+ } else {
+ *mode = alx_fc_none;
+ }
+ break;
+ }
+ mdelay(100);
+ }
+
+ if (i == ALX_MAX_SETUP_LNK_CYCLE) {
+ alx_hw_err(hw, "failed to get flow control mode\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alc_config_fc(struct alx_hw *hw)
+{
+ u32 mac;
+ int retval = 0;
+
+ if (hw->disable_fc_autoneg) {
+ hw->fc_was_autonegged = false;
+ hw->cur_fc_mode = hw->req_fc_mode;
+ } else {
+ hw->fc_was_autonegged = true;
+ retval = alc_get_fc_mode(hw, &hw->cur_fc_mode);
+ if (retval)
+ return retval;
+ }
+
+ alx_mem_r32(hw, L1C_MAC_CTRL, &mac);
+
+ switch (hw->cur_fc_mode) {
+ case alx_fc_none: /* 0 */
+ mac &= ~(L1C_MAC_CTRL_RXFC_EN | L1C_MAC_CTRL_TXFC_EN);
+ break;
+ case alx_fc_rx_pause: /* 1 */
+ mac &= ~L1C_MAC_CTRL_TXFC_EN;
+ mac |= L1C_MAC_CTRL_RXFC_EN;
+ break;
+ case alx_fc_tx_pause: /* 2 */
+ mac |= L1C_MAC_CTRL_TXFC_EN;
+ mac &= ~L1C_MAC_CTRL_RXFC_EN;
+ break;
+ case alx_fc_full: /* 3 */
+ case alx_fc_default: /* 4 */
+ mac |= (L1C_MAC_CTRL_TXFC_EN | L1C_MAC_CTRL_RXFC_EN);
+ break;
+ default:
+ alx_hw_err(hw, "flow control param set incorrectly\n");
+ return -EINVAL;
+ }
+
+ alx_mem_w32(hw, L1C_MAC_CTRL, mac);
+ return retval;
+}
+
+
+/* ethtool */
+static int alc_get_ethtool_regs(struct alx_hw *hw, void *buff)
+{
+ int i;
+ u32 *val = buff;
+ static const int reg[] = {
+ /* 0 */
+ L1C_LNK_CAP, L1C_PMCTRL, L1C_HALFD, L1C_SLD, L1C_MASTER,
+ L1C_MANU_TIMER, L1C_IRQ_MODU_TIMER, L1C_PHY_CTRL, L1C_LNK_CTRL,
+ L1C_MAC_STS,
+
+ /* 10 */
+ L1C_MDIO, L1C_SERDES, L1C_MAC_CTRL, L1C_GAP, L1C_STAD0,
+ L1C_STAD1, L1C_HASH_TBL0, L1C_HASH_TBL1, L1C_RXQ0, L1C_RXQ1,
+
+ /* 20 */
+ L1C_RXQ2, L1C_RXQ3, L1C_TXQ0, L1C_TXQ1, L1C_TXQ2, L1C_MTU,
+ L1C_WOL0, L1C_WOL1, L1C_WOL2,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(reg); i++)
+ alx_mem_r32(hw, reg[i], &val[i]);
+ return 0;
+}
+
+
+/******************************************************************************/
+static int alc_get_hw_capabilities(struct alx_hw *hw)
+{
+ /*
+ * Due to a hardware erratum on some platforms, simply
+ * disable the L0s/L1 feature when the link is connected.
+ */
+ CLI_HW_FLAG(L0S_CAP);
+ CLI_HW_FLAG(L1_CAP);
+
+ if ((hw->mac_type == alx_mac_l1c) ||
+ (hw->mac_type == alx_mac_l1d_v1) ||
+ (hw->mac_type == alx_mac_l1d_v2))
+ SET_HW_FLAG(GIGA_CAP);
+
+ SET_HW_FLAG(PWSAVE_CAP);
+ return 0;
+}
+
+
+/* alc_set_hw_infos */
+static int alc_set_hw_infos(struct alx_hw *hw)
+{
+ hw->rxstat_reg = 0x1700;
+ hw->rxstat_sz = 0x60;
+ hw->txstat_reg = 0x1760;
+ hw->txstat_sz = 0x68;
+
+ hw->rx_prod_reg[0] = L1C_RFD_PIDX;
+ hw->rx_cons_reg[0] = L1C_RFD_CIDX;
+
+ hw->tx_prod_reg[0] = L1C_TPD_PRI0_PIDX;
+ hw->tx_cons_reg[0] = L1C_TPD_PRI0_CIDX;
+ hw->tx_prod_reg[1] = L1C_TPD_PRI1_PIDX;
+ hw->tx_cons_reg[1] = L1C_TPD_PRI1_CIDX;
+
+ hw->hwreg_sz = 0x80;
+ hw->eeprom_sz = 0;
+
+ return 0;
+}
+
+
+/**
+ * alc_init_hw_callbacks - Inits func ptrs and MAC type
+ * @hw: pointer to hardware structure
+ **/
+int alc_init_hw_callbacks(struct alx_hw *hw)
+{
+ /* NIC */
+ hw->cbs.identify_nic = &alc_identify_nic;
+ /* MAC*/
+ hw->cbs.reset_mac = &alc_reset_mac;
+ hw->cbs.start_mac = &alc_start_mac;
+ hw->cbs.stop_mac = &alc_stop_mac;
+ hw->cbs.config_mac = &alc_config_mac;
+ hw->cbs.get_mac_addr = &alc_get_mac_addr;
+ hw->cbs.set_mac_addr = &alc_set_mac_addr;
+ hw->cbs.set_mc_addr = &alc_set_mc_addr;
+ hw->cbs.clear_mc_addr = &alc_clear_mc_addr;
+
+ /* PHY */
+ hw->cbs.init_phy = &alc_init_phy;
+ hw->cbs.reset_phy = &alc_reset_phy;
+ hw->cbs.read_phy_reg = &alc_read_phy_reg;
+ hw->cbs.write_phy_reg = &alc_write_phy_reg;
+ hw->cbs.check_phy_link = &alc_check_phy_link;
+ hw->cbs.setup_phy_link = &alc_setup_phy_link;
+ hw->cbs.setup_phy_link_speed = &alc_setup_phy_link_speed;
+
+ /* Interrupt */
+ hw->cbs.ack_phy_intr = &alc_ack_phy_intr;
+ hw->cbs.enable_legacy_intr = &alc_enable_legacy_intr;
+ hw->cbs.disable_legacy_intr = &alc_disable_legacy_intr;
+
+ /* Configure */
+ hw->cbs.config_tx = &alc_config_tx;
+ hw->cbs.config_fc = &alc_config_fc;
+ hw->cbs.config_aspm = &alc_config_aspm;
+ hw->cbs.config_wol = &alc_config_wol;
+ hw->cbs.config_mac_ctrl = &alc_config_mac_ctrl;
+ hw->cbs.config_pow_save = &alc_config_pow_save;
+ hw->cbs.reset_pcie = &alc_reset_pcie;
+
+ /* NVRam */
+ hw->cbs.check_nvram = &alc_check_nvram;
+ hw->cbs.read_nvram = &alc_read_nvram;
+ hw->cbs.write_nvram = &alc_write_nvram;
+
+ /* Others */
+ hw->cbs.get_ethtool_regs = alc_get_ethtool_regs;
+
+ /* record hw capabilities in hw->flags */
+ alc_get_hw_capabilities(hw);
+ alc_set_hw_infos(hw);
+
+ alx_hw_info(hw, "HW Flags = 0x%x\n", hw->flags);
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/atheros/alx/alc_hw.c b/drivers/net/ethernet/atheros/alx/alc_hw.c
new file mode 100644
index 0000000..b0eb72c
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alc_hw.c
@@ -0,0 +1,1087 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/pci_regs.h>
+#include <linux/mii.h>
+
+#include "alc_hw.h"
+
+
+
+/*
+ * get permanent mac address
+ * 0: success
+ * non-0:fail
+ */
+u16 l1c_get_perm_macaddr(struct alx_hw *hw, u8 *addr)
+{
+ u32 val, otp_ctrl, otp_flag, mac0, mac1;
+ u16 i;
+ u16 phy_val;
+
+ /* get it from register first */
+ alx_mem_r32(hw, L1C_STAD0, &mac0);
+ alx_mem_r32(hw, L1C_STAD1, &mac1);
+
+ *(u32 *)(addr + 2) = LX_SWAP_DW(mac0);
+ *(u16 *)addr = (u16)LX_SWAP_W((u16)mac1);
+
+ if (macaddr_valid(addr))
+ return 0;
+
+ alx_mem_r32(hw, L1C_TWSI_DBG, &val);
+ alx_mem_r32(hw, L1C_EFUSE_CTRL2, &otp_ctrl);
+ alx_mem_r32(hw, L1C_MASTER, &otp_flag);
+
+ if ((val & L1C_TWSI_DBG_DEV_EXIST) != 0 ||
+ (otp_flag & L1C_MASTER_OTP_FLG) != 0) {
+ /* non-volatile memory exists, do software autoload */
+ /* enable OTP_CLK for L1C */
+ if (hw->pci_devid == L1C_DEV_ID ||
+ hw->pci_devid == L2C_DEV_ID) {
+ if ((otp_ctrl & L1C_EFUSE_CTRL2_CLK_EN) != 0) {
+ alx_mem_w32(hw, L1C_EFUSE_CTRL2,
+ otp_ctrl | L1C_EFUSE_CTRL2_CLK_EN);
+ udelay(5);
+ }
+ }
+ /* raise voltage temporarily for L2CB/L1D */
+ if (hw->pci_devid == L2CB_DEV_ID ||
+ hw->pci_devid == L2CB2_DEV_ID) {
+ /* clear bit[7] of debugport 00 */
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_ANACTRL,
+ &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_ANACTRL,
+ phy_val & ~L1C_ANACTRL_HB_EN);
+ /* set bit[3] of debugport 3B */
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_VOLT_CTRL,
+ &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_VOLT_CTRL,
+ phy_val | L1C_VOLT_CTRL_SWLOWEST);
+ udelay(20);
+ }
+ /* do load */
+ alx_mem_r32(hw, L1C_SLD, &val);
+ alx_mem_w32(hw, L1C_SLD, val | L1C_SLD_START);
+ for (i = 0; i < L1C_SLD_MAX_TO; i++) {
+ mdelay(1);
+ alx_mem_r32(hw, L1C_SLD, &val);
+ if ((val & L1C_SLD_START) == 0)
+ break;
+ }
+ /* disable OTP_CLK for L1C */
+ if (hw->pci_devid == L1C_DEV_ID ||
+ hw->pci_devid == L2C_DEV_ID) {
+ alx_mem_w32(hw, L1C_EFUSE_CTRL2,
+ otp_ctrl & ~L1C_EFUSE_CTRL2_CLK_EN);
+ udelay(5);
+ }
+ /* low voltage */
+ if (hw->pci_devid == L2CB_DEV_ID ||
+ hw->pci_devid == L2CB2_DEV_ID) {
+ /* set bit[7] of debugport 00 */
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_ANACTRL,
+ &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_ANACTRL,
+ phy_val | L1C_ANACTRL_HB_EN);
+ /* clear bit[3] of debugport 3B */
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_VOLT_CTRL,
+ &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_VOLT_CTRL,
+ phy_val & ~L1C_VOLT_CTRL_SWLOWEST);
+ udelay(20);
+ }
+ if (i == L1C_SLD_MAX_TO)
+ goto out;
+ } else {
+ if (hw->pci_devid == L1C_DEV_ID ||
+ hw->pci_devid == L2C_DEV_ID) {
+ alx_mem_w32(hw, L1C_EFUSE_CTRL2,
+ otp_ctrl & ~L1C_EFUSE_CTRL2_CLK_EN);
+ udelay(5);
+ }
+ }
+
+ alx_mem_r32(hw, L1C_STAD0, &mac0);
+ alx_mem_r32(hw, L1C_STAD1, &mac1);
+
+ *(u32 *)(addr + 2) = LX_SWAP_DW(mac0);
+ *(u16 *)addr = (u16)LX_SWAP_W((u16)mac1);
+
+ if (macaddr_valid(addr))
+ return 0;
+
+out:
+ return LX_ERR_ALOAD;
+}
+
+/*
+ * reset mac & dma
+ * return
+ * 0: success
+ * non-0:fail
+ */
+u16 l1c_reset_mac(struct alx_hw *hw)
+{
+ u32 val, mrst_val;
+ u16 ret;
+ u16 i;
+
+ /* disable all interrupts, RXQ/TXQ */
+ alx_mem_w32(hw, L1C_IMR, 0);
+ alx_mem_w32(hw, L1C_ISR, L1C_ISR_DIS);
+
+ ret = l1c_enable_mac(hw, false, 0);
+ if (ret != 0)
+ return ret;
+ /* reset whole mac safely. OOB is meaningful for L1D only */
+ alx_mem_r32(hw, L1C_MASTER, &mrst_val);
+ mrst_val |= L1C_MASTER_OOB_DIS;
+ alx_mem_w32(hw, L1C_MASTER, mrst_val | L1C_MASTER_DMA_MAC_RST);
+
+ /* make sure it's idle */
+ for (i = 0; i < L1C_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1C_MASTER, &val);
+ if ((val & L1C_MASTER_DMA_MAC_RST) == 0)
+ break;
+ udelay(20);
+ }
+ if (i == L1C_DMA_MAC_RST_TO)
+ return LX_ERR_RSTMAC;
+ /* keep the old value */
+ alx_mem_w32(hw, L1C_MASTER, mrst_val & ~L1C_MASTER_DMA_MAC_RST);
+
+ /* driver control speed/duplex, hash-alg */
+ alx_mem_r32(hw, L1C_MAC_CTRL, &val);
+ alx_mem_w32(hw, L1C_MAC_CTRL, val | L1C_MAC_CTRL_WOLSPED_SWEN);
+
+ /* clk switch setting */
+ alx_mem_r32(hw, L1C_SERDES, &val);
+ switch (hw->pci_devid) {
+ case L2CB_DEV_ID:
+ alx_mem_w32(hw, L1C_SERDES, val & ~L1C_SERDES_PHYCLK_SLWDWN);
+ break;
+ case L2CB2_DEV_ID:
+ case L1D2_DEV_ID:
+ alx_mem_w32(hw, L1C_SERDES,
+ val | L1C_SERDES_PHYCLK_SLWDWN |
+ L1C_SERDES_MACCLK_SLWDWN);
+ break;
+ default:
+ /* the default value on other chips is OFF */;
+ }
+
+ return 0;
+}
+
+/* reset phy
+ * return
+ * 0: success
+ * non-0:fail
+ */
+u16 l1c_reset_phy(struct alx_hw *hw, bool pws_en, bool az_en, bool ptp_en)
+{
+ u32 val;
+ u16 i, phy_val;
+
+ ptp_en = ptp_en; /* silence unused-parameter warning */
+
+ /* reset PHY core */
+ alx_mem_r32(hw, L1C_PHY_CTRL, &val);
+ val &= ~(L1C_PHY_CTRL_DSPRST_OUT | L1C_PHY_CTRL_IDDQ |
+ L1C_PHY_CTRL_GATE_25M | L1C_PHY_CTRL_POWER_DOWN |
+ L1C_PHY_CTRL_CLS);
+ val |= L1C_PHY_CTRL_RST_ANALOG;
+
+ if (pws_en)
+ val |= (L1C_PHY_CTRL_HIB_PULSE | L1C_PHY_CTRL_HIB_EN);
+ else
+ val &= ~(L1C_PHY_CTRL_HIB_PULSE | L1C_PHY_CTRL_HIB_EN);
+
+ alx_mem_w32(hw, L1C_PHY_CTRL, val);
+ udelay(10); /* 5us is enough */
+ alx_mem_w32(hw, L1C_PHY_CTRL, val | L1C_PHY_CTRL_DSPRST_OUT);
+
+ /* delay 800us */
+ for (i = 0; i < L1C_PHY_CTRL_DSPRST_TO; i++)
+ udelay(10);
+
+ /* switch clock */
+ if (hw->pci_devid == L2CB_DEV_ID) {
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_CFGLPSPD, &phy_val);
+ /* clear bit13 */
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_CFGLPSPD,
+ phy_val & ~L1C_CFGLPSPD_RSTCNT_CLK125SW);
+ }
+
+ /* fix tx-half-amp issue */
+ if (hw->pci_devid == L2CB_DEV_ID || hw->pci_devid == L2CB2_DEV_ID) {
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_CABLE1TH_DET, &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_CABLE1TH_DET,
+ phy_val | L1C_CABLE1TH_DET_EN); /* set bit15 */
+ }
+
+ if (pws_en) {
+ /* clear bit[3] of debugport 3B to 0,
+ * lower voltage to save power */
+ if (hw->pci_devid == L2CB_DEV_ID ||
+ hw->pci_devid == L2CB2_DEV_ID) {
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_VOLT_CTRL,
+ &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_VOLT_CTRL,
+ phy_val & ~L1C_VOLT_CTRL_SWLOWEST);
+ }
+ /* power saving config */
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_LEGCYPS,
+ (hw->pci_devid == L1D_DEV_ID ||
+ hw->pci_devid == L1D2_DEV_ID) ?
+ L1D_LEGCYPS_DEF : L1C_LEGCYPS_DEF);
+ /* hib */
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_SYSMODCTRL,
+ L1C_SYSMODCTRL_IECHOADJ_DEF);
+ } else {
+ /* disable power saving */
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_LEGCYPS, &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_LEGCYPS,
+ phy_val & ~L1C_LEGCYPS_EN);
+ /* disable hibernate */
+ l1c_read_phydbg(hw, true, L1C_MIIDBG_HIBNEG, &phy_val);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_HIBNEG,
+ phy_val & ~L1C_HIBNEG_PSHIB_EN);
+ }
+
+ /* az is only for l2cbv2 / l1dv1 / l1dv2 */
+ if (hw->pci_devid == L1D_DEV_ID ||
+ hw->pci_devid == L1D2_DEV_ID ||
+ hw->pci_devid == L2CB2_DEV_ID) {
+ if (az_en) {
+ switch (hw->pci_devid) {
+ case L2CB2_DEV_ID:
+ alx_mem_w32(hw, L1C_LPI_DECISN_TIMER,
+ L1C_LPI_DESISN_TIMER_L2CB);
+ /* az enable 100M */
+ l1c_write_phy(hw, true, L1C_MIIEXT_ANEG, true,
+ L1C_MIIEXT_LOCAL_EEEADV,
+ L1C_LOCAL_EEEADV_100BT);
+ /* az long wake threshold */
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL5,
+ L1C_AZCTRL5_WAKE_LTH_L2CB);
+ /* az short wake threshold */
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL4,
+ L1C_AZCTRL4_WAKE_STH_L2CB);
+
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_CLDCTRL3,
+ L1C_CLDCTRL3_L2CB);
+
+ /* bit7 must be 0, otherwise ping fails */
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_CLDCTRL7,
+ L1C_CLDCTRL7_L2CB);
+
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL2,
+ L1C_AZCTRL2_L2CB);
+ break;
+
+ case L1D_DEV_ID:
+ l1c_write_phydbg(hw, true,
+ L1C_MIIDBG_AZ_ANADECT, L1C_AZ_ANADECT_DEF);
+ phy_val = hw->long_cable ? L1C_CLDCTRL3_L1D :
+ (L1C_CLDCTRL3_L1D &
+ ~(L1C_CLDCTRL3_BP_CABLE1TH_DET_GT |
+ L1C_CLDCTRL3_AZ_DISAMP));
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_CLDCTRL3, phy_val);
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL,
+ L1C_AZCTRL_L1D);
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL2,
+ L1C_AZCTRL2_L2CB);
+ break;
+
+ case L1D2_DEV_ID:
+ l1c_write_phydbg(hw, true,
+ L1C_MIIDBG_AZ_ANADECT,
+ L1C_AZ_ANADECT_DEF);
+ phy_val = hw->long_cable ? L1C_CLDCTRL3_L1D :
+ (L1C_CLDCTRL3_L1D &
+ ~L1C_CLDCTRL3_BP_CABLE1TH_DET_GT);
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_CLDCTRL3, phy_val);
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL,
+ L1C_AZCTRL_L1D);
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL2,
+ L1C_AZCTRL2_L1D2);
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_AZCTRL6,
+ L1C_AZCTRL6_L1D2);
+ break;
+ }
+ } else {
+ alx_mem_r32(hw, L1C_LPI_CTRL, &val);
+ alx_mem_w32(hw, L1C_LPI_CTRL, val & ~L1C_LPI_CTRL_EN);
+ l1c_write_phy(hw, true, L1C_MIIEXT_ANEG, true,
+ L1C_MIIEXT_LOCAL_EEEADV, 0);
+ l1c_write_phy(hw, true, L1C_MIIEXT_PCS, true,
+ L1C_MIIEXT_CLDCTRL3, L1C_CLDCTRL3_L2CB);
+ }
+ }
+
+ /* other debug ports that need to be set */
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_ANACTRL, L1C_ANACTRL_DEF);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_SRDSYSMOD, L1C_SRDSYSMOD_DEF);
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_TST10BTCFG, L1C_TST10BTCFG_DEF);
+ /* set bit7 to work around the L1c/L2c/L1d/L2cb link-fail-inhibit
+ * timer issue seen in the L1c UNH-IOL test failure */
+ l1c_write_phydbg(hw, true, L1C_MIIDBG_TST100BTCFG,
+ L1C_TST100BTCFG_DEF | L1C_TST100BTCFG_LITCH_EN);
+
+ /* set phy interrupt mask */
+ l1c_write_phy(hw, false, 0, true,
+ L1C_MII_IER, L1C_IER_LINK_UP | L1C_IER_LINK_DOWN);
+
+ return 0;
+}
+
+
+/* reset pcie
+ * only resets pcie-related registers (pci command, clk, aspm...)
+ * return
+ * 0:success
+ * non-0:fail
+ */
+u16 l1c_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ u32 val;
+ u16 val16;
+ u16 ret;
+
+ /* Workaround for PCI problem when BIOS sets MMRBC incorrectly. */
+ alx_cfg_r16(hw, PCI_COMMAND, &val16);
+ if ((val16 & (PCI_COMMAND_IO |
+ PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER)) == 0 ||
+ (val16 & PCI_COMMAND_INTX_DISABLE) != 0) {
+ val16 = (u16)((val16 | (PCI_COMMAND_IO |
+ PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER))
+ & ~PCI_COMMAND_INTX_DISABLE);
+ alx_cfg_w16(hw, PCI_COMMAND, val16);
+ }
+
+ /* Clear any PowerSaving Settings */
+ alx_cfg_w16(hw, L1C_PM_CSR, 0);
+
+ /* disable write attribute for some registers */
+ alx_mem_r32(hw, L1C_LTSSM_CTRL, &val);
+ alx_mem_w32(hw, L1C_LTSSM_CTRL, val & ~L1C_LTSSM_WRO_EN);
+
+ /* mask some pcie error bits */
+ alx_mem_r32(hw, L1C_UE_SVRT, &val);
+ val &= ~(L1C_UE_SVRT_DLPROTERR | L1C_UE_SVRT_FCPROTERR);
+ alx_mem_w32(hw, L1C_UE_SVRT, val);
+
+ /* pclk */
+ alx_mem_r32(hw, L1C_MASTER, &val);
+ val &= ~L1C_MASTER_PCLKSEL_SRDS;
+ alx_mem_w32(hw, L1C_MASTER, val);
+
+ /* set bit 2 of reg1000, only used by L1c/L2c for WOL */
+ if (hw->pci_devid == L1C_DEV_ID || hw->pci_devid == L2C_DEV_ID) {
+ alx_mem_r32(hw, L1C_PPHY_MISC1, &val);
+ alx_mem_w32(hw, L1C_PPHY_MISC1, val | L1C_PPHY_MISC1_RCVDET);
+ } else { /* other devices should set bit 5 of reg1400 for WOL */
+ if ((val & L1C_MASTER_WAKEN_25M) == 0)
+ alx_mem_w32(hw, L1C_MASTER, val | L1C_MASTER_WAKEN_25M);
+ }
+ /* l2cb 1.0 */
+ if (hw->pci_devid == L2CB_DEV_ID && hw->pci_revid == L2CB_V10) {
+ alx_mem_r32(hw, L1C_PPHY_MISC2, &val);
+ FIELD_SETL(val, L1C_PPHY_MISC2_L0S_TH,
+ L1C_PPHY_MISC2_L0S_TH_L2CB1);
+ FIELD_SETL(val, L1C_PPHY_MISC2_CDR_BW,
+ L1C_PPHY_MISC2_CDR_BW_L2CB1);
+ alx_mem_w32(hw, L1C_PPHY_MISC2, val);
+ /* extend L1 sync timer; this uses more power,
+ * only for L2cb v1.0 */
+ if (!hw->aps_en) {
+ alx_mem_r32(hw, L1C_LNK_CTRL, &val);
+ alx_mem_w32(hw, L1C_LNK_CTRL,
+ val | L1C_LNK_CTRL_EXTSYNC);
+ }
+ }
+
+ /* l2cbv1.x & l1dv1.x */
+ if (hw->pci_devid == L2CB_DEV_ID || hw->pci_devid == L1D_DEV_ID) {
+ alx_mem_r32(hw, L1C_PMCTRL, &val);
+ alx_mem_w32(hw, L1C_PMCTRL, val | L1C_PMCTRL_L0S_BUFSRX_EN);
+ /* clear vendor message for L1d & L2cb */
+ alx_mem_r32(hw, L1C_DMA_DBG, &val);
+ alx_mem_w32(hw, L1C_DMA_DBG, val & ~L1C_DMA_DBG_VENDOR_MSG);
+ }
+
+ /* hi-tx-perf */
+ if (hw->hi_txperf) {
+ alx_mem_r32(hw, L1C_PPHY_MISC1, &val);
+ FIELD_SETL(val, L1C_PPHY_MISC1_NFTS,
+ L1C_PPHY_MISC1_NFTS_HIPERF);
+ alx_mem_w32(hw, L1C_PPHY_MISC1, val);
+ }
+ /* l0s, l1 setting */
+ ret = l1c_enable_aspm(hw, l0s_en, l1_en, 0);
+
+ udelay(10);
+
+ return ret;
+}
+
+
+/* disable/enable MAC/RXQ/TXQ
+ * en
+ * true:enable
+ * false:disable
+ * return
+ * 0:success
+ * non-0:fail
+ */
+u16 l1c_enable_mac(struct alx_hw *hw, bool en, u16 en_ctrl)
+{
+ u32 rxq, txq, mac, val;
+ u16 i;
+
+ alx_mem_r32(hw, L1C_RXQ0, &rxq);
+ alx_mem_r32(hw, L1C_TXQ0, &txq);
+ alx_mem_r32(hw, L1C_MAC_CTRL, &mac);
+
+ if (en) { /* enable */
+ alx_mem_w32(hw, L1C_RXQ0, rxq | L1C_RXQ0_EN);
+ alx_mem_w32(hw, L1C_TXQ0, txq | L1C_TXQ0_EN);
+ if ((en_ctrl & LX_MACSPEED_1000) != 0) {
+ FIELD_SETL(mac, L1C_MAC_CTRL_SPEED,
+ L1C_MAC_CTRL_SPEED_1000);
+ } else {
+ FIELD_SETL(mac, L1C_MAC_CTRL_SPEED,
+ L1C_MAC_CTRL_SPEED_10_100);
+ }
+
+ test_set_or_clear(mac, en_ctrl, LX_MACDUPLEX_FULL,
+ L1C_MAC_CTRL_FULLD);
+
+ /* rx filter */
+ test_set_or_clear(mac, en_ctrl, LX_FLT_PROMISC,
+ L1C_MAC_CTRL_PROMISC_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FLT_MULTI_ALL,
+ L1C_MAC_CTRL_MULTIALL_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FLT_BROADCAST,
+ L1C_MAC_CTRL_BRD_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FLT_DIRECT,
+ L1C_MAC_CTRL_RX_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FC_TXEN,
+ L1C_MAC_CTRL_TXFC_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FC_RXEN,
+ L1C_MAC_CTRL_RXFC_EN);
+ test_set_or_clear(mac, en_ctrl, LX_VLAN_STRIP,
+ L1C_MAC_CTRL_VLANSTRIP);
+ test_set_or_clear(mac, en_ctrl, LX_LOOPBACK,
+ L1C_MAC_CTRL_LPBACK_EN);
+ test_set_or_clear(mac, en_ctrl, LX_SINGLE_PAUSE,
+ L1C_MAC_CTRL_SPAUSE_EN);
+ test_set_or_clear(mac, en_ctrl, LX_ADD_FCS,
+ (L1C_MAC_CTRL_PCRCE | L1C_MAC_CTRL_CRCE));
+
+ alx_mem_w32(hw, L1C_MAC_CTRL, mac | L1C_MAC_CTRL_TX_EN);
+ } else { /* disable mac */
+ alx_mem_w32(hw, L1C_RXQ0, rxq & ~L1C_RXQ0_EN);
+ alx_mem_w32(hw, L1C_TXQ0, txq & ~L1C_TXQ0_EN);
+
+ /* wait for rxq/txq to become idle */
+ for (i = 0; i < L1C_DMA_MAC_RST_TO; i++) {/* wait at most 1ms */
+ alx_mem_r32(hw, L1C_MAC_STS, &val);
+ if ((val & (L1C_MAC_STS_TXQ_BUSY |
+ L1C_MAC_STS_RXQ_BUSY)) == 0) {
+ break;
+ }
+ udelay(20);
+ }
+ if (L1C_DMA_MAC_RST_TO == i)
+ return LX_ERR_RSTMAC;
+ /* stop mac tx/rx */
+ alx_mem_w32(hw, L1C_MAC_CTRL,
+ mac & ~(L1C_MAC_CTRL_RX_EN | L1C_MAC_CTRL_TX_EN));
+
+ for (i = 0; i < L1C_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1C_MAC_STS, &val);
+ if ((val & L1C_MAC_STS_IDLE) == 0)
+ break;
+ udelay(10);
+ }
+ if (L1C_DMA_MAC_RST_TO == i)
+ return LX_ERR_RSTMAC;
+ }
+
+ return 0;
+}
+
+
+/* enable/disable aspm support
+ * that will change settings for phy/mac/pcie
+ */
+u16 l1c_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en, u8 lnk_stat)
+{
+ u32 pmctrl;
+ bool linkon;
+
+ linkon = (lnk_stat == LX_LC_10H || lnk_stat == LX_LC_10F ||
+ lnk_stat == LX_LC_100H || lnk_stat == LX_LC_100F ||
+ lnk_stat == LX_LC_1000F) ? true : false;
+
+ alx_mem_r32(hw, L1C_PMCTRL, &pmctrl);
+ pmctrl &= ~(L1C_PMCTRL_L0S_EN |
+ L1C_PMCTRL_L1_EN |
+ L1C_PMCTRL_ASPM_FCEN);
+ FIELD_SETL(pmctrl, L1C_PMCTRL_LCKDET_TIMER,
+ L1C_PMCTRL_LCKDET_TIMER_DEF);
+
+ /* l1 timer */
+ if (hw->pci_devid == L2CB2_DEV_ID || hw->pci_devid == L1D2_DEV_ID) {
+ pmctrl &= ~L1D_PMCTRL_TXL1_AFTER_L0S;
+ FIELD_SETL(pmctrl, L1D_PMCTRL_L1_TIMER,
+ (lnk_stat == LX_LC_100H ||
+ lnk_stat == LX_LC_100F ||
+ lnk_stat == LX_LC_1000F) ?
+ L1D_PMCTRL_L1_TIMER_16US : 1);
+ } else {
+ FIELD_SETL(pmctrl, L1C_PMCTRL_L1_TIMER,
+ (lnk_stat == LX_LC_100H ||
+ lnk_stat == LX_LC_100F ||
+ lnk_stat == LX_LC_1000F) ?
+ ((hw->pci_devid == L2CB_DEV_ID) ?
+ L1C_PMCTRL_L1_TIMER_L2CB1 : L1C_PMCTRL_L1_TIMER_DEF
+ ) : 1);
+ }
+ if (l0s_en) { /* on/off l0s only if bios/system enable l0s */
+ pmctrl |= (L1C_PMCTRL_L0S_EN | L1C_PMCTRL_ASPM_FCEN);
+ }
+ if (l1_en) { /* on/off l1 only if bios/system enable l1 */
+ pmctrl |= (L1C_PMCTRL_L1_EN | L1C_PMCTRL_ASPM_FCEN);
+ }
+
+ if (hw->pci_devid == L2CB_DEV_ID || hw->pci_devid == L1D_DEV_ID ||
+ hw->pci_devid == L2CB2_DEV_ID || hw->pci_devid == L1D2_DEV_ID) {
+ /* if the pm_request_l1 time exceeds the value of this timer,
+ * L0s is entered instead of L1 for this ASPM request */
+ FIELD_SETL(pmctrl, L1C_PMCTRL_L1REQ_TO,
+ L1C_PMCTRL_L1REG_TO_DEF);
+
+ pmctrl |= L1C_PMCTRL_RCVR_WT_1US | /* wait 1us not 2ms */
+ L1C_PMCTRL_L1_SRDSRX_PWD | /* pwd serdes */
+ L1C_PMCTRL_L1_CLKSW_EN;
+ pmctrl &= ~(L1C_PMCTRL_L1_SRDS_EN |
+ L1C_PMCTRL_L1_SRDSPLL_EN|
+ L1C_PMCTRL_L1_BUFSRX_EN |
+ L1C_PMCTRL_SADLY_EN |
+ L1C_PMCTRL_HOTRST_WTEN);
+ /* disable l0s if linkdown or l2cbv1.x */
+ if (!linkon ||
+ (!hw->aps_en && hw->pci_devid == L2CB_DEV_ID)) {
+ pmctrl &= ~L1C_PMCTRL_L0S_EN;
+ }
+ } else { /* l1c */
+ FIELD_SETL(pmctrl, L1C_PMCTRL_L1_TIMER, 0);
+ if (linkon) {
+ pmctrl |= L1C_PMCTRL_L1_SRDS_EN |
+ L1C_PMCTRL_L1_SRDSPLL_EN |
+ L1C_PMCTRL_L1_BUFSRX_EN;
+ pmctrl &= ~(L1C_PMCTRL_L1_SRDSRX_PWD|
+ L1C_PMCTRL_L1_CLKSW_EN |
+ L1C_PMCTRL_L0S_EN |
+ L1C_PMCTRL_L1_EN);
+ } else {
+ pmctrl |= L1C_PMCTRL_L1_CLKSW_EN;
+ pmctrl &= ~(L1C_PMCTRL_L1_SRDS_EN |
+ L1C_PMCTRL_L1_SRDSPLL_EN|
+ L1C_PMCTRL_L1_BUFSRX_EN |
+ L1C_PMCTRL_L0S_EN);
+ }
+ }
+
+ alx_mem_w32(hw, L1C_PMCTRL, pmctrl);
+
+ return 0;
+}
+
+
+/* initialize phy for speed / flow control
+ * lnk_cap
+ * in autoneg mode: the link capabilities advertised to the peer
+ * in force mode: the forced speed/duplex
+ */
+u16 l1c_init_phy_spdfc(struct alx_hw *hw, bool auto_neg,
+ u8 lnk_cap, bool fc_en)
+{
+ u16 adv, giga, cr;
+ u32 val;
+ u16 ret;
+
+ /* clear flag */
+ l1c_write_phy(hw, false, 0, false, L1C_MII_DBG_ADDR, 0);
+ alx_mem_r32(hw, L1C_DRV, &val);
+ FIELD_SETL(val, LX_DRV_PHY, 0);
+
+ if (auto_neg) {
+ adv = L1C_ADVERTISE_DEFAULT_CAP & ~L1C_ADVERTISE_SPEED_MASK;
+ giga = L1C_GIGA_CR_1000T_DEFAULT_CAP &
+ ~L1C_GIGA_CR_1000T_SPEED_MASK;
+ val |= LX_DRV_PHY_AUTO;
+ if (!fc_en)
+ adv &= ~(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
+ else
+ val |= LX_DRV_PHY_FC;
+ if ((LX_LC_10H & lnk_cap) != 0) {
+ adv |= ADVERTISE_10HALF;
+ val |= LX_DRV_PHY_10;
+ }
+ if ((LX_LC_10F & lnk_cap) != 0) {
+ adv |= ADVERTISE_10HALF |
+ ADVERTISE_10FULL;
+ val |= LX_DRV_PHY_10 | LX_DRV_PHY_DUPLEX;
+ }
+ if ((LX_LC_100H & lnk_cap) != 0) {
+ adv |= ADVERTISE_100HALF;
+ val |= LX_DRV_PHY_100;
+ }
+ if ((LX_LC_100F & lnk_cap) != 0) {
+ adv |= ADVERTISE_100HALF |
+ ADVERTISE_100FULL;
+ val |= LX_DRV_PHY_100 | LX_DRV_PHY_DUPLEX;
+ }
+ if ((LX_LC_1000F & lnk_cap) != 0) {
+ giga |= L1C_GIGA_CR_1000T_FD_CAPS;
+ val |= LX_DRV_PHY_1000 | LX_DRV_PHY_DUPLEX;
+ }
+
+ ret = l1c_write_phy(hw, false, 0, false, MII_ADVERTISE, adv);
+ ret = l1c_write_phy(hw, false, 0, false, MII_CTRL1000, giga);
+
+ cr = BMCR_RESET | BMCR_ANENABLE | BMCR_ANRESTART;
+ ret = l1c_write_phy(hw, false, 0, false, MII_BMCR, cr);
+ } else { /* force mode */
+ cr = BMCR_RESET;
+ switch (lnk_cap) {
+ case LX_LC_10H:
+ val |= LX_DRV_PHY_10;
+ break;
+ case LX_LC_10F:
+ cr |= BMCR_FULLDPLX;
+ val |= LX_DRV_PHY_10 | LX_DRV_PHY_DUPLEX;
+ break;
+ case LX_LC_100H:
+ cr |= BMCR_SPEED100;
+ val |= LX_DRV_PHY_100;
+ break;
+ case LX_LC_100F:
+ cr |= BMCR_SPEED100 | BMCR_FULLDPLX;
+ val |= LX_DRV_PHY_100 | LX_DRV_PHY_DUPLEX;
+ break;
+ default:
+ return LX_ERR_PARM;
+ }
+ ret = l1c_write_phy(hw, false, 0, false, MII_BMCR, cr);
+ }
+
+ if (!ret) {
+ l1c_write_phy(hw, false, 0, false, L1C_MII_DBG_ADDR,
+ LX_PHY_INITED);
+ }
+ alx_mem_w32(hw, L1C_DRV, val);
+
+ return ret;
+}
+
+
+/* do power saving settings before entering suspend mode
+ * NOTE:
+ * 1. the phy link must be established before calling this function
+ * 2. the wol options (pattern, magic, link, etc.) must be configured
+ * before calling it
+ */
+u16 l1c_powersaving(struct alx_hw *hw, u8 wire_spd, bool wol_en,
+ bool mac_txen, bool mac_rxen, bool pws_en)
+{
+ u32 master_ctrl, mac_ctrl, phy_ctrl;
+ u16 pm_ctrl, ret = 0;
+
+ master_ctrl = 0;
+ mac_ctrl = 0;
+ phy_ctrl = 0;
+
+ pws_en = pws_en; /* silence unused-parameter warning */
+
+ alx_mem_r32(hw, L1C_MASTER, &master_ctrl);
+ master_ctrl &= ~L1C_MASTER_PCLKSEL_SRDS;
+
+ alx_mem_r32(hw, L1C_MAC_CTRL, &mac_ctrl);
+ /* 10/100 half */
+ FIELD_SETL(mac_ctrl, L1C_MAC_CTRL_SPEED, L1C_MAC_CTRL_SPEED_10_100);
+ mac_ctrl &= ~(L1C_MAC_CTRL_FULLD |
+ L1C_MAC_CTRL_RX_EN |
+ L1C_MAC_CTRL_TX_EN);
+
+ alx_mem_r32(hw, L1C_PHY_CTRL, &phy_ctrl);
+ phy_ctrl &= ~(L1C_PHY_CTRL_DSPRST_OUT | L1C_PHY_CTRL_CLS);
+ /* if (pws_en) */
+ phy_ctrl |= (L1C_PHY_CTRL_RST_ANALOG | L1C_PHY_CTRL_HIB_PULSE |
+ L1C_PHY_CTRL_HIB_EN);
+
+ if (wol_en) { /* enable rx packet or tx packet */
+ if (mac_rxen)
+ mac_ctrl |= (L1C_MAC_CTRL_RX_EN | L1C_MAC_CTRL_BRD_EN);
+ if (mac_txen)
+ mac_ctrl |= L1C_MAC_CTRL_TX_EN;
+ if (LX_LC_1000F == wire_spd) {
+ FIELD_SETL(mac_ctrl, L1C_MAC_CTRL_SPEED,
+ L1C_MAC_CTRL_SPEED_1000);
+ }
+ if (LX_LC_10F == wire_spd || LX_LC_100F == wire_spd ||
+ LX_LC_1000F == wire_spd) {
+ mac_ctrl |= L1C_MAC_CTRL_FULLD;
+ }
+ phy_ctrl |= L1C_PHY_CTRL_DSPRST_OUT;
+ ret = l1c_write_phy(hw, false, 0, false,
+ L1C_MII_IER, L1C_IER_LINK_UP);
+ } else {
+ master_ctrl |= L1C_MASTER_PCLKSEL_SRDS;
+ ret = l1c_write_phy(hw, false, 0, false, L1C_MII_IER, 0);
+ phy_ctrl |= (L1C_PHY_CTRL_IDDQ | L1C_PHY_CTRL_POWER_DOWN);
+ }
+ alx_mem_w32(hw, L1C_MASTER, master_ctrl);
+ alx_mem_w32(hw, L1C_MAC_CTRL, mac_ctrl);
+ alx_mem_w32(hw, L1C_PHY_CTRL, phy_ctrl);
+
+ /* set PME_EN ?? */
+ if (wol_en) {
+ alx_cfg_r16(hw, L1C_PM_CSR, &pm_ctrl);
+ pm_ctrl |= L1C_PM_CSR_PME_EN;
+ alx_cfg_w16(hw, L1C_PM_CSR, pm_ctrl);
+ }
+
+ return ret;
+}
+
+
+/* read phy register */
+u16 l1c_read_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast,
+ u16 reg, u16 *data)
+{
+ u32 val;
+ u16 clk_sel, i, ret = 0;
+
+ *data = 0;
+ clk_sel = fast ?
+ (u16)L1C_MDIO_CLK_SEL_25MD4 : (u16)L1C_MDIO_CLK_SEL_25MD128;
+
+ if (ext) {
+ val = FIELDL(L1C_MDIO_EXTN_DEVAD, dev) |
+ FIELDL(L1C_MDIO_EXTN_REG, reg);
+ alx_mem_w32(hw, L1C_MDIO_EXTN, val);
+
+ val = L1C_MDIO_SPRES_PRMBL |
+ FIELDL(L1C_MDIO_CLK_SEL, clk_sel) |
+ L1C_MDIO_START |
+ L1C_MDIO_MODE_EXT |
+ L1C_MDIO_OP_READ;
+ } else {
+ val = L1C_MDIO_SPRES_PRMBL |
+ FIELDL(L1C_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1C_MDIO_REG, reg) |
+ L1C_MDIO_START |
+ L1C_MDIO_OP_READ;
+ }
+
+ alx_mem_w32(hw, L1C_MDIO, val);
+
+ for (i = 0; i < L1C_MDIO_MAX_AC_TO; i++) {
+ alx_mem_r32(hw, L1C_MDIO, &val);
+ if ((val & L1C_MDIO_BUSY) == 0) {
+ *data = (u16)FIELD_GETX(val, L1C_MDIO_DATA);
+ break;
+ }
+ udelay(10);
+ }
+ if (L1C_MDIO_MAX_AC_TO == i)
+ ret = LX_ERR_MIIBUSY;
+
+ return ret;
+}
+
+/* write phy register */
+u16 l1c_write_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast,
+ u16 reg, u16 data)
+{
+ u32 val;
+ u16 clk_sel, i, ret = 0;
+
+ clk_sel = fast ?
+ (u16)L1C_MDIO_CLK_SEL_25MD4 : (u16)L1C_MDIO_CLK_SEL_25MD128;
+
+ if (ext) {
+ val = FIELDL(L1C_MDIO_EXTN_DEVAD, dev) |
+ FIELDL(L1C_MDIO_EXTN_REG, reg);
+ alx_mem_w32(hw, L1C_MDIO_EXTN, val);
+
+ val = L1C_MDIO_SPRES_PRMBL |
+ FIELDL(L1C_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1C_MDIO_DATA, data) |
+ L1C_MDIO_START |
+ L1C_MDIO_MODE_EXT;
+ } else {
+ val = L1C_MDIO_SPRES_PRMBL |
+ FIELDL(L1C_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1C_MDIO_REG, reg) |
+ FIELDL(L1C_MDIO_DATA, data) |
+ L1C_MDIO_START;
+ }
+
+ alx_mem_w32(hw, L1C_MDIO, val);
+
+ for (i = 0; i < L1C_MDIO_MAX_AC_TO; i++) {
+ alx_mem_r32(hw, L1C_MDIO, &val);
+ if ((val & L1C_MDIO_BUSY) == 0)
+ break;
+ udelay(10);
+ }
+
+ if (L1C_MDIO_MAX_AC_TO == i)
+ ret = LX_ERR_MIIBUSY;
+
+ return ret;
+}
+
+u16 l1c_read_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 *data)
+{
+ u16 ret;
+
+ ret = l1c_write_phy(hw, false, 0, fast, L1C_MII_DBG_ADDR, reg);
+ ret = l1c_read_phy(hw, false, 0, fast, L1C_MII_DBG_DATA, data);
+
+ return ret;
+}
+
+u16 l1c_write_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 data)
+{
+ u16 ret;
+
+ ret = l1c_write_phy(hw, false, 0, fast, L1C_MII_DBG_ADDR, reg);
+ ret = l1c_write_phy(hw, false, 0, fast, L1C_MII_DBG_DATA, data);
+
+ return ret;
+}
+
+
+/*
+ * basic mac initialization
+ * most advanced features are not initialized here
+ * MAC/PHY should be reset before calling this function
+ * smb_timer : milliseconds
+ * int_mod : microseconds
+ * RSS is disabled by default
+ */
+u16 l1c_init_mac(struct alx_hw *hw, u8 *addr, u32 txmem_hi,
+ u32 *tx_mem_lo, u8 tx_qnum, u16 txring_sz,
+ u32 rxmem_hi, u32 rfdmem_lo, u32 rrdmem_lo,
+ u16 rxring_sz, u16 rxbuf_sz, u16 smb_timer,
+ u16 mtu, u16 int_mod, bool hash_legacy)
+{
+ u32 val;
+ u16 val16;
+ u8 dmar_len;
+
+ /* set mac-address */
+ val = *(u32 *)(addr + 2);
+ alx_mem_w32(hw, L1C_STAD0, LX_SWAP_DW(val));
+ val = *(u16 *)addr;
+ alx_mem_w32(hw, L1C_STAD1, LX_SWAP_W((u16)val));
+
+ /* clear multicast hash table, set hash algorithm */
+ alx_mem_w32(hw, L1C_HASH_TBL0, 0);
+ alx_mem_w32(hw, L1C_HASH_TBL1, 0);
+ alx_mem_r32(hw, L1C_MAC_CTRL, &val);
+ if (hash_legacy)
+ val |= L1C_MAC_CTRL_MHASH_ALG_HI5B;
+ else
+ val &= ~L1C_MAC_CTRL_MHASH_ALG_HI5B;
+ alx_mem_w32(hw, L1C_MAC_CTRL, val);
+
+ /* clear any wol setting/status */
+ alx_mem_r32(hw, L1C_WOL0, &val);
+ alx_mem_w32(hw, L1C_WOL0, 0);
+
+ /* clk gating */
+ alx_mem_w32(hw, L1C_CLK_GATE, (hw->pci_devid == L1D_DEV_ID) ? 0 :
+ (L1C_CLK_GATE_DMAR | L1C_CLK_GATE_DMAW |
+ L1C_CLK_GATE_TXQ | L1C_CLK_GATE_RXQ |
+ L1C_CLK_GATE_TXMAC));
+
+ /* descriptor ring base memory */
+ alx_mem_w32(hw, L1C_TX_BASE_ADDR_HI, txmem_hi);
+ alx_mem_w32(hw, L1C_TPD_RING_SZ, txring_sz);
+ switch (tx_qnum) {
+ case 2:
+ alx_mem_w32(hw, L1C_TPD_PRI1_ADDR_LO, tx_mem_lo[1]);
+ /* fall through */
+ case 1:
+ alx_mem_w32(hw, L1C_TPD_PRI0_ADDR_LO, tx_mem_lo[0]);
+ break;
+ default:
+ return LX_ERR_PARM;
+ }
+ alx_mem_w32(hw, L1C_RX_BASE_ADDR_HI, rxmem_hi);
+ alx_mem_w32(hw, L1C_RFD_ADDR_LO, rfdmem_lo);
+ alx_mem_w32(hw, L1C_RRD_ADDR_LO, rrdmem_lo);
+ alx_mem_w32(hw, L1C_RFD_BUF_SZ, rxbuf_sz);
+ alx_mem_w32(hw, L1C_RRD_RING_SZ, rxring_sz);
+ alx_mem_w32(hw, L1C_RFD_RING_SZ, rxring_sz);
+ alx_mem_w32(hw, L1C_SMB_TIMER, smb_timer * 500UL);
+
+ if (hw->pci_devid == L2CB_DEV_ID) {
+ /* revise SRAM configuration */
+ alx_mem_w32(hw, L1C_SRAM5, L1C_SRAM_RXF_LEN_L2CB1);
+ alx_mem_w32(hw, L1C_SRAM7, L1C_SRAM_TXF_LEN_L2CB1);
+ alx_mem_w32(hw, L1C_SRAM4, L1C_SRAM_RXF_HT_L2CB1);
+ alx_mem_w32(hw, L1C_SRAM0, L1C_SRAM_RFD_HT_L2CB1);
+ alx_mem_w32(hw, L1C_SRAM6, L1C_SRAM_TXF_HT_L2CB1);
+ alx_mem_w32(hw, L1C_SRAM2, L1C_SRAM_TRD_HT_L2CB1);
+ alx_mem_w32(hw, L1C_TXQ2, 0); /* TX watermark, goto L1 state.*/
+ alx_mem_w32(hw, L1C_RXQ3, 0); /* RXD threshold. */
+ }
+ alx_mem_w32(hw, L1C_SRAM9, L1C_SRAM_LOAD_PTR);
+
+ /* interrupt moderation */
+ alx_mem_r32(hw, L1C_MASTER, &val);
+ val |= L1C_MASTER_IRQMOD2_EN | L1C_MASTER_IRQMOD1_EN |
+ L1C_MASTER_SYSALVTIMER_EN; /* sysalive */
+ alx_mem_w32(hw, L1C_MASTER, val);
+ /* set Interrupt Moderator Timer (max interrupts per second);
+ * we use separate timers for rx/tx */
+ alx_mem_w32(hw, L1C_IRQ_MODU_TIMER,
+ FIELDL(L1C_IRQ_MODU_TIMER1, int_mod) |
+ FIELDL(L1C_IRQ_MODU_TIMER2, int_mod >> 1));
+
+ /* tpd threshold to trigger an interrupt */
+ alx_mem_w32(hw, L1C_TINT_TPD_THRSHLD, (u32)txring_sz / 3);
+ alx_mem_w32(hw, L1C_TINT_TIMER, int_mod * 2);
+ /* interrupt re-trigger timeout */
+ alx_mem_w32(hw, L1C_INT_RETRIG, L1C_INT_RETRIG_TO);
+
+ /* mtu */
+ alx_mem_w32(hw, L1C_MTU, (u32)(mtu + 4 + 4)); /* crc + vlan */
+
+ /* txq */
+ if ((mtu + 8) < L1C_TXQ1_JUMBO_TSO_TH)
+ val = (u32)(mtu + 8 + 7); /* 7 for QWORD align */
+ else
+ val = L1C_TXQ1_JUMBO_TSO_TH;
+ alx_mem_w32(hw, L1C_TXQ1, val >> 3);
+
+ alx_mem_r32(hw, L1C_DEV_CTRL, &val);
+ dmar_len = (u8)FIELD_GETX(val, L1C_DEV_CTRL_MAXRRS);
+ /* if the BIOS changed the default dma read max length,
+ * restore it to the default value */
+ if (dmar_len < L1C_DEV_CTRL_MAXRRS_MIN) {
+ FIELD_SETL(val, L1C_DEV_CTRL_MAXRRS, L1C_DEV_CTRL_MAXRRS_MIN);
+ alx_mem_w32(hw, L1C_DEV_CTRL, val);
+ dmar_len = L1C_DEV_CTRL_MAXRRS_MIN;
+ }
+ val = FIELDL(L1C_TXQ0_TPD_BURSTPREF, L1C_TXQ0_TPD_BURSTPREF_DEF) |
+ L1C_TXQ0_MODE_ENHANCE |
+ L1C_TXQ0_LSO_8023_EN |
+ L1C_TXQ0_SUPT_IPOPT |
+ FIELDL(L1C_TXQ0_TXF_BURST_PREF,
+ (hw->pci_devid == L2CB_DEV_ID ||
+ hw->pci_devid == L2CB2_DEV_ID) ?
+ L1C_TXQ0_TXF_BURST_PREF_L2CB :
+ L1C_TXQ0_TXF_BURST_PREF_DEF);
+ alx_mem_w32(hw, L1C_TXQ0, val);
+
+ /* fc */
+ alx_mem_r32(hw, L1C_SRAM5, &val);
+ val = FIELD_GETX(val, L1C_SRAM_RXF_LEN) << 3; /* bytes */
+ if (val > L1C_SRAM_RXF_LEN_8K) {
+ val16 = L1C_MTU_STD_ALGN;
+ val = (val - (2 * L1C_MTU_STD_ALGN + L1C_MTU_MIN));
+ } else {
+ val16 = L1C_MTU_STD_ALGN;
+ val = (val - L1C_MTU_STD_ALGN);
+ }
+ alx_mem_w32(hw, L1C_RXQ2,
+ FIELDL(L1C_RXQ2_RXF_XOFF_THRESH, val16 >> 3) |
+ FIELDL(L1C_RXQ2_RXF_XON_THRESH, val >> 3));
+ /* rxq */
+ val = FIELDL(L1C_RXQ0_NUM_RFD_PREF, L1C_RXQ0_NUM_RFD_PREF_DEF) |
+ L1C_RXQ0_IPV6_PARSE_EN;
+ if (mtu > L1C_MTU_JUMBO_TH)
+ val |= L1C_RXQ0_CUT_THRU_EN;
+ if ((hw->pci_devid & 1) != 0) {
+ FIELD_SETL(val, L1C_RXQ0_ASPM_THRESH,
+ (hw->pci_devid == L1D2_DEV_ID) ?
+ L1C_RXQ0_ASPM_THRESH_NO :
+ L1C_RXQ0_ASPM_THRESH_100M);
+ }
+ alx_mem_w32(hw, L1C_RXQ0, val);
+
+ /* rfd producer index */
+ alx_mem_w32(hw, L1C_RFD_PIDX, (u32)rxring_sz - 1);
+
+ /* DMA */
+ val = FIELDL(L1C_DMA_RORDER_MODE, L1C_DMA_RORDER_MODE_OUT) |
+ L1C_DMA_RREQ_PRI_DATA |
+ FIELDL(L1C_DMA_RREQ_BLEN, dmar_len) |
+ FIELDL(L1C_DMA_WDLY_CNT, L1C_DMA_WDLY_CNT_DEF) |
+ FIELDL(L1C_DMA_RDLY_CNT, L1C_DMA_RDLY_CNT_DEF);
+ alx_mem_w32(hw, L1C_DMA, val);
+
+ return 0;
+}
+
+
+u16 l1c_get_phy_config(struct alx_hw *hw)
+{
+ u32 val;
+ u16 phy_val;
+
+ alx_mem_r32(hw, L1C_PHY_CTRL, &val);
+ if ((val & L1C_PHY_CTRL_DSPRST_OUT) == 0) { /* phy in rst */
+ return LX_DRV_PHY_UNKNOWN;
+ }
+
+ alx_mem_r32(hw, L1C_DRV, &val);
+ val = FIELD_GETX(val, LX_DRV_PHY);
+ if (LX_DRV_PHY_UNKNOWN == val)
+ return LX_DRV_PHY_UNKNOWN;
+
+ l1c_read_phy(hw, false, 0, false, L1C_MII_DBG_ADDR, &phy_val);
+
+ if (LX_PHY_INITED == phy_val)
+ return (u16) val;
+
+ return LX_DRV_PHY_UNKNOWN;
+}
+
diff --git a/drivers/net/ethernet/atheros/alx/alc_hw.h b/drivers/net/ethernet/atheros/alx/alc_hw.h
new file mode 100644
index 0000000..492b4c1
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alc_hw.h
@@ -0,0 +1,1324 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef L1C_HW_H_
+#define L1C_HW_H_
+
+/*********************************************************************
+ * requirements for l1x_sw.h
+ *
+ * 1. the following basic types must be defined if your compiler
+ *    does not provide them:
+ *    u8, u16, u32, bool
+ *
+ * 2. the PETHCONTEXT definition should be in l1x_sw.h and it must
+ *    contain pci_devid & pci_venid & pci_revid
+ *
+ *********************************************************************/
+
+#include "alx_hwcom.h"
+
+/******************************************************************************/
+
+#define L1C_DEV_ID 0x1063
+#define L2C_DEV_ID 0x1062
+#define L2CB_DEV_ID 0x2060
+#define L2CB2_DEV_ID 0x2062
+#define L1D_DEV_ID 0x1073
+#define L1D2_DEV_ID 0x1083
+
+#define L2CB_V10 0xC0
+#define L2CB_V11 0xC1
+#define L2CB_V20 0xC0
+#define L2CB_V21 0xC1
+
+#define L1C_PM_CSR 0x0044 /* 16bit */
+#define L1C_PM_CSR_PME_STAT BIT(15)
+#define L1C_PM_CSR_DSCAL_MASK ASHFT13(3U)
+#define L1C_PM_CSR_DSCAL_SHIFT 13
+#define L1C_PM_CSR_DSEL_MASK ASHFT9(0xFU)
+#define L1C_PM_CSR_DSEL_SHIFT 9
+#define L1C_PM_CSR_PME_EN BIT(8)
+#define L1C_PM_CSR_PWST_MASK ASHFT0(3U)
+#define L1C_PM_CSR_PWST_SHIFT 0
+
+#define L1C_PM_DATA 0x0047 /* 8bit */
+
+#define L1C_DEV_CAP 0x005C
+#define L1C_DEV_CAP_SPLSL_MASK ASHFT26(3UL)
+#define L1C_DEV_CAP_SPLSL_SHIFT 26
+#define L1C_DEV_CAP_SPLV_MASK ASHFT18(0xFFUL)
+#define L1C_DEV_CAP_SPLV_SHIFT 18
+#define L1C_DEV_CAP_RBER BIT(15)
+#define L1C_DEV_CAP_PIPRS BIT(14)
+#define L1C_DEV_CAP_AIPRS BIT(13)
+#define L1C_DEV_CAP_ABPRS BIT(12)
+#define L1C_DEV_CAP_L1ACLAT_MASK ASHFT9(7UL)
+#define L1C_DEV_CAP_L1ACLAT_SHIFT 9
+#define L1C_DEV_CAP_L0SACLAT_MASK ASHFT6(7UL)
+#define L1C_DEV_CAP_L0SACLAT_SHIFT 6
+#define L1C_DEV_CAP_EXTAG BIT(5)
+#define L1C_DEV_CAP_PHANTOM BIT(4)
+#define L1C_DEV_CAP_MPL_MASK ASHFT0(7UL)
+#define L1C_DEV_CAP_MPL_SHIFT 0
+#define L1C_DEV_CAP_MPL_128 1
+#define L1C_DEV_CAP_MPL_256 2
+#define L1C_DEV_CAP_MPL_512 3
+#define L1C_DEV_CAP_MPL_1024 4
+#define L1C_DEV_CAP_MPL_2048 5
+#define L1C_DEV_CAP_MPL_4096 6
+
+#define L1C_DEV_CTRL 0x0060 /* 16bit */
+#define L1C_DEV_CTRL_MAXRRS_MASK ASHFT12(7U)
+#define L1C_DEV_CTRL_MAXRRS_SHIFT 12
+#define L1C_DEV_CTRL_MAXRRS_MIN 2
+#define L1C_DEV_CTRL_NOSNP_EN BIT(11)
+#define L1C_DEV_CTRL_AUXPWR_EN BIT(10)
+#define L1C_DEV_CTRL_PHANTOM_EN BIT(9)
+#define L1C_DEV_CTRL_EXTAG_EN BIT(8)
+#define L1C_DEV_CTRL_MPL_MASK ASHFT5(7U)
+#define L1C_DEV_CTRL_MPL_SHIFT 5
+#define L1C_DEV_CTRL_RELORD_EN BIT(4)
+#define L1C_DEV_CTRL_URR_EN BIT(3)
+#define L1C_DEV_CTRL_FERR_EN BIT(2)
+#define L1C_DEV_CTRL_NFERR_EN BIT(1)
+#define L1C_DEV_CTRL_CERR_EN BIT(0)
+
+#define L1C_DEV_STAT 0x0062 /* 16bit */
+#define L1C_DEV_STAT_XS_PEND BIT(5)
+#define L1C_DEV_STAT_AUXPWR BIT(4)
+#define L1C_DEV_STAT_UR BIT(3)
+#define L1C_DEV_STAT_FERR BIT(2)
+#define L1C_DEV_STAT_NFERR BIT(1)
+#define L1C_DEV_STAT_CERR BIT(0)
+
+#define L1C_LNK_CAP 0x0064
+#define L1C_LNK_CAP_PRTNUM_MASK ASHFT24(0xFFUL)
+#define L1C_LNK_CAP_PRTNUM_SHIFT 24
+#define L1C_LNK_CAP_CLK_PM BIT(18)
+#define L1C_LNK_CAP_L1EXTLAT_MASK ASHFT15(7UL)
+#define L1C_LNK_CAP_L1EXTLAT_SHIFT 15
+#define L1C_LNK_CAP_L0SEXTLAT_MASK ASHFT12(7UL)
+#define L1C_LNK_CAP_L0SEXTLAT_SHIFT 12
+#define L1C_LNK_CAP_ASPM_SUP_MASK ASHFT10(3UL)
+#define L1C_LNK_CAP_ASPM_SUP_SHIFT 10
+#define L1C_LNK_CAP_ASPM_SUP_L0S 1
+#define L1C_LNK_CAP_ASPM_SUP_L0SL1 3
+#define L1C_LNK_CAP_MAX_LWH_MASK ASHFT4(0x3FUL)
+#define L1C_LNK_CAP_MAX_LWH_SHIFT 4
+#define L1C_LNK_CAP_MAX_LSPD_MASH ASHFT0(0xFUL)
+#define L1C_LNK_CAP_MAX_LSPD_SHIFT 0
+
+#define L1C_LNK_CTRL 0x0068 /* 16bit */
+#define L1C_LNK_CTRL_CLK_PM_EN BIT(8)
+#define L1C_LNK_CTRL_EXTSYNC BIT(7)
+#define L1C_LNK_CTRL_CMNCLK_CFG BIT(6)
+#define L1C_LNK_CTRL_RCB_128B BIT(3) /* 0:64b,1:128b */
+#define L1C_LNK_CTRL_ASPM_MASK ASHFT0(3U)
+#define L1C_LNK_CTRL_ASPM_SHIFT 0
+#define L1C_LNK_CTRL_ASPM_DIS 0
+#define L1C_LNK_CTRL_ASPM_ENL0S 1
+#define L1C_LNK_CTRL_ASPM_ENL1 2
+#define L1C_LNK_CTRL_ASPM_ENL0SL1 3
+
+#define L1C_LNK_STAT 0x006A /* 16bit */
+#define L1C_LNK_STAT_SCLKCFG BIT(12)
+#define L1C_LNK_STAT_LNKTRAIN BIT(11)
+#define L1C_LNK_STAT_TRNERR BIT(10)
+#define L1C_LNK_STAT_LNKSPD_MASK ASHFT0(0xFU)
+#define L1C_LNK_STAT_LNKSPD_SHIFT 0
+#define L1C_LNK_STAT_NEGLW_MASK ASHFT4(0x3FU)
+#define L1C_LNK_STAT_NEGLW_SHIFT 4
+
+#define L1C_UE_SVRT 0x010C
+#define L1C_UE_SVRT_UR BIT(20)
+#define L1C_UE_SVRT_ECRCERR BIT(19)
+#define L1C_UE_SVRT_MTLP BIT(18)
+#define L1C_UE_SVRT_RCVOVFL BIT(17)
+#define L1C_UE_SVRT_UNEXPCPL BIT(16)
+#define L1C_UE_SVRT_CPLABRT BIT(15)
+#define L1C_UE_SVRT_CPLTO BIT(14)
+#define L1C_UE_SVRT_FCPROTERR BIT(13)
+#define L1C_UE_SVRT_PTLP BIT(12)
+#define L1C_UE_SVRT_DLPROTERR BIT(4)
+#define L1C_UE_SVRT_TRNERR BIT(0)
+
+#define L1C_SLD 0x0218 /* efuse load */
+#define L1C_SLD_FREQ_MASK ASHFT24(3UL)
+#define L1C_SLD_FREQ_SHIFT 24
+#define L1C_SLD_FREQ_100K 0
+#define L1C_SLD_FREQ_200K 1
+#define L1C_SLD_FREQ_300K 2
+#define L1C_SLD_FREQ_400K 3
+#define L1C_SLD_EXIST BIT(23)
+#define L1C_SLD_SLVADDR_MASK ASHFT16(0x7FUL)
+#define L1C_SLD_SLVADDR_SHIFT 16
+#define L1C_SLD_IDLE BIT(13)
+#define L1C_SLD_STAT BIT(12) /* 0:finish,1:in progress */
+#define L1C_SLD_START BIT(11)
+#define L1C_SLD_STARTADDR_MASK ASHFT0(0xFFUL)
+#define L1C_SLD_STARTADDR_SHIFT 0
+#define L1C_SLD_MAX_TO 100
+
+#define L1C_PPHY_MISC1 0x1000
+#define L1C_PPHY_MISC1_RCVDET BIT(2)
+#define L1C_PPHY_MISC1_NFTS_MASK ASHFT16(0xFFUL)
+#define L1C_PPHY_MISC1_NFTS_SHIFT 16
+#define L1C_PPHY_MISC1_NFTS_HIPERF 0xA0 /* ???? */
+
+#define L1C_PPHY_MISC2 0x1004
+#define L1C_PPHY_MISC2_L0S_TH_MASK ASHFT18(0x3UL)
+#define L1C_PPHY_MISC2_L0S_TH_SHIFT 18
+#define L1C_PPHY_MISC2_L0S_TH_L2CB1 3
+#define L1C_PPHY_MISC2_CDR_BW_MASK ASHFT16(0x3UL)
+#define L1C_PPHY_MISC2_CDR_BW_SHIFT 16
+#define L1C_PPHY_MISC2_CDR_BW_L2CB1 3
+
+#define L1C_PDLL_TRNS1 0x1104
+#define L1C_PDLL_TRNS1_D3PLLOFF_EN BIT(11)
+#define L1C_PDLL_TRNS1_REGCLK_SEL_NORM BIT(10)
+#define L1C_PDLL_TRNS1_REPLY_TO_MASK ASHFT0(0x3FFUL)
+#define L1C_PDLL_TRNS1_REPLY_TO_SHIFT 0
+
+#define L1C_TWSI_DBG 0x1108
+#define L1C_TWSI_DBG_DEV_EXIST BIT(29)
+
+#define L1C_DMA_DBG 0x1114
+#define L1C_DMA_DBG_VENDOR_MSG BIT(0)
+
+#define L1C_TLEXTN_STATS 0x1204 /* diff with l1f */
+#define L1C_TLEXTN_STATS_DEVNO_MASK ASHFT16(0x1FUL)
+#define L1C_TLEXTN_STATS_DEVNO_SHIFT 16
+#define L1C_TLEXTN_STATS_BUSNO_MASK ASHFT8(0xFFUL)
+#define L1C_TLEXTN_STATS_BUSNO_SHIFT 8
+
+#define L1C_EFUSE_CTRL 0x12C0
+#define L1C_EFUSE_CTRL_FLAG BIT(31) /* 0:read,1:write */
+#define L1C_EFUSE_CTRL_ACK BIT(30)
+#define L1C_EFUSE_CTRL_ADDR_MASK ASHFT16(0x3FFUL)
+#define L1C_EFUSE_CTRL_ADDR_SHIFT 16
+
+#define L1C_EFUSE_DATA 0x12C4
+
+#define EFUSE_OP_MAX_AC_TIMER 100 /* 1ms */
+
+#define L1C_EFUSE_CTRL2 0x12F0
+#define L1C_EFUSE_CTRL2_CLK_EN BIT(1)
+
+#define L1C_PMCTRL 0x12F8
+#define L1C_PMCTRL_HOTRST_WTEN BIT(31)
+#define L1C_PMCTRL_ASPM_FCEN BIT(30) /* L0s/L1 dis by MAC based on
+ * throughput (setting in 15A0) */
+#define L1C_PMCTRL_SADLY_EN BIT(29)
+#define L1C_PMCTRL_L0S_BUFSRX_EN BIT(28)
+#define L1C_PMCTRL_LCKDET_TIMER_MASK ASHFT24(0xFUL)
+#define L1C_PMCTRL_LCKDET_TIMER_SHIFT 24
+#define L1C_PMCTRL_LCKDET_TIMER_DEF 0xC
+#define L1C_PMCTRL_L1REQ_TO_MASK ASHFT20(0xFUL)
+#define L1C_PMCTRL_L1REQ_TO_SHIFT 20 /* pm_request_l1 time > this
+ * -> L0s, not L1 */
+#define L1C_PMCTRL_L1REQ_TO_DEF 0xC
+#define L1D_PMCTRL_TXL1_AFTER_L0S BIT(19) /* l1dv2.0+ */
+#define L1D_PMCTRL_L1_TIMER_MASK ASHFT16(7UL)
+#define L1D_PMCTRL_L1_TIMER_SHIFT 16
+#define L1D_PMCTRL_L1_TIMER_DIS 0
+#define L1D_PMCTRL_L1_TIMER_2US 1
+#define L1D_PMCTRL_L1_TIMER_4US 2
+#define L1D_PMCTRL_L1_TIMER_8US 3
+#define L1D_PMCTRL_L1_TIMER_16US 4
+#define L1D_PMCTRL_L1_TIMER_24US 5
+#define L1D_PMCTRL_L1_TIMER_32US 6
+#define L1D_PMCTRL_L1_TIMER_63US 7
+#define L1C_PMCTRL_L1_TIMER_MASK ASHFT16(0xFUL)
+#define L1C_PMCTRL_L1_TIMER_SHIFT 16
+#define L1C_PMCTRL_L1_TIMER_L2CB1 7
+#define L1C_PMCTRL_L1_TIMER_DEF 0xF
+#define L1C_PMCTRL_RCVR_WT_1US BIT(15) /* 1:1us, 0:2ms */
+#define L1C_PMCTRL_PWM_VER_11 BIT(14) /* 0:1.0a,1:1.1 */
+#define L1C_PMCTRL_L1_CLKSW_EN BIT(13) /* en pcie clk sw in L1 */
+#define L1C_PMCTRL_L0S_EN BIT(12)
+#define L1D_PMCTRL_RXL1_AFTER_L0S BIT(11) /* l1dv2.0+ */
+#define L1D_PMCTRL_L0S_TIMER_MASK ASHFT8(7UL)
+#define L1D_PMCTRL_L0S_TIMER_SHIFT 8
+#define L1C_PMCTRL_L0S_TIMER_MASK ASHFT8(0xFUL)
+#define L1C_PMCTRL_L0S_TIMER_SHIFT 8
+#define L1C_PMCTRL_L1_BUFSRX_EN BIT(7)
+#define L1C_PMCTRL_L1_SRDSRX_PWD BIT(6) /* power down serdes rx */
+#define L1C_PMCTRL_L1_SRDSPLL_EN BIT(5)
+#define L1C_PMCTRL_L1_SRDS_EN BIT(4)
+#define L1C_PMCTRL_L1_EN BIT(3)
+#define L1C_PMCTRL_CLKREQ_EN BIT(2)
+#define L1C_PMCTRL_RBER_EN BIT(1)
+#define L1C_PMCTRL_SPRSDWER_EN BIT(0)
+
+#define L1C_LTSSM_CTRL 0x12FC
+#define L1C_LTSSM_WRO_EN BIT(12)
+#define L1C_LTSSM_TXTLP_BYPASS BIT(7)
+
+#define L1C_MASTER 0x1400
+#define L1C_MASTER_OTP_FLG BIT(31)
+#define L1C_MASTER_DEV_NUM_MASK ASHFT24(0x7FUL)
+#define L1C_MASTER_DEV_NUM_SHIFT 24
+#define L1C_MASTER_REV_NUM_MASK ASHFT16(0xFFUL)
+#define L1C_MASTER_REV_NUM_SHIFT 16
+#define L1C_MASTER_RDCLR_INT BIT(14)
+#define L1C_MASTER_CLKSW_L2EV1 BIT(13) /* 0:l2ev2.0,1:l2ev1.0 */
+#define L1C_MASTER_PCLKSEL_SRDS BIT(12) /* 1:alwys sel pclk from
+ * serdes, not sw to 25M */
+#define L1C_MASTER_IRQMOD2_EN BIT(11) /* IRQ MODULATION FOR RX */
+#define L1C_MASTER_IRQMOD1_EN BIT(10) /* MODULATION FOR TX/RX */
+#define L1C_MASTER_MANU_INT BIT(9) /* SOFT MANUAL INT */
+#define L1C_MASTER_MANUTIMER_EN BIT(8)
+#define L1C_MASTER_SYSALVTIMER_EN BIT(7) /* SYS ALIVE TIMER EN */
+#define L1C_MASTER_OOB_DIS BIT(6) /* OUT OF BOX DIS */
+#define L1C_MASTER_WAKEN_25M BIT(5) /* WAKE WO. PCIE CLK */
+#define L1C_MASTER_BERT_START BIT(4)
+#define L1C_MASTER_PCIE_TSTMOD_MASK ASHFT2(3UL)
+#define L1C_MASTER_PCIE_TSTMOD_SHIFT 2
+#define L1C_MASTER_PCIE_RST BIT(1)
+#define L1C_MASTER_DMA_MAC_RST BIT(0) /* RST MAC & DMA */
+#define L1C_DMA_MAC_RST_TO 50
+
+#define L1C_MANU_TIMER 0x1404
+
+#define L1C_IRQ_MODU_TIMER 0x1408
+#define L1C_IRQ_MODU_TIMER2_MASK ASHFT16(0xFFFFUL)
+#define L1C_IRQ_MODU_TIMER2_SHIFT 16 /* ONLY FOR RX */
+#define L1C_IRQ_MODU_TIMER1_MASK ASHFT0(0xFFFFUL)
+#define L1C_IRQ_MODU_TIMER1_SHIFT 0
+
+#define L1C_PHY_CTRL 0x140C
+#define L1C_PHY_CTRL_ADDR_MASK ASHFT19(0x1FUL)
+#define L1C_PHY_CTRL_ADDR_SHIFT 19
+#define L1C_PHY_CTRL_BP_VLTGSW BIT(18)
+#define L1C_PHY_CTRL_100AB_EN BIT(17)
+#define L1C_PHY_CTRL_10AB_EN BIT(16)
+#define L1C_PHY_CTRL_PLL_BYPASS BIT(15)
+#define L1C_PHY_CTRL_POWER_DOWN BIT(14) /* affect MAC & PHY,
+ * go to low power sts */
+#define L1C_PHY_CTRL_PLL_ON BIT(13) /* 1:PLL ALWAYS ON
+ * 0:CAN SWITCH IN LPW */
+#define L1C_PHY_CTRL_RST_ANALOG BIT(12)
+#define L1C_PHY_CTRL_HIB_PULSE BIT(11)
+#define L1C_PHY_CTRL_HIB_EN BIT(10)
+#define L1C_PHY_CTRL_GIGA_DIS BIT(9)
+#define L1C_PHY_CTRL_IDDQ_DIS BIT(8) /* POWER ON RST */
+#define L1C_PHY_CTRL_IDDQ BIT(7) /* WHILE REBOOT, BIT8(1)
+ * AFFECTS BIT7 */
+#define L1C_PHY_CTRL_LPW_EXIT BIT(6)
+#define L1C_PHY_CTRL_GATE_25M BIT(5)
+#define L1C_PHY_CTRL_RVRS_ANEG BIT(4)
+#define L1C_PHY_CTRL_ANEG_NOW BIT(3)
+#define L1C_PHY_CTRL_LED_MODE BIT(2)
+#define L1C_PHY_CTRL_RTL_MODE BIT(1)
+#define L1C_PHY_CTRL_DSPRST_OUT BIT(0) /* OUT OF DSP RST STATE */
+#define L1C_PHY_CTRL_DSPRST_TO 80
+#define L1C_PHY_CTRL_CLS (\
+ L1C_PHY_CTRL_LED_MODE |\
+ L1C_PHY_CTRL_100AB_EN |\
+ L1C_PHY_CTRL_PLL_ON)
+
+#define L1C_MAC_STS 0x1410
+#define L1C_MAC_STS_SFORCE_MASK ASHFT14(0xFUL)
+#define L1C_MAC_STS_SFORCE_SHIFT 14
+#define L1C_MAC_STS_CALIB_DONE BIT(13)
+#define L1C_MAC_STS_CALIB_RES_MASK ASHFT8(0x1FUL)
+#define L1C_MAC_STS_CALIB_RES_SHIFT 8
+#define L1C_MAC_STS_CALIBERR_MASK ASHFT4(0xFUL)
+#define L1C_MAC_STS_CALIBERR_SHIFT 4
+#define L1C_MAC_STS_TXQ_BUSY BIT(3)
+#define L1C_MAC_STS_RXQ_BUSY BIT(2)
+#define L1C_MAC_STS_TXMAC_BUSY BIT(1)
+#define L1C_MAC_STS_RXMAC_BUSY BIT(0)
+#define L1C_MAC_STS_IDLE (\
+ L1C_MAC_STS_TXQ_BUSY |\
+ L1C_MAC_STS_RXQ_BUSY |\
+ L1C_MAC_STS_TXMAC_BUSY |\
+ L1C_MAC_STS_RXMAC_BUSY)
+
+#define L1C_MDIO 0x1414
+#define L1C_MDIO_MODE_EXT BIT(30) /* 0:normal,1:ext */
+#define L1C_MDIO_POST_READ BIT(29)
+#define L1C_MDIO_AUTO_POLLING BIT(28)
+#define L1C_MDIO_BUSY BIT(27)
+#define L1C_MDIO_CLK_SEL_MASK ASHFT24(7UL)
+#define L1C_MDIO_CLK_SEL_SHIFT 24
+#define L1C_MDIO_CLK_SEL_25MD4 0 /* 25M DIV 4 */
+#define L1C_MDIO_CLK_SEL_25MD6 2
+#define L1C_MDIO_CLK_SEL_25MD8 3
+#define L1C_MDIO_CLK_SEL_25MD10 4
+#define L1C_MDIO_CLK_SEL_25MD32 5
+#define L1C_MDIO_CLK_SEL_25MD64 6
+#define L1C_MDIO_CLK_SEL_25MD128 7
+#define L1C_MDIO_START BIT(23)
+#define L1C_MDIO_SPRES_PRMBL BIT(22)
+#define L1C_MDIO_OP_READ BIT(21) /* 1:read,0:write */
+#define L1C_MDIO_REG_MASK ASHFT16(0x1FUL)
+#define L1C_MDIO_REG_SHIFT 16
+#define L1C_MDIO_DATA_MASK ASHFT0(0xFFFFUL)
+#define L1C_MDIO_DATA_SHIFT 0
+#define L1C_MDIO_MAX_AC_TO 120
+
+#define L1C_MDIO_EXTN 0x1448
+#define L1C_MDIO_EXTN_PORTAD_MASK ASHFT21(0x1FUL)
+#define L1C_MDIO_EXTN_PORTAD_SHIFT 21
+#define L1C_MDIO_EXTN_DEVAD_MASK ASHFT16(0x1FUL)
+#define L1C_MDIO_EXTN_DEVAD_SHIFT 16
+#define L1C_MDIO_EXTN_REG_MASK ASHFT0(0xFFFFUL)
+#define L1C_MDIO_EXTN_REG_SHIFT 0
+
+#define L1C_PHY_STS 0x1418
+#define L1C_PHY_STS_LPW BIT(31)
+#define L1C_PHY_STS_LPI BIT(30)
+#define L1C_PHY_STS_PWON_STRIP_MASK ASHFT16(0xFFFUL)
+#define L1C_PHY_STS_PWON_STRIP_SHIFT 16
+
+#define L1C_PHY_STS_DUPLEX BIT(3)
+#define L1C_PHY_STS_LINKUP BIT(2)
+#define L1C_PHY_STS_SPEED_MASK ASHFT0(3UL)
+#define L1C_PHY_STS_SPEED_SHIFT 0
+#define L1C_PHY_STS_SPEED_1000M 2
+#define L1C_PHY_STS_SPEED_100M 1
+#define L1C_PHY_STS_SPEED_10M 0
+
+#define L1C_BIST0 0x141C
+#define L1C_BIST0_COL_MASK ASHFT24(0x3FUL)
+#define L1C_BIST0_COL_SHIFT 24
+#define L1C_BIST0_ROW_MASK ASHFT12(0xFFFUL)
+#define L1C_BIST0_ROW_SHIFT 12
+#define L1C_BIST0_STEP_MASK ASHFT8(0xFUL)
+#define L1C_BIST0_STEP_SHIFT 8
+#define L1C_BIST0_PATTERN_MASK ASHFT4(7UL)
+#define L1C_BIST0_PATTERN_SHIFT 4
+#define L1C_BIST0_CRIT BIT(3)
+#define L1C_BIST0_FIXED BIT(2)
+#define L1C_BIST0_FAIL BIT(1)
+#define L1C_BIST0_START BIT(0)
+
+#define L1C_BIST1 0x1420
+#define L1C_BIST1_COL_MASK ASHFT24(0x3FUL)
+#define L1C_BIST1_COL_SHIFT 24
+#define L1C_BIST1_ROW_MASK ASHFT12(0xFFFUL)
+#define L1C_BIST1_ROW_SHIFT 12
+#define L1C_BIST1_STEP_MASK ASHFT8(0xFUL)
+#define L1C_BIST1_STEP_SHIFT 8
+#define L1C_BIST1_PATTERN_MASK ASHFT4(7UL)
+#define L1C_BIST1_PATTERN_SHIFT 4
+#define L1C_BIST1_CRIT BIT(3)
+#define L1C_BIST1_FIXED BIT(2)
+#define L1C_BIST1_FAIL BIT(1)
+#define L1C_BIST1_START BIT(0)
+
+#define L1C_SERDES 0x1424
+#define L1C_SERDES_PHYCLK_SLWDWN BIT(18)
+#define L1C_SERDES_MACCLK_SLWDWN BIT(17)
+#define L1C_SERDES_SELFB_PLL_MASK ASHFT14(3UL)
+#define L1C_SERDES_SELFB_PLL_SHIFT 14
+#define L1C_SERDES_PHYCLK_SEL_GTX BIT(13) /* 1:gtx_clk, 0:25M */
+#define L1C_SERDES_PCIECLK_SEL_SRDS BIT(12) /* 1:serdes,0:25M */
+#define L1C_SERDES_BUFS_RX_EN BIT(11)
+#define L1C_SERDES_PD_RX BIT(10)
+#define L1C_SERDES_PLL_EN BIT(9)
+#define L1C_SERDES_EN BIT(8)
+#define L1C_SERDES_SELFB_PLL_SEL_CSR BIT(6) /* 0:state-machine,1:csr */
+#define L1C_SERDES_SELFB_PLL_CSR_MASK ASHFT4(3UL)
+#define L1C_SERDES_SELFB_PLL_CSR_SHIFT 4
+#define L1C_SERDES_SELFB_PLL_CSR_4 3 /* 4-12% OV-CLK */
+#define L1C_SERDES_SELFB_PLL_CSR_0 2 /* 0-4% OV-CLK */
+#define L1C_SERDES_SELFB_PLL_CSR_12 1 /* 12-18% OV-CLK */
+#define L1C_SERDES_SELFB_PLL_CSR_18 0 /* 18-25% OV-CLK */
+#define L1C_SERDES_VCO_SLOW BIT(3)
+#define L1C_SERDES_VCO_FAST BIT(2)
+#define L1C_SERDES_LOCKDCT_EN BIT(1)
+#define L1C_SERDES_LOCKDCTED BIT(0)
+
+#define L1C_LED_CTRL 0x1428
+#define L1C_LED_CTRL_PATMAP2_MASK ASHFT8(3UL)
+#define L1C_LED_CTRL_PATMAP2_SHIFT 8
+#define L1C_LED_CTRL_PATMAP1_MASK ASHFT6(3UL)
+#define L1C_LED_CTRL_PATMAP1_SHIFT 6
+#define L1C_LED_CTRL_PATMAP0_MASK ASHFT4(3UL)
+#define L1C_LED_CTRL_PATMAP0_SHIFT 4
+#define L1C_LED_CTRL_D3_MODE_MASK ASHFT2(3UL)
+#define L1C_LED_CTRL_D3_MODE_SHIFT 2
+#define L1C_LED_CTRL_D3_MODE_NORMAL 0
+#define L1C_LED_CTRL_D3_MODE_WOL_DIS 1
+#define L1C_LED_CTRL_D3_MODE_WOL_ANY 2
+#define L1C_LED_CTRL_D3_MODE_WOL_EN 3
+#define L1C_LED_CTRL_DUTY_CYCL_MASK ASHFT0(3UL)
+#define L1C_LED_CTRL_DUTY_CYCL_SHIFT 0
+#define L1C_LED_CTRL_DUTY_CYCL_50 0 /* 50% */
+#define L1C_LED_CTRL_DUTY_CYCL_125 1 /* 12.5% */
+#define L1C_LED_CTRL_DUTY_CYCL_25 2 /* 25% */
+#define L1C_LED_CTRL_DUTY_CYCL_75 3 /* 75% */
+
+#define L1C_LED_PATN 0x142C
+#define L1C_LED_PATN1_MASK ASHFT16(0xFFFFUL)
+#define L1C_LED_PATN1_SHIFT 16
+#define L1C_LED_PATN0_MASK ASHFT0(0xFFFFUL)
+#define L1C_LED_PATN0_SHIFT 0
+
+#define L1C_LED_PATN2 0x1430
+#define L1C_LED_PATN2_MASK ASHFT0(0xFFFFUL)
+#define L1C_LED_PATN2_SHIFT 0
+
+#define L1C_SYSALV 0x1434
+#define L1C_SYSALV_FLAG BIT(0)
+
+#define L1C_PCIERR_INST 0x1438
+#define L1C_PCIERR_INST_TX_RATE_MASK ASHFT4(0xFUL)
+#define L1C_PCIERR_INST_TX_RATE_SHIFT 4
+#define L1C_PCIERR_INST_RX_RATE_MASK ASHFT0(0xFUL)
+#define L1C_PCIERR_INST_RX_RATE_SHIFT 0
+
+#define L1C_LPI_DECISN_TIMER 0x143C
+#define L1C_LPI_DECISN_TIMER_L2CB 0x7D00
+
+#define L1C_LPI_CTRL 0x1440
+#define L1C_LPI_CTRL_CHK_DA BIT(31)
+#define L1C_LPI_CTRL_ENH_TO_MASK ASHFT12(0x1FFFUL)
+#define L1C_LPI_CTRL_ENH_TO_SHIFT 12
+#define L1C_LPI_CTRL_ENH_TH_MASK ASHFT6(0x1FUL)
+#define L1C_LPI_CTRL_ENH_TH_SHIFT 6
+#define L1C_LPI_CTRL_ENH_EN BIT(5)
+#define L1C_LPI_CTRL_CHK_RX BIT(4)
+#define L1C_LPI_CTRL_CHK_STATE BIT(3)
+#define L1C_LPI_CTRL_GMII BIT(2)
+#define L1C_LPI_CTRL_TO_PHY BIT(1)
+#define L1C_LPI_CTRL_EN BIT(0)
+
+#define L1C_LPI_WAIT 0x1444
+#define L1C_LPI_WAIT_TIMER_MASK ASHFT0(0xFFFFUL)
+#define L1C_LPI_WAIT_TIMER_SHIFT 0
+
+#define L1C_MAC_CTRL 0x1480
+#define L1C_MAC_CTRL_WOLSPED_SWEN BIT(30) /* 0:phy,1:sw */
+#define L1C_MAC_CTRL_MHASH_ALG_HI5B BIT(29) /* 1:legacy, 0:marvl(low5b)*/
+#define L1C_MAC_CTRL_SPAUSE_EN BIT(28)
+#define L1C_MAC_CTRL_DBG_EN BIT(27)
+#define L1C_MAC_CTRL_BRD_EN BIT(26)
+#define L1C_MAC_CTRL_MULTIALL_EN BIT(25)
+#define L1C_MAC_CTRL_RX_XSUM_EN BIT(24)
+#define L1C_MAC_CTRL_THUGE BIT(23)
+#define L1C_MAC_CTRL_MBOF BIT(22)
+#define L1C_MAC_CTRL_SPEED_MASK ASHFT20(3UL)
+#define L1C_MAC_CTRL_SPEED_SHIFT 20
+#define L1C_MAC_CTRL_SPEED_10_100 1
+#define L1C_MAC_CTRL_SPEED_1000 2
+#define L1C_MAC_CTRL_SIMR BIT(19)
+#define L1C_MAC_CTRL_SSTCT BIT(17)
+#define L1C_MAC_CTRL_TPAUSE BIT(16)
+#define L1C_MAC_CTRL_PROMISC_EN BIT(15)
+#define L1C_MAC_CTRL_VLANSTRIP BIT(14)
+#define L1C_MAC_CTRL_PRMBLEN_MASK ASHFT10(0xFUL)
+#define L1C_MAC_CTRL_PRMBLEN_SHIFT 10
+#define L1C_MAC_CTRL_RHUGE_EN BIT(9)
+#define L1C_MAC_CTRL_FLCHK BIT(8)
+#define L1C_MAC_CTRL_PCRCE BIT(7)
+#define L1C_MAC_CTRL_CRCE BIT(6)
+#define L1C_MAC_CTRL_FULLD BIT(5)
+#define L1C_MAC_CTRL_LPBACK_EN BIT(4)
+#define L1C_MAC_CTRL_RXFC_EN BIT(3)
+#define L1C_MAC_CTRL_TXFC_EN BIT(2)
+#define L1C_MAC_CTRL_RX_EN BIT(1)
+#define L1C_MAC_CTRL_TX_EN BIT(0)
+
+#define L1C_GAP 0x1484
+#define L1C_GAP_IPGR2_MASK ASHFT24(0x7FUL)
+#define L1C_GAP_IPGR2_SHIFT 24
+#define L1C_GAP_IPGR1_MASK ASHFT16(0x7FUL)
+#define L1C_GAP_IPGR1_SHIFT 16
+#define L1C_GAP_MIN_IFG_MASK ASHFT8(0xFFUL)
+#define L1C_GAP_MIN_IFG_SHIFT 8
+#define L1C_GAP_IPGT_MASK ASHFT0(0x7FUL)
+#define L1C_GAP_IPGT_SHIFT 0
+
+#define L1C_STAD0 0x1488
+#define L1C_STAD1 0x148C
+
+#define L1C_HASH_TBL0 0x1490
+#define L1C_HASH_TBL1 0x1494
+
+#define L1C_HALFD 0x1498
+#define L1C_HALFD_JAM_IPG_MASK ASHFT24(0xFUL)
+#define L1C_HALFD_JAM_IPG_SHIFT 24
+#define L1C_HALFD_ABEBT_MASK ASHFT20(0xFUL)
+#define L1C_HALFD_ABEBT_SHIFT 20
+#define L1C_HALFD_ABEBE BIT(19)
+#define L1C_HALFD_BPNB BIT(18)
+#define L1C_HALFD_NOBO BIT(17)
+#define L1C_HALFD_EDXSDFR BIT(16)
+#define L1C_HALFD_RETRY_MASK ASHFT12(0xFUL)
+#define L1C_HALFD_RETRY_SHIFT 12
+#define L1C_HALFD_LCOL_MASK ASHFT0(0x3FFUL)
+#define L1C_HALFD_LCOL_SHIFT 0
+
+#define L1C_MTU 0x149C
+#define L1C_MTU_JUMBO_TH 1514
+#define L1C_MTU_STD_ALGN 1536
+#define L1C_MTU_MIN 64
+
+#define L1C_WOL0 0x14A0
+#define L1C_WOL0_PT7_MATCH BIT(31)
+#define L1C_WOL0_PT6_MATCH BIT(30)
+#define L1C_WOL0_PT5_MATCH BIT(29)
+#define L1C_WOL0_PT4_MATCH BIT(28)
+#define L1C_WOL0_PT3_MATCH BIT(27)
+#define L1C_WOL0_PT2_MATCH BIT(26)
+#define L1C_WOL0_PT1_MATCH BIT(25)
+#define L1C_WOL0_PT0_MATCH BIT(24)
+#define L1C_WOL0_PT7_EN BIT(23)
+#define L1C_WOL0_PT6_EN BIT(22)
+#define L1C_WOL0_PT5_EN BIT(21)
+#define L1C_WOL0_PT4_EN BIT(20)
+#define L1C_WOL0_PT3_EN BIT(19)
+#define L1C_WOL0_PT2_EN BIT(18)
+#define L1C_WOL0_PT1_EN BIT(17)
+#define L1C_WOL0_PT0_EN BIT(16)
+#define L1C_WOL0_IPV4_SYNC_EVT BIT(14)
+#define L1C_WOL0_IPV6_SYNC_EVT BIT(13)
+#define L1C_WOL0_LINK_EVT BIT(10)
+#define L1C_WOL0_MAGIC_EVT BIT(9)
+#define L1C_WOL0_PATTERN_EVT BIT(8)
+#define L1D_WOL0_OOB_EN BIT(6)
+#define L1C_WOL0_PME_LINK BIT(5)
+#define L1C_WOL0_LINK_EN BIT(4)
+#define L1C_WOL0_PME_MAGIC_EN BIT(3)
+#define L1C_WOL0_MAGIC_EN BIT(2)
+#define L1C_WOL0_PME_PATTERN_EN BIT(1)
+#define L1C_WOL0_PATTERN_EN BIT(0)
+
+#define L1C_WOL1 0x14A4
+#define L1C_WOL1_PT3_LEN_MASK ASHFT24(0xFFUL)
+#define L1C_WOL1_PT3_LEN_SHIFT 24
+#define L1C_WOL1_PT2_LEN_MASK ASHFT16(0xFFUL)
+#define L1C_WOL1_PT2_LEN_SHIFT 16
+#define L1C_WOL1_PT1_LEN_MASK ASHFT8(0xFFUL)
+#define L1C_WOL1_PT1_LEN_SHIFT 8
+#define L1C_WOL1_PT0_LEN_MASK ASHFT0(0xFFUL)
+#define L1C_WOL1_PT0_LEN_SHIFT 0
+
+#define L1C_WOL2 0x14A8
+#define L1C_WOL2_PT7_LEN_MASK ASHFT24(0xFFUL)
+#define L1C_WOL2_PT7_LEN_SHIFT 24
+#define L1C_WOL2_PT6_LEN_MASK ASHFT16(0xFFUL)
+#define L1C_WOL2_PT6_LEN_SHIFT 16
+#define L1C_WOL2_PT5_LEN_MASK ASHFT8(0xFFUL)
+#define L1C_WOL2_PT5_LEN_SHIFT 8
+#define L1C_WOL2_PT4_LEN_MASK ASHFT0(0xFFUL)
+#define L1C_WOL2_PT4_LEN_SHIFT 0
+
+#define L1C_SRAM0 0x1500
+#define L1C_SRAM_RFD_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1C_SRAM_RFD_TAIL_ADDR_SHIFT 16
+#define L1C_SRAM_RFD_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1C_SRAM_RFD_HEAD_ADDR_SHIFT 0
+#define L1C_SRAM_RFD_HT_L2CB1 0x02bf02a0L
+
+#define L1C_SRAM1 0x1510
+#define L1C_SRAM_RFD_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1C_SRAM_RFD_LEN_SHIFT 0
+
+#define L1C_SRAM2 0x1518
+#define L1C_SRAM_TRD_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1C_SRAM_TRD_TAIL_ADDR_SHIFT 16
+#define L1C_SRAM_TRD_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1C_SRAM_TRD_HEAD_ADDR_SHIFT 0
+#define L1C_SRAM_TRD_HT_L2CB1 0x03df03c0L
+
+#define L1C_SRAM3 0x151C
+#define L1C_SRAM_TRD_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1C_SRAM_TRD_LEN_SHIFT 0
+
+#define L1C_SRAM4 0x1520
+#define L1C_SRAM_RXF_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1C_SRAM_RXF_TAIL_ADDR_SHIFT 16
+#define L1C_SRAM_RXF_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1C_SRAM_RXF_HEAD_ADDR_SHIFT 0
+#define L1C_SRAM_RXF_HT_L2CB1 0x029f0000L
+
+#define L1C_SRAM5 0x1524
+#define L1C_SRAM_RXF_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1C_SRAM_RXF_LEN_SHIFT 0
+#define L1C_SRAM_RXF_LEN_8K (8*1024)
+#define L1C_SRAM_RXF_LEN_L2CB1 0x02a0L
+
+#define L1C_SRAM6 0x1528
+#define L1C_SRAM_TXF_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1C_SRAM_TXF_TAIL_ADDR_SHIFT 16
+#define L1C_SRAM_TXF_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1C_SRAM_TXF_HEAD_ADDR_SHIFT 0
+#define L1C_SRAM_TXF_HT_L2CB1 0x03bf02c0L
+
+#define L1C_SRAM7 0x152C
+#define L1C_SRAM_TXF_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1C_SRAM_TXF_LEN_SHIFT 0
+#define L1C_SRAM_TXF_LEN_L2CB1 0x0100L
+
+#define L1C_SRAM8 0x1530
+#define L1C_SRAM_PATTERN_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1C_SRAM_PATTERN_ADDR_SHIFT 16
+#define L1C_SRAM_TSO_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1C_SRAM_TSO_ADDR_SHIFT 0
+
+#define L1C_SRAM9 0x1534
+#define L1C_SRAM_LOAD_PTR BIT(0)
+
+#define L1C_RX_BASE_ADDR_HI 0x1540
+
+#define L1C_TX_BASE_ADDR_HI 0x1544
+
+#define L1C_RFD_ADDR_LO 0x1550
+#define L1C_RFD_RING_SZ 0x1560
+#define L1C_RFD_BUF_SZ 0x1564
+#define L1C_RFD_BUF_SZ_MASK ASHFT0(0xFFFFUL)
+#define L1C_RFD_BUF_SZ_SHIFT 0
+
+#define L1C_RRD_ADDR_LO 0x1568
+#define L1C_RRD_RING_SZ 0x1578
+#define L1C_RRD_RING_SZ_MASK ASHFT0(0xFFFUL)
+#define L1C_RRD_RING_SZ_SHIFT 0
+
+#define L1C_TPD_PRI1_ADDR_LO 0x157C
+#define L1C_TPD_PRI0_ADDR_LO 0x1580 /* LOWEST PRIORITY */
+
+#define L1C_TPD_PRI1_PIDX 0x15F0 /* 16BIT */
+#define L1C_TPD_PRI0_PIDX 0x15F2 /* 16BIT */
+
+#define L1C_TPD_PRI1_CIDX 0x15F4 /* 16BIT */
+#define L1C_TPD_PRI0_CIDX 0x15F6 /* 16BIT */
+
+#define L1C_TPD_RING_SZ 0x1584
+#define L1C_TPD_RING_SZ_MASK ASHFT0(0xFFFFUL)
+#define L1C_TPD_RING_SZ_SHIFT 0
+
+#define L1C_TXQ0 0x1590
+#define L1C_TXQ0_TXF_BURST_PREF_MASK ASHFT16(0xFFFFUL)
+#define L1C_TXQ0_TXF_BURST_PREF_SHIFT 16
+#define L1C_TXQ0_TXF_BURST_PREF_DEF 0x200
+#define L1C_TXQ0_TXF_BURST_PREF_L2CB 0x40
+#define L1D_TXQ0_PENDING_CLR BIT(8)
+#define L1C_TXQ0_LSO_8023_EN BIT(7)
+#define L1C_TXQ0_MODE_ENHANCE BIT(6)
+#define L1C_TXQ0_EN BIT(5)
+#define L1C_TXQ0_SUPT_IPOPT BIT(4)
+#define L1C_TXQ0_TPD_BURSTPREF_MASK ASHFT0(0xFUL)
+#define L1C_TXQ0_TPD_BURSTPREF_SHIFT 0
+#define L1C_TXQ0_TPD_BURSTPREF_DEF 5
+
+#define L1C_TXQ1 0x1594
+#define L1C_TXQ1_JUMBO_TSOTHR_MASK ASHFT0(0x7FFUL) /* 8BYTES UNIT */
+#define L1C_TXQ1_JUMBO_TSOTHR_SHIFT 0
+#define L1C_TXQ1_JUMBO_TSO_TH (7*1024) /* byte */
+
+#define L1C_TXQ2 0x1598 /* ENTER L1 CONTROL */
+#define L1C_TXQ2_BURST_EN BIT(31)
+#define L1C_TXQ2_BURST_HI_WM_MASK ASHFT16(0xFFFUL)
+#define L1C_TXQ2_BURST_HI_WM_SHIFT 16
+#define L1C_TXQ2_BURST_LO_WM_MASK ASHFT0(0xFFFUL)
+#define L1C_TXQ2_BURST_LO_WM_SHIFT 0
+
+#define L1C_RFD_PIDX 0x15E0
+#define L1C_RFD_PIDX_MASK ASHFT0(0xFFFUL)
+#define L1C_RFD_PIDX_SHIFT 0
+
+#define L1C_RFD_CIDX 0x15F8
+#define L1C_RFD_CIDX_MASK ASHFT0(0xFFFUL)
+#define L1C_RFD_CIDX_SHIFT 0
+
+#define L1C_RXQ0 0x15A0
+#define L1C_RXQ0_EN BIT(31)
+#define L1C_RXQ0_CUT_THRU_EN BIT(30)
+#define L1C_RXQ0_RSS_HASH_EN BIT(29)
+#define L1C_RXQ0_NON_IP_QTBL BIT(28) /* 0:q0,1:table */
+#define L1C_RXQ0_RSS_MODE_MASK ASHFT26(3UL)
+#define L1C_RXQ0_RSS_MODE_SHIFT 26
+#define L1C_RXQ0_RSS_MODE_DIS 0
+#define L1C_RXQ0_RSS_MODE_SQSI 1
+#define L1C_RXQ0_RSS_MODE_MQSI 2
+#define L1C_RXQ0_RSS_MODE_MQMI 3
+#define L1C_RXQ0_NUM_RFD_PREF_MASK ASHFT20(0x3FUL)
+#define L1C_RXQ0_NUM_RFD_PREF_SHIFT 20
+#define L1C_RXQ0_NUM_RFD_PREF_DEF 8
+#define L1C_RXQ0_RSS_HSTYP_IPV6_TCP_EN BIT(19)
+#define L1C_RXQ0_RSS_HSTYP_IPV6_EN BIT(18)
+#define L1C_RXQ0_RSS_HSTYP_IPV4_TCP_EN BIT(17)
+#define L1C_RXQ0_RSS_HSTYP_IPV4_EN BIT(16)
+#define L1C_RXQ0_RSS_HSTYP_ALL (\
+ L1C_RXQ0_RSS_HSTYP_IPV6_TCP_EN |\
+ L1C_RXQ0_RSS_HSTYP_IPV4_TCP_EN |\
+ L1C_RXQ0_RSS_HSTYP_IPV6_EN |\
+ L1C_RXQ0_RSS_HSTYP_IPV4_EN)
+#define L1C_RXQ0_IDT_TBL_SIZE_MASK ASHFT8(0xFFUL)
+#define L1C_RXQ0_IDT_TBL_SIZE_SHIFT 8
+#define L1C_RXQ0_IDT_TBL_SIZE_DEF 0x80
+#define L1C_RXQ0_IPV6_PARSE_EN BIT(7)
+#define L1C_RXQ0_ASPM_THRESH_MASK ASHFT0(3UL)
+#define L1C_RXQ0_ASPM_THRESH_SHIFT 0
+#define L1C_RXQ0_ASPM_THRESH_NO 0
+#define L1C_RXQ0_ASPM_THRESH_1M 1
+#define L1C_RXQ0_ASPM_THRESH_10M 2
+#define L1C_RXQ0_ASPM_THRESH_100M 3
+
+#define L1C_RXQ1 0x15A4
+#define L1C_RXQ1_RFD_PREF_DOWN_MASK ASHFT6(0x3FUL)
+#define L1C_RXQ1_RFD_PREF_DOWN_SHIFT 6
+#define L1C_RXQ1_RFD_PREF_UP_MASK ASHFT0(0x3FUL)
+#define L1C_RXQ1_RFD_PREF_UP_SHIFT 0
+
+#define L1C_RXQ2 0x15A8
+/* XOFF: when used SRAM falls below this, notify the peer to send again */
+#define L1C_RXQ2_RXF_XOFF_THRESH_MASK ASHFT16(0xFFFUL)
+#define L1C_RXQ2_RXF_XOFF_THRESH_SHIFT 16
+#define L1C_RXQ2_RXF_XON_THRESH_MASK ASHFT0(0xFFFUL)
+#define L1C_RXQ2_RXF_XON_THRESH_SHIFT 0
+
+#define L1C_RXQ3 0x15AC
+#define L1C_RXQ3_RXD_TIMER_MASK ASHFT16(0xFFFFUL)
+#define L1C_RXQ3_RXD_TIMER_SHIFT 16
+#define L1C_RXQ3_RXD_THRESH_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1C_RXQ3_RXD_THRESH_SHIFT 0
+
+#define L1C_DMA 0x15C0
+#define L1C_DMA_WPEND_CLR BIT(30)
+#define L1C_DMA_RPEND_CLR BIT(29)
+#define L1C_DMA_WDLY_CNT_MASK ASHFT16(0xFUL)
+#define L1C_DMA_WDLY_CNT_SHIFT 16
+#define L1C_DMA_WDLY_CNT_DEF 4
+#define L1C_DMA_RDLY_CNT_MASK ASHFT11(0x1FUL)
+#define L1C_DMA_RDLY_CNT_SHIFT 11
+#define L1C_DMA_RDLY_CNT_DEF 15
+#define L1C_DMA_RREQ_PRI_DATA BIT(10) /* 0:tpd, 1:data */
+#define L1C_DMA_WREQ_BLEN_MASK ASHFT7(7UL)
+#define L1C_DMA_WREQ_BLEN_SHIFT 7
+#define L1C_DMA_RREQ_BLEN_MASK ASHFT4(7UL)
+#define L1C_DMA_RREQ_BLEN_SHIFT 4
+#define L1C_DMA_RCB_LEN128 BIT(3) /* 0:64bytes,1:128bytes */
+#define L1C_DMA_RORDER_MODE_MASK ASHFT0(7UL)
+#define L1C_DMA_RORDER_MODE_SHIFT 0
+#define L1C_DMA_RORDER_MODE_OUT 4
+#define L1C_DMA_RORDER_MODE_ENHANCE 2
+#define L1C_DMA_RORDER_MODE_IN 1
+
+#define L1C_SMB_TIMER 0x15C4
+
+#define L1C_TINT_TPD_THRSHLD 0x15C8
+
+#define L1C_TINT_TIMER 0x15CC
+
+#define L1C_ISR 0x1600
+#define L1C_ISR_DIS BIT(31)
+#define L1C_ISR_PCIE_LNKDOWN BIT(26)
+#define L1C_ISR_PCIE_CERR BIT(25)
+#define L1C_ISR_PCIE_NFERR BIT(24)
+#define L1C_ISR_PCIE_FERR BIT(23)
+#define L1C_ISR_PCIE_UR BIT(22)
+#define L1C_ISR_MAC_TX BIT(21)
+#define L1C_ISR_MAC_RX BIT(20)
+#define L1C_ISR_RX_Q0 BIT(16)
+#define L1C_ISR_TX_Q0 BIT(15)
+#define L1C_ISR_TXQ_TO BIT(14)
+#define L1C_ISR_PHY_LPW BIT(13)
+#define L1C_ISR_PHY BIT(12)
+#define L1C_ISR_TX_CREDIT BIT(11)
+#define L1C_ISR_DMAW BIT(10)
+#define L1C_ISR_DMAR BIT(9)
+#define L1C_ISR_TXF_UR BIT(8)
+#define L1C_ISR_RFD_UR BIT(4)
+#define L1C_ISR_RXF_OV BIT(3)
+#define L1C_ISR_MANU BIT(2)
+#define L1C_ISR_TIMER BIT(1)
+#define L1C_ISR_SMB BIT(0)
+
+#define L1C_IMR 0x1604
+
+#define L1C_INT_RETRIG 0x1608 /* re-send deassert/assert
+ * if sw doesn't respond */
+#define L1C_INT_RETRIG_TO 20000 /* 40 ms */
+
+/* WOL mask register only for L1Dv2.0 and later chips */
+#define L1D_PATTERN_MASK 0x1620 /* 128bytes, sleep state */
+#define L1D_PATTERN_MASK_LEN 128 /* 128bytes, 32DWORDs */
+
+#define L1C_BTROM_CFG 0x1800 /* pwon rst */
+
+#define L1C_DRV 0x1804 /* pwon rst */
+/* bit definition is in lx_hwcomm.h */
+
+#define L1C_DRV_ERR1 0x1808 /* perst */
+#define L1C_DRV_ERR1_GEN BIT(31) /* general err */
+#define L1C_DRV_ERR1_NOR BIT(30) /* rrd.nor */
+#define L1C_DRV_ERR1_TRUNC BIT(29)
+#define L1C_DRV_ERR1_RES BIT(28)
+#define L1C_DRV_ERR1_INTFATAL BIT(27)
+#define L1C_DRV_ERR1_TXQPEND BIT(26)
+#define L1C_DRV_ERR1_DMAW BIT(25)
+#define L1C_DRV_ERR1_DMAR BIT(24)
+#define L1C_DRV_ERR1_PCIELNKDWN BIT(23)
+#define L1C_DRV_ERR1_PKTSIZE BIT(22)
+#define L1C_DRV_ERR1_FIFOFUL BIT(21)
+#define L1C_DRV_ERR1_RFDUR BIT(20)
+#define L1C_DRV_ERR1_RRDSI BIT(19)
+#define L1C_DRV_ERR1_UPDATE BIT(18)
+
+#define L1C_DRV_ERR2 0x180C /* perst */
+
+#define L1C_CLK_GATE 0x1814
+#define L1C_CLK_GATE_RXMAC BIT(5)
+#define L1C_CLK_GATE_TXMAC BIT(4)
+#define L1C_CLK_GATE_RXQ BIT(3)
+#define L1C_CLK_GATE_TXQ BIT(2)
+#define L1C_CLK_GATE_DMAR BIT(1)
+#define L1C_CLK_GATE_DMAW BIT(0)
+#define L1C_CLK_GATE_ALL (\
+ L1C_CLK_GATE_RXMAC |\
+ L1C_CLK_GATE_TXMAC |\
+ L1C_CLK_GATE_RXQ |\
+ L1C_CLK_GATE_TXQ |\
+ L1C_CLK_GATE_DMAR |\
+ L1C_CLK_GATE_DMAW)
+
+#define L1C_DBG_ADDR 0x1900 /* DWORD reg */
+#define L1C_DBG_DATA 0x1904 /* DWORD reg */
+
+/***************************** IO mapping registers ***************************/
+#define L1C_IO_ADDR 0x00 /* DWORD reg */
+#define L1C_IO_DATA 0x04 /* DWORD reg */
+#define L1C_IO_MASTER 0x08 /* DWORD same as reg0x1400 */
+#define L1C_IO_MAC_CTRL 0x0C /* DWORD same as reg0x1480*/
+#define L1C_IO_ISR 0x10 /* DWORD same as reg0x1600 */
+#define L1C_IO_IMR 0x14 /* DWORD same as reg0x1604 */
+#define L1C_IO_TPD_PRI1_PIDX 0x18 /* WORD same as reg0x15F0 */
+#define L1C_IO_TPD_PRI0_PIDX 0x1A /* WORD same as reg0x15F2 */
+#define L1C_IO_TPD_PRI1_CIDX 0x1C /* WORD same as reg0x15F4 */
+#define L1C_IO_TPD_PRI0_CIDX 0x1E /* WORD same as reg0x15F6 */
+#define L1C_IO_RFD_PIDX 0x20 /* WORD same as reg0x15E0 */
+#define L1C_IO_RFD_CIDX 0x30 /* WORD same as reg0x15F8 */
+#define L1C_IO_MDIO 0x38 /* WORD same as reg0x1414 */
+#define L1C_IO_PHY_CTRL 0x3C /* DWORD same as reg0x140C */
+
+
+/********************* PHY regs definition ***************************/
+
+/* Autoneg Advertisement Register (0x4) */
+#define L1C_ADVERTISE_SPEED_MASK 0x01E0
+#define L1C_ADVERTISE_DEFAULT_CAP 0x0DE0 /* diff with L1C */
+
+/* 1000BASE-T Control Register (0x9) */
+#define L1C_GIGA_CR_1000T_HD_CAPS 0x0100
+#define L1C_GIGA_CR_1000T_FD_CAPS 0x0200
+#define L1C_GIGA_CR_1000T_REPEATER_DTE 0x0400
+#define L1C_GIGA_CR_1000T_MS_VALUE 0x0800
+#define L1C_GIGA_CR_1000T_MS_ENABLE 0x1000
+#define L1C_GIGA_CR_1000T_TEST_MODE_NORMAL 0x0000
+#define L1C_GIGA_CR_1000T_TEST_MODE_1 0x2000
+#define L1C_GIGA_CR_1000T_TEST_MODE_2 0x4000
+#define L1C_GIGA_CR_1000T_TEST_MODE_3 0x6000
+#define L1C_GIGA_CR_1000T_TEST_MODE_4 0x8000
+#define L1C_GIGA_CR_1000T_SPEED_MASK 0x0300
+#define L1C_GIGA_CR_1000T_DEFAULT_CAP 0x0300
+
+/* 1000BASE-T Status Register */
+#define L1C_MII_GIGA_SR 0x0A
+
+/* PHY Specific Status Register */
+#define L1C_MII_GIGA_PSSR 0x11
+#define L1C_GIGA_PSSR_FC_RXEN 0x0004
+#define L1C_GIGA_PSSR_FC_TXEN 0x0008
+#define L1C_GIGA_PSSR_SPD_DPLX_RESOLVED 0x0800
+#define L1C_GIGA_PSSR_DPLX 0x2000
+#define L1C_GIGA_PSSR_SPEED 0xC000
+#define L1C_GIGA_PSSR_10MBS 0x0000
+#define L1C_GIGA_PSSR_100MBS 0x4000
+#define L1C_GIGA_PSSR_1000MBS 0x8000
+
+/* PHY Interrupt Enable Register */
+#define L1C_MII_IER 0x12
+#define L1C_IER_LINK_UP 0x0400
+#define L1C_IER_LINK_DOWN 0x0800
+
+/* PHY Interrupt Status Register */
+#define L1C_MII_ISR 0x13
+#define L1C_ISR_LINK_UP 0x0400
+#define L1C_ISR_LINK_DOWN 0x0800
+
+/* Cable-Detect-Test Control Register */
+#define L1C_MII_CDTC 0x16
+#define L1C_CDTC_EN 1 /* sc */
+#define L1C_CDTC_PAIR_MASK ASHFT8(3U)
+#define L1C_CDTC_PAIR_SHIFT 8
+
+/* Cable-Detect-Test Status Register */
+#define L1C_MII_CDTS 0x1C
+#define L1C_CDTS_STATUS_MASK ASHFT8(3U)
+#define L1C_CDTS_STATUS_SHIFT 8
+#define L1C_CDTS_STATUS_NORMAL 0
+#define L1C_CDTS_STATUS_SHORT 1
+#define L1C_CDTS_STATUS_OPEN 2
+#define L1C_CDTS_STATUS_INVALID 3
+
+#define L1C_MII_DBG_ADDR 0x1D
+#define L1C_MII_DBG_DATA 0x1E
+
+/***************************** debug port *************************************/
+
+#define L1C_MIIDBG_ANACTRL 0x00
+#define L1C_ANACTRL_CLK125M_DELAY_EN BIT(15)
+#define L1C_ANACTRL_VCO_FAST BIT(14)
+#define L1C_ANACTRL_VCO_SLOW BIT(13)
+#define L1C_ANACTRL_AFE_MODE_EN BIT(12)
+#define L1C_ANACTRL_LCKDET_PHY BIT(11)
+#define L1C_ANACTRL_LCKDET_EN BIT(10)
+#define L1C_ANACTRL_OEN_125M BIT(9)
+#define L1C_ANACTRL_HBIAS_EN BIT(8)
+#define L1C_ANACTRL_HB_EN BIT(7)
+#define L1C_ANACTRL_SEL_HSP BIT(6)
+#define L1C_ANACTRL_CLASSA_EN BIT(5)
+#define L1C_ANACTRL_MANUSWON_SWR_MASK ASHFT2(3U)
+#define L1C_ANACTRL_MANUSWON_SWR_SHIFT 2
+#define L1C_ANACTRL_MANUSWON_SWR_2V 0
+#define L1C_ANACTRL_MANUSWON_SWR_1P9V 1
+#define L1C_ANACTRL_MANUSWON_SWR_1P8V 2
+#define L1C_ANACTRL_MANUSWON_SWR_1P7V 3
+#define L1C_ANACTRL_MANUSWON_BW3_4M BIT(1)
+#define L1C_ANACTRL_RESTART_CAL BIT(0)
+#define L1C_ANACTRL_DEF 0x02EF
+
+#define L1C_MIIDBG_SYSMODCTRL 0x04
+#define L1C_SYSMODCTRL_IECHOADJ_PFMH_PHY BIT(15)
+#define L1C_SYSMODCTRL_IECHOADJ_BIASGEN BIT(14)
+#define L1C_SYSMODCTRL_IECHOADJ_PFML_PHY BIT(13)
+#define L1C_SYSMODCTRL_IECHOADJ_PS_MASK ASHFT10(3U)
+#define L1C_SYSMODCTRL_IECHOADJ_PS_SHIFT 10
+#define L1C_SYSMODCTRL_IECHOADJ_PS_40 3
+#define L1C_SYSMODCTRL_IECHOADJ_PS_20 2
+#define L1C_SYSMODCTRL_IECHOADJ_PS_0 1
+#define L1C_SYSMODCTRL_IECHOADJ_10BT_100MV BIT(6) /* 1:100mv, 0:200mv */
+#define L1C_SYSMODCTRL_IECHOADJ_HLFAP_MASK ASHFT4(3U)
+#define L1C_SYSMODCTRL_IECHOADJ_HLFAP_SHIFT 4
+#define L1C_SYSMODCTRL_IECHOADJ_VDFULBW BIT(3)
+#define L1C_SYSMODCTRL_IECHOADJ_VDBIASHLF BIT(2)
+#define L1C_SYSMODCTRL_IECHOADJ_VDAMPHLF BIT(1)
+#define L1C_SYSMODCTRL_IECHOADJ_VDLANSW BIT(0)
+#define L1C_SYSMODCTRL_IECHOADJ_DEF 0x88BB /* ???? */
+
+#define L1D_MIIDBG_SYSMODCTRL 0x04 /* l1d & l2cb */
+#define L1D_SYSMODCTRL_IECHOADJ_CUR_ADD BIT(15)
+#define L1D_SYSMODCTRL_IECHOADJ_CUR_MASK ASHFT12(7U)
+#define L1D_SYSMODCTRL_IECHOADJ_CUR_SHIFT 12
+#define L1D_SYSMODCTRL_IECHOADJ_VOL_MASK ASHFT8(0xFU)
+#define L1D_SYSMODCTRL_IECHOADJ_VOL_SHIFT 8
+#define L1D_SYSMODCTRL_IECHOADJ_VOL_17ALL 3
+#define L1D_SYSMODCTRL_IECHOADJ_VOL_100M15 1
+#define L1D_SYSMODCTRL_IECHOADJ_VOL_10M17 0
+#define L1D_SYSMODCTRL_IECHOADJ_BIAS1_MASK ASHFT4(0xFU)
+#define L1D_SYSMODCTRL_IECHOADJ_BIAS1_SHIFT 4
+#define L1D_SYSMODCTRL_IECHOADJ_BIAS2_MASK ASHFT0(0xFU)
+#define L1D_SYSMODCTRL_IECHOADJ_BIAS2_SHIFT 0
+#define L1D_SYSMODCTRL_IECHOADJ_DEF 0x4FBB
+
+#define L1C_MIIDBG_SRDSYSMOD 0x05
+#define L1C_SRDSYSMOD_LCKDET_EN BIT(13)
+#define L1C_SRDSYSMOD_PLL_EN BIT(11)
+#define L1C_SRDSYSMOD_SEL_HSP BIT(10)
+#define L1C_SRDSYSMOD_HLFTXDR BIT(9)
+#define L1C_SRDSYSMOD_TXCLK_DELAY_EN BIT(8)
+#define L1C_SRDSYSMOD_TXELECIDLE BIT(7)
+#define L1C_SRDSYSMOD_DEEMP_EN BIT(6)
+#define L1C_SRDSYSMOD_MS_PAD BIT(2)
+#define L1C_SRDSYSMOD_CDR_ADC_VLTG BIT(1)
+#define L1C_SRDSYSMOD_CDR_DAC_1MA BIT(0)
+#define L1C_SRDSYSMOD_DEF 0x2C46
+
+#define L1C_MIIDBG_CFGLPSPD 0x0A
+#define L1C_CFGLPSPD_RSTCNT_MASK ASHFT14(3U)
+#define L1C_CFGLPSPD_RSTCNT_SHIFT 14
+#define L1C_CFGLPSPD_RSTCNT_CLK125SW BIT(13)
+
+#define L1C_MIIDBG_HIBNEG 0x0B
+#define L1C_HIBNEG_PSHIB_EN BIT(15)
+#define L1C_HIBNEG_WAKE_BOTH BIT(14)
+#define L1C_HIBNEG_ONOFF_ANACHG_SUDEN BIT(13)
+#define L1C_HIBNEG_HIB_PULSE BIT(12)
+#define L1C_HIBNEG_GATE_25M_EN BIT(11)
+#define L1C_HIBNEG_RST_80U BIT(10)
+#define L1C_HIBNEG_RST_TIMER_MASK ASHFT8(3U)
+#define L1C_HIBNEG_RST_TIMER_SHIFT 8
+#define L1C_HIBNEG_GTX_CLK_DELAY_MASK ASHFT5(3U)
+#define L1C_HIBNEG_GTX_CLK_DELAY_SHIFT 5
+#define L1C_HIBNEG_BYPSS_BRKTIMER BIT(4)
+#define L1C_HIBNEG_DEF 0xBC40
+
+#define L1C_MIIDBG_TST10BTCFG 0x12
+#define L1C_TST10BTCFG_INTV_TIMER_MASK ASHFT14(3U)
+#define L1C_TST10BTCFG_INTV_TIMER_SHIFT 14
+#define L1C_TST10BTCFG_TRIGGER_TIMER_MASK ASHFT12(3U)
+#define L1C_TST10BTCFG_TRIGGER_TIMER_SHIFT 12
+#define L1C_TST10BTCFG_DIV_MAN_MLT3_EN BIT(11)
+#define L1C_TST10BTCFG_OFF_DAC_IDLE BIT(10)
+#define L1C_TST10BTCFG_LPBK_DEEP BIT(2) /* 1:deep,0:shallow */
+#define L1C_TST10BTCFG_DEF 0x4C04
+
+#define L1C_MIIDBG_AZ_ANADECT 0x15
+#define L1C_AZ_ANADECT_10BTRX_TH BIT(15)
+#define L1C_AZ_ANADECT_BOTH_01CHNL BIT(14)
+#define L1C_AZ_ANADECT_INTV_MASK ASHFT8(0x3FU)
+#define L1C_AZ_ANADECT_INTV_SHIFT 8
+#define L1C_AZ_ANADECT_THRESH_MASK ASHFT4(0xFU)
+#define L1C_AZ_ANADECT_THRESH_SHIFT 4
+#define L1C_AZ_ANADECT_CHNL_MASK ASHFT0(0xFU)
+#define L1C_AZ_ANADECT_CHNL_SHIFT 0
+#define L1C_AZ_ANADECT_DEF 0x3220
+#define L1C_AZ_ANADECT_LONG 0xb210
+
+#define L1D_MIIDBG_MSE16DB 0x18
+#define L1D_MSE16DB_UP 0x05EA
+#define L1D_MSE16DB_DOWN 0x02EA
+
+
+#define L1C_MIIDBG_LEGCYPS 0x29
+#define L1C_LEGCYPS_EN BIT(15)
+#define L1C_LEGCYPS_DAC_AMP1000_MASK ASHFT12(7U)
+#define L1C_LEGCYPS_DAC_AMP1000_SHIFT 12
+#define L1C_LEGCYPS_DAC_AMP100_MASK ASHFT9(7U)
+#define L1C_LEGCYPS_DAC_AMP100_SHIFT 9
+#define L1C_LEGCYPS_DAC_AMP10_MASK ASHFT6(7U)
+#define L1C_LEGCYPS_DAC_AMP10_SHIFT 6
+#define L1C_LEGCYPS_UNPLUG_TIMER_MASK ASHFT3(7U)
+#define L1C_LEGCYPS_UNPLUG_TIMER_SHIFT 3
+#define L1C_LEGCYPS_UNPLUG_DECT_EN BIT(2)
+#define L1C_LEGCYPS_ECNC_PS_EN BIT(0)
+#define L1D_LEGCYPS_DEF 0x129D
+#define L1C_LEGCYPS_DEF 0x36DD
+
+#define L1C_MIIDBG_TST100BTCFG 0x36
+#define L1C_TST100BTCFG_NORMAL_BW_EN BIT(15)
+#define L1C_TST100BTCFG_BADLNK_BYPASS BIT(14)
+#define L1C_TST100BTCFG_SHORTCABL_TH_MASK ASHFT8(0x3FU)
+#define L1C_TST100BTCFG_SHORTCABL_TH_SHIFT 8
+#define L1C_TST100BTCFG_LITCH_EN BIT(7)
+#define L1C_TST100BTCFG_VLT_SW BIT(6)
+#define L1C_TST100BTCFG_LONGCABL_TH_MASK ASHFT0(0x3FU)
+#define L1C_TST100BTCFG_LONGCABL_TH_SHIFT 0
+#define L1C_TST100BTCFG_DEF 0xE12C
+
+#define L1C_MIIDBG_VOLT_CTRL 0x3B
+#define L1C_VOLT_CTRL_CABLE1TH_MASK ASHFT7(0x1FFU)
+#define L1C_VOLT_CTRL_CABLE1TH_SHIFT 7
+#define L1C_VOLT_CTRL_AMPCTRL_MASK ASHFT5(3U)
+#define L1C_VOLT_CTRL_AMPCTRL_SHIFT 5
+#define L1C_VOLT_CTRL_SW_BYPASS BIT(4)
+#define L1C_VOLT_CTRL_SWLOWEST BIT(3)
+#define L1C_VOLT_CTRL_DACAMP10_MASK ASHFT0(7U)
+#define L1C_VOLT_CTRL_DACAMP10_SHIFT 0
+
+#define L1C_MIIDBG_CABLE1TH_DET 0x3E
+#define L1C_CABLE1TH_DET_EN BIT(15)
+
+/***************************** extension **************************************/
+
+/******* dev 3 *********/
+#define L1C_MIIEXT_PCS 3
+
+#define L1C_MIIEXT_CLDCTRL3 0x8003
+#define L1C_CLDCTRL3_BP_CABLE1TH_DET_GT BIT(15)
+#define L1C_CLDCTRL3_AZ_DISAMP BIT(12)
+#define L1C_CLDCTRL3_L2CB 0x4D19
+#define L1C_CLDCTRL3_L1D 0xDD19
+
+#define L1C_MIIEXT_CLDCTRL6 0x8006
+#define L1C_CLDCTRL6_CAB_LEN_MASK ASHFT0(0x1FFU)
+#define L1C_CLDCTRL6_CAB_LEN_SHIFT 0
+#define L1C_CLDCTRL6_CAB_LEN_SHORT 0x50
+
+#define L1C_MIIEXT_CLDCTRL7 0x8007
+#define L1C_CLDCTRL7_VDHLF_BIAS_TH_MASK ASHFT9(0x7FU)
+#define L1C_CLDCTRL7_VDHLF_BIAS_TH_SHIFT 9
+#define L1C_CLDCTRL7_AFE_AZ_MASK ASHFT4(0x1FU)
+#define L1C_CLDCTRL7_AFE_AZ_SHIFT 4
+#define L1C_CLDCTRL7_SIDE_PEAK_TH_MASK ASHFT0(0xFU)
+#define L1C_CLDCTRL7_SIDE_PEAK_TH_SHIFT 0
+#define L1C_CLDCTRL7_DEF 0x6BF6 /* ???? */
+#define L1C_CLDCTRL7_FPGA_DEF 0x0005
+#define L1C_CLDCTRL7_L2CB 0x0175
+
+#define L1C_MIIEXT_AZCTRL 0x8008
+#define L1C_AZCTRL_SHORT_TH_MASK ASHFT8(0xFFU)
+#define L1C_AZCTRL_SHORT_TH_SHIFT 8
+#define L1C_AZCTRL_LONG_TH_MASK ASHFT0(0xFFU)
+#define L1C_AZCTRL_LONG_TH_SHIFT 0
+#define L1C_AZCTRL_DEF 0x1629
+#define L1C_AZCTRL_FPGA_DEF 0x101D
+#define L1C_AZCTRL_L1D 0x2034
+
+#define L1C_MIIEXT_AZCTRL2 0x8009
+#define L1C_AZCTRL2_WAKETRNING_MASK ASHFT8(0xFFU)
+#define L1C_AZCTRL2_WAKETRNING_SHIFT 8
+#define L1C_AZCTRL2_QUIET_TIMER_MASK ASHFT6(3U)
+#define L1C_AZCTRL2_QUIET_TIMER_SHIFT 6
+#define L1C_AZCTRL2_PHAS_JMP2 BIT(4)
+#define L1C_AZCTRL2_CLKTRCV_125MD16 BIT(3)
+#define L1C_AZCTRL2_GATE1000_EN BIT(2)
+#define L1C_AZCTRL2_AVRG_FREQ BIT(1)
+#define L1C_AZCTRL2_PHAS_JMP4 BIT(0)
+#define L1C_AZCTRL2_DEF 0x32C0
+#define L1C_AZCTRL2_FPGA_DEF 0x40C8
+#define L1C_AZCTRL2_L2CB 0xE003
+#define L1C_AZCTRL2_L1D2 0x18C0
+
+
+#define L1C_MIIEXT_AZCTRL4 0x800B
+#define L1C_AZCTRL4_WAKE_STH_L2CB 0x0094
+
+#define L1C_MIIEXT_AZCTRL5 0x800C
+#define L1C_AZCTRL5_WAKE_LTH_L2CB 0x00EB
+
+#define L1C_MIIEXT_AZCTRL6 0x800D
+#define L1C_AZCTRL6_L1D2 0x003F
+
+
+
+/********* dev 7 **********/
+#define L1C_MIIEXT_ANEG 7
+
+#define L1C_MIIEXT_LOCAL_EEEADV 0x3C
+#define L1C_LOCAL_EEEADV_1000BT BIT(2)
+#define L1C_LOCAL_EEEADV_100BT BIT(1)
+
+#define L1C_MIIEXT_REMOTE_EEEADV 0x3D
+#define L1C_REMOTE_EEEADV_1000BT BIT(2)
+#define L1C_REMOTE_EEEADV_100BT BIT(1)
+
+#define L1C_MIIEXT_EEE_ANEG 0x8000
+#define L1C_EEE_ANEG_1000M BIT(2)
+#define L1C_EEE_ANEG_100M BIT(1)
+
+
+
+
+/******************************************************************************/
+
+/* functions */
+
+/* get the permanent mac address
+ * return
+ *     0: success
+ *     non-0: fail
+ */
+u16 l1c_get_perm_macaddr(struct alx_hw *hw, u8 *addr);
+
+
+/* reset mac & dma
+ * return
+ *     0: success
+ *     non-0: fail
+ */
+u16 l1c_reset_mac(struct alx_hw *hw);
+
+/* reset phy
+ * return
+ *     0: success
+ *     non-0: fail
+ */
+u16 l1c_reset_phy(struct alx_hw *hw, bool pws_en, bool az_en, bool ptp_en);
+
+
+/* reset pcie
+ * only resets pcie-related registers (pci command, clk, aspm...)
+ * return
+ *     0: success
+ *     non-0: fail
+ */
+u16 l1c_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en);
+
+
+/* disable/enable MAC/RXQ/TXQ
+ * en
+ *     true: enable
+ *     false: disable
+ * return
+ *     0: success
+ *     non-0: fail
+ */
+u16 l1c_enable_mac(struct alx_hw *hw, bool en, u16 en_ctrl);
+
+/* enable/disable aspm support
+ * this changes settings for phy/mac/pcie
+ */
+u16 l1c_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en, u8 lnk_stat);
+
+
+/* initialize phy for speed / flow control
+ * lnk_cap
+ *     in autoneg mode: the link capabilities to advertise to the peer
+ *     in force mode: the forced speed/duplex
+ */
+u16 l1c_init_phy_spdfc(struct alx_hw *hw, bool auto_neg,
+ u8 lnk_cap, bool fc_en);
+
+/* apply post-link settings to the phy when a link up/down event occurs
+ */
+u16 l1c_post_phy_link(struct alx_hw *hw, bool linkon, u8 wire_spd);
+
+
+/* apply power saving settings before entering suspend mode
+ * NOTE:
+ *     1. the phy link must be established before calling this function
+ *     2. the wol options (pattern, magic, link, etc.) must be configured
+ *        before calling it.
+ */
+u16 l1c_powersaving(struct alx_hw *hw, u8 wire_spd, bool wol_en,
+ bool mac_txen, bool mac_rxen, bool pws_en);
+
+
+/* read phy register */
+u16 l1c_read_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast, u16 reg,
+ u16 *data);
+
+/* write phy register */
+u16 l1c_write_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast, u16 reg,
+ u16 data);
+
+/* phy debug port */
+u16 l1c_read_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 *data);
+u16 l1c_write_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 data);
+
+/* check the configuration of the PHY */
+u16 l1c_get_phy_config(struct alx_hw *hw);
+
+/*
+ * basic mac initialization
+ * most advanced features are not initialized here
+ * MAC/PHY should be reset before calling this function
+ */
+u16 l1c_init_mac(struct alx_hw *hw, u8 *addr, u32 txmem_hi,
+ u32 *tx_mem_lo, u8 tx_qnum, u16 txring_sz,
+ u32 rxmem_hi, u32 rfdmem_lo, u32 rrdmem_lo,
+ u16 rxring_sz, u16 rxbuf_sz, u16 smb_timer,
+ u16 mtu, u16 int_mod, bool hash_legacy);
+
+
+
+#endif /* L1C_HW_H_ */
+
diff --git a/drivers/net/ethernet/atheros/alx/alf_cb.c b/drivers/net/ethernet/atheros/alx/alf_cb.c
new file mode 100644
index 0000000..d267760
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alf_cb.c
@@ -0,0 +1,1187 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+#include <linux/pci_regs.h>
+#include <linux/mii.h>
+
+#include "alf_hw.h"
+
+
+#define ALF_REV_ID_AR8161_B0 0x10
+
+/* definition for MSIX */
+#define ALF_MSIX_ENTRY_BASE 0x2000
+#define ALF_MSIX_ENTRY_SIZE 16
+#define ALF_MSIX_MSG_LOADDR_OFF 0
+#define ALF_MSIX_MSG_HIADDR_OFF 4
+#define ALF_MSIX_MSG_DATA_OFF 8
+#define ALF_MSIX_MSG_CTRL_OFF 12
+
+#define ALF_MSIX_INDEX_RXQ0 0
+#define ALF_MSIX_INDEX_RXQ1 1
+#define ALF_MSIX_INDEX_RXQ2 2
+#define ALF_MSIX_INDEX_RXQ3 3
+#define ALF_MSIX_INDEX_RXQ4 4
+#define ALF_MSIX_INDEX_RXQ5 5
+#define ALF_MSIX_INDEX_RXQ6 6
+#define ALF_MSIX_INDEX_RXQ7 7
+#define ALF_MSIX_INDEX_TXQ0 8
+#define ALF_MSIX_INDEX_TXQ1 9
+#define ALF_MSIX_INDEX_TXQ2 10
+#define ALF_MSIX_INDEX_TXQ3 11
+#define ALF_MSIX_INDEX_TIMER 12
+#define ALF_MSIX_INDEX_ALERT 13
+#define ALF_MSIX_INDEX_SMB 14
+#define ALF_MSIX_INDEX_PHY 15
+
+
+#define ALF_SRAM_BASE L1F_SRAM0
+#define ALF_SRAM(_i, _type) \
+ (ALF_SRAM_BASE + ((_i) * sizeof(_type)))
+
+#define ALF_MIB_BASE L1F_MIB_BASE
+#define ALF_MIB(_i, _type) \
+ (ALF_MIB_BASE + ((_i) * sizeof(_type)))
+
+/* definition for RSS */
+#define ALF_RSS_KEY_BASE L1F_RSS_KEY0
+#define ALF_RSS_IDT_BASE L1F_RSS_IDT_TBL0
+#define ALF_RSS_KEY(_i, _type) \
+ (ALF_RSS_KEY_BASE + ((_i) * sizeof(_type)))
+#define ALF_RSS_TBL(_i, _type) \
+ (L1F_RSS_IDT_TBL0 + ((_i) * sizeof(_type)))
+
+
+/* NIC */
+static int alf_identify_nic(struct alx_hw *hw)
+{
+ u32 drv;
+
+ if (hw->pci_revid < ALX_REV_ID_AR8161_V2_0)
+ return 0;
+
+	/* applies to revisions from V2_0 (b0) onward ... */
+ switch (hw->pci_revid) {
+ default:
+ alx_mem_r32(hw, L1F_DRV, &drv);
+ if (drv & LX_DRV_DISABLE)
+ return -EINVAL;
+ break;
+ }
+ return 0;
+}
+
+
+/* PHY */
+static int alf_read_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 *phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ if (l1f_read_phy(hw, false, ALX_MDIO_DEV_TYPE_NORM, false, reg_addr,
+ phy_data)) {
+		alx_hw_err(hw, "error when reading phy reg\n");
+ retval = -EINVAL;
+ }
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+
+static int alf_write_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ if (l1f_write_phy(hw, false, ALX_MDIO_DEV_TYPE_NORM, false, reg_addr,
+ phy_data)) {
+		alx_hw_err(hw, "error when writing phy reg\n");
+ retval = -EINVAL;
+ }
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+
+static int alf_init_phy(struct alx_hw *hw)
+{
+ u16 phy_id[2];
+ int retval;
+
+ spin_lock_init(&hw->mdio_lock);
+
+ retval = alf_read_phy_reg(hw, MII_PHYSID1, &phy_id[0]);
+ if (retval)
+ return retval;
+ retval = alf_read_phy_reg(hw, MII_PHYSID2, &phy_id[1]);
+ if (retval)
+ return retval;
+ memcpy(&hw->phy_id, phy_id, sizeof(hw->phy_id));
+
+ hw->autoneg_advertised = ALX_LINK_SPEED_1GB_FULL |
+ ALX_LINK_SPEED_10_HALF |
+ ALX_LINK_SPEED_10_FULL |
+ ALX_LINK_SPEED_100_HALF |
+ ALX_LINK_SPEED_100_FULL;
+ return retval;
+}
+
+
+static int alf_reset_phy(struct alx_hw *hw)
+{
+ int retval = 0;
+ bool pws_en, az_en, ptp_en;
+
+ pws_en = az_en = ptp_en = false;
+ CLI_HW_FLAG(PWSAVE_EN);
+ CLI_HW_FLAG(AZ_EN);
+ CLI_HW_FLAG(PTP_EN);
+
+ if (CHK_HW_FLAG(PWSAVE_CAP)) {
+ pws_en = true;
+ SET_HW_FLAG(PWSAVE_EN);
+ }
+
+ if (CHK_HW_FLAG(AZ_CAP)) {
+ az_en = true;
+ SET_HW_FLAG(AZ_EN);
+ }
+
+ if (CHK_HW_FLAG(PTP_CAP)) {
+ ptp_en = true;
+ SET_HW_FLAG(PTP_EN);
+ }
+
+ alx_hw_info(hw, "reset PHY, pws = %d, az = %d, ptp = %d\n",
+ pws_en, az_en, ptp_en);
+ if (l1f_reset_phy(hw, pws_en, az_en, ptp_en)) {
+		alx_hw_err(hw, "error when resetting phy\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+/* LINK */
+static int alf_setup_phy_link(struct alx_hw *hw, u32 speed, bool autoneg,
+ bool fc)
+{
+ u8 link_cap = 0;
+ int retval = 0;
+
+ alx_hw_info(hw, "speed = 0x%x, autoneg = %d\n", speed, autoneg);
+ if (speed & ALX_LINK_SPEED_1GB_FULL)
+ link_cap |= LX_LC_1000F;
+
+ if (speed & ALX_LINK_SPEED_100_FULL)
+ link_cap |= LX_LC_100F;
+
+ if (speed & ALX_LINK_SPEED_100_HALF)
+ link_cap |= LX_LC_100H;
+
+ if (speed & ALX_LINK_SPEED_10_FULL)
+ link_cap |= LX_LC_10F;
+
+ if (speed & ALX_LINK_SPEED_10_HALF)
+ link_cap |= LX_LC_10H;
+
+ if (l1f_init_phy_spdfc(hw, autoneg, link_cap, fc)) {
+		alx_hw_err(hw, "error when initializing phy speed and fc\n");
+ retval = -EINVAL;
+ }
+
+ return retval;
+}
+
+
+static int alf_setup_phy_link_speed(struct alx_hw *hw, u32 speed,
+ bool autoneg, bool fc)
+{
+ /*
+ * Clear autoneg_advertised and set new values based on input link
+ * speed.
+ */
+ hw->autoneg_advertised = 0;
+
+ if (speed & ALX_LINK_SPEED_1GB_FULL)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_1GB_FULL;
+
+ if (speed & ALX_LINK_SPEED_100_FULL)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_100_FULL;
+
+ if (speed & ALX_LINK_SPEED_100_HALF)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_100_HALF;
+
+ if (speed & ALX_LINK_SPEED_10_FULL)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_10_FULL;
+
+ if (speed & ALX_LINK_SPEED_10_HALF)
+ hw->autoneg_advertised |= ALX_LINK_SPEED_10_HALF;
+
+ return alf_setup_phy_link(hw, hw->autoneg_advertised,
+ autoneg, fc);
+}
+
+
+static int alf_check_phy_link(struct alx_hw *hw, u32 *speed, bool *link_up)
+{
+ u16 bmsr, giga;
+ int retval;
+
+	/* link status is latched low; read twice to get the current state */
+	alf_read_phy_reg(hw, MII_BMSR, &bmsr);
+	retval = alf_read_phy_reg(hw, MII_BMSR, &bmsr);
+ if (retval)
+ return retval;
+
+ if (!(bmsr & BMSR_LSTATUS)) {
+ *link_up = false;
+ *speed = ALX_LINK_SPEED_UNKNOWN;
+ return 0;
+ }
+ *link_up = true;
+
+ /* Read PHY Specific Status Register (17) */
+ retval = alf_read_phy_reg(hw, L1F_MII_GIGA_PSSR, &giga);
+ if (retval)
+ return retval;
+
+
+ if (!(giga & L1F_GIGA_PSSR_SPD_DPLX_RESOLVED)) {
+		alx_hw_err(hw, "speed/duplex not resolved\n");
+ return -EINVAL;
+ }
+
+ switch (giga & L1F_GIGA_PSSR_SPEED) {
+ case L1F_GIGA_PSSR_1000MBS:
+ if (giga & L1F_GIGA_PSSR_DPLX)
+ *speed = ALX_LINK_SPEED_1GB_FULL;
+ else
+			alx_hw_err(hw, "1000M half-duplex is invalid\n");
+ break;
+ case L1F_GIGA_PSSR_100MBS:
+ if (giga & L1F_GIGA_PSSR_DPLX)
+ *speed = ALX_LINK_SPEED_100_FULL;
+ else
+ *speed = ALX_LINK_SPEED_100_HALF;
+ break;
+ case L1F_GIGA_PSSR_10MBS:
+ if (giga & L1F_GIGA_PSSR_DPLX)
+ *speed = ALX_LINK_SPEED_10_FULL;
+ else
+ *speed = ALX_LINK_SPEED_10_HALF;
+ break;
+ default:
+ *speed = ALX_LINK_SPEED_UNKNOWN;
+ retval = -EINVAL;
+ break;
+ }
+ return retval;
+}
+
+
+/*
+ * 1. stop_mac
+ * 2. reset mac & dma by reg1400(MASTER)
+ * 3. control speed/duplex, hash-alg
+ * 4. clock switch setting
+ */
+static int alf_reset_mac(struct alx_hw *hw)
+{
+ int retval = 0;
+
+ if (l1f_reset_mac(hw)) {
+		alx_hw_err(hw, "error when resetting mac\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alf_start_mac(struct alx_hw *hw)
+{
+ u16 en_ctrl = 0;
+ int retval = 0;
+
+ /* set link speed param */
+ switch (hw->link_speed) {
+ case ALX_LINK_SPEED_1GB_FULL:
+ en_ctrl |= LX_MACSPEED_1000;
+ /* fall through */
+ case ALX_LINK_SPEED_100_FULL:
+ case ALX_LINK_SPEED_10_FULL:
+ en_ctrl |= LX_MACDUPLEX_FULL;
+ break;
+ }
+
+ /* set fc param*/
+ switch (hw->cur_fc_mode) {
+ case alx_fc_full:
+ en_ctrl |= LX_FC_RXEN; /* Flow Control RX Enable */
+ en_ctrl |= LX_FC_TXEN; /* Flow Control TX Enable */
+ break;
+ case alx_fc_rx_pause:
+ en_ctrl |= LX_FC_RXEN; /* Flow Control RX Enable */
+ break;
+ case alx_fc_tx_pause:
+ en_ctrl |= LX_FC_TXEN; /* Flow Control TX Enable */
+ break;
+ default:
+ break;
+ }
+
+ if (hw->fc_single_pause)
+ en_ctrl |= LX_SINGLE_PAUSE;
+
+	en_ctrl |= LX_FLT_DIRECT; /* RX enable; TX is always enabled */
+ en_ctrl |= LX_FLT_BROADCAST; /* RX Broadcast Enable */
+ en_ctrl |= LX_ADD_FCS;
+
+ if (CHK_HW_FLAG(VLANSTRIP_EN))
+ en_ctrl |= LX_VLAN_STRIP;
+
+ if (CHK_HW_FLAG(PROMISC_EN))
+ en_ctrl |= LX_FLT_PROMISC;
+
+ if (CHK_HW_FLAG(MULTIALL_EN))
+ en_ctrl |= LX_FLT_MULTI_ALL;
+
+ if (CHK_HW_FLAG(LOOPBACK_EN))
+ en_ctrl |= LX_LOOPBACK;
+
+ if (l1f_enable_mac(hw, true, en_ctrl)) {
+		alx_hw_err(hw, "error when starting mac\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+/*
+ * 1. stop RXQ (reg15A0) and TXQ (reg1590)
+ * 2. stop MAC (reg1480)
+ */
+static int alf_stop_mac(struct alx_hw *hw)
+{
+ int retval = 0;
+
+ if (l1f_enable_mac(hw, false, 0)) {
+		alx_hw_err(hw, "error when stopping mac\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alf_config_mac(struct alx_hw *hw, u16 rxbuf_sz, u16 rx_qnum,
+ u16 rxring_sz, u16 tx_qnum, u16 txring_sz)
+{
+ u8 *addr;
+ u32 txmem_hi, txmem_lo[4];
+ u32 rxmem_hi, rfdmem_lo, rrdmem_lo;
+ u16 smb_timer, mtu_with_eth, int_mod;
+ bool hash_legacy;
+ int i;
+ int retval = 0;
+
+ addr = hw->mac_addr;
+
+ txmem_hi = ALX_DMA_ADDR_HI(hw->tpdma[0]);
+ for (i = 0; i < tx_qnum; i++)
+ txmem_lo[i] = ALX_DMA_ADDR_LO(hw->tpdma[i]);
+
+
+ rxmem_hi = ALX_DMA_ADDR_HI(hw->rfdma[0]);
+ rfdmem_lo = ALX_DMA_ADDR_LO(hw->rfdma[0]);
+ rrdmem_lo = ALX_DMA_ADDR_LO(hw->rrdma[0]);
+
+ smb_timer = (u16)hw->smb_timer;
+ mtu_with_eth = hw->mtu + ALX_ETH_LENGTH_OF_HEADER;
+ int_mod = hw->imt;
+
+ hash_legacy = true;
+
+ if (l1f_init_mac(hw, addr, txmem_hi, txmem_lo, tx_qnum, txring_sz,
+ rxmem_hi, rfdmem_lo, rrdmem_lo, rxring_sz, rxbuf_sz,
+ smb_timer, mtu_with_eth, int_mod, hash_legacy)) {
+		alx_hw_err(hw, "error when configuring mac\n");
+ retval = -EINVAL;
+ }
+
+ return retval;
+}
+
+
+/**
+ * alf_get_mac_addr - retrieve the permanent mac address
+ * @hw: pointer to hardware structure
+ * @addr: buffer that receives the mac address
+ **/
+static int alf_get_mac_addr(struct alx_hw *hw, u8 *addr)
+{
+ int retval = 0;
+
+ if (l1f_get_perm_macaddr(hw, addr)) {
+		alx_hw_err(hw, "error when getting permanent mac address\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alf_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ int retval = 0;
+
+ if (!CHK_HW_FLAG(L0S_CAP))
+ l0s_en = false;
+
+ if (l0s_en)
+ SET_HW_FLAG(L0S_EN);
+ else
+ CLI_HW_FLAG(L0S_EN);
+
+
+ if (!CHK_HW_FLAG(L1_CAP))
+ l1_en = false;
+
+ if (l1_en)
+ SET_HW_FLAG(L1_EN);
+ else
+ CLI_HW_FLAG(L1_EN);
+
+ if (l1f_reset_pcie(hw, l0s_en, l1_en)) {
+		alx_hw_err(hw, "error when resetting pcie\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alf_config_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ int retval = 0;
+
+ if (!CHK_HW_FLAG(L0S_CAP))
+ l0s_en = false;
+
+ if (l0s_en)
+ SET_HW_FLAG(L0S_EN);
+ else
+ CLI_HW_FLAG(L0S_EN);
+
+ if (!CHK_HW_FLAG(L1_CAP))
+ l1_en = false;
+
+ if (l1_en)
+ SET_HW_FLAG(L1_EN);
+ else
+ CLI_HW_FLAG(L1_EN);
+
+ if (l1f_enable_aspm(hw, l0s_en, l1_en, 0)) {
+		alx_hw_err(hw, "error when enabling aspm\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alf_config_wol(struct alx_hw *hw, u32 wufc)
+{
+ u32 wol;
+ int retval = 0;
+
+ wol = 0;
+ /* turn on magic packet event */
+ if (wufc & ALX_WOL_MAGIC) {
+ wol |= L1F_WOL0_MAGIC_EN | L1F_WOL0_PME_MAGIC_EN;
+		/* a magic packet may be a broadcast, multicast or unicast frame */
+ /* mac |= MAC_CTRL_BC_EN; */
+ }
+
+ /* turn on link up event */
+ if (wufc & ALX_WOL_PHY) {
+ wol |= L1F_WOL0_LINK_EN | L1F_WOL0_PME_LINK;
+		/* only a link-up event can wake the system */
+ retval = alf_write_phy_reg(hw, L1F_MII_IER, L1F_IER_LINK_UP);
+ }
+ alx_mem_w32(hw, L1F_WOL0, wol);
+ return retval;
+}
+
+
+static int alf_config_mac_ctrl(struct alx_hw *hw)
+{
+ u32 mac;
+
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac);
+
+	/* enable/disable VLAN tag stripping */
+ if (CHK_HW_FLAG(VLANSTRIP_EN))
+ mac |= L1F_MAC_CTRL_VLANSTRIP;
+ else
+ mac &= ~L1F_MAC_CTRL_VLANSTRIP;
+
+ if (CHK_HW_FLAG(PROMISC_EN))
+ mac |= L1F_MAC_CTRL_PROMISC_EN;
+ else
+ mac &= ~L1F_MAC_CTRL_PROMISC_EN;
+
+ if (CHK_HW_FLAG(MULTIALL_EN))
+ mac |= L1F_MAC_CTRL_MULTIALL_EN;
+ else
+ mac &= ~L1F_MAC_CTRL_MULTIALL_EN;
+
+ if (CHK_HW_FLAG(LOOPBACK_EN))
+ mac |= L1F_MAC_CTRL_LPBACK_EN;
+ else
+ mac &= ~L1F_MAC_CTRL_LPBACK_EN;
+
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac);
+ return 0;
+}
+
+
+static int alf_config_pow_save(struct alx_hw *hw, u32 speed, bool wol_en,
+ bool tx_en, bool rx_en, bool pws_en)
+{
+ u8 wire_spd = LX_LC_10H;
+ int retval = 0;
+
+ switch (speed) {
+ case ALX_LINK_SPEED_UNKNOWN:
+ case ALX_LINK_SPEED_10_HALF:
+ wire_spd = LX_LC_10H;
+ break;
+ case ALX_LINK_SPEED_10_FULL:
+ wire_spd = LX_LC_10F;
+ break;
+ case ALX_LINK_SPEED_100_HALF:
+ wire_spd = LX_LC_100H;
+ break;
+ case ALX_LINK_SPEED_100_FULL:
+ wire_spd = LX_LC_100F;
+ break;
+ case ALX_LINK_SPEED_1GB_FULL:
+ wire_spd = LX_LC_1000F;
+ break;
+ }
+
+ if (l1f_powersaving(hw, wire_spd, wol_en, tx_en, rx_en, pws_en)) {
+		alx_hw_err(hw, "error when setting power saving\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+/* RAR, Multicast, VLAN */
+static int alf_set_mac_addr(struct alx_hw *hw, u8 *addr)
+{
+ u32 sta;
+
+	/*
+	 * example: for mac address 00-0B-6A-F6-00-DC,
+	 * STAD0 <--> 6AF600DC, STAD1 <--> 000B.
+	 */
+
+	/* low dword */
+	sta = (((u32)addr[2]) << 24) | (((u32)addr[3]) << 16) |
+		(((u32)addr[4]) << 8) | (((u32)addr[5]));
+	alx_mem_w32(hw, L1F_STAD0, sta);
+
+	/* high dword */
+	sta = (((u32)addr[0]) << 8) | (((u32)addr[1]));
+	alx_mem_w32(hw, L1F_STAD1, sta);
+ return 0;
+}
+
+
+static int alf_set_mc_addr(struct alx_hw *hw, u8 *addr)
+{
+ u32 crc32, bit, reg, mta;
+
+	/*
+	 * compute the hash-table bucket for a multicast address:
+	 * 1. compute the 32-bit CRC of the address
+	 * 2. bit-reverse the crc (MSB to LSB)
+	 */
+ crc32 = ALX_ETH_CRC(addr, ALX_ETH_LENGTH_OF_ADDRESS);
+
+	/*
+	 * The HASH Table is a register array of 2 32-bit registers.
+	 * It is treated like an array of 64 bits. We want to set
+	 * bit BitArray[hash_value]. So we figure out what register
+	 * the bit is in, read it, OR in the new bit, then write
+	 * back the new value. The register is selected by the most
+	 * significant bit of the hash value and the bit within that
+	 * register by the next 5 bits.
+	 */
+ reg = (crc32 >> 31) & 0x1;
+ bit = (crc32 >> 26) & 0x1F;
+
+ alx_mem_r32(hw, L1F_HASH_TBL0 + (reg<<2), &mta);
+ mta |= (0x1 << bit);
+ alx_mem_w32(hw, L1F_HASH_TBL0 + (reg<<2), mta);
+ return 0;
+}
+
+
+static int alf_clear_mc_addr(struct alx_hw *hw)
+{
+ alx_mem_w32(hw, L1F_HASH_TBL0, 0);
+ alx_mem_w32(hw, L1F_HASH_TBL1, 0);
+ return 0;
+}
+
+
+/* RTX, IRQ */
+static int alf_config_tx(struct alx_hw *hw)
+{
+ u32 wrr;
+
+ alx_mem_r32(hw, L1F_WRR, &wrr);
+ switch (hw->wrr_mode) {
+ case alx_wrr_mode_none:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_NONE);
+ break;
+ case alx_wrr_mode_high:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_HI);
+ break;
+ case alx_wrr_mode_high2:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_HI2);
+ break;
+ case alx_wrr_mode_all:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_ALL);
+ break;
+ }
+ FIELD_SETL(wrr, L1F_WRR_PRI0, hw->wrr_prio0);
+ FIELD_SETL(wrr, L1F_WRR_PRI1, hw->wrr_prio1);
+ FIELD_SETL(wrr, L1F_WRR_PRI2, hw->wrr_prio2);
+ FIELD_SETL(wrr, L1F_WRR_PRI3, hw->wrr_prio3);
+ alx_mem_w32(hw, L1F_WRR, wrr);
+ return 0;
+}
+
+
+static int alf_config_msix(struct alx_hw *hw, u16 num_intrs,
+ bool msix_en, bool msi_en)
+{
+ u32 map[2];
+ u32 type;
+ int msix_idx;
+
+ if (!msix_en)
+ goto configure_legacy;
+
+ memset(map, 0, sizeof(map));
+ for (msix_idx = 0; msix_idx < num_intrs; msix_idx++) {
+ switch (msix_idx) {
+ case ALF_MSIX_INDEX_RXQ0:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ0,
+ ALF_MSIX_INDEX_RXQ0);
+ break;
+ case ALF_MSIX_INDEX_RXQ1:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ1,
+ ALF_MSIX_INDEX_RXQ1);
+ break;
+ case ALF_MSIX_INDEX_RXQ2:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ2,
+ ALF_MSIX_INDEX_RXQ2);
+ break;
+ case ALF_MSIX_INDEX_RXQ3:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ3,
+ ALF_MSIX_INDEX_RXQ3);
+ break;
+ case ALF_MSIX_INDEX_RXQ4:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ4,
+ ALF_MSIX_INDEX_RXQ4);
+ break;
+ case ALF_MSIX_INDEX_RXQ5:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ5,
+ ALF_MSIX_INDEX_RXQ5);
+ break;
+ case ALF_MSIX_INDEX_RXQ6:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ6,
+ ALF_MSIX_INDEX_RXQ6);
+ break;
+ case ALF_MSIX_INDEX_RXQ7:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ7,
+ ALF_MSIX_INDEX_RXQ7);
+ break;
+ case ALF_MSIX_INDEX_TXQ0:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_TXQ0,
+ ALF_MSIX_INDEX_TXQ0);
+ break;
+ case ALF_MSIX_INDEX_TXQ1:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_TXQ1,
+ ALF_MSIX_INDEX_TXQ1);
+ break;
+ case ALF_MSIX_INDEX_TXQ2:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_TXQ2,
+ ALF_MSIX_INDEX_TXQ2);
+ break;
+ case ALF_MSIX_INDEX_TXQ3:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_TXQ3,
+ ALF_MSIX_INDEX_TXQ3);
+ break;
+ case ALF_MSIX_INDEX_TIMER:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_TIMER,
+ ALF_MSIX_INDEX_TIMER);
+ break;
+ case ALF_MSIX_INDEX_ALERT:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_ALERT,
+ ALF_MSIX_INDEX_ALERT);
+ break;
+ case ALF_MSIX_INDEX_SMB:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_SMB,
+ ALF_MSIX_INDEX_SMB);
+ break;
+ case ALF_MSIX_INDEX_PHY:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_PHY,
+ ALF_MSIX_INDEX_PHY);
+ break;
+		default:
+			break;
+		}
+	}
+
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL1, map[0]);
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL2, map[1]);
+
+ /* 0 to alert, 1 to timer */
+ type = (L1F_MSI_ID_MAP_DMAW |
+ L1F_MSI_ID_MAP_DMAR |
+ L1F_MSI_ID_MAP_PCIELNKDW |
+ L1F_MSI_ID_MAP_PCIECERR |
+ L1F_MSI_ID_MAP_PCIENFERR |
+ L1F_MSI_ID_MAP_PCIEFERR |
+ L1F_MSI_ID_MAP_PCIEUR);
+
+ alx_mem_w32(hw, L1F_MSI_ID_MAP, type);
+ return 0;
+
+configure_legacy:
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL1, 0x0);
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL2, 0x0);
+ alx_mem_w32(hw, L1F_MSI_ID_MAP, 0x0);
+ if (msi_en) {
+ u32 msi;
+ alx_mem_r32(hw, 0x1920, &msi);
+ msi |= 0x10000;
+ alx_mem_w32(hw, 0x1920, msi);
+ }
+ return 0;
+}
+
+
+/*
+ * Interrupt
+ */
+static int alf_ack_phy_intr(struct alx_hw *hw)
+{
+ u16 isr;
+ return alf_read_phy_reg(hw, L1F_MII_ISR, &isr);
+}
+
+
+static int alf_enable_legacy_intr(struct alx_hw *hw)
+{
+ u16 cmd;
+
+ alx_cfg_r16(hw, PCI_COMMAND, &cmd);
+ cmd &= ~PCI_COMMAND_INTX_DISABLE;
+ alx_cfg_w16(hw, PCI_COMMAND, cmd);
+
+ alx_mem_w32(hw, L1F_ISR, ~((u32) L1F_ISR_DIS));
+ alx_mem_w32(hw, L1F_IMR, hw->intr_mask);
+ return 0;
+}
+
+
+static int alf_disable_legacy_intr(struct alx_hw *hw)
+{
+ alx_mem_w32(hw, L1F_ISR, L1F_ISR_DIS);
+ alx_mem_w32(hw, L1F_IMR, 0);
+ alx_mem_flush(hw);
+ return 0;
+}
+
+
+static int alf_enable_msix_intr(struct alx_hw *hw, u8 entry_idx)
+{
+ u32 ctrl_reg;
+
+ ctrl_reg = ALF_MSIX_ENTRY_BASE + (entry_idx * ALF_MSIX_ENTRY_SIZE) +
+ ALF_MSIX_MSG_CTRL_OFF;
+
+ alx_mem_w32(hw, ctrl_reg, 0x0);
+ alx_mem_flush(hw);
+ return 0;
+}
+
+
+static int alf_disable_msix_intr(struct alx_hw *hw, u8 entry_idx)
+{
+ u32 ctrl_reg;
+
+ ctrl_reg = ALF_MSIX_ENTRY_BASE + (entry_idx * ALF_MSIX_ENTRY_SIZE) +
+ ALF_MSIX_MSG_CTRL_OFF;
+
+ alx_mem_w32(hw, ctrl_reg, 0x1);
+ alx_mem_flush(hw);
+ return 0;
+}
+
+
+/* RSS */
+static int alf_config_rss(struct alx_hw *hw, bool rss_en)
+{
+ int key_len_by_u8 = sizeof(hw->rss_key);
+ int idt_len_by_u32 = sizeof(hw->rss_idt) / sizeof(u32);
+ u32 rxq0;
+ int i;
+
+ /* Fill out hash function keys */
+ for (i = 0; i < key_len_by_u8; i++) {
+ alx_mem_w8(hw, ALF_RSS_KEY(i, u8),
+ hw->rss_key[key_len_by_u8 - i - 1]);
+ }
+
+ /* Fill out redirection table */
+ for (i = 0; i < idt_len_by_u32; i++)
+ alx_mem_w32(hw, ALF_RSS_TBL(i, u32), hw->rss_idt[i]);
+
+ alx_mem_w32(hw, L1F_RSS_BASE_CPU_NUM, hw->rss_base_cpu);
+
+ alx_mem_r32(hw, L1F_RXQ0, &rxq0);
+ if (hw->rss_hstype & ALX_RSS_HSTYP_IPV4_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV4_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV4_EN;
+
+ if (hw->rss_hstype & ALX_RSS_HSTYP_TCP4_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN;
+
+ if (hw->rss_hstype & ALX_RSS_HSTYP_IPV6_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV6_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV6_EN;
+
+ if (hw->rss_hstype & ALX_RSS_HSTYP_TCP6_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN;
+
+ FIELD_SETL(rxq0, L1F_RXQ0_RSS_MODE, hw->rss_mode);
+ FIELD_SETL(rxq0, L1F_RXQ0_IDT_TBL_SIZE, hw->rss_idt_size);
+
+ if (rss_en)
+ rxq0 |= L1F_RXQ0_RSS_HASH_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HASH_EN;
+
+ alx_mem_w32(hw, L1F_RXQ0, rxq0);
+ return 0;
+}
+
+
+/* fc */
+static int alf_get_fc_mode(struct alx_hw *hw, enum alx_fc_mode *mode)
+{
+ u16 bmsr, giga;
+ int i;
+ int retval = 0;
+
+ for (i = 0; i < ALX_MAX_SETUP_LNK_CYCLE; i++) {
+		/* link status is latched low; read twice for current state */
+		alf_read_phy_reg(hw, MII_BMSR, &bmsr);
+		alf_read_phy_reg(hw, MII_BMSR, &bmsr);
+ if (bmsr & BMSR_LSTATUS) {
+ /* Read phy Specific Status Register (17) */
+ retval = alf_read_phy_reg(hw, L1F_MII_GIGA_PSSR, &giga);
+ if (retval)
+ return retval;
+
+ if (!(giga & L1F_GIGA_PSSR_SPD_DPLX_RESOLVED)) {
+				alx_hw_err(hw,
+					   "speed/duplex not resolved\n");
+ return -EINVAL;
+ }
+
+ if ((giga & L1F_GIGA_PSSR_FC_TXEN) &&
+ (giga & L1F_GIGA_PSSR_FC_RXEN)) {
+ *mode = alx_fc_full;
+ } else if (giga & L1F_GIGA_PSSR_FC_TXEN) {
+ *mode = alx_fc_tx_pause;
+ } else if (giga & L1F_GIGA_PSSR_FC_RXEN) {
+ *mode = alx_fc_rx_pause;
+ } else {
+ *mode = alx_fc_none;
+ }
+ break;
+ }
+ mdelay(100);
+ }
+
+ if (i == ALX_MAX_SETUP_LNK_CYCLE) {
+		alx_hw_err(hw, "error when getting flow control mode\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+static int alf_config_fc(struct alx_hw *hw)
+{
+ u32 mac;
+ int retval = 0;
+
+ if (hw->disable_fc_autoneg) {
+ hw->fc_was_autonegged = false;
+ hw->cur_fc_mode = hw->req_fc_mode;
+ } else {
+ hw->fc_was_autonegged = true;
+ retval = alf_get_fc_mode(hw, &hw->cur_fc_mode);
+ if (retval)
+ return retval;
+ }
+
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac);
+
+ switch (hw->cur_fc_mode) {
+ case alx_fc_none: /* 0 */
+ mac &= ~(L1F_MAC_CTRL_RXFC_EN | L1F_MAC_CTRL_TXFC_EN);
+ break;
+ case alx_fc_rx_pause: /* 1 */
+ mac &= ~L1F_MAC_CTRL_TXFC_EN;
+ mac |= L1F_MAC_CTRL_RXFC_EN;
+ break;
+ case alx_fc_tx_pause: /* 2 */
+ mac |= L1F_MAC_CTRL_TXFC_EN;
+ mac &= ~L1F_MAC_CTRL_RXFC_EN;
+ break;
+ case alx_fc_full: /* 3 */
+ case alx_fc_default: /* 4 */
+ mac |= (L1F_MAC_CTRL_TXFC_EN | L1F_MAC_CTRL_RXFC_EN);
+ break;
+ default:
+ alx_hw_err(hw, "flow control param set incorrectly\n");
+ return -EINVAL;
+ }
+
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac);
+
+ return retval;
+}
+
+
+/*
+ * NVRam
+ */
+static int alf_check_nvram(struct alx_hw *hw, bool *exist)
+{
+ *exist = false;
+ return 0;
+}
+
+
+/* ethtool */
+static int alf_get_ethtool_regs(struct alx_hw *hw, void *buff)
+{
+ int i;
+ u32 *val = buff;
+ static const u32 reg[] = {
+ /* 0 */
+ L1F_DEV_CAP, L1F_DEV_CTRL, L1F_LNK_CAP, L1F_LNK_CTRL,
+ L1F_UE_SVRT, L1F_EFLD, L1F_SLD, L1F_PPHY_MISC1,
+ L1F_PPHY_MISC2, L1F_PDLL_TRNS1,
+
+ /* 10 */
+ L1F_TLEXTN_STATS, L1F_EFUSE_CTRL, L1F_EFUSE_DATA, L1F_SPI_OP1,
+ L1F_SPI_OP2, L1F_SPI_OP3, L1F_EF_CTRL, L1F_EF_ADDR,
+ L1F_EF_DATA, L1F_SPI_ID,
+
+ /* 20 */
+ L1F_SPI_CFG_START, L1F_PMCTRL, L1F_LTSSM_CTRL, L1F_MASTER,
+ L1F_MANU_TIMER, L1F_IRQ_MODU_TIMER, L1F_PHY_CTRL, L1F_MAC_STS,
+ L1F_MDIO, L1F_MDIO_EXTN,
+
+ /* 30 */
+ L1F_PHY_STS, L1F_BIST0, L1F_BIST1, L1F_SERDES,
+ L1F_LED_CTRL, L1F_LED_PATN, L1F_LED_PATN2, L1F_SYSALV,
+ L1F_PCIERR_INST, L1F_LPI_DECISN_TIMER,
+
+ /* 40 */
+ L1F_LPI_CTRL, L1F_LPI_WAIT, L1F_HRTBT_VLAN, L1F_HRTBT_CTRL,
+ L1F_RXPARSE, L1F_MAC_CTRL, L1F_GAP, L1F_STAD1,
+ L1F_LED_CTRL, L1F_HASH_TBL0,
+
+ /* 50 */
+ L1F_HASH_TBL1, L1F_HALFD, L1F_DMA, L1F_WOL0,
+ L1F_WOL1, L1F_WOL2, L1F_WRR, L1F_HQTPD,
+ L1F_CPUMAP1, L1F_CPUMAP2,
+
+ /* 60 */
+ L1F_MISC, L1F_RX_BASE_ADDR_HI, L1F_RFD_ADDR_LO, L1F_RFD_RING_SZ,
+ L1F_RFD_BUF_SZ, L1F_RRD_ADDR_LO, L1F_RRD_RING_SZ,
+ L1F_RFD_PIDX, L1F_RFD_CIDX, L1F_RXQ0,
+
+ /* 70 */
+ L1F_RXQ1, L1F_RXQ2, L1F_RXQ3, L1F_TX_BASE_ADDR_HI,
+ L1F_TPD_PRI0_ADDR_LO, L1F_TPD_PRI1_ADDR_LO,
+ L1F_TPD_PRI2_ADDR_LO, L1F_TPD_PRI3_ADDR_LO,
+ L1F_TPD_PRI0_PIDX, L1F_TPD_PRI1_PIDX,
+
+ /* 80 */
+ L1F_TPD_PRI2_PIDX, L1F_TPD_PRI3_PIDX, L1F_TPD_PRI0_CIDX,
+ L1F_TPD_PRI1_CIDX, L1F_TPD_PRI2_CIDX, L1F_TPD_PRI3_CIDX,
+ L1F_TPD_RING_SZ, L1F_TXQ0, L1F_TXQ1, L1F_TXQ2,
+
+ /* 90 */
+ L1F_MSI_MAP_TBL1, L1F_MSI_MAP_TBL2, L1F_MSI_ID_MAP,
+ L1F_MSIX_MASK, L1F_MSIX_PENDING,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(reg); i++)
+ alx_mem_r32(hw, reg[i], &val[i]);
+
+ /* SRAM */
+ for (i = 0; i < 16; i++)
+ alx_mem_r32(hw, ALF_SRAM(i, u32), &val[100 + i]);
+
+ /* RSS */
+ for (i = 0; i < 10; i++)
+ alx_mem_r32(hw, ALF_RSS_KEY(i, u32), &val[120 + i]);
+ for (i = 0; i < 32; i++)
+ alx_mem_r32(hw, ALF_RSS_TBL(i, u32), &val[130 + i]);
+ alx_mem_r32(hw, L1F_RSS_HASH_VAL, &val[162]);
+ alx_mem_r32(hw, L1F_RSS_HASH_FLAG, &val[163]);
+ alx_mem_r32(hw, L1F_RSS_BASE_CPU_NUM, &val[164]);
+
+ /* MIB */
+ for (i = 0; i < 48; i++)
+ alx_mem_r32(hw, ALF_MIB(i, u32), &val[170 + i]);
+ return 0;
+}
+
+
+/******************************************************************************/
+static int alf_set_hw_capabilities(struct alx_hw *hw)
+{
+ SET_HW_FLAG(L0S_CAP);
+ SET_HW_FLAG(L1_CAP);
+
+ if (hw->mac_type == alx_mac_l1f)
+ SET_HW_FLAG(GIGA_CAP);
+
+ /* set flags of alx_phy_info */
+ SET_HW_FLAG(PWSAVE_CAP);
+ return 0;
+}
+
+
+/* alc_set_hw_info */
+static int alf_set_hw_infos(struct alx_hw *hw)
+{
+ hw->rxstat_reg = L1F_MIB_RX_OK;
+ hw->rxstat_sz = 0x60;
+ hw->txstat_reg = L1F_MIB_TX_OK;
+ hw->txstat_sz = 0x68;
+
+ hw->rx_prod_reg[0] = L1F_RFD_PIDX;
+ hw->rx_cons_reg[0] = L1F_RFD_CIDX;
+
+ hw->tx_prod_reg[0] = L1F_TPD_PRI0_PIDX;
+ hw->tx_cons_reg[0] = L1F_TPD_PRI0_CIDX;
+ hw->tx_prod_reg[1] = L1F_TPD_PRI1_PIDX;
+ hw->tx_cons_reg[1] = L1F_TPD_PRI1_CIDX;
+ hw->tx_prod_reg[2] = L1F_TPD_PRI2_PIDX;
+ hw->tx_cons_reg[2] = L1F_TPD_PRI2_CIDX;
+ hw->tx_prod_reg[3] = L1F_TPD_PRI3_PIDX;
+ hw->tx_cons_reg[3] = L1F_TPD_PRI3_CIDX;
+
+ hw->hwreg_sz = 0x200;
+ hw->eeprom_sz = 0;
+
+ return 0;
+}
+
+
+/*
+ * alf_init_hw_callbacks
+ */
+int alf_init_hw_callbacks(struct alx_hw *hw)
+{
+ /* NIC */
+ hw->cbs.identify_nic = &alf_identify_nic;
+ /* MAC */
+ hw->cbs.reset_mac = &alf_reset_mac;
+ hw->cbs.start_mac = &alf_start_mac;
+ hw->cbs.stop_mac = &alf_stop_mac;
+ hw->cbs.config_mac = &alf_config_mac;
+ hw->cbs.get_mac_addr = &alf_get_mac_addr;
+ hw->cbs.set_mac_addr = &alf_set_mac_addr;
+ hw->cbs.set_mc_addr = &alf_set_mc_addr;
+ hw->cbs.clear_mc_addr = &alf_clear_mc_addr;
+
+ /* PHY */
+ hw->cbs.init_phy = &alf_init_phy;
+ hw->cbs.reset_phy = &alf_reset_phy;
+ hw->cbs.read_phy_reg = &alf_read_phy_reg;
+ hw->cbs.write_phy_reg = &alf_write_phy_reg;
+ hw->cbs.check_phy_link = &alf_check_phy_link;
+ hw->cbs.setup_phy_link = &alf_setup_phy_link;
+ hw->cbs.setup_phy_link_speed = &alf_setup_phy_link_speed;
+
+ /* Interrupt */
+ hw->cbs.ack_phy_intr = &alf_ack_phy_intr;
+ hw->cbs.enable_legacy_intr = &alf_enable_legacy_intr;
+ hw->cbs.disable_legacy_intr = &alf_disable_legacy_intr;
+ hw->cbs.enable_msix_intr = &alf_enable_msix_intr;
+ hw->cbs.disable_msix_intr = &alf_disable_msix_intr;
+
+ /* Configure */
+ hw->cbs.config_tx = &alf_config_tx;
+ hw->cbs.config_fc = &alf_config_fc;
+ hw->cbs.config_rss = &alf_config_rss;
+ hw->cbs.config_msix = &alf_config_msix;
+ hw->cbs.config_wol = &alf_config_wol;
+ hw->cbs.config_aspm = &alf_config_aspm;
+ hw->cbs.config_mac_ctrl = &alf_config_mac_ctrl;
+ hw->cbs.config_pow_save = &alf_config_pow_save;
+ hw->cbs.reset_pcie = &alf_reset_pcie;
+
+ /* NVRam */
+ hw->cbs.check_nvram = &alf_check_nvram;
+
+ /* Others */
+ hw->cbs.get_ethtool_regs = alf_get_ethtool_regs;
+
+ alf_set_hw_capabilities(hw);
+ alf_set_hw_infos(hw);
+
+ alx_hw_info(hw, "HW Flags = 0x%x\n", hw->flags);
+ return 0;
+}
+
diff --git a/drivers/net/ethernet/atheros/alx/alf_hw.c b/drivers/net/ethernet/atheros/alx/alf_hw.c
new file mode 100644
index 0000000..3301457
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alf_hw.c
@@ -0,0 +1,918 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/pci_regs.h>
+#include <linux/mii.h>
+
+#include "alf_hw.h"
+
+
+/* get permanent mac address from efuse/eeprom/flash
+ * return
+ * 0: success
+ * non-0: fail
+ */
+u16 l1f_get_perm_macaddr(struct alx_hw *hw, u8 *addr)
+{
+ u32 val, mac0, mac1;
+ u16 flag, i;
+
+#define INTN_LOADED 0x1
+#define EXTN_LOADED 0x2
+
+ flag = 0;
+ val = 0;
+
+read_mcadr:
+
+ /* get it from register first */
+ alx_mem_r32(hw, L1F_STAD0, &mac0);
+ alx_mem_r32(hw, L1F_STAD1, &mac1);
+
+ *(u32 *)(addr + 2) = LX_SWAP_DW(mac0);
+ *(u16 *)addr = (u16)LX_SWAP_W((u16)mac1);
+
+ if (macaddr_valid(addr))
+ return 0;
+
+ if ((flag & INTN_LOADED) == 0) {
+ /* load from efuse ? */
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ alx_mem_r32(hw, L1F_SLD, &val);
+ if ((val & (L1F_SLD_STAT | L1F_SLD_START)) == 0)
+ break;
+ mdelay(1);
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ alx_mem_w32(hw, L1F_SLD, val | L1F_SLD_START);
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ mdelay(1);
+ alx_mem_r32(hw, L1F_SLD, &val);
+ if ((val & L1F_SLD_START) == 0)
+ break;
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ flag |= INTN_LOADED;
+ goto read_mcadr;
+ }
+
+ if ((flag & EXTN_LOADED) == 0) {
+ alx_mem_r32(hw, L1F_EFLD, &val);
+ if ((val & (L1F_EFLD_F_EXIST | L1F_EFLD_E_EXIST)) != 0) {
+ /* load from eeprom/flash ? */
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ alx_mem_r32(hw, L1F_EFLD, &val);
+ if ((val & (L1F_EFLD_STAT |
+ L1F_EFLD_START)) == 0) {
+ break;
+ }
+ mdelay(1);
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ alx_mem_w32(hw, L1F_EFLD, val | L1F_EFLD_START);
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ mdelay(1);
+ alx_mem_r32(hw, L1F_EFLD, &val);
+ if ((val & L1F_EFLD_START) == 0)
+ break;
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ flag |= EXTN_LOADED;
+ goto read_mcadr;
+ }
+ }
+
+out:
+ return LX_ERR_ALOAD;
+}
+
+
+/* reset mac & dma
+ * return
+ * 0: success
+ * non-0:fail
+ */
+u16 l1f_reset_mac(struct alx_hw *hw)
+{
+ u32 val, pmctrl = 0;
+ u16 ret;
+ u16 i;
+ u8 rev = (u8)(FIELD_GETX(hw->pci_revid, L1F_PCI_REVID));
+
+ /* disable all interrupts, RXQ/TXQ */
+ alx_mem_w32(hw, L1F_MSIX_MASK, BIT_ALL); /* ???? msi-x */
+ alx_mem_w32(hw, L1F_IMR, 0);
+ alx_mem_w32(hw, L1F_ISR, L1F_ISR_DIS);
+
+ ret = l1f_enable_mac(hw, false, 0);
+ if (ret != 0)
+ return ret;
+
+ /* mac reset workaround */
+ alx_mem_w32(hw, L1F_RFD_PIDX, 1);
+
+ /* disable l0s/l1 before mac reset */
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ alx_mem_r32(hw, L1F_PMCTRL, &pmctrl);
+ if ((pmctrl & (L1F_PMCTRL_L1_EN | L1F_PMCTRL_L0S_EN)) != 0) {
+ alx_mem_w32(hw, L1F_PMCTRL,
+ pmctrl & ~(L1F_PMCTRL_L1_EN |
+ L1F_PMCTRL_L0S_EN));
+ }
+ }
+
+ /* reset whole mac safely */
+ alx_mem_r32(hw, L1F_MASTER, &val);
+ alx_mem_w32(hw, L1F_MASTER,
+ val | L1F_MASTER_DMA_MAC_RST | L1F_MASTER_OOB_DIS);
+
+ /* make sure it's really idle */
+ udelay(10);
+ for (i = 0; i < L1F_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1F_RFD_PIDX, &val);
+ if (val == 0)
+ break;
+ udelay(10);
+ }
+ for (; i < L1F_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1F_MASTER, &val);
+ if ((val & L1F_MASTER_DMA_MAC_RST) == 0)
+ break;
+ udelay(10);
+ }
+ if (i == L1F_DMA_MAC_RST_TO)
+ return LX_ERR_RSTMAC;
+ udelay(10);
+
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ /* set L1F_MASTER_PCLKSEL_SRDS (affect by soft-rst, PERST) */
+ alx_mem_w32(hw, L1F_MASTER, val | L1F_MASTER_PCLKSEL_SRDS);
+ /* restore l0s / l1 */
+ if ((pmctrl & (L1F_PMCTRL_L1_EN | L1F_PMCTRL_L0S_EN)) != 0)
+ alx_mem_w32(hw, L1F_PMCTRL, pmctrl);
+ }
+
+ /* clear internal OSC settings, let hw switch the OSC itself,
+ * disable isolation for A0 */
+ alx_mem_r32(hw, L1F_MISC3, &val);
+ alx_mem_w32(hw, L1F_MISC3,
+ (val & ~L1F_MISC3_25M_BY_SW) | L1F_MISC3_25M_NOTO_INTNL);
+ alx_mem_r32(hw, L1F_MISC, &val);
+ val &= ~L1F_MISC_INTNLOSC_OPEN;
+ if (rev == L1F_REV_A0 || rev == L1F_REV_A1)
+ val &= ~L1F_MISC_ISO_EN;
+ alx_mem_w32(hw, L1F_MISC, val);
+ udelay(20);
+
+ /* driver control speed/duplex, hash-alg */
+ alx_mem_r32(hw, L1F_MAC_CTRL, &val);
+ alx_mem_w32(hw, L1F_MAC_CTRL, val | L1F_MAC_CTRL_WOLSPED_SWEN);
+
+ /* clk sw */
+ alx_mem_r32(hw, L1F_SERDES, &val);
+ alx_mem_w32(hw, L1F_SERDES,
+ val | L1F_SERDES_MACCLK_SLWDWN | L1F_SERDES_PHYCLK_SLWDWN);
+
+ return 0;
+}
+
+/* reset phy
+ * return
+ * 0: success
+ * non-0:fail
+ */
+u16 l1f_reset_phy(struct alx_hw *hw, bool pws_en, bool az_en, bool ptp_en)
+{
+ u32 val;
+ u16 i, phy_val;
+
+ az_en = az_en; /* unused for now, silence compiler warning */
+ ptp_en = ptp_en; /* unused for now */
+
+ /* reset PHY core */
+ alx_mem_r32(hw, L1F_PHY_CTRL, &val);
+ val &= ~(L1F_PHY_CTRL_DSPRST_OUT | L1F_PHY_CTRL_IDDQ |
+ L1F_PHY_CTRL_GATE_25M | L1F_PHY_CTRL_POWER_DOWN |
+ L1F_PHY_CTRL_CLS);
+ val |= L1F_PHY_CTRL_RST_ANALOG;
+
+ if (pws_en)
+ val |= (L1F_PHY_CTRL_HIB_PULSE | L1F_PHY_CTRL_HIB_EN);
+ else
+ val &= ~(L1F_PHY_CTRL_HIB_PULSE | L1F_PHY_CTRL_HIB_EN);
+ alx_mem_w32(hw, L1F_PHY_CTRL, val);
+ udelay(10); /* 5us is enough */
+ alx_mem_w32(hw, L1F_PHY_CTRL, val | L1F_PHY_CTRL_DSPRST_OUT);
+
+ for (i = 0; i < L1F_PHY_CTRL_DSPRST_TO; i++) { /* delay 800us */
+ udelay(10);
+ }
+
+ /* TODO: phy power saving */
+
+ l1f_write_phydbg(hw, true,
+ L1F_MIIDBG_TST10BTCFG, L1F_TST10BTCFG_DEF);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_SRDSYSMOD, L1F_SRDSYSMOD_DEF);
+ l1f_write_phydbg(hw, true,
+ L1F_MIIDBG_TST100BTCFG, L1F_TST100BTCFG_DEF);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_ANACTRL, L1F_ANACTRL_DEF);
+ l1f_read_phydbg(hw, true, L1F_MIIDBG_GREENCFG2, &phy_val);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_GREENCFG2,
+ phy_val & ~L1F_GREENCFG2_GATE_DFSE_EN);
+ /* rtl8139c, 120m */
+ l1f_write_phy(hw, true, L1F_MIIEXT_ANEG, true,
+ L1F_MIIEXT_NLP78, L1F_MIIEXT_NLP78_120M_DEF);
+
+ /* set phy interrupt mask */
+ l1f_write_phy(hw, false, 0, true,
+ L1F_MII_IER, L1F_IER_LINK_UP | L1F_IER_LINK_DOWN);
+
+
+ /* TODO */
+ return 0;
+}
+
+
+/* reset pcie
+ * only resets pcie-related registers (pci command, clk, aspm...)
+ * return
+ * 0: success
+ * non-0: fail
+ */
+u16 l1f_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ u32 val;
+ u16 val16;
+ u16 ret;
+ u8 rev = (u8)(FIELD_GETX(hw->pci_revid, L1F_PCI_REVID));
+
+ /* Workaround for PCI problem when BIOS sets MMRBC incorrectly. */
+ alx_cfg_r16(hw, PCI_COMMAND, &val16);
+ if ((val16 & (PCI_COMMAND_IO |
+ PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER)) == 0 ||
+ (val16 & PCI_COMMAND_INTX_DISABLE) != 0) {
+ val16 = (u16)((val16 | (PCI_COMMAND_IO |
+ PCI_COMMAND_MEMORY |
+ PCI_COMMAND_MASTER))
+ & ~PCI_COMMAND_INTX_DISABLE);
+ alx_cfg_w16(hw, PCI_COMMAND, val16);
+ }
+
+ /* Clear any PowerSaving Settings */
+ alx_cfg_w16(hw, L1F_PM_CSR, 0);
+
+ /* deflt val of PDLL D3PLLOFF */
+ alx_mem_r32(hw, L1F_PDLL_TRNS1, &val);
+ alx_mem_w32(hw, L1F_PDLL_TRNS1, val & ~L1F_PDLL_TRNS1_D3PLLOFF_EN);
+
+ /* mask some pcie error bits */
+ alx_mem_r32(hw, L1F_UE_SVRT, &val);
+ val &= ~(L1F_UE_SVRT_DLPROTERR | L1F_UE_SVRT_FCPROTERR);
+ alx_mem_w32(hw, L1F_UE_SVRT, val);
+
+ /* wol 25M & pclk */
+ alx_mem_r32(hw, L1F_MASTER, &val);
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ if ((val & L1F_MASTER_WAKEN_25M) == 0 ||
+ (val & L1F_MASTER_PCLKSEL_SRDS) == 0) {
+ alx_mem_w32(hw, L1F_MASTER,
+ val | L1F_MASTER_PCLKSEL_SRDS |
+ L1F_MASTER_WAKEN_25M);
+ }
+ } else {
+ if ((val & L1F_MASTER_WAKEN_25M) == 0 ||
+ (val & L1F_MASTER_PCLKSEL_SRDS) != 0) {
+ alx_mem_w32(hw, L1F_MASTER,
+ (val & ~L1F_MASTER_PCLKSEL_SRDS) |
+ L1F_MASTER_WAKEN_25M);
+ }
+ }
+
+ /* l0s, l1 setting */
+ ret = l1f_enable_aspm(hw, l0s_en, l1_en, 0);
+
+ udelay(10);
+
+ return ret;
+}
+
+
+/* disable/enable MAC/RXQ/TXQ
+ * en
+ * true: enable
+ * false: disable
+ * return
+ * 0: success
+ * non-0: fail
+ */
+u16 l1f_enable_mac(struct alx_hw *hw, bool en, u16 en_ctrl)
+{
+ u32 rxq, txq, mac, val;
+ u16 i;
+
+ alx_mem_r32(hw, L1F_RXQ0, &rxq);
+ alx_mem_r32(hw, L1F_TXQ0, &txq);
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac);
+
+ if (en) { /* enable */
+ alx_mem_w32(hw, L1F_RXQ0, rxq | L1F_RXQ0_EN);
+ alx_mem_w32(hw, L1F_TXQ0, txq | L1F_TXQ0_EN);
+ if ((en_ctrl & LX_MACSPEED_1000) != 0) {
+ FIELD_SETL(mac, L1F_MAC_CTRL_SPEED,
+ L1F_MAC_CTRL_SPEED_1000);
+ } else {
+ FIELD_SETL(mac, L1F_MAC_CTRL_SPEED,
+ L1F_MAC_CTRL_SPEED_10_100);
+ }
+
+ test_set_or_clear(mac, en_ctrl, LX_MACDUPLEX_FULL,
+ L1F_MAC_CTRL_FULLD);
+ /* rx filter */
+ test_set_or_clear(mac, en_ctrl, LX_FLT_PROMISC,
+ L1F_MAC_CTRL_PROMISC_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FLT_MULTI_ALL,
+ L1F_MAC_CTRL_MULTIALL_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FLT_BROADCAST,
+ L1F_MAC_CTRL_BRD_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FLT_DIRECT,
+ L1F_MAC_CTRL_RX_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FC_TXEN,
+ L1F_MAC_CTRL_TXFC_EN);
+ test_set_or_clear(mac, en_ctrl, LX_FC_RXEN,
+ L1F_MAC_CTRL_RXFC_EN);
+ test_set_or_clear(mac, en_ctrl, LX_VLAN_STRIP,
+ L1F_MAC_CTRL_VLANSTRIP);
+ test_set_or_clear(mac, en_ctrl, LX_LOOPBACK,
+ L1F_MAC_CTRL_LPBACK_EN);
+ test_set_or_clear(mac, en_ctrl, LX_SINGLE_PAUSE,
+ L1F_MAC_CTRL_SPAUSE_EN);
+ test_set_or_clear(mac, en_ctrl, LX_ADD_FCS,
+ (L1F_MAC_CTRL_PCRCE | L1F_MAC_CTRL_CRCE));
+
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac | L1F_MAC_CTRL_TX_EN);
+ } else { /* disable mac */
+ alx_mem_w32(hw, L1F_RXQ0, rxq & ~L1F_RXQ0_EN);
+ alx_mem_w32(hw, L1F_TXQ0, txq & ~L1F_TXQ0_EN);
+
+ /* wait for rxq/txq to become idle */
+ udelay(40);
+
+ /* stop mac tx/rx */
+ alx_mem_w32(hw, L1F_MAC_CTRL,
+ mac & ~(L1F_MAC_CTRL_RX_EN | L1F_MAC_CTRL_TX_EN));
+
+ for (i = 0; i < L1F_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1F_MAC_STS, &val);
+ if ((val & L1F_MAC_STS_IDLE) == 0)
+ break;
+ udelay(10);
+ }
+ if (L1F_DMA_MAC_RST_TO == i)
+ return LX_ERR_RSTMAC;
+ }
+
+ return 0;
+}
+
+/* enable/disable aspm support
+ * this changes settings for phy/mac/pcie
+ */
+u16 l1f_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en, u8 lnk_stat)
+{
+ u32 pmctrl;
+ u8 rev = (u8)(FIELD_GETX(hw->pci_revid, L1F_PCI_REVID));
+
+ lnk_stat = lnk_stat; /* unused for now, silence compiler warning */
+
+
+ alx_mem_r32(hw, L1F_PMCTRL, &pmctrl);
+
+ /* ????default */
+ FIELD_SETL(pmctrl, L1F_PMCTRL_LCKDET_TIMER,
+ L1F_PMCTRL_LCKDET_TIMER_DEF);
+ pmctrl |= L1F_PMCTRL_RCVR_WT_1US | /* wait 1us */
+ L1F_PMCTRL_L1_CLKSW_EN | /* pcie clk sw */
+ L1F_PMCTRL_L1_SRDSRX_PWD ; /* pwd serdes ????default */
+ /* ????default */
+ FIELD_SETL(pmctrl, L1F_PMCTRL_L1REQ_TO, L1F_PMCTRL_L1REG_TO_DEF);
+ FIELD_SETL(pmctrl, L1F_PMCTRL_L1_TIMER, L1F_PMCTRL_L1_TIMER_16US);
+ pmctrl &= ~(L1F_PMCTRL_L1_SRDS_EN |
+ L1F_PMCTRL_L1_SRDSPLL_EN |
+ L1F_PMCTRL_L1_BUFSRX_EN |
+ L1F_PMCTRL_SADLY_EN | /* ???default */
+ L1F_PMCTRL_HOTRST_WTEN|
+ L1F_PMCTRL_L0S_EN |
+ L1F_PMCTRL_L1_EN |
+ L1F_PMCTRL_ASPM_FCEN |
+ L1F_PMCTRL_TXL1_AFTER_L0S |
+ L1F_PMCTRL_RXL1_AFTER_L0S
+ );
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ pmctrl |= L1F_PMCTRL_L1_SRDS_EN | L1F_PMCTRL_L1_SRDSPLL_EN;
+ }
+
+ /* on/off l0s only if bios/system enable l0s */
+ if (/* sysl0s_en && */ l0s_en)
+ pmctrl |= (L1F_PMCTRL_L0S_EN | L1F_PMCTRL_ASPM_FCEN);
+ /* on/off l1 only if bios/system enable l1 */
+ if (/* sysl1_en && */ l1_en)
+ pmctrl |= (L1F_PMCTRL_L1_EN | L1F_PMCTRL_ASPM_FCEN);
+
+ alx_mem_w32(hw, L1F_PMCTRL, pmctrl);
+
+ return 0;
+}
+
+
+/* initialize phy for speed / flow control
+ * lnk_cap
+ * if autoneg, the link capabilities to advertise to the peer
+ * if force mode, the forced speed/duplex
+ */
+u16 l1f_init_phy_spdfc(struct alx_hw *hw, bool auto_neg,
+ u8 lnk_cap, bool fc_en)
+{
+ u16 adv, giga, cr;
+ u32 val;
+ u16 ret;
+
+ /* clear flag */
+ l1f_write_phy(hw, false, 0, false, L1F_MII_DBG_ADDR, 0);
+ alx_mem_r32(hw, L1F_DRV, &val);
+ FIELD_SETL(val, LX_DRV_PHY, 0);
+
+ if (auto_neg) {
+ adv = L1F_ADVERTISE_DEFAULT_CAP & ~L1F_ADVERTISE_SPEED_MASK;
+ giga = L1F_GIGA_CR_1000T_DEFAULT_CAP &
+ ~L1F_GIGA_CR_1000T_SPEED_MASK;
+ val |= LX_DRV_PHY_AUTO;
+ if (!fc_en)
+ adv &= ~(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
+ else
+ val |= LX_DRV_PHY_FC;
+ if ((LX_LC_10H & lnk_cap) != 0) {
+ adv |= ADVERTISE_10HALF;
+ val |= LX_DRV_PHY_10;
+ }
+ if ((LX_LC_10F & lnk_cap) != 0) {
+ adv |= ADVERTISE_10HALF |
+ ADVERTISE_10FULL;
+ val |= LX_DRV_PHY_10 | LX_DRV_PHY_DUPLEX;
+ }
+ if ((LX_LC_100H & lnk_cap) != 0) {
+ adv |= ADVERTISE_100HALF;
+ val |= LX_DRV_PHY_100;
+ }
+ if ((LX_LC_100F & lnk_cap) != 0) {
+ adv |= ADVERTISE_100HALF |
+ ADVERTISE_100FULL;
+ val |= LX_DRV_PHY_100 | LX_DRV_PHY_DUPLEX;
+ }
+ if ((LX_LC_1000F & lnk_cap) != 0) {
+ giga |= L1F_GIGA_CR_1000T_FD_CAPS;
+ val |= LX_DRV_PHY_1000 | LX_DRV_PHY_DUPLEX;
+ }
+
+ ret = l1f_write_phy(hw, false, 0, false, MII_ADVERTISE, adv);
+ ret = l1f_write_phy(hw, false, 0, false, MII_CTRL1000, giga);
+
+ cr = BMCR_RESET | BMCR_ANENABLE | BMCR_ANRESTART;
+ ret = l1f_write_phy(hw, false, 0, false, MII_BMCR, cr);
+ } else { /* force mode */
+ cr = BMCR_RESET;
+ switch (lnk_cap) {
+ case LX_LC_10H:
+ val |= LX_DRV_PHY_10;
+ break;
+ case LX_LC_10F:
+ cr |= BMCR_FULLDPLX;
+ val |= LX_DRV_PHY_10 | LX_DRV_PHY_DUPLEX;
+ break;
+ case LX_LC_100H:
+ cr |= BMCR_SPEED100;
+ val |= LX_DRV_PHY_100;
+ break;
+ case LX_LC_100F:
+ cr |= BMCR_SPEED100 | BMCR_FULLDPLX;
+ val |= LX_DRV_PHY_100 | LX_DRV_PHY_DUPLEX;
+ break;
+ default:
+ return LX_ERR_PARM;
+ }
+ ret = l1f_write_phy(hw, false, 0, false, MII_BMCR, cr);
+ }
+
+ if (!ret) {
+ l1f_write_phy(hw, false, 0, false,
+ L1F_MII_DBG_ADDR, LX_PHY_INITED);
+ }
+ alx_mem_w32(hw, L1F_DRV, val);
+
+ return ret;
+}
+
+
+/* do power saving settings before entering suspend mode
+ * NOTE:
+ * 1. phy link must be established before calling this function
+ * 2. wol options (pattern, magic, link, etc.) are configured before
+ * calling it.
+ */
+u16 l1f_powersaving(struct alx_hw *hw,
+ u8 wire_spd,
+ bool wol_en,
+ bool mactx_en,
+ bool macrx_en,
+ bool pws_en)
+{
+ u32 master_ctrl, mac_ctrl, phy_ctrl, val;
+ u16 pm_ctrl, ret = 0;
+
+ master_ctrl = 0;
+ mac_ctrl = 0;
+ phy_ctrl = 0;
+
+ pws_en = pws_en; /* unused for now, silence compiler warning */
+
+ alx_mem_r32(hw, L1F_MASTER, &master_ctrl);
+ master_ctrl &= ~L1F_MASTER_PCLKSEL_SRDS;
+
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac_ctrl);
+ /* 10/100 half */
+ FIELD_SETL(mac_ctrl, L1F_MAC_CTRL_SPEED, L1F_MAC_CTRL_SPEED_10_100);
+ mac_ctrl &= ~(L1F_MAC_CTRL_FULLD |
+ L1F_MAC_CTRL_RX_EN |
+ L1F_MAC_CTRL_TX_EN);
+
+ alx_mem_r32(hw, L1F_PHY_CTRL, &phy_ctrl);
+ phy_ctrl &= ~(L1F_PHY_CTRL_DSPRST_OUT | L1F_PHY_CTRL_CLS);
+ /* if (pws_en) { */
+ phy_ctrl |= (L1F_PHY_CTRL_RST_ANALOG | L1F_PHY_CTRL_HIB_PULSE |
+ L1F_PHY_CTRL_HIB_EN);
+
+ if (wol_en) { /* enable rx packet or tx packet */
+ if (macrx_en)
+ mac_ctrl |= (L1F_MAC_CTRL_RX_EN | L1F_MAC_CTRL_BRD_EN);
+ if (mactx_en)
+ mac_ctrl |= L1F_MAC_CTRL_TX_EN;
+ if (LX_LC_1000F == wire_spd) {
+ FIELD_SETL(mac_ctrl, L1F_MAC_CTRL_SPEED,
+ L1F_MAC_CTRL_SPEED_1000);
+ }
+ if (LX_LC_10F == wire_spd ||
+ LX_LC_100F == wire_spd ||
+ LX_LC_1000F == wire_spd) {
+ mac_ctrl |= L1F_MAC_CTRL_FULLD;
+ }
+ phy_ctrl |= L1F_PHY_CTRL_DSPRST_OUT;
+ ret = l1f_write_phy(hw, false, 0, false, L1F_MII_IER,
+ L1F_IER_LINK_UP);
+ } else {
+ ret = l1f_write_phy(hw, false, 0, false, L1F_MII_IER, 0);
+ phy_ctrl |= (L1F_PHY_CTRL_IDDQ | L1F_PHY_CTRL_POWER_DOWN);
+ }
+ alx_mem_w32(hw, L1F_MASTER, master_ctrl);
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac_ctrl);
+ alx_mem_w32(hw, L1F_PHY_CTRL, phy_ctrl);
+
+ /* set val of PDLL D3PLLOFF */
+ alx_mem_r32(hw, L1F_PDLL_TRNS1, &val);
+ alx_mem_w32(hw, L1F_PDLL_TRNS1, val | L1F_PDLL_TRNS1_D3PLLOFF_EN);
+
+ /* set PME_EN */
+ if (wol_en) {
+ alx_cfg_r16(hw, L1F_PM_CSR, &pm_ctrl);
+ pm_ctrl |= L1F_PM_CSR_PME_EN;
+ alx_cfg_w16(hw, L1F_PM_CSR, pm_ctrl);
+ }
+
+ return ret;
+}
+
+
+/* read phy register */
+u16 l1f_read_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast,
+ u16 reg, u16 *data)
+{
+ u32 val;
+ u16 clk_sel, i, ret = 0;
+
+ *data = 0;
+ clk_sel = fast ?
+ (u16)L1F_MDIO_CLK_SEL_25MD4 : (u16)L1F_MDIO_CLK_SEL_25MD128;
+
+ if (ext) {
+ val = FIELDL(L1F_MDIO_EXTN_DEVAD, dev) |
+ FIELDL(L1F_MDIO_EXTN_REG, reg);
+ alx_mem_w32(hw, L1F_MDIO_EXTN, val);
+
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ L1F_MDIO_START |
+ L1F_MDIO_MODE_EXT |
+ L1F_MDIO_OP_READ;
+ } else {
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1F_MDIO_REG, reg) |
+ L1F_MDIO_START |
+ L1F_MDIO_OP_READ;
+ }
+
+ alx_mem_w32(hw, L1F_MDIO, val);
+
+ for (i = 0; i < L1F_MDIO_MAX_AC_TO; i++) {
+ alx_mem_r32(hw, L1F_MDIO, &val);
+ if ((val & L1F_MDIO_BUSY) == 0) {
+ *data = (u16)FIELD_GETX(val, L1F_MDIO_DATA);
+ break;
+ }
+ udelay(10);
+ }
+
+ if (L1F_MDIO_MAX_AC_TO == i)
+ ret = LX_ERR_MIIBUSY;
+
+ return ret;
+}
+
+/* write phy register */
+u16 l1f_write_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast,
+ u16 reg, u16 data)
+{
+ u32 val;
+ u16 clk_sel, i, ret = 0;
+
+ clk_sel = fast ?
+ (u16)L1F_MDIO_CLK_SEL_25MD4 : (u16)L1F_MDIO_CLK_SEL_25MD128;
+
+ if (ext) {
+ val = FIELDL(L1F_MDIO_EXTN_DEVAD, dev) |
+ FIELDL(L1F_MDIO_EXTN_REG, reg);
+ alx_mem_w32(hw, L1F_MDIO_EXTN, val);
+
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1F_MDIO_DATA, data) |
+ L1F_MDIO_START |
+ L1F_MDIO_MODE_EXT;
+ } else {
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1F_MDIO_REG, reg) |
+ FIELDL(L1F_MDIO_DATA, data) |
+ L1F_MDIO_START;
+ }
+
+ alx_mem_w32(hw, L1F_MDIO, val);
+
+ for (i = 0; i < L1F_MDIO_MAX_AC_TO; i++) {
+ alx_mem_r32(hw, L1F_MDIO, &val);
+ if ((val & L1F_MDIO_BUSY) == 0)
+ break;
+ udelay(10);
+ }
+
+ if (L1F_MDIO_MAX_AC_TO == i)
+ ret = LX_ERR_MIIBUSY;
+
+ return ret;
+}
+
+u16 l1f_read_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 *data)
+{
+ u16 ret;
+
+ ret = l1f_write_phy(hw, false, 0, fast, L1F_MII_DBG_ADDR, reg);
+ ret = l1f_read_phy(hw, false, 0, fast, L1F_MII_DBG_DATA, data);
+
+ return ret;
+}
+
+u16 l1f_write_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 data)
+{
+ u16 ret;
+
+ ret = l1f_write_phy(hw, false, 0, fast, L1F_MII_DBG_ADDR, reg);
+ ret = l1f_write_phy(hw, false, 0, fast, L1F_MII_DBG_DATA, data);
+
+ return ret;
+}
+
+/*
+ * basic mac initialization
+ * most advanced features are not initialized here
+ * MAC/PHY should be reset before calling this function
+ * smb_timer : milliseconds
+ * int_mod : microseconds
+ * RSS is disabled by default
+ */
+u16 l1f_init_mac(struct alx_hw *hw, u8 *addr, u32 txmem_hi,
+ u32 *tx_mem_lo, u8 tx_qnum, u16 txring_sz,
+ u32 rxmem_hi, u32 rfdmem_lo, u32 rrdmem_lo,
+ u16 rxring_sz, u16 rxbuf_sz, u16 smb_timer,
+ u16 mtu, u16 int_mod, bool hash_legacy)
+{
+ u32 val;
+ u16 val16, devid;
+ u8 dmar_len;
+
+ alx_cfg_r16(hw, PCI_DEVICE_ID, &devid);
+
+ /* set mac-address */
+ val = *(u32 *)(addr + 2);
+ alx_mem_w32(hw, L1F_STAD0, LX_SWAP_DW(val));
+ val = *(u16 *)addr;
+ alx_mem_w32(hw, L1F_STAD1, LX_SWAP_W((u16)val));
+
+ /* clear multicast hash table, set hash algorithm */
+ alx_mem_w32(hw, L1F_HASH_TBL0, 0);
+ alx_mem_w32(hw, L1F_HASH_TBL1, 0);
+ alx_mem_r32(hw, L1F_MAC_CTRL, &val);
+ if (hash_legacy)
+ val |= L1F_MAC_CTRL_MHASH_ALG_HI5B;
+ else
+ val &= ~L1F_MAC_CTRL_MHASH_ALG_HI5B;
+ alx_mem_w32(hw, L1F_MAC_CTRL, val);
+
+ /* clear any wol setting/status */
+ alx_mem_r32(hw, L1F_WOL0, &val);
+ alx_mem_w32(hw, L1F_WOL0, 0);
+
+ /* clk gating */
+ alx_mem_w32(hw, L1F_CLK_GATE,
+ (FIELD_GETX(hw->pci_revid, L1F_PCI_REVID) == L1F_REV_B0) ?
+ L1F_CLK_GATE_ALL_B0 : L1F_CLK_GATE_ALL_A0);
+
+ /* idle timeout to switch clk_125M */
+ if (FIELD_GETX(hw->pci_revid, L1F_PCI_REVID) == L1F_REV_B0) {
+ alx_mem_w32(hw, L1F_IDLE_DECISN_TIMER,
+ L1F_IDLE_DECISN_TIMER_DEF);
+ }
+
+ /* descriptor ring base memory */
+ alx_mem_w32(hw, L1F_TX_BASE_ADDR_HI, txmem_hi);
+ alx_mem_w32(hw, L1F_TPD_RING_SZ, txring_sz);
+ switch (tx_qnum) {
+ case 4:
+ alx_mem_w32(hw, L1F_TPD_PRI3_ADDR_LO, tx_mem_lo[3]);
+ /* fall through */
+ case 3:
+ alx_mem_w32(hw, L1F_TPD_PRI2_ADDR_LO, tx_mem_lo[2]);
+ /* fall through */
+ case 2:
+ alx_mem_w32(hw, L1F_TPD_PRI1_ADDR_LO, tx_mem_lo[1]);
+ /* fall through */
+ case 1:
+ alx_mem_w32(hw, L1F_TPD_PRI0_ADDR_LO, tx_mem_lo[0]);
+ break;
+ default:
+ return LX_ERR_PARM;
+ }
+ alx_mem_w32(hw, L1F_RX_BASE_ADDR_HI, rxmem_hi);
+ alx_mem_w32(hw, L1F_RFD_ADDR_LO, rfdmem_lo);
+ alx_mem_w32(hw, L1F_RRD_ADDR_LO, rrdmem_lo);
+ alx_mem_w32(hw, L1F_RFD_BUF_SZ, rxbuf_sz);
+ alx_mem_w32(hw, L1F_RRD_RING_SZ, rxring_sz);
+ alx_mem_w32(hw, L1F_RFD_RING_SZ, rxring_sz);
+ alx_mem_w32(hw, L1F_SMB_TIMER, smb_timer * 500UL);
+ alx_mem_w32(hw, L1F_SRAM9, L1F_SRAM_LOAD_PTR);
+
+ /* interrupt moderation */
+ alx_mem_r32(hw, L1F_MASTER, &val);
+/* val = (val & ~L1F_MASTER_IRQMOD2_EN) | */
+ val = val | L1F_MASTER_IRQMOD2_EN |
+ L1F_MASTER_IRQMOD1_EN |
+ L1F_MASTER_SYSALVTIMER_EN; /* sysalive */
+ alx_mem_w32(hw, L1F_MASTER, val);
+ alx_mem_w32(hw, L1F_IRQ_MODU_TIMER,
+ FIELDL(L1F_IRQ_MODU_TIMER1, int_mod >> 1));
+
+ /* tpd threshold to trig int */
+ alx_mem_w32(hw, L1F_TINT_TPD_THRSHLD, (u32)txring_sz / 3);
+ alx_mem_w32(hw, L1F_TINT_TIMER, int_mod);
+ /* re-send int */
+ alx_mem_w32(hw, L1F_INT_RETRIG, L1F_INT_RETRIG_TO);
+
+ /* mtu */
+ alx_mem_w32(hw, L1F_MTU, (u32)(mtu + 4 + 4)); /* crc + vlan */
+ if (mtu > L1F_MTU_JUMBO_TH) {
+ alx_mem_r32(hw, L1F_MAC_CTRL, &val);
+ alx_mem_w32(hw, L1F_MAC_CTRL, val & ~L1F_MAC_CTRL_FAST_PAUSE);
+ }
+
+ /* txq */
+ if ((mtu + 8) < L1F_TXQ1_JUMBO_TSO_TH)
+ val = (u32)(mtu + 8 + 7) >> 3; /* 7 for QWORD align */
+ else
+ val = L1F_TXQ1_JUMBO_TSO_TH >> 3;
+ alx_mem_w32(hw, L1F_TXQ1, val | L1F_TXQ1_ERRLGPKT_DROP_EN);
+ alx_mem_r32(hw, L1F_DEV_CTRL, &val);
+ dmar_len = (u8)FIELD_GETX(val, L1F_DEV_CTRL_MAXRRS);
+ /* if the BIOS changed the default dma read max length,
+ * restore it to the default value */
+ if (dmar_len < L1F_DEV_CTRL_MAXRRS_MIN) {
+ FIELD_SETL(val, L1F_DEV_CTRL_MAXRRS, L1F_DEV_CTRL_MAXRRS_MIN);
+ alx_mem_w32(hw, L1F_DEV_CTRL, val);
+ }
+ val = FIELDL(L1F_TXQ0_TPD_BURSTPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ L1F_TXQ0_MODE_ENHANCE |
+ L1F_TXQ0_LSO_8023_EN |
+ L1F_TXQ0_SUPT_IPOPT |
+ FIELDL(L1F_TXQ0_TXF_BURST_PREF, L1F_TXQ_TXF_BURST_PREF_DEF);
+ alx_mem_w32(hw, L1F_TXQ0, val);
+ val = FIELDL(L1F_HQTPD_Q1_NUMPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ FIELDL(L1F_HQTPD_Q2_NUMPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ FIELDL(L1F_HQTPD_Q3_NUMPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ L1F_HQTPD_BURST_EN;
+ alx_mem_w32(hw, L1F_HQTPD, val);
+
+ /* rxq */
+ alx_mem_r32(hw, L1F_SRAM5, &val);
+ val = FIELD_GETX(val, L1F_SRAM_RXF_LEN) << 3; /* bytes */
+ if (val > L1F_SRAM_RXF_LEN_8K) {
+ val16 = L1F_MTU_STD_ALGN >> 3;
+ val = (val - (2 * L1F_MTU_STD_ALGN + L1F_MTU_MIN)) >> 3;
+ } else {
+ val16 = L1F_MTU_STD_ALGN >> 3;
+ val = (val - L1F_MTU_STD_ALGN) >> 3;
+ }
+ alx_mem_w32(hw, L1F_RXQ2,
+ FIELDL(L1F_RXQ2_RXF_XOFF_THRESH, val16) |
+ FIELDL(L1F_RXQ2_RXF_XON_THRESH, val));
+ val = FIELDL(L1F_RXQ0_NUM_RFD_PREF, L1F_RXQ0_NUM_RFD_PREF_DEF) |
+ FIELDL(L1F_RXQ0_RSS_MODE, L1F_RXQ0_RSS_MODE_DIS) |
+ FIELDL(L1F_RXQ0_IDT_TBL_SIZE, L1F_RXQ0_IDT_TBL_SIZE_DEF) |
+ L1F_RXQ0_RSS_HSTYP_ALL |
+ L1F_RXQ0_RSS_HASH_EN |
+ L1F_RXQ0_IPV6_PARSE_EN;
+ if (mtu > L1F_MTU_JUMBO_TH)
+ val |= L1F_RXQ0_CUT_THRU_EN;
+ if ((devid & 1) != 0) {
+ FIELD_SETL(val, L1F_RXQ0_ASPM_THRESH,
+ L1F_RXQ0_ASPM_THRESH_100M);
+ }
+ alx_mem_w32(hw, L1F_RXQ0, val);
+
+ /* rfd producer index */
+ alx_mem_w32(hw, L1F_RFD_PIDX, (u32)rxring_sz - 1);
+
+ /* DMA */
+ alx_mem_r32(hw, L1F_DMA, &val);
+ val = FIELDL(L1F_DMA_RORDER_MODE, L1F_DMA_RORDER_MODE_OUT) |
+ L1F_DMA_RREQ_PRI_DATA |
+ FIELDL(L1F_DMA_RREQ_BLEN, dmar_len) |
+ FIELDL(L1F_DMA_WDLY_CNT, L1F_DMA_WDLY_CNT_DEF) |
+ FIELDL(L1F_DMA_RDLY_CNT, L1F_DMA_RDLY_CNT_DEF) |
+ FIELDL(L1F_DMA_RCHNL_SEL, hw->dma_chnl - 1);
+ alx_mem_w32(hw, L1F_DMA, val);
+
+ return 0;
+}
+
+
+u16 l1f_get_phy_config(struct alx_hw *hw)
+{
+ u32 val;
+ u16 phy_val;
+
+ alx_mem_r32(hw, L1F_PHY_CTRL, &val);
+
+ /* phy in rst */
+ if ((val & L1F_PHY_CTRL_DSPRST_OUT) == 0)
+ return LX_DRV_PHY_UNKNOWN;
+
+ alx_mem_r32(hw, L1F_DRV, &val);
+ val = FIELD_GETX(val, LX_DRV_PHY);
+
+ if (LX_DRV_PHY_UNKNOWN == val)
+ return LX_DRV_PHY_UNKNOWN;
+
+ l1f_read_phy(hw, false, 0, false, L1F_MII_DBG_ADDR, &phy_val);
+
+ if (LX_PHY_INITED == phy_val)
+ return (u16) val;
+
+ return LX_DRV_PHY_UNKNOWN;
+}
+
diff --git a/drivers/net/ethernet/atheros/alx/alf_hw.h b/drivers/net/ethernet/atheros/alx/alf_hw.h
new file mode 100644
index 0000000..384af9a
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alf_hw.h
@@ -0,0 +1,2098 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef L1F_HW_H_
+#define L1F_HW_H_
+
+/*********************************************************************
+ * some requirements for l1f_sw.h
+ *
+ * 1. basic types must be defined if they are not provided by
+ * your compiler:
+ * u8, u16, u32, bool
+ *
+ * 2. the PETHCONTEXT definition should be in l1x_sw.h and it must
+ * contain pci_devid & pci_venid
+ *
+ *********************************************************************/
+
+#include "alx_hwcom.h"
+
+/******************************************************************************/
+#define L1F_DEV_ID 0x1091
+#define L2F_DEV_ID 0x1090
+
+
+#define L1F_PCI_REVID_WTH_CR BIT(1)
+#define L1F_PCI_REVID_WTH_XD BIT(0)
+#define L1F_PCI_REVID_MASK ASHFT3(0x1FU)
+#define L1F_PCI_REVID_SHIFT 3
+#define L1F_REV_A0 0
+#define L1F_REV_A1 1
+#define L1F_REV_B0 2
+
+#define L1F_PM_CSR 0x0044 /* 16bit */
+#define L1F_PM_CSR_PME_STAT BIT(15)
+#define L1F_PM_CSR_DSCAL_MASK ASHFT13(3U)
+#define L1F_PM_CSR_DSCAL_SHIFT 13
+#define L1F_PM_CSR_DSEL_MASK ASHFT9(0xFU)
+#define L1F_PM_CSR_DSEL_SHIFT 9
+#define L1F_PM_CSR_PME_EN BIT(8)
+#define L1F_PM_CSR_PWST_MASK ASHFT0(3U)
+#define L1F_PM_CSR_PWST_SHIFT 0
+
+#define L1F_PM_DATA 0x0047 /* 8bit */
+
+
+#define L1F_DEV_CAP 0x005C
+#define L1F_DEV_CAP_SPLSL_MASK ASHFT26(3UL)
+#define L1F_DEV_CAP_SPLSL_SHIFT 26
+#define L1F_DEV_CAP_SPLV_MASK ASHFT18(0xFFUL)
+#define L1F_DEV_CAP_SPLV_SHIFT 18
+#define L1F_DEV_CAP_RBER BIT(15)
+#define L1F_DEV_CAP_PIPRS BIT(14)
+#define L1F_DEV_CAP_AIPRS BIT(13)
+#define L1F_DEV_CAP_ABPRS BIT(12)
+#define L1F_DEV_CAP_L1ACLAT_MASK ASHFT9(7UL)
+#define L1F_DEV_CAP_L1ACLAT_SHIFT 9
+#define L1F_DEV_CAP_L0SACLAT_MASK ASHFT6(7UL)
+#define L1F_DEV_CAP_L0SACLAT_SHIFT 6
+#define L1F_DEV_CAP_EXTAG BIT(5)
+#define L1F_DEV_CAP_PHANTOM BIT(4)
+#define L1F_DEV_CAP_MPL_MASK ASHFT0(7UL)
+#define L1F_DEV_CAP_MPL_SHIFT 0
+#define L1F_DEV_CAP_MPL_128 1
+#define L1F_DEV_CAP_MPL_256 2
+#define L1F_DEV_CAP_MPL_512 3
+#define L1F_DEV_CAP_MPL_1024 4
+#define L1F_DEV_CAP_MPL_2048 5
+#define L1F_DEV_CAP_MPL_4096 6
+
+#define L1F_DEV_CTRL 0x0060 /* 16bit */
+#define L1F_DEV_CTRL_MAXRRS_MASK ASHFT12(7U)
+#define L1F_DEV_CTRL_MAXRRS_SHIFT 12
+#define L1F_DEV_CTRL_MAXRRS_MIN 2
+#define L1F_DEV_CTRL_NOSNP_EN BIT(11)
+#define L1F_DEV_CTRL_AUXPWR_EN BIT(10)
+#define L1F_DEV_CTRL_PHANTOM_EN BIT(9)
+#define L1F_DEV_CTRL_EXTAG_EN BIT(8)
+#define L1F_DEV_CTRL_MPL_MASK ASHFT5(7U)
+#define L1F_DEV_CTRL_MPL_SHIFT 5
+#define L1F_DEV_CTRL_RELORD_EN BIT(4)
+#define L1F_DEV_CTRL_URR_EN BIT(3)
+#define L1F_DEV_CTRL_FERR_EN BIT(2)
+#define L1F_DEV_CTRL_NFERR_EN BIT(1)
+#define L1F_DEV_CTRL_CERR_EN BIT(0)
+
+
+#define L1F_DEV_STAT 0x0062 /* 16bit */
+#define L1F_DEV_STAT_XS_PEND BIT(5)
+#define L1F_DEV_STAT_AUXPWR BIT(4)
+#define L1F_DEV_STAT_UR BIT(3)
+#define L1F_DEV_STAT_FERR BIT(2)
+#define L1F_DEV_STAT_NFERR BIT(1)
+#define L1F_DEV_STAT_CERR BIT(0)
+
+#define L1F_LNK_CAP 0x0064
+#define L1F_LNK_CAP_PRTNUM_MASK ASHFT24(0xFFUL)
+#define L1F_LNK_CAP_PRTNUM_SHIFT 24
+#define L1F_LNK_CAP_CLK_PM BIT(18)
+#define L1F_LNK_CAP_L1EXTLAT_MASK ASHFT15(7UL)
+#define L1F_LNK_CAP_L1EXTLAT_SHIFT 15
+#define L1F_LNK_CAP_L0SEXTLAT_MASK ASHFT12(7UL)
+#define L1F_LNK_CAP_L0SEXTLAT_SHIFT 12
+#define L1F_LNK_CAP_ASPM_SUP_MASK ASHFT10(3UL)
+#define L1F_LNK_CAP_ASPM_SUP_SHIFT 10
+#define L1F_LNK_CAP_ASPM_SUP_L0S 1
+#define L1F_LNK_CAP_ASPM_SUP_L0SL1 3
+#define L1F_LNK_CAP_MAX_LWH_MASK ASHFT4(0x3FUL)
+#define L1F_LNK_CAP_MAX_LWH_SHIFT 4
+#define L1F_LNK_CAP_MAX_LSPD_MASH ASHFT0(0xFUL)
+#define L1F_LNK_CAP_MAX_LSPD_SHIFT 0
+
+#define L1F_LNK_CTRL 0x0068 /* 16bit */
+#define L1F_LNK_CTRL_CLK_PM_EN BIT(8)
+#define L1F_LNK_CTRL_EXTSYNC BIT(7)
+#define L1F_LNK_CTRL_CMNCLK_CFG BIT(6)
+#define L1F_LNK_CTRL_RCB_128B BIT(3) /* 0:64b,1:128b */
+#define L1F_LNK_CTRL_ASPM_MASK ASHFT0(3U)
+#define L1F_LNK_CTRL_ASPM_SHIFT 0
+#define L1F_LNK_CTRL_ASPM_DIS 0
+#define L1F_LNK_CTRL_ASPM_ENL0S 1
+#define L1F_LNK_CTRL_ASPM_ENL1 2
+#define L1F_LNK_CTRL_ASPM_ENL0SL1 3
+
+#define L1F_LNK_STAT 0x006A /* 16bit */
+#define L1F_LNK_STAT_SCLKCFG BIT(12)
+#define L1F_LNK_STAT_LNKTRAIN BIT(11)
+#define L1F_LNK_STAT_TRNERR BIT(10)
+#define L1F_LNK_STAT_LNKSPD_MASK ASHFT0(0xFU)
+#define L1F_LNK_STAT_LNKSPD_SHIFT 0
+#define L1F_LNK_STAT_NEGLW_MASK ASHFT4(0x3FU)
+#define L1F_LNK_STAT_NEGLW_SHIFT 4
+
+#define L1F_MSIX_MASK 0x0090
+#define L1F_MSIX_PENDING 0x0094
+
+#define L1F_UE_SVRT 0x010C
+#define L1F_UE_SVRT_UR BIT(20)
+#define L1F_UE_SVRT_ECRCERR BIT(19)
+#define L1F_UE_SVRT_MTLP BIT(18)
+#define L1F_UE_SVRT_RCVOVFL BIT(17)
+#define L1F_UE_SVRT_UNEXPCPL BIT(16)
+#define L1F_UE_SVRT_CPLABRT BIT(15)
+#define L1F_UE_SVRT_CPLTO BIT(14)
+#define L1F_UE_SVRT_FCPROTERR BIT(13)
+#define L1F_UE_SVRT_PTLP BIT(12)
+#define L1F_UE_SVRT_DLPROTERR BIT(4)
+#define L1F_UE_SVRT_TRNERR BIT(0)
+
+#define L1F_EFLD 0x0204 /* eeprom/flash load */
+#define L1F_EFLD_F_ENDADDR_MASK ASHFT16(0x3FFUL)
+#define L1F_EFLD_F_ENDADDR_SHIFT 16
+#define L1F_EFLD_F_EXIST BIT(10)
+#define L1F_EFLD_E_EXIST BIT(9)
+#define L1F_EFLD_EXIST BIT(8)
+#define L1F_EFLD_STAT BIT(5) /* 0:finish,1:in progress */
+#define L1F_EFLD_IDLE BIT(4)
+#define L1F_EFLD_START BIT(0)
+
+#define L1F_SLD 0x0218 /* efuse load */
+#define L1F_SLD_FREQ_MASK ASHFT24(3UL)
+#define L1F_SLD_FREQ_SHIFT 24
+#define L1F_SLD_FREQ_100K 0
+#define L1F_SLD_FREQ_200K 1
+#define L1F_SLD_FREQ_300K 2
+#define L1F_SLD_FREQ_400K 3
+#define L1F_SLD_EXIST BIT(23)
+#define L1F_SLD_SLVADDR_MASK ASHFT16(0x7FUL)
+#define L1F_SLD_SLVADDR_SHIFT 16
+#define L1F_SLD_IDLE BIT(13)
+#define L1F_SLD_STAT BIT(12) /* 0:finish,1:in progress */
+#define L1F_SLD_START BIT(11)
+#define L1F_SLD_STARTADDR_MASK ASHFT0(0xFFUL)
+#define L1F_SLD_STARTADDR_SHIFT 0
+#define L1F_SLD_MAX_TO 100
+
+#define L1F_PCIE_MSIC 0x021C
+#define L1F_PCIE_MSIC_MSIX_DIS BIT(22)
+#define L1F_PCIE_MSIC_MSI_DIS BIT(21)
+
+#define L1F_PPHY_MISC1 0x1000
+#define L1F_PPHY_MISC1_RCVDET BIT(2)
+#define L1F_PPHY_MISC1_NFTS_MASK ASHFT16(0xFFUL)
+#define L1F_PPHY_MISC1_NFTS_SHIFT 16
+#define L1F_PPHY_MISC1_NFTS_HIPERF 0xA0 /* ???? */
+
+#define L1F_PPHY_MISC2 0x1004
+#define L1F_PPHY_MISC2_L0S_TH_MASK ASHFT18(0x3UL)
+#define L1F_PPHY_MISC2_L0S_TH_SHIFT 18
+#define L1F_PPHY_MISC2_CDR_BW_MASK ASHFT16(0x3UL)
+#define L1F_PPHY_MISC2_CDR_BW_SHIFT 16
+
+#define L1F_PDLL_TRNS1 0x1104
+#define L1F_PDLL_TRNS1_D3PLLOFF_EN BIT(11)
+#define L1F_PDLL_TRNS1_REGCLK_SEL_NORM BIT(10)
+#define L1F_PDLL_TRNS1_REPLY_TO_MASK ASHFT0(0x3FFUL)
+#define L1F_PDLL_TRNS1_REPLY_TO_SHIFT 0
+
+
+#define L1F_TLEXTN_STATS 0x1208
+#define L1F_TLEXTN_STATS_DEVNO_MASK ASHFT16(0x1FUL)
+#define L1F_TLEXTN_STATS_DEVNO_SHIFT 16
+#define L1F_TLEXTN_STATS_BUSNO_MASK ASHFT8(0xFFUL)
+#define L1F_TLEXTN_STATS_BUSNO_SHIFT 8
+
+#define L1F_EFUSE_CTRL 0x12C0
+#define L1F_EFUSE_CTRL_FLAG BIT(31) /* 0:read,1:write */
+#define L1F_EUFSE_CTRL_ACK BIT(30)
+#define L1F_EFUSE_CTRL_ADDR_MASK ASHFT16(0x3FFUL)
+#define L1F_EFUSE_CTRL_ADDR_SHIFT 16
+
+#define L1F_EFUSE_DATA 0x12C4
+
+#define L1F_SPI_OP1 0x12C8
+#define L1F_SPI_OP1_RDID_MASK ASHFT24(0xFFUL)
+#define L1F_SPI_OP1_RDID_SHIFT 24
+#define L1F_SPI_OP1_CE_MASK ASHFT16(0xFFUL)
+#define L1F_SPI_OP1_CE_SHIFT 16
+#define L1F_SPI_OP1_SE_MASK ASHFT8(0xFFUL)
+#define L1F_SPI_OP1_SE_SHIFT 8
+#define L1F_SPI_OP1_PRGRM_MASK ASHFT0(0xFFUL)
+#define L1F_SPI_OP1_PRGRM_SHIFT 0
+
+#define L1F_SPI_OP2 0x12CC
+#define L1F_SPI_OP2_READ_MASK ASHFT24(0xFFUL)
+#define L1F_SPI_OP2_READ_SHIFT 24
+#define L1F_SPI_OP2_WRSR_MASK ASHFT16(0xFFUL)
+#define L1F_SPI_OP2_WRSR_SHIFT 16
+#define L1F_SPI_OP2_RDSR_MASK ASHFT8(0xFFUL)
+#define L1F_SPI_OP2_RDSR_SHIFT 8
+#define L1F_SPI_OP2_WREN_MASK ASHFT0(0xFFUL)
+#define L1F_SPI_OP2_WREN_SHIFT 0
+
+#define L1F_SPI_OP3 0x12E4
+#define L1F_SPI_OP3_WRDI_MASK ASHFT8(0xFFUL)
+#define L1F_SPI_OP3_WRDI_SHIFT 8
+#define L1F_SPI_OP3_EWSR_MASK ASHFT0(0xFFUL)
+#define L1F_SPI_OP3_EWSR_SHIFT 0
+
+#define L1F_EF_CTRL 0x12D0
+#define L1F_EF_CTRL_FSTS_MASK ASHFT20(0xFFUL)
+#define L1F_EF_CTRL_FSTS_SHIFT 20
+#define L1F_EF_CTRL_CLASS_MASK ASHFT16(7UL)
+#define L1F_EF_CTRL_CLASS_SHIFT 16
+#define L1F_EF_CTRL_CLASS_F_UNKNOWN 0
+#define L1F_EF_CTRL_CLASS_F_STD 1
+#define L1F_EF_CTRL_CLASS_F_SST 2
+#define L1F_EF_CTRL_CLASS_E_UNKNOWN 0
+#define L1F_EF_CTRL_CLASS_E_1K 1
+#define L1F_EF_CTRL_CLASS_E_4K 2
+#define L1F_EF_CTRL_FRET BIT(15) /* 0:OK,1:fail */
+#define L1F_EF_CTRL_TYP_MASK ASHFT12(3UL)
+#define L1F_EF_CTRL_TYP_SHIFT 12
+#define L1F_EF_CTRL_TYP_NONE 0
+#define L1F_EF_CTRL_TYP_F 1
+#define L1F_EF_CTRL_TYP_E 2
+#define L1F_EF_CTRL_TYP_UNKNOWN 3
+#define L1F_EF_CTRL_ONE_CLK BIT(10)
+#define L1F_EF_CTRL_ECLK_MASK ASHFT8(3UL)
+#define L1F_EF_CTRL_ECLK_SHIFT 8
+#define L1F_EF_CTRL_ECLK_125K 0
+#define L1F_EF_CTRL_ECLK_250K 1
+#define L1F_EF_CTRL_ECLK_500K 2
+#define L1F_EF_CTRL_ECLK_1M 3
+#define L1F_EF_CTRL_FBUSY BIT(7)
+#define L1F_EF_CTRL_ACTION BIT(6) /* 1:start,0:stop */
+#define L1F_EF_CTRL_AUTO_OP BIT(5)
+#define L1F_EF_CTRL_SST_MODE BIT(4) /* force using sst */
+#define L1F_EF_CTRL_INST_MASK ASHFT0(0xFUL)
+#define L1F_EF_CTRL_INST_SHIFT 0
+#define L1F_EF_CTRL_INST_NONE 0
+#define L1F_EF_CTRL_INST_READ 1 /* for flash & eeprom */
+#define L1F_EF_CTRL_INST_RDID 2
+#define L1F_EF_CTRL_INST_RDSR 3
+#define L1F_EF_CTRL_INST_WREN 4
+#define L1F_EF_CTRL_INST_PRGRM 5
+#define L1F_EF_CTRL_INST_SE 6
+#define L1F_EF_CTRL_INST_CE 7
+#define L1F_EF_CTRL_INST_WRSR 10
+#define L1F_EF_CTRL_INST_EWSR 11
+#define L1F_EF_CTRL_INST_WRDI 12
+#define L1F_EF_CTRL_INST_WRITE 2 /* only for eeprom */
+
+#define L1F_EF_ADDR 0x12D4
+#define L1F_EF_DATA 0x12D8
+#define L1F_SPI_ID 0x12DC
+
+#define L1F_SPI_CFG_START 0x12E0
+
+#define L1F_PMCTRL 0x12F8
+#define L1F_PMCTRL_HOTRST_WTEN BIT(31)
+#define L1F_PMCTRL_ASPM_FCEN BIT(30) /* L0s/L1 dis by MAC based on
+ * throughput (setting in 15A0) */
+#define L1F_PMCTRL_SADLY_EN BIT(29)
+#define L1F_PMCTRL_L0S_BUFSRX_EN BIT(28)
+#define L1F_PMCTRL_LCKDET_TIMER_MASK ASHFT24(0xFUL)
+#define L1F_PMCTRL_LCKDET_TIMER_SHIFT 24
+#define L1F_PMCTRL_LCKDET_TIMER_DEF 0xC
+#define L1F_PMCTRL_L1REQ_TO_MASK ASHFT20(0xFUL)
+#define L1F_PMCTRL_L1REQ_TO_SHIFT 20 /* if pm_request_l1 time
+ * exceeds this -> L0s, not L1 */
+#define L1F_PMCTRL_L1REG_TO_DEF 0xC
+#define L1F_PMCTRL_TXL1_AFTER_L0S BIT(19)
+#define L1F_PMCTRL_L1_TIMER_MASK ASHFT16(7UL)
+#define L1F_PMCTRL_L1_TIMER_SHIFT 16
+#define L1F_PMCTRL_L1_TIMER_DIS 0
+#define L1F_PMCTRL_L1_TIMER_2US 1
+#define L1F_PMCTRL_L1_TIMER_4US 2
+#define L1F_PMCTRL_L1_TIMER_8US 3
+#define L1F_PMCTRL_L1_TIMER_16US 4
+#define L1F_PMCTRL_L1_TIMER_24US 5
+#define L1F_PMCTRL_L1_TIMER_32US 6
+#define L1F_PMCTRL_L1_TIMER_63US 7
+#define L1F_PMCTRL_RCVR_WT_1US BIT(15) /* 1:1us, 0:2ms */
+#define L1F_PMCTRL_PWM_VER_11 BIT(14) /* 0:1.0a,1:1.1 */
+#define L1F_PMCTRL_L1_CLKSW_EN BIT(13) /* en pcie clk sw in L1 */
+#define L1F_PMCTRL_L0S_EN BIT(12)
+#define L1F_PMCTRL_RXL1_AFTER_L0S BIT(11)
+#define L1F_PMCTRL_L0S_TIMER_MASK ASHFT8(7UL)
+#define L1F_PMCTRL_L0S_TIMER_SHIFT 8
+#define L1F_PMCTRL_L1_BUFSRX_EN BIT(7)
+#define L1F_PMCTRL_L1_SRDSRX_PWD BIT(6) /* power down serdes rx */
+#define L1F_PMCTRL_L1_SRDSPLL_EN BIT(5)
+#define L1F_PMCTRL_L1_SRDS_EN BIT(4)
+#define L1F_PMCTRL_L1_EN BIT(3)
+#define L1F_PMCTRL_CLKREQ_EN BIT(2)
+#define L1F_PMCTRL_RBER_EN BIT(1)
+#define L1F_PMCTRL_SPRSDWER_EN BIT(0)
+
+#define L1F_LTSSM_CTRL 0x12FC
+#define L1F_LTSSM_WRO_EN BIT(12)
+
+
+/******************************************************************************/
+
+#define L1F_MASTER 0x1400
+#define L1F_MASTER_OTP_FLG BIT(31)
+#define L1F_MASTER_DEV_NUM_MASK ASHFT24(0x7FUL)
+#define L1F_MASTER_DEV_NUM_SHIFT 24
+#define L1F_MASTER_REV_NUM_MASK ASHFT16(0xFFUL)
+#define L1F_MASTER_REV_NUM_SHIFT 16
+#define L1F_MASTER_DEASSRT BIT(15) /* ISSUE DE-ASSERT MSG */
+#define L1F_MASTER_RDCLR_INT BIT(14)
+#define L1F_MASTER_DMA_RST BIT(13)
+#define L1F_MASTER_PCLKSEL_SRDS BIT(12) /* 1:always select pclk from
+ * serdes, don't switch to 25M */
+#define L1F_MASTER_IRQMOD2_EN BIT(11) /* IRQ MODULATION FOR RX */
+#define L1F_MASTER_IRQMOD1_EN BIT(10) /* MODULATION FOR TX/RX */
+#define L1F_MASTER_MANU_INT BIT(9) /* SOFT MANUAL INT */
+#define L1F_MASTER_MANUTIMER_EN BIT(8)
+#define L1F_MASTER_SYSALVTIMER_EN BIT(7) /* SYS ALIVE TIMER EN */
+#define L1F_MASTER_OOB_DIS BIT(6) /* OUT OF BOX DIS */
+#define L1F_MASTER_WAKEN_25M BIT(5) /* WAKE WO. PCIE CLK */
+#define L1F_MASTER_BERT_START BIT(4)
+#define L1F_MASTER_PCIE_TSTMOD_MASK ASHFT2(3UL)
+#define L1F_MASTER_PCIE_TSTMOD_SHIFT 2
+#define L1F_MASTER_PCIE_RST BIT(1)
+#define L1F_MASTER_DMA_MAC_RST BIT(0) /* RST MAC & DMA */
+#define L1F_DMA_MAC_RST_TO 50
+
+#define L1F_MANU_TIMER 0x1404
+
+#define L1F_IRQ_MODU_TIMER 0x1408
+#define L1F_IRQ_MODU_TIMER2_MASK ASHFT16(0xFFFFUL)
+#define L1F_IRQ_MODU_TIMER2_SHIFT 16 /* ONLY FOR RX */
+#define L1F_IRQ_MODU_TIMER1_MASK ASHFT0(0xFFFFUL)
+#define L1F_IRQ_MODU_TIMER1_SHIFT 0
+
+#define L1F_PHY_CTRL 0x140C
+#define L1F_PHY_CTRL_ADDR_MASK ASHFT19(0x1FUL)
+#define L1F_PHY_CTRL_ADDR_SHIFT 19
+#define L1F_PHY_CTRL_BP_VLTGSW BIT(18)
+#define L1F_PHY_CTRL_100AB_EN BIT(17)
+#define L1F_PHY_CTRL_10AB_EN BIT(16)
+#define L1F_PHY_CTRL_PLL_BYPASS BIT(15)
+#define L1F_PHY_CTRL_POWER_DOWN BIT(14) /* affects MAC & PHY,
+ * go to low power state */
+#define L1F_PHY_CTRL_PLL_ON BIT(13) /* 1:PLL ALWAYS ON
+ * 0:CAN SWITCH IN LPW */
+#define L1F_PHY_CTRL_RST_ANALOG BIT(12)
+#define L1F_PHY_CTRL_HIB_PULSE BIT(11)
+#define L1F_PHY_CTRL_HIB_EN BIT(10)
+#define L1F_PHY_CTRL_GIGA_DIS BIT(9)
+#define L1F_PHY_CTRL_IDDQ_DIS BIT(8) /* POWER ON RST */
+#define L1F_PHY_CTRL_IDDQ BIT(7) /* WHILE REBOOT, BIT8(1)
+ * AFFECTS BIT7 */
+#define L1F_PHY_CTRL_LPW_EXIT BIT(6)
+#define L1F_PHY_CTRL_GATE_25M BIT(5)
+#define L1F_PHY_CTRL_RVRS_ANEG BIT(4)
+#define L1F_PHY_CTRL_ANEG_NOW BIT(3)
+#define L1F_PHY_CTRL_LED_MODE BIT(2)
+#define L1F_PHY_CTRL_RTL_MODE BIT(1)
+#define L1F_PHY_CTRL_DSPRST_OUT BIT(0) /* OUT OF DSP RST STATE */
+#define L1F_PHY_CTRL_DSPRST_TO 80
+#define L1F_PHY_CTRL_CLS (\
+ L1F_PHY_CTRL_LED_MODE |\
+ L1F_PHY_CTRL_100AB_EN |\
+ L1F_PHY_CTRL_PLL_ON)
+
+#define L1F_MAC_STS 0x1410
+#define L1F_MAC_STS_SFORCE_MASK ASHFT14(0xFUL)
+#define L1F_MAC_STS_SFORCE_SHIFT 14
+#define L1F_MAC_STS_CALIB_DONE BIT(13)
+#define L1F_MAC_STS_CALIB_RES_MASK ASHFT8(0x1FUL)
+#define L1F_MAC_STS_CALIB_RES_SHIFT 8
+#define L1F_MAC_STS_CALIBERR_MASK ASHFT4(0xFUL)
+#define L1F_MAC_STS_CALIBERR_SHIFT 4
+#define L1F_MAC_STS_TXQ_BUSY BIT(3)
+#define L1F_MAC_STS_RXQ_BUSY BIT(2)
+#define L1F_MAC_STS_TXMAC_BUSY BIT(1)
+#define L1F_MAC_STS_RXMAC_BUSY BIT(0)
+#define L1F_MAC_STS_IDLE (\
+ L1F_MAC_STS_TXQ_BUSY |\
+ L1F_MAC_STS_RXQ_BUSY |\
+ L1F_MAC_STS_TXMAC_BUSY |\
+ L1F_MAC_STS_RXMAC_BUSY)
+
+#define L1F_MDIO 0x1414
+#define L1F_MDIO_MODE_EXT BIT(30) /* 0:normal,1:ext */
+#define L1F_MDIO_POST_READ BIT(29)
+#define L1F_MDIO_AUTO_POLLING BIT(28)
+#define L1F_MDIO_BUSY BIT(27)
+#define L1F_MDIO_CLK_SEL_MASK ASHFT24(7UL)
+#define L1F_MDIO_CLK_SEL_SHIFT 24
+#define L1F_MDIO_CLK_SEL_25MD4 0 /* 25M DIV 4 */
+#define L1F_MDIO_CLK_SEL_25MD6 2
+#define L1F_MDIO_CLK_SEL_25MD8 3
+#define L1F_MDIO_CLK_SEL_25MD10 4
+#define L1F_MDIO_CLK_SEL_25MD32 5
+#define L1F_MDIO_CLK_SEL_25MD64 6
+#define L1F_MDIO_CLK_SEL_25MD128 7
+#define L1F_MDIO_START BIT(23)
+#define L1F_MDIO_SPRES_PRMBL BIT(22)
+#define L1F_MDIO_OP_READ BIT(21) /* 1:read,0:write */
+#define L1F_MDIO_REG_MASK ASHFT16(0x1FUL)
+#define L1F_MDIO_REG_SHIFT 16
+#define L1F_MDIO_DATA_MASK ASHFT0(0xFFFFUL)
+#define L1F_MDIO_DATA_SHIFT 0
+#define L1F_MDIO_MAX_AC_TO 120
+
+#define L1F_MDIO_EXTN 0x1448
+#define L1F_MDIO_EXTN_PORTAD_MASK ASHFT21(0x1FUL)
+#define L1F_MDIO_EXTN_PORTAD_SHIFT 21
+#define L1F_MDIO_EXTN_DEVAD_MASK ASHFT16(0x1FUL)
+#define L1F_MDIO_EXTN_DEVAD_SHIFT 16
+#define L1F_MDIO_EXTN_REG_MASK ASHFT0(0xFFFFUL)
+#define L1F_MDIO_EXTN_REG_SHIFT 0
+
+#define L1F_PHY_STS 0x1418
+#define L1F_PHY_STS_LPW BIT(31)
+#define L1F_PHY_STS_LPI BIT(30)
+#define L1F_PHY_STS_PWON_STRIP_MASK ASHFT16(0xFFFUL)
+#define L1F_PHY_STS_PWON_STRIP_SHIFT 16
+
+#define L1F_PHY_STS_DUPLEX BIT(3)
+#define L1F_PHY_STS_LINKUP BIT(2)
+#define L1F_PHY_STS_SPEED_MASK ASHFT0(3UL)
+#define L1F_PHY_STS_SPEED_SHIFT 0
+#define L1F_PHY_STS_SPEED_1000M 2
+#define L1F_PHY_STS_SPEED_100M 1
+#define L1F_PHY_STS_SPEED_10M 0
+
+#define L1F_BIST0 0x141C
+#define L1F_BIST0_COL_MASK ASHFT24(0x3FUL)
+#define L1F_BIST0_COL_SHIFT 24
+#define L1F_BIST0_ROW_MASK ASHFT12(0xFFFUL)
+#define L1F_BIST0_ROW_SHIFT 12
+#define L1F_BIST0_STEP_MASK ASHFT8(0xFUL)
+#define L1F_BIST0_STEP_SHIFT 8
+#define L1F_BIST0_PATTERN_MASK ASHFT4(7UL)
+#define L1F_BIST0_PATTERN_SHIFT 4
+#define L1F_BIST0_CRIT BIT(3)
+#define L1F_BIST0_FIXED BIT(2)
+#define L1F_BIST0_FAIL BIT(1)
+#define L1F_BIST0_START BIT(0)
+
+#define L1F_BIST1 0x1420
+#define L1F_BIST1_COL_MASK ASHFT24(0x3FUL)
+#define L1F_BIST1_COL_SHIFT 24
+#define L1F_BIST1_ROW_MASK ASHFT12(0xFFFUL)
+#define L1F_BIST1_ROW_SHIFT 12
+#define L1F_BIST1_STEP_MASK ASHFT8(0xFUL)
+#define L1F_BIST1_STEP_SHIFT 8
+#define L1F_BIST1_PATTERN_MASK ASHFT4(7UL)
+#define L1F_BIST1_PATTERN_SHIFT 4
+#define L1F_BIST1_CRIT BIT(3)
+#define L1F_BIST1_FIXED BIT(2)
+#define L1F_BIST1_FAIL BIT(1)
+#define L1F_BIST1_START BIT(0)
+
+#define L1F_SERDES 0x1424
+#define L1F_SERDES_PHYCLK_SLWDWN BIT(18)
+#define L1F_SERDES_MACCLK_SLWDWN BIT(17)
+#define L1F_SERDES_SELFB_PLL_MASK ASHFT14(3UL)
+#define L1F_SERDES_SELFB_PLL_SHIFT 14
+#define L1F_SERDES_PHYCLK_SEL_GTX BIT(13) /* 1:gtx_clk, 0:25M */
+#define L1F_SERDES_PCIECLK_SEL_SRDS BIT(12) /* 1:serdes,0:25M */
+#define L1F_SERDES_BUFS_RX_EN BIT(11)
+#define L1F_SERDES_PD_RX BIT(10)
+#define L1F_SERDES_PLL_EN BIT(9)
+#define L1F_SERDES_EN BIT(8)
+#define L1F_SERDES_SELFB_PLL_SEL_CSR BIT(6) /* 0:state-machine,1:csr */
+#define L1F_SERDES_SELFB_PLL_CSR_MASK ASHFT4(3UL)
+#define L1F_SERDES_SELFB_PLL_CSR_SHIFT 4
+#define L1F_SERDES_SELFB_PLL_CSR_4 3 /* 4-12% OV-CLK */
+#define L1F_SERDES_SELFB_PLL_CSR_0 2 /* 0-4% OV-CLK */
+#define L1F_SERDES_SELFB_PLL_CSR_12 1 /* 12-18% OV-CLK */
+#define L1F_SERDES_SELFB_PLL_CSR_18 0 /* 18-25% OV-CLK */
+#define L1F_SERDES_VCO_SLOW BIT(3)
+#define L1F_SERDES_VCO_FAST BIT(2)
+#define L1F_SERDES_LOCKDCT_EN BIT(1)
+#define L1F_SERDES_LOCKDCTED BIT(0)
+
+#define L1F_LED_CTRL 0x1428
+#define L1F_LED_CTRL_PATMAP2_MASK ASHFT8(3UL)
+#define L1F_LED_CTRL_PATMAP2_SHIFT 8
+#define L1F_LED_CTRL_PATMAP1_MASK ASHFT6(3UL)
+#define L1F_LED_CTRL_PATMAP1_SHIFT 6
+#define L1F_LED_CTRL_PATMAP0_MASK ASHFT4(3UL)
+#define L1F_LED_CTRL_PATMAP0_SHIFT 4
+#define L1F_LED_CTRL_D3_MODE_MASK ASHFT2(3UL)
+#define L1F_LED_CTRL_D3_MODE_SHIFT 2
+#define L1F_LED_CTRL_D3_MODE_NORMAL 0
+#define L1F_LED_CTRL_D3_MODE_WOL_DIS 1
+#define L1F_LED_CTRL_D3_MODE_WOL_ANY 2
+#define L1F_LED_CTRL_D3_MODE_WOL_EN 3
+#define L1F_LED_CTRL_DUTY_CYCL_MASK ASHFT0(3UL)
+#define L1F_LED_CTRL_DUTY_CYCL_SHIFT 0
+#define L1F_LED_CTRL_DUTY_CYCL_50 0 /* 50% */
+#define L1F_LED_CTRL_DUTY_CYCL_125 1 /* 12.5% */
+#define L1F_LED_CTRL_DUTY_CYCL_25 2 /* 25% */
+#define L1F_LED_CTRL_DUTY_CYCL_75 3 /* 75% */
+
+#define L1F_LED_PATN 0x142C
+#define L1F_LED_PATN1_MASK ASHFT16(0xFFFFUL)
+#define L1F_LED_PATN1_SHIFT 16
+#define L1F_LED_PATN0_MASK ASHFT0(0xFFFFUL)
+#define L1F_LED_PATN0_SHIFT 0
+
+#define L1F_LED_PATN2 0x1430
+#define L1F_LED_PATN2_MASK ASHFT0(0xFFFFUL)
+#define L1F_LED_PATN2_SHIFT 0
+
+#define L1F_SYSALV 0x1434
+#define L1F_SYSALV_FLAG BIT(0)
+
+#define L1F_PCIERR_INST 0x1438
+#define L1F_PCIERR_INST_TX_RATE_MASK ASHFT4(0xFUL)
+#define L1F_PCIERR_INST_TX_RATE_SHIFT 4
+#define L1F_PCIERR_INST_RX_RATE_MASK ASHFT0(0xFUL)
+#define L1F_PCIERR_INST_RX_RATE_SHIFT 0
+
+#define L1F_LPI_DECISN_TIMER 0x143C
+
+#define L1F_LPI_CTRL 0x1440
+#define L1F_LPI_CTRL_CHK_DA BIT(31)
+#define L1F_LPI_CTRL_ENH_TO_MASK ASHFT12(0x1FFFUL)
+#define L1F_LPI_CTRL_ENH_TO_SHIFT 12
+#define L1F_LPI_CTRL_ENH_TH_MASK ASHFT6(0x1FUL)
+#define L1F_LPI_CTRL_ENH_TH_SHIFT 6
+#define L1F_LPI_CTRL_ENH_EN BIT(5)
+#define L1F_LPI_CTRL_CHK_RX BIT(4)
+#define L1F_LPI_CTRL_CHK_STATE BIT(3)
+#define L1F_LPI_CTRL_GMII BIT(2)
+#define L1F_LPI_CTRL_TO_PHY BIT(1)
+#define L1F_LPI_CTRL_EN BIT(0)
+
+#define L1F_LPI_WAIT 0x1444
+#define L1F_LPI_WAIT_TIMER_MASK ASHFT0(0xFFFFUL)
+#define L1F_LPI_WAIT_TIMER_SHIFT 0
+
+#define L1F_HRTBT_VLAN 0x1450 /* HEARTBEAT, FOR CIFS */
+#define L1F_HRTBT_VLANID_MASK ASHFT0(0xFFFFUL) /* OR CLOUD */
+#define L1F_HRRBT_VLANID_SHIFT 0
+
+#define L1F_HRTBT_CTRL 0x1454
+#define L1F_HRTBT_CTRL_EN BIT(31)
+#define L1F_HRTBT_CTRL_PERIOD_MASK ASHFT25(0x3FUL)
+#define L1F_HRTBT_CTRL_PERIOD_SHIFT 25
+#define L1F_HRTBT_CTRL_HASVLAN BIT(24)
+#define L1F_HRTBT_CTRL_HDRADDR_MASK ASHFT12(0xFFFUL) /* A0 */
+#define L1F_HRTBT_CTRL_HDRADDR_SHIFT 12
+#define L1F_HRTBT_CTRL_HDRADDRB0_MASK ASHFT13(0x7FFUL) /* B0 */
+#define L1F_HRTBT_CTRL_HDRADDRB0_SHIFT 13
+#define L1F_HRTBT_CTRL_PKT_FRAG BIT(12) /* B0 */
+#define L1F_HRTBT_CTRL_PKTLEN_MASK ASHFT0(0xFFFUL)
+#define L1F_HRTBT_CTRL_PKTLEN_SHIFT 0
+
+#define L1F_HRTBT_EXT_CTRL 0x1AD0 /* B0 */
+#define L1F_HRTBT_EXT_CTRL_NS_EN BIT(12)
+#define L1F_HRTBT_EXT_CTRL_FRAG_LEN_MASK ASHFT4(0xFFUL)
+#define L1F_HRTBT_EXT_CTRL_FRAG_LEN_SHIFT 4
+#define L1F_HRTBT_EXT_CTRL_IS_8023 BIT(3)
+#define L1F_HRTBT_EXT_CTRL_IS_IPV6 BIT(2)
+#define L1F_HRTBT_EXT_CTRL_WAKEUP_EN BIT(1)
+#define L1F_HRTBT_EXT_CTRL_ARP_EN BIT(0)
+
+#define L1F_HRTBT_REM_IPV4_ADDR 0x1AD4
+#define L1F_HRTBT_HOST_IPV4_ADDR 0x1478 /* use L1F_TRD_BUBBLE_DA_IP4 */
+#define L1F_HRTBT_REM_IPV6_ADDR3 0x1AD8
+#define L1F_HRTBT_REM_IPV6_ADDR2 0x1ADC
+#define L1F_HRTBT_REM_IPV6_ADDR1 0x1AE0
+#define L1F_HRTBT_REM_IPV6_ADDR0 0x1AE4
+/* SWOI_HOST_IPV6_ADDR reuses regs 0x1A60-1A6C, 1A70-1A7C, 1AA0-1AAC, 1AB0-1ABC. */
+#define L1F_HRTBT_WAKEUP_PORT 0x1AE8
+#define L1F_HRTBT_WAKEUP_PORT_SRC_MASK ASHFT16(0xFFFFUL)
+#define L1F_HRTBT_WAKEUP_PORT_SRC_SHIFT 16
+#define L1F_HRTBT_WAKEUP_PORT_DEST_MASK ASHFT0(0xFFFFUL)
+#define L1F_HRTBT_WAKEUP_PORT_DEST_SHIFT 0
+
+#define L1F_HRTBT_WAKEUP_DATA7 0x1AEC
+#define L1F_HRTBT_WAKEUP_DATA6 0x1AF0
+#define L1F_HRTBT_WAKEUP_DATA5 0x1AF4
+#define L1F_HRTBT_WAKEUP_DATA4 0x1AF8
+#define L1F_HRTBT_WAKEUP_DATA3 0x1AFC
+#define L1F_HRTBT_WAKEUP_DATA2 0x1B80
+#define L1F_HRTBT_WAKEUP_DATA1 0x1B84
+#define L1F_HRTBT_WAKEUP_DATA0 0x1B88
+
+#define L1F_RXPARSE 0x1458
+#define L1F_RXPARSE_FLT6_L4_MASK ASHFT30(3UL)
+#define L1F_RXPARSE_FLT6_L4_SHIFT 30
+#define L1F_RXPARSE_FLT6_L3_MASK ASHFT28(3UL)
+#define L1F_RXPARSE_FLT6_L3_SHIFT 28
+#define L1F_RXPARSE_FLT5_L4_MASK ASHFT26(3UL)
+#define L1F_RXPARSE_FLT5_L4_SHIFT 26
+#define L1F_RXPARSE_FLT5_L3_MASK ASHFT24(3UL)
+#define L1F_RXPARSE_FLT5_L3_SHIFT 24
+#define L1F_RXPARSE_FLT4_L4_MASK ASHFT22(3UL)
+#define L1F_RXPARSE_FLT4_L4_SHIFT 22
+#define L1F_RXPARSE_FLT4_L3_MASK ASHFT20(3UL)
+#define L1F_RXPARSE_FLT4_L3_SHIFT 20
+#define L1F_RXPARSE_FLT3_L4_MASK ASHFT18(3UL)
+#define L1F_RXPARSE_FLT3_L4_SHIFT 18
+#define L1F_RXPARSE_FLT3_L3_MASK ASHFT16(3UL)
+#define L1F_RXPARSE_FLT3_L3_SHIFT 16
+#define L1F_RXPARSE_FLT2_L4_MASK ASHFT14(3UL)
+#define L1F_RXPARSE_FLT2_L4_SHIFT 14
+#define L1F_RXPARSE_FLT2_L3_MASK ASHFT12(3UL)
+#define L1F_RXPARSE_FLT2_L3_SHIFT 12
+#define L1F_RXPARSE_FLT1_L4_MASK ASHFT10(3UL)
+#define L1F_RXPARSE_FLT1_L4_SHIFT 10
+#define L1F_RXPARSE_FLT1_L3_MASK ASHFT8(3UL)
+#define L1F_RXPARSE_FLT1_L3_SHIFT 8
+#define L1F_RXPARSE_FLT6_EN BIT(5)
+#define L1F_RXPARSE_FLT5_EN BIT(4)
+#define L1F_RXPARSE_FLT4_EN BIT(3)
+#define L1F_RXPARSE_FLT3_EN BIT(2)
+#define L1F_RXPARSE_FLT2_EN BIT(1)
+#define L1F_RXPARSE_FLT1_EN BIT(0)
+#define L1F_RXPARSE_FLT_L4_UDP 0
+#define L1F_RXPARSE_FLT_L4_TCP 1
+#define L1F_RXPARSE_FLT_L4_BOTH 2
+#define L1F_RXPARSE_FLT_L4_NONE 3
+#define L1F_RXPARSE_FLT_L3_IPV6 0
+#define L1F_RXPARSE_FLT_L3_IPV4 1
+#define L1F_RXPARSE_FLT_L3_BOTH 2
+
+/* Teredo support */
+#define L1F_TRD_CTRL 0x145C
+#define L1F_TRD_CTRL_EN BIT(31)
+#define L1F_TRD_CTRL_BUBBLE_WAKE_EN BIT(30)
+#define L1F_TRD_CTRL_PREFIX_CMP_HW BIT(28)
+#define L1F_TRD_CTRL_RSHDR_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1F_TRD_CTRL_RSHDR_ADDR_SHIFT 16
+#define L1F_TRD_CTRL_SINTV_MAX_MASK ASHFT8(0xFFUL)
+#define L1F_TRD_CTRL_SINTV_MAX_SHIFT 8
+#define L1F_TRD_CTRL_SINTV_MIN_MASK ASHFT0(0xFFUL)
+#define L1F_TRD_CTRL_SINTV_MIN_SHIFT 0
+
+#define L1F_TRD_RS 0x1460
+#define L1F_TRD_RS_SZ_MASK ASHFT20(0xFFFUL)
+#define L1F_TRD_RS_SZ_SHIFT 20
+#define L1F_TRD_RS_NONCE_OFS_MASK ASHFT8(0xFFFUL)
+#define L1F_TRD_RS_NONCE_OFS_SHIFT 8
+#define L1F_TRD_RS_SEQ_OFS_MASK ASHFT0(0xFFUL)
+#define L1F_TRD_RS_SEQ_OFS_SHIFT 0
+
+#define L1F_TRD_SRV_IP4 0x1464
+
+#define L1F_TRD_CLNT_EXTNL_IP4 0x1468
+
+#define L1F_TRD_PORT 0x146C
+#define L1F_TRD_PORT_CLNT_EXTNL_MASK ASHFT16(0xFFFFUL)
+#define L1F_TRD_PORT_CLNT_EXTNL_SHIFT 16
+#define L1F_TRD_PORT_SRV_MASK ASHFT0(0xFFFFUL)
+#define L1F_TRD_PORT_SRV_SHIFT 0
+
+#define L1F_TRD_PREFIX 0x1470
+
+#define L1F_TRD_BUBBLE_DA_IP4 0x1478
+
+#define L1F_TRD_BUBBLE_DA_PORT 0x147C
+
+
+#define L1F_IDLE_DECISN_TIMER 0x1474 /* B0 */
+#define L1F_IDLE_DECISN_TIMER_DEF 0x400 /* 1ms */
+
+
+#define L1F_MAC_CTRL 0x1480
+#define L1F_MAC_CTRL_FAST_PAUSE BIT(31)
+#define L1F_MAC_CTRL_WOLSPED_SWEN BIT(30)
+#define L1F_MAC_CTRL_MHASH_ALG_HI5B BIT(29) /* 1:legacy, 0:marvell (low 5b) */
+#define L1F_MAC_CTRL_SPAUSE_EN BIT(28)
+#define L1F_MAC_CTRL_DBG_EN BIT(27)
+#define L1F_MAC_CTRL_BRD_EN BIT(26)
+#define L1F_MAC_CTRL_MULTIALL_EN BIT(25)
+#define L1F_MAC_CTRL_RX_XSUM_EN BIT(24)
+#define L1F_MAC_CTRL_THUGE BIT(23)
+#define L1F_MAC_CTRL_MBOF BIT(22)
+#define L1F_MAC_CTRL_SPEED_MASK ASHFT20(3UL)
+#define L1F_MAC_CTRL_SPEED_SHIFT 20
+#define L1F_MAC_CTRL_SPEED_10_100 1
+#define L1F_MAC_CTRL_SPEED_1000 2
+#define L1F_MAC_CTRL_SIMR BIT(19)
+#define L1F_MAC_CTRL_SSTCT BIT(17)
+#define L1F_MAC_CTRL_TPAUSE BIT(16)
+#define L1F_MAC_CTRL_PROMISC_EN BIT(15)
+#define L1F_MAC_CTRL_VLANSTRIP BIT(14)
+#define L1F_MAC_CTRL_PRMBLEN_MASK ASHFT10(0xFUL)
+#define L1F_MAC_CTRL_PRMBLEN_SHIFT 10
+#define L1F_MAC_CTRL_RHUGE_EN BIT(9)
+#define L1F_MAC_CTRL_FLCHK BIT(8)
+#define L1F_MAC_CTRL_PCRCE BIT(7)
+#define L1F_MAC_CTRL_CRCE BIT(6)
+#define L1F_MAC_CTRL_FULLD BIT(5)
+#define L1F_MAC_CTRL_LPBACK_EN BIT(4)
+#define L1F_MAC_CTRL_RXFC_EN BIT(3)
+#define L1F_MAC_CTRL_TXFC_EN BIT(2)
+#define L1F_MAC_CTRL_RX_EN BIT(1)
+#define L1F_MAC_CTRL_TX_EN BIT(0)
+
+#define L1F_GAP 0x1484
+#define L1F_GAP_IPGR2_MASK ASHFT24(0x7FUL)
+#define L1F_GAP_IPGR2_SHIFT 24
+#define L1F_GAP_IPGR1_MASK ASHFT16(0x7FUL)
+#define L1F_GAP_IPGR1_SHIFT 16
+#define L1F_GAP_MIN_IFG_MASK ASHFT8(0xFFUL)
+#define L1F_GAP_MIN_IFG_SHIFT 8
+#define L1F_GAP_IPGT_MASK ASHFT0(0x7FUL) /* A0 diff with B0 */
+#define L1F_GAP_IPGT_SHIFT 0
+
+#define L1F_STAD0 0x1488
+#define L1F_STAD1 0x148C
+
+#define L1F_HASH_TBL0 0x1490
+#define L1F_HASH_TBL1 0x1494
+
+#define L1F_HALFD 0x1498
+#define L1F_HALFD_JAM_IPG_MASK ASHFT24(0xFUL)
+#define L1F_HALFD_JAM_IPG_SHIFT 24
+#define L1F_HALFD_ABEBT_MASK ASHFT20(0xFUL)
+#define L1F_HALFD_ABEBT_SHIFT 20
+#define L1F_HALFD_ABEBE BIT(19)
+#define L1F_HALFD_BPNB BIT(18)
+#define L1F_HALFD_NOBO BIT(17)
+#define L1F_HALFD_EDXSDFR BIT(16)
+#define L1F_HALFD_RETRY_MASK ASHFT12(0xFUL)
+#define L1F_HALFD_RETRY_SHIFT 12
+#define L1F_HALFD_LCOL_MASK ASHFT0(0x3FFUL)
+#define L1F_HALFD_LCOL_SHIFT 0
+
+#define L1F_MTU 0x149C
+#define L1F_MTU_JUMBO_TH 1514
+#define L1F_MTU_STD_ALGN 1536
+#define L1F_MTU_MIN 64
+
+#define L1F_SRAM0 0x1500
+#define L1F_SRAM_RFD_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1F_SRAM_RFD_TAIL_ADDR_SHIFT 16
+#define L1F_SRAM_RFD_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1F_SRAM_RFD_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM1 0x1510
+#define L1F_SRAM_RFD_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1F_SRAM_RFD_LEN_SHIFT 0
+
+#define L1F_SRAM2 0x1518
+#define L1F_SRAM_TRD_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1F_SRAM_TRD_TAIL_ADDR_SHIFT 16
+#define L1F_SRMA_TRD_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1F_SRAM_TRD_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM3 0x151C
+#define L1F_SRAM_TRD_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1F_SRAM_TRD_LEN_SHIFT 0
+
+#define L1F_SRAM4 0x1520
+#define L1F_SRAM_RXF_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1F_SRAM_RXF_TAIL_ADDR_SHIFT 16
+#define L1F_SRAM_RXF_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1F_SRAM_RXF_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM5 0x1524
+#define L1F_SRAM_RXF_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1F_SRAM_RXF_LEN_SHIFT 0
+#define L1F_SRAM_RXF_LEN_8K (8*1024)
+
+#define L1F_SRAM6 0x1528
+#define L1F_SRAM_TXF_TAIL_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1F_SRAM_TXF_TAIL_ADDR_SHIFT 16
+#define L1F_SRAM_TXF_HEAD_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1F_SRAM_TXF_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM7 0x152C
+#define L1F_SRAM_TXF_LEN_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1F_SRAM_TXF_LEN_SHIFT 0
+
+#define L1F_SRAM8 0x1530
+#define L1F_SRAM_PATTERN_ADDR_MASK ASHFT16(0xFFFUL)
+#define L1F_SRAM_PATTERN_ADDR_SHIFT 16
+#define L1F_SRAM_TSO_ADDR_MASK ASHFT0(0xFFFUL)
+#define L1F_SRAM_TSO_ADDR_SHIFT 0
+
+#define L1F_SRAM9 0x1534
+#define L1F_SRAM_LOAD_PTR BIT(0)
+
+#define L1F_RX_BASE_ADDR_HI 0x1540
+
+#define L1F_TX_BASE_ADDR_HI 0x1544
+
+#define L1F_RFD_ADDR_LO 0x1550
+#define L1F_RFD_RING_SZ 0x1560
+#define L1F_RFD_BUF_SZ 0x1564
+#define L1F_RFD_BUF_SZ_MASK ASHFT0(0xFFFFUL)
+#define L1F_RFD_BUF_SZ_SHIFT 0
+
+#define L1F_RRD_ADDR_LO 0x1568
+#define L1F_RRD_RING_SZ 0x1578
+#define L1F_RRD_RING_SZ_MASK ASHFT0(0xFFFUL)
+#define L1F_RRD_RING_SZ_SHIFT 0
+
+#define L1F_TPD_PRI3_ADDR_LO 0x14E4 /* HIGHEST PRIORITY */
+#define L1F_TPD_PRI2_ADDR_LO 0x14E0
+#define L1F_TPD_PRI1_ADDR_LO 0x157C
+#define L1F_TPD_PRI0_ADDR_LO 0x1580 /* LOWEST PRIORITY */
+
+#define L1F_TPD_PRI3_PIDX 0x1618 /* 16BIT */
+#define L1F_TPD_PRI2_PIDX 0x161A /* 16BIT */
+#define L1F_TPD_PRI1_PIDX 0x15F0 /* 16BIT */
+#define L1F_TPD_PRI0_PIDX 0x15F2 /* 16BIT */
+
+#define L1F_TPD_PRI3_CIDX 0x161C /* 16BIT */
+#define L1F_TPD_PRI2_CIDX 0x161E /* 16BIT */
+#define L1F_TPD_PRI1_CIDX 0x15F4 /* 16BIT */
+#define L1F_TPD_PRI0_CIDX 0x15F6 /* 16BIT */
+
+#define L1F_TPD_RING_SZ 0x1584
+#define L1F_TPD_RING_SZ_MASK ASHFT0(0xFFFFUL)
+#define L1F_TPD_RING_SZ_SHIFT 0
+
+#define L1F_CMB_ADDR_LO 0x1588 /* NOT USED */
+
+#define L1F_TXQ0 0x1590
+#define L1F_TXQ0_TXF_BURST_PREF_MASK ASHFT16(0xFFFFUL)
+#define L1F_TXQ0_TXF_BURST_PREF_SHIFT 16
+#define L1F_TXQ_TXF_BURST_PREF_DEF 0x200
+#define L1F_TXQ0_PEDING_CLR BIT(8)
+#define L1F_TXQ0_LSO_8023_EN BIT(7)
+#define L1F_TXQ0_MODE_ENHANCE BIT(6)
+#define L1F_TXQ0_EN BIT(5)
+#define L1F_TXQ0_SUPT_IPOPT BIT(4)
+#define L1F_TXQ0_TPD_BURSTPREF_MASK ASHFT0(0xFUL)
+#define L1F_TXQ0_TPD_BURSTPREF_SHIFT 0
+#define L1F_TXQ_TPD_BURSTPREF_DEF 5
+
+#define L1F_TXQ1 0x1594
+#define L1F_TXQ1_ERRLGPKT_DROP_EN BIT(11) /* drop error packets larger
+ * than the rfd buffer */
+#define L1F_TXQ1_JUMBO_TSOTHR_MASK ASHFT0(0x7FFUL) /* 8BYTES UNIT */
+#define L1F_TXQ1_JUMBO_TSOTHR_SHIFT 0
+#define L1F_TXQ1_JUMBO_TSO_TH (7*1024) /* byte */
+
+#define L1F_TXQ2 0x1598 /* ENTER L1 CONTROL */
+#define L1F_TXQ2_BURST_EN BIT(31)
+#define L1F_TXQ2_BURST_HI_WM_MASK ASHFT16(0xFFFUL)
+#define L1F_TXQ2_BURST_HI_WM_SHIFT 16
+#define L1F_TXQ2_BURST_LO_WM_MASK ASHFT0(0xFFFUL)
+#define L1F_TXQ2_BURST_LO_WM_SHIFT 0
+
+#define L1F_RXQ0 0x15A0
+#define L1F_RXQ0_EN BIT(31)
+#define L1F_RXQ0_CUT_THRU_EN BIT(30)
+#define L1F_RXQ0_RSS_HASH_EN BIT(29)
+#define L1F_RXQ0_NON_IP_QTBL BIT(28) /* 0:q0,1:table */
+#define L1F_RXQ0_RSS_MODE_MASK ASHFT26(3UL)
+#define L1F_RXQ0_RSS_MODE_SHIFT 26
+#define L1F_RXQ0_RSS_MODE_DIS 0
+#define L1F_RXQ0_RSS_MODE_SQSI 1
+#define L1F_RXQ0_RSS_MODE_MQSI 2
+#define L1F_RXQ0_RSS_MODE_MQMI 3
+#define L1F_RXQ0_NUM_RFD_PREF_MASK ASHFT20(0x3FUL)
+#define L1F_RXQ0_NUM_RFD_PREF_SHIFT 20
+#define L1F_RXQ0_NUM_RFD_PREF_DEF 8
+#define L1F_RXQ0_IDT_TBL_SIZE_MASK ASHFT8(0x1FFUL)
+#define L1F_RXQ0_IDT_TBL_SIZE_SHIFT 8
+#define L1F_RXQ0_IDT_TBL_SIZE_DEF 0x100
+#define L1F_RXQ0_IPV6_PARSE_EN BIT(7)
+#define L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN BIT(5)
+#define L1F_RXQ0_RSS_HSTYP_IPV6_EN BIT(4)
+#define L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN BIT(3)
+#define L1F_RXQ0_RSS_HSTYP_IPV4_EN BIT(2)
+#define L1F_RXQ0_RSS_HSTYP_ALL (\
+ L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN |\
+ L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN |\
+ L1F_RXQ0_RSS_HSTYP_IPV6_EN |\
+ L1F_RXQ0_RSS_HSTYP_IPV4_EN)
+#define L1F_RXQ0_ASPM_THRESH_MASK ASHFT0(3UL)
+#define L1F_RXQ0_ASPM_THRESH_SHIFT 0
+#define L1F_RXQ0_ASPM_THRESH_NO 0
+#define L1F_RXQ0_ASPM_THRESH_1M 1
+#define L1F_RXQ0_ASPM_THRESH_10M 2
+#define L1F_RXQ0_ASPM_THRESH_100M 3
+
+#define L1F_RXQ1 0x15A4
+#define L1F_RXQ1_JUMBO_LKAH_MASK ASHFT12(0xFUL) /* 32BYTES UNIT */
+#define L1F_RXQ1_JUMBO_LKAH_SHIFT 12
+#define L1F_RXQ1_RFD_PREF_DOWN_MASK ASHFT6(0x3FUL)
+#define L1F_RXQ1_RFD_PREF_DOWN_SHIFT 6
+#define L1F_RXQ1_RFD_PREF_UP_MASK ASHFT0(0x3FUL)
+#define L1F_RXQ1_RFD_PREF_UP_SHIFT 0
+
+#define L1F_RXQ2 0x15A8
+/* XOFF: WHEN USED SRAM DROPS BELOW IT, NOTIFY THE PEER TO SEND AGAIN */
+#define L1F_RXQ2_RXF_XOFF_THRESH_MASK ASHFT16(0xFFFUL)
+#define L1F_RXQ2_RXF_XOFF_THRESH_SHIFT 16
+#define L1F_RXQ2_RXF_XON_THRESH_MASK ASHFT0(0xFFFUL)
+#define L1F_RXQ2_RXF_XON_THRESH_SHIFT 0
+
+#define L1F_RXQ3 0x15AC
+#define L1F_RXQ3_RXD_TIMER_MASK ASHFT16(0x7FFFUL)
+#define L1F_RXQ3_RXD_TIMER_SHIFT 16
+#define L1F_RXQ3_RXD_THRESH_MASK ASHFT0(0xFFFUL) /* 8BYTES UNIT */
+#define L1F_RXQ3_RXD_THRESH_SHIFT 0
+
+#define L1F_DMA 0x15C0
+#define L1F_DMA_SMB_NOW BIT(31)
+#define L1F_DMA_WPEND_CLR BIT(30)
+#define L1F_DMA_RPEND_CLR BIT(29)
+#define L1F_DMA_WSRAM_RDCTRL BIT(28)
+#define L1F_DMA_RCHNL_SEL_MASK ASHFT26(3UL)
+#define L1F_DMA_RCHNL_SEL_SHIFT 26
+#define L1F_DMA_RCHNL_SEL_1 0
+#define L1F_DMA_RCHNL_SEL_2 1
+#define L1F_DMA_RCHNL_SEL_3 2
+#define L1F_DMA_RCHNL_SEL_4 3
+#define L1F_DMA_SMB_EN BIT(21) /* smb dma enable */
+#define L1F_DMA_WDLY_CNT_MASK ASHFT16(0xFUL)
+#define L1F_DMA_WDLY_CNT_SHIFT 16
+#define L1F_DMA_WDLY_CNT_DEF 4
+#define L1F_DMA_RDLY_CNT_MASK ASHFT11(0x1FUL)
+#define L1F_DMA_RDLY_CNT_SHIFT 11
+#define L1F_DMA_RDLY_CNT_DEF 15
+#define L1F_DMA_RREQ_PRI_DATA BIT(10) /* 0:tpd, 1:data */
+#define L1F_DMA_WREQ_BLEN_MASK ASHFT7(7UL)
+#define L1F_DMA_WREQ_BLEN_SHIFT 7
+#define L1F_DMA_RREQ_BLEN_MASK ASHFT4(7UL)
+#define L1F_DMA_RREQ_BLEN_SHIFT 4
+#define L1F_DMA_PENDING_AUTO_RST BIT(3)
+#define L1F_DMA_RORDER_MODE_MASK ASHFT0(7UL)
+#define L1F_DMA_RORDER_MODE_SHIFT 0
+#define L1F_DMA_RORDER_MODE_OUT 4
+#define L1F_DMA_RORDER_MODE_ENHANCE 2
+#define L1F_DMA_RORDER_MODE_IN 1
+
+#define L1F_WOL0 0x14A0
+#define L1F_WOL0_PT7_MATCH BIT(31)
+#define L1F_WOL0_PT6_MATCH BIT(30)
+#define L1F_WOL0_PT5_MATCH BIT(29)
+#define L1F_WOL0_PT4_MATCH BIT(28)
+#define L1F_WOL0_PT3_MATCH BIT(27)
+#define L1F_WOL0_PT2_MATCH BIT(26)
+#define L1F_WOL0_PT1_MATCH BIT(25)
+#define L1F_WOL0_PT0_MATCH BIT(24)
+#define L1F_WOL0_PT7_EN BIT(23)
+#define L1F_WOL0_PT6_EN BIT(22)
+#define L1F_WOL0_PT5_EN BIT(21)
+#define L1F_WOL0_PT4_EN BIT(20)
+#define L1F_WOL0_PT3_EN BIT(19)
+#define L1F_WOL0_PT2_EN BIT(18)
+#define L1F_WOL0_PT1_EN BIT(17)
+#define L1F_WOL0_PT0_EN BIT(16)
+#define L1F_WOL0_IPV4_SYNC_EVT BIT(14)
+#define L1F_WOL0_IPV6_SYNC_EVT BIT(13)
+#define L1F_WOL0_LINK_EVT BIT(10)
+#define L1F_WOL0_MAGIC_EVT BIT(9)
+#define L1F_WOL0_PATTERN_EVT BIT(8)
+#define L1F_WOL0_OOB_EN BIT(6)
+#define L1F_WOL0_PME_LINK BIT(5)
+#define L1F_WOL0_LINK_EN BIT(4)
+#define L1F_WOL0_PME_MAGIC_EN BIT(3)
+#define L1F_WOL0_MAGIC_EN BIT(2)
+#define L1F_WOL0_PME_PATTERN_EN BIT(1)
+#define L1F_WOL0_PATTERN_EN BIT(0)
+
+#define L1F_WOL1 0x14A4
+#define L1F_WOL1_PT3_LEN_MASK ASHFT24(0xFFUL)
+#define L1F_WOL1_PT3_LEN_SHIFT 24
+#define L1F_WOL1_PT2_LEN_MASK ASHFT16(0xFFUL)
+#define L1F_WOL1_PT2_LEN_SHIFT 16
+#define L1F_WOL1_PT1_LEN_MASK ASHFT8(0xFFUL)
+#define L1F_WOL1_PT1_LEN_SHIFT 8
+#define L1F_WOL1_PT0_LEN_MASK ASHFT0(0xFFUL)
+#define L1F_WOL1_PT0_LEN_SHIFT 0
+
+#define L1F_WOL2 0x14A8
+#define L1F_WOL2_PT7_LEN_MASK ASHFT24(0xFFUL)
+#define L1F_WOL2_PT7_LEN_SHIFT 24
+#define L1F_WOL2_PT6_LEN_MASK ASHFT16(0xFFUL)
+#define L1F_WOL2_PT6_LEN_SHIFT 16
+#define L1F_WOL2_PT5_LEN_MASK ASHFT8(0xFFUL)
+#define L1F_WOL2_PT5_LEN_SHIFT 8
+#define L1F_WOL2_PT4_LEN_MASK ASHFT0(0xFFUL)
+#define L1F_WOL2_PT4_LEN_SHIFT 0
+
+#define L1F_RFD_PIDX 0x15E0
+#define L1F_RFD_PIDX_MASK ASHFT0(0xFFFUL)
+#define L1F_RFD_PIDX_SHIFT 0
+
+#define L1F_RFD_CIDX 0x15F8
+#define L1F_RFD_CIDX_MASK ASHFT0(0xFFFUL)
+#define L1F_RFD_CIDX_SHIFT 0
+
+/* MIB */
+#define L1F_MIB_BASE 0x1700
+#define L1F_MIB_RX_OK (L1F_MIB_BASE + 0)
+#define L1F_MIB_RX_BC (L1F_MIB_BASE + 4)
+#define L1F_MIB_RX_MC (L1F_MIB_BASE + 8)
+#define L1F_MIB_RX_PAUSE (L1F_MIB_BASE + 12)
+#define L1F_MIB_RX_CTRL (L1F_MIB_BASE + 16)
+#define L1F_MIB_RX_FCS (L1F_MIB_BASE + 20)
+#define L1F_MIB_RX_LENERR (L1F_MIB_BASE + 24)
+#define L1F_MIB_RX_BYTCNT (L1F_MIB_BASE + 28)
+#define L1F_MIB_RX_RUNT (L1F_MIB_BASE + 32)
+#define L1F_MIB_RX_FRAGMENT (L1F_MIB_BASE + 36)
+#define L1F_MIB_RX_64B (L1F_MIB_BASE + 40)
+#define L1F_MIB_RX_127B (L1F_MIB_BASE + 44)
+#define L1F_MIB_RX_255B (L1F_MIB_BASE + 48)
+#define L1F_MIB_RX_511B (L1F_MIB_BASE + 52)
+#define L1F_MIB_RX_1023B (L1F_MIB_BASE + 56)
+#define L1F_MIB_RX_1518B (L1F_MIB_BASE + 60)
+#define L1F_MIB_RX_SZMAX (L1F_MIB_BASE + 64)
+#define L1F_MIB_RX_OVSZ (L1F_MIB_BASE + 68)
+#define L1F_MIB_RXF_OV (L1F_MIB_BASE + 72)
+#define L1F_MIB_RRD_OV (L1F_MIB_BASE + 76)
+#define L1F_MIB_RX_ALIGN (L1F_MIB_BASE + 80)
+#define L1F_MIB_RX_BCCNT (L1F_MIB_BASE + 84)
+#define L1F_MIB_RX_MCCNT (L1F_MIB_BASE + 88)
+#define L1F_MIB_RX_ERRADDR (L1F_MIB_BASE + 92)
+#define L1F_MIB_TX_OK (L1F_MIB_BASE + 96)
+#define L1F_MIB_TX_BC (L1F_MIB_BASE + 100)
+#define L1F_MIB_TX_MC (L1F_MIB_BASE + 104)
+#define L1F_MIB_TX_PAUSE (L1F_MIB_BASE + 108)
+#define L1F_MIB_TX_EXCDEFER (L1F_MIB_BASE + 112)
+#define L1F_MIB_TX_CTRL (L1F_MIB_BASE + 116)
+#define L1F_MIB_TX_DEFER (L1F_MIB_BASE + 120)
+#define L1F_MIB_TX_BYTCNT (L1F_MIB_BASE + 124)
+#define L1F_MIB_TX_64B (L1F_MIB_BASE + 128)
+#define L1F_MIB_TX_127B (L1F_MIB_BASE + 132)
+#define L1F_MIB_TX_255B (L1F_MIB_BASE + 136)
+#define L1F_MIB_TX_511B (L1F_MIB_BASE + 140)
+#define L1F_MIB_TX_1023B (L1F_MIB_BASE + 144)
+#define L1F_MIB_TX_1518B (L1F_MIB_BASE + 148)
+#define L1F_MIB_TX_SZMAX (L1F_MIB_BASE + 152)
+#define L1F_MIB_TX_1COL (L1F_MIB_BASE + 156)
+#define L1F_MIB_TX_2COL (L1F_MIB_BASE + 160)
+#define L1F_MIB_TX_LATCOL (L1F_MIB_BASE + 164)
+#define L1F_MIB_TX_ABRTCOL (L1F_MIB_BASE + 168)
+#define L1F_MIB_TX_UNDRUN (L1F_MIB_BASE + 172)
+#define L1F_MIB_TX_TRDBEOP (L1F_MIB_BASE + 176)
+#define L1F_MIB_TX_LENERR (L1F_MIB_BASE + 180)
+#define L1F_MIB_TX_TRUNC (L1F_MIB_BASE + 184)
+#define L1F_MIB_TX_BCCNT (L1F_MIB_BASE + 188)
+#define L1F_MIB_TX_MCCNT (L1F_MIB_BASE + 192)
+#define L1F_MIB_UPDATE (L1F_MIB_BASE + 196)
+
+/******************************************************************************/
+
+#define L1F_ISR 0x1600
+#define L1F_ISR_DIS BIT(31)
+#define L1F_ISR_RX_Q7 BIT(30)
+#define L1F_ISR_RX_Q6 BIT(29)
+#define L1F_ISR_RX_Q5 BIT(28)
+#define L1F_ISR_RX_Q4 BIT(27)
+#define L1F_ISR_PCIE_LNKDOWN BIT(26)
+#define L1F_ISR_PCIE_CERR BIT(25)
+#define L1F_ISR_PCIE_NFERR BIT(24)
+#define L1F_ISR_PCIE_FERR BIT(23)
+#define L1F_ISR_PCIE_UR BIT(22)
+#define L1F_ISR_MAC_TX BIT(21)
+#define L1F_ISR_MAC_RX BIT(20)
+#define L1F_ISR_RX_Q3 BIT(19)
+#define L1F_ISR_RX_Q2 BIT(18)
+#define L1F_ISR_RX_Q1 BIT(17)
+#define L1F_ISR_RX_Q0 BIT(16)
+#define L1F_ISR_TX_Q0 BIT(15)
+#define L1F_ISR_TXQ_TO BIT(14)
+#define L1F_ISR_PHY_LPW BIT(13)
+#define L1F_ISR_PHY BIT(12)
+#define L1F_ISR_TX_CREDIT BIT(11)
+#define L1F_ISR_DMAW BIT(10)
+#define L1F_ISR_DMAR BIT(9)
+#define L1F_ISR_TXF_UR BIT(8)
+#define L1F_ISR_TX_Q3 BIT(7)
+#define L1F_ISR_TX_Q2 BIT(6)
+#define L1F_ISR_TX_Q1 BIT(5)
+#define L1F_ISR_RFD_UR BIT(4)
+#define L1F_ISR_RXF_OV BIT(3)
+#define L1F_ISR_MANU BIT(2)
+#define L1F_ISR_TIMER BIT(1)
+#define L1F_ISR_SMB BIT(0)
+
+#define L1F_IMR 0x1604
+
+#define L1F_INT_RETRIG 0x1608 /* re-send deassert/assert
+ * if software does not respond */
+#define L1F_INT_RETRIG_TIMER_MASK ASHFT0(0xFFFFUL)
+#define L1F_INT_RETRIG_TIMER_SHIFT 0
+#define L1F_INT_RETRIG_TO 20000 /* 40ms */
+
+#define L1F_INT_DEASST_TIMER 0x1614 /* re-send deassert
+ * if software does not respond */
+
+#define L1F_PATTERN_MASK 0x1620 /* 128bytes, sleep state */
+#define L1F_PATTERN_MASK_LEN 128
+
+
+#define L1F_FLT1_SRC_IP0 0x1A00
+#define L1F_FLT1_SRC_IP1 0x1A04
+#define L1F_FLT1_SRC_IP2 0x1A08
+#define L1F_FLT1_SRC_IP3 0x1A0C
+#define L1F_FLT1_DST_IP0 0x1A10
+#define L1F_FLT1_DST_IP1 0x1A14
+#define L1F_FLT1_DST_IP2 0x1A18
+#define L1F_FLT1_DST_IP3 0x1A1C
+#define L1F_FLT1_PORT 0x1A20
+#define L1F_FLT1_PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_FLT1_PORT_DST_SHIFT 16
+#define L1F_FLT1_PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_FLT1_PORT_SRC_SHIFT 0
+
+#define L1F_FLT2_SRC_IP0 0x1A24
+#define L1F_FLT2_SRC_IP1 0x1A28
+#define L1F_FLT2_SRC_IP2 0x1A2C
+#define L1F_FLT2_SRC_IP3 0x1A30
+#define L1F_FLT2_DST_IP0 0x1A34
+#define L1F_FLT2_DST_IP1 0x1A38
+#define L1F_FLT2_DST_IP2 0x1A40
+#define L1F_FLT2_DST_IP3 0x1A44
+#define L1F_FLT2_PORT 0x1A48
+#define L1F_FLT2_PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_FLT2_PORT_DST_SHIFT 16
+#define L1F_FLT2_PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_FLT2_PORT_SRC_SHIFT 0
+
+#define L1F_FLT3_SRC_IP0 0x1A4C
+#define L1F_FLT3_SRC_IP1 0x1A50
+#define L1F_FLT3_SRC_IP2 0x1A54
+#define L1F_FLT3_SRC_IP3 0x1A58
+#define L1F_FLT3_DST_IP0 0x1A5C
+#define L1F_FLT3_DST_IP1 0x1A60
+#define L1F_FLT3_DST_IP2 0x1A64
+#define L1F_FLT3_DST_IP3 0x1A68
+#define L1F_FLT3_PORT 0x1A6C
+#define L1F_FLT3_PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_FLT3_PORT_DST_SHIFT 16
+#define L1F_FLT3_PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_FLT3_PORT_SRC_SHIFT 0
+
+#define L1F_FLT4_SRC_IP0 0x1A70
+#define L1F_FLT4_SRC_IP1 0x1A74
+#define L1F_FLT4_SRC_IP2 0x1A78
+#define L1F_FLT4_SRC_IP3 0x1A7C
+#define L1F_FLT4_DST_IP0 0x1A80
+#define L1F_FLT4_DST_IP1 0x1A84
+#define L1F_FLT4_DST_IP2 0x1A88
+#define L1F_FLT4_DST_IP3 0x1A8C
+#define L1F_FLT4_PORT 0x1A90
+#define L1F_FLT4_PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_FLT4_PORT_DST_SHIFT 16
+#define L1F_FLT4_PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_FLT4_PORT_SRC_SHIFT 0
+
+#define L1F_FLT5_SRC_IP0 0x1A94
+#define L1F_FLT5_SRC_IP1 0x1A98
+#define L1F_FLT5_SRC_IP2 0x1A9C
+#define L1F_FLT5_SRC_IP3 0x1AA0
+#define L1F_FLT5_DST_IP0 0x1AA4
+#define L1F_FLT5_DST_IP1 0x1AA8
+#define L1F_FLT5_DST_IP2 0x1AAC
+#define L1F_FLT5_DST_IP3 0x1AB0
+#define L1F_FLT5_PORT 0x1AB4
+#define L1F_FLT5_PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_FLT5_PORT_DST_SHIFT 16
+#define L1F_FLT5_PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_FLT5_PORT_SRC_SHIFT 0
+
+#define L1F_FLT6_SRC_IP0 0x1AB8
+#define L1F_FLT6_SRC_IP1 0x1ABC
+#define L1F_FLT6_SRC_IP2 0x1AC0
+#define L1F_FLT6_SRC_IP3 0x1AC8
+#define L1F_FLT6_DST_IP0 0x1620 /* only S0 state */
+#define L1F_FLT6_DST_IP1 0x1624
+#define L1F_FLT6_DST_IP2 0x1628
+#define L1F_FLT6_DST_IP3 0x162C
+#define L1F_FLT6_PORT 0x1630
+#define L1F_FLT6_PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_FLT6_PORT_DST_SHIFT 16
+#define L1F_FLT6_PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_FLT6_PORT_SRC_SHIFT 0
+
+#define L1F_FLTCTRL 0x1634
+#define L1F_FLTCTRL_PSTHR_TIMER_MASK ASHFT24(0xFFUL)
+#define L1F_FLTCTRL_PSTHR_TIMER_SHIFT 24
+#define L1F_FLTCTRL_CHK_DSTPRT6 BIT(23)
+#define L1F_FLTCTRL_CHK_SRCPRT6 BIT(22)
+#define L1F_FLTCTRL_CHK_DSTIP6 BIT(21)
+#define L1F_FLTCTRL_CHK_SRCIP6 BIT(20)
+#define L1F_FLTCTRL_CHK_DSTPRT5 BIT(19)
+#define L1F_FLTCTRL_CHK_SRCPRT5 BIT(18)
+#define L1F_FLTCTRL_CHK_DSTIP5 BIT(17)
+#define L1F_FLTCTRL_CHK_SRCIP5 BIT(16)
+#define L1F_FLTCTRL_CHK_DSTPRT4 BIT(15)
+#define L1F_FLTCTRL_CHK_SRCPRT4 BIT(14)
+#define L1F_FLTCTRL_CHK_DSTIP4 BIT(13)
+#define L1F_FLTCTRL_CHK_SRCIP4 BIT(12)
+#define L1F_FLTCTRL_CHK_DSTPRT3 BIT(11)
+#define L1F_FLTCTRL_CHK_SRCPRT3 BIT(10)
+#define L1F_FLTCTRL_CHK_DSTIP3 BIT(9)
+#define L1F_FLTCTRL_CHK_SRCIP3 BIT(8)
+#define L1F_FLTCTRL_CHK_DSTPRT2 BIT(7)
+#define L1F_FLTCTRL_CHK_SRCPRT2 BIT(6)
+#define L1F_FLTCTRL_CHK_DSTIP2 BIT(5)
+#define L1F_FLTCTRL_CHK_SRCIP2 BIT(4)
+#define L1F_FLTCTRL_CHK_DSTPRT1 BIT(3)
+#define L1F_FLTCTRL_CHK_SRCPRT1 BIT(2)
+#define L1F_FLTCTRL_CHK_DSTIP1 BIT(1)
+#define L1F_FLTCTRL_CHK_SRCIP1 BIT(0)
+
+#define L1F_DROP_ALG1 0x1638
+#define L1F_DROP_ALG1_BWCHGVAL_MASK ASHFT12(0xFFFFFUL)
+#define L1F_DROP_ALG1_BWCHGVAL_SHIFT 12
+#define L1F_DROP_ALG1_BWCHGSCL_6 BIT(11) /* 0:3.125%, 1:6.25% */
+#define L1F_DROP_ALG1_ASUR_LWQ_EN BIT(10)
+#define L1F_DROP_ALG1_BWCHGVAL_EN BIT(9)
+#define L1F_DROP_ALG1_BWCHGSCL_EN BIT(8)
+#define L1F_DROP_ALG1_PSTHR_AUTO BIT(7) /* 0:manual, 1:auto */
+#define L1F_DROP_ALG1_MIN_PSTHR_MASK ASHFT5(3UL)
+#define L1F_DROP_ALG1_MIN_PSTHR_SHIFT 5
+#define L1F_DROP_ALG1_MIN_PSTHR_1_16 0
+#define L1F_DROP_ALG1_MIN_PSTHR_1_8 1
+#define L1F_DROP_ALG1_MIN_PSTHR_1_4 2
+#define L1F_DROP_ALG1_MIN_PSTHR_1_2 3
+#define L1F_DROP_ALG1_PSCL_MASK ASHFT3(3UL)
+#define L1F_DROP_ALG1_PSCL_SHIFT 3
+#define L1F_DROP_ALG1_PSCL_1_4 0
+#define L1F_DROP_ALG1_PSCL_1_8 1
+#define L1F_DROP_ALG1_PSCL_1_16 2
+#define L1F_DROP_ALG1_PSCL_1_32 3
+#define L1F_DROP_ALG1_TIMESLOT_MASK ASHFT0(7UL)
+#define L1F_DROP_ALG1_TIMESLOT_SHIFT 0
+#define L1F_DROP_ALG1_TIMESLOT_4MS 0
+#define L1F_DROP_ALG1_TIMESLOT_8MS 1
+#define L1F_DROP_ALG1_TIMESLOT_16MS 2
+#define L1F_DROP_ALG1_TIMESLOT_32MS 3
+#define L1F_DROP_ALG1_TIMESLOT_64MS 4
+#define L1F_DROP_ALG1_TIMESLOT_128MS 5
+#define L1F_DROP_ALG1_TIMESLOT_256MS 6
+#define L1F_DROP_ALG1_TIMESLOT_512MS 7
+
+#define L1F_DROP_ALG2 0x163C
+#define L1F_DROP_ALG2_SMPLTIME_MASK ASHFT24(0xFUL)
+#define L1F_DROP_ALG2_SMPLTIME_SHIFT 24
+#define L1F_DROP_ALG2_LWQBW_MASK ASHFT0(0xFFFFFFUL)
+#define L1F_DROP_ALG2_LWQBW_SHIFT 0
+
+#define L1F_SMB_TIMER 0x15C4
+
+#define L1F_TINT_TPD_THRSHLD 0x15C8
+
+#define L1F_TINT_TIMER 0x15CC
+
+#define L1F_CLK_GATE 0x1814
+#define L1F_CLK_GATE_125M_SW_DIS_CR BIT(8) /* B0 */
+#define L1F_CLK_GATE_125M_SW_AZ BIT(7) /* B0 */
+#define L1F_CLK_GATE_125M_SW_IDLE BIT(6) /* B0 */
+#define L1F_CLK_GATE_RXMAC BIT(5)
+#define L1F_CLK_GATE_TXMAC BIT(4)
+#define L1F_CLK_GATE_RXQ BIT(3)
+#define L1F_CLK_GATE_TXQ BIT(2)
+#define L1F_CLK_GATE_DMAR BIT(1)
+#define L1F_CLK_GATE_DMAW BIT(0)
+#define L1F_CLK_GATE_ALL_A0 (\
+ L1F_CLK_GATE_RXMAC |\
+ L1F_CLK_GATE_TXMAC |\
+ L1F_CLK_GATE_RXQ |\
+ L1F_CLK_GATE_TXQ |\
+ L1F_CLK_GATE_DMAR |\
+ L1F_CLK_GATE_DMAW)
+#define L1F_CLK_GATE_ALL_B0 (\
+ L1F_CLK_GATE_ALL_A0 |\
+ L1F_CLK_GATE_125M_SW_AZ |\
+ L1F_CLK_GATE_125M_SW_IDLE)
+
+#define L1F_BTROM_CFG 0x1800 /* pwon rst */
+
+#define L1F_DRV 0x1804
+/* bit definition is in lx_hwcomm.h */
+
+#define L1F_DRV_ERR1 0x1808 /* perst */
+#define L1F_DRV_ERR1_GEN BIT(31) /* general err */
+#define L1F_DRV_ERR1_NOR BIT(30) /* rrd.nor */
+#define L1F_DRV_ERR1_TRUNC BIT(29)
+#define L1F_DRV_ERR1_RES BIT(28)
+#define L1F_DRV_ERR1_INTFATAL BIT(27)
+#define L1F_DRV_ERR1_TXQPEND BIT(26)
+#define L1F_DRV_ERR1_DMAW BIT(25)
+#define L1F_DRV_ERR1_DMAR BIT(24)
+#define L1F_DRV_ERR1_PCIELNKDWN BIT(23)
+#define L1F_DRV_ERR1_PKTSIZE BIT(22)
+#define L1F_DRV_ERR1_FIFOFUL BIT(21)
+#define L1F_DRV_ERR1_RFDUR BIT(20)
+#define L1F_DRV_ERR1_RRDSI BIT(19)
+#define L1F_DRV_ERR1_UPDATE BIT(18)
+
+#define L1F_DRV_ERR2 0x180C
+
+#define L1F_DBG_ADDR 0x1900 /* DWORD reg */
+#define L1F_DBG_DATA 0x1904 /* DWORD reg */
+
+#define L1F_SYNC_IPV4_SA 0x1A00
+#define L1F_SYNC_IPV4_DA 0x1A04
+
+#define L1F_SYNC_V4PORT 0x1A08
+#define L1F_SYNC_V4PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_SYNC_V4PORT_DST_SHIFT 16
+#define L1F_SYNC_V4PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_SYNC_V4PORT_SRC_SHIFT 0
+
+#define L1F_SYNC_IPV6_SA0 0x1A0C
+#define L1F_SYNC_IPV6_SA1 0x1A10
+#define L1F_SYNC_IPV6_SA2 0x1A14
+#define L1F_SYNC_IPV6_SA3 0x1A18
+#define L1F_SYNC_IPV6_DA0 0x1A1C
+#define L1F_SYNC_IPV6_DA1 0x1A20
+#define L1F_SYNC_IPV6_DA2 0x1A24
+#define L1F_SYNC_IPV6_DA3 0x1A28
+
+#define L1F_SYNC_V6PORT 0x1A2C
+#define L1F_SYNC_V6PORT_DST_MASK ASHFT16(0xFFFFUL)
+#define L1F_SYNC_V6PORT_DST_SHIFT 16
+#define L1F_SYNC_V6PORT_SRC_MASK ASHFT0(0xFFFFUL)
+#define L1F_SYNC_V6PORT_SRC_SHIFT 0
+
+#define L1F_ARP_REMOTE_IPV4 0x1A30
+#define L1F_ARP_HOST_IPV4 0x1A34
+#define L1F_ARP_MAC0 0x1A38
+#define L1F_ARP_MAC1 0x1A3C
+
+#define L1F_1ST_REMOTE_IPV6_0 0x1A40
+#define L1F_1ST_REMOTE_IPV6_1 0x1A44
+#define L1F_1ST_REMOTE_IPV6_2 0x1A48
+#define L1F_1ST_REMOTE_IPV6_3 0x1A4C
+
+#define L1F_1ST_SN_IPV6_0 0x1A50
+#define L1F_1ST_SN_IPV6_1 0x1A54
+#define L1F_1ST_SN_IPV6_2 0x1A58
+#define L1F_1ST_SN_IPV6_3 0x1A5C
+
+#define L1F_1ST_TAR_IPV6_1_0 0x1A60
+#define L1F_1ST_TAR_IPV6_1_1 0x1A64
+#define L1F_1ST_TAR_IPV6_1_2 0x1A68
+#define L1F_1ST_TAR_IPV6_1_3 0x1A6C
+#define L1F_1ST_TAR_IPV6_2_0 0x1A70
+#define L1F_1ST_TAR_IPV6_2_1 0x1A74
+#define L1F_1ST_TAR_IPV6_2_2 0x1A78
+#define L1F_1ST_TAR_IPV6_2_3 0x1A7C
+
+#define L1F_2ND_REMOTE_IPV6_0 0x1A80
+#define L1F_2ND_REMOTE_IPV6_1 0x1A84
+#define L1F_2ND_REMOTE_IPV6_2 0x1A88
+#define L1F_2ND_REMOTE_IPV6_3 0x1A8C
+
+#define L1F_2ND_SN_IPV6_0 0x1A90
+#define L1F_2ND_SN_IPV6_1 0x1A94
+#define L1F_2ND_SN_IPV6_2 0x1A98
+#define L1F_2ND_SN_IPV6_3 0x1A9C
+
+#define L1F_2ND_TAR_IPV6_1_0 0x1AA0
+#define L1F_2ND_TAR_IPV6_1_1 0x1AA4
+#define L1F_2ND_TAR_IPV6_1_2 0x1AA8
+#define L1F_2ND_TAR_IPV6_1_3 0x1AAC
+#define L1F_2ND_TAR_IPV6_2_0 0x1AB0
+#define L1F_2ND_TAR_IPV6_2_1 0x1AB4
+#define L1F_2ND_TAR_IPV6_2_2 0x1AB8
+#define L1F_2ND_TAR_IPV6_2_3 0x1ABC
+
+#define L1F_1ST_NS_MAC0 0x1AC0
+#define L1F_1ST_NS_MAC1 0x1AC4
+
+#define L1F_2ND_NS_MAC0 0x1AC8
+#define L1F_2ND_NS_MAC1 0x1ACC
+
+#define L1F_PMOFLD 0x144C
+#define L1F_PMOFLD_ECMA_IGNR_FRG_SSSR BIT(11) /* B0 */
+#define L1F_PMOFLD_ARP_CNFLCT_WAKEUP BIT(10) /* B0 */
+#define L1F_PMOFLD_MULTI_SOLD BIT(9)
+#define L1F_PMOFLD_ICMP_XSUM BIT(8)
+#define L1F_PMOFLD_GARP_REPLY BIT(7)
+#define L1F_PMOFLD_SYNCV6_ANY BIT(6)
+#define L1F_PMOFLD_SYNCV4_ANY BIT(5)
+#define L1F_PMOFLD_BY_HW BIT(4)
+#define L1F_PMOFLD_NS_EN BIT(3)
+#define L1F_PMOFLD_ARP_EN BIT(2)
+#define L1F_PMOFLD_SYNCV6_EN BIT(1)
+#define L1F_PMOFLD_SYNCV4_EN BIT(0)
+
+#define L1F_RSS_KEY0 0x14B0
+#define L1F_RSS_KEY1 0x14B4
+#define L1F_RSS_KEY2 0x14B8
+#define L1F_RSS_KEY3 0x14BC
+#define L1F_RSS_KEY4 0x14C0
+#define L1F_RSS_KEY5 0x14C4
+#define L1F_RSS_KEY6 0x14C8
+#define L1F_RSS_KEY7 0x14CC
+#define L1F_RSS_KEY8 0x14D0
+#define L1F_RSS_KEY9 0x14D4
+
+#define L1F_RSS_IDT_TBL0 0x1B00
+#define L1F_RSS_IDT_TBL1 0x1B04
+#define L1F_RSS_IDT_TBL2 0x1B08
+#define L1F_RSS_IDT_TBL3 0x1B0C
+#define L1F_RSS_IDT_TBL4 0x1B10
+#define L1F_RSS_IDT_TBL5 0x1B14
+#define L1F_RSS_IDT_TBL6 0x1B18
+#define L1F_RSS_IDT_TBL7 0x1B1C
+#define L1F_RSS_IDT_TBL8 0x1B20
+#define L1F_RSS_IDT_TBL9 0x1B24
+#define L1F_RSS_IDT_TBL10 0x1B28
+#define L1F_RSS_IDT_TBL11 0x1B2C
+#define L1F_RSS_IDT_TBL12 0x1B30
+#define L1F_RSS_IDT_TBL13 0x1B34
+#define L1F_RSS_IDT_TBL14 0x1B38
+#define L1F_RSS_IDT_TBL15 0x1B3C
+#define L1F_RSS_IDT_TBL16 0x1B40
+#define L1F_RSS_IDT_TBL17 0x1B44
+#define L1F_RSS_IDT_TBL18 0x1B48
+#define L1F_RSS_IDT_TBL19 0x1B4C
+#define L1F_RSS_IDT_TBL20 0x1B50
+#define L1F_RSS_IDT_TBL21 0x1B54
+#define L1F_RSS_IDT_TBL22 0x1B58
+#define L1F_RSS_IDT_TBL23 0x1B5C
+#define L1F_RSS_IDT_TBL24 0x1B60
+#define L1F_RSS_IDT_TBL25 0x1B64
+#define L1F_RSS_IDT_TBL26 0x1B68
+#define L1F_RSS_IDT_TBL27 0x1B6C
+#define L1F_RSS_IDT_TBL28 0x1B70
+#define L1F_RSS_IDT_TBL29 0x1B74
+#define L1F_RSS_IDT_TBL30 0x1B78
+#define L1F_RSS_IDT_TBL31 0x1B7C
+
+#define L1F_RSS_HASH_VAL 0x15B0
+#define L1F_RSS_HASH_FLAG 0x15B4
+
+#define L1F_RSS_BASE_CPU_NUM 0x15B8
+
+#define L1F_MSI_MAP_TBL1 0x15D0
+#define L1F_MSI_MAP_TBL1_ALERT_MASK ASHFT28(0xFUL)
+#define L1F_MSI_MAP_TBL1_ALERT_SHIFT 28
+#define L1F_MSI_MAP_TBL1_TIMER_MASK ASHFT24(0xFUL)
+#define L1F_MSI_MAP_TBL1_TIMER_SHIFT 24
+#define L1F_MSI_MAP_TBL1_TXQ1_MASK ASHFT20(0xFUL)
+#define L1F_MSI_MAP_TBL1_TXQ1_SHIFT 20
+#define L1F_MSI_MAP_TBL1_TXQ0_MASK ASHFT16(0xFUL)
+#define L1F_MSI_MAP_TBL1_TXQ0_SHIFT 16
+#define L1F_MSI_MAP_TBL1_RXQ3_MASK ASHFT12(0xFUL)
+#define L1F_MSI_MAP_TBL1_RXQ3_SHIFT 12
+#define L1F_MSI_MAP_TBL1_RXQ2_MASK ASHFT8(0xFUL)
+#define L1F_MSI_MAP_TBL1_RXQ2_SHIFT 8
+#define L1F_MSI_MAP_TBL1_RXQ1_MASK ASHFT4(0xFUL)
+#define L1F_MSI_MAP_TBL1_RXQ1_SHIFT 4
+#define L1F_MSI_MAP_TBL1_RXQ0_MASK ASHFT0(0xFUL)
+#define L1F_MSI_MAP_TBL1_RXQ0_SHIFT 0
+
+#define L1F_MSI_MAP_TBL2 0x15D8
+#define L1F_MSI_MAP_TBL2_PHY_MASK ASHFT28(0xFUL)
+#define L1F_MSI_MAP_TBL2_PHY_SHIFT 28
+#define L1F_MSI_MAP_TBL2_SMB_MASK ASHFT24(0xFUL)
+#define L1F_MSI_MAP_TBL2_SMB_SHIFT 24
+#define L1F_MSI_MAP_TBL2_TXQ3_MASK ASHFT20(0xFUL)
+#define L1F_MSI_MAP_TBL2_TXQ3_SHIFT 20
+#define L1F_MSI_MAP_TBL2_TXQ2_MASK ASHFT16(0xFUL)
+#define L1F_MSI_MAP_TBL2_TXQ2_SHIFT 16
+#define L1F_MSI_MAP_TBL2_RXQ7_MASK ASHFT12(0xFUL)
+#define L1F_MSI_MAP_TBL2_RXQ7_SHIFT 12
+#define L1F_MSI_MAP_TBL2_RXQ6_MASK ASHFT8(0xFUL)
+#define L1F_MSI_MAP_TBL2_RXQ6_SHIFT 8
+#define L1F_MSI_MAP_TBL2_RXQ5_MASK ASHFT4(0xFUL)
+#define L1F_MSI_MAP_TBL2_RXQ5_SHIFT 4
+#define L1F_MSI_MAP_TBL2_RXQ4_MASK ASHFT0(0xFUL)
+#define L1F_MSI_MAP_TBL2_RXQ4_SHIFT 0
+
+#define L1F_MSI_ID_MAP 0x15D4
+#define L1F_MSI_ID_MAP_RXQ7 BIT(30)
+#define L1F_MSI_ID_MAP_RXQ6 BIT(29)
+#define L1F_MSI_ID_MAP_RXQ5 BIT(28)
+#define L1F_MSI_ID_MAP_RXQ4 BIT(27)
+#define L1F_MSI_ID_MAP_PCIELNKDW BIT(26) /* 0:common,1:timer */
+#define L1F_MSI_ID_MAP_PCIECERR BIT(25)
+#define L1F_MSI_ID_MAP_PCIENFERR BIT(24)
+#define L1F_MSI_ID_MAP_PCIEFERR BIT(23)
+#define L1F_MSI_ID_MAP_PCIEUR BIT(22)
+#define L1F_MSI_ID_MAP_MACTX BIT(21)
+#define L1F_MSI_ID_MAP_MACRX BIT(20)
+#define L1F_MSI_ID_MAP_RXQ3 BIT(19)
+#define L1F_MSI_ID_MAP_RXQ2 BIT(18)
+#define L1F_MSI_ID_MAP_RXQ1 BIT(17)
+#define L1F_MSI_ID_MAP_RXQ0 BIT(16)
+#define L1F_MSI_ID_MAP_TXQ0 BIT(15)
+#define L1F_MSI_ID_MAP_TXQTO BIT(14)
+#define L1F_MSI_ID_MAP_LPW BIT(13)
+#define L1F_MSI_ID_MAP_PHY BIT(12)
+#define L1F_MSI_ID_MAP_TXCREDIT BIT(11)
+#define L1F_MSI_ID_MAP_DMAW BIT(10)
+#define L1F_MSI_ID_MAP_DMAR BIT(9)
+#define L1F_MSI_ID_MAP_TXFUR BIT(8)
+#define L1F_MSI_ID_MAP_TXQ3 BIT(7)
+#define L1F_MSI_ID_MAP_TXQ2 BIT(6)
+#define L1F_MSI_ID_MAP_TXQ1 BIT(5)
+#define L1F_MSI_ID_MAP_RFDUR BIT(4)
+#define L1F_MSI_ID_MAP_RXFOV BIT(3)
+#define L1F_MSI_ID_MAP_MANU BIT(2)
+#define L1F_MSI_ID_MAP_TIMER BIT(1)
+#define L1F_MSI_ID_MAP_SMB BIT(0)
+
+#define L1F_MSI_RETRANS_TIMER 0x1920
+#define L1F_MSI_MASK_SEL_LINE BIT(16) /* 1:line, 0:standard */
+#define L1F_MSI_RETRANS_TM_MASK ASHFT0(0xFFFFUL)
+#define L1F_MSI_RETRANS_TM_SHIFT 0
+
+#define L1F_CR_DMA_CTRL 0x1930
+#define L1F_CR_DMA_CTRL_PRI BIT(22)
+#define L1F_CR_DMA_CTRL_RRDRXD_JOINT BIT(21)
+#define L1F_CR_DMA_CTRL_BWCREDIT_MASK ASHFT19(0x3UL)
+#define L1F_CR_DMA_CTRL_BWCREDIT_SHIFT 19
+#define L1F_CR_DMA_CTRL_BWCREDIT_2KB 0
+#define L1F_CR_DMA_CTRL_BWCREDIT_1KB 1
+#define L1F_CR_DMA_CTRL_BWCREDIT_4KB 2
+#define L1F_CR_DMA_CTRL_BWCREDIT_8KB 3
+#define L1F_CR_DMA_CTRL_BW_EN BIT(18)
+#define L1F_CR_DMA_CTRL_BW_RATIO_MASK ASHFT16(0x3UL)
+#define L1F_CR_DMA_CTRL_BW_RATIO_1_2 0
+#define L1F_CR_DMA_CTRL_BW_RATIO_1_4 1
+#define L1F_CR_DMA_CTRL_BW_RATIO_1_8 2
+#define L1F_CR_DMA_CTRL_BW_RATIO_2_1 3
+#define L1F_CR_DMA_CTRL_SOFT_RST BIT(11)
+#define L1F_CR_DMA_CTRL_TXEARLY_EN BIT(10)
+#define L1F_CR_DMA_CTRL_RXEARLY_EN BIT(9)
+#define L1F_CR_DMA_CTRL_WEARLY_EN BIT(8)
+#define L1F_CR_DMA_CTRL_RXTH_MASK ASHFT4(0xFUL)
+#define L1F_CR_DMA_CTRL_WTH_MASK ASHFT0(0xFUL)
+
+
+#define L1F_EFUSE_BIST 0x1934
+#define L1F_EFUSE_BIST_COL_MASK ASHFT24(0x3FUL)
+#define L1F_EFUSE_BIST_COL_SHIFT 24
+#define L1F_EFUSE_BIST_ROW_MASK ASHFT12(0x7FUL)
+#define L1F_EFUSE_BIST_ROW_SHIFT 12
+#define L1F_EFUSE_BIST_STEP_MASK ASHFT8(0xFUL)
+#define L1F_EFUSE_BIST_STEP_SHIFT 8
+#define L1F_EFUSE_BIST_PAT_MASK ASHFT4(0x7UL)
+#define L1F_EFUSE_BIST_PAT_SHIFT 4
+#define L1F_EFUSE_BIST_CRITICAL BIT(3)
+#define L1F_EFUSE_BIST_FIXED BIT(2)
+#define L1F_EFUSE_BIST_FAIL BIT(1)
+#define L1F_EFUSE_BIST_NOW BIT(0)
+
+/* TX QoS */
+#define L1F_WRR 0x1938
+#define L1F_WRR_PRI_MASK ASHFT29(3UL)
+#define L1F_WRR_PRI_SHIFT 29
+#define L1F_WRR_PRI_RESTRICT_ALL 0
+#define L1F_WRR_PRI_RESTRICT_HI 1
+#define L1F_WRR_PRI_RESTRICT_HI2 2
+#define L1F_WRR_PRI_RESTRICT_NONE 3
+#define L1F_WRR_PRI3_MASK ASHFT24(0x1FUL)
+#define L1F_WRR_PRI3_SHIFT 24
+#define L1F_WRR_PRI2_MASK ASHFT16(0x1FUL)
+#define L1F_WRR_PRI2_SHIFT 16
+#define L1F_WRR_PRI1_MASK ASHFT8(0x1FUL)
+#define L1F_WRR_PRI1_SHIFT 8
+#define L1F_WRR_PRI0_MASK ASHFT0(0x1FUL)
+#define L1F_WRR_PRI0_SHIFT 0
+
+#define L1F_HQTPD 0x193C
+#define L1F_HQTPD_BURST_EN BIT(31)
+#define L1F_HQTPD_Q3_NUMPREF_MASK ASHFT8(0xFUL)
+#define L1F_HQTPD_Q3_NUMPREF_SHIFT 8
+#define L1F_HQTPD_Q2_NUMPREF_MASK ASHFT4(0xFUL)
+#define L1F_HQTPD_Q2_NUMPREF_SHIFT 4
+#define L1F_HQTPD_Q1_NUMPREF_MASK ASHFT0(0xFUL)
+#define L1F_HQTPD_Q1_NUMPREF_SHIFT 0
+
+#define L1F_CPUMAP1 0x19A0
+#define L1F_CPUMAP1_VCT7_MASK ASHFT28(0xFUL)
+#define L1F_CPUMAP1_VCT7_SHIFT 28
+#define L1F_CPUMAP1_VCT6_MASK ASHFT24(0xFUL)
+#define L1F_CPUMAP1_VCT6_SHIFT 24
+#define L1F_CPUMAP1_VCT5_MASK ASHFT20(0xFUL)
+#define L1F_CPUMAP1_VCT5_SHIFT 20
+#define L1F_CPUMAP1_VCT4_MASK ASHFT16(0xFUL)
+#define L1F_CPUMAP1_VCT4_SHIFT 16
+#define L1F_CPUMAP1_VCT3_MASK ASHFT12(0xFUL)
+#define L1F_CPUMAP1_VCT3_SHIFT 12
+#define L1F_CPUMAP1_VCT2_MASK ASHFT8(0xFUL)
+#define L1F_CPUMAP1_VCT2_SHIFT 8
+#define L1F_CPUMAP1_VCT1_MASK ASHFT4(0xFUL)
+#define L1F_CPUMAP1_VCT1_SHIFT 4
+#define L1F_CPUMAP1_VCT0_MASK ASHFT0(0xFUL)
+#define L1F_CPUMAP1_VCT0_SHIFT 0
+
+#define L1F_CPUMAP2 0x19A4
+#define L1F_CPUMAP2_VCT15_MASK ASHFT28(0xFUL)
+#define L1F_CPUMAP2_VCT15_SHIFT 28
+#define L1F_CPUMAP2_VCT14_MASK ASHFT24(0xFUL)
+#define L1F_CPUMAP2_VCT14_SHIFT 24
+#define L1F_CPUMAP2_VCT13_MASK ASHFT20(0xFUL)
+#define L1F_CPUMAP2_VCT13_SHIFT 20
+#define L1F_CPUMAP2_VCT12_MASK ASHFT16(0xFUL)
+#define L1F_CPUMAP2_VCT12_SHIFT 16
+#define L1F_CPUMAP2_VCT11_MASK ASHFT12(0xFUL)
+#define L1F_CPUMAP2_VCT11_SHIFT 12
+#define L1F_CPUMAP2_VCT10_MASK ASHFT8(0xFUL)
+#define L1F_CPUMAP2_VCT10_SHIFT 8
+#define L1F_CPUMAP2_VCT9_MASK ASHFT4(0xFUL)
+#define L1F_CPUMAP2_VCT9_SHIFT 4
+#define L1F_CPUMAP2_VCT8_MASK ASHFT0(0xFUL)
+#define L1F_CPUMAP2_VCT8_SHIFT 0
+
+#define L1F_MISC 0x19C0
+#define L1F_MISC_MODU BIT(31) /* 0:vector,1:cpu */
+#define L1F_MISC_OVERCUR BIT(29)
+#define L1F_MISC_PSWR_EN BIT(28)
+#define L1F_MISC_PSW_CTRL_MASK ASHFT24(0xFUL)
+#define L1F_MISC_PSW_CTRL_SHIFT 24
+#define L1F_MISC_PSW_OCP_MASK ASHFT21(7UL)
+#define L1F_MISC_PSW_OCP_SHIFT 21
+#define L1F_MISC_V18_HIGH BIT(20)
+#define L1F_MISC_LPO_CTRL_MASK ASHFT16(0xFUL)
+#define L1F_MISC_LPO_CTRL_SHIFT 16
+#define L1F_MISC_ISO_EN BIT(12)
+#define L1F_MISC_XSTANA_ALWAYS_ON BIT(11)
+#define L1F_MISC_SYS25M_SEL_ADAPTIVE BIT(10)
+#define L1F_MISC_SPEED_SIM BIT(9)
+#define L1F_MISC_S1_LWP_EN BIT(8)
+#define L1F_MISC_MACLPW BIT(7) /* pcie/mac do power saving
+ * while phy is in low-power state */
+#define L1F_MISC_125M_SW BIT(6)
+#define L1F_MISC_INTNLOSC_OFF_EN BIT(5)
+#define L1F_MISC_EXTN25M_SEL BIT(4) /* 0:chipset, 1:crystal */
+#define L1F_MISC_INTNLOSC_OPEN BIT(3)
+#define L1F_MISC_SMBUS_AT_LED BIT(2)
+#define L1F_MISC_PPS_AT_LED_MASK ASHFT0(3UL)
+#define L1F_MISC_PPS_AT_LED_SHIFT 0
+#define L1F_MISC_PPS_AT_LED_ACT 1
+#define L1F_MISC_PPS_AT_LED_10_100 2
+#define L1F_MISC_PPS_AT_LED_1000 3
+
+#define L1F_MISC1 0x19C4
+#define L1F_MSC1_BLK_CRASPM_REQ BIT(15)
+
+#define L1F_MISC3 0x19CC
+#define L1F_MISC3_25M_BY_SW BIT(1) /* 1:Software control 25M */
+#define L1F_MISC3_25M_NOTO_INTNL BIT(0) /* 0:25M switch to intnl OSC */
+
+/***************************** IO mapping registers ***************************/
+#define L1F_IO_ADDR 0x00 /* DWORD reg */
+#define L1F_IO_DATA 0x04 /* DWORD reg */
+#define L1F_IO_MASTER 0x08 /* DWORD same as reg0x1400 */
+#define L1F_IO_MAC_CTRL 0x0C /* DWORD same as reg0x1480*/
+#define L1F_IO_ISR 0x10 /* DWORD same as reg0x1600 */
+#define L1F_IO_IMR 0x14 /* DWORD same as reg0x1604 */
+#define L1F_IO_TPD_PRI1_PIDX 0x18 /* WORD same as reg0x15F0 */
+#define L1F_IO_TPD_PRI0_PIDX 0x1A /* WORD same as reg0x15F2 */
+#define L1F_IO_TPD_PRI1_CIDX 0x1C /* WORD same as reg0x15F4 */
+#define L1F_IO_TPD_PRI0_CIDX 0x1E /* WORD same as reg0x15F6 */
+#define L1F_IO_RFD_PIDX 0x20 /* WORD same as reg0x15E0 */
+#define L1F_IO_RFD_CIDX 0x30 /* WORD same as reg0x15F8 */
+#define L1F_IO_MDIO 0x38 /* WORD same as reg0x1414 */
+#define L1F_IO_PHY_CTRL 0x3C /* DWORD same as reg0x140C */
+
+
+/********************* PHY regs definition ***************************/
+
+/* Autoneg Advertisement Register */
+#define L1F_ADVERTISE_SPEED_MASK 0x01E0
+#define L1F_ADVERTISE_DEFAULT_CAP 0x1DE0 /* differs from L1C */
+
+/* 1000BASE-T Control Register (0x9) */
+#define L1F_GIGA_CR_1000T_HD_CAPS 0x0100
+#define L1F_GIGA_CR_1000T_FD_CAPS 0x0200
+#define L1F_GIGA_CR_1000T_REPEATER_DTE 0x0400
+
+#define L1F_GIGA_CR_1000T_MS_VALUE 0x0800
+
+#define L1F_GIGA_CR_1000T_MS_ENABLE 0x1000
+
+#define L1F_GIGA_CR_1000T_TEST_MODE_NORMAL 0x0000
+#define L1F_GIGA_CR_1000T_TEST_MODE_1 0x2000
+#define L1F_GIGA_CR_1000T_TEST_MODE_2 0x4000
+#define L1F_GIGA_CR_1000T_TEST_MODE_3 0x6000
+#define L1F_GIGA_CR_1000T_TEST_MODE_4 0x8000
+#define L1F_GIGA_CR_1000T_SPEED_MASK 0x0300
+#define L1F_GIGA_CR_1000T_DEFAULT_CAP 0x0300
+
+/* 1000BASE-T Status Register */
+#define L1F_MII_GIGA_SR 0x0A
+
+/* PHY Specific Status Register */
+#define L1F_MII_GIGA_PSSR 0x11
+#define L1F_GIGA_PSSR_FC_RXEN 0x0004
+#define L1F_GIGA_PSSR_FC_TXEN 0x0008
+#define L1F_GIGA_PSSR_SPD_DPLX_RESOLVED 0x0800
+#define L1F_GIGA_PSSR_DPLX 0x2000
+#define L1F_GIGA_PSSR_SPEED 0xC000
+#define L1F_GIGA_PSSR_10MBS 0x0000
+#define L1F_GIGA_PSSR_100MBS 0x4000
+#define L1F_GIGA_PSSR_1000MBS 0x8000
+
+/* PHY Interrupt Enable Register */
+#define L1F_MII_IER 0x12
+#define L1F_IER_LINK_UP 0x0400
+#define L1F_IER_LINK_DOWN 0x0800
+
+/* PHY Interrupt Status Register */
+#define L1F_MII_ISR 0x13
+#define L1F_ISR_LINK_UP 0x0400
+#define L1F_ISR_LINK_DOWN 0x0800
+
+/* Cable-Detect-Test Control Register */
+#define L1F_MII_CDTC 0x16
+#define L1F_CDTC_EN 1 /* self-clearing */
+#define L1F_CDTC_PAIR_MASK ASHFT8(3U)
+#define L1F_CDTC_PAIR_SHIFT 8
+
+
+/* Cable-Detect-Test Status Register */
+#define L1F_MII_CDTS 0x1C
+#define L1F_CDTS_STATUS_MASK ASHFT8(3U)
+#define L1F_CDTS_STATUS_SHIFT 8
+#define L1F_CDTS_STATUS_NORMAL 0
+#define L1F_CDTS_STATUS_SHORT 1
+#define L1F_CDTS_STATUS_OPEN 2
+#define L1F_CDTS_STATUS_INVALID 3
+
+#define L1F_MII_DBG_ADDR 0x1D
+#define L1F_MII_DBG_DATA 0x1E
+
+/***************************** debug port *************************************/
+
+#define L1F_MIIDBG_ANACTRL 0x00
+#define L1F_ANACTRL_CLK125M_DELAY_EN BIT(15)
+#define L1F_ANACTRL_VCO_FAST BIT(14)
+#define L1F_ANACTRL_VCO_SLOW BIT(13)
+#define L1F_ANACTRL_AFE_MODE_EN BIT(12)
+#define L1F_ANACTRL_LCKDET_PHY BIT(11)
+#define L1F_ANACTRL_LCKDET_EN BIT(10)
+#define L1F_ANACTRL_OEN_125M BIT(9)
+#define L1F_ANACTRL_HBIAS_EN BIT(8)
+#define L1F_ANACTRL_HB_EN BIT(7)
+#define L1F_ANACTRL_SEL_HSP BIT(6)
+#define L1F_ANACTRL_CLASSA_EN BIT(5)
+#define L1F_ANACTRL_MANUSWON_SWR_MASK ASHFT2(3U)
+#define L1F_ANACTRL_MANUSWON_SWR_SHIFT 2
+#define L1F_ANACTRL_MANUSWON_SWR_2V 0
+#define L1F_ANACTRL_MANUSWON_SWR_1P9V 1
+#define L1F_ANACTRL_MANUSWON_SWR_1P8V 2
+#define L1F_ANACTRL_MANUSWON_SWR_1P7V 3
+#define L1F_ANACTRL_MANUSWON_BW3_4M BIT(1)
+#define L1F_ANACTRL_RESTART_CAL BIT(0)
+#define L1F_ANACTRL_DEF 0x02EF
+
+
+#define L1F_MIIDBG_SYSMODCTRL 0x04
+#define L1F_SYSMODCTRL_IECHOADJ_PFMH_PHY BIT(15)
+#define L1F_SYSMODCTRL_IECHOADJ_BIASGEN BIT(14)
+#define L1F_SYSMODCTRL_IECHOADJ_PFML_PHY BIT(13)
+#define L1F_SYSMODCTRL_IECHOADJ_PS_MASK ASHFT10(3U)
+#define L1F_SYSMODCTRL_IECHOADJ_PS_SHIFT 10
+#define L1F_SYSMODCTRL_IECHOADJ_PS_40 3
+#define L1F_SYSMODCTRL_IECHOADJ_PS_20 2
+#define L1F_SYSMODCTRL_IECHOADJ_PS_0 1
+#define L1F_SYSMODCTRL_IECHOADJ_10BT_100MV BIT(6) /* 1:100mv, 0:200mv */
+#define L1F_SYSMODCTRL_IECHOADJ_HLFAP_MASK ASHFT4(3U)
+#define L1F_SYSMODCTRL_IECHOADJ_HLFAP_SHIFT 4
+#define L1F_SYSMODCTRL_IECHOADJ_VDFULBW BIT(3)
+#define L1F_SYSMODCTRL_IECHOADJ_VDBIASHLF BIT(2)
+#define L1F_SYSMODCTRL_IECHOADJ_VDAMPHLF BIT(1)
+#define L1F_SYSMODCTRL_IECHOADJ_VDLANSW BIT(0)
+#define L1F_SYSMODCTRL_IECHOADJ_DEF 0xBB8B /* enable half bias */
+
+
+#define L1F_MIIDBG_SRDSYSMOD 0x05
+#define L1F_SRDSYSMOD_LCKDET_EN BIT(13)
+#define L1F_SRDSYSMOD_PLL_EN BIT(11)
+#define L1F_SRDSYSMOD_SEL_HSP BIT(10)
+#define L1F_SRDSYSMOD_HLFTXDR BIT(9)
+#define L1F_SRDSYSMOD_TXCLK_DELAY_EN BIT(8)
+#define L1F_SRDSYSMOD_TXELECIDLE BIT(7)
+#define L1F_SRDSYSMOD_DEEMP_EN BIT(6)
+#define L1F_SRDSYSMOD_MS_PAD BIT(2)
+#define L1F_SRDSYSMOD_CDR_ADC_VLTG BIT(1)
+#define L1F_SRDSYSMOD_CDR_DAC_1MA BIT(0)
+#define L1F_SRDSYSMOD_DEF 0x2C46
+
+
+#define L1F_MIIDBG_HIBNEG 0x0B
+#define L1F_HIBNEG_PSHIB_EN BIT(15)
+#define L1F_HIBNEG_WAKE_BOTH BIT(14)
+#define L1F_HIBNEG_ONOFF_ANACHG_SUDEN BIT(13)
+#define L1F_HIBNEG_HIB_PULSE BIT(12)
+#define L1F_HIBNEG_GATE_25M_EN BIT(11)
+#define L1F_HIBNEG_RST_80U BIT(10)
+#define L1F_HIBNEG_RST_TIMER_MASK ASHFT8(3U)
+#define L1F_HIBNEG_RST_TIMER_SHIFT 8
+#define L1F_HIBNEG_GTX_CLK_DELAY_MASK ASHFT5(3U)
+#define L1F_HIBNEG_GTX_CLK_DELAY_SHIFT 5
+#define L1F_HIBNEG_BYPSS_BRKTIMER BIT(4)
+#define L1F_HIBNEG_DEF 0xBC40
+
+#define L1F_MIIDBG_TST10BTCFG 0x12
+#define L1F_TST10BTCFG_INTV_TIMER_MASK ASHFT14(3U)
+#define L1F_TST10BTCFG_INTV_TIMER_SHIFT 14
+#define L1F_TST10BTCFG_TRIGER_TIMER_MASK ASHFT12(3U)
+#define L1F_TST10BTCFG_TRIGER_TIMER_SHIFT 12
+#define L1F_TST10BTCFG_DIV_MAN_MLT3_EN BIT(11)
+#define L1F_TST10BTCFG_OFF_DAC_IDLE BIT(10)
+#define L1F_TST10BTCFG_LPBK_DEEP BIT(2) /* 1:deep,0:shallow */
+#define L1F_TST10BTCFG_DEF 0x4C04
+
+#define L1F_MIIDBG_AZ_ANADECT 0x15
+#define L1F_AZ_ANADECT_10BTRX_TH BIT(15)
+#define L1F_AZ_ANADECT_BOTH_01CHNL BIT(14)
+#define L1F_AZ_ANADECT_INTV_MASK ASHFT8(0x3FU)
+#define L1F_AZ_ANADECT_INTV_SHIFT 8
+#define L1F_AZ_ANADECT_THRESH_MASK ASHFT4(0xFU)
+#define L1F_AZ_ANADECT_THRESH_SHIFT 4
+#define L1F_AZ_ANADECT_CHNL_MASK ASHFT0(0xFU)
+#define L1F_AZ_ANADECT_CHNL_SHIFT 0
+#define L1F_AZ_ANADECT_DEF 0x3220
+#define L1F_AZ_ANADECT_LONG 0x3210
+
+#define L1F_MIIDBG_AGC 0x23
+#define L1F_AGC_2_VGA_MASK ASHFT8(0x3FU)
+#define L1F_AGC_2_VGA_SHIFT 8
+#define L1F_AGC_LONG1G_LIMT 40
+#define L1F_AGC_LONG100M_LIMT 44
+
+#define L1F_MIIDBG_LEGCYPS 0x29
+#define L1F_LEGCYPS_EN BIT(15)
+#define L1F_LEGCYPS_DAC_AMP1000_MASK ASHFT12(7U)
+#define L1F_LEGCYPS_DAC_AMP1000_SHIFT 12
+#define L1F_LEGCYPS_DAC_AMP100_MASK ASHFT9(7U)
+#define L1F_LEGCYPS_DAC_AMP100_SHIFT 9
+#define L1F_LEGCYPS_DAC_AMP10_MASK ASHFT6(7U)
+#define L1F_LEGCYPS_DAC_AMP10_SHIFT 6
+#define L1F_LEGCYPS_UNPLUG_TIMER_MASK ASHFT3(7U)
+#define L1F_LEGCYPS_UNPLUG_TIMER_SHIFT 3
+#define L1F_LEGCYPS_UNPLUG_DECT_EN BIT(2)
+#define L1F_LEGCYPS_ECNC_PS_EN BIT(0)
+#define L1F_LEGCYPS_DEF 0x129D
+
+#define L1F_MIIDBG_TST100BTCFG 0x36
+#define L1F_TST100BTCFG_NORMAL_BW_EN BIT(15)
+#define L1F_TST100BTCFG_BADLNK_BYPASS BIT(14)
+#define L1F_TST100BTCFG_SHORTCABL_TH_MASK ASHFT8(0x3FU)
+#define L1F_TST100BTCFG_SHORTCABL_TH_SHIFT 8
+#define L1F_TST100BTCFG_LITCH_EN BIT(7)
+#define L1F_TST100BTCFG_VLT_SW BIT(6)
+#define L1F_TST100BTCFG_LONGCABL_TH_MASK ASHFT0(0x3FU)
+#define L1F_TST100BTCFG_LONGCABL_TH_SHIFT 0
+#define L1F_TST100BTCFG_DEF 0xE12C
+
+#define L1F_MIIDBG_GREENCFG 0x3B
+#define L1F_GREENCFG_MSTPS_MSETH2_MASK ASHFT8(0xFFU)
+#define L1F_GREENCFG_MSTPS_MSETH2_SHIFT 8
+#define L1F_GREENCFG_MSTPS_MSETH1_MASK ASHFT0(0xFFU)
+#define L1F_GREENCFG_MSTPS_MSETH1_SHIFT 0
+#define L1F_GREENCFG_DEF 0x7078
+
+#define L1F_MIIDBG_GREENCFG2 0x3D
+#define L1F_GREENCFG2_GATE_DFSE_EN BIT(7)
+
+
+/***************************** extension **************************************/
+
+/******* dev 3 *********/
+#define L1F_MIIEXT_PCS 3
+
+#define L1F_MIIEXT_CLDCTRL6 0x8006
+#define L1F_CLDCTRL6_CAB_LEN_MASK ASHFT0(0xFFU)
+#define L1F_CLDCTRL6_CAB_LEN_SHIFT 0
+#define L1F_CLDCTRL6_CAB_LEN_SHORT1G 116
+#define L1F_CLDCTRL6_CAB_LEN_SHORT100M 152
+
+#define L1F_MIIEXT_CLDCTRL7 0x8007
+#define L1F_CLDCTRL7_VDHLF_BIAS_TH_MASK ASHFT9(0x7FU)
+#define L1F_CLDCTRL7_VDHLF_BIAS_TH_SHIFT 9
+#define L1F_CLDCTRL7_AFE_AZ_MASK ASHFT4(0x1FU)
+#define L1F_CLDCTRL7_AFE_AZ_SHIFT 4
+#define L1F_CLDCTRL7_SIDE_PEAK_TH_MASK ASHFT0(0xFU)
+#define L1F_CLDCTRL7_SIDE_PEAK_TH_SHIFT 0
+#define L1F_CLDCTRL7_DEF 0x6BF6 /* ???? */
+
+#define L1F_MIIEXT_AZCTRL 0x8008
+#define L1F_AZCTRL_SHORT_TH_MASK ASHFT8(0xFFU)
+#define L1F_AZCTRL_SHORT_TH_SHIFT 8
+#define L1F_AZCTRL_LONG_TH_MASK ASHFT0(0xFFU)
+#define L1F_AZCTRL_LONG_TH_SHIFT 0
+#define L1F_AZCTRL_DEF 0x1629
+
+#define L1F_MIIEXT_AZCTRL2 0x8009
+#define L1F_AZCTRL2_WAKETRNING_MASK ASHFT8(0xFFU)
+#define L1F_AZCTRL2_WAKETRNING_SHIFT 8
+#define L1F_AZCTRL2_QUIET_TIMER_MASH ASHFT6(3U)
+#define L1F_AZCTRL2_QUIET_TIMER_SHIFT 6
+#define L1F_AZCTRL2_PHAS_JMP2 BIT(4)
+#define L1F_AZCTRL2_CLKTRCV_125MD16 BIT(3)
+#define L1F_AZCTRL2_GATE1000_EN BIT(2)
+#define L1F_AZCTRL2_AVRG_FREQ BIT(1)
+#define L1F_AZCTRL2_PHAS_JMP4 BIT(0)
+#define L1F_AZCTRL2_DEF 0x32C0
+
+#define L1F_MIIEXT_AZCTRL6 0x800D
+
+#define L1F_MIIEXT_VDRVBIAS 0x8062
+#define L1F_VDRVBIAS_SEL_MASK ASHFT0(0x3U)
+#define L1F_VDRVBIAS_SEL_SHIFT 0
+#define L1F_VDRVBIAS_DEF 0x3
+
+/********* dev 7 **********/
+#define L1F_MIIEXT_ANEG 7
+
+#define L1F_MIIEXT_LOCAL_EEEADV 0x3C
+#define L1F_LOCAL_EEEADV_1000BT BIT(2)
+#define L1F_LOCAL_EEEADV_100BT BIT(1)
+
+#define L1F_MIIEXT_REMOTE_EEEADV 0x3D
+#define L1F_REMOTE_EEEADV_1000BT BIT(2)
+#define L1F_REMOTE_EEEADV_100BT BIT(1)
+
+#define L1F_MIIEXT_EEE_ANEG 0x8000
+#define L1F_EEE_ANEG_1000M BIT(2)
+#define L1F_EEE_ANEG_100M BIT(1)
+
+#define L1F_MIIEXT_AFE 0x801A
+#define L1F_AFE_10BT_100M_TH BIT(6)
+
+
+#define L1F_MIIEXT_NLP34 0x8025
+#define L1F_MIIEXT_NLP34_DEF 0x1010 /* for 160m */
+
+#define L1F_MIIEXT_NLP56 0x8026
+#define L1F_MIIEXT_NLP56_DEF 0x1010 /* for 160m */
+
+#define L1F_MIIEXT_NLP78 0x8027
+#define L1F_MIIEXT_NLP78_160M_DEF 0x8D05 /* for 160m */
+#define L1F_MIIEXT_NLP78_120M_DEF 0x8A05 /* for 120m */
+
+
+
+/******************************************************************************/
+
+/* functions */
+
+
+/* get permanent mac address
+ * return
+ * 0: success
+ * non-0: fail
+ */
+u16 l1f_get_perm_macaddr(struct alx_hw *hw, u8 *addr);
+
+
+/* reset mac & dma
+ * return
+ * 0: success
+ * non-0:fail
+ */
+u16 l1f_reset_mac(struct alx_hw *hw);
+
+/* reset phy
+ * return
+ * 0: success
+ * non-0:fail
+ */
+u16 l1f_reset_phy(struct alx_hw *hw, bool pws_en, bool az_en, bool ptp_en);
+
+
+/* reset pcie
+ * just reset PCIe-related registers (pci command, clk, aspm...)
+ * return
+ * 0:success
+ * non-0:fail
+ */
+u16 l1f_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en);
+
+
+/* disable/enable MAC/RXQ/TXQ
+ * en
+ * true: enable
+ * false: disable
+ * return
+ * 0: success
+ * non-0: fail
+ */
+u16 l1f_enable_mac(struct alx_hw *hw, bool en, u16 en_ctrl);
+
+
+/* enable/disable ASPM support
+ * this changes settings for the PHY/MAC/PCIe
+ */
+u16 l1f_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en, u8 lnk_stat);
+
+
+/* initialize phy for speed / flow control
+ * lnk_cap
+ * in autoneg mode: the link capability advertised to the peer
+ * in forced mode: the forced speed/duplex
+ */
+u16 l1f_init_phy_spdfc(struct alx_hw *hw, bool auto_neg,
+ u8 lnk_cap, bool fc_en);
+
+/* apply post-link settings on the phy when a link up/down event occurs
+ */
+u16 l1f_post_phy_link(struct alx_hw *hw, bool linkon, u8 wire_spd);
+
+
+/* do power saving settings before entering suspend mode
+ * NOTE:
+ * 1. phy link must be established before calling this function
+ * 2. wol options (pattern, magic, link, etc.) must be configured
+ *    before calling it.
+ */
+u16 l1f_powersaving(struct alx_hw *hw, u8 wire_spd, bool wol_en,
+ bool mahw_en, bool macrx_en, bool pws_en);
+
+/* read phy register */
+u16 l1f_read_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast, u16 reg,
+ u16 *data);
+
+/* write phy register */
+u16 l1f_write_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast, u16 reg,
+ u16 data);
+
+/* phy debug port */
+u16 l1f_read_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 *data);
+u16 l1f_write_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 data);
+
+
+/* check the configuration of the PHY */
+u16 l1f_get_phy_config(struct alx_hw *hw);
+
+/*
+ * basic mac initialization; most of the high-level features are
+ * not initialized here.
+ * MAC/PHY should be reset before calling this function
+ */
+u16 l1f_init_mac(struct alx_hw *hw, u8 *addr, u32 txmem_hi,
+ u32 *tx_mem_lo, u8 tx_qnum, u16 txring_sz,
+ u32 rxmem_hi, u32 rfdmem_lo, u32 rrdmem_lo,
+ u16 rxring_sz, u16 rxbuf_sz, u16 smb_timer,
+ u16 mtu, u16 int_mod, bool hash_legacy);
+
+
+
+#endif /* L1F_HW_H_ */
+
diff --git a/drivers/net/ethernet/atheros/alx/alx.h b/drivers/net/ethernet/atheros/alx/alx.h
new file mode 100644
index 0000000..6482bee
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx.h
@@ -0,0 +1,670 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_H_
+#define _ALX_H_
+
+#include <linux/types.h>
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/netdevice.h>
+#include <linux/vmalloc.h>
+#include <linux/string.h>
+#include <linux/in.h>
+#include <linux/interrupt.h>
+#include <linux/ip.h>
+#include <linux/tcp.h>
+#include <linux/sctp.h>
+#include <linux/pkt_sched.h>
+#include <linux/ipv6.h>
+#include <linux/slab.h>
+#include <net/checksum.h>
+#include <net/ip6_checksum.h>
+#include <linux/ethtool.h>
+#include <linux/if_vlan.h>
+#include <linux/mii.h>
+#include <linux/cpumask.h>
+#include <linux/aer.h>
+
+#include "alx_sw.h"
+
+/*
+ * Definition to enable some features
+ */
+#undef CONFIG_ALX_MSIX
+#undef CONFIG_ALX_MSI
+#undef CONFIG_ALX_MTQ
+#undef CONFIG_ALX_MRQ
+#undef CONFIG_ALX_RSS
+/* #define CONFIG_ALX_MSIX */
+#define CONFIG_ALX_MSI
+#define CONFIG_ALX_MTQ
+#define CONFIG_ALX_MRQ
+#ifdef CONFIG_ALX_MRQ
+#define CONFIG_ALX_RSS
+#endif
+
+#define ALX_MSG_DEFAULT 0
+
+/* Logging functions and macros */
+#define alx_err(adpt, fmt, ...) \
+ netdev_err(adpt->netdev, fmt, ##__VA_ARGS__)
+
+#define ALX_VLAN_TO_TAG(_vlan, _tag) \
+ do { \
+ _tag = ((((_vlan) >> 8) & 0xFF) | (((_vlan) & 0xFF) << 8)); \
+ } while (0)
+
+#define ALX_TAG_TO_VLAN(_tag, _vlan) \
+ do { \
+ _vlan = ((((_tag) >> 8) & 0xFF) | (((_tag) & 0xFF) << 8)); \
+ } while (0)
+
+/* Coalescing Message Block */
+struct coals_msg_block {
+ int test;
+};
+
+
+#define BAR_0 0
+
+#define ALX_DEF_RX_BUF_SIZE 1536
+#define ALX_MAX_JUMBO_PKT_SIZE (9*1024)
+#define ALX_MAX_TSO_PKT_SIZE (7*1024)
+
+#define ALX_MAX_ETH_FRAME_SIZE ALX_MAX_JUMBO_PKT_SIZE
+#define ALX_MIN_ETH_FRAME_SIZE 68
+
+
+#define ALX_MAX_RX_QUEUES 8
+#define ALX_MAX_TX_QUEUES 4
+#define ALX_MAX_HANDLED_INTRS 5
+
+#define ALX_WATCHDOG_TIME (5 * HZ)
+
+struct alx_cmb {
+ char name[IFNAMSIZ + 9];
+ void *cmb;
+ dma_addr_t dma;
+};
+struct alx_smb {
+ char name[IFNAMSIZ + 9];
+ void *smb;
+ dma_addr_t dma;
+};
+
+
+/*
+ * RRD : definition
+ */
+
+/* general parameter format of rrd */
+struct alx_rrdes_general {
+ u32 xsum:16;
+ u32 nor:4; /* number of RFD */
+ u32 si:12; /* start index of rfd-ring */
+
+ u32 hash;
+
+ /* dword 2 */
+ u32 vlan_tag:16; /* vlan-tag */
+ u32 pid:8; /* Header Length of Header-Data Split. WORD unit */
+ u32 reserve0:1;
+ u32 rss_cpu:3; /* CPU number used by RSS */
+ u32 rss_flag:4; /* rss_flag 0, TCP(IPv6) flag for RSS hash algorithm
+ * rss_flag 1, IPv6 flag for RSS hash algorithm
+ * rss_flag 2, TCP(IPv4) flag for RSS hash algorithm
+ * rss_flag 3, IPv4 flag for RSS hash algorithm
+ */
+
+ /* dword 3 */
+ u32 pkt_len:14; /* length of the packet */
+ u32 l4f:1; /* L4(TCP/UDP) checksum failed */
+ u32 ipf:1; /* IP checksum failed */
+ u32 vlan_flag:1;/* vlan tag */
+ u32 reserve:3;
+ u32 res:1; /* received error summary */
+ u32 crc:1; /* crc error */
+ u32 fae:1; /* frame alignment error */
+ u32 trunc:1; /* truncated packet, larger than MTU */
+ u32 runt:1; /* runt packet */
+ u32 icmp:1; /* incomplete packet,
+ * due to insufficient rx-descriptor
+ */
+ u32 bar:1; /* broadcast address received */
+ u32 mar:1; /* multicast address received */
+ u32 type:1; /* ethernet type */
+ u32 fov:1; /* fifo overflow */
+ u32 lene:1; /* length error */
+ u32 update:1; /* update */
+};
+
+union alx_rrdesc {
+ /* dword flat format */
+ struct {
+ __le32 dw0;
+ __le32 dw1;
+ __le32 dw2;
+ __le32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ __le64 qw0;
+ __le64 qw1;
+ } qfmt;
+};
+
+/*
+ * XXX: we should not use this guy, best to just
+ * do all le32_to_cpu() conversions on the spot.
+ */
+union alx_sw_rrdesc {
+ struct alx_rrdes_general genr;
+
+ /* dword flat format */
+ struct {
+ u32 dw0;
+ u32 dw1;
+ u32 dw2;
+ u32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ u64 qw0;
+ u64 qw1;
+ } qfmt;
+};
+
+/*
+ * RFD : definition
+ */
+
+/* general parameter format of rfd */
+struct alx_rfdes_general {
+ u64 addr;
+};
+
+union alx_rfdesc {
+ /* dword flat format */
+ struct {
+ __le32 dw0;
+ __le32 dw1;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ __le64 qw0;
+ } qfmt;
+};
+
+/*
+ * XXX: we should not use this guy, best to just
+ * do all le32_to_cpu() conversions on the spot.
+ */
+union alx_sw_rfdesc {
+ struct alx_rfdes_general genr;
+
+ /* dword flat format */
+ struct {
+ u32 dw0;
+ u32 dw1;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ u64 qw0;
+ } qfmt;
+};
+
+/*
+ * TPD : definition
+ */
+
+/* general parameter format of tpd */
+struct alx_tpdes_general {
+ u32 buffer_len:16; /* include 4-byte CRC */
+ u32 vlan_tag:16;
+
+ u32 l4hdr_offset:8; /* tcp/udp header offset to the 1st byte of
+ * the packet */
+ u32 c_csum:1; /* must be 0 in this format */
+ u32 ip_csum:1; /* do ip(v4) header checksum offload */
+ u32 tcp_csum:1; /* do tcp checksum offload, both ipv4 and ipv6 */
+ u32 udp_csum:1; /* do udp checksum offload, both ipv4 and ipv6 */
+ u32 lso:1;
+ u32 lso_v2:1; /* must be 0 in this format */
+ u32 vtagged:1; /* vlan-id tagged already */
+ u32 instag:1; /* insert vlan tag */
+
+ u32 ipv4:1; /* ipv4 packet */
+ u32 type:1; /* type of packet (ethernet_ii(1) or snap(0)) */
+ u32 reserve:12; /* reserved, must be 0 */
+ u32 epad:1; /* even byte padding for this packet */
+ u32 last_frag:1; /* last fragment(buffer) of the packet */
+
+ u64 addr;
+};
+
+/* custom checksum parameter format of tpd */
+struct alx_tpdes_checksum {
+ u32 buffer_len:16; /* include 4-byte CRC */
+ u32 vlan_tag:16;
+
+ u32 payld_offset:8; /* payload offset to the 1st byte of
+ * the packet
+ */
+ u32 c_sum:1; /* do custom checksum offload,
+ * must be 1 in this format
+ */
+ u32 ip_sum:1; /* must be 0 in this format */
+ u32 tcp_sum:1; /* must be 0 in this format */
+ u32 udp_sum:1; /* must be 0 in this format */
+ u32 lso:1; /* must be 0 in this format */
+ u32 lso_v2:1; /* must be 0 in this format */
+ u32 vtagged:1; /* vlan-id tagged already */
+ u32 instag:1; /* insert vlan tag */
+
+ u32 ipv4:1; /* ipv4 packet */
+ u32 type:1; /* type of packet (ethernet_ii(1) or snap(0)) */
+ u32 cxsum_offset:8; /* checksum offset to the 1st byte of
+ * the packet
+ */
+ u32 reserve:4; /* reserved, must be 0 */
+ u32 epad:1; /* even byte padding for this packet */
+ u32 last_frag:1; /* last fragment(buffer) of the packet */
+
+ u64 addr;
+};
+
+
+/* tcp large send format (v1/v2) of tpd */
+struct alx_tpdes_tso {
+ u32 buffer_len:16; /* include 4-byte CRC */
+ u32 vlan_tag:16;
+
+ u32 tcphdr_offset:8; /* tcp hdr offset to the 1st byte of packet */
+ u32 c_sum:1; /* must be 0 in this format */
+ u32 ip_sum:1; /* must be 0 in this format */
+ u32 tcp_sum:1; /* must be 0 in this format */
+ u32 udp_sum:1; /* must be 0 in this format */
+ u32 lso:1; /* do tcp large send (ipv4 only) */
+ u32 lso_v2:1; /* must be 0 in this format */
+ u32 vtagged:1; /* vlan-id tagged already */
+ u32 instag:1; /* insert vlan tag */
+
+ u32 ipv4:1; /* ipv4 packet */
+ u32 type:1; /* type of packet (ethernet_ii(1) or snap(0)) */
+ u32 mss:13; /* MSS if do tcp large send */
+ u32 last_frag:1; /* last fragment(buffer) of the packet */
+
+ u32 addr_lo;
+ u32 addr_hi;
+};
+
+union alx_tpdesc {
+ /* dword flat format */
+ struct {
+ __le32 dw0;
+ __le32 dw1;
+ __le32 dw2;
+ __le32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ __le64 qw0;
+ __le64 qw1;
+ } qfmt;
+};
+
+/*
+ * XXX: we should not use this guy, best to just
+ * do all le32_to_cpu() conversions on the spot.
+ */
+union alx_sw_tpdesc {
+ struct alx_tpdes_general genr;
+ struct alx_tpdes_checksum csum;
+ struct alx_tpdes_tso tso;
+
+ /* dword flat format */
+ struct {
+ u32 dw0;
+ u32 dw1;
+ u32 dw2;
+ u32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ u64 qw0;
+ u64 qw1;
+ } qfmt;
+};
+
+#define ALX_RRD(_que, _i) \
+ (&(((union alx_rrdesc *)(_que)->rrq.rrdesc)[(_i)]))
+#define ALX_RFD(_que, _i) \
+ (&(((union alx_rfdesc *)(_que)->rfq.rfdesc)[(_i)]))
+#define ALX_TPD(_que, _i) \
+ (&(((union alx_tpdesc *)(_que)->tpq.tpdesc)[(_i)]))
+
+
+/*
+ * alx_ring_header represents a single, contiguous block of DMA space
+ * mapped for the three descriptor rings (tpd, rfd, rrd) and the two
+ * message blocks (cmb, smb)
+ */
+struct alx_ring_header {
+ void *desc; /* virtual address */
+ dma_addr_t dma; /* physical address */
+ unsigned int size; /* length in bytes */
+ unsigned int used;
+};
+
+
+/*
+ * alx_buffer is wrapper around a pointer to a socket buffer
+ * so a DMA handle can be stored along with the skb
+ */
+struct alx_buffer {
+ struct sk_buff *skb; /* socket buffer */
+ u16 length; /* rx buffer length */
+ dma_addr_t dma;
+};
+
+struct alx_sw_buffer {
+ struct sk_buff *skb; /* socket buffer */
+ u32 vlan_tag:16;
+ u32 vlan_flag:1;
+ u32 reserved:15;
+};
+
+/* receive free descriptor (rfd) queue */
+struct alx_rfd_queue {
+ struct alx_buffer *rfbuff;
+ union alx_rfdesc *rfdesc; /* virtual address */
+ dma_addr_t rfdma; /* physical address */
+ u16 size; /* length in bytes */
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx; /* it's written to rxque->produce_reg */
+ u16 consume_idx; /* unused */
+};
+
+/* receive return descriptor (rrd) queue */
+struct alx_rrd_queue {
+ union alx_rrdesc *rrdesc; /* virtual address */
+ dma_addr_t rrdma; /* physical address */
+ u16 size; /* length in bytes */
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx; /* unused */
+ u16 consume_idx; /* rxque->consume_reg */
+};
+
+/* software descriptor (swd) queue */
+struct alx_swd_queue {
+ struct alx_sw_buffer *swbuff;
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx;
+ u16 consume_idx;
+};
+
+/* rx queue */
+struct alx_rx_queue {
+ struct device *dev; /* device for dma mapping */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct alx_msix_param *msix;
+ struct alx_rrd_queue rrq;
+ struct alx_rfd_queue rfq;
+ struct alx_swd_queue swq;
+
+ u16 que_idx; /* index in multi rx queues */
+ u16 max_packets; /* max work per interrupt */
+ u16 produce_reg;
+ u16 consume_reg;
+ u32 flags;
+};
+#define ALX_RX_FLAG_SW_QUE 0x00000001
+#define ALX_RX_FLAG_HW_QUE 0x00000002
+#define CHK_RX_FLAG(_flag) CHK_FLAG(rxque, RX, _flag)
+#define SET_RX_FLAG(_flag) SET_FLAG(rxque, RX, _flag)
+#define CLI_RX_FLAG(_flag) CLI_FLAG(rxque, RX, _flag)
+
+#define GET_RF_BUFFER(_rque, _i) (&((_rque)->rfq.rfbuff[(_i)]))
+#define GET_SW_BUFFER(_rque, _i) (&((_rque)->swq.swbuff[(_i)]))
+
+
+/* transmit packet descriptor (tpd) ring */
+struct alx_tpd_queue {
+ struct alx_buffer *tpbuff;
+ union alx_tpdesc *tpdesc; /* virtual address */
+ dma_addr_t tpdma; /* physical address */
+
+ u16 size; /* length in bytes */
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx;
+ u16 consume_idx;
+ u16 last_produce_idx;
+};
+
+/* tx queue */
+struct alx_tx_queue {
+ struct device *dev; /* device for dma mapping */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct alx_tpd_queue tpq;
+ struct alx_msix_param *msix;
+
+ u16 que_idx; /* needed for multiqueue queue management */
+ u16 max_packets; /* max packets per interrupt */
+ u16 produce_reg;
+ u16 consume_reg;
+};
+#define GET_TP_BUFFER(_tque, _i) (&((_tque)->tpq.tpbuff[(_i)]))
+
+
+/*
+ * definition for array allocations.
+ */
+#define ALX_MAX_MSIX_INTRS 16
+#define ALX_MAX_RX_QUEUES 8
+#define ALX_MAX_TX_QUEUES 4
+
+enum alx_msix_type {
+ alx_msix_type_rx,
+ alx_msix_type_tx,
+ alx_msix_type_other,
+};
+#define ALX_MSIX_TYPE_OTH_TIMER 0
+#define ALX_MSIX_TYPE_OTH_ALERT 1
+#define ALX_MSIX_TYPE_OTH_SMB 2
+#define ALX_MSIX_TYPE_OTH_PHY 3
+
+/* ALX_MAX_MSIX_INTRS of these are allocated,
+ * but we only use one per queue-specific vector.
+ */
+struct alx_msix_param {
+ struct alx_adapter *adpt;
+ unsigned int vec_idx; /* index in HW interrupt vector */
+ char name[IFNAMSIZ + 9];
+
+ /* msix interrupts for queue */
+ u8 rx_map[ALX_MAX_RX_QUEUES];
+ u8 tx_map[ALX_MAX_TX_QUEUES];
+ u8 rx_count; /* Rx ring count assigned to this vector */
+ u8 tx_count; /* Tx ring count assigned to this vector */
+
+ struct napi_struct napi;
+ cpumask_var_t affinity_mask;
+ u32 flags;
+};
+
+#define ALX_MSIX_FLAG_RX0 0x00000001
+#define ALX_MSIX_FLAG_RX1 0x00000002
+#define ALX_MSIX_FLAG_RX2 0x00000004
+#define ALX_MSIX_FLAG_RX3 0x00000008
+#define ALX_MSIX_FLAG_RX4 0x00000010
+#define ALX_MSIX_FLAG_RX5 0x00000020
+#define ALX_MSIX_FLAG_RX6 0x00000040
+#define ALX_MSIX_FLAG_RX7 0x00000080
+#define ALX_MSIX_FLAG_TX0 0x00000100
+#define ALX_MSIX_FLAG_TX1 0x00000200
+#define ALX_MSIX_FLAG_TX2 0x00000400
+#define ALX_MSIX_FLAG_TX3 0x00000800
+#define ALX_MSIX_FLAG_TIMER 0x00001000
+#define ALX_MSIX_FLAG_ALERT 0x00002000
+#define ALX_MSIX_FLAG_SMB 0x00004000
+#define ALX_MSIX_FLAG_PHY 0x00008000
+
+#define ALX_MSIX_FLAG_RXS (\
+ ALX_MSIX_FLAG_RX0 |\
+ ALX_MSIX_FLAG_RX1 |\
+ ALX_MSIX_FLAG_RX2 |\
+ ALX_MSIX_FLAG_RX3 |\
+ ALX_MSIX_FLAG_RX4 |\
+ ALX_MSIX_FLAG_RX5 |\
+ ALX_MSIX_FLAG_RX6 |\
+ ALX_MSIX_FLAG_RX7)
+#define ALX_MSIX_FLAG_TXS (\
+ ALX_MSIX_FLAG_TX0 |\
+ ALX_MSIX_FLAG_TX1 |\
+ ALX_MSIX_FLAG_TX2 |\
+ ALX_MSIX_FLAG_TX3)
+#define ALX_MSIX_FLAG_ALL (\
+ ALX_MSIX_FLAG_RXS |\
+ ALX_MSIX_FLAG_TXS |\
+ ALX_MSIX_FLAG_TIMER |\
+ ALX_MSIX_FLAG_ALERT |\
+ ALX_MSIX_FLAG_SMB |\
+ ALX_MSIX_FLAG_PHY)
+
+#define CHK_MSIX_FLAG(_flag) CHK_FLAG(msix, MSIX, _flag)
+#define SET_MSIX_FLAG(_flag) SET_FLAG(msix, MSIX, _flag)
+#define CLI_MSIX_FLAG(_flag) CLI_FLAG(msix, MSIX, _flag)
+
+/*
+ * board-specific private data structure
+ */
+struct alx_adapter {
+ struct net_device *netdev;
+ struct pci_dev *pdev;
+ struct net_device_stats net_stats;
+ bool netdev_registered;
+ u16 bd_number; /* board number */
+
+ struct alx_msix_param *msix[ALX_MAX_MSIX_INTRS];
+ struct msix_entry *msix_entries;
+ int num_msix_rxques;
+ int num_msix_txques;
+ int num_msix_noques; /* true count of msix_noques for device */
+ int num_msix_intrs;
+
+ int min_msix_intrs;
+ int max_msix_intrs;
+
+ /* All Descriptor memory */
+ struct alx_ring_header ring_header;
+
+ /* TX */
+ struct alx_tx_queue *tx_queue[ALX_MAX_TX_QUEUES];
+ /* RX */
+ struct alx_rx_queue *rx_queue[ALX_MAX_RX_QUEUES];
+
+ u16 num_txques;
+ u16 num_rxques; /* equals max(num_hw_rxques, num_sw_rxques) */
+ u16 num_hw_rxques;
+ u16 num_sw_rxques;
+ u16 max_rxques;
+ u16 max_txques;
+
+ u16 num_txdescs;
+ u16 num_rxdescs;
+
+ u32 rxbuf_size;
+
+ struct alx_cmb cmb;
+ struct alx_smb smb;
+
+ /* structs defined in alx_hw.h */
+ struct alx_hw hw;
+ struct alx_hw_stats hw_stats;
+
+ u32 *config_space;
+
+ struct work_struct alx_task;
+ struct timer_list alx_timer;
+
+ unsigned long link_jiffies;
+
+ u32 wol;
+ spinlock_t tx_lock;
+ spinlock_t rx_lock;
+ atomic_t irq_sem;
+
+ u16 msg_enable;
+ unsigned long flags[2];
+};
+
+#define ALX_ADPT_FLAG_0_MSI_CAP 0x00000001
+#define ALX_ADPT_FLAG_0_MSI_EN 0x00000002
+#define ALX_ADPT_FLAG_0_MSIX_CAP 0x00000004
+#define ALX_ADPT_FLAG_0_MSIX_EN 0x00000008
+#define ALX_ADPT_FLAG_0_MRQ_CAP 0x00000010
+#define ALX_ADPT_FLAG_0_MRQ_EN 0x00000020
+#define ALX_ADPT_FLAG_0_MTQ_CAP 0x00000040
+#define ALX_ADPT_FLAG_0_MTQ_EN 0x00000080
+#define ALX_ADPT_FLAG_0_SRSS_CAP 0x00000100
+#define ALX_ADPT_FLAG_0_SRSS_EN 0x00000200
+#define ALX_ADPT_FLAG_0_FIXED_MSIX 0x00000400
+
+#define ALX_ADPT_FLAG_0_TASK_REINIT_REQ 0x00010000 /* reinit */
+#define ALX_ADPT_FLAG_0_TASK_LSC_REQ 0x00020000
+
+#define ALX_ADPT_FLAG_1_STATE_TESTING 0x00000001
+#define ALX_ADPT_FLAG_1_STATE_RESETTING 0x00000002
+#define ALX_ADPT_FLAG_1_STATE_DOWN 0x00000004
+#define ALX_ADPT_FLAG_1_STATE_WATCH_DOG 0x00000008
+#define ALX_ADPT_FLAG_1_STATE_DIAG_RUNNING 0x00000010
+#define ALX_ADPT_FLAG_1_STATE_INACTIVE 0x00000020
+
+
+#define CHK_ADPT_FLAG(_idx, _flag) \
+ CHK_FLAG_ARRAY(adpt, _idx, ADPT, _flag)
+#define SET_ADPT_FLAG(_idx, _flag) \
+ SET_FLAG_ARRAY(adpt, _idx, ADPT, _flag)
+#define CLI_ADPT_FLAG(_idx, _flag) \
+ CLI_FLAG_ARRAY(adpt, _idx, ADPT, _flag)
+
+/* default to trying for four seconds */
+#define ALX_TRY_LINK_TIMEOUT (4 * HZ)
+
+
+#define ALX_OPEN_CTRL_IRQ_EN 0x00000001
+#define ALX_OPEN_CTRL_RESET_MAC 0x00000002
+#define ALX_OPEN_CTRL_RESET_PHY 0x00000004
+#define ALX_OPEN_CTRL_RESET_ALL (\
+ ALX_OPEN_CTRL_RESET_MAC |\
+ ALX_OPEN_CTRL_RESET_PHY)
+
+/* needed by alx_ethtool.c */
+extern char alx_drv_name[];
+extern void alx_reinit_locked(struct alx_adapter *adpt);
+extern void alx_set_ethtool_ops(struct net_device *netdev);
+#ifdef ETHTOOL_OPS_COMPAT
+extern int ethtool_ioctl(struct ifreq *ifr);
+#endif
+
+#endif /* _ALX_H_ */
diff --git a/drivers/net/ethernet/atheros/alx/alx_ethtool.c b/drivers/net/ethernet/atheros/alx/alx_ethtool.c
new file mode 100644
index 0000000..c044133
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_ethtool.c
@@ -0,0 +1,519 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/netdevice.h>
+#include <linux/ethtool.h>
+#include <linux/slab.h>
+
+#include "alx.h"
+#include "alx_hwcom.h"
+
+#ifdef ETHTOOL_OPS_COMPAT
+#include "alx_compat_ethtool.c"
+#endif
+
+
+static int alx_get_settings(struct net_device *netdev,
+ struct ethtool_cmd *ecmd)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ u32 link_speed = hw->link_speed;
+ bool link_up = hw->link_up;
+
+ ecmd->supported = (SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_100baseT_Half |
+ SUPPORTED_100baseT_Full |
+ SUPPORTED_Autoneg |
+ SUPPORTED_TP);
+ if (CHK_HW_FLAG(GIGA_CAP))
+ ecmd->supported |= SUPPORTED_1000baseT_Full;
+
+ ecmd->advertising = ADVERTISED_TP;
+
+ ecmd->advertising |= ADVERTISED_Autoneg;
+ ecmd->advertising |= hw->autoneg_advertised;
+
+ ecmd->port = PORT_TP;
+ ecmd->phy_address = 0;
+ ecmd->autoneg = AUTONEG_ENABLE;
+ ecmd->transceiver = XCVR_INTERNAL;
+
+ if (!in_interrupt()) {
+ hw->cbs.check_phy_link(hw, &link_speed, &link_up);
+ hw->link_speed = link_speed;
+ hw->link_up = link_up;
+ }
+
+ if (link_up) {
+ switch (link_speed) {
+ case ALX_LINK_SPEED_10_HALF:
+ ethtool_cmd_speed_set(ecmd, SPEED_10);
+ ecmd->duplex = DUPLEX_HALF;
+ break;
+ case ALX_LINK_SPEED_10_FULL:
+ ethtool_cmd_speed_set(ecmd, SPEED_10);
+ ecmd->duplex = DUPLEX_FULL;
+ break;
+ case ALX_LINK_SPEED_100_HALF:
+ ethtool_cmd_speed_set(ecmd, SPEED_100);
+ ecmd->duplex = DUPLEX_HALF;
+ break;
+ case ALX_LINK_SPEED_100_FULL:
+ ethtool_cmd_speed_set(ecmd, SPEED_100);
+ ecmd->duplex = DUPLEX_FULL;
+ break;
+ case ALX_LINK_SPEED_1GB_FULL:
+ ethtool_cmd_speed_set(ecmd, SPEED_1000);
+ ecmd->duplex = DUPLEX_FULL;
+ break;
+ default:
+ ethtool_cmd_speed_set(ecmd, -1);
+ ecmd->duplex = -1;
+ break;
+ }
+ } else {
+ ethtool_cmd_speed_set(ecmd, -1);
+ ecmd->duplex = -1;
+ }
+
+ return 0;
+}
+
+
+static int alx_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *ecmd)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ u32 advertised, old;
+ int error = 0;
+
+ while (CHK_ADPT_FLAG(1, STATE_RESETTING))
+ msleep(20);
+ SET_ADPT_FLAG(1, STATE_RESETTING);
+
+ old = hw->autoneg_advertised;
+ advertised = 0;
+ if (ecmd->autoneg == AUTONEG_ENABLE) {
+ advertised = ALX_LINK_SPEED_DEFAULT;
+ } else {
+ u32 speed = ethtool_cmd_speed(ecmd);
+ if (speed == SPEED_1000) {
+ if (ecmd->duplex != DUPLEX_FULL) {
+ dev_warn(&adpt->pdev->dev,
+ "1000M half is invalid\n");
+ CLI_ADPT_FLAG(1, STATE_RESETTING);
+ return -EINVAL;
+ }
+ advertised = ALX_LINK_SPEED_1GB_FULL;
+ } else if (speed == SPEED_100) {
+ if (ecmd->duplex == DUPLEX_FULL)
+ advertised = ALX_LINK_SPEED_100_FULL;
+ else
+ advertised = ALX_LINK_SPEED_100_HALF;
+ } else {
+ if (ecmd->duplex == DUPLEX_FULL)
+ advertised = ALX_LINK_SPEED_10_FULL;
+ else
+ advertised = ALX_LINK_SPEED_10_HALF;
+ }
+ }
+
+ if (hw->autoneg_advertised == advertised) {
+ CLI_ADPT_FLAG(1, STATE_RESETTING);
+ return error;
+ }
+
+ error = hw->cbs.setup_phy_link_speed(hw, advertised, true,
+ !hw->disable_fc_autoneg);
+ if (error) {
+ dev_err(&adpt->pdev->dev,
+ "setup link failed with code %d\n", error);
+ hw->cbs.setup_phy_link_speed(hw, old, true,
+ !hw->disable_fc_autoneg);
+ }
+ CLI_ADPT_FLAG(1, STATE_RESETTING);
+ return error;
+}
+
+
+static void alx_get_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+
+
+ if (hw->disable_fc_autoneg ||
+ hw->cur_fc_mode == alx_fc_none)
+ pause->autoneg = 0;
+ else
+ pause->autoneg = 1;
+
+ if (hw->cur_fc_mode == alx_fc_rx_pause) {
+ pause->rx_pause = 1;
+ } else if (hw->cur_fc_mode == alx_fc_tx_pause) {
+ pause->tx_pause = 1;
+ } else if (hw->cur_fc_mode == alx_fc_full) {
+ pause->rx_pause = 1;
+ pause->tx_pause = 1;
+ }
+}
+
+
+static int alx_set_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ enum alx_fc_mode req_fc_mode;
+ bool disable_fc_autoneg;
+ int retval;
+
+ while (CHK_ADPT_FLAG(1, STATE_RESETTING))
+ msleep(20);
+ SET_ADPT_FLAG(1, STATE_RESETTING);
+
+ req_fc_mode = hw->req_fc_mode;
+ disable_fc_autoneg = hw->disable_fc_autoneg;
+
+
+ if (pause->autoneg != AUTONEG_ENABLE)
+ disable_fc_autoneg = true;
+ else
+ disable_fc_autoneg = false;
+
+ if ((pause->rx_pause && pause->tx_pause) || pause->autoneg)
+ req_fc_mode = alx_fc_full;
+ else if (pause->rx_pause && !pause->tx_pause)
+ req_fc_mode = alx_fc_rx_pause;
+ else if (!pause->rx_pause && pause->tx_pause)
+ req_fc_mode = alx_fc_tx_pause;
+ else if (!pause->rx_pause && !pause->tx_pause)
+ req_fc_mode = alx_fc_none;
+ else
+ return -EINVAL;
+
+ if ((hw->req_fc_mode != req_fc_mode) ||
+ (hw->disable_fc_autoneg != disable_fc_autoneg)) {
+ hw->req_fc_mode = req_fc_mode;
+ hw->disable_fc_autoneg = disable_fc_autoneg;
+ if (!hw->disable_fc_autoneg)
+ retval = hw->cbs.setup_phy_link(hw,
+ hw->autoneg_advertised, true, true);
+
+ if (hw->cbs.config_fc)
+ hw->cbs.config_fc(hw);
+ }
+
+ CLI_ADPT_FLAG(1, STATE_RESETTING);
+ return 0;
+}
+
+
+static u32 alx_get_msglevel(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ return adpt->msg_enable;
+}
+
+
+static void alx_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ adpt->msg_enable = data;
+}
+
+
+static int alx_get_regs_len(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ return hw->hwreg_sz * sizeof(u32);
+}
+
+
+static void alx_get_regs(struct net_device *netdev,
+ struct ethtool_regs *regs, void *buff)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+
+ regs->version = 0;
+
+ memset(buff, 0, hw->hwreg_sz * sizeof(u32));
+ if (hw->cbs.get_ethtool_regs)
+ hw->cbs.get_ethtool_regs(hw, buff);
+}
+
+
+static int alx_get_eeprom_len(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ return hw->eeprom_sz;
+}
+
+
+static int alx_get_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ bool eeprom_exist = false;
+ u32 *eeprom_buff;
+ int first_dword, last_dword;
+ int retval = 0;
+ int i;
+
+ if (eeprom->len == 0)
+ return -EINVAL;
+
+ if (hw->cbs.check_nvram)
+ hw->cbs.check_nvram(hw, &eeprom_exist);
+ if (!eeprom_exist)
+ return -EOPNOTSUPP;
+
+ eeprom->magic = adpt->pdev->vendor |
+ (adpt->pdev->device << 16);
+
+ first_dword = eeprom->offset >> 2;
+ last_dword = (eeprom->offset + eeprom->len - 1) >> 2;
+
+ eeprom_buff = kmalloc(sizeof(u32) *
+ (last_dword - first_dword + 1), GFP_KERNEL);
+ if (eeprom_buff == NULL)
+ return -ENOMEM;
+
+ for (i = first_dword; i <= last_dword; i++) {
+ if (hw->cbs.read_nvram) {
+ retval = hw->cbs.read_nvram(hw, i*4,
+ &(eeprom_buff[i-first_dword]));
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ }
+ }
+
+ /* Device's eeprom is always little-endian, word addressable */
+	for (i = 0; i <= last_dword - first_dword; i++)
+ le32_to_cpus(&eeprom_buff[i]);
+
+ memcpy(bytes, (u8 *)eeprom_buff + (eeprom->offset & 3), eeprom->len);
+out:
+ kfree(eeprom_buff);
+ return retval;
+}
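The dword-window arithmetic above (`first_dword`/`last_dword` bracketing an arbitrary byte range, with the copy-out starting at `eeprom->offset & 3`) is easy to get wrong by one; the same math extracted into a standalone sketch, with a helper name of our own choosing:

```c
#include <stdint.h>

/* Mirror of the window math in alx_get_eeprom(): the byte range
 * [offset, offset + len) is covered by the 32-bit words
 * [first_dword, last_dword] inclusive, and the caller copies out of
 * the word buffer starting at byte (offset & 3). Helper name is ours. */
static void eeprom_window(uint32_t offset, uint32_t len,
			  uint32_t *first_dword, uint32_t *last_dword)
{
	*first_dword = offset >> 2;
	*last_dword = (offset + len - 1) >> 2;
}
```

Note that a read entirely inside one dword yields `first_dword == last_dword`, so the NVRAM loop must be inclusive of `last_dword` to read anything at all.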
+
+
+static int alx_set_eeprom(struct net_device *netdev,
+ struct ethtool_eeprom *eeprom, u8 *bytes)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ bool eeprom_exist = false;
+ u32 *eeprom_buff;
+ u32 *ptr;
+ int first_dword, last_dword;
+ int retval = 0;
+ int i;
+
+ if (eeprom->len == 0)
+ return -EINVAL;
+
+ if (hw->cbs.check_nvram)
+ hw->cbs.check_nvram(hw, &eeprom_exist);
+ if (!eeprom_exist)
+ return -EOPNOTSUPP;
+
+
+ if (eeprom->magic != (adpt->pdev->vendor |
+ (adpt->pdev->device << 16)))
+ return -EINVAL;
+
+ first_dword = eeprom->offset >> 2;
+ last_dword = (eeprom->offset + eeprom->len - 1) >> 2;
+ eeprom_buff = kmalloc(ALX_MAX_EEPROM_LEN, GFP_KERNEL);
+ if (eeprom_buff == NULL)
+ return -ENOMEM;
+
+ ptr = (u32 *)eeprom_buff;
+
+ if (eeprom->offset & 3) {
+ /* need read/modify/write of first changed EEPROM word */
+ /* only the second byte of the word is being modified */
+ if (hw->cbs.read_nvram) {
+ retval = hw->cbs.read_nvram(hw, first_dword * 4,
+ &(eeprom_buff[0]));
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ }
+ ptr++;
+ }
+
+ if (((eeprom->offset + eeprom->len) & 3)) {
+ /* need read/modify/write of last changed EEPROM word */
+ /* only the first byte of the word is being modified */
+ if (hw->cbs.read_nvram) {
+ retval = hw->cbs.read_nvram(hw, last_dword * 4,
+ &(eeprom_buff[last_dword - first_dword]));
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ }
+ }
+
+ /* Device's eeprom is always little-endian, word addressable */
+ memcpy(ptr, bytes, eeprom->len);
+ for (i = 0; i < last_dword - first_dword + 1; i++) {
+ if (hw->cbs.write_nvram) {
+ retval = hw->cbs.write_nvram(hw, (first_dword + i) * 4,
+ eeprom_buff[i]);
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ }
+ }
+out:
+ kfree(eeprom_buff);
+ return retval;
+}
+
+
+static void alx_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *drvinfo)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ strlcpy(drvinfo->driver, alx_drv_name, sizeof(drvinfo->driver));
+	strlcpy(drvinfo->fw_version, "alx", sizeof(drvinfo->fw_version));
+ strlcpy(drvinfo->bus_info, pci_name(adpt->pdev),
+ sizeof(drvinfo->bus_info));
+ drvinfo->n_stats = 0;
+ drvinfo->testinfo_len = 0;
+ drvinfo->regdump_len = adpt->hw.hwreg_sz;
+ drvinfo->eedump_len = adpt->hw.eeprom_sz;
+}
+
+
+static int alx_wol_exclusion(struct alx_adapter *adpt,
+ struct ethtool_wolinfo *wol)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 1;
+
+ /* WOL not supported except for the following */
+ switch (hw->pci_devid) {
+ case ALX_DEV_ID_AR8131:
+ case ALX_DEV_ID_AR8132:
+ case ALX_DEV_ID_AR8151_V1:
+ case ALX_DEV_ID_AR8151_V2:
+ case ALX_DEV_ID_AR8152_V1:
+ case ALX_DEV_ID_AR8152_V2:
+ case ALX_DEV_ID_AR8161:
+ case ALX_DEV_ID_AR8162:
+ retval = 0;
+ break;
+ default:
+ wol->supported = 0;
+ }
+
+ return retval;
+}
+
+
+static void alx_get_wol(struct net_device *netdev,
+ struct ethtool_wolinfo *wol)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ wol->supported = WAKE_MAGIC | WAKE_PHY;
+ wol->wolopts = 0;
+
+ if (adpt->wol & ALX_WOL_MAGIC)
+ wol->wolopts |= WAKE_MAGIC;
+ if (adpt->wol & ALX_WOL_PHY)
+ wol->wolopts |= WAKE_PHY;
+
+ netif_info(adpt, wol, adpt->netdev,
+ "wol->wolopts = %x\n", wol->wolopts);
+}
+
+
+static int alx_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ if (wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE |
+ WAKE_UCAST | WAKE_BCAST | WAKE_MCAST))
+ return -EOPNOTSUPP;
+
+ if (alx_wol_exclusion(adpt, wol))
+ return wol->wolopts ? -EOPNOTSUPP : 0;
+
+ adpt->wol = 0;
+
+ if (wol->wolopts & WAKE_MAGIC)
+ adpt->wol |= ALX_WOL_MAGIC;
+ if (wol->wolopts & WAKE_PHY)
+ adpt->wol |= ALX_WOL_PHY;
+
+ device_set_wakeup_enable(&adpt->pdev->dev, adpt->wol);
+
+ return 0;
+}
+
+
+static int alx_nway_reset(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ if (netif_running(netdev))
+ alx_reinit_locked(adpt);
+ return 0;
+}
+
+
+static const struct ethtool_ops alx_ethtool_ops = {
+ .get_settings = alx_get_settings,
+ .set_settings = alx_set_settings,
+ .get_pauseparam = alx_get_pauseparam,
+ .set_pauseparam = alx_set_pauseparam,
+ .get_drvinfo = alx_get_drvinfo,
+ .get_regs_len = alx_get_regs_len,
+ .get_regs = alx_get_regs,
+ .get_wol = alx_get_wol,
+ .set_wol = alx_set_wol,
+ .get_msglevel = alx_get_msglevel,
+ .set_msglevel = alx_set_msglevel,
+ .nway_reset = alx_nway_reset,
+ .get_link = ethtool_op_get_link,
+ .get_eeprom_len = alx_get_eeprom_len,
+ .get_eeprom = alx_get_eeprom,
+ .set_eeprom = alx_set_eeprom,
+};
+
+
+void alx_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &alx_ethtool_ops);
+}
diff --git a/drivers/net/ethernet/atheros/alx/alx_hwcom.h b/drivers/net/ethernet/atheros/alx/alx_hwcom.h
new file mode 100644
index 0000000..d3bd2f1
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_hwcom.h
@@ -0,0 +1,187 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_HWCOMMON_H_
+#define _ALX_HWCOMMON_H_
+
+#include <linux/bitops.h>
+#include "alx_sw.h"
+
+
+#define BIT_ALL 0xffffffffUL
+
+#define ASHFT31(_x) ((_x) << 31)
+#define ASHFT30(_x) ((_x) << 30)
+#define ASHFT29(_x) ((_x) << 29)
+#define ASHFT28(_x) ((_x) << 28)
+#define ASHFT27(_x) ((_x) << 27)
+#define ASHFT26(_x) ((_x) << 26)
+#define ASHFT25(_x) ((_x) << 25)
+#define ASHFT24(_x) ((_x) << 24)
+#define ASHFT23(_x) ((_x) << 23)
+#define ASHFT22(_x) ((_x) << 22)
+#define ASHFT21(_x) ((_x) << 21)
+#define ASHFT20(_x) ((_x) << 20)
+#define ASHFT19(_x) ((_x) << 19)
+#define ASHFT18(_x) ((_x) << 18)
+#define ASHFT17(_x) ((_x) << 17)
+#define ASHFT16(_x) ((_x) << 16)
+#define ASHFT15(_x) ((_x) << 15)
+#define ASHFT14(_x) ((_x) << 14)
+#define ASHFT13(_x) ((_x) << 13)
+#define ASHFT12(_x) ((_x) << 12)
+#define ASHFT11(_x) ((_x) << 11)
+#define ASHFT10(_x) ((_x) << 10)
+#define ASHFT9(_x) ((_x) << 9)
+#define ASHFT8(_x) ((_x) << 8)
+#define ASHFT7(_x) ((_x) << 7)
+#define ASHFT6(_x) ((_x) << 6)
+#define ASHFT5(_x) ((_x) << 5)
+#define ASHFT4(_x) ((_x) << 4)
+#define ASHFT3(_x) ((_x) << 3)
+#define ASHFT2(_x) ((_x) << 2)
+#define ASHFT1(_x) ((_x) << 1)
+#define ASHFT0(_x) ((_x) << 0)
+
+
+#define FIELD_GETX(_x, _name) (((_x) & (_name##_MASK)) >> (_name##_SHIFT))
+#define FIELD_SETS(_x, _name, _v) (\
+(_x) = \
+((_x) & ~(_name##_MASK)) |\
+(((u16)(_v) << (_name##_SHIFT)) & (_name##_MASK)))
+#define FIELD_SETL(_x, _name, _v) (\
+(_x) = \
+((_x) & ~(_name##_MASK)) |\
+(((u32)(_v) << (_name##_SHIFT)) & (_name##_MASK)))
+#define FIELDL(_name, _v) (((u32)(_v) << (_name##_SHIFT)) & (_name##_MASK))
+#define FIELDS(_name, _v) (((u16)(_v) << (_name##_SHIFT)) & (_name##_MASK))
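The FIELD_* helpers rely on the `_name##_MASK` / `_name##_SHIFT` token-pasting convention; a minimal standalone exercise of the pattern, using a made-up register field occupying bits 8..15:

```c
#include <stdint.h>

/* Standalone copies of the FIELD_* helpers from alx_hwcom.h, exercised
 * against a hypothetical 8-bit field at bits 8..15 (DEMO_* is ours). */
#define DEMO_MASK	0x0000ff00UL
#define DEMO_SHIFT	8

#define FIELD_GETX(_x, _name)	(((_x) & (_name##_MASK)) >> (_name##_SHIFT))
#define FIELDL(_name, _v)	(((uint32_t)(_v) << (_name##_SHIFT)) & (_name##_MASK))

/* Read-modify-write of the demo field, as FIELD_SETL would do. */
static uint32_t set_demo(uint32_t reg, uint32_t v)
{
	return (reg & ~DEMO_MASK) | FIELDL(DEMO, v);
}
```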
+
+
+
+#define LX_SWAP_DW(_x) (\
+ (((_x) << 24) & 0xFF000000UL) |\
+ (((_x) << 8) & 0x00FF0000UL) |\
+ (((_x) >> 8) & 0x0000FF00UL) |\
+ (((_x) >> 24) & 0x000000FFUL))
+
+#define LX_SWAP_W(_x) (\
+ (((_x) >> 8) & 0x00FFU) |\
+ (((_x) << 8) & 0xFF00U))
+
+
+#define LX_ERR_SUCCESS 0x0000
+#define LX_ERR_ALOAD 0x0001
+#define LX_ERR_RSTMAC 0x0002
+#define LX_ERR_PARM 0x0003
+#define LX_ERR_MIIBUSY 0x0004
+
+/* link capability */
+#define LX_LC_10H 0x01
+#define LX_LC_10F 0x02
+#define LX_LC_100H 0x04
+#define LX_LC_100F 0x08
+#define LX_LC_1000F 0x10
+#define LX_LC_ALL \
+ (LX_LC_10H|LX_LC_10F|LX_LC_100H|LX_LC_100F|LX_LC_1000F)
+
+/* options for MAC control */
+#define LX_MACSPEED_1000 BIT(0) /* 1:1000M, 0:10/100M */
+#define LX_MACDUPLEX_FULL BIT(1) /* 1:full, 0:half */
+#define LX_FLT_BROADCAST BIT(2) /* 1:enable rx-broadcast */
+#define LX_FLT_MULTI_ALL BIT(3)
+#define LX_FLT_DIRECT BIT(4)
+#define LX_FLT_PROMISC BIT(5)
+#define LX_FC_TXEN BIT(6)
+#define LX_FC_RXEN BIT(7)
+#define LX_VLAN_STRIP BIT(8)
+#define LX_LOOPBACK BIT(9)
+#define LX_ADD_FCS BIT(10)
+#define LX_SINGLE_PAUSE BIT(11)
+
+
+/* interop between drivers */
+#define LX_DRV_TYPE_MASK ASHFT27(0x1FUL)
+#define LX_DRV_TYPE_SHIFT 27
+#define LX_DRV_TYPE_UNKNOWN 0
+#define LX_DRV_TYPE_BIOS 1
+#define LX_DRV_TYPE_BTROM 2
+#define LX_DRV_TYPE_PKT 3
+#define LX_DRV_TYPE_NDS2 4
+#define LX_DRV_TYPE_UEFI 5
+#define LX_DRV_TYPE_NDS5 6
+#define LX_DRV_TYPE_NDS62 7
+#define LX_DRV_TYPE_NDS63 8
+#define LX_DRV_TYPE_LNX 9
+#define LX_DRV_TYPE_ODI16 10
+#define LX_DRV_TYPE_ODI32 11
+#define LX_DRV_TYPE_FRBSD 12
+#define LX_DRV_TYPE_NTBSD 13
+#define LX_DRV_TYPE_WCE 14
+#define LX_DRV_PHY_AUTO BIT(26) /* 1:auto, 0:force */
+#define LX_DRV_PHY_1000 BIT(25)
+#define LX_DRV_PHY_100 BIT(24)
+#define LX_DRV_PHY_10 BIT(23)
+#define LX_DRV_PHY_DUPLEX BIT(22) /* 1:full, 0:half */
+#define LX_DRV_PHY_FC BIT(21) /* 1:en flow control */
+#define LX_DRV_PHY_MASK ASHFT21(0x1FUL)
+#define LX_DRV_PHY_SHIFT 21
+#define LX_DRV_PHY_UNKNOWN 0
+#define LX_DRV_DISABLE BIT(18)
+#define LX_DRV_WOLS5_EN BIT(17)
+#define LX_DRV_WOLS5_BIOS_EN BIT(16)
+#define LX_DRV_AZ_EN BIT(12)
+#define LX_DRV_WOLPATTERN_EN BIT(11)
+#define LX_DRV_WOLLINKUP_EN BIT(10)
+#define LX_DRV_WOLMAGIC_EN BIT(9)
+#define LX_DRV_WOLCAP_BIOS_EN BIT(8)
+#define LX_DRV_ASPM_SPD1000LMT_MASK ASHFT4(3UL)
+#define LX_DRV_ASPM_SPD1000LMT_SHIFT 4
+#define LX_DRV_ASPM_SPD1000LMT_100M 0
+#define LX_DRV_ASPM_SPD1000LMT_NO 1
+#define LX_DRV_ASPM_SPD1000LMT_1M 2
+#define LX_DRV_ASPM_SPD1000LMT_10M 3
+#define LX_DRV_ASPM_SPD100LMT_MASK ASHFT2(3UL)
+#define LX_DRV_ASPM_SPD100LMT_SHIFT 2
+#define LX_DRV_ASPM_SPD100LMT_1M 0
+#define LX_DRV_ASPM_SPD100LMT_10M 1
+#define LX_DRV_ASPM_SPD100LMT_100M 2
+#define LX_DRV_ASPM_SPD100LMT_NO 3
+#define LX_DRV_ASPM_SPD10LMT_MASK ASHFT0(3UL)
+#define LX_DRV_ASPM_SPD10LMT_SHIFT 0
+#define LX_DRV_ASPM_SPD10LMT_1M 0
+#define LX_DRV_ASPM_SPD10LMT_10M 1
+#define LX_DRV_ASPM_SPD10LMT_100M 2
+#define LX_DRV_ASPM_SPD10LMT_NO 3
+
+/* flag: PHY has been initialized */
+#define LX_PHY_INITED 0x003F
+
+/* check if the mac address is valid */
+#define macaddr_valid(_addr) (\
+ ((*(u8 *)(_addr))&1) == 0 && \
+ !(*(u32 *)(_addr) == 0 && *((u16 *)(_addr)+2) == 0))
+
+#define test_set_or_clear(_val, _ctrl, _flag, _bit) \
+do { \
+ if ((_ctrl) & (_flag)) \
+ (_val) |= (_bit); \
+ else \
+ (_val) &= ~(_bit); \
+} while (0)
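test_set_or_clear mirrors one control flag into one register bit; a standalone copy with made-up flag/bit values shows the intended use:

```c
#include <stdint.h>

/* Verbatim copy of test_set_or_clear from alx_hwcom.h. */
#define test_set_or_clear(_val, _ctrl, _flag, _bit) \
do { \
	if ((_ctrl) & (_flag)) \
		(_val) |= (_bit); \
	else \
		(_val) &= ~(_bit); \
} while (0)

/* Mirror a hypothetical control flag (bit 0 of ctrl) into a
 * hypothetical register bit (bit 7 of val). */
static uint32_t mirror_flag(uint32_t val, uint32_t ctrl)
{
	test_set_or_clear(val, ctrl, 0x1u, 0x80u);
	return val;
}
```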
+
+
+#endif /* _ALX_HWCOMMON_H_ */
+
diff --git a/drivers/net/ethernet/atheros/alx/alx_main.c b/drivers/net/ethernet/atheros/alx/alx_main.c
new file mode 100644
index 0000000..a51c608
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_main.c
@@ -0,0 +1,3899 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include "alx.h"
+#include "alx_hwcom.h"
+
+char alx_drv_name[] = "alx";
+static const char alx_drv_description[] =
+ "Qualcomm Atheros(R) "
+ "AR813x/AR815x/AR816x PCI-E Ethernet Network Driver";
+
+/* alx_pci_tbl - PCI Device ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ * Class, Class Mask, private data (not used) }
+ */
+#define ALX_ETHER_DEVICE(device_id) {\
+ PCI_DEVICE(ALX_VENDOR_ID, device_id)}
+static DEFINE_PCI_DEVICE_TABLE(alx_pci_tbl) = {
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8131),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8132),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8151_V1),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8151_V2),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8152_V1),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8152_V2),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8161),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8162),
+ {0,}
+};
+MODULE_DEVICE_TABLE(pci, alx_pci_tbl);
+MODULE_AUTHOR("Qualcomm Corporation, <nic-devel@...lcomm.com>");
+MODULE_DESCRIPTION("Qualcomm Atheros Gigabit Ethernet Driver");
+MODULE_LICENSE("Dual BSD/GPL");
+
+static int alx_open_internal(struct alx_adapter *adpt, u32 ctrl);
+static void alx_stop_internal(struct alx_adapter *adpt, u32 ctrl);
+
+int alx_cfg_r16(const struct alx_hw *hw, int reg, u16 *pval)
+{
+ if (!(hw && hw->adpt && hw->adpt->pdev))
+ return -EINVAL;
+ return pci_read_config_word(hw->adpt->pdev, reg, pval);
+}
+
+
+int alx_cfg_w16(const struct alx_hw *hw, int reg, u16 val)
+{
+ if (!(hw && hw->adpt && hw->adpt->pdev))
+ return -EINVAL;
+ return pci_write_config_word(hw->adpt->pdev, reg, val);
+}
+
+
+void alx_mem_flush(const struct alx_hw *hw)
+{
+ readl(hw->hw_addr);
+}
+
+
+void alx_mem_r32(const struct alx_hw *hw, int reg, u32 *val)
+{
+ if (unlikely(!hw->link_up))
+ readl(hw->hw_addr + reg);
+	*val = readl(hw->hw_addr + reg);
+}
+
+
+void alx_mem_w32(const struct alx_hw *hw, int reg, u32 val)
+{
+ if (hw->mac_type == alx_mac_l2cb_v20 && reg < 0x1400)
+ readl(hw->hw_addr + reg);
+ writel(val, hw->hw_addr + reg);
+}
+
+
+static void alx_mem_r16(const struct alx_hw *hw, int reg, u16 *val)
+{
+ if (unlikely(!hw->link_up))
+ readl(hw->hw_addr + reg);
+	*val = readw(hw->hw_addr + reg);
+}
+
+
+static void alx_mem_w16(const struct alx_hw *hw, int reg, u16 val)
+{
+ if (hw->mac_type == alx_mac_l2cb_v20 && reg < 0x1400)
+ readl(hw->hw_addr + reg);
+ writew(val, hw->hw_addr + reg);
+}
+
+
+void alx_mem_w8(const struct alx_hw *hw, int reg, u8 val)
+{
+ if (hw->mac_type == alx_mac_l2cb_v20 && reg < 0x1400)
+ readl(hw->hw_addr + reg);
+ writeb(val, hw->hw_addr + reg);
+}
+
+
+/*
+ * alx_hw_printk
+ */
+void alx_hw_printk(const char *level, const struct alx_hw *hw,
+ const char *fmt, ...)
+{
+ struct va_format vaf;
+ va_list args;
+
+ va_start(args, fmt);
+ vaf.fmt = fmt;
+ vaf.va = &args;
+
+ if (hw && hw->adpt && hw->adpt->netdev)
+ __netdev_printk(level, hw->adpt->netdev, &vaf);
+ else
+ printk("%salx_hw: %pV", level, &vaf);
+
+ va_end(args);
+}
+
+
+/*
+ * alx_validate_mac_addr - Validate MAC address
+ */
+static int alx_validate_mac_addr(u8 *mac_addr)
+{
+ int retval = 0;
+
+ if (mac_addr[0] & 0x01) {
+ printk(KERN_DEBUG "MAC address is multicast\n");
+ retval = -EADDRNOTAVAIL;
+ } else if (mac_addr[0] == 0xff && mac_addr[1] == 0xff) {
+ printk(KERN_DEBUG "MAC address is broadcast\n");
+ retval = -EADDRNOTAVAIL;
+ } else if (mac_addr[0] == 0 && mac_addr[1] == 0 &&
+ mac_addr[2] == 0 && mac_addr[3] == 0 &&
+ mac_addr[4] == 0 && mac_addr[5] == 0) {
+ printk(KERN_DEBUG "MAC address is all zeros\n");
+ retval = -EADDRNOTAVAIL;
+ }
+ return retval;
+}
+
+
+/*
+ * alx_set_mac_type - Sets MAC type
+ */
+static int alx_set_mac_type(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 0;
+
+ if (hw->pci_venid == ALX_VENDOR_ID) {
+ switch (hw->pci_devid) {
+ case ALX_DEV_ID_AR8131:
+ hw->mac_type = alx_mac_l1c;
+ break;
+ case ALX_DEV_ID_AR8132:
+ hw->mac_type = alx_mac_l2c;
+ break;
+ case ALX_DEV_ID_AR8151_V1:
+ hw->mac_type = alx_mac_l1d_v1;
+ break;
+ case ALX_DEV_ID_AR8151_V2:
+			/* just use the l1d configuration */
+ hw->mac_type = alx_mac_l1d_v2;
+ break;
+ case ALX_DEV_ID_AR8152_V1:
+ hw->mac_type = alx_mac_l2cb_v1;
+ break;
+ case ALX_DEV_ID_AR8152_V2:
+ if (hw->pci_revid == ALX_REV_ID_AR8152_V2_0)
+ hw->mac_type = alx_mac_l2cb_v20;
+ else
+ hw->mac_type = alx_mac_l2cb_v21;
+ break;
+ case ALX_DEV_ID_AR8161:
+ hw->mac_type = alx_mac_l1f;
+ break;
+ case ALX_DEV_ID_AR8162:
+ hw->mac_type = alx_mac_l2f;
+ break;
+ default:
+ retval = -EINVAL;
+ break;
+ }
+ } else {
+ retval = -EINVAL;
+ }
+
+ netif_info(adpt, hw, adpt->netdev,
+ "found mac: %d, returns: %d\n", hw->mac_type, retval);
+ return retval;
+}
+
+
+/*
+ * alx_init_hw_callbacks
+ */
+static int alx_init_hw_callbacks(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 0;
+
+ alx_set_mac_type(adpt);
+
+ switch (hw->mac_type) {
+ case alx_mac_l1f:
+ case alx_mac_l2f:
+ retval = alf_init_hw_callbacks(hw);
+ break;
+ case alx_mac_l1c:
+ case alx_mac_l2c:
+ case alx_mac_l2cb_v1:
+ case alx_mac_l2cb_v20:
+ case alx_mac_l2cb_v21:
+ case alx_mac_l1d_v1:
+ case alx_mac_l1d_v2:
+ retval = alc_init_hw_callbacks(hw);
+ break;
+ default:
+ retval = -EINVAL;
+ break;
+ }
+ return retval;
+}
+
+
+void alx_reinit_locked(struct alx_adapter *adpt)
+{
+ WARN_ON(in_interrupt());
+
+ while (CHK_ADPT_FLAG(1, STATE_RESETTING))
+ msleep(20);
+ SET_ADPT_FLAG(1, STATE_RESETTING);
+
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_RESET_MAC);
+ alx_open_internal(adpt, ALX_OPEN_CTRL_RESET_MAC);
+
+ CLI_ADPT_FLAG(1, STATE_RESETTING);
+}
+
+
+static void alx_task_schedule(struct alx_adapter *adpt)
+{
+ if (!CHK_ADPT_FLAG(1, STATE_DOWN) &&
+ !CHK_ADPT_FLAG(1, STATE_WATCH_DOG)) {
+ SET_ADPT_FLAG(1, STATE_WATCH_DOG);
+ schedule_work(&adpt->alx_task);
+ }
+}
+
+
+static void alx_check_lsc(struct alx_adapter *adpt)
+{
+ SET_ADPT_FLAG(0, TASK_LSC_REQ);
+ adpt->link_jiffies = jiffies + ALX_TRY_LINK_TIMEOUT;
+
+ if (!CHK_ADPT_FLAG(1, STATE_DOWN))
+ alx_task_schedule(adpt);
+}
+
+
+/*
+ * alx_tx_timeout - Respond to a Tx Hang
+ */
+static void alx_tx_timeout(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ /* Do the reset outside of interrupt context */
+ if (!CHK_ADPT_FLAG(1, STATE_DOWN)) {
+ SET_ADPT_FLAG(0, TASK_REINIT_REQ);
+ alx_task_schedule(adpt);
+ }
+}
+
+
+/*
+ * alx_set_multicase_list - Multicast and Promiscuous mode set
+ */
+static void alx_set_multicase_list(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct netdev_hw_addr *ha;
+
+ /* Check for Promiscuous and All Multicast modes */
+ if (netdev->flags & IFF_PROMISC) {
+ SET_HW_FLAG(PROMISC_EN);
+ } else if (netdev->flags & IFF_ALLMULTI) {
+ SET_HW_FLAG(MULTIALL_EN);
+ CLI_HW_FLAG(PROMISC_EN);
+ } else {
+ CLI_HW_FLAG(MULTIALL_EN);
+ CLI_HW_FLAG(PROMISC_EN);
+ }
+ hw->cbs.config_mac_ctrl(hw);
+
+ /* clear the old settings from the multicast hash table */
+ hw->cbs.clear_mc_addr(hw);
+
+	/* compute the hash value of each mc address and put it into the table */
+ netdev_for_each_mc_addr(ha, netdev)
+ hw->cbs.set_mc_addr(hw, ha->addr);
+}
+
+
+/*
+ * alx_set_mac - Change the Ethernet Address of the NIC
+ */
+static int alx_set_mac_address(struct net_device *netdev, void *data)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct sockaddr *addr = data;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (netif_running(netdev))
+ return -EBUSY;
+
+ if (netdev->addr_assign_type & NET_ADDR_RANDOM)
+ netdev->addr_assign_type ^= NET_ADDR_RANDOM;
+
+ memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+ memcpy(hw->mac_addr, addr->sa_data, netdev->addr_len);
+
+ if (hw->cbs.set_mac_addr)
+ hw->cbs.set_mac_addr(hw, hw->mac_addr);
+ return 0;
+}
+
+
+/*
+ * Read / Write Ptr Initialize:
+ */
+static void alx_init_ring_ptrs(struct alx_adapter *adpt)
+{
+ int i, j;
+
+ for (i = 0; i < adpt->num_txques; i++) {
+ struct alx_tx_queue *txque = adpt->tx_queue[i];
+ struct alx_buffer *tpbuf = txque->tpq.tpbuff;
+ txque->tpq.produce_idx = 0;
+ txque->tpq.consume_idx = 0;
+ for (j = 0; j < txque->tpq.count; j++)
+ tpbuf[j].dma = 0;
+ }
+
+ for (i = 0; i < adpt->num_hw_rxques; i++) {
+ struct alx_rx_queue *rxque = adpt->rx_queue[i];
+ struct alx_buffer *rfbuf = rxque->rfq.rfbuff;
+ rxque->rrq.produce_idx = 0;
+ rxque->rrq.consume_idx = 0;
+ rxque->rfq.produce_idx = 0;
+ rxque->rfq.consume_idx = 0;
+ for (j = 0; j < rxque->rfq.count; j++)
+ rfbuf[j].dma = 0;
+ }
+
+	if (!CHK_ADPT_FLAG(0, SRSS_EN))
+		return;
+
+ for (i = 0; i < adpt->num_sw_rxques; i++) {
+ struct alx_rx_queue *rxque = adpt->rx_queue[i];
+ rxque->swq.produce_idx = 0;
+ rxque->swq.consume_idx = 0;
+ }
+}
+
+
+static void alx_config_rss(struct alx_adapter *adpt)
+{
+ static const u8 key[40] = {
+ 0xE2, 0x91, 0xD7, 0x3D, 0x18, 0x05, 0xEC, 0x6C,
+ 0x2A, 0x94, 0xB3, 0x0D, 0xA5, 0x4F, 0x2B, 0xEC,
+ 0xEA, 0x49, 0xAF, 0x7C, 0xE2, 0x14, 0xAD, 0x3D,
+ 0xB8, 0x55, 0xAA, 0xBE, 0x6A, 0x3E, 0x67, 0xEA,
+ 0x14, 0x36, 0x4D, 0x17, 0x3B, 0xED, 0x20, 0x0D};
+
+ struct alx_hw *hw = &adpt->hw;
+ u32 reta = 0;
+ int i, j;
+
+ /* initialize rss hash type and idt table size */
+ hw->rss_hstype = ALX_RSS_HSTYP_ALL_EN;
+ hw->rss_idt_size = 0x100;
+
+	/* Fill out the RSS hash key */
+ memcpy(hw->rss_key, key, sizeof(hw->rss_key));
+
+ /* Fill out redirection table */
+ memset(hw->rss_idt, 0x0, sizeof(hw->rss_idt));
+ for (i = 0, j = 0; i < 256; i++, j++) {
+ if (j == adpt->max_rxques)
+ j = 0;
+ reta |= (j << ((i & 7) * 4));
+ if ((i & 7) == 7) {
+ hw->rss_idt[i>>3] = reta;
+ reta = 0;
+ }
+ }
+
+ if (hw->cbs.config_rss)
+ hw->cbs.config_rss(hw, CHK_ADPT_FLAG(0, SRSS_EN));
+}
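The redirection-table fill in alx_config_rss() packs 256 four-bit queue indices, assigned round-robin over the active queues, eight per 32-bit word; the same loop extracted into a user-space sketch (function name is ours):

```c
#include <stdint.h>

/* Mirror of the redirection-table loop in alx_config_rss(): 256
 * 4-bit queue indices, round-robin over nqueues, packed eight per
 * 32-bit word into a 32-entry table. Function name is ours. */
static void fill_idt(uint32_t idt[32], int nqueues)
{
	uint32_t reta = 0;
	int i, j;

	for (i = 0, j = 0; i < 256; i++, j++) {
		if (j == nqueues)
			j = 0;
		reta |= (uint32_t)j << ((i & 7) * 4);
		if ((i & 7) == 7) {
			idt[i >> 3] = reta;	/* word complete */
			reta = 0;
		}
	}
}
```

With a queue count that divides eight, every table word is identical, since the round-robin period lines up with the word boundary.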
+
+
+/*
+ * alx_receive_skb
+ */
+static void alx_receive_skb(struct alx_adapter *adpt,
+ struct sk_buff *skb,
+ u16 vlan_tag, bool vlan_flag)
+{
+ if (vlan_flag) {
+ u16 vlan;
+ ALX_TAG_TO_VLAN(vlan_tag, vlan);
+ __vlan_hwaccel_put_tag(skb, vlan);
+ }
+ netif_receive_skb(skb);
+}
+
+
+static bool alx_get_rrdesc(struct alx_rx_queue *rxque,
+ union alx_sw_rrdesc *srrd)
+{
+ union alx_rrdesc *hrrd =
+ ALX_RRD(rxque, rxque->rrq.consume_idx);
+
+ srrd->dfmt.dw0 = le32_to_cpu(hrrd->dfmt.dw0);
+ srrd->dfmt.dw1 = le32_to_cpu(hrrd->dfmt.dw1);
+ srrd->dfmt.dw2 = le32_to_cpu(hrrd->dfmt.dw2);
+ srrd->dfmt.dw3 = le32_to_cpu(hrrd->dfmt.dw3);
+
+ if (!srrd->genr.update)
+ return false;
+
+	if (unlikely(srrd->genr.nor != 1)) {
+		/* TODO: support multiple RFDs */
+		printk(KERN_ERR "Multiple RFDs not supported yet!\n");
+ }
+
+ srrd->genr.update = 0;
+ hrrd->dfmt.dw3 = cpu_to_le32(srrd->dfmt.dw3);
+ if (++rxque->rrq.consume_idx == rxque->rrq.count)
+ rxque->rrq.consume_idx = 0;
+
+ return true;
+}
+
+
+static bool alx_set_rfdesc(struct alx_rx_queue *rxque,
+ union alx_sw_rfdesc *srfd)
+{
+ union alx_rfdesc *hrfd =
+ ALX_RFD(rxque, rxque->rfq.produce_idx);
+
+ hrfd->qfmt.qw0 = cpu_to_le64(srfd->qfmt.qw0);
+
+ if (++rxque->rfq.produce_idx == rxque->rfq.count)
+ rxque->rfq.produce_idx = 0;
+
+ return true;
+}
+
+
+static bool alx_set_tpdesc(struct alx_tx_queue *txque,
+ union alx_sw_tpdesc *stpd)
+{
+ union alx_tpdesc *htpd;
+
+ txque->tpq.last_produce_idx = txque->tpq.produce_idx;
+ htpd = ALX_TPD(txque, txque->tpq.produce_idx);
+
+ if (++txque->tpq.produce_idx == txque->tpq.count)
+ txque->tpq.produce_idx = 0;
+
+ htpd->dfmt.dw0 = cpu_to_le32(stpd->dfmt.dw0);
+ htpd->dfmt.dw1 = cpu_to_le32(stpd->dfmt.dw1);
+ htpd->qfmt.qw1 = cpu_to_le64(stpd->qfmt.qw1);
+
+ return true;
+}
+
+
+static void alx_set_tpdesc_lastfrag(struct alx_tx_queue *txque)
+{
+ union alx_tpdesc *htpd;
+#define ALX_TPD_LAST_FRAGMENT 0x80000000
+	htpd = ALX_TPD(txque, txque->tpq.last_produce_idx);
+	htpd->dfmt.dw1 |= cpu_to_le32(ALX_TPD_LAST_FRAGMENT);
+}
+
+
+static int alx_refresh_rx_buffer(struct alx_rx_queue *rxque)
+{
+ struct alx_adapter *adpt = netdev_priv(rxque->netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_buffer *curr_rxbuf;
+ struct alx_buffer *next_rxbuf;
+ union alx_sw_rfdesc srfd;
+ struct sk_buff *skb;
+ void *skb_data = NULL;
+ u16 count = 0;
+ u16 next_produce_idx;
+
+ next_produce_idx = rxque->rfq.produce_idx;
+ if (++next_produce_idx == rxque->rfq.count)
+ next_produce_idx = 0;
+ curr_rxbuf = GET_RF_BUFFER(rxque, rxque->rfq.produce_idx);
+ next_rxbuf = GET_RF_BUFFER(rxque, next_produce_idx);
+
+	/* the ring always keeps one blank rx_buffer */
+ while (next_rxbuf->dma == 0) {
+ skb = dev_alloc_skb(adpt->rxbuf_size);
+ if (unlikely(!skb)) {
+ alx_err(adpt, "alloc rx buffer failed\n");
+ break;
+ }
+
+ /*
+ * Make buffer alignment 2 beyond a 16 byte boundary
+ * this will result in a 16 byte aligned IP header after
+ * the 14 byte MAC header is removed
+ */
+ skb_data = skb->data;
+ /*skb_reserve(skb, NET_IP_ALIGN);*/
+ curr_rxbuf->skb = skb;
+ curr_rxbuf->length = adpt->rxbuf_size;
+ curr_rxbuf->dma = dma_map_single(rxque->dev,
+ skb_data,
+ curr_rxbuf->length,
+ DMA_FROM_DEVICE);
+ srfd.genr.addr = curr_rxbuf->dma;
+ alx_set_rfdesc(rxque, &srfd);
+
+ next_produce_idx = rxque->rfq.produce_idx;
+ if (++next_produce_idx == rxque->rfq.count)
+ next_produce_idx = 0;
+ curr_rxbuf = GET_RF_BUFFER(rxque, rxque->rfq.produce_idx);
+ next_rxbuf = GET_RF_BUFFER(rxque, next_produce_idx);
+ count++;
+ }
+
+ if (count) {
+ wmb();
+ alx_mem_w16(hw, rxque->produce_reg, rxque->rfq.produce_idx);
+ netif_info(adpt, rx_err, adpt->netdev,
+ "RX[%d]: prod_reg[%x] = 0x%x, rfq.prod_idx = 0x%x\n",
+ rxque->que_idx, rxque->produce_reg,
+ rxque->rfq.produce_idx, rxque->rfq.produce_idx);
+ }
+ return count;
+}
+
+
+static void alx_clean_rfdesc(struct alx_rx_queue *rxque,
+ union alx_sw_rrdesc *srrd)
+{
+ struct alx_buffer *rfbuf = rxque->rfq.rfbuff;
+ u32 consume_idx = srrd->genr.si;
+ u32 i;
+
+ for (i = 0; i < srrd->genr.nor; i++) {
+ rfbuf[consume_idx].skb = NULL;
+ if (++consume_idx == rxque->rfq.count)
+ consume_idx = 0;
+ }
+ rxque->rfq.consume_idx = consume_idx;
+}
+
+
+static bool alx_dispatch_rx_irq(struct alx_msix_param *msix,
+ struct alx_rx_queue *rxque)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct pci_dev *pdev = adpt->pdev;
+ struct net_device *netdev = adpt->netdev;
+
+ union alx_sw_rrdesc srrd;
+ struct alx_buffer *rfbuf;
+ struct sk_buff *skb;
+ struct alx_rx_queue *swque;
+ struct alx_sw_buffer *curr_swbuf;
+ struct alx_sw_buffer *next_swbuf;
+
+ u16 next_produce_idx;
+ u16 count = 0;
+
+ while (1) {
+ if (!alx_get_rrdesc(rxque, &srrd))
+ break;
+
+ if (srrd.genr.res || srrd.genr.lene) {
+ alx_clean_rfdesc(rxque, &srrd);
+ netif_warn(adpt, rx_err, adpt->netdev,
+ "wrong packet! rrd->word3 is 0x%08x\n",
+ srrd.dfmt.dw3);
+ continue;
+ }
+
+ /* Good Receive */
+ if (likely(srrd.genr.nor == 1)) {
+ rfbuf = GET_RF_BUFFER(rxque, srrd.genr.si);
+ pci_unmap_single(pdev, rfbuf->dma,
+ rfbuf->length, DMA_FROM_DEVICE);
+ rfbuf->dma = 0;
+ skb = rfbuf->skb;
+ netif_info(adpt, rx_err, adpt->netdev,
+ "skb addr = %p, rxbuf_len = %x\n",
+ skb->data, rfbuf->length);
+ } else {
+ /* TODO */
+			alx_err(adpt, "Multiple RFDs not supported yet!\n");
+ break;
+ }
+ alx_clean_rfdesc(rxque, &srrd);
+
+ skb_put(skb, srrd.genr.pkt_len - ETH_FCS_LEN);
+ skb->protocol = eth_type_trans(skb, netdev);
+ skb_checksum_none_assert(skb);
+
+ /* start to dispatch */
+ swque = adpt->rx_queue[srrd.genr.rss_cpu];
+ next_produce_idx = swque->swq.produce_idx;
+ if (++next_produce_idx == swque->swq.count)
+ next_produce_idx = 0;
+
+ curr_swbuf = GET_SW_BUFFER(swque, swque->swq.produce_idx);
+ next_swbuf = GET_SW_BUFFER(swque, next_produce_idx);
+
+		/*
+		 * if the queue is full, the packet is discarded;
+		 * the ring always keeps at least one blank sw_buffer.
+		 */
+ if (!next_swbuf->skb) {
+ curr_swbuf->skb = skb;
+ curr_swbuf->vlan_tag = srrd.genr.vlan_tag;
+ curr_swbuf->vlan_flag = srrd.genr.vlan_flag;
+ if (++swque->swq.produce_idx == swque->swq.count)
+ swque->swq.produce_idx = 0;
+ }
+
+ count++;
+ if (count == 32)
+ break;
+ }
+ if (count)
+ alx_refresh_rx_buffer(rxque);
+ return true;
+}
+
+
+static bool alx_handle_srx_irq(struct alx_msix_param *msix,
+ struct alx_rx_queue *rxque,
+ int *num_pkts, int max_pkts)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_sw_buffer *swbuf;
+ bool retval = true;
+
+ while (rxque->swq.consume_idx != rxque->swq.produce_idx) {
+ swbuf = GET_SW_BUFFER(rxque, rxque->swq.consume_idx);
+
+ alx_receive_skb(adpt, swbuf->skb, (u16)swbuf->vlan_tag,
+ (bool)swbuf->vlan_flag);
+ swbuf->skb = NULL;
+
+ if (++rxque->swq.consume_idx == rxque->swq.count)
+ rxque->swq.consume_idx = 0;
+
+ (*num_pkts)++;
+ if (*num_pkts >= max_pkts) {
+ retval = false;
+ break;
+ }
+ }
+ return retval;
+}
+
+
+static bool alx_handle_rx_irq(struct alx_msix_param *msix,
+ struct alx_rx_queue *rxque,
+ int *num_pkts, int max_pkts)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ struct pci_dev *pdev = adpt->pdev;
+ struct net_device *netdev = adpt->netdev;
+
+ union alx_sw_rrdesc srrd;
+ struct alx_buffer *rfbuf;
+ struct sk_buff *skb;
+
+ u16 hw_consume_idx, num_consume_pkts;
+ u16 count = 0;
+
+ alx_mem_r16(hw, rxque->consume_reg, &hw_consume_idx);
+ num_consume_pkts = (hw_consume_idx > rxque->rrq.consume_idx) ?
+ (hw_consume_idx - rxque->rrq.consume_idx) :
+ (hw_consume_idx + rxque->rrq.count - rxque->rrq.consume_idx);
+
+ while (1) {
+ if (!num_consume_pkts)
+ break;
+
+ if (!alx_get_rrdesc(rxque, &srrd))
+ break;
+
+ if (srrd.genr.res || srrd.genr.lene) {
+ alx_clean_rfdesc(rxque, &srrd);
+ netif_warn(adpt, rx_err, adpt->netdev,
+ "wrong packet! rrd->word3 is 0x%08x\n",
+ srrd.dfmt.dw3);
+ continue;
+ }
+
+		/* Good Receive */
+ if (likely(srrd.genr.nor == 1)) {
+ rfbuf = GET_RF_BUFFER(rxque, srrd.genr.si);
+ pci_unmap_single(pdev, rfbuf->dma, rfbuf->length,
+ DMA_FROM_DEVICE);
+ rfbuf->dma = 0;
+ skb = rfbuf->skb;
+ } else {
+ /* TODO */
+			alx_err(adpt, "Multiple RFDs not supported yet!\n");
+ break;
+ }
+ alx_clean_rfdesc(rxque, &srrd);
+
+ skb_put(skb, srrd.genr.pkt_len - ETH_FCS_LEN);
+ skb->protocol = eth_type_trans(skb, netdev);
+ skb_checksum_none_assert(skb);
+ alx_receive_skb(adpt, skb, (u16)srrd.genr.vlan_tag,
+ (bool)srrd.genr.vlan_flag);
+
+ num_consume_pkts--;
+ count++;
+ (*num_pkts)++;
+ if (*num_pkts >= max_pkts)
+ break;
+ }
+ if (count)
+ alx_refresh_rx_buffer(rxque);
+
+ return true;
+}
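The pending-packet count at the top of alx_handle_rx_irq() is a circular distance from the software consume index to the hardware one; extracted as a helper (name is ours), with the caveat that equal indices yield `count` rather than zero, matching the driver's expression:

```c
#include <stdint.h>

/* Mirror of the pending-count expression in alx_handle_rx_irq():
 * distance from the driver's consume index sw_idx to the hardware's
 * hw_idx on a ring of `count` descriptors. Note hw_idx == sw_idx
 * yields `count`, exactly as the driver's ternary does. */
static uint16_t ring_pending(uint16_t hw_idx, uint16_t sw_idx,
			     uint16_t count)
{
	return (hw_idx > sw_idx) ? (hw_idx - sw_idx)
				 : (hw_idx + count - sw_idx);
}
```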
+
+
+static bool alx_handle_tx_irq(struct alx_msix_param *msix,
+ struct alx_tx_queue *txque)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_buffer *tpbuf;
+ u16 consume_data;
+
+ alx_mem_r16(hw, txque->consume_reg, &consume_data);
+ netif_info(adpt, tx_err, adpt->netdev,
+ "TX[%d]: consume_reg[0x%x] = 0x%x, tpq.consume_idx = 0x%x\n",
+ txque->que_idx, txque->consume_reg, consume_data,
+ txque->tpq.consume_idx);
+
+
+ while (txque->tpq.consume_idx != consume_data) {
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.consume_idx);
+ if (tpbuf->dma) {
+ pci_unmap_page(adpt->pdev, tpbuf->dma, tpbuf->length,
+ DMA_TO_DEVICE);
+ tpbuf->dma = 0;
+ }
+
+ if (tpbuf->skb) {
+ dev_kfree_skb_irq(tpbuf->skb);
+ tpbuf->skb = NULL;
+ }
+
+ if (++txque->tpq.consume_idx == txque->tpq.count)
+ txque->tpq.consume_idx = 0;
+ }
+
+ if (netif_queue_stopped(adpt->netdev) &&
+ netif_carrier_ok(adpt->netdev)) {
+ netif_wake_queue(adpt->netdev);
+ }
+ return true;
+}
+
+
+static irqreturn_t alx_msix_timer(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ u32 isr;
+
+ hw->cbs.disable_msix_intr(hw, msix->vec_idx);
+
+ alx_mem_r32(hw, ALX_ISR, &isr);
+ isr = isr & (ALX_ISR_TIMER | ALX_ISR_MANU);
+
+
+ if (isr == 0) {
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+ return IRQ_NONE;
+ }
+
+ /* Ack ISR */
+ alx_mem_w32(hw, ALX_ISR, isr);
+
+ if (isr & ALX_ISR_MANU) {
+ adpt->net_stats.tx_carrier_errors++;
+ alx_check_lsc(adpt);
+ }
+
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+static irqreturn_t alx_msix_alert(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ u32 isr;
+
+ hw->cbs.disable_msix_intr(hw, msix->vec_idx);
+
+ alx_mem_r32(hw, ALX_ISR, &isr);
+ isr &= ALX_ISR_ALERT_MASK;
+
+ if (isr == 0) {
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+ return IRQ_NONE;
+ }
+ alx_mem_w32(hw, ALX_ISR, isr);
+
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+static irqreturn_t alx_msix_smb(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+
+ hw->cbs.disable_msix_intr(hw, msix->vec_idx);
+
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+static irqreturn_t alx_msix_phy(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+
+ hw->cbs.disable_msix_intr(hw, msix->vec_idx);
+
+ if (hw->cbs.ack_phy_intr)
+ hw->cbs.ack_phy_intr(hw);
+
+ adpt->net_stats.tx_carrier_errors++;
+ alx_check_lsc(adpt);
+
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+/*
+ * alx_msix_rtx - MSI-X RX/TX vector interrupt handler
+ */
+static irqreturn_t alx_msix_rtx(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+
+ netif_info(adpt, intr, adpt->netdev,
+ "msix vec_idx = %d\n", msix->vec_idx);
+
+ hw->cbs.disable_msix_intr(hw, msix->vec_idx);
+ if (!msix->rx_count && !msix->tx_count) {
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+ return IRQ_HANDLED;
+ }
+
+ napi_schedule(&msix->napi);
+ return IRQ_HANDLED;
+}
+
+
+/*
+ * alx_napi_msix_rtx - NAPI polling callback for MSI-X RX/TX vectors
+ */
+static int alx_napi_msix_rtx(struct napi_struct *napi, int max_pkts)
+{
+ struct alx_msix_param *msix =
+ container_of(napi, struct alx_msix_param, napi);
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_rx_queue *rxque;
+ struct alx_rx_queue *swque;
+ struct alx_tx_queue *txque;
+ unsigned long flags = 0;
+ bool complete = true;
+ int num_pkts = 0;
+ int rque_idx, tque_idx;
+ int i, j;
+
+ netif_info(adpt, intr, adpt->netdev,
+ "NAPI: msix vec_idx = %d\n", msix->vec_idx);
+
+ /* RX */
+ for (i = 0; i < msix->rx_count; i++) {
+ rque_idx = msix->rx_map[i];
+ num_pkts = 0;
+ if (CHK_ADPT_FLAG(0, SRSS_EN)) {
+ if (!spin_trylock_irqsave(&adpt->rx_lock, flags))
+ goto clean_sw_irq;
+
+ for (j = 0; j < adpt->num_hw_rxques; j++)
+ alx_dispatch_rx_irq(msix, adpt->rx_queue[j]);
+
+ spin_unlock_irqrestore(&adpt->rx_lock, flags);
+clean_sw_irq:
+ swque = adpt->rx_queue[rque_idx];
+ complete &= alx_handle_srx_irq(msix, swque, &num_pkts,
+ max_pkts);
+
+ } else {
+ rxque = adpt->rx_queue[rque_idx];
+ complete &= alx_handle_rx_irq(msix, rxque, &num_pkts,
+ max_pkts);
+ }
+ }
+
+ /* Handle TX */
+ for (i = 0; i < msix->tx_count; i++) {
+ tque_idx = msix->tx_map[i];
+ txque = adpt->tx_queue[tque_idx];
+ complete &= alx_handle_tx_irq(msix, txque);
+ }
+
+ if (!complete) {
+ netif_info(adpt, intr, adpt->netdev,
+ "Some packets in the queue are not handled!\n");
+ num_pkts = max_pkts;
+ }
+
+ netif_info(adpt, intr, adpt->netdev,
+ "num_pkts = %d, max_pkts = %d\n", num_pkts, max_pkts);
+ /* If all work done, exit the polling mode */
+ if (num_pkts < max_pkts) {
+ napi_complete(napi);
+ if (!CHK_ADPT_FLAG(1, STATE_DOWN))
+ hw->cbs.enable_msix_intr(hw, msix->vec_idx);
+ }
+
+ return num_pkts;
+}
+
+
+/*
+ * alx_napi_legacy_rtx - NAPI Rx polling callback
+ */
+static int alx_napi_legacy_rtx(struct napi_struct *napi, int max_pkts)
+{
+ struct alx_msix_param *msix =
+ container_of(napi, struct alx_msix_param, napi);
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ bool complete = true;
+ int num_pkts = 0;
+ int que_idx;
+
+ netif_info(adpt, intr, adpt->netdev,
+ "NAPI: msix vec_idx = %d\n", msix->vec_idx);
+
+ /* Keep link state information with original netdev */
+ if (!netif_carrier_ok(adpt->netdev))
+ goto enable_rtx_irq;
+
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++)
+ complete &= alx_handle_tx_irq(msix, adpt->tx_queue[que_idx]);
+
+ for (que_idx = 0; que_idx < adpt->num_hw_rxques; que_idx++) {
+ num_pkts = 0;
+ complete &= alx_handle_rx_irq(msix, adpt->rx_queue[que_idx],
+ &num_pkts, max_pkts);
+ }
+
+ if (!complete)
+ num_pkts = max_pkts;
+
+ if (num_pkts < max_pkts) {
+enable_rtx_irq:
+ napi_complete(napi);
+ hw->intr_mask |= (ALX_ISR_RXQ | ALX_ISR_TXQ);
+ alx_mem_w32(hw, ALX_IMR, hw->intr_mask);
+ }
+ return num_pkts;
+}
+
+
+static void alx_set_msix_flags(struct alx_msix_param *msix,
+ enum alx_msix_type type, int index)
+{
+ if (type == alx_msix_type_rx) {
+ switch (index) {
+ case 0:
+ SET_MSIX_FLAG(RX0);
+ break;
+ case 1:
+ SET_MSIX_FLAG(RX1);
+ break;
+ case 2:
+ SET_MSIX_FLAG(RX2);
+ break;
+ case 3:
+ SET_MSIX_FLAG(RX3);
+ break;
+ case 4:
+ SET_MSIX_FLAG(RX4);
+ break;
+ case 5:
+ SET_MSIX_FLAG(RX5);
+ break;
+ case 6:
+ SET_MSIX_FLAG(RX6);
+ break;
+ case 7:
+ SET_MSIX_FLAG(RX7);
+ break;
+ default:
+ printk(KERN_ERR "alx_set_msix_flags: invalid rx index\n");
+ break;
+ }
+ } else if (type == alx_msix_type_tx) {
+ switch (index) {
+ case 0:
+ SET_MSIX_FLAG(TX0);
+ break;
+ case 1:
+ SET_MSIX_FLAG(TX1);
+ break;
+ case 2:
+ SET_MSIX_FLAG(TX2);
+ break;
+ case 3:
+ SET_MSIX_FLAG(TX3);
+ break;
+ default:
+ printk(KERN_ERR "alx_set_msix_flags: invalid tx index\n");
+ break;
+ }
+ } else if (type == alx_msix_type_other) {
+ switch (index) {
+ case ALX_MSIX_TYPE_OTH_TIMER:
+ SET_MSIX_FLAG(TIMER);
+ break;
+ case ALX_MSIX_TYPE_OTH_ALERT:
+ SET_MSIX_FLAG(ALERT);
+ break;
+ case ALX_MSIX_TYPE_OTH_SMB:
+ SET_MSIX_FLAG(SMB);
+ break;
+ case ALX_MSIX_TYPE_OTH_PHY:
+ SET_MSIX_FLAG(PHY);
+ break;
+ default:
+ printk(KERN_ERR "alx_set_msix_flags: invalid other index\n");
+ break;
+ }
+ }
+}
+
+
+/* alx_setup_msix_maps */
+static int alx_setup_msix_maps(struct alx_adapter *adpt)
+{
+ int msix_idx = 0;
+ int que_idx = 0;
+ int num_rxques = adpt->num_rxques;
+ int num_txques = adpt->num_txques;
+ int num_msix_rxques = adpt->num_msix_rxques;
+ int num_msix_txques = adpt->num_msix_txques;
+ int num_msix_noques = adpt->num_msix_noques;
+
+ if (CHK_ADPT_FLAG(0, FIXED_MSIX))
+ goto fixed_msix_map;
+
+ netif_warn(adpt, ifup, adpt->netdev,
+ "don't support non-fixed msix map\n");
+ return -EINVAL;
+
+fixed_msix_map:
+ /*
+ * For RX queue msix map
+ */
+ msix_idx = 0;
+ for (que_idx = 0; que_idx < num_msix_rxques; que_idx++, msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ if (que_idx < num_rxques) {
+ adpt->rx_queue[que_idx]->msix = msix;
+ msix->rx_map[msix->rx_count] = que_idx;
+ msix->rx_count++;
+ alx_set_msix_flags(msix, alx_msix_type_rx, que_idx);
+ }
+ }
+ if (msix_idx != num_msix_rxques)
+ netif_warn(adpt, ifup, adpt->netdev, "msix_idx is wrong\n");
+
+ /*
+ * For TX queue msix map
+ */
+ for (que_idx = 0; que_idx < num_msix_txques; que_idx++, msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ if (que_idx < num_txques) {
+ adpt->tx_queue[que_idx]->msix = msix;
+ msix->tx_map[msix->tx_count] = que_idx;
+ msix->tx_count++;
+ alx_set_msix_flags(msix, alx_msix_type_tx, que_idx);
+ }
+ }
+ if (msix_idx != (num_msix_rxques + num_msix_txques))
+ netif_warn(adpt, ifup, adpt->netdev, "msix_idx is wrong\n");
+
+ /*
+ * For NON queue msix map
+ */
+ for (que_idx = 0; que_idx < num_msix_noques; que_idx++, msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ alx_set_msix_flags(msix, alx_msix_type_other, que_idx);
+ }
+ return 0;
+}
+
+
+static inline void alx_reset_msix_maps(struct alx_adapter *adpt)
+{
+ int que_idx, msix_idx;
+
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++)
+ adpt->rx_queue[que_idx]->msix = NULL;
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++)
+ adpt->tx_queue[que_idx]->msix = NULL;
+
+ for (msix_idx = 0; msix_idx < adpt->num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ memset(msix->rx_map, 0, sizeof(msix->rx_map));
+ memset(msix->tx_map, 0, sizeof(msix->tx_map));
+ msix->rx_count = 0;
+ msix->tx_count = 0;
+ CLI_MSIX_FLAG(ALL);
+ }
+}
+
+
+/*
+ * alx_enable_intr - Enable default interrupt generation settings
+ */
+static inline void alx_enable_intr(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int i;
+
+ if (!atomic_dec_and_test(&adpt->irq_sem))
+ return;
+
+ if (hw->cbs.enable_legacy_intr)
+ hw->cbs.enable_legacy_intr(hw);
+
+ /* enable all MSIX IRQs */
+ for (i = 0; i < adpt->num_msix_intrs; i++) {
+ if (hw->cbs.disable_msix_intr)
+ hw->cbs.disable_msix_intr(hw, i);
+ if (hw->cbs.enable_msix_intr)
+ hw->cbs.enable_msix_intr(hw, i);
+ }
+}
+
+
+/* alx_disable_intr - Mask off interrupt generation on the NIC */
+static inline void alx_disable_intr(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ atomic_inc(&adpt->irq_sem);
+
+ if (hw->cbs.disable_legacy_intr)
+ hw->cbs.disable_legacy_intr(hw);
+
+ if (CHK_ADPT_FLAG(0, MSIX_EN)) {
+ int i;
+ for (i = 0; i < adpt->num_msix_intrs; i++) {
+ synchronize_irq(adpt->msix_entries[i].vector);
+ hw->cbs.disable_msix_intr(hw, i);
+ }
+ } else {
+ synchronize_irq(adpt->pdev->irq);
+ }
+}
+
+
+/*
+ * alx_interrupt - Interrupt Handler
+ */
+static irqreturn_t alx_interrupt(int irq, void *data)
+{
+ struct net_device *netdev = data;
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_msix_param *msix = adpt->msix[0];
+ int max_intrs = ALX_MAX_HANDLED_INTRS;
+ u32 isr, status;
+
+ do {
+ alx_mem_r32(hw, ALX_ISR, &isr);
+ status = isr & hw->intr_mask;
+
+ if (status == 0) {
+ alx_mem_w32(hw, ALX_ISR, 0);
+ if (max_intrs != ALX_MAX_HANDLED_INTRS)
+ return IRQ_HANDLED;
+ return IRQ_NONE;
+ }
+
+ /* ack ISR to PHY register */
+ if (status & ALX_ISR_PHY)
+ hw->cbs.ack_phy_intr(hw);
+ /* ack ISR to MAC register */
+ alx_mem_w32(hw, ALX_ISR, status | ALX_ISR_DIS);
+
+ if (status & ALX_ISR_ERROR) {
+ netif_warn(adpt, intr, adpt->netdev,
+ "isr error (status = 0x%x)\n",
+ status & ALX_ISR_ERROR);
+ if (status & ALX_ISR_PCIE_FERR) {
+ alx_mem_w16(hw, ALX_DEV_STAT,
+ ALX_DEV_STAT_FERR |
+ ALX_DEV_STAT_NFERR |
+ ALX_DEV_STAT_CERR);
+ }
+ /* reset MAC */
+ SET_ADPT_FLAG(0, TASK_REINIT_REQ);
+ alx_task_schedule(adpt);
+ return IRQ_HANDLED;
+ }
+
+ if (status & (ALX_ISR_RXQ | ALX_ISR_TXQ)) {
+ if (napi_schedule_prep(&(msix->napi))) {
+ hw->intr_mask &= ~(ALX_ISR_RXQ | ALX_ISR_TXQ);
+ alx_mem_w32(hw, ALX_IMR, hw->intr_mask);
+ __napi_schedule(&(msix->napi));
+ }
+ }
+
+ if (status & ALX_ISR_OVER) {
+ netif_warn(adpt, intr, adpt->netdev,
+ "TX/RX overflow (status = 0x%x)\n",
+ status & ALX_ISR_OVER);
+ }
+
+ /* link event */
+ if (status & (ALX_ISR_PHY | ALX_ISR_MANU)) {
+ netdev->stats.tx_carrier_errors++;
+ alx_check_lsc(adpt);
+ break;
+ }
+
+ } while (--max_intrs > 0);
+ /* re-enable interrupt */
+ alx_mem_w32(hw, ALX_ISR, 0);
+ return IRQ_HANDLED;
+}
+
+
+/*
+ * alx_request_msix_irq - initialize MSI-X interrupts
+ */
+static int alx_request_msix_irq(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ irqreturn_t (*handler)(int, void *);
+ int msix_idx;
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int rx_idx = 0, tx_idx = 0;
+ int i;
+ int retval;
+
+ retval = alx_setup_msix_maps(adpt);
+ if (retval)
+ return retval;
+
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+
+ if (CHK_MSIX_FLAG(RXS) && CHK_MSIX_FLAG(TXS)) {
+ handler = alx_msix_rtx;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s%d",
+ netdev->name, "rtx", rx_idx);
+ rx_idx++;
+ tx_idx++;
+ } else if (CHK_MSIX_FLAG(RXS)) {
+ handler = alx_msix_rtx;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s%d",
+ netdev->name, "rx", rx_idx);
+ rx_idx++;
+ } else if (CHK_MSIX_FLAG(TXS)) {
+ handler = alx_msix_rtx;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s%d",
+ netdev->name, "tx", tx_idx);
+ tx_idx++;
+ } else if (CHK_MSIX_FLAG(TIMER)) {
+ handler = alx_msix_timer;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "timer");
+ } else if (CHK_MSIX_FLAG(ALERT)) {
+ handler = alx_msix_alert;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "alert");
+ } else if (CHK_MSIX_FLAG(SMB)) {
+ handler = alx_msix_smb;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "smb");
+ } else if (CHK_MSIX_FLAG(PHY)) {
+ handler = alx_msix_phy;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "phy");
+ } else {
+ netif_info(adpt, ifup, adpt->netdev,
+ "MSIX entry [%d] is blank\n",
+ msix->vec_idx);
+ continue;
+ }
+ netif_info(adpt, ifup, adpt->netdev,
+ "MSIX entry [%d] is %s\n",
+ msix->vec_idx, msix->name);
+ retval = request_irq(adpt->msix_entries[msix_idx].vector,
+ handler, 0, msix->name, msix);
+ if (retval)
+ goto free_msix_irq;
+
+ /* assign the mask for this irq */
+ irq_set_affinity_hint(adpt->msix_entries[msix_idx].vector,
+ msix->affinity_mask);
+ }
+ return retval;
+
+free_msix_irq:
+ for (i = 0; i < msix_idx; i++) {
+ irq_set_affinity_hint(adpt->msix_entries[i].vector, NULL);
+ free_irq(adpt->msix_entries[i].vector, adpt->msix[i]);
+ }
+ CLI_ADPT_FLAG(0, MSIX_EN);
+ pci_disable_msix(adpt->pdev);
+ kfree(adpt->msix_entries);
+ adpt->msix_entries = NULL;
+ return retval;
+}
+
+
+/*
+ * alx_request_irq - initialize interrupts
+ */
+static int alx_request_irq(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ int retval;
+
+ /* request MSIX irq */
+ if (CHK_ADPT_FLAG(0, MSIX_EN)) {
+ retval = alx_request_msix_irq(adpt);
+ if (retval) {
+ alx_err(adpt, "request msix irq failed, error = %d\n",
+ retval);
+ }
+ goto out;
+ }
+
+ /* request MSI irq */
+ if (CHK_ADPT_FLAG(0, MSI_EN)) {
+ retval = request_irq(adpt->pdev->irq, alx_interrupt, 0,
+ netdev->name, netdev);
+ if (retval) {
+ alx_err(adpt, "request msi irq failed, error = %d\n",
+ retval);
+ }
+ goto out;
+ }
+
+ /* request shared irq */
+ retval = request_irq(adpt->pdev->irq, alx_interrupt, IRQF_SHARED,
+ netdev->name, netdev);
+ if (retval) {
+ alx_err(adpt, "request shared irq failed, error = %d\n",
+ retval);
+ }
+out:
+ return retval;
+}
+
+
+static void alx_free_irq(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ int i;
+
+ if (CHK_ADPT_FLAG(0, MSIX_EN)) {
+ for (i = 0; i < adpt->num_msix_intrs; i++) {
+ struct alx_msix_param *msix = adpt->msix[i];
+ netif_info(adpt, ifdown, adpt->netdev,
+ "msix entry = %d\n", i);
+ if (!CHK_MSIX_FLAG(ALL))
+ continue;
+ if (CHK_MSIX_FLAG(RXS) || CHK_MSIX_FLAG(TXS)) {
+ irq_set_affinity_hint(
+ adpt->msix_entries[i].vector, NULL);
+ }
+ free_irq(adpt->msix_entries[i].vector, msix);
+ }
+ alx_reset_msix_maps(adpt);
+ } else {
+ free_irq(adpt->pdev->irq, netdev);
+ }
+}
+
+
+static void alx_vlan_mode(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+
+ if (!CHK_ADPT_FLAG(1, STATE_DOWN))
+ alx_disable_intr(adpt);
+
+ if (features & NETIF_F_HW_VLAN_RX) {
+ /* enable VLAN tag insert/strip */
+ SET_HW_FLAG(VLANSTRIP_EN);
+ } else {
+ /* disable VLAN tag insert/strip */
+ CLI_HW_FLAG(VLANSTRIP_EN);
+ }
+ hw->cbs.config_mac_ctrl(hw);
+
+ if (!CHK_ADPT_FLAG(1, STATE_DOWN))
+ alx_enable_intr(adpt);
+}
+
+
+static void alx_restore_vlan(struct alx_adapter *adpt)
+{
+ alx_vlan_mode(adpt->netdev, adpt->netdev->features);
+}
+
+
+static void alx_napi_enable_all(struct alx_adapter *adpt)
+{
+ struct alx_msix_param *msix;
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int msix_idx;
+
+ if (!CHK_ADPT_FLAG(0, MSIX_EN))
+ num_msix_intrs = 1;
+
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ struct napi_struct *napi;
+ msix = adpt->msix[msix_idx];
+ napi = &msix->napi;
+ napi_enable(napi);
+ }
+}
+
+
+static void alx_napi_disable_all(struct alx_adapter *adpt)
+{
+ struct alx_msix_param *msix;
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int msix_idx;
+
+ if (!CHK_ADPT_FLAG(0, MSIX_EN))
+ num_msix_intrs = 1;
+
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ msix = adpt->msix[msix_idx];
+ napi_disable(&msix->napi);
+ }
+}
+
+
+static void alx_clean_tx_queue(struct alx_tx_queue *txque)
+{
+ struct device *dev = txque->dev;
+ unsigned long size;
+ u16 i;
+
+ /* ring already cleared, nothing to do */
+ if (!txque->tpq.tpbuff)
+ return;
+
+ for (i = 0; i < txque->tpq.count; i++) {
+ struct alx_buffer *tpbuf;
+ tpbuf = GET_TP_BUFFER(txque, i);
+ if (tpbuf->dma) {
+ pci_unmap_single(to_pci_dev(dev),
+ tpbuf->dma,
+ tpbuf->length,
+ PCI_DMA_TODEVICE);
+ tpbuf->dma = 0;
+ }
+ if (tpbuf->skb) {
+ dev_kfree_skb_any(tpbuf->skb);
+ tpbuf->skb = NULL;
+ }
+ }
+
+ size = sizeof(struct alx_buffer) * txque->tpq.count;
+ memset(txque->tpq.tpbuff, 0, size);
+
+ /* Zero out Tx-buffers */
+ memset(txque->tpq.tpdesc, 0, txque->tpq.size);
+
+ txque->tpq.consume_idx = 0;
+ txque->tpq.produce_idx = 0;
+}
+
+
+/*
+ * alx_clean_all_tx_queues
+ */
+static void alx_clean_all_tx_queues(struct alx_adapter *adpt)
+{
+ int i;
+
+ for (i = 0; i < adpt->num_txques; i++)
+ alx_clean_tx_queue(adpt->tx_queue[i]);
+}
+
+
+static void alx_clean_rx_queue(struct alx_rx_queue *rxque)
+{
+ struct device *dev = rxque->dev;
+ unsigned long size;
+ int i;
+
+ if (CHK_RX_FLAG(HW_QUE)) {
+ /* ring already cleared, nothing to do */
+ if (!rxque->rfq.rfbuff)
+ goto clean_sw_queue;
+
+ for (i = 0; i < rxque->rfq.count; i++) {
+ struct alx_buffer *rfbuf;
+ rfbuf = GET_RF_BUFFER(rxque, i);
+
+ if (rfbuf->dma) {
+ pci_unmap_single(to_pci_dev(dev),
+ rfbuf->dma,
+ rfbuf->length,
+ PCI_DMA_FROMDEVICE);
+ rfbuf->dma = 0;
+ }
+ if (rfbuf->skb) {
+ dev_kfree_skb(rfbuf->skb);
+ rfbuf->skb = NULL;
+ }
+ }
+ size = sizeof(struct alx_buffer) * rxque->rfq.count;
+ memset(rxque->rfq.rfbuff, 0, size);
+
+ /* zero out the descriptor ring */
+ memset(rxque->rrq.rrdesc, 0, rxque->rrq.size);
+ rxque->rrq.produce_idx = 0;
+ rxque->rrq.consume_idx = 0;
+
+ memset(rxque->rfq.rfdesc, 0, rxque->rfq.size);
+ rxque->rfq.produce_idx = 0;
+ rxque->rfq.consume_idx = 0;
+ }
+clean_sw_queue:
+ if (CHK_RX_FLAG(SW_QUE)) {
+ /* ring already cleared, nothing to do */
+ if (!rxque->swq.swbuff)
+ return;
+
+ for (i = 0; i < rxque->swq.count; i++) {
+ struct alx_sw_buffer *swbuf;
+ swbuf = GET_SW_BUFFER(rxque, i);
+
+ /* swq doesn't map DMA */
+
+ if (swbuf->skb) {
+ dev_kfree_skb(swbuf->skb);
+ swbuf->skb = NULL;
+ }
+ }
+ size = sizeof(struct alx_sw_buffer) * rxque->swq.count;
+ memset(rxque->swq.swbuff, 0, size);
+
+ /* swq doesn't have any descriptor rings */
+ rxque->swq.produce_idx = 0;
+ rxque->swq.consume_idx = 0;
+ }
+}
+
+
+/*
+ * alx_clean_all_rx_queues
+ */
+static void alx_clean_all_rx_queues(struct alx_adapter *adpt)
+{
+ int i;
+ for (i = 0; i < adpt->num_rxques; i++)
+ alx_clean_rx_queue(adpt->rx_queue[i]);
+}
+
+
+/*
+ * alx_set_num_txques - set the number of Tx queues
+ */
+static inline void alx_set_num_txques(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+
+ if (hw->mac_type == alx_mac_l1f || hw->mac_type == alx_mac_l2f)
+ adpt->num_txques = 4;
+ else
+ adpt->num_txques = 2;
+}
+
+
+/*
+ * alx_set_num_rxques - set the number of Rx queues
+ */
+static inline void alx_set_num_rxques(struct alx_adapter *adpt)
+{
+ if (CHK_ADPT_FLAG(0, SRSS_CAP)) {
+ adpt->num_hw_rxques = 1;
+ adpt->num_sw_rxques = adpt->max_rxques;
+ adpt->num_rxques =
+ max_t(u16, adpt->num_hw_rxques, adpt->num_sw_rxques);
+ }
+}
+
+
+/*
+ * alx_set_num_queues - allocate queues for device, feature dependent
+ */
+static void alx_set_num_queues(struct alx_adapter *adpt)
+{
+ /* Start with default case */
+ adpt->num_txques = 1;
+ adpt->num_rxques = 1;
+ adpt->num_hw_rxques = 1;
+ adpt->num_sw_rxques = 0;
+
+ alx_set_num_rxques(adpt);
+ alx_set_num_txques(adpt);
+}
+
+
+/* alx_alloc_all_rtx_queue - allocate all queues */
+static int alx_alloc_all_rtx_queue(struct alx_adapter *adpt)
+{
+ int que_idx;
+
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++) {
+ struct alx_tx_queue *txque;
+
+ txque = kzalloc(sizeof(struct alx_tx_queue), GFP_KERNEL);
+ if (!txque)
+ goto err_alloc_tx_queue;
+ txque->tpq.count = adpt->num_txdescs;
+ txque->que_idx = que_idx;
+ txque->dev = &adpt->pdev->dev;
+ txque->netdev = adpt->netdev;
+
+ adpt->tx_queue[que_idx] = txque;
+ }
+
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++) {
+ struct alx_rx_queue *rxque;
+
+ rxque = kzalloc(sizeof(struct alx_rx_queue), GFP_KERNEL);
+ if (!rxque)
+ goto err_alloc_rx_queue;
+ rxque->rrq.count = adpt->num_rxdescs;
+ rxque->rfq.count = adpt->num_rxdescs;
+ rxque->swq.count = adpt->num_rxdescs;
+ rxque->que_idx = que_idx;
+ rxque->dev = &adpt->pdev->dev;
+ rxque->netdev = adpt->netdev;
+
+ if (CHK_ADPT_FLAG(0, SRSS_EN)) {
+ if (que_idx < adpt->num_hw_rxques)
+ SET_RX_FLAG(HW_QUE);
+ if (que_idx < adpt->num_sw_rxques)
+ SET_RX_FLAG(SW_QUE);
+ } else {
+ SET_RX_FLAG(HW_QUE);
+ }
+ adpt->rx_queue[que_idx] = rxque;
+ }
+ netif_dbg(adpt, probe, adpt->netdev,
+ "num_tx_descs = %d, num_rx_descs = %d\n",
+ adpt->num_txdescs, adpt->num_rxdescs);
+ return 0;
+
+err_alloc_rx_queue:
+ alx_err(adpt, "failed to allocate rx queues\n");
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++)
+ kfree(adpt->rx_queue[que_idx]);
+err_alloc_tx_queue:
+ alx_err(adpt, "failed to allocate tx queues\n");
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++)
+ kfree(adpt->tx_queue[que_idx]);
+ return -ENOMEM;
+}
+
+
+/* alx_free_all_rtx_queue */
+static void alx_free_all_rtx_queue(struct alx_adapter *adpt)
+{
+ int que_idx;
+
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++) {
+ kfree(adpt->tx_queue[que_idx]);
+ adpt->tx_queue[que_idx] = NULL;
+ }
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++) {
+ kfree(adpt->rx_queue[que_idx]);
+ adpt->rx_queue[que_idx] = NULL;
+ }
+}
+
+
+/* alx_set_interrupt_param - set interrupt parameter */
+static int alx_set_interrupt_param(struct alx_adapter *adpt)
+{
+ struct alx_msix_param *msix;
+ int (*poll)(struct napi_struct *, int);
+ int msix_idx;
+
+ if (CHK_ADPT_FLAG(0, MSIX_EN)) {
+ poll = &alx_napi_msix_rtx;
+ } else {
+ adpt->num_msix_intrs = 1;
+ poll = &alx_napi_legacy_rtx;
+ }
+
+ for (msix_idx = 0; msix_idx < adpt->num_msix_intrs; msix_idx++) {
+ msix = kzalloc(sizeof(struct alx_msix_param),
+ GFP_KERNEL);
+ if (!msix)
+ goto err_alloc_msix;
+ msix->adpt = adpt;
+ msix->vec_idx = msix_idx;
+ /* Allocate the affinity_hint cpumask, configure the mask */
+ if (!alloc_cpumask_var(&msix->affinity_mask, GFP_KERNEL))
+ goto err_alloc_cpumask;
+
+ cpumask_set_cpu((msix_idx % num_online_cpus()),
+ msix->affinity_mask);
+
+ netif_napi_add(adpt->netdev, &msix->napi, poll, 64);
+ adpt->msix[msix_idx] = msix;
+ }
+ return 0;
+
+err_alloc_cpumask:
+ kfree(msix);
+ adpt->msix[msix_idx] = NULL;
+err_alloc_msix:
+ for (msix_idx--; msix_idx >= 0; msix_idx--) {
+ msix = adpt->msix[msix_idx];
+ netif_napi_del(&msix->napi);
+ free_cpumask_var(msix->affinity_mask);
+ kfree(msix);
+ adpt->msix[msix_idx] = NULL;
+ }
+ alx_err(adpt, "can't allocate memory\n");
+ return -ENOMEM;
+}
+
+
+/*
+ * alx_reset_interrupt_param - Free memory allocated for interrupt vectors
+ */
+static void alx_reset_interrupt_param(struct alx_adapter *adpt)
+{
+ int msix_idx;
+
+ for (msix_idx = 0; msix_idx < adpt->num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ netif_napi_del(&msix->napi);
+ free_cpumask_var(msix->affinity_mask);
+ kfree(msix);
+ adpt->msix[msix_idx] = NULL;
+ }
+}
+
+
+/* set msix interrupt mode */
+static int alx_set_msix_interrupt_mode(struct alx_adapter *adpt)
+{
+ int msix_intrs, msix_idx;
+ int retval = 0;
+
+ adpt->msix_entries = kcalloc(adpt->max_msix_intrs,
+ sizeof(struct msix_entry), GFP_KERNEL);
+ if (!adpt->msix_entries) {
+ netif_info(adpt, probe, adpt->netdev,
+ "can't allocate msix entry\n");
+ CLI_ADPT_FLAG(0, MSIX_EN);
+ goto try_msi_mode;
+ }
+
+ for (msix_idx = 0; msix_idx < adpt->max_msix_intrs; msix_idx++)
+ adpt->msix_entries[msix_idx].entry = msix_idx;
+
+ msix_intrs = adpt->max_msix_intrs;
+ while (msix_intrs >= adpt->min_msix_intrs) {
+ retval = pci_enable_msix(adpt->pdev, adpt->msix_entries,
+ msix_intrs);
+ if (!retval) /* Success in acquiring all requested vectors. */
+ break;
+ else if (retval < 0)
+ msix_intrs = 0; /* Nasty failure, quit now */
+ else /* error == number of vectors we should try again with */
+ msix_intrs = retval;
+ }
+ if (msix_intrs < adpt->min_msix_intrs) {
+ netif_info(adpt, probe, adpt->netdev,
+ "can't enable MSI-X interrupts\n");
+ CLI_ADPT_FLAG(0, MSIX_EN);
+ kfree(adpt->msix_entries);
+ adpt->msix_entries = NULL;
+ goto try_msi_mode;
+ }
+
+ netif_info(adpt, probe, adpt->netdev,
+ "enable MSI-X interrupts, num_msix_intrs = %d\n",
+ msix_intrs);
+ SET_ADPT_FLAG(0, MSIX_EN);
+ if (CHK_ADPT_FLAG(0, SRSS_CAP))
+ SET_ADPT_FLAG(0, SRSS_EN);
+
+ adpt->num_msix_intrs = min_t(int, msix_intrs, adpt->max_msix_intrs);
+ return 0;
+
+try_msi_mode:
+ CLI_ADPT_FLAG(0, SRSS_CAP);
+ CLI_ADPT_FLAG(0, SRSS_EN);
+ alx_set_num_queues(adpt);
+ return -EINVAL;
+}
+
+
+/* set msi interrupt mode */
+static int alx_set_msi_interrupt_mode(struct alx_adapter *adpt)
+{
+ int retval;
+
+ retval = pci_enable_msi(adpt->pdev);
+ if (retval) {
+ netif_info(adpt, probe, adpt->netdev,
+ "can't enable MSI interrupt, error = %d\n", retval);
+ return retval;
+ }
+ SET_ADPT_FLAG(0, MSI_EN);
+ return retval;
+}
+
+
+/* set interrupt mode */
+static int alx_set_interrupt_mode(struct alx_adapter *adpt)
+{
+ int retval = 0;
+
+ if (CHK_ADPT_FLAG(0, MSIX_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "try to set MSIX interrupt\n");
+ retval = alx_set_msix_interrupt_mode(adpt);
+ if (!retval)
+ return retval;
+ }
+
+ if (CHK_ADPT_FLAG(0, MSI_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "try to set MSI interrupt\n");
+ retval = alx_set_msi_interrupt_mode(adpt);
+ if (!retval)
+ return retval;
+ }
+
+ netif_info(adpt, probe, adpt->netdev,
+ "can't enable MSIX and MSI, will enable shared interrupt\n");
+ return 0;
+}
+
+
+static void alx_reset_interrupt_mode(struct alx_adapter *adpt)
+{
+ if (CHK_ADPT_FLAG(0, MSIX_EN)) {
+ CLI_ADPT_FLAG(0, MSIX_EN);
+ pci_disable_msix(adpt->pdev);
+ kfree(adpt->msix_entries);
+ adpt->msix_entries = NULL;
+ } else if (CHK_ADPT_FLAG(0, MSI_EN)) {
+ CLI_ADPT_FLAG(0, MSI_EN);
+ pci_disable_msi(adpt->pdev);
+ }
+}
+
+
+static int __devinit alx_init_adapter_special(struct alx_adapter *adpt)
+{
+ switch (adpt->hw.mac_type) {
+ case alx_mac_l1f:
+ case alx_mac_l2f:
+ goto init_alf_adapter;
+ case alx_mac_l1c:
+ case alx_mac_l1d_v1:
+ case alx_mac_l1d_v2:
+ case alx_mac_l2c:
+ case alx_mac_l2cb_v1:
+ case alx_mac_l2cb_v20:
+ case alx_mac_l2cb_v21:
+ goto init_alc_adapter;
+ default:
+ break;
+ }
+ return -EINVAL;
+
+init_alc_adapter:
+ if (CHK_ADPT_FLAG(0, MSIX_CAP))
+ alx_err(adpt, "ALC doesn't support MSIX\n");
+
+ /* msi for tx, rx and none queues */
+ adpt->num_msix_txques = 0;
+ adpt->num_msix_rxques = 0;
+ adpt->num_msix_noques = 0;
+ return 0;
+
+init_alf_adapter:
+ if (CHK_ADPT_FLAG(0, MSIX_CAP)) {
+ /* msix for tx, rx and none queues */
+ adpt->num_msix_txques = 4;
+ adpt->num_msix_rxques = 8;
+ adpt->num_msix_noques = ALF_MAX_MSIX_NOQUE_INTRS;
+
+ /* msix vector range */
+ adpt->max_msix_intrs = ALF_MAX_MSIX_INTRS;
+ adpt->min_msix_intrs = ALF_MIN_MSIX_INTRS;
+ } else {
+ /* msi for tx, rx and none queues */
+ adpt->num_msix_txques = 0;
+ adpt->num_msix_rxques = 0;
+ adpt->num_msix_noques = 0;
+ }
+ return 0;
+}
+
+
+/*
+ * alx_init_adapter
+ */
+static int __devinit alx_init_adapter(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ struct pci_dev *pdev = adpt->pdev;
+ u16 revision;
+ int max_frame;
+
+ /* PCI config space info */
+ hw->pci_venid = pdev->vendor;
+ hw->pci_devid = pdev->device;
+ alx_cfg_r16(hw, PCI_CLASS_REVISION, &revision);
+ hw->pci_revid = revision & 0xFF;
+ hw->pci_sub_venid = pdev->subsystem_vendor;
+ hw->pci_sub_devid = pdev->subsystem_device;
+
+ if (alx_init_hw_callbacks(adpt) != 0) {
+ alx_err(adpt, "set HW function pointers failed\n");
+ return -EINVAL;
+ }
+
+ if (hw->cbs.identify_nic(hw) != 0) {
+ alx_err(adpt, "HW is disabled\n");
+ return -ENODEV;
+ }
+
+ /* Set adapter flags */
+ switch (hw->mac_type) {
+ case alx_mac_l1f:
+ case alx_mac_l2f:
+#ifdef CONFIG_ALX_MSI
+ SET_ADPT_FLAG(0, MSI_CAP);
+#endif
+#ifdef CONFIG_ALX_MSIX
+ SET_ADPT_FLAG(0, MSIX_CAP);
+#endif
+ if (CHK_ADPT_FLAG(0, MSIX_CAP)) {
+ SET_ADPT_FLAG(0, FIXED_MSIX);
+ SET_ADPT_FLAG(0, MRQ_CAP);
+#ifdef CONFIG_ALX_RSS
+ SET_ADPT_FLAG(0, SRSS_CAP);
+#endif
+ }
+ pdev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG;
+ break;
+ case alx_mac_l1c:
+ case alx_mac_l1d_v1:
+ case alx_mac_l1d_v2:
+ case alx_mac_l2c:
+ case alx_mac_l2cb_v1:
+ case alx_mac_l2cb_v20:
+ case alx_mac_l2cb_v21:
+#ifdef CONFIG_ALX_MSI
+ SET_ADPT_FLAG(0, MSI_CAP);
+#endif
+ break;
+ default:
+ break;
+ }
+
+ /* set default for alx_adapter */
+ adpt->max_msix_intrs = 1;
+ adpt->min_msix_intrs = 1;
+ max_frame = adpt->netdev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+ adpt->rxbuf_size = adpt->netdev->mtu > ALX_DEF_RX_BUF_SIZE ?
+ ALIGN(max_frame, 8) : ALX_DEF_RX_BUF_SIZE;
+ adpt->wol = 0;
+ device_set_wakeup_enable(&pdev->dev, false);
+
+ /* set default for alx_hw */
+ hw->link_up = false;
+ hw->link_speed = ALX_LINK_SPEED_UNKNOWN;
+ hw->preamble = 7;
+ hw->intr_mask = ALX_IMR_NORMAL_MASK;
+ hw->smb_timer = 400; /* 400ms */
+ hw->mtu = adpt->netdev->mtu;
+ hw->imt = 100; /* 200us, in units of 2us */
+
+ /* set default for wrr */
+ hw->wrr_prio0 = 4;
+ hw->wrr_prio1 = 4;
+ hw->wrr_prio2 = 4;
+ hw->wrr_prio3 = 4;
+ hw->wrr_mode = alx_wrr_mode_none;
+
+ /* set default flow control settings */
+ hw->req_fc_mode = alx_fc_full;
+ hw->cur_fc_mode = alx_fc_full; /* init for ethtool output */
+ hw->disable_fc_autoneg = false;
+ hw->fc_was_autonegged = false;
+ hw->fc_single_pause = true;
+
+ /* set default for rss info*/
+ hw->rss_hstype = 0;
+ hw->rss_mode = alx_rss_mode_disable;
+ hw->rss_idt_size = 0;
+ hw->rss_base_cpu = 0;
+ memset(hw->rss_idt, 0x0, sizeof(hw->rss_idt));
+ memset(hw->rss_key, 0x0, sizeof(hw->rss_key));
+
+ atomic_set(&adpt->irq_sem, 1);
+ spin_lock_init(&adpt->tx_lock);
+ spin_lock_init(&adpt->rx_lock);
+
+ alx_init_adapter_special(adpt);
+
+ if (hw->cbs.init_phy) {
+ if (hw->cbs.init_phy(hw))
+ return -EINVAL;
+ }
+
+ SET_ADPT_FLAG(1, STATE_DOWN);
+ return 0;
+}
+
+
+static int alx_set_register_info_special(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int num_txques = adpt->num_txques;
+
+ switch (adpt->hw.mac_type) {
+ case alx_mac_l1f:
+ case alx_mac_l2f:
+ goto cache_alf_register;
+ case alx_mac_l1c:
+ case alx_mac_l1d_v1:
+ case alx_mac_l1d_v2:
+ case alx_mac_l2c:
+ case alx_mac_l2cb_v1:
+ case alx_mac_l2cb_v20:
+ case alx_mac_l2cb_v21:
+ goto cache_alc_register;
+ default:
+ break;
+ }
+ return -EINVAL;
+
+cache_alc_register:
+ /* setting for Produce Index and Consume Index */
+ adpt->rx_queue[0]->produce_reg = hw->rx_prod_reg[0];
+ adpt->rx_queue[0]->consume_reg = hw->rx_cons_reg[0];
+
+ switch (num_txques) {
+ case 2:
+ adpt->tx_queue[1]->produce_reg = hw->tx_prod_reg[1];
+ adpt->tx_queue[1]->consume_reg = hw->tx_cons_reg[1];
+ /* fall through */
+ case 1:
+ adpt->tx_queue[0]->produce_reg = hw->tx_prod_reg[0];
+ adpt->tx_queue[0]->consume_reg = hw->tx_cons_reg[0];
+ break;
+ }
+ return 0;
+
+cache_alf_register:
+ /* setting for Produce Index and Consume Index */
+ adpt->rx_queue[0]->produce_reg = hw->rx_prod_reg[0];
+ adpt->rx_queue[0]->consume_reg = hw->rx_cons_reg[0];
+
+ switch (num_txques) {
+ case 4:
+ adpt->tx_queue[3]->produce_reg = hw->tx_prod_reg[3];
+ adpt->tx_queue[3]->consume_reg = hw->tx_cons_reg[3];
+ /* fall through */
+ case 3:
+ adpt->tx_queue[2]->produce_reg = hw->tx_prod_reg[2];
+ adpt->tx_queue[2]->consume_reg = hw->tx_cons_reg[2];
+ /* fall through */
+ case 2:
+ adpt->tx_queue[1]->produce_reg = hw->tx_prod_reg[1];
+ adpt->tx_queue[1]->consume_reg = hw->tx_cons_reg[1];
+ /* fall through */
+ case 1:
+ adpt->tx_queue[0]->produce_reg = hw->tx_prod_reg[0];
+ adpt->tx_queue[0]->consume_reg = hw->tx_cons_reg[0];
+ }
+ return 0;
+}
+
+
+/* alx_alloc_tx_descriptor - allocate Tx Descriptors */
+static int alx_alloc_tx_descriptor(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque)
+{
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+ int size;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "tpq.count = %d\n", txque->tpq.count);
+
+ size = sizeof(struct alx_buffer) * txque->tpq.count;
+ txque->tpq.tpbuff = kzalloc(size, GFP_KERNEL);
+ if (!txque->tpq.tpbuff)
+ goto err_alloc_tpq_buffer;
+
+ /* the descriptor block is carved out of the ring header, 8-byte aligned */
+ txque->tpq.size = txque->tpq.count * sizeof(union alx_tpdesc);
+
+ txque->tpq.tpdma = ring_header->dma + ring_header->used;
+ txque->tpq.tpdesc = ring_header->desc + ring_header->used;
+ adpt->hw.tpdma[txque->que_idx] = (u64)txque->tpq.tpdma;
+ ring_header->used += ALIGN(txque->tpq.size, 8);
+
+ txque->tpq.produce_idx = 0;
+ txque->tpq.consume_idx = 0;
+ txque->max_packets = txque->tpq.count;
+ return 0;
+
+err_alloc_tpq_buffer:
+ alx_err(adpt, "Unable to allocate memory for the Tx descriptor\n");
+ return -ENOMEM;
+}
+
+
+/* alx_alloc_all_tx_descriptor - allocate all Tx Descriptors */
+static int alx_alloc_all_tx_descriptor(struct alx_adapter *adpt)
+{
+ int i, retval = 0;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "num_txques = %d\n", adpt->num_txques);
+
+ for (i = 0; i < adpt->num_txques; i++) {
+ retval = alx_alloc_tx_descriptor(adpt, adpt->tx_queue[i]);
+ if (!retval)
+ continue;
+
+ alx_err(adpt, "Allocation for Tx Queue %u failed\n", i);
+ break;
+ }
+
+ return retval;
+}
+
+
+/* alx_alloc_rx_descriptor - allocate Rx Descriptors */
+static int alx_alloc_rx_descriptor(struct alx_adapter *adpt,
+ struct alx_rx_queue *rxque)
+{
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+ int size;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "RRD.count = %d, RFD.count = %d, SWD.count = %d\n",
+ rxque->rrq.count, rxque->rfq.count, rxque->swq.count);
+
+ if (CHK_RX_FLAG(HW_QUE)) {
+ /* alloc buffer info */
+ size = sizeof(struct alx_buffer) * rxque->rfq.count;
+ rxque->rfq.rfbuff = kzalloc(size, GFP_KERNEL);
+ if (!rxque->rfq.rfbuff)
+ goto err_alloc_rfq_buffer;
+
+ /* set the DMA addresses of rrq and rfq; each ring is carved
+ * out of the shared ring header, 8-byte aligned
+ */
+ rxque->rrq.size =
+ rxque->rrq.count * sizeof(union alx_rrdesc);
+ rxque->rfq.size =
+ rxque->rfq.count * sizeof(union alx_rfdesc);
+
+ rxque->rrq.rrdma = ring_header->dma + ring_header->used;
+ rxque->rrq.rrdesc = ring_header->desc + ring_header->used;
+ adpt->hw.rrdma[rxque->que_idx] = (u64)rxque->rrq.rrdma;
+ ring_header->used += ALIGN(rxque->rrq.size, 8);
+
+ rxque->rfq.rfdma = ring_header->dma + ring_header->used;
+ rxque->rfq.rfdesc = ring_header->desc + ring_header->used;
+ adpt->hw.rfdma[rxque->que_idx] = (u64)rxque->rfq.rfdma;
+ ring_header->used += ALIGN(rxque->rfq.size, 8);
+
+ /* clean all counts within rxque */
+ rxque->rrq.produce_idx = 0;
+ rxque->rrq.consume_idx = 0;
+
+ rxque->rfq.produce_idx = 0;
+ rxque->rfq.consume_idx = 0;
+ }
+
+ if (CHK_RX_FLAG(SW_QUE)) {
+ size = sizeof(struct alx_sw_buffer) * rxque->swq.count;
+ rxque->swq.swbuff = kzalloc(size, GFP_KERNEL);
+ if (!rxque->swq.swbuff)
+ goto err_alloc_swq_buffer;
+
+ rxque->swq.consume_idx = 0;
+ rxque->swq.produce_idx = 0;
+ }
+
+ rxque->max_packets = rxque->rrq.count / 2;
+ return 0;
+
+err_alloc_swq_buffer:
+ kfree(rxque->rfq.rfbuff);
+ rxque->rfq.rfbuff = NULL;
+err_alloc_rfq_buffer:
+ alx_err(adpt, "Unable to allocate memory for the Rx descriptor\n");
+ return -ENOMEM;
+}
+
+
+/* alx_alloc_all_rx_descriptor - allocate all Rx Descriptors */
+static int alx_alloc_all_rx_descriptor(struct alx_adapter *adpt)
+{
+ int i, error = 0;
+
+ for (i = 0; i < adpt->num_rxques; i++) {
+ error = alx_alloc_rx_descriptor(adpt, adpt->rx_queue[i]);
+ if (!error)
+ continue;
+ alx_err(adpt, "Allocation for Rx Queue %u failed\n", i);
+ break;
+ }
+
+ return error;
+}
+
+
+/* alx_free_tx_descriptor - Free Tx Descriptor */
+static void alx_free_tx_descriptor(struct alx_tx_queue *txque)
+{
+ alx_clean_tx_queue(txque);
+
+ kfree(txque->tpq.tpbuff);
+ txque->tpq.tpbuff = NULL;
+
+ /* the descriptor memory belongs to the ring header and is
+ * freed there; just drop our reference
+ */
+ txque->tpq.tpdesc = NULL;
+}
+
+
+/* alx_free_all_tx_descriptor - Free all Tx Descriptor */
+static void alx_free_all_tx_descriptor(struct alx_adapter *adpt)
+{
+ int i;
+
+ for (i = 0; i < adpt->num_txques; i++)
+ alx_free_tx_descriptor(adpt->tx_queue[i]);
+}
+
+
+/* alx_free_rx_descriptor - Free one Rx queue's descriptors */
+static void alx_free_rx_descriptor(struct alx_rx_queue *rxque)
+{
+ alx_clean_rx_queue(rxque);
+
+ if (CHK_RX_FLAG(HW_QUE)) {
+ kfree(rxque->rfq.rfbuff);
+ rxque->rfq.rfbuff = NULL;
+
+ /* the descriptor memory belongs to the ring header and is
+ * freed there; just drop our references
+ */
+ rxque->rrq.rrdesc = NULL;
+ rxque->rfq.rfdesc = NULL;
+ }
+
+ if (CHK_RX_FLAG(SW_QUE)) {
+ kfree(rxque->swq.swbuff);
+ rxque->swq.swbuff = NULL;
+ }
+}
+
+
+/* alx_free_all_rx_descriptor - Free all Rx Descriptor */
+static void alx_free_all_rx_descriptor(struct alx_adapter *adpt)
+{
+ int i;
+
+ for (i = 0; i < adpt->num_rxques; i++)
+ alx_free_rx_descriptor(adpt->rx_queue[i]);
+}
+
+
+/*
+ * alx_alloc_all_rtx_descriptor - allocate Tx / RX descriptor queues
+ */
+static int alx_alloc_all_rtx_descriptor(struct alx_adapter *adpt)
+{
+ struct device *dev = &adpt->pdev->dev;
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+ int num_tques = adpt->num_txques;
+ int num_rques = adpt->num_hw_rxques;
+ unsigned int num_tx_descs = adpt->num_txdescs;
+ unsigned int num_rx_descs = adpt->num_rxdescs;
+ int retval;
+
+ /*
+ * real ring DMA buffer
+ * each ring/block may need up to 8 bytes for alignment, hence the
+ * additional bytes tacked onto the end.
+ */
+ ring_header->size =
+ num_tques * num_tx_descs * sizeof(union alx_tpdesc) +
+ num_rques * num_rx_descs * sizeof(union alx_rfdesc) +
+ num_rques * num_rx_descs * sizeof(union alx_rrdesc) +
+ sizeof(struct coals_msg_block) +
+ sizeof(struct alx_hw_stats) +
+ num_tques * 8 + num_rques * 2 * 8 + 8 * 2;
+ netif_info(adpt, ifup, adpt->netdev,
+ "num_tques = %d, num_tx_descs = %d\n",
+ num_tques, num_tx_descs);
+ netif_info(adpt, ifup, adpt->netdev,
+ "num_rques = %d, num_rx_descs = %d\n",
+ num_rques, num_rx_descs);
+
+ ring_header->used = 0;
+ ring_header->desc = dma_alloc_coherent(dev, ring_header->size,
+ &ring_header->dma, GFP_KERNEL);
+
+ if (!ring_header->desc) {
+ alx_err(adpt, "dma_alloc_coherent failed\n");
+ retval = -ENOMEM;
+ goto err_alloc_dma;
+ }
+ memset(ring_header->desc, 0, ring_header->size);
+ ring_header->used = ALIGN(ring_header->dma, 8) - ring_header->dma;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "ring header: size = %d, used= %d\n",
+ ring_header->size, ring_header->used);
+
+ /* allocate transmit descriptors */
+ retval = alx_alloc_all_tx_descriptor(adpt);
+ if (retval)
+ goto err_alloc_tx;
+
+ /* allocate receive descriptors */
+ retval = alx_alloc_all_rx_descriptor(adpt);
+ if (retval)
+ goto err_alloc_rx;
+
+ /* Init CMB dma address */
+ adpt->cmb.dma = ring_header->dma + ring_header->used;
+ adpt->cmb.cmb = (u8 *) ring_header->desc + ring_header->used;
+ ring_header->used += ALIGN(sizeof(struct coals_msg_block), 8);
+
+ adpt->smb.dma = ring_header->dma + ring_header->used;
+ adpt->smb.smb = (u8 *)ring_header->desc + ring_header->used;
+ ring_header->used += ALIGN(sizeof(struct alx_hw_stats), 8);
+
+ return 0;
+
+err_alloc_rx:
+ alx_free_all_rx_descriptor(adpt);
+err_alloc_tx:
+ alx_free_all_tx_descriptor(adpt);
+err_alloc_dma:
+ return retval;
+}
+
+
+/*
+ * alx_free_all_rtx_descriptor - free Tx / Rx descriptor queues
+ */
+static void alx_free_all_rtx_descriptor(struct alx_adapter *adpt)
+{
+ struct pci_dev *pdev = adpt->pdev;
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+
+ alx_free_all_tx_descriptor(adpt);
+ alx_free_all_rx_descriptor(adpt);
+
+ adpt->cmb.dma = 0;
+ adpt->cmb.cmb = NULL;
+ adpt->smb.dma = 0;
+ adpt->smb.smb = NULL;
+
+ dma_free_coherent(&pdev->dev, ring_header->size, ring_header->desc,
+ ring_header->dma);
+ ring_header->desc = NULL;
+ ring_header->size = ring_header->used = 0;
+}
+
+
+static netdev_features_t alx_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ /*
+ * Since there is no support for separate rx/tx vlan accel
+ * enable/disable make sure tx flag is always in same state as rx.
+ */
+ if (features & NETIF_F_HW_VLAN_RX)
+ features |= NETIF_F_HW_VLAN_TX;
+ else
+ features &= ~NETIF_F_HW_VLAN_TX;
+
+ if (netdev->mtu > ALX_MAX_TSO_PKT_SIZE ||
+ adpt->hw.mac_type == alx_mac_l1c ||
+ adpt->hw.mac_type == alx_mac_l2c)
+ features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
+
+ return features;
+}
+
+
+static int alx_set_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ netdev_features_t changed = netdev->features ^ features;
+
+ if (changed & NETIF_F_HW_VLAN_RX)
+ alx_vlan_mode(netdev, features);
+ return 0;
+}
+
+
+/*
+ * alx_change_mtu - Change the Maximum Transfer Unit
+ */
+static int alx_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ int old_mtu = netdev->mtu;
+ int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+
+ if ((max_frame < ALX_MIN_ETH_FRAME_SIZE) ||
+ (max_frame > ALX_MAX_ETH_FRAME_SIZE)) {
+ alx_err(adpt, "invalid MTU setting\n");
+ return -EINVAL;
+ }
+ /* set MTU */
+ if (old_mtu != new_mtu && netif_running(netdev)) {
+ netif_info(adpt, hw, adpt->netdev,
+ "changing MTU from %d to %d\n",
+ netdev->mtu, new_mtu);
+ netdev->mtu = new_mtu;
+ adpt->hw.mtu = new_mtu;
+ adpt->rxbuf_size = new_mtu > ALX_DEF_RX_BUF_SIZE ?
+ ALIGN(max_frame, 8) : ALX_DEF_RX_BUF_SIZE;
+ netdev_update_features(netdev);
+ alx_reinit_locked(adpt);
+ }
+
+ return 0;
+}
+
+
+static int alx_open_internal(struct alx_adapter *adpt, u32 ctrl)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 0;
+ int i;
+
+ alx_init_ring_ptrs(adpt);
+
+ alx_set_multicase_list(adpt->netdev);
+ alx_restore_vlan(adpt);
+
+ if (hw->cbs.config_mac)
+ retval = hw->cbs.config_mac(hw, adpt->rxbuf_size,
+ adpt->num_hw_rxques, adpt->num_rxdescs,
+ adpt->num_txques, adpt->num_txdescs);
+
+ if (hw->cbs.config_tx)
+ retval = hw->cbs.config_tx(hw);
+
+ if (hw->cbs.config_rx)
+ retval = hw->cbs.config_rx(hw);
+
+ alx_config_rss(adpt);
+
+ for (i = 0; i < adpt->num_hw_rxques; i++)
+ alx_refresh_rx_buffer(adpt->rx_queue[i]);
+
+ /* configure HW registers for MSI-X */
+ if (hw->cbs.config_msix)
+ retval = hw->cbs.config_msix(hw, adpt->num_msix_intrs,
+ CHK_ADPT_FLAG(0, MSIX_EN),
+ CHK_ADPT_FLAG(0, MSI_EN));
+
+ if (ctrl & ALX_OPEN_CTRL_IRQ_EN) {
+ retval = alx_request_irq(adpt);
+ if (retval)
+ goto err_request_irq;
+ }
+
+ /* enable NAPI, INTR and TX */
+ alx_napi_enable_all(adpt);
+
+ alx_enable_intr(adpt);
+
+ netif_tx_start_all_queues(adpt->netdev);
+
+ CLI_ADPT_FLAG(1, STATE_DOWN);
+
+ /* check link status */
+ SET_ADPT_FLAG(0, TASK_LSC_REQ);
+ adpt->link_jiffies = jiffies + ALX_TRY_LINK_TIMEOUT;
+ mod_timer(&adpt->alx_timer, jiffies);
+
+ return retval;
+
+err_request_irq:
+ alx_clean_all_rx_queues(adpt);
+ return retval;
+}
+
+
+static void alx_stop_internal(struct alx_adapter *adpt, u32 ctrl)
+{
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+
+ SET_ADPT_FLAG(1, STATE_DOWN);
+
+ netif_tx_stop_all_queues(netdev);
+ /* call carrier off first to avoid false dev_watchdog timeouts */
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+ alx_disable_intr(adpt);
+
+ alx_napi_disable_all(adpt);
+
+ if (ctrl & ALX_OPEN_CTRL_IRQ_EN)
+ alx_free_irq(adpt);
+
+ CLI_ADPT_FLAG(0, TASK_LSC_REQ);
+ CLI_ADPT_FLAG(0, TASK_REINIT_REQ);
+ del_timer_sync(&adpt->alx_timer);
+
+ if (ctrl & ALX_OPEN_CTRL_RESET_PHY)
+ hw->cbs.reset_phy(hw);
+
+ if (ctrl & ALX_OPEN_CTRL_RESET_MAC)
+ hw->cbs.reset_mac(hw);
+
+ adpt->hw.link_speed = ALX_LINK_SPEED_UNKNOWN;
+
+ alx_clean_all_tx_queues(adpt);
+ alx_clean_all_rx_queues(adpt);
+}
+
+
+/*
+ * alx_open - Called when a network interface is made active
+ */
+static int alx_open(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ int retval;
+
+ /* disallow open during test */
+ if (CHK_ADPT_FLAG(1, STATE_TESTING) ||
+ CHK_ADPT_FLAG(1, STATE_DIAG_RUNNING))
+ return -EBUSY;
+
+ netif_carrier_off(netdev);
+
+ /* allocate rx/tx dma buffer & descriptors */
+ retval = alx_alloc_all_rtx_descriptor(adpt);
+ if (retval) {
+ alx_err(adpt, "error in alx_alloc_all_rtx_descriptor\n");
+ goto err_alloc_rtx;
+ }
+
+ retval = alx_open_internal(adpt, ALX_OPEN_CTRL_IRQ_EN);
+ if (retval)
+ goto err_open_internal;
+
+ return retval;
+
+err_open_internal:
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_IRQ_EN);
+err_alloc_rtx:
+ alx_free_all_rtx_descriptor(adpt);
+ hw->cbs.reset_mac(hw);
+ return retval;
+}
+
+
+/*
+ * alx_stop - Disables a network interface
+ */
+static int alx_stop(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ if (CHK_ADPT_FLAG(1, STATE_RESETTING))
+ netif_warn(adpt, ifdown, adpt->netdev,
+ "flag STATE_RESETTING is already set\n");
+
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_IRQ_EN |
+ ALX_OPEN_CTRL_RESET_MAC);
+ alx_free_all_rtx_descriptor(adpt);
+
+ return 0;
+}
+
+
+static int alx_shutdown_internal(struct pci_dev *pdev, bool *wakeup)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+ u32 wufc = adpt->wol;
+ u16 lpa;
+ u32 speed, adv_speed, misc;
+ bool link_up;
+ int i;
+ int retval = 0;
+
+ hw->cbs.config_aspm(hw, false, false);
+
+ netif_device_detach(netdev);
+ if (netif_running(netdev))
+ alx_stop_internal(adpt, 0);
+
+#ifdef CONFIG_PM_SLEEP
+ retval = pci_save_state(pdev);
+ if (retval)
+ return retval;
+#endif
+ hw->cbs.check_phy_link(hw, &speed, &link_up);
+
+ if (link_up) {
+ if (hw->mac_type == alx_mac_l1f ||
+ hw->mac_type == alx_mac_l2f) {
+ alx_mem_r32(hw, ALX_MISC, &misc);
+ misc |= ALX_MISC_INTNLOSC_OPEN;
+ alx_mem_w32(hw, ALX_MISC, misc);
+ }
+
+ retval = hw->cbs.read_phy_reg(hw, MII_LPA, &lpa);
+ if (retval)
+ return retval;
+
+ adv_speed = ALX_LINK_SPEED_10_HALF;
+ if (lpa & LPA_10FULL)
+ adv_speed = ALX_LINK_SPEED_10_FULL;
+ else if (lpa & LPA_10HALF)
+ adv_speed = ALX_LINK_SPEED_10_HALF;
+ else if (lpa & LPA_100FULL)
+ adv_speed = ALX_LINK_SPEED_100_FULL;
+ else if (lpa & LPA_100HALF)
+ adv_speed = ALX_LINK_SPEED_100_HALF;
+
+ retval = hw->cbs.setup_phy_link(hw, adv_speed, true,
+ !hw->disable_fc_autoneg);
+ if (retval)
+ return retval;
+
+ for (i = 0; i < ALX_MAX_SETUP_LNK_CYCLE; i++) {
+ mdelay(100);
+ retval = hw->cbs.check_phy_link(hw, &speed, &link_up);
+ if (retval)
+ continue;
+ if (link_up)
+ break;
+ }
+ } else {
+ speed = ALX_LINK_SPEED_10_HALF;
+ link_up = false;
+ }
+ hw->link_speed = speed;
+ hw->link_up = link_up;
+
+ retval = hw->cbs.config_wol(hw, wufc);
+ if (retval)
+ return retval;
+
+ /* clear phy interrupt */
+ retval = hw->cbs.ack_phy_intr(hw);
+ if (retval)
+ return retval;
+
+ if (wufc) {
+ /* PCIe wakeup workaround */
+ device_set_wakeup_enable(&pdev->dev, 1);
+ }
+
+ retval = hw->cbs.config_pow_save(hw, adpt->hw.link_speed,
+ (wufc ? true : false), false,
+ (wufc & ALX_WOL_MAGIC ? true : false), true);
+ if (retval)
+ return retval;
+
+ *wakeup = wufc ? true : false;
+ pci_disable_device(pdev);
+ return 0;
+}
+
+
+static void alx_shutdown(struct pci_dev *pdev)
+{
+ bool wakeup;
+ alx_shutdown_internal(pdev, &wakeup);
+
+ pci_wake_from_d3(pdev, wakeup);
+ pci_set_power_state(pdev, PCI_D3hot);
+}
+
+
+#ifdef CONFIG_PM_SLEEP
+static int alx_suspend(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ int retval;
+ bool wakeup;
+
+ retval = alx_shutdown_internal(pdev, &wakeup);
+ if (retval)
+ return retval;
+
+ if (wakeup) {
+ pci_prepare_to_sleep(pdev);
+ } else {
+ pci_wake_from_d3(pdev, false);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
+
+ return 0;
+}
+
+
+static int alx_resume(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+ u32 retval;
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ /*
+ * pci_restore_state clears dev->state_saved so call
+ * pci_save_state to restore it.
+ */
+ pci_save_state(pdev);
+
+ pci_enable_wake(pdev, PCI_D3hot, 0);
+ pci_enable_wake(pdev, PCI_D3cold, 0);
+
+ retval = hw->cbs.reset_pcie(hw, true, true);
+ retval = hw->cbs.reset_phy(hw);
+ retval = hw->cbs.reset_mac(hw);
+ retval = hw->cbs.setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+
+ retval = hw->cbs.config_wol(hw, 0);
+
+ if (netif_running(netdev)) {
+ retval = alx_open_internal(adpt, 0);
+ if (retval)
+ return retval;
+ }
+
+ netif_device_attach(netdev);
+ return 0;
+}
+#endif
+
+
+/*
+ * alx_update_hw_stats - Update the board statistics counters.
+ */
+static void alx_update_hw_stats(struct alx_adapter *adpt)
+{
+ struct net_device_stats *net_stats;
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_hw_stats *hwstats = &adpt->hw_stats;
+ unsigned long *hwstat_item = NULL;
+ u32 hwstat_reg;
+ u32 hwstat_data;
+
+ if (CHK_ADPT_FLAG(1, STATE_DOWN) || CHK_ADPT_FLAG(1, STATE_RESETTING))
+ return;
+
+ /* update RX status */
+ hwstat_reg = hw->rxstat_reg;
+ hwstat_item = &hwstats->rx_ok;
+ while (hwstat_reg < hw->rxstat_reg + hw->rxstat_sz) {
+ alx_mem_r32(hw, hwstat_reg, &hwstat_data);
+ *hwstat_item += hwstat_data;
+ hwstat_reg += 4;
+ hwstat_item++;
+ }
+
+ /* update TX status */
+ hwstat_reg = hw->txstat_reg;
+ hwstat_item = &hwstats->tx_ok;
+ while (hwstat_reg < hw->txstat_reg + hw->txstat_sz) {
+ alx_mem_r32(hw, hwstat_reg, &hwstat_data);
+ *hwstat_item += hwstat_data;
+ hwstat_reg += 4;
+ hwstat_item++;
+ }
+
+ net_stats = &adpt->netdev->stats;
+ net_stats->rx_packets = hwstats->rx_ok;
+ net_stats->tx_packets = hwstats->tx_ok;
+ net_stats->rx_bytes = hwstats->rx_byte_cnt;
+ net_stats->tx_bytes = hwstats->tx_byte_cnt;
+ net_stats->multicast = hwstats->rx_mcast;
+ net_stats->collisions = hwstats->tx_single_col +
+ hwstats->tx_multi_col * 2 +
+ hwstats->tx_late_col + hwstats->tx_abort_col;
+
+ net_stats->rx_errors = hwstats->rx_frag + hwstats->rx_fcs_err +
+ hwstats->rx_len_err + hwstats->rx_ov_sz +
+ hwstats->rx_ov_rrd + hwstats->rx_align_err;
+
+ net_stats->rx_fifo_errors = hwstats->rx_ov_rxf;
+ net_stats->rx_length_errors = hwstats->rx_len_err;
+ net_stats->rx_crc_errors = hwstats->rx_fcs_err;
+ net_stats->rx_frame_errors = hwstats->rx_align_err;
+ net_stats->rx_over_errors = hwstats->rx_ov_rrd + hwstats->rx_ov_rxf;
+
+ net_stats->rx_missed_errors = hwstats->rx_ov_rrd + hwstats->rx_ov_rxf;
+
+ net_stats->tx_errors = hwstats->tx_late_col + hwstats->tx_abort_col +
+ hwstats->tx_underrun + hwstats->tx_trunc;
+ net_stats->tx_fifo_errors = hwstats->tx_underrun;
+ net_stats->tx_aborted_errors = hwstats->tx_abort_col;
+ net_stats->tx_window_errors = hwstats->tx_late_col;
+}
+
+
+/*
+ * alx_get_stats - Get System Network Statistics
+ *
+ * Returns the address of the device statistics structure.
+ * The statistics are actually updated from the timer callback.
+ */
+static struct net_device_stats *alx_get_stats(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ alx_update_hw_stats(adpt);
+ return &netdev->stats;
+}
+
+
+static void alx_link_task_routine(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+ char *link_desc;
+
+ if (!CHK_ADPT_FLAG(0, TASK_LSC_REQ))
+ return;
+ CLI_ADPT_FLAG(0, TASK_LSC_REQ);
+
+ if (CHK_ADPT_FLAG(1, STATE_DOWN))
+ return;
+
+ if (hw->cbs.check_phy_link) {
+ hw->cbs.check_phy_link(hw,
+ &hw->link_speed, &hw->link_up);
+ } else {
+ /* if there is no link-check callback, assume the link is up */
+ hw->link_speed = ALX_LINK_SPEED_1GB_FULL;
+ hw->link_up = true;
+ }
+ netif_info(adpt, timer, adpt->netdev,
+ "link_speed = %d, link_up = %d\n",
+ hw->link_speed, hw->link_up);
+
+ if (!hw->link_up && time_after(adpt->link_jiffies, jiffies))
+ SET_ADPT_FLAG(0, TASK_LSC_REQ);
+
+ if (hw->link_up) {
+ if (netif_carrier_ok(netdev))
+ return;
+
+ link_desc = (hw->link_speed == ALX_LINK_SPEED_1GB_FULL) ?
+ "1 Gbps Duplex Full" :
+ (hw->link_speed == ALX_LINK_SPEED_100_FULL ?
+ "100 Mbps Duplex Full" :
+ (hw->link_speed == ALX_LINK_SPEED_100_HALF ?
+ "100 Mbps Duplex Half" :
+ (hw->link_speed == ALX_LINK_SPEED_10_FULL ?
+ "10 Mbps Duplex Full" :
+ (hw->link_speed == ALX_LINK_SPEED_10_HALF ?
+ "10 Mbps Duplex Half" :
+ "unknown speed"))));
+ netif_info(adpt, timer, adpt->netdev,
+ "NIC Link is Up %s\n", link_desc);
+
+ hw->cbs.config_aspm(hw, true, true);
+ hw->cbs.start_mac(hw);
+ netif_carrier_on(netdev);
+ netif_tx_wake_all_queues(netdev);
+ } else {
+ /* only continue if link was up previously */
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ hw->link_speed = 0;
+ netif_info(adpt, timer, adpt->netdev, "NIC Link is Down\n");
+ netif_carrier_off(netdev);
+ netif_tx_stop_all_queues(netdev);
+
+ hw->cbs.stop_mac(hw);
+ hw->cbs.config_aspm(hw, false, true);
+ hw->cbs.setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+ }
+}
+
+
+static void alx_reinit_task_routine(struct alx_adapter *adpt)
+{
+ if (!CHK_ADPT_FLAG(0, TASK_REINIT_REQ))
+ return;
+ CLI_ADPT_FLAG(0, TASK_REINIT_REQ);
+
+ if (CHK_ADPT_FLAG(1, STATE_DOWN) || CHK_ADPT_FLAG(1, STATE_RESETTING))
+ return;
+
+ alx_reinit_locked(adpt);
+}
+
+
+/*
+ * alx_timer_routine - Timer Call-back
+ */
+static void alx_timer_routine(unsigned long data)
+{
+ struct alx_adapter *adpt = (struct alx_adapter *)data;
+ unsigned long delay;
+
+ /* poll faster when waiting for link */
+ if (CHK_ADPT_FLAG(0, TASK_LSC_REQ))
+ delay = HZ / 10;
+ else
+ delay = HZ * 2;
+
+ /* Reset the timer */
+ mod_timer(&adpt->alx_timer, delay + jiffies);
+
+ alx_task_schedule(adpt);
+}
+
+
+/*
+ * alx_task_routine - manages and runs subtasks
+ */
+static void alx_task_routine(struct work_struct *work)
+{
+ struct alx_adapter *adpt = container_of(work,
+ struct alx_adapter, alx_task);
+ /* test state of adapter */
+ if (!CHK_ADPT_FLAG(1, STATE_WATCH_DOG))
+ netif_warn(adpt, timer, adpt->netdev,
+ "flag STATE_WATCH_DOG is not set\n");
+
+ /* reinit task */
+ alx_reinit_task_routine(adpt);
+
+ /* link task */
+ alx_link_task_routine(adpt);
+
+ /* flush memory to make sure state is correct before next watchdog */
+ smp_mb__before_clear_bit();
+
+ CLI_ADPT_FLAG(1, STATE_WATCH_DOG);
+}
+
+
+/* Check whether enough transmit packet descriptors are available */
+static bool alx_check_num_tpdescs(struct alx_tx_queue *txque,
+ const struct sk_buff *skb)
+{
+ u16 num_required = 1;
+ u16 num_available = 0;
+ u16 produce_idx = txque->tpq.produce_idx;
+ u16 consume_idx = txque->tpq.consume_idx;
+ int i = 0;
+
+ u16 proto_hdr_len = 0;
+ if (skb_is_gso(skb)) {
+ proto_hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ if (proto_hdr_len < skb_headlen(skb))
+ num_required++;
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+ num_required++;
+ }
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
+ num_required++;
+ num_available = (consume_idx > produce_idx) ?
+ (consume_idx - produce_idx - 1) :
+ (txque->tpq.count + consume_idx - produce_idx - 1);
+
+ return num_required < num_available;
+}
+
+
+static int alx_tso_csum(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct sk_buff *skb,
+ union alx_sw_tpdesc *stpd)
+{
+ struct pci_dev *pdev = adpt->pdev;
+ u8 hdr_len;
+ int retval;
+
+ if (skb_is_gso(skb)) {
+ if (skb_header_cloned(skb)) {
+ retval = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
+ if (unlikely(retval))
+ return retval;
+ }
+
+ if (skb->protocol == htons(ETH_P_IP)) {
+ u32 pkt_len =
+ ((unsigned char *)ip_hdr(skb) - skb->data) +
+ ntohs(ip_hdr(skb)->tot_len);
+ if (skb->len > pkt_len)
+ pskb_trim(skb, pkt_len);
+ }
+
+ hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ if (unlikely(skb->len == hdr_len)) {
+ /* no payload, only checksum offload is needed */
+ dev_warn(&pdev->dev,
+ "TSO not needed for a packet with no payload\n");
+ goto do_csum;
+ }
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) {
+ ip_hdr(skb)->check = 0;
+ tcp_hdr(skb)->check = ~csum_tcpudp_magic(
+ ip_hdr(skb)->saddr,
+ ip_hdr(skb)->daddr,
+ 0, IPPROTO_TCP, 0);
+ stpd->genr.ipv4 = 1;
+ }
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
+ /* ipv6 tso need an extra tpd */
+ union alx_sw_tpdesc extra_tpd;
+
+ memset(stpd, 0, sizeof(union alx_sw_tpdesc));
+ memset(&extra_tpd, 0, sizeof(union alx_sw_tpdesc));
+
+ ipv6_hdr(skb)->payload_len = 0;
+ tcp_hdr(skb)->check = ~csum_ipv6_magic(
+ &ipv6_hdr(skb)->saddr,
+ &ipv6_hdr(skb)->daddr,
+ 0, IPPROTO_TCP, 0);
+ extra_tpd.tso.addr_lo = skb->len;
+ extra_tpd.tso.lso = 0x1;
+ extra_tpd.tso.lso_v2 = 0x1;
+ alx_set_tpdesc(txque, &extra_tpd);
+ stpd->tso.lso_v2 = 0x1;
+ }
+
+ stpd->tso.lso = 0x1;
+ stpd->tso.tcphdr_offset = skb_transport_offset(skb);
+ stpd->tso.mss = skb_shinfo(skb)->gso_size;
+ return 0;
+ }
+
+do_csum:
+ if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ u8 css, cso;
+ cso = skb_checksum_start_offset(skb);
+
+ if (unlikely(cso & 0x1)) {
+ dev_err(&pdev->dev,
+ "payload offset should not be an odd number\n");
+ return -1;
+ } else {
+ css = cso + skb->csum_offset;
+
+ stpd->csum.payld_offset = cso >> 1;
+ stpd->csum.cxsum_offset = css >> 1;
+ stpd->csum.c_sum = 0x1;
+ }
+ }
+ return 0;
+}
+
+
+static void alx_tx_map(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct sk_buff *skb,
+ union alx_sw_tpdesc *stpd)
+{
+ struct alx_buffer *tpbuf = NULL;
+
+ unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+
+ unsigned int len = skb_headlen(skb);
+
+ u16 map_len = 0;
+ u16 mapped_len = 0;
+ u16 hdr_len = 0;
+ u16 f;
+ u32 tso = stpd->tso.lso;
+
+ if (tso) {
+ /* TSO */
+ map_len = hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.produce_idx);
+ tpbuf->length = map_len;
+ tpbuf->dma = dma_map_single(txque->dev,
+ skb->data, hdr_len, DMA_TO_DEVICE);
+ mapped_len += map_len;
+ stpd->genr.addr = tpbuf->dma;
+ stpd->genr.buffer_len = tpbuf->length;
+
+ alx_set_tpdesc(txque, stpd);
+ }
+
+ if (mapped_len < len) {
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.produce_idx);
+ tpbuf->length = len - mapped_len;
+ tpbuf->dma =
+ dma_map_single(txque->dev, skb->data + mapped_len,
+ tpbuf->length, DMA_TO_DEVICE);
+ stpd->genr.addr = tpbuf->dma;
+ stpd->genr.buffer_len = tpbuf->length;
+ alx_set_tpdesc(txque, stpd);
+ }
+
+ for (f = 0; f < nr_frags; f++) {
+ struct skb_frag_struct *frag;
+
+ frag = &skb_shinfo(skb)->frags[f];
+
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.produce_idx);
+ tpbuf->length = skb_frag_size(frag);
+ tpbuf->dma = skb_frag_dma_map(txque->dev, frag, 0,
+ tpbuf->length, DMA_TO_DEVICE);
+ stpd->genr.addr = tpbuf->dma;
+ stpd->genr.buffer_len = tpbuf->length;
+ alx_set_tpdesc(txque, stpd);
+ }
+
+
+ /* The last tpd */
+ alx_set_tpdesc_lastfrag(txque);
+ /*
+ * the last buffer info holds the skb pointer, so the skb is
+ * freed after its buffer is unmapped
+ */
+ tpbuf->skb = skb;
+}
+
+
+static netdev_tx_t alx_start_xmit_frame(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct sk_buff *skb)
+{
+ struct alx_hw *hw = &adpt->hw;
+ unsigned long flags = 0;
+ union alx_sw_tpdesc stpd; /* normal*/
+
+ if (CHK_ADPT_FLAG(1, STATE_DOWN) ||
+ CHK_ADPT_FLAG(1, STATE_DIAG_RUNNING)) {
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+
+ if (!spin_trylock_irqsave(&adpt->tx_lock, flags)) {
+ alx_err(adpt, "tx locked!\n");
+ return NETDEV_TX_LOCKED;
+ }
+
+ if (!alx_check_num_tpdescs(txque, skb)) {
+ /* not enough descriptors, just stop the queue */
+ netif_stop_queue(adpt->netdev);
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ return NETDEV_TX_BUSY;
+ }
+
+ memset(&stpd, 0, sizeof(union alx_sw_tpdesc));
+ /* do TSO and check sum */
+ if (alx_tso_csum(adpt, txque, skb, &stpd) != 0) {
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+
+ if (unlikely(vlan_tx_tag_present(skb))) {
+ u16 vlan = vlan_tx_tag_get(skb);
+ u16 tag;
+ ALX_VLAN_TO_TAG(vlan, tag);
+ stpd.genr.vlan_tag = tag;
+ stpd.genr.instag = 0x1;
+ }
+
+ if (skb_network_offset(skb) != ETH_HLEN)
+ stpd.genr.type = 0x1; /* Ethernet frame */
+
+ alx_tx_map(adpt, txque, skb, &stpd);
+
+
+ /* update produce idx */
+ wmb();
+ alx_mem_w16(hw, txque->produce_reg, txque->tpq.produce_idx);
+ netif_info(adpt, tx_err, adpt->netdev,
+ "TX[%d]: tpq.consume_idx = 0x%x, tpq.produce_idx = 0x%x\n",
+ txque->que_idx, txque->tpq.consume_idx,
+ txque->tpq.produce_idx);
+ netif_info(adpt, tx_err, adpt->netdev,
+ "TX[%d]: Produce Reg[%x] = 0x%x\n",
+ txque->que_idx, txque->produce_reg, txque->tpq.produce_idx);
+
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ return NETDEV_TX_OK;
+}
+
+
+static netdev_tx_t alx_start_xmit(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_tx_queue *txque;
+
+ txque = adpt->tx_queue[0];
+ return alx_start_xmit_frame(adpt, txque, skb);
+}
+
+
+/*
+ * alx_mii_ioctl
+ */
+static int alx_mii_ioctl(struct net_device *netdev,
+ struct ifreq *ifr, int cmd)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct mii_ioctl_data *data = if_mii(ifr);
+ int retval = 0;
+
+ if (!netif_running(netdev))
+ return -EINVAL;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ data->phy_id = 0;
+ break;
+
+ case SIOCGMIIREG:
+ if (data->reg_num & ~(0x1F)) {
+ retval = -EFAULT;
+ goto out;
+ }
+
+ retval = hw->cbs.read_phy_reg(hw, data->reg_num,
+ &data->val_out);
+ netif_dbg(adpt, hw, adpt->netdev, "read phy %02x %04x\n",
+ data->reg_num, data->val_out);
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ break;
+
+ case SIOCSMIIREG:
+ if (data->reg_num & ~(0x1F)) {
+ retval = -EFAULT;
+ goto out;
+ }
+
+ retval = hw->cbs.write_phy_reg(hw, data->reg_num, data->val_in);
+ netif_dbg(adpt, hw, adpt->netdev, "write phy %02x %04x\n",
+ data->reg_num, data->val_in);
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ break;
+ default:
+ retval = -EOPNOTSUPP;
+ break;
+ }
+out:
+ return retval;
+
+}
+
+
+static int alx_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ case SIOCGMIIREG:
+ case SIOCSMIIREG:
+ return alx_mii_ioctl(netdev, ifr, cmd);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void alx_poll_controller(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int msix_idx;
+
+ /* if interface is down do nothing */
+ if (CHK_ADPT_FLAG(1, STATE_DOWN))
+ return;
+
+ if (CHK_ADPT_FLAG(0, MSIX_EN)) {
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ if (CHK_MSIX_FLAG(RXS) || CHK_MSIX_FLAG(TXS))
+ alx_msix_rtx(0, msix);
+ else if (CHK_MSIX_FLAG(TIMER))
+ alx_msix_timer(0, msix);
+ else if (CHK_MSIX_FLAG(ALERT))
+ alx_msix_alert(0, msix);
+ else if (CHK_MSIX_FLAG(SMB))
+ alx_msix_smb(0, msix);
+ else if (CHK_MSIX_FLAG(PHY))
+ alx_msix_phy(0, msix);
+ }
+ } else {
+ alx_interrupt(adpt->pdev->irq, netdev);
+ }
+}
+#endif
+
+
+static const struct net_device_ops alx_netdev_ops = {
+ .ndo_open = alx_open,
+ .ndo_stop = alx_stop,
+ .ndo_start_xmit = alx_start_xmit,
+ .ndo_get_stats = alx_get_stats,
+ .ndo_set_rx_mode = alx_set_multicase_list,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = alx_set_mac_address,
+ .ndo_change_mtu = alx_change_mtu,
+ .ndo_do_ioctl = alx_ioctl,
+ .ndo_tx_timeout = alx_tx_timeout,
+ .ndo_fix_features = alx_fix_features,
+ .ndo_set_features = alx_set_features,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = alx_poll_controller,
+#endif
+};
+
+
+/*
+ * alx_init - Device Initialization Routine
+ */
+static int __devinit alx_init(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct net_device *netdev;
+ struct alx_adapter *adpt = NULL;
+ struct alx_hw *hw = NULL;
+ static int cards_found;
+ int retval;
+
+ /* enable device (incl. PCI PM wakeup and hotplug setup) */
+ retval = pci_enable_device_mem(pdev);
+ if (retval) {
+ dev_err(&pdev->dev, "cannot enable PCI device\n");
+ goto err_alloc_device;
+ }
+
+ /*
+ * The alx chip can DMA to 64-bit addresses, but it uses a single
+ * shared register for the high 32 bits, so only a single, aligned,
+ * 4 GB physical address range can be used at a time.
+ */
+ if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
+ !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
+ dev_info(&pdev->dev, "DMA to 64-bit addresses\n");
+ } else {
+ retval = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+ if (retval) {
+ retval = dma_set_coherent_mask(&pdev->dev,
+ DMA_BIT_MASK(32));
+ if (retval) {
+ dev_err(&pdev->dev,
+ "No usable DMA config, aborting\n");
+ goto err_alloc_pci_res_mem;
+ }
+ }
+ }
+
+ retval = pci_request_selected_regions(pdev, pci_select_bars(pdev,
+ IORESOURCE_MEM), alx_drv_name);
+ if (retval) {
+ dev_err(&pdev->dev,
+ "pci_request_selected_regions failed 0x%x\n", retval);
+ goto err_alloc_pci_res_mem;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+ pci_set_master(pdev);
+
+ netdev = alloc_etherdev(sizeof(struct alx_adapter));
+ if (netdev == NULL) {
+ dev_err(&pdev->dev, "etherdev alloc failed\n");
+ retval = -ENOMEM;
+ goto err_alloc_netdev;
+ }
+
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+ netdev->irq = pdev->irq;
+
+ adpt = netdev_priv(netdev);
+ pci_set_drvdata(pdev, adpt);
+ adpt->netdev = netdev;
+ adpt->pdev = pdev;
+ hw = &adpt->hw;
+ hw->adpt = adpt;
+ adpt->msg_enable = ALX_MSG_DEFAULT;
+
+ adpt->hw.hw_addr = ioremap(pci_resource_start(pdev, BAR_0),
+ pci_resource_len(pdev, BAR_0));
+ if (!adpt->hw.hw_addr) {
+ alx_err(adpt, "cannot map device registers\n");
+ retval = -EIO;
+ goto err_iomap;
+ }
+ netdev->base_addr = (unsigned long)adpt->hw.hw_addr;
+
+ /* set callback members of the netdev structure */
+ netdev->netdev_ops = &alx_netdev_ops;
+ alx_set_ethtool_ops(netdev);
+ netdev->watchdog_timeo = ALX_WATCHDOG_TIME;
+ strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+
+ adpt->bd_number = cards_found;
+
+ /* init alx_adapter structure */
+ retval = alx_init_adapter(adpt);
+ if (retval) {
+ alx_err(adpt, "net device private data init failed\n");
+ goto err_init_adapter;
+ }
+
+ /* reset pcie */
+ retval = hw->cbs.reset_pcie(hw, true, true);
+ if (retval) {
+ alx_err(adpt, "PCIE Reset failed, error = %d\n", retval);
+ retval = -EIO;
+ goto err_init_adapter;
+ }
+
+ /* Init GPHY as early as possible due to power saving issue */
+ retval = hw->cbs.reset_phy(hw);
+ if (retval) {
+ alx_err(adpt, "PHY Reset failed, error = %d\n", retval);
+ retval = -EIO;
+ goto err_init_adapter;
+ }
+
+ /* reset mac */
+ retval = hw->cbs.reset_mac(hw);
+ if (retval) {
+ alx_err(adpt, "MAC Reset failed, error = %d\n", retval);
+ retval = -EIO;
+ goto err_init_adapter;
+ }
+
+ /* setup link to put it in a known good starting state */
+ retval = hw->cbs.setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+
+ /* get user settings */
+ adpt->num_txdescs = 1024;
+ adpt->num_rxdescs = 512;
+ adpt->max_rxques = min_t(int, ALX_MAX_RX_QUEUES, num_online_cpus());
+ adpt->max_txques = min_t(int, ALX_MAX_TX_QUEUES, num_online_cpus());
+
+ netdev->hw_features = NETIF_F_SG |
+ NETIF_F_HW_CSUM |
+ NETIF_F_HW_VLAN_RX;
+ if (adpt->hw.mac_type != alx_mac_l1c &&
+ adpt->hw.mac_type != alx_mac_l2c) {
+ netdev->hw_features = netdev->hw_features |
+ NETIF_F_TSO |
+ NETIF_F_TSO6;
+ }
+ netdev->features = netdev->hw_features |
+ NETIF_F_HW_VLAN_TX;
+
+ /* get mac addr and perm mac addr, set to register */
+ if (hw->cbs.get_mac_addr)
+ retval = hw->cbs.get_mac_addr(hw, hw->mac_perm_addr);
+ else
+ retval = -EINVAL;
+
+ if (retval) {
+ eth_hw_addr_random(netdev);
+ memcpy(hw->mac_perm_addr, netdev->dev_addr, netdev->addr_len);
+ }
+
+ memcpy(hw->mac_addr, hw->mac_perm_addr, netdev->addr_len);
+ if (hw->cbs.set_mac_addr)
+ hw->cbs.set_mac_addr(hw, hw->mac_addr);
+
+ memcpy(netdev->dev_addr, hw->mac_perm_addr, netdev->addr_len);
+ memcpy(netdev->perm_addr, hw->mac_perm_addr, netdev->addr_len);
+ retval = alx_validate_mac_addr(netdev->perm_addr);
+ if (retval) {
+ alx_err(adpt, "invalid MAC address\n");
+ goto err_init_adapter;
+ }
+
+ setup_timer(&adpt->alx_timer, &alx_timer_routine,
+ (unsigned long)adpt);
+ INIT_WORK(&adpt->alx_task, alx_task_routine);
+
+ /* Number of supported queues */
+ alx_set_num_queues(adpt);
+ retval = alx_set_interrupt_mode(adpt);
+ if (retval) {
+ alx_err(adpt, "can't set interrupt mode\n");
+ goto err_set_interrupt_mode;
+ }
+
+ retval = alx_set_interrupt_param(adpt);
+ if (retval) {
+ alx_err(adpt, "can't set interrupt parameter\n");
+ goto err_set_interrupt_param;
+ }
+
+ retval = alx_alloc_all_rtx_queue(adpt);
+ if (retval) {
+ alx_err(adpt, "can't allocate memory for queues\n");
+ goto err_alloc_rtx_queue;
+ }
+
+ alx_set_register_info_special(adpt);
+
+ netif_dbg(adpt, probe, adpt->netdev,
+ "num_msix_noque_intrs = %d, num_msix_rxque_intrs = %d, "
+ "num_msix_txque_intrs = %d\n",
+ adpt->num_msix_noques, adpt->num_msix_rxques,
+ adpt->num_msix_txques);
+ netif_dbg(adpt, probe, adpt->netdev, "num_msix_all_intrs = %d\n",
+ adpt->num_msix_intrs);
+
+ netif_dbg(adpt, probe, adpt->netdev,
+ "RX Queue Count = %u, HRX Queue Count = %u, "
+ "SRX Queue Count = %u, TX Queue Count = %u\n",
+ adpt->num_rxques, adpt->num_hw_rxques, adpt->num_sw_rxques,
+ adpt->num_txques);
+
+ /* WOL is supported only on the following chipsets */
+ switch (hw->pci_devid) {
+ case ALX_DEV_ID_AR8131:
+ case ALX_DEV_ID_AR8132:
+ case ALX_DEV_ID_AR8151_V1:
+ case ALX_DEV_ID_AR8151_V2:
+ case ALX_DEV_ID_AR8152_V1:
+ case ALX_DEV_ID_AR8152_V2:
+ case ALX_DEV_ID_AR8161:
+ case ALX_DEV_ID_AR8162:
+ adpt->wol = (ALX_WOL_MAGIC | ALX_WOL_PHY);
+ break;
+ default:
+ adpt->wol = 0;
+ break;
+ }
+ device_set_wakeup_enable(&adpt->pdev->dev, adpt->wol);
+
+ SET_ADPT_FLAG(1, STATE_DOWN);
+ strcpy(netdev->name, "eth%d");
+ retval = register_netdev(netdev);
+ if (retval) {
+ alx_err(adpt, "register netdevice failed\n");
+ goto err_register_netdev;
+ }
+ adpt->netdev_registered = true;
+
+ /* carrier off reporting is important to ethtool even BEFORE open */
+ netif_carrier_off(netdev);
+ /* keep stopping all the transmit queues for older kernels */
+ netif_tx_stop_all_queues(netdev);
+
+ /* print the MAC address */
+ netif_info(adpt, probe, adpt->netdev, "%pM\n", netdev->dev_addr);
+
+ /* print the adapter capability */
+ if (CHK_ADPT_FLAG(0, MSI_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MSI Capable: %s\n",
+ CHK_ADPT_FLAG(0, MSI_EN) ? "Enable" : "Disable");
+ }
+ if (CHK_ADPT_FLAG(0, MSIX_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MSIX Capable: %s\n",
+ CHK_ADPT_FLAG(0, MSIX_EN) ? "Enable" : "Disable");
+ }
+ if (CHK_ADPT_FLAG(0, MRQ_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MRQ Capable: %s\n",
+ CHK_ADPT_FLAG(0, MRQ_EN) ? "Enable" : "Disable");
+ }
+ if (CHK_ADPT_FLAG(0, MTQ_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MTQ Capable: %s\n",
+ CHK_ADPT_FLAG(0, MTQ_EN) ? "Enable" : "Disable");
+ }
+ if (CHK_ADPT_FLAG(0, SRSS_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "RSS(SW) Capable: %s\n",
+ CHK_ADPT_FLAG(0, SRSS_EN) ? "Enable" : "Disable");
+ }
+
+ pr_info("alx: Atheros Gigabit Network Connection\n");
+ cards_found++;
+ return 0;
+
+err_register_netdev:
+ alx_free_all_rtx_queue(adpt);
+err_alloc_rtx_queue:
+ alx_reset_interrupt_param(adpt);
+err_set_interrupt_param:
+ alx_reset_interrupt_mode(adpt);
+err_set_interrupt_mode:
+err_init_adapter:
+ iounmap(adpt->hw.hw_addr);
+err_iomap:
+ free_netdev(netdev);
+err_alloc_netdev:
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+err_alloc_pci_res_mem:
+ pci_disable_device(pdev);
+err_alloc_device:
+ dev_err(&pdev->dev,
+ "device probe failed, error = %d\n", retval);
+ return retval;
+}
+
+
+/*
+ * alx_remove - Device Removal Routine
+ */
+static void __devexit alx_remove(struct pci_dev *pdev)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct net_device *netdev = adpt->netdev;
+
+ SET_ADPT_FLAG(1, STATE_DOWN);
+ cancel_work_sync(&adpt->alx_task);
+
+ hw->cbs.config_pow_save(hw, ALX_LINK_SPEED_UNKNOWN,
+ false, false, false, false);
+
+ /* resume permanent mac address */
+ hw->cbs.set_mac_addr(hw, hw->mac_perm_addr);
+
+ if (adpt->netdev_registered) {
+ unregister_netdev(netdev);
+ adpt->netdev_registered = false;
+ }
+
+ alx_free_all_rtx_queue(adpt);
+ alx_reset_interrupt_param(adpt);
+ alx_reset_interrupt_mode(adpt);
+
+ iounmap(adpt->hw.hw_addr);
+ pci_release_selected_regions(pdev,
+ pci_select_bars(pdev, IORESOURCE_MEM));
+
+ netif_info(adpt, probe, adpt->netdev, "complete\n");
+ free_netdev(netdev);
+
+ pci_disable_pcie_error_reporting(pdev);
+
+ pci_disable_device(pdev);
+}
+
+
+/*
+ * alx_pci_error_detected
+ */
+static pci_ers_result_t alx_pci_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+ pci_ers_result_t retval = PCI_ERS_RESULT_NEED_RESET;
+
+ netif_device_detach(netdev);
+
+ if (state == pci_channel_io_perm_failure) {
+ retval = PCI_ERS_RESULT_DISCONNECT;
+ goto out;
+ }
+
+ if (netif_running(netdev))
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_RESET_MAC);
+ pci_disable_device(pdev);
+out:
+ return retval;
+}
+
+
+/*
+ * alx_pci_error_slot_reset
+ */
+static pci_ers_result_t alx_pci_error_slot_reset(struct pci_dev *pdev)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ pci_ers_result_t retval = PCI_ERS_RESULT_DISCONNECT;
+
+ if (pci_enable_device(pdev)) {
+ alx_err(adpt, "cannot re-enable PCI device after reset\n");
+ goto out;
+ }
+
+ pci_set_master(pdev);
+ pci_enable_wake(pdev, PCI_D3hot, 0);
+ pci_enable_wake(pdev, PCI_D3cold, 0);
+ adpt->hw.cbs.reset_mac(&adpt->hw);
+ retval = PCI_ERS_RESULT_RECOVERED;
+out:
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+ return retval;
+}
+
+
+/*
+ * alx_pci_error_resume
+ */
+static void alx_pci_error_resume(struct pci_dev *pdev)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+
+ if (netif_running(netdev)) {
+ if (alx_open_internal(adpt, 0))
+ return;
+ }
+
+ netif_device_attach(netdev);
+}
+
+
+static struct pci_error_handlers alx_err_handler = {
+ .error_detected = alx_pci_error_detected,
+ .slot_reset = alx_pci_error_slot_reset,
+ .resume = alx_pci_error_resume,
+};
+
+
+#ifdef CONFIG_PM_SLEEP
+static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
+#define ALX_PM_OPS (&alx_pm_ops)
+#else
+#define ALX_PM_OPS NULL
+#endif
+
+
+static struct pci_driver alx_driver = {
+ .name = alx_drv_name,
+ .id_table = alx_pci_tbl,
+ .probe = alx_init,
+ .remove = __devexit_p(alx_remove),
+ .shutdown = alx_shutdown,
+ .err_handler = &alx_err_handler,
+ .driver.pm = ALX_PM_OPS,
+};
+
+
+static int __init alx_init_module(void)
+{
+ int retval;
+
+ pr_info("%s\n", alx_drv_description);
+ retval = pci_register_driver(&alx_driver);
+
+ return retval;
+}
+module_init(alx_init_module);
+
+
+static void __exit alx_exit_module(void)
+{
+ pci_unregister_driver(&alx_driver);
+}
+
+
+module_exit(alx_exit_module);
diff --git a/drivers/net/ethernet/atheros/alx/alx_sw.h b/drivers/net/ethernet/atheros/alx/alx_sw.h
new file mode 100644
index 0000000..3daa392
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_sw.h
@@ -0,0 +1,493 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_SW_H_
+#define _ALX_SW_H_
+
+#include <linux/netdevice.h>
+#include <linux/crc32.h>
+
+/* Vendor ID */
+#define ALX_VENDOR_ID 0x1969
+
+/* Device IDs */
+#define ALX_DEV_ID_AR8131 0x1063 /* l1c */
+#define ALX_DEV_ID_AR8132 0x1062 /* l2c */
+#define ALX_DEV_ID_AR8151_V1 0x1073 /* l1d_v1 */
+#define ALX_DEV_ID_AR8151_V2 0x1083 /* l1d_v2 */
+#define ALX_DEV_ID_AR8152_V1 0x2060 /* l2cb_v1 */
+#define ALX_DEV_ID_AR8152_V2 0x2062 /* l2cb_v2 */
+#define ALX_DEV_ID_AR8161 0x1091 /* l1f */
+#define ALX_DEV_ID_AR8162 0x1090 /* l2f */
+
+#define ALX_REV_ID_AR8152_V1_0 0xc0
+#define ALX_REV_ID_AR8152_V1_1 0xc1
+#define ALX_REV_ID_AR8152_V2_0 0xc0
+#define ALX_REV_ID_AR8152_V2_1 0xc1
+#define ALX_REV_ID_AR8161_V2_0 0x10 /* B0 */
+
+/* Generic Registers */
+#define ALX_DEV_STAT 0x62 /* 16 bits */
+#define ALX_DEV_STAT_CERR 0x0001
+#define ALX_DEV_STAT_NFERR 0x0002
+#define ALX_DEV_STAT_FERR 0x0004
+
+#define ALX_ISR 0x1600
+#define ALX_IMR 0x1604
+#define ALX_ISR_SMB 0x00000001
+#define ALX_ISR_TIMER 0x00000002
+#define ALX_ISR_MANU 0x00000004
+#define ALX_ISR_RXF_OV 0x00000008
+#define ALX_ISR_RFD_UR 0x00000010
+#define ALX_ISR_TX_Q1 0x00000020
+#define ALX_ISR_TX_Q2 0x00000040
+#define ALX_ISR_TX_Q3 0x00000080
+#define ALX_ISR_TXF_UR 0x00000100
+#define ALX_ISR_DMAR 0x00000200
+#define ALX_ISR_DMAW 0x00000400
+#define ALX_ISR_TX_CREDIT 0x00000800
+#define ALX_ISR_PHY 0x00001000
+#define ALX_ISR_PHY_LPW 0x00002000
+#define ALX_ISR_TXQ_TO 0x00004000
+#define ALX_ISR_TX_Q0 0x00008000
+#define ALX_ISR_RX_Q0 0x00010000
+#define ALX_ISR_RX_Q1 0x00020000
+#define ALX_ISR_RX_Q2 0x00040000
+#define ALX_ISR_RX_Q3 0x00080000
+#define ALX_ISR_MAC_RX 0x00100000
+#define ALX_ISR_MAC_TX 0x00200000
+#define ALX_ISR_PCIE_UR 0x00400000
+#define ALX_ISR_PCIE_FERR 0x00800000
+#define ALX_ISR_PCIE_NFERR 0x01000000
+#define ALX_ISR_PCIE_CERR 0x02000000
+#define ALX_ISR_PCIE_LNKDOWN 0x04000000
+#define ALX_ISR_RX_Q4 0x08000000
+#define ALX_ISR_RX_Q5 0x10000000
+#define ALX_ISR_RX_Q6 0x20000000
+#define ALX_ISR_RX_Q7 0x40000000
+#define ALX_ISR_DIS 0x80000000
+
+
+#define ALX_IMR_NORMAL_MASK (\
+ ALX_ISR_MANU |\
+ ALX_ISR_OVER |\
+ ALX_ISR_TXQ |\
+ ALX_ISR_RXQ |\
+ ALX_ISR_PHY_LPW |\
+ ALX_ISR_PHY |\
+ ALX_ISR_ERROR)
+
+#define ALX_ISR_ALERT_MASK (\
+ ALX_ISR_DMAR |\
+ ALX_ISR_DMAW |\
+ ALX_ISR_TXQ_TO |\
+ ALX_ISR_PCIE_FERR |\
+ ALX_ISR_PCIE_LNKDOWN |\
+ ALX_ISR_RFD_UR |\
+ ALX_ISR_RXF_OV)
+
+#define ALX_ISR_TXQ (\
+ ALX_ISR_TX_Q0 |\
+ ALX_ISR_TX_Q1 |\
+ ALX_ISR_TX_Q2 |\
+ ALX_ISR_TX_Q3)
+
+#define ALX_ISR_RXQ (\
+ ALX_ISR_RX_Q0 |\
+ ALX_ISR_RX_Q1 |\
+ ALX_ISR_RX_Q2 |\
+ ALX_ISR_RX_Q3 |\
+ ALX_ISR_RX_Q4 |\
+ ALX_ISR_RX_Q5 |\
+ ALX_ISR_RX_Q6 |\
+ ALX_ISR_RX_Q7)
+
+#define ALX_ISR_OVER (\
+ ALX_ISR_RFD_UR |\
+ ALX_ISR_RXF_OV |\
+ ALX_ISR_TXF_UR)
+
+#define ALX_ISR_ERROR (\
+ ALX_ISR_DMAR |\
+ ALX_ISR_TXQ_TO |\
+ ALX_ISR_DMAW |\
+ ALX_ISR_PCIE_ERROR)
+
+#define ALX_ISR_PCIE_ERROR (\
+ ALX_ISR_PCIE_FERR |\
+ ALX_ISR_PCIE_LNKDOWN)
+
+/* MISC Register */
+#define ALX_MISC 0x19C0
+#define ALX_MISC_INTNLOSC_OPEN 0x00000008
+
+#define ALX_CLK_GATE 0x1814
+
+/* DMA address */
+#define DMA_ADDR_HI_MASK 0xffffffff00000000ULL
+#define DMA_ADDR_LO_MASK 0x00000000ffffffffULL
+
+#define ALX_DMA_ADDR_HI(_addr) \
+ ((u32)(((u64)(_addr) & DMA_ADDR_HI_MASK) >> 32))
+#define ALX_DMA_ADDR_LO(_addr) \
+ ((u32)((u64)(_addr) & DMA_ADDR_LO_MASK))
+
+/* mac address length */
+#define ALX_ETH_LENGTH_OF_ADDRESS 6
+#define ALX_ETH_LENGTH_OF_HEADER ETH_HLEN
+
+#define ALX_ETH_CRC(_addr, _len) ether_crc((_len), (_addr))
+
+/* Autonegotiation advertised speeds */
+/* Link speed */
+#define ALX_LINK_SPEED_UNKNOWN 0x0
+#define ALX_LINK_SPEED_10_HALF 0x0001
+#define ALX_LINK_SPEED_10_FULL 0x0002
+#define ALX_LINK_SPEED_100_HALF 0x0004
+#define ALX_LINK_SPEED_100_FULL 0x0008
+#define ALX_LINK_SPEED_1GB_FULL 0x0020
+#define ALX_LINK_SPEED_DEFAULT (\
+ ALX_LINK_SPEED_10_HALF |\
+ ALX_LINK_SPEED_10_FULL |\
+ ALX_LINK_SPEED_100_HALF |\
+ ALX_LINK_SPEED_100_FULL |\
+ ALX_LINK_SPEED_1GB_FULL)
+
+#define ALX_MAX_SETUP_LNK_CYCLE 100
+
+/* Device Type definitions for new protocol MDIO commands */
+#define ALX_MDIO_DEV_TYPE_NORM 0
+
+/* Wake On Lan */
+#define ALX_WOL_PHY 0x00000001 /* PHY Status Change */
+#define ALX_WOL_MAGIC 0x00000002 /* Magic Packet */
+
+#define ALX_MAX_EEPROM_LEN 0x200
+#define ALX_MAX_HWREG_LEN 0x200
+
+/* RSS Settings */
+enum alx_rss_mode {
+ alx_rss_mode_disable = 0,
+ alx_rss_sig_que = 1,
+ alx_rss_mul_que_sig_int = 2,
+ alx_rss_mul_que_mul_int = 4,
+};
+
+/* Flow Control Settings */
+enum alx_fc_mode {
+ alx_fc_none = 0,
+ alx_fc_rx_pause,
+ alx_fc_tx_pause,
+ alx_fc_full,
+ alx_fc_default
+};
+
+/* WRR Restrict Settings */
+enum alx_wrr_mode {
+ alx_wrr_mode_none = 0,
+ alx_wrr_mode_high,
+ alx_wrr_mode_high2,
+ alx_wrr_mode_all
+};
+
+enum alx_mac_type {
+ alx_mac_unknown = 0,
+ alx_mac_l1c,
+ alx_mac_l2c,
+ alx_mac_l1d_v1,
+ alx_mac_l1d_v2,
+ alx_mac_l2cb_v1,
+ alx_mac_l2cb_v20,
+ alx_mac_l2cb_v21,
+ alx_mac_l1f,
+ alx_mac_l2f,
+};
+
+
+/* Statistics counters collected by the MAC */
+struct alx_hw_stats {
+ /* rx */
+ unsigned long rx_ok;
+ unsigned long rx_bcast;
+ unsigned long rx_mcast;
+ unsigned long rx_pause;
+ unsigned long rx_ctrl;
+ unsigned long rx_fcs_err;
+ unsigned long rx_len_err;
+ unsigned long rx_byte_cnt;
+ unsigned long rx_runt;
+ unsigned long rx_frag;
+ unsigned long rx_sz_64B;
+ unsigned long rx_sz_127B;
+ unsigned long rx_sz_255B;
+ unsigned long rx_sz_511B;
+ unsigned long rx_sz_1023B;
+ unsigned long rx_sz_1518B;
+ unsigned long rx_sz_max;
+ unsigned long rx_ov_sz;
+ unsigned long rx_ov_rxf;
+ unsigned long rx_ov_rrd;
+ unsigned long rx_align_err;
+ unsigned long rx_bc_byte_cnt;
+ unsigned long rx_mc_byte_cnt;
+ unsigned long rx_err_addr;
+
+ /* tx */
+ unsigned long tx_ok;
+ unsigned long tx_bcast;
+ unsigned long tx_mcast;
+ unsigned long tx_pause;
+ unsigned long tx_exc_defer;
+ unsigned long tx_ctrl;
+ unsigned long tx_defer;
+ unsigned long tx_byte_cnt;
+ unsigned long tx_sz_64B;
+ unsigned long tx_sz_127B;
+ unsigned long tx_sz_255B;
+ unsigned long tx_sz_511B;
+ unsigned long tx_sz_1023B;
+ unsigned long tx_sz_1518B;
+ unsigned long tx_sz_max;
+ unsigned long tx_single_col;
+ unsigned long tx_multi_col;
+ unsigned long tx_late_col;
+ unsigned long tx_abort_col;
+ unsigned long tx_underrun;
+ unsigned long tx_trd_eop;
+ unsigned long tx_len_err;
+ unsigned long tx_trunc;
+ unsigned long tx_bc_byte_cnt;
+ unsigned long tx_mc_byte_cnt;
+ unsigned long update;
+};
+
+/* HW callback function pointer table */
+struct alx_hw;
+struct alx_hw_callbacks {
+ /* NIC */
+ int (*identify_nic)(struct alx_hw *);
+ /* PHY */
+ int (*init_phy)(struct alx_hw *);
+ int (*reset_phy)(struct alx_hw *);
+ int (*read_phy_reg)(struct alx_hw *, u16, u16 *);
+ int (*write_phy_reg)(struct alx_hw *, u16, u16);
+ /* Link */
+ int (*setup_phy_link)(struct alx_hw *, u32, bool, bool);
+ int (*setup_phy_link_speed)(struct alx_hw *, u32, bool, bool);
+ int (*check_phy_link)(struct alx_hw *, u32 *, bool *);
+
+ /* MAC */
+ int (*reset_mac)(struct alx_hw *);
+ int (*start_mac)(struct alx_hw *);
+ int (*stop_mac)(struct alx_hw *);
+ int (*config_mac)(struct alx_hw *, u16, u16, u16, u16, u16);
+ int (*get_mac_addr)(struct alx_hw *, u8 *);
+ int (*set_mac_addr)(struct alx_hw *, u8 *);
+ int (*set_mc_addr)(struct alx_hw *, u8 *);
+ int (*clear_mc_addr)(struct alx_hw *);
+
+ /* intr */
+ int (*ack_phy_intr)(struct alx_hw *);
+ int (*enable_legacy_intr)(struct alx_hw *);
+ int (*disable_legacy_intr)(struct alx_hw *);
+ int (*enable_msix_intr)(struct alx_hw *, u8);
+ int (*disable_msix_intr)(struct alx_hw *, u8);
+
+ /* Configure */
+ int (*config_rx)(struct alx_hw *);
+ int (*config_tx)(struct alx_hw *);
+ int (*config_fc)(struct alx_hw *);
+ int (*config_rss)(struct alx_hw *, bool);
+ int (*config_msix)(struct alx_hw *, u16, bool, bool);
+ int (*config_wol)(struct alx_hw *, u32);
+ int (*config_aspm)(struct alx_hw *, bool, bool);
+ int (*config_mac_ctrl)(struct alx_hw *);
+ int (*config_pow_save)(struct alx_hw *, u32,
+ bool, bool, bool, bool);
+ int (*reset_pcie)(struct alx_hw *, bool, bool);
+
+ /* NVRam function */
+ int (*check_nvram)(struct alx_hw *, bool *);
+ int (*read_nvram)(struct alx_hw *, u16, u32 *);
+ int (*write_nvram)(struct alx_hw *, u16, u32);
+
+ /* Others */
+ int (*get_ethtool_regs)(struct alx_hw *, void *);
+};
+
+struct alx_hw {
+ struct alx_adapter *adpt;
+ struct alx_hw_callbacks cbs;
+ u8 __iomem *hw_addr; /* inner register address */
+ u16 pci_venid;
+ u16 pci_devid;
+ u16 pci_sub_devid;
+ u16 pci_sub_venid;
+ u8 pci_revid;
+
+ bool long_cable;
+ bool aps_en;
+ bool hi_txperf;
+ bool msi_lnkpatch;
+ u32 dma_chnl;
+ u32 hwreg_sz;
+ u32 eeprom_sz;
+
+ /* PHY parameter */
+ u32 phy_id;
+ u32 autoneg_advertised;
+ u32 link_speed;
+ bool link_up;
+ spinlock_t mdio_lock;
+
+ /* MAC parameter */
+ enum alx_mac_type mac_type;
+ u8 mac_addr[ALX_ETH_LENGTH_OF_ADDRESS];
+ u8 mac_perm_addr[ALX_ETH_LENGTH_OF_ADDRESS];
+
+ u32 mtu;
+ u16 rxstat_reg;
+ u16 rxstat_sz;
+ u16 txstat_reg;
+ u16 txstat_sz;
+
+ u16 tx_prod_reg[4];
+ u16 tx_cons_reg[4];
+ u16 rx_prod_reg[2];
+ u16 rx_cons_reg[2];
+ u64 tpdma[4];
+ u64 rfdma[2];
+ u64 rrdma[2];
+
+ /* WRR parameter */
+ enum alx_wrr_mode wrr_mode;
+ u32 wrr_prio0;
+ u32 wrr_prio1;
+ u32 wrr_prio2;
+ u32 wrr_prio3;
+
+ /* RSS parameter */
+ enum alx_rss_mode rss_mode;
+ u8 rss_hstype;
+ u8 rss_base_cpu;
+ u16 rss_idt_size;
+ u32 rss_idt[32];
+ u8 rss_key[40];
+
+ /* flow control parameter */
+ enum alx_fc_mode cur_fc_mode; /* FC mode in effect */
+ enum alx_fc_mode req_fc_mode; /* FC mode requested by caller */
+ bool disable_fc_autoneg; /* Do not autonegotiate FC */
+ bool fc_was_autonegged; /* the result of autonegging */
+ bool fc_single_pause;
+
+ /* Others */
+ u32 preamble;
+ u32 intr_mask;
+ u16 smb_timer;
+ u16 imt; /* Interrupt Moderator timer (2us) */
+ u32 flags;
+};
+
+#define ALX_HW_FLAG_L0S_CAP 0x00000001
+#define ALX_HW_FLAG_L0S_EN 0x00000002
+#define ALX_HW_FLAG_L1_CAP 0x00000004
+#define ALX_HW_FLAG_L1_EN 0x00000008
+#define ALX_HW_FLAG_PWSAVE_CAP 0x00000010
+#define ALX_HW_FLAG_PWSAVE_EN 0x00000020
+#define ALX_HW_FLAG_AZ_CAP 0x00000040
+#define ALX_HW_FLAG_AZ_EN 0x00000080
+#define ALX_HW_FLAG_PTP_CAP 0x00000100
+#define ALX_HW_FLAG_PTP_EN 0x00000200
+#define ALX_HW_FLAG_GIGA_CAP 0x00000400
+
+#define ALX_HW_FLAG_PROMISC_EN 0x00010000 /* for mac ctrl reg */
+#define ALX_HW_FLAG_VLANSTRIP_EN 0x00020000 /* for mac ctrl reg */
+#define ALX_HW_FLAG_MULTIALL_EN 0x00040000 /* for mac ctrl reg */
+#define ALX_HW_FLAG_LOOPBACK_EN 0x00080000 /* for mac ctrl reg */
+
+#define CHK_HW_FLAG(_flag) CHK_FLAG(hw, HW, _flag)
+#define SET_HW_FLAG(_flag) SET_FLAG(hw, HW, _flag)
+#define CLI_HW_FLAG(_flag) CLI_FLAG(hw, HW, _flag)
+
+
+/* RSS hstype Definitions */
+#define ALX_RSS_HSTYP_IPV4_EN 0x00000001
+#define ALX_RSS_HSTYP_TCP4_EN 0x00000002
+#define ALX_RSS_HSTYP_IPV6_EN 0x00000004
+#define ALX_RSS_HSTYP_TCP6_EN 0x00000008
+#define ALX_RSS_HSTYP_ALL_EN (\
+ ALX_RSS_HSTYP_IPV4_EN |\
+ ALX_RSS_HSTYP_TCP4_EN |\
+ ALX_RSS_HSTYP_IPV6_EN |\
+ ALX_RSS_HSTYP_TCP6_EN)
+
+
+/* definitions for flags */
+
+#define CHK_FLAG_ARRAY(_st, _idx, _type, _flag) \
+ ((_st)->flags[_idx] & (ALX_##_type##_FLAG_##_idx##_##_flag))
+#define CHK_FLAG(_st, _type, _flag) \
+ ((_st)->flags & (ALX_##_type##_FLAG_##_flag))
+
+#define SET_FLAG_ARRAY(_st, _idx, _type, _flag) \
+ ((_st)->flags[_idx] |= (ALX_##_type##_FLAG_##_idx##_##_flag))
+#define SET_FLAG(_st, _type, _flag) \
+ ((_st)->flags |= (ALX_##_type##_FLAG_##_flag))
+
+#define CLI_FLAG_ARRAY(_st, _idx, _type, _flag) \
+ ((_st)->flags[_idx] &= ~(ALX_##_type##_FLAG_##_idx##_##_flag))
+#define CLI_FLAG(_st, _type, _flag) \
+ ((_st)->flags &= ~(ALX_##_type##_FLAG_##_flag))
+
+int alx_cfg_r16(const struct alx_hw *hw, int reg, u16 *pval);
+int alx_cfg_w16(const struct alx_hw *hw, int reg, u16 val);
+
+
+void alx_mem_flush(const struct alx_hw *hw);
+void alx_mem_r32(const struct alx_hw *hw, int reg, u32 *val);
+void alx_mem_w32(const struct alx_hw *hw, int reg, u32 val);
+void alx_mem_w8(const struct alx_hw *hw, int reg, u8 val);
+
+
+/* special definitions for hw */
+#define ALF_MAX_MSIX_NOQUE_INTRS 4
+#define ALF_MIN_MSIX_NOQUE_INTRS 4
+#define ALF_MAX_MSIX_QUEUE_INTRS 12
+#define ALF_MIN_MSIX_QUEUE_INTRS 12
+#define ALF_MAX_MSIX_INTRS \
+ (ALF_MAX_MSIX_QUEUE_INTRS + ALF_MAX_MSIX_NOQUE_INTRS)
+#define ALF_MIN_MSIX_INTRS \
+ (ALF_MIN_MSIX_NOQUE_INTRS + ALF_MIN_MSIX_QUEUE_INTRS)
+
+
+/* function */
+extern int alc_init_hw_callbacks(struct alx_hw *hw);
+extern int alf_init_hw_callbacks(struct alx_hw *hw);
+
+/* Logging message functions */
+void __printf(3, 4) alx_hw_printk(const char *level, const struct alx_hw *hw,
+ const char *fmt, ...);
+
+#define alx_hw_err(_hw, _format, ...) \
+ alx_hw_printk(KERN_ERR, _hw, _format, ##__VA_ARGS__)
+#define alx_hw_warn(_hw, _format, ...) \
+ alx_hw_printk(KERN_WARNING, _hw, _format, ##__VA_ARGS__)
+#define alx_hw_info(_hw, _format, ...) \
+ alx_hw_printk(KERN_INFO, _hw, _format, ##__VA_ARGS__)
+
+#endif /* _ALX_SW_H_ */
--
1.7.4.15.g7811d