Message-ID: <6349D7A510622448B1BA0967850A8438013692CA@NASANEXD02D.na.qualcomm.com>
Date: Thu, 9 Aug 2012 02:27:35 +0000
From: "Ren, Cloud" <cjren@....qualcomm.com>
To: "davem@...emloft.net" <davem@...emloft.net>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
CC: qca-linux-team <qca-linux-team@...lcomm.com>,
nic-devel <nic-devel@...lcomm.com>,
"Huang, Xiong" <xiong@....qualcomm.com>,
"Hao-Ran Liu(Joseph Liu)" <hao-ran.liu@...onical.com>,
Joe Perches <joe@...ches.com>,
"Rodriguez, Luis" <rodrigue@....qualcomm.com>
Subject: RE: [PATCH] net: add new QCA alx ethernet driver
Hi David,
The alx driver only supports the two new NICs (l1f/l2f) for now; it does not supersede the atl1c driver.
The atl1c driver still supports the older NICs.
Thanks,
Cloud Ren
-----Original Message-----
From: Ren, Cloud
Sent: August 9, 2012 18:15
To: davem@...emloft.net; netdev@...r.kernel.org; linux-kernel@...r.kernel.org
Cc: qca-linux-team; nic-devel; Ren, Cloud; Huang, Xiong; Hao-Ran Liu(Joseph Liu); Joe Perches; Rodriguez, Luis
Subject: [PATCH] net: add new QCA alx ethernet driver
From: Cloud Ren <cjren@....qualcomm.com>
This driver supports two new Ethernet chipsets:
1969:1091 - AR8161 Gigabit Ethernet
1969:1090 - AR8162 Fast Ethernet
Qualcomm Atheros (QCA) is committed to fixing all bugs found in this driver.
This driver is also permissively licensed, enabling developers of
other OSes to cherry-pick this driver and port it to their OS.
Testing, including build/install/uninstall/dhcp/ping/iperf/wol/reboot, etc.,
passed on the supported chipsets.
This driver and patch have also addressed all sparse and checkpatch
warnings.
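For context, the sparse and checkpatch passes described above are typically run from the top of a kernel tree along these lines (a sketch only; the patch filename below is hypothetical):

```shell
# sparse, with the endian checks the alx Makefile turns on
# via -D__CHECK_ENDIAN__:
make C=2 drivers/net/ethernet/atheros/alx/

# checkpatch over the patch before posting (hypothetical filename):
./scripts/checkpatch.pl 0001-net-add-new-QCA-alx-ethernet-driver.patch
```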
Signed-off-by: Cloud Ren <cjren@....qualcomm.com>
Signed-off-by: Xiong Huang <xiong@....qualcomm.com>
Signed-off-by: Hao-Ran Liu(Joseph Liu) <hao-ran.liu@...onical.com>
Signed-off-by: Joe Perches <joe@...ches.com>
Tested-by: David Liu <dwliu@....qualcomm.com>
Signed-off-by: Luis R. Rodriguez <rodrigue@....qualcomm.com>
---
MAINTAINERS | 11 +
drivers/net/ethernet/atheros/Kconfig | 38 +-
drivers/net/ethernet/atheros/Makefile | 1 +
drivers/net/ethernet/atheros/alx/Makefile | 7 +
drivers/net/ethernet/atheros/alx/alx.h | 746 +++++
drivers/net/ethernet/atheros/alx/alx_abs.c | 1055 +++++++
drivers/net/ethernet/atheros/alx/alx_cifs.c | 307 ++
drivers/net/ethernet/atheros/alx/alx_cifs.h | 69 +
drivers/net/ethernet/atheros/alx/alx_dfs.c | 878 ++++++
drivers/net/ethernet/atheros/alx/alx_dfs.h | 182 ++
drivers/net/ethernet/atheros/alx/alx_ethtool.c | 337 +++
drivers/net/ethernet/atheros/alx/alx_hw.c | 1042 +++++++
drivers/net/ethernet/atheros/alx/alx_hw.h | 2132 +++++++++++++
drivers/net/ethernet/atheros/alx/alx_hwcom.h | 128 +
drivers/net/ethernet/atheros/alx/alx_main.c | 3830 ++++++++++++++++++++++++
drivers/net/ethernet/atheros/alx/alx_sw.h | 478 +++
16 files changed, 11238 insertions(+), 3 deletions(-)
create mode 100644 drivers/net/ethernet/atheros/alx/Makefile
create mode 100644 drivers/net/ethernet/atheros/alx/alx.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx_abs.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_cifs.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_cifs.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx_dfs.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_dfs.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx_ethtool.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_hw.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_hw.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx_hwcom.h
create mode 100644 drivers/net/ethernet/atheros/alx/alx_main.c
create mode 100644 drivers/net/ethernet/atheros/alx/alx_sw.h
diff --git a/MAINTAINERS b/MAINTAINERS
index 1b71f6c..eb12e1b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1290,6 +1290,17 @@ W: http://atl1.sourceforge.net
S: Maintained
F: drivers/net/ethernet/atheros/
+ALX ETHERNET DRIVERS
+M: Cloud Ren <cjren@....qualcomm.com>
+M: Stevent Li <steventl@....qualcomm.com>
+M: Wu Ken <kenw@....qualcomm.com>
+M: David Liu <dwliu@....qualcomm.com>
+L: netdev@...r.kernel.org
+L: nic-devel@...lcomm.com
+W: http://www.linuxfoundation.org/collaborate/workgroups/networking/alx
+S: Supported
+F: drivers/net/ethernet/atheros/alx/
+
ATM
M: Chas Williams <chas@....nrl.navy.mil>
L: linux-atm-general@...ts.sourceforge.net (moderated for non-subscribers)
diff --git a/drivers/net/ethernet/atheros/Kconfig b/drivers/net/ethernet/atheros/Kconfig
index 1ed886d..8dfd634 100644
--- a/drivers/net/ethernet/atheros/Kconfig
+++ b/drivers/net/ethernet/atheros/Kconfig
@@ -56,15 +56,47 @@ config ATL1E
will be called atl1e.
config ATL1C
- tristate "Atheros L1C Gigabit Ethernet support (EXPERIMENTAL)"
- depends on PCI && EXPERIMENTAL
+ tristate "Atheros L1C/L1D Gigabit Ethernet support"
+ depends on PCI
select CRC32
select NET_CORE
select MII
---help---
- This driver supports the Atheros L1C gigabit ethernet adapter.
+ This driver supports the Atheros L1C/L1D ethernet adapter.
+ This driver supports the following chipsets:
+
+ 1969:1063 - AR8131 Gigabit Ethernet
+ 1969:1062 - AR8132 Fast Ethernet (10/100 Mbit/s)
+ 1969:2062 - AR8152 v2.0 Fast Ethernet
+ 1969:2060 - AR8152 v1.1 Fast Ethernet
+ 1969:1073 - AR8151 v1.0 Gigabit Ethernet
+ 1969:1083 - AR8151 v2.0 Gigabit Ethernet
To compile this driver as a module, choose M here. The module
will be called atl1c.
+config ALX
+ tristate "Qualcomm Atheros L1F Gigabit Ethernet support"
+ depends on PCI
+ select CRC32
+ select NET_CORE
+ select MII
+ ---help---
+ This driver supports the Qualcomm Atheros L1F ethernet adapter.
+ This driver supports the following chipsets:
+
+ 1969:1091 - AR8161 Gigabit Ethernet
+ 1969:1090 - AR8162 Fast Ethernet
+
+ To compile this driver as a module, choose M here. The module
+ will be called alx.
+
+config ALX_DEBUGFS
+ bool "Qualcomm Atheros debugging interface"
+ depends on ALX && DEBUG_FS
+ ---help---
+ This option adds the ability to debug and test the L1F. It
+ supports Qualcomm Atheros tools, including diagnostic, memcfg
+ and SWOI.
+
endif # NET_VENDOR_ATHEROS
diff --git a/drivers/net/ethernet/atheros/Makefile b/drivers/net/ethernet/atheros/Makefile
index e7e76fb..5cf1c65 100644
--- a/drivers/net/ethernet/atheros/Makefile
+++ b/drivers/net/ethernet/atheros/Makefile
@@ -6,3 +6,4 @@ obj-$(CONFIG_ATL1) += atlx/
obj-$(CONFIG_ATL2) += atlx/
obj-$(CONFIG_ATL1E) += atl1e/
obj-$(CONFIG_ATL1C) += atl1c/
+obj-$(CONFIG_ALX) += alx/
diff --git a/drivers/net/ethernet/atheros/alx/Makefile b/drivers/net/ethernet/atheros/alx/Makefile
new file mode 100644
index 0000000..66acb3f
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/Makefile
@@ -0,0 +1,7 @@
+obj-$(CONFIG_ALX) += alx.o
+alx-objs := alx_main.o alx_ethtool.o alx_abs.o alx_hw.o
+ifdef CONFIG_ALX_DEBUGFS
+alx-objs += alx_dfs.o
+alx-objs += alx_cifs.o
+endif
+ccflags-y += -D__CHECK_ENDIAN__
diff --git a/drivers/net/ethernet/atheros/alx/alx.h b/drivers/net/ethernet/atheros/alx/alx.h
new file mode 100644
index 0000000..69709e7
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx.h
@@ -0,0 +1,746 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_H_
+#define _ALX_H_
+
+#include <linux/netdevice.h>
+
+#include "alx_sw.h"
+#include "alx_dfs.h"
+
+/*
+ * Definition to enable some features
+ */
+#undef CONFIG_ALX_MSIX
+#undef CONFIG_ALX_MSI
+#undef CONFIG_ALX_MTQ
+#undef CONFIG_ALX_MRQ
+#undef CONFIG_ALX_RSS
+/* #define CONFIG_ALX_MSIX */
+#define CONFIG_ALX_MSI
+#define CONFIG_ALX_MTQ
+#define CONFIG_ALX_MRQ
+#ifdef CONFIG_ALX_MRQ
+#define CONFIG_ALX_RSS
+#endif
+
+#define ALX_MSG_DEFAULT 0
+
+/* Logging functions and macros */
+#define alx_err(adpt, fmt, ...) \
+ netdev_err(adpt->netdev, fmt, ##__VA_ARGS__)
+
+#define ALX_VLAN_TO_TAG(_vlan, _tag) \
+ do { \
+ _tag = ((((_vlan) >> 8) & 0xFF) | (((_vlan) & 0xFF) << 8)); \
+ } while (0)
+
+#define ALX_TAG_TO_VLAN(_tag, _vlan) \
+ do { \
+ _vlan = ((((_tag) >> 8) & 0xFF) | (((_tag) & 0xFF) << 8)); \
+ } while (0)
+
+
+#define BAR_0 0
+#define BAR_1 1
+#define BAR_5 5
+
+
+#define ALX_DEF_RX_BUF_SIZE 1536
+#define ALX_MAX_JUMBO_PKT_SIZE (9*1024)
+#define ALX_MAX_TSO_PKT_SIZE (7*1024)
+
+#define ALX_MAX_ETH_FRAME_SIZE ALX_MAX_JUMBO_PKT_SIZE
+#define ALX_MIN_ETH_FRAME_SIZE 68
+
+
+#define ALX_MAX_RX_QUEUES 8
+#define ALX_MAX_TX_QUEUES 4
+#define ALX_MAX_HANDLED_INTRS 5
+
+#define ALX_WATCHDOG_TIME (5 * HZ)
+
+/*
+ * RRD : definition
+ */
+
+/* general parameter format of rrd */
+struct alx_sw_rrdes_general {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 xsum:16;
+ u32 nor:4; /* number of RFD */
+ u32 si:12; /* start index of rfd-ring */
+ /* dword 1 */
+ u32 hash;
+ /* dword 2 */
+ u32 vlan_tag:16; /* vlan-tag */
+ u32 pid:8; /* Header Length of Header-Data Split. WORD unit */
+ u32 reserve0:1;
+ u32 rss_cpu:3; /* CPU number used by RSS */
+ u32 rss_flag:4; /* rss_flag 0, TCP(IPv6) flag for RSS hash algorithm
+ * rss_flag 1, IPv6 flag for RSS hash algorithm
+ * rss_flag 2, TCP(IPv4) flag for RSS hash algorithm
+ * rss_flag 3, IPv4 flag for RSS hash algorithm */
+ /* dword 3 */
+ u32 pkt_len:14; /* length of the packet */
+ u32 l4f:1; /* L4(TCP/UDP) checksum failed */
+ u32 ipf:1; /* IP checksum failed */
+ u32 vlan_flag:1; /* vlan tag */
+ u32 reserve:3;
+ u32 res:1; /* received error summary */
+ u32 crc:1; /* crc error */
+ u32 fae:1; /* frame alignment error */
+ u32 trunc:1; /* truncated packet, larger than MTU */
+ u32 runt:1; /* runt packet */
+ u32 icmp:1; /* incomplete packet due to insufficient rx-desc*/
+ u32 bar:1; /* broadcast address received */
+ u32 mar:1; /* multicast address received */
+ u32 type:1; /* ethernet type */
+ u32 fov:1; /* fifo overflow*/
+ u32 lene:1; /* length error */
+ u32 update:1; /* update*/
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 si:12;
+ u32 nor:4;
+ u32 xsum:16;
+ /* dword 1 */
+ u32 hash;
+ /* dword 2 */
+ u32 rss_flag:4;
+ u32 rss_cpu:3;
+ u32 reserve0:1;
+ u32 pid:8;
+ u32 vlan_tag:16;
+ /* dword 3 */
+ u32 update:1;
+ u32 lene:1;
+ u32 fov:1;
+ u32 type:1;
+ u32 mar:1;
+ u32 bar:1;
+ u32 icmp:1;
+ u32 runt:1;
+ u32 trunc:1;
+ u32 fae:1;
+ u32 crc:1;
+ u32 res:1;
+ u32 reserve1:3;
+ u32 vlan_flag:1;
+ u32 ipf:1;
+ u32 l4f:1;
+ u32 pkt_len:14;
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+};
+
+union alx_hw_rrdesc {
+ /* dword flat format */
+ struct {
+ __le32 dw0;
+ __le32 dw1;
+ __le32 dw2;
+ __le32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ __le64 qw0;
+ __le64 qw1;
+ } qfmt;
+};
+
+/*
+ * XXX: we should not use this guy, best to just
+ * do all le32_to_cpu() conversions on the spot.
+ */
+union alx_sw_rrdesc {
+ struct alx_sw_rrdes_general genr;
+
+ /* dword flat format */
+ struct {
+ u32 dw0;
+ u32 dw1;
+ u32 dw2;
+ u32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ u64 qw0;
+ u64 qw1;
+ } qfmt;
+};
+
+/*
+ * RFD : definition
+ */
+
+/* general parameter format of rfd */
+struct alx_sw_rfdes_general {
+ u64 addr;
+};
+
+union alx_hw_rfdesc {
+ /* dword flat format */
+ struct {
+ __le32 dw0;
+ __le32 dw1;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ __le64 qw0;
+ } qfmt;
+};
+
+/*
+ * XXX: we should not use this guy, best to just
+ * do all le32_to_cpu() conversions on the spot.
+ */
+union alx_sw_rfdesc {
+ struct alx_sw_rfdes_general genr;
+
+ /* dword flat format */
+ struct {
+ u32 dw0;
+ u32 dw1;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ u64 qw0;
+ } qfmt;
+};
+
+/*
+ * TPD : definition
+ */
+
+/* general parameter format of tpd */
+struct alx_sw_tpdes_general {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 buffer_len:16; /* include 4-byte CRC */
+ u32 vlan_tag:16;
+ /* dword 1 */
+ u32 l4hdr_offset:8; /* l4 header offset to the 1st byte of packet */
+ u32 c_csum:1;
+ u32 ip_csum:1;
+ u32 tcp_csum:1;
+ u32 udp_csum:1;
+ u32 lso:1;
+ u32 lso_v2:1;
+ u32 vtagged:1; /* vlan-id tagged already */
+ u32 instag:1; /* insert vlan tag */
+
+ u32 ipv4:1; /* ipv4 packet */
+ u32 type:1; /* type of packet (ethernet_ii(0) or snap(1)) */
+ u32 reserve:12;
+ u32 epad:1; /* even byte padding for this packet */
+ u32 last_frag:1; /* last fragment(buffer) of the packet */
+
+ u64 addr;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 vlan_tag:16;
+ u32 buffer_len:16;
+ /* dword 1 */
+ u32 last_frag:1;
+ u32 epad:1;
+ u32 reserve:12;
+ u32 type:1;
+ u32 ipv4:1;
+ u32 instag:1;
+ u32 vtagged:1;
+ u32 lso_v2:1;
+ u32 lso:1;
+ u32 udp_csum:1;
+ u32 tcp_csum:1;
+ u32 ip_csum:1;
+ u32 c_csum:1;
+ u32 l4hdr_offset:8;
+
+ u64 addr;
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+};
+
+/* custom checksum parameter format of tpd */
+struct alx_sw_tpdes_checksum {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 buffer_len:16;
+ u32 vlan_tag:16;
+ /* dword 1 */
+ u32 payld_offset:8; /* payload offset to the 1st byte of packet */
+ u32 c_csum:1; /* do custom checksum offload */
+ u32 ip_csum:1; /* do ip(v4) header checksum offload */
+ u32 tcp_csum:1; /* do tcp checksum offload, both ipv4 and ipv6 */
+ u32 udp_csum:1; /* do udp checksum offload, both ipv4 and ipv6 */
+ u32 lso:1;
+ u32 lso_v2:1;
+ u32 vtagged:1; /* vlan-id tagged already */
+ u32 instag:1; /* insert vlan tag */
+ u32 ipv4:1; /* ipv4 packet */
+ u32 type:1; /* type of packet (ethernet_ii(0) or snap(1)) */
+ u32 cxsum_offset:8; /* checksum offset to the 1st byte of packet */
+ u32 reserve:4;
+ u32 epad:1; /* even byte padding for this packet */
+ u32 last_frag:1; /* last fragment(buffer) of the packet */
+
+ u64 addr;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 vlan_tag:16;
+ u32 buffer_len:16;
+ /* dword 1 */
+ u32 last_frag:1;
+ u32 epad:1;
+ u32 reserve:4;
+ u32 cxsum_offset:8;
+ u32 type:1;
+ u32 ipv4:1;
+ u32 instag:1;
+ u32 vtagged:1;
+ u32 lso_v2:1;
+ u32 lso:1;
+ u32 udp_csum:1;
+ u32 tcp_csum:1;
+ u32 ip_csum:1;
+ u32 c_csum:1;
+ u32 payld_offset:8;
+
+ u64 addr;
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+};
+
+
+/* tcp large send format (v1/v2) of tpd */
+struct alx_sw_tpdes_tso {
+#if defined(__LITTLE_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 buffer_len:16; /* include 4-byte CRC */
+ u32 vlan_tag:16;
+ /* dword 1 */
+ u32 tcphdr_offset:8; /* tcp hdr offset to the 1st byte of packet */
+ u32 c_csum:1;
+ u32 ip_csum:1;
+ u32 tcp_csum:1;
+ u32 udp_csum:1;
+ u32 lso:1; /* do tcp large send (ipv4 only) */
+ u32 lso_v2:1; /* must be 0 in this format */
+ u32 vtagged:1; /* vlan-id tagged already */
+ u32 instag:1; /* insert vlan tag */
+ u32 ipv4:1; /* ipv4 packet */
+ u32 type:1; /* type of packet (ethernet_ii(1) or snap(0)) */
+ u32 mss:13; /* mss if do tcp large send */
+ u32 last_frag:1; /* last fragment(buffer) of the packet */
+
+ u32 pkt_len; /* packet length in ext tpd */
+ u32 reserve;
+#elif defined(__BIG_ENDIAN_BITFIELD)
+ /* dword 0 */
+ u32 vlan_tag:16;
+ u32 buffer_len:16;
+ /* dword 1 */
+ u32 last_frag:1;
+ u32 mss:13;
+ u32 type:1;
+ u32 ipv4:1;
+ u32 instag:1;
+ u32 vtagged:1;
+ u32 lso_v2:1;
+ u32 lso:1;
+ u32 udp_csum:1;
+ u32 tcp_csum:1;
+ u32 ip_csum:1;
+ u32 c_csum:1;
+ u32 tcphdr_offset:8;
+
+ u32 pkt_len;
+ u32 reserve;
+#else
+#error "Please fix <asm/byteorder.h>"
+#endif
+};
+
+union alx_hw_tpdesc {
+ /* dword flat format */
+ struct {
+ __le32 dw0;
+ __le32 dw1;
+ __le32 dw2;
+ __le32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ __le64 qw0;
+ __le64 qw1;
+ } qfmt;
+};
+
+/*
+ * XXX: we should not use this guy, best to just
+ * do all le32_to_cpu() conversions on the spot.
+ */
+union alx_sw_tpdesc {
+ struct alx_sw_tpdes_general genr;
+ struct alx_sw_tpdes_checksum csum;
+ struct alx_sw_tpdes_tso tso;
+
+ /* dword flat format */
+ struct {
+ u32 dw0;
+ u32 dw1;
+ u32 dw2;
+ u32 dw3;
+ } dfmt;
+
+ /* qword flat format */
+ struct {
+ u64 qw0;
+ u64 qw1;
+ } qfmt;
+};
+
+#define ALX_RRD(_que, _i) \
+ (&(((union alx_hw_rrdesc *)(_que)->rrq.rrdesc)[(_i)]))
+#define ALX_RFD(_que, _i) \
+ (&(((union alx_hw_rfdesc *)(_que)->rfq.rfdesc)[(_i)]))
+#define ALX_TPD(_que, _i) \
+ (&(((union alx_hw_tpdesc *)(_que)->tpq.tpdesc)[(_i)]))
+
+
+/*
+ * alx_ring_header represents a single, contiguous block of DMA space
+ * mapped for the three descriptor rings (tpd, rfd, rrd) and the two
+ * message blocks (cmb, smb) described below
+ */
+struct alx_ring_header {
+ void *desc; /* virtual address */
+ dma_addr_t dma; /* physical address*/
+ unsigned int size; /* length in bytes */
+ unsigned int used;
+};
+
+
+/*
+ * alx_buffer is wrapper around a pointer to a socket buffer
+ * so a DMA handle can be stored along with the skb
+ */
+struct alx_buffer {
+ struct sk_buff *skb; /* socket buffer */
+ u16 length; /* rx buffer length */
+ dma_addr_t dma;
+};
+
+struct alx_sw_buffer {
+ struct sk_buff *skb; /* socket buffer */
+ u32 vlan_tag:16;
+ u32 vlan_flag:1;
+ u32 reserved:15;
+};
+
+/* receive free descriptor (rfd) queue */
+struct alx_rfd_queue {
+ struct alx_buffer *rfbuff;
+ union alx_hw_rfdesc *rfdesc; /* virtual address */
+ dma_addr_t rfdma; /* physical address */
+ u16 size; /* length in bytes */
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx; /* it's written to rxque->produce_reg */
+ u16 consume_idx; /* unused*/
+};
+
+/* receive return descriptor (rrd) queue */
+struct alx_rrd_queue {
+ union alx_hw_rrdesc *rrdesc; /* virtual address */
+ dma_addr_t rrdma; /* physical address */
+ u16 size; /* length in bytes */
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx; /* unused */
+ u16 consume_idx; /* rxque->consume_reg */
+};
+
+/* software descriptor (swd) queue */
+struct alx_swd_queue {
+ struct alx_sw_buffer *swbuff;
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx;
+ u16 consume_idx;
+};
+
+/* rx queue */
+struct alx_rx_queue {
+ struct device *dev; /* device for dma mapping */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct alx_msix_param *msix;
+ struct alx_rrd_queue rrq;
+ struct alx_rfd_queue rfq;
+ struct alx_swd_queue swq;
+
+ u16 que_idx; /* index in multi rx queues*/
+ u16 max_packets; /* max work per interrupt */
+ u16 produce_reg;
+ u16 consume_reg;
+ u32 flags;
+};
+#define ALX_RX_FLAG_SW_QUE 0x00000001
+#define ALX_RX_FLAG_HW_QUE 0x00000002
+#define CHK_RX_FLAG(_flag) CHK_FLAG(rxque, RX, _flag)
+#define SET_RX_FLAG(_flag) SET_FLAG(rxque, RX, _flag)
+#define CLI_RX_FLAG(_flag) CLI_FLAG(rxque, RX, _flag)
+
+#define GET_RF_BUFFER(_rque, _i) (&((_rque)->rfq.rfbuff[(_i)]))
+#define GET_SW_BUFFER(_rque, _i) (&((_rque)->swq.swbuff[(_i)]))
+
+
+/* transmit packet descriptor (tpd) ring */
+struct alx_tpd_queue {
+ struct alx_buffer *tpbuff;
+ union alx_hw_tpdesc *tpdesc; /* virtual address */
+ dma_addr_t tpdma; /* physical address */
+
+ u16 size; /* length in bytes */
+ u16 count; /* number of descriptors in the ring */
+ u16 produce_idx;
+ u16 consume_idx;
+ u16 last_produce_idx;
+};
+
+/* tx queue */
+struct alx_tx_queue {
+ struct device *dev; /* device for dma mapping */
+ struct net_device *netdev; /* netdev ring belongs to */
+ struct alx_tpd_queue tpq;
+ struct alx_msix_param *msix;
+
+ u16 que_idx; /* needed for multiqueue queue management */
+ u16 max_packets; /* max packets per interrupt */
+ u16 produce_reg;
+ u16 consume_reg;
+};
+#define GET_TP_BUFFER(_tque, _i) (&((_tque)->tpq.tpbuff[(_i)]))
+
+
+/*
+ * definition for array allocations.
+ */
+#define ALX_MAX_MSIX_INTRS 16
+#define ALX_MAX_RX_QUEUES 8
+#define ALX_MAX_TX_QUEUES 4
+
+enum alx_msix_type {
+ alx_msix_type_rx,
+ alx_msix_type_tx,
+ alx_msix_type_other,
+};
+#define ALX_MSIX_TYPE_OTH_TIMER 0
+#define ALX_MSIX_TYPE_OTH_ALERT 1
+#define ALX_MSIX_TYPE_OTH_SMB 2
+#define ALX_MSIX_TYPE_OTH_PHY 3
+
+/* ALX_MAX_MSIX_INTRS of these are allocated,
+ * but we only use one per queue-specific vector.
+ */
+struct alx_msix_param {
+ struct alx_adapter *adpt;
+ unsigned int vec_idx; /* index in HW interrupt vector */
+ char name[IFNAMSIZ + 9];
+
+ /* msix interrupts for queue */
+ u8 rx_map[ALX_MAX_RX_QUEUES];
+ u8 tx_map[ALX_MAX_TX_QUEUES];
+ u8 rx_count; /* Rx ring count assigned to this vector */
+ u8 tx_count; /* Tx ring count assigned to this vector */
+
+ struct napi_struct napi;
+ cpumask_var_t affinity_mask;
+ u32 flags;
+};
+
+#define ALX_MSIX_FLAG_RX0 0x00000001
+#define ALX_MSIX_FLAG_RX1 0x00000002
+#define ALX_MSIX_FLAG_RX2 0x00000004
+#define ALX_MSIX_FLAG_RX3 0x00000008
+#define ALX_MSIX_FLAG_RX4 0x00000010
+#define ALX_MSIX_FLAG_RX5 0x00000020
+#define ALX_MSIX_FLAG_RX6 0x00000040
+#define ALX_MSIX_FLAG_RX7 0x00000080
+#define ALX_MSIX_FLAG_TX0 0x00000100
+#define ALX_MSIX_FLAG_TX1 0x00000200
+#define ALX_MSIX_FLAG_TX2 0x00000400
+#define ALX_MSIX_FLAG_TX3 0x00000800
+#define ALX_MSIX_FLAG_TIMER 0x00001000
+#define ALX_MSIX_FLAG_ALERT 0x00002000
+#define ALX_MSIX_FLAG_SMB 0x00004000
+#define ALX_MSIX_FLAG_PHY 0x00008000
+
+#define ALX_MSIX_FLAG_RXS (\
+ ALX_MSIX_FLAG_RX0 |\
+ ALX_MSIX_FLAG_RX1 |\
+ ALX_MSIX_FLAG_RX2 |\
+ ALX_MSIX_FLAG_RX3 |\
+ ALX_MSIX_FLAG_RX4 |\
+ ALX_MSIX_FLAG_RX5 |\
+ ALX_MSIX_FLAG_RX6 |\
+ ALX_MSIX_FLAG_RX7)
+#define ALX_MSIX_FLAG_TXS (\
+ ALX_MSIX_FLAG_TX0 |\
+ ALX_MSIX_FLAG_TX1 |\
+ ALX_MSIX_FLAG_TX2 |\
+ ALX_MSIX_FLAG_TX3)
+#define ALX_MSIX_FLAG_ALL (\
+ ALX_MSIX_FLAG_RXS |\
+ ALX_MSIX_FLAG_TXS |\
+ ALX_MSIX_FLAG_TIMER |\
+ ALX_MSIX_FLAG_ALERT |\
+ ALX_MSIX_FLAG_SMB |\
+ ALX_MSIX_FLAG_PHY)
+
+#define CHK_MSIX_FLAG(_flag) CHK_FLAG(msix, MSIX, _flag)
+#define SET_MSIX_FLAG(_flag) SET_FLAG(msix, MSIX, _flag)
+#define CLI_MSIX_FLAG(_flag) CLI_FLAG(msix, MSIX, _flag)
+
+/*
+ * board-specific private data structure
+ */
+struct alx_adapter {
+ struct net_device *netdev;
+ struct pci_dev *pdev;
+ struct net_device_stats net_stats;
+ bool netdev_registered;
+ u16 bd_number; /* board number;*/
+
+ struct alx_msix_param *msix[ALX_MAX_MSIX_INTRS];
+ struct msix_entry *msix_entries;
+ int num_msix_rxques;
+ int num_msix_txques;
+ int num_msix_noques; /* true count of msix_noques for device */
+ int num_msix_intrs;
+
+ int min_msix_intrs;
+ int max_msix_intrs;
+
+ /* All Descriptor memory */
+ struct alx_ring_header ring_header;
+
+ /* TX */
+ struct alx_tx_queue *tx_queue[ALX_MAX_TX_QUEUES];
+ /* RX */
+ struct alx_rx_queue *rx_queue[ALX_MAX_RX_QUEUES];
+
+ u16 num_txques;
+ u16 num_rxques; /* equal max(num_hw_rxques, num_sw_rxques) */
+ u16 num_hw_rxques;
+ u16 num_sw_rxques;
+ u16 max_rxques;
+ u16 max_txques;
+
+ u16 num_txdescs;
+ u16 num_rxdescs;
+
+ u32 rxbuf_size;
+
+ /* structs defined in alx_hw.h */
+ struct alx_hw hw;
+ struct alx_hw_stats hw_stats;
+
+ u32 *config_space;
+
+ struct work_struct alx_task;
+ struct timer_list alx_timer;
+
+ unsigned long link_jiffies;
+
+ u32 wol;
+ bool cifs;
+ int bars;
+ bool ioport;
+ spinlock_t tx_lock;
+ spinlock_t rx_lock;
+ atomic_t irq_sem;
+
+#ifdef CONFIG_ALX_DEBUGFS
+ struct alx_debugfs_param dfs;
+#endif
+ u16 msg_enable;
+ unsigned long flags;
+};
+
+#define ALX_ADPT_FLAG_MSI_CAP 0x00000001
+#define ALX_ADPT_FLAG_MSI_EN 0x00000002
+#define ALX_ADPT_FLAG_MSIX_CAP 0x00000004
+#define ALX_ADPT_FLAG_MSIX_EN 0x00000008
+#define ALX_ADPT_FLAG_MRQ_CAP 0x00000010
+#define ALX_ADPT_FLAG_MRQ_EN 0x00000020
+#define ALX_ADPT_FLAG_MTQ_CAP 0x00000040
+#define ALX_ADPT_FLAG_MTQ_EN 0x00000080
+#define ALX_ADPT_FLAG_SRSS_CAP 0x00000100
+#define ALX_ADPT_FLAG_SRSS_EN 0x00000200
+#define ALX_ADPT_FLAG_FIXED_MSIX 0x00000400
+
+#define ALX_ADPT_FLAG_TASK_REINIT_REQ 0x00010000 /* reinit */
+#define ALX_ADPT_FLAG_TASK_LSC_REQ 0x00020000
+
+#define ALX_ADPT_FLAG_STATE_TESTING 0x00100000
+#define ALX_ADPT_FLAG_STATE_RESETTING 0x00200000
+#define ALX_ADPT_FLAG_STATE_DOWN 0x00400000
+#define ALX_ADPT_FLAG_STATE_WATCH_DOG 0x00800000
+#define ALX_ADPT_FLAG_STATE_DIAG_RUNNING 0x01000000
+#define ALX_ADPT_FLAG_STATE_INACTIVE 0x02000000
+
+
+#define CHK_ADPT_FLAG(_flag) CHK_FLAG(adpt, ADPT, _flag)
+#define SET_ADPT_FLAG(_flag) SET_FLAG(adpt, ADPT, _flag)
+#define CLI_ADPT_FLAG(_flag) CLI_FLAG(adpt, ADPT, _flag)
+
+/* default to trying for four seconds */
+#define ALX_TRY_LINK_TIMEOUT (4 * HZ)
+
+
+#define ALX_OPEN_CTRL_IRQ_EN 0x00000001
+#define ALX_OPEN_CTRL_RESET_MAC 0x00000002
+#define ALX_OPEN_CTRL_RESET_PHY 0x00000004
+#define ALX_OPEN_CTRL_RESET_ALL (\
+ ALX_OPEN_CTRL_RESET_MAC |\
+ ALX_OPEN_CTRL_RESET_PHY)
+
+/* needed by alx_ethtool.c */
+extern char alx_drv_name[];
+extern void alx_reinit_locked(struct alx_adapter *adpt);
+extern void alx_set_ethtool_ops(struct net_device *netdev);
+#ifdef ETHTOOL_OPS_COMPAT
+extern int ethtool_ioctl(struct ifreq *ifr);
+#endif
+
+/* needed by alx_dfs.c */
+int alx_validate_mac_addr(u8 *mac_addr);
+bool alx_set_tpdesc(struct alx_tx_queue *txque, union alx_sw_tpdesc *stpd);
+void alx_set_tpdesc_lastfrag(struct alx_tx_queue *txque);
+int alx_open_internal(struct alx_adapter *adpt, u32 ctrl);
+void alx_stop_internal(struct alx_adapter *adpt, u32 ctrl);
+void alx_disable_intr(struct alx_adapter *adpt);
+
+#endif /* _ALX_H_ */
diff --git a/drivers/net/ethernet/atheros/alx/alx_abs.c b/drivers/net/ethernet/atheros/alx/alx_abs.c
new file mode 100644
index 0000000..b039795
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_abs.c
@@ -0,0 +1,1055 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/pci_regs.h>
+#include <linux/mii.h>
+#include <linux/netdevice.h>
+#include <linux/crc32.h>
+
+#include "alx_hw.h"
+
+#define ALF_REV_ID_AR8161_B0 0x10
+
+/* definition for MSIX */
+#define ALF_MSIX_ENTRY_BASE 0x2000
+#define ALF_MSIX_ENTRY_SIZE 16
+#define ALF_MSIX_MSG_LOADDR_OFF 0
+#define ALF_MSIX_MSG_HIADDR_OFF 4
+#define ALF_MSIX_MSG_DATA_OFF 8
+#define ALF_MSIX_MSG_CTRL_OFF 12
+
+#define ALF_MSIX_INDEX_RXQ0 0
+#define ALF_MSIX_INDEX_RXQ1 1
+#define ALF_MSIX_INDEX_RXQ2 2
+#define ALF_MSIX_INDEX_RXQ3 3
+#define ALF_MSIX_INDEX_RXQ4 4
+#define ALF_MSIX_INDEX_RXQ5 5
+#define ALF_MSIX_INDEX_RXQ6 6
+#define ALF_MSIX_INDEX_RXQ7 7
+#define ALF_MSIX_INDEX_TXQ0 8
+#define ALF_MSIX_INDEX_TXQ1 9
+#define ALF_MSIX_INDEX_TXQ2 10
+#define ALF_MSIX_INDEX_TXQ3 11
+#define ALF_MSIX_INDEX_TIMER 12
+#define ALF_MSIX_INDEX_ALERT 13
+#define ALF_MSIX_INDEX_SMB 14
+#define ALF_MSIX_INDEX_PHY 15
+
+
+#define ALF_SRAM_BASE L1F_SRAM0
+#define ALF_SRAM(_i, _type) \
+ (ALF_SRAM_BASE + ((_i) * sizeof(_type)))
+
+#define ALF_MIB_BASE L1F_MIB_BASE
+#define ALF_MIB(_i, _type) \
+ (ALF_MIB_BASE + ((_i) * sizeof(_type)))
+
+/* definition for RSS */
+#define ALF_RSS_KEY_BASE L1F_RSS_KEY0
+#define ALF_RSS_IDT_BASE L1F_RSS_IDT_TBL0
+#define ALF_RSS_KEY(_i, _type) \
+ (ALF_RSS_KEY_BASE + ((_i) * sizeof(_type)))
+#define ALF_RSS_TBL(_i, _type) \
+ (L1F_RSS_IDT_TBL0 + ((_i) * sizeof(_type)))
+
+
+/* NIC */
+int alf_identify_nic(struct alx_hw *hw)
+{
+ u32 drv;
+
+ if (hw->pci_revid < ALX_REV_ID_AR8161_V2_0)
+ return 0;
+
+ /* check from V2_0(b0) to ... */
+ switch (hw->pci_revid) {
+ default:
+ alx_mem_r32(hw, L1F_DRV, &drv);
+ if (drv & LX_DRV_DISABLE)
+ return -EINVAL;
+ break;
+ }
+ return 0;
+}
+
+
+/* PHY */
+int alf_read_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 *phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ retval = l1f_read_phy(hw, false, ALX_MDIO_DEV_TYPE_NORM, false,
+ reg_addr, phy_data);
+ if (retval)
+ alx_hw_err(hw, "error:%u when read phy reg\n", retval);
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+
+int alf_write_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ retval = l1f_write_phy(hw, false, ALX_MDIO_DEV_TYPE_NORM, false,
+ reg_addr, phy_data);
+ if (retval)
+ alx_hw_err(hw, "error:%u, when write phy reg\n", retval);
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+#ifdef CONFIG_ALX_DEBUGFS
+
+int alf_read_ext_phy_reg(struct alx_hw *hw, u8 type, u16 reg_addr,
+ u16 *phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ retval = l1f_read_phy(hw, true, type, false, reg_addr, phy_data);
+ if (retval)
+ alx_hw_err(hw, "error:%u, when read ext phy reg\n", retval);
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+
+int alf_write_ext_phy_reg(struct alx_hw *hw, u8 type, u16 reg_addr,
+ u16 phy_data)
+{
+ unsigned long flags;
+ int retval = 0;
+
+ spin_lock_irqsave(&hw->mdio_lock, flags);
+
+ retval = l1f_write_phy(hw, true, type, false, reg_addr, phy_data);
+ if (retval)
+ alx_hw_err(hw, "error:%u, when write ext phy reg\n", retval);
+
+
+ spin_unlock_irqrestore(&hw->mdio_lock, flags);
+ return retval;
+}
+
+#endif
+
+int alf_init_phy(struct alx_hw *hw)
+{
+ u16 phy_id[2];
+ int retval;
+
+ spin_lock_init(&hw->mdio_lock);
+
+ retval = alf_read_phy_reg(hw, MII_PHYSID1, &phy_id[0]);
+ if (retval)
+ return retval;
+ retval = alf_read_phy_reg(hw, MII_PHYSID2, &phy_id[1]);
+ if (retval)
+ return retval;
+ memcpy(&hw->phy_id, phy_id, sizeof(hw->phy_id));
+
+ hw->autoneg_advertised = LX_LC_ALL;
+
+ return retval;
+}
+
+
+int alf_reset_phy(struct alx_hw *hw)
+{
+ int retval = 0;
+
+ CLI_HW_FLAG(PWSAVE_EN);
+ CLI_HW_FLAG(AZ_EN);
+ CLI_HW_FLAG(PTP_EN);
+
+ if (CHK_HW_FLAG(PWSAVE_CAP))
+ SET_HW_FLAG(PWSAVE_EN);
+
+ if (CHK_HW_FLAG(AZ_CAP))
+ SET_HW_FLAG(AZ_EN);
+
+ if (CHK_HW_FLAG(PTP_CAP))
+ SET_HW_FLAG(PTP_EN);
+
+ alx_hw_info(hw, "reset PHY, pws = %d, az = %d, ptp = %d\n",
+ CHK_HW_FLAG(PWSAVE_EN), CHK_HW_FLAG(AZ_EN),
+ CHK_HW_FLAG(PTP_EN));
+ retval = l1f_reset_phy(hw, CHK_HW_FLAG(PWSAVE_EN), CHK_HW_FLAG(AZ_EN),
+ CHK_HW_FLAG(PTP_EN));
+ if (retval)
+ alx_hw_err(hw, "error when reset phy\n");
+
+ return retval;
+}
+
+
+/* LINK */
+int alf_setup_phy_link(struct alx_hw *hw, u8 speed, bool autoneg,
+ bool fc)
+{
+ int retval = 0;
+
+ if (!CHK_HW_FLAG(GIGA_CAP))
+ speed &= ~(LX_LC_1000F | LX_LC_1000H);
+
+ alx_hw_info(hw, "speed = 0x%x, autoneg = %d\n", speed, autoneg);
+ if (l1f_init_phy_spdfc(hw, autoneg, speed, fc)) {
+ alx_hw_err(hw, "error when init phy speed and fc\n");
+ retval = -EINVAL;
+ }
+
+ return retval;
+}
+
+
+int alf_check_phy_link(struct alx_hw *hw, u8 *speed, bool *link_up)
+{
+ u16 bmsr, giga;
+ int retval;
+
+ alf_read_phy_reg(hw, MII_BMSR, &bmsr); /* BMSR latches link-down; read twice */
+ retval = alf_read_phy_reg(hw, MII_BMSR, &bmsr);
+ if (retval)
+ return retval;
+
+ if (!(bmsr & BMSR_LSTATUS)) {
+ *link_up = false;
+ *speed = 0;
+ return retval;
+ }
+ *link_up = true;
+
+ /* Read PHY Specific Status Register (17) */
+ retval = alf_read_phy_reg(hw, L1F_MII_GIGA_PSSR, &giga);
+ if (retval)
+ return retval;
+
+ if (!(giga & L1F_GIGA_PSSR_SPD_DPLX_RESOLVED)) {
+ alx_hw_err(hw, "speed/duplex is not resolved\n");
+ return -EINVAL;
+ }
+
+ switch (giga & L1F_GIGA_PSSR_SPEED) {
+ case L1F_GIGA_PSSR_1000MBS:
+ if (giga & L1F_GIGA_PSSR_DPLX)
+ *speed = LX_LC_1000F;
+ else
+ alx_hw_err(hw, "1000M half is invalid\n");
+ break;
+ case L1F_GIGA_PSSR_100MBS:
+ if (giga & L1F_GIGA_PSSR_DPLX)
+ *speed = LX_LC_100F;
+ else
+ *speed = LX_LC_100H;
+ break;
+ case L1F_GIGA_PSSR_10MBS:
+ if (giga & L1F_GIGA_PSSR_DPLX)
+ *speed = LX_LC_10F;
+ else
+ *speed = LX_LC_10H;
+ break;
+ default:
+ *speed = 0;
+ retval = -EINVAL;
+ break;
+ }
+ return retval;
+}
+
+int alf_post_phy_link(struct alx_hw *hw, bool az_en, bool link_up, u8 speed)
+{
+ return l1f_post_phy_link(hw, az_en, link_up, speed);
+}
+
+/*
+ * 1. stop_mac
+ * 2. reset mac & dma by reg1400(MASTER)
+ * 3. control speed/duplex, hash-alg
+ * 4. clock switch setting
+ */
+int alf_reset_mac(struct alx_hw *hw)
+{
+ int retval = 0;
+
+ retval = l1f_reset_mac(hw);
+ if (retval)
+ alx_hw_err(hw, "error(%d) when resetting MAC\n", retval);
+
+ return retval;
+}
+
+
+int alf_start_mac(struct alx_hw *hw)
+{
+ u16 en_ctrl = 0;
+ int retval = 0;
+
+ /* set link speed param */
+ switch (hw->link_speed) {
+ case LX_LC_1000F:
+ en_ctrl |= LX_MACSPEED_1000;
+ /* fall through */
+ case LX_LC_100F:
+ case LX_LC_10F:
+ en_ctrl |= LX_MACDUPLEX_FULL;
+ break;
+ }
+
+ /* set flow control param */
+ switch (hw->cur_fc_mode) {
+ case alx_fc_full:
+ en_ctrl |= LX_FC_RXEN; /* Flow Control RX Enable */
+ en_ctrl |= LX_FC_TXEN; /* Flow Control TX Enable */
+ break;
+ case alx_fc_rx_pause:
+ en_ctrl |= LX_FC_RXEN; /* Flow Control RX Enable */
+ break;
+ case alx_fc_tx_pause:
+ en_ctrl |= LX_FC_TXEN; /* Flow Control TX Enable */
+ break;
+ default:
+ break;
+ }
+
+ if (hw->fc_single_pause)
+ en_ctrl |= LX_SINGLE_PAUSE;
+
+ en_ctrl |= LX_FLT_DIRECT; /* RX Enable; and TX Always Enable */
+ en_ctrl |= LX_ADD_FCS;
+
+ en_ctrl |= hw->flags & ALX_HW_FLAG_LX_MASK;
+
+ if (l1f_enable_mac(hw, true, en_ctrl)) {
+ alx_hw_err(hw, "error when starting MAC\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+/*
+ * 1. stop RXQ (reg15A0) and TXQ (reg1590)
+ * 2. stop MAC (reg1480)
+ */
+int alf_stop_mac(struct alx_hw *hw)
+{
+ if (l1f_enable_mac(hw, false, 0)) {
+ alx_hw_err(hw, "error when stopping MAC\n");
+ return -EINVAL;
+ }
+ return 0;
+}
+
+
+int alf_init_mac(struct alx_hw *hw, u16 rxbuf_sz, u16 rx_qnum,
+ u16 rxring_sz, u16 tx_qnum, u16 txring_sz)
+{
+ int retval = 0;
+
+ l1f_init_mac_misc(hw, hw->mac_addr, hw->smb_timer, hw->imt_mod, true);
+
+ retval = l1f_init_mac_rtx_ring_desc(hw, hw->dma.rfdmem_hi[0],
+ hw->dma.rfdmem_lo[0], hw->dma.rrdmem_lo[0],
+ rxring_sz, rxbuf_sz,
+ hw->dma.tpdmem_hi[0], hw->dma.tpdmem_lo,
+ tx_qnum, txring_sz);
+ if (retval) {
+ alx_hw_err(hw, "error(%d) when init mac\n", retval);
+ return retval;
+ }
+
+ /* the hardware MTU is hw->mtu plus the ethernet header length */
+ l1f_init_mac_rtx_queue(hw, hw->mtu + ALX_ETH_LENGTH_OF_HEADER);
+
+ l1f_init_mac_dma(hw);
+
+ return retval;
+}
+
+
+int alf_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ int retval = 0;
+
+ if (!CHK_HW_FLAG(L0S_CAP))
+ l0s_en = false;
+
+ if (l0s_en)
+ SET_HW_FLAG(L0S_EN);
+ else
+ CLI_HW_FLAG(L0S_EN);
+
+ if (!CHK_HW_FLAG(L1_CAP))
+ l1_en = false;
+
+ if (l1_en)
+ SET_HW_FLAG(L1_EN);
+ else
+ CLI_HW_FLAG(L1_EN);
+
+ if (l1f_reset_pcie(hw, l0s_en, l1_en)) {
+ alx_hw_err(hw, "error when resetting PCIe\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+int alf_config_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ int retval = 0;
+
+ if (!CHK_HW_FLAG(L0S_CAP))
+ l0s_en = false;
+
+ if (l0s_en)
+ SET_HW_FLAG(L0S_EN);
+ else
+ CLI_HW_FLAG(L0S_EN);
+
+ if (!CHK_HW_FLAG(L1_CAP))
+ l1_en = false;
+
+ if (l1_en)
+ SET_HW_FLAG(L1_EN);
+ else
+ CLI_HW_FLAG(L1_EN);
+
+ if (l1f_enable_aspm(hw, l0s_en, l1_en, 0)) {
+ alx_hw_err(hw, "error when enabling ASPM\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+int alf_config_wol(struct alx_hw *hw, u32 wufc)
+{
+ u32 wol;
+ int retval = 0;
+
+ wol = 0;
+ /* turn on magic packet event */
+ if (wufc & ALX_WOL_MAGIC) {
+ wol |= L1F_WOL0_MAGIC_EN | L1F_WOL0_PME_MAGIC_EN;
+ /* a magic packet may arrive in a broadcast, multicast or
+ * unicast frame */
+ /* mac |= MAC_CTRL_BC_EN; */
+ }
+
+ /* turn on link up event */
+ if (wufc & ALX_WOL_PHY) {
+ wol |= L1F_WOL0_LINK_EN | L1F_WOL0_PME_LINK;
+ /* only link up can wake up */
+ retval = alf_write_phy_reg(hw, L1F_MII_IER, L1F_IER_LINK_UP);
+ }
+ alx_mem_w32(hw, L1F_WOL0, wol);
+ return retval;
+}
+
+void alf_update_mac_filter(struct alx_hw *hw)
+{
+ u32 mac;
+ u32 flg_hw_map[] = {
+ ALX_HW_FLAG_BROADCAST_EN, L1F_MAC_CTRL_BRD_EN,
+ ALX_HW_FLAG_VLANSTRIP_EN, L1F_MAC_CTRL_VLANSTRIP,
+ ALX_HW_FLAG_PROMISC_EN, L1F_MAC_CTRL_PROMISC_EN,
+ ALX_HW_FLAG_MULTIALL_EN, L1F_MAC_CTRL_MULTIALL_EN,
+ ALX_HW_FLAG_LOOPBACK_EN, L1F_MAC_CTRL_LPBACK_EN
+ };
+ int i;
+
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac);
+
+ for (i = 0; i < ARRAY_SIZE(flg_hw_map); i += 2) {
+ if (hw->flags & flg_hw_map[i])
+ mac |= flg_hw_map[i + 1];
+ else
+ mac &= ~flg_hw_map[i + 1];
+ }
+
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac);
+}
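The filter update above relies on a pairwise table: even entries are driver flags, odd entries the corresponding MAC_CTRL bits. A minimal standalone sketch of the same technique follows; the flag and bit values are invented for illustration, not the real ALX/L1F definitions:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* Illustrative stand-ins for the driver's flag and register-bit values */
#define SW_BROADCAST  0x01u
#define SW_PROMISC    0x02u
#define HW_BRD_EN     0x100u
#define HW_PROMISC_EN 0x200u

/* Even index: software flag; odd index: matching hardware bit */
static const uint32_t flg_hw_map[] = {
	SW_BROADCAST, HW_BRD_EN,
	SW_PROMISC,   HW_PROMISC_EN,
};

/* Mirror each set/clear software flag into the MAC control value */
static uint32_t apply_filter_flags(uint32_t mac, uint32_t sw_flags)
{
	size_t i;

	for (i = 0; i < sizeof(flg_hw_map) / sizeof(flg_hw_map[0]); i += 2) {
		if (sw_flags & flg_hw_map[i])
			mac |= flg_hw_map[i + 1];
		else
			mac &= ~flg_hw_map[i + 1];
	}
	return mac;
}
```

Note the loop both sets and clears bits, so a stale hardware bit is dropped when its software flag is no longer set.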
+
+
+int alf_config_pow_save(struct alx_hw *hw, u8 speed, bool wol_en,
+ bool tx_en, bool rx_en, bool pws_en)
+{
+ int retval = 0;
+
+ if (l1f_powersaving(hw, speed, wol_en, tx_en, rx_en, pws_en)) {
+ alx_hw_err(hw, "error when setting power saving\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+/* RAR, Multicast, VLAN */
+void alf_set_mac_addr(struct alx_hw *hw, u8 *addr)
+{
+ u32 sta;
+
+ /*
+ * for example: 00-0B-6A-F6-00-DC
+ * 0<-->6AF600DC, 1<-->000B.
+ */
+
+ /* low dword */
+ sta = addr[2] << 24 | addr[3] << 16 | addr[4] << 8 | addr[5];
+ alx_mem_w32(hw, L1F_STAD0, sta);
+
+ /* high dword */
+ sta = addr[0] << 8 | addr[1];
+ alx_mem_w32(hw, L1F_STAD1, sta);
+}
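The dword packing in alf_set_mac_addr() can be checked in isolation. This sketch (helper name invented here) reproduces the byte layout from the comment above, where 00-0B-6A-F6-00-DC maps to 0x6AF600DC in the low dword and 0x000B in the high dword:

```c
#include <stdint.h>

/* Pack a 6-byte MAC address into the two station-address dwords:
 * bytes 2..5 form the low dword, bytes 0..1 the high dword. */
static void pack_mac(const uint8_t addr[6], uint32_t *lo, uint32_t *hi)
{
	*lo = (uint32_t)addr[2] << 24 | (uint32_t)addr[3] << 16 |
	      (uint32_t)addr[4] << 8 | addr[5];
	*hi = (uint32_t)addr[0] << 8 | addr[1];
}
```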
+
+
+int alf_get_mac_addr(struct alx_hw *hw, u8 *addr)
+{
+ int retval = 0;
+
+ if (l1f_get_perm_macaddr(hw, addr)) {
+ alx_hw_err(hw, "error when getting permanent MAC address\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+int alf_set_mc_addr(struct alx_hw *hw, u8 *addr)
+{
+ u32 crc32, bit, reg, mta;
+
+ /*
+ * Set the hash-table bit for a multicast address:
+ * 1. calculate the 32-bit CRC of the multicast address
+ * 2. bit-reverse the CRC (MSB to LSB)
+ */
+ crc32 = ALX_ETH_CRC(addr, ALX_ETH_LENGTH_OF_ADDRESS);
+
+ /*
+ * The hash table is a register array of 2 32-bit registers,
+ * treated like an array of 64 bits. We want to set bit
+ * BitArray[hash_value], so we read the register holding the
+ * bit, OR in the new bit, then write back the new value. The
+ * register is selected by the most significant bit of the
+ * hash value, and the bit within that register by the next
+ * 5 bits.
+ */
+ reg = (crc32 >> 31) & 0x1;
+ bit = (crc32 >> 26) & 0x1F;
+
+ alx_mem_r32(hw, L1F_HASH_TBL0 + (reg<<2), &mta);
+ mta |= (0x1 << bit);
+ alx_mem_w32(hw, L1F_HASH_TBL0 + (reg<<2), mta);
+ return 0;
+}
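The register/bit selection above takes the top bit of the CRC as the register index and the next five bits as the bit index. A self-contained sketch of the whole path follows; the CRC routine is a stand-in modeled on the kernel's ether_crc() (MSB-first CRC-32, polynomial 0x04C11DB7), and may differ from the driver's actual ALX_ETH_CRC macro:

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in for ALX_ETH_CRC: MSB-first CRC-32 over the address bytes,
 * consuming each byte LSB first, as the kernel's ether_crc() does. */
static uint32_t ether_crc32(const uint8_t *data, size_t len)
{
	uint32_t crc = 0xFFFFFFFFu;
	size_t i;
	int b;

	for (i = 0; i < len; i++) {
		uint8_t byte = data[i];

		for (b = 0; b < 8; b++, byte >>= 1) {
			uint32_t msb = crc >> 31;

			crc <<= 1;
			if (msb ^ (byte & 1))
				crc ^= 0x04C11DB7u;
		}
	}
	return crc;
}

/* Select the hash-table register (CRC bit 31) and the bit within it
 * (CRC bits 30:26), exactly as alf_set_mc_addr() does. */
static void mc_hash_pos(const uint8_t addr[6], uint32_t *reg, uint32_t *bit)
{
	uint32_t crc = ether_crc32(addr, 6);

	*reg = (crc >> 31) & 0x1u;
	*bit = (crc >> 26) & 0x1Fu;
}
```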
+
+
+int alf_clear_mc_addr(struct alx_hw *hw)
+{
+ alx_mem_w32(hw, L1F_HASH_TBL0, 0);
+ alx_mem_w32(hw, L1F_HASH_TBL1, 0);
+ return 0;
+}
+
+
+/* RTX, IRQ */
+void alf_config_tx(struct alx_hw *hw)
+{
+ u32 wrr;
+
+ alx_mem_r32(hw, L1F_WRR, &wrr);
+ switch (hw->wrr_mode) {
+ case alx_wrr_mode_none:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_NONE);
+ break;
+ case alx_wrr_mode_high:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_HI);
+ break;
+ case alx_wrr_mode_high2:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_HI2);
+ break;
+ case alx_wrr_mode_all:
+ FIELD_SETL(wrr, L1F_WRR_PRI, L1F_WRR_PRI_RESTRICT_ALL);
+ break;
+ }
+ FIELD_SETL(wrr, L1F_WRR_PRI0, hw->wrr_prio0);
+ FIELD_SETL(wrr, L1F_WRR_PRI1, hw->wrr_prio1);
+ FIELD_SETL(wrr, L1F_WRR_PRI2, hw->wrr_prio2);
+ FIELD_SETL(wrr, L1F_WRR_PRI3, hw->wrr_prio3);
+ alx_mem_w32(hw, L1F_WRR, wrr);
+}
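FIELD_SETL() above presumably updates a contiguous bit field with a mask-and-shift. The driver's macro derives mask and shift from the field name; this hypothetical variant takes them explicitly, to show the usual shape:

```c
#include <stdint.h>

/* Hypothetical field setter in the spirit of FIELD_SETL(): clear the
 * field, then OR in the new value shifted into place and clipped by
 * the mask so it cannot overflow into neighboring fields. */
#define FIELD_SET32(dst, mask, shift, val) \
	((dst) = ((dst) & ~(uint32_t)(mask)) | \
	 (((uint32_t)(val) << (shift)) & (uint32_t)(mask)))
```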
+
+
+int alf_config_msix(struct alx_hw *hw, u16 num_intrs, bool msix_en, bool msi_en)
+{
+ u32 map[2];
+ u32 type;
+ int msix_idx;
+
+ if (!msix_en)
+ goto configure_legacy;
+
+ memset(map, 0, sizeof(map));
+ for (msix_idx = 0; msix_idx < num_intrs; msix_idx++) {
+ switch (msix_idx) {
+ case ALF_MSIX_INDEX_RXQ0:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ0,
+ ALF_MSIX_INDEX_RXQ0);
+ break;
+ case ALF_MSIX_INDEX_RXQ1:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ1,
+ ALF_MSIX_INDEX_RXQ1);
+ break;
+ case ALF_MSIX_INDEX_RXQ2:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ2,
+ ALF_MSIX_INDEX_RXQ2);
+ break;
+ case ALF_MSIX_INDEX_RXQ3:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_RXQ3,
+ ALF_MSIX_INDEX_RXQ3);
+ break;
+ case ALF_MSIX_INDEX_RXQ4:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ4,
+ ALF_MSIX_INDEX_RXQ4);
+ break;
+ case ALF_MSIX_INDEX_RXQ5:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ5,
+ ALF_MSIX_INDEX_RXQ5);
+ break;
+ case ALF_MSIX_INDEX_RXQ6:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ6,
+ ALF_MSIX_INDEX_RXQ6);
+ break;
+ case ALF_MSIX_INDEX_RXQ7:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_RXQ7,
+ ALF_MSIX_INDEX_RXQ7);
+ break;
+ case ALF_MSIX_INDEX_TXQ0:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_TXQ0,
+ ALF_MSIX_INDEX_TXQ0);
+ break;
+ case ALF_MSIX_INDEX_TXQ1:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_TXQ1,
+ ALF_MSIX_INDEX_TXQ1);
+ break;
+ case ALF_MSIX_INDEX_TXQ2:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_TXQ2,
+ ALF_MSIX_INDEX_TXQ2);
+ break;
+ case ALF_MSIX_INDEX_TXQ3:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_TXQ3,
+ ALF_MSIX_INDEX_TXQ3);
+ break;
+ case ALF_MSIX_INDEX_TIMER:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_TIMER,
+ ALF_MSIX_INDEX_TIMER);
+ break;
+ case ALF_MSIX_INDEX_ALERT:
+ FIELD_SETL(map[0], L1F_MSI_MAP_TBL1_ALERT,
+ ALF_MSIX_INDEX_ALERT);
+ break;
+ case ALF_MSIX_INDEX_SMB:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_SMB,
+ ALF_MSIX_INDEX_SMB);
+ break;
+ case ALF_MSIX_INDEX_PHY:
+ FIELD_SETL(map[1], L1F_MSI_MAP_TBL2_PHY,
+ ALF_MSIX_INDEX_PHY);
+ break;
+ default:
+ break;
+ }
+ }
+
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL1, map[0]);
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL2, map[1]);
+
+ /* 0 to alert, 1 to timer */
+ type = (L1F_MSI_ID_MAP_DMAW |
+ L1F_MSI_ID_MAP_DMAR |
+ L1F_MSI_ID_MAP_PCIELNKDW |
+ L1F_MSI_ID_MAP_PCIECERR |
+ L1F_MSI_ID_MAP_PCIENFERR |
+ L1F_MSI_ID_MAP_PCIEFERR |
+ L1F_MSI_ID_MAP_PCIEUR);
+
+ alx_mem_w32(hw, L1F_MSI_ID_MAP, type);
+ return 0;
+
+configure_legacy:
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL1, 0x0);
+ alx_mem_w32(hw, L1F_MSI_MAP_TBL2, 0x0);
+ alx_mem_w32(hw, L1F_MSI_ID_MAP, 0x0);
+ if (msi_en) {
+ u32 msi;
+ alx_mem_r32(hw, 0x1920, &msi);
+ msi |= 0x10000;
+ alx_mem_w32(hw, 0x1920, msi);
+ }
+ return 0;
+}
+
+
+/*
+ * Interrupt
+ */
+int alf_ack_phy_intr(struct alx_hw *hw)
+{
+ u16 isr;
+ return alf_read_phy_reg(hw, L1F_MII_ISR, &isr);
+}
+
+
+int alf_enable_legacy_intr(struct alx_hw *hw)
+{
+ u16 cmd;
+
+ alx_cfg_r16(hw, PCI_COMMAND, &cmd);
+ cmd &= ~PCI_COMMAND_INTX_DISABLE;
+ alx_cfg_w16(hw, PCI_COMMAND, cmd);
+
+ alx_mem_w32(hw, L1F_ISR, ~((u32) L1F_ISR_DIS));
+ alx_mem_w32(hw, L1F_IMR, hw->intr_mask);
+ return 0;
+}
+
+
+int alf_disable_legacy_intr(struct alx_hw *hw)
+{
+ alx_mem_w32(hw, L1F_ISR, L1F_ISR_DIS);
+ alx_mem_w32(hw, L1F_IMR, 0);
+ alx_mem_flush(hw);
+ return 0;
+}
+
+
+int alf_enable_msix_intr(struct alx_hw *hw, u8 entry_idx)
+{
+ u32 ctrl_reg;
+
+ ctrl_reg = ALF_MSIX_ENTRY_BASE + (entry_idx * ALF_MSIX_ENTRY_SIZE) +
+ ALF_MSIX_MSG_CTRL_OFF;
+
+ alx_mem_w32(hw, ctrl_reg, 0x0);
+ alx_mem_flush(hw);
+ return 0;
+}
+
+
+int alf_disable_msix_intr(struct alx_hw *hw, u8 entry_idx)
+{
+ u32 ctrl_reg;
+
+ ctrl_reg = ALF_MSIX_ENTRY_BASE + (entry_idx * ALF_MSIX_ENTRY_SIZE) +
+ ALF_MSIX_MSG_CTRL_OFF;
+
+ alx_mem_w32(hw, ctrl_reg, 0x1);
+ alx_mem_flush(hw);
+ return 0;
+}
+
+
+/* RSS */
+int alf_config_rss(struct alx_hw *hw, bool rss_en)
+{
+ int key_len_by_u8 = sizeof(hw->rss_key);
+ int idt_len_by_u32 = sizeof(hw->rss_idt) / sizeof(u32);
+ u32 rxq0;
+ int i;
+
+ /* Fill out hash function keys */
+ for (i = 0; i < key_len_by_u8; i++) {
+ alx_mem_w8(hw, ALF_RSS_KEY(i, u8),
+ hw->rss_key[key_len_by_u8 - i - 1]);
+ }
+
+ /* Fill out redirection table */
+ for (i = 0; i < idt_len_by_u32; i++)
+ alx_mem_w32(hw, ALF_RSS_TBL(i, u32), hw->rss_idt[i]);
+
+ alx_mem_w32(hw, L1F_RSS_BASE_CPU_NUM, hw->rss_base_cpu);
+
+ alx_mem_r32(hw, L1F_RXQ0, &rxq0);
+ if (hw->rss_hstype & ALX_RSS_HSTYP_IPV4_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV4_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV4_EN;
+
+ if (hw->rss_hstype & ALX_RSS_HSTYP_TCP4_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN;
+
+ if (hw->rss_hstype & ALX_RSS_HSTYP_IPV6_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV6_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV6_EN;
+
+ if (hw->rss_hstype & ALX_RSS_HSTYP_TCP6_EN)
+ rxq0 |= L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN;
+
+ FIELD_SETL(rxq0, L1F_RXQ0_RSS_MODE, hw->rss_mode);
+ FIELD_SETL(rxq0, L1F_RXQ0_IDT_TBL_SIZE, hw->rss_idt_size);
+
+ if (rss_en)
+ rxq0 |= L1F_RXQ0_RSS_HASH_EN;
+ else
+ rxq0 &= ~L1F_RXQ0_RSS_HASH_EN;
+
+ alx_mem_w32(hw, L1F_RXQ0, rxq0);
+ return 0;
+}
+
+
+/* fc */
+static int alf_get_fc_mode(struct alx_hw *hw, enum alx_fc_mode *mode)
+{
+ u16 bmsr, giga;
+ int i;
+ int retval = 0;
+
+ for (i = 0; i < ALX_MAX_SETUP_LNK_CYCLE; i++) {
+ alf_read_phy_reg(hw, MII_BMSR, &bmsr);
+ alf_read_phy_reg(hw, MII_BMSR, &bmsr);
+ if (bmsr & BMSR_LSTATUS) {
+ /* Read phy Specific Status Register (17) */
+ retval = alf_read_phy_reg(hw, L1F_MII_GIGA_PSSR, &giga);
+ if (retval)
+ return retval;
+
+ if (!(giga & L1F_GIGA_PSSR_SPD_DPLX_RESOLVED)) {
+ alx_hw_err(hw,
+ "error for speed duplex resolved\n");
+ return -EINVAL;
+ }
+
+ if ((giga & L1F_GIGA_PSSR_FC_TXEN) &&
+ (giga & L1F_GIGA_PSSR_FC_RXEN)) {
+ *mode = alx_fc_full;
+ } else if (giga & L1F_GIGA_PSSR_FC_TXEN) {
+ *mode = alx_fc_tx_pause;
+ } else if (giga & L1F_GIGA_PSSR_FC_RXEN) {
+ *mode = alx_fc_rx_pause;
+ } else {
+ *mode = alx_fc_none;
+ }
+ break;
+ }
+ mdelay(100);
+ }
+
+ if (i == ALX_MAX_SETUP_LNK_CYCLE) {
+ alx_hw_err(hw, "error when getting flow control mode\n");
+ retval = -EINVAL;
+ }
+ return retval;
+}
+
+
+int alf_config_fc(struct alx_hw *hw)
+{
+ u32 mac;
+ int retval = 0;
+
+ if (hw->disable_fc_autoneg) {
+ hw->fc_was_autonegged = false;
+ hw->cur_fc_mode = hw->req_fc_mode;
+ } else {
+ hw->fc_was_autonegged = true;
+ retval = alf_get_fc_mode(hw, &hw->cur_fc_mode);
+ if (retval)
+ return retval;
+ }
+
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac);
+
+ switch (hw->cur_fc_mode) {
+ case alx_fc_none: /* 0 */
+ mac &= ~(L1F_MAC_CTRL_RXFC_EN | L1F_MAC_CTRL_TXFC_EN);
+ break;
+ case alx_fc_rx_pause: /* 1 */
+ mac &= ~L1F_MAC_CTRL_TXFC_EN;
+ mac |= L1F_MAC_CTRL_RXFC_EN;
+ break;
+ case alx_fc_tx_pause: /* 2 */
+ mac |= L1F_MAC_CTRL_TXFC_EN;
+ mac &= ~L1F_MAC_CTRL_RXFC_EN;
+ break;
+ case alx_fc_full: /* 3 */
+ case alx_fc_default: /* 4 */
+ mac |= (L1F_MAC_CTRL_TXFC_EN | L1F_MAC_CTRL_RXFC_EN);
+ break;
+ default:
+ alx_hw_err(hw, "flow control param set incorrectly\n");
+ return -EINVAL;
+ }
+
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac);
+
+ return retval;
+}
+
+
+/* ethtool */
+void alf_get_ethtool_regs(struct alx_hw *hw, void *buff)
+{
+ int i;
+ u32 *val = buff;
+ static const u32 reg[] = {
+ /* 0 */
+ L1F_DEV_CAP, L1F_DEV_CTRL, L1F_LNK_CAP, L1F_LNK_CTRL,
+ L1F_UE_SVRT, L1F_EFLD, L1F_SLD, L1F_PPHY_MISC1,
+ L1F_PPHY_MISC2, L1F_PDLL_TRNS1,
+
+ /* 10 */
+ L1F_TLEXTN_STATS, L1F_EFUSE_CTRL, L1F_EFUSE_DATA, L1F_SPI_OP1,
+ L1F_SPI_OP2, L1F_SPI_OP3, L1F_EF_CTRL, L1F_EF_ADDR,
+ L1F_EF_DATA, L1F_SPI_ID,
+
+ /* 20 */
+ L1F_SPI_CFG_START, L1F_PMCTRL, L1F_LTSSM_CTRL, L1F_MASTER,
+ L1F_MANU_TIMER, L1F_IRQ_MODU_TIMER, L1F_PHY_CTRL, L1F_MAC_STS,
+ L1F_MDIO, L1F_MDIO_EXTN,
+
+ /* 30 */
+ L1F_PHY_STS, L1F_BIST0, L1F_BIST1, L1F_SERDES,
+ L1F_LED_CTRL, L1F_LED_PATN, L1F_LED_PATN2, L1F_SYSALV,
+ L1F_PCIERR_INST, L1F_LPI_DECISN_TIMER,
+
+ /* 40 */
+ L1F_LPI_CTRL, L1F_LPI_WAIT, L1F_HRTBT_VLAN, L1F_HRTBT_CTRL,
+ L1F_RXPARSE, L1F_MAC_CTRL, L1F_GAP, L1F_STAD1,
+ L1F_LED_CTRL, L1F_HASH_TBL0,
+
+ /* 50 */
+ L1F_HASH_TBL1, L1F_HALFD, L1F_DMA, L1F_WOL0,
+ L1F_WOL1, L1F_WOL2, L1F_WRR, L1F_HQTPD,
+ L1F_CPUMAP1, L1F_CPUMAP2,
+
+ /* 60 */
+ L1F_MISC, L1F_RX_BASE_ADDR_HI, L1F_RFD_ADDR_LO, L1F_RFD_RING_SZ,
+ L1F_RFD_BUF_SZ, L1F_RRD_ADDR_LO, L1F_RRD_RING_SZ,
+ L1F_RFD_PIDX, L1F_RFD_CIDX, L1F_RXQ0,
+
+ /* 70 */
+ L1F_RXQ1, L1F_RXQ2, L1F_RXQ3, L1F_TX_BASE_ADDR_HI,
+ L1F_TPD_PRI0_ADDR_LO, L1F_TPD_PRI1_ADDR_LO,
+ L1F_TPD_PRI2_ADDR_LO, L1F_TPD_PRI3_ADDR_LO,
+ L1F_TPD_PRI0_PIDX, L1F_TPD_PRI1_PIDX,
+
+ /* 80 */
+ L1F_TPD_PRI2_PIDX, L1F_TPD_PRI3_PIDX, L1F_TPD_PRI0_CIDX,
+ L1F_TPD_PRI1_CIDX, L1F_TPD_PRI2_CIDX, L1F_TPD_PRI3_CIDX,
+ L1F_TPD_RING_SZ, L1F_TXQ0, L1F_TXQ1, L1F_TXQ2,
+
+ /* 90 */
+ L1F_MSI_MAP_TBL1, L1F_MSI_MAP_TBL2, L1F_MSI_ID_MAP,
+ L1F_MSIX_MASK, L1F_MSIX_PENDING,
+ };
+
+ for (i = 0; i < ARRAY_SIZE(reg); i++)
+ alx_mem_r32(hw, reg[i], &val[i]);
+
+ /* SRAM */
+ for (i = 0; i < 16; i++)
+ alx_mem_r32(hw, ALF_SRAM(i, u32), &val[100 + i]);
+
+ /* RSS */
+ for (i = 0; i < 10; i++)
+ alx_mem_r32(hw, ALF_RSS_KEY(i, u32), &val[120 + i]);
+ for (i = 0; i < 32; i++)
+ alx_mem_r32(hw, ALF_RSS_TBL(i, u32), &val[130 + i]);
+ alx_mem_r32(hw, L1F_RSS_HASH_VAL, &val[162]);
+ alx_mem_r32(hw, L1F_RSS_HASH_FLAG, &val[163]);
+ alx_mem_r32(hw, L1F_RSS_BASE_CPU_NUM, &val[164]);
+
+ /* MIB */
+ for (i = 0; i < 48; i++)
+ alx_mem_r32(hw, ALF_MIB(i, u32), &val[170 + i]);
+}
+
+
+/******************************************************************************/
+static void alf_set_hw_capabilities(struct alx_hw *hw)
+{
+ SET_HW_FLAG(L0S_CAP);
+ SET_HW_FLAG(L1_CAP);
+
+ if (hw->mac_type == alx_mac_l1f)
+ SET_HW_FLAG(GIGA_CAP);
+
+ /* set flags of alx_phy_info */
+ SET_HW_FLAG(PWSAVE_CAP);
+}
+
+
+static void alf_set_hw_infos(struct alx_hw *hw)
+{
+ hw->rxstat_reg = L1F_MIB_RX_OK;
+ hw->rxstat_sz = 0x60;
+ hw->txstat_reg = L1F_MIB_TX_OK;
+ hw->txstat_sz = 0x68;
+
+ hw->rx_prod_reg[0] = L1F_RFD_PIDX;
+ hw->rx_cons_reg[0] = L1F_RFD_CIDX;
+
+ hw->tx_prod_reg[0] = L1F_TPD_PRI0_PIDX;
+ hw->tx_cons_reg[0] = L1F_TPD_PRI0_CIDX;
+ hw->tx_prod_reg[1] = L1F_TPD_PRI1_PIDX;
+ hw->tx_cons_reg[1] = L1F_TPD_PRI1_CIDX;
+ hw->tx_prod_reg[2] = L1F_TPD_PRI2_PIDX;
+ hw->tx_cons_reg[2] = L1F_TPD_PRI2_CIDX;
+ hw->tx_prod_reg[3] = L1F_TPD_PRI3_PIDX;
+ hw->tx_cons_reg[3] = L1F_TPD_PRI3_CIDX;
+
+ hw->hwreg_sz = 0x200;
+ hw->eeprom_sz = 0;
+}
+
+
+/*
+ * alf_init_hw
+ */
+void alf_init_hw(struct alx_hw *hw)
+{
+ alf_set_hw_capabilities(hw);
+ alf_set_hw_infos(hw);
+
+ alx_hw_info(hw, "HW Flags = 0x%x\n", hw->flags);
+}
diff --git a/drivers/net/ethernet/atheros/alx/alx_cifs.c b/drivers/net/ethernet/atheros/alx/alx_cifs.c
new file mode 100644
index 0000000..864747d
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_cifs.c
@@ -0,0 +1,307 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include "alx.h"
+#include "alx_hw.h"
+
+/*
+ * Desc: CIFS offload, configure the ANNOUNCEMENT message in the adapter.
+ */
+static void alf_setup_cifs(struct alx_hw *hw)
+{
+ u32 ctrl;
+ u32 i;
+ struct alx_swoi *swoi = &hw->swoi_offload;
+
+ if (!swoi->len) {
+ alx_hw_err(hw, "CIFS offload disabled.\n");
+ return;
+ }
+
+ /* Write ANNOUNCEMENT message to SRAM. */
+ for (i = 0; i < swoi->len; i += 4) {
+ alx_mem_w32(hw, hw->annce_addr_off + i,
+ *(u32 *) &swoi->msg[i]);
+ }
+
+ ctrl = 0;
+ FIELD_SETL(ctrl, L1F_HRTBT_CTRL_PKTLEN, swoi->len);
+ FIELD_SETL(ctrl, L1F_HRTBT_CTRL_HDRADDR,
+ ((hw->annce_addr_off - PACKET_MAP_ADDR) >> 3));
+ if (swoi->is_vlan)
+ ctrl |= L1F_HRTBT_CTRL_HASVLAN;
+
+ /* The announcement packet send period. One register unit
+ * is 2 seconds: period_in_seconds = (field_value + 1) * 2.
+ */
+ FIELD_SETL(ctrl, L1F_HRTBT_CTRL_PERIOD, swoi->period / 2 - 1);
+ ctrl |= L1F_HRTBT_CTRL_EN;
+ alx_mem_w32(hw, L1F_HRTBT_CTRL, ctrl);
+ alx_hw_info(hw,
+ "ANNOUNCEMENT message annce_base_addr[0x%08x], len%d, "
+ "%s vlan, Reg%04x-%08x\n",
+ (hw->annce_addr_off - PACKET_MAP_ADDR) >> 3, swoi->len,
+ swoi->is_vlan ? "has" : "no", L1F_HRTBT_CTRL, ctrl);
+
+ hw->annce_addr_off =
+ (hw->annce_addr_off + swoi->len + 0x0F) & 0xFFFFFFF0;
+
+ /* Enable magic because cifs needs it. */
+ alx_mem_r32(hw, L1F_WOL0, &ctrl);
+ ctrl |= (L1F_WOL0_MAGIC_EN | L1F_WOL0_PME_MAGIC_EN);
+
+ alx_mem_w32(hw, L1F_WOL0, ctrl);
+}
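Two small arithmetic conventions recur in these offload routines: the announcement offset is rounded up to the next 16-byte boundary after each message is written, and the heartbeat period field stores period/2 - 1 (one field unit = 2 seconds). A sketch, with the helper names invented here:

```c
#include <stdint.h>

/* Round an SRAM offset up to the next 16-byte boundary, mirroring
 * (off + 0x0F) & 0xFFFFFFF0 in the setup routines above. */
static uint32_t align16(uint32_t off)
{
	return (off + 0x0Fu) & 0xFFFFFFF0u;
}

/* Encode a send period in seconds into the PERIOD field:
 * one field unit represents 2 seconds, so field = period/2 - 1. */
static uint32_t encode_period(uint32_t seconds)
{
	return seconds / 2 - 1;
}
```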
+
+
+/*
+ * Desc: SWOI offload, configure the ANNOUNCEMENT message in the adapter.
+ */
+static void alf_setup_swoi(struct alx_hw *hw)
+{
+ u32 ctrl;
+ u32 i;
+ struct alx_swoi *swoi = &hw->swoi_offload;
+
+ if (!swoi->len) {
+ alx_hw_err(hw, "SWOI offload disabled.\n");
+ return;
+ }
+
+ /* Write ANNOUNCEMENT message to SRAM. */
+ for (i = 0; i < swoi->total_len; i += 4) {
+ alx_mem_w32(hw, hw->annce_addr_off + i,
+ *(u32 *) &swoi->msg[i]);
+ }
+
+ ctrl = 0;
+ FIELD_SETL(ctrl, L1F_HRTBT_CTRL_PKTLEN, swoi->len);
+ FIELD_SETL(ctrl, L1F_HRTBT_CTRL_HDRADDRB0,
+ ((hw->annce_addr_off - PACKET_MAP_ADDR) >> 4));
+ if (swoi->fraged)
+ ctrl |= L1F_HRTBT_CTRL_PKT_FRAG;
+
+ if (swoi->is_vlan)
+ ctrl |= L1F_HRTBT_CTRL_HASVLAN;
+
+ /* The announcement packet send period. One register unit
+ * is 2 seconds: period_in_seconds = (field_value + 1) * 2.
+ */
+ FIELD_SETL(ctrl, L1F_HRTBT_CTRL_PERIOD, swoi->period / 2 - 1);
+ ctrl |= L1F_HRTBT_CTRL_EN;
+ alx_mem_w32(hw, L1F_HRTBT_CTRL, ctrl);
+ alx_hw_info(hw,
+ "ANNOUNCEMENT message annce_base_addr[0x%08x], len%d, "
+ "%s frag, %s vlan, Reg%04x-%08x\n",
+ (hw->annce_addr_off - PACKET_MAP_ADDR) >> 3, swoi->len,
+ swoi->fraged ? "has" : "no", swoi->is_vlan ? "has" : "no",
+ L1F_HRTBT_CTRL, ctrl);
+
+ ctrl = 0;
+ FIELD_SETL(ctrl, L1F_HRTBT_EXT_CTRL_FRAG_LEN, swoi->frag_len);
+ if (swoi->pkt_is_8023)
+ ctrl |= L1F_HRTBT_EXT_CTRL_IS_8023;
+ if (swoi->pkt_is_ipv6)
+ ctrl |= L1F_HRTBT_EXT_CTRL_IS_IPV6;
+
+ ctrl |= L1F_HRTBT_EXT_CTRL_WAKEUP_EN | (swoi->pkt_is_ipv6 ?
+ L1F_HRTBT_EXT_CTRL_NS_EN :
+ L1F_HRTBT_EXT_CTRL_ARP_EN);
+ alx_mem_w32(hw, L1F_HRTBT_EXT_CTRL, ctrl);
+ alx_mem_r32(hw, L1F_PMOFLD, &ctrl);
+ /* Config host mac address. */
+ alx_mem_r32(hw, L1F_STAD0, &i);
+ alx_mem_w32(hw, L1F_ARP_MAC0, i);
+ /* alx_mem_w32(hw, L1F_1ST_NS_MAC0, i); */
+ alx_mem_w32(hw, L1F_2ND_NS_MAC0, i);
+ alx_mem_r32(hw, L1F_STAD1, &i);
+ alx_mem_w32(hw, L1F_ARP_MAC1, i);
+ /* alx_mem_w32(hw, L1F_1ST_NS_MAC1, i); */
+ alx_mem_w32(hw, L1F_2ND_NS_MAC1, i);
+
+ if (swoi->pkt_is_ipv6) {
+ alx_mem_w32(hw, L1F_HRTBT_REM_IPV6_ADDR3,
+ swoi->svr_ipv6_addr[0]);
+ alx_mem_w32(hw, L1F_HRTBT_REM_IPV6_ADDR2,
+ swoi->svr_ipv6_addr[1]);
+ alx_mem_w32(hw, L1F_HRTBT_REM_IPV6_ADDR1,
+ swoi->svr_ipv6_addr[2]);
+ alx_mem_w32(hw, L1F_HRTBT_REM_IPV6_ADDR0,
+ swoi->svr_ipv6_addr[3]);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_1_3,
+ swoi->host_ipv6_addr[0]);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_1_2,
+ swoi->host_ipv6_addr[1]);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_1_1,
+ swoi->host_ipv6_addr[2]);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_1_0,
+ swoi->host_ipv6_addr[3]);
+ alx_mem_w32(hw, L1F_2ND_SN_IPV6_3, swoi->host_ipv6_addr[0]);
+ alx_mem_w32(hw, L1F_2ND_SN_IPV6_2, swoi->host_ipv6_addr[1]);
+ alx_mem_w32(hw, L1F_2ND_SN_IPV6_1, swoi->host_ipv6_addr[2]);
+ alx_mem_w32(hw, L1F_2ND_SN_IPV6_0, swoi->host_ipv6_addr[3]);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_2_0, 0);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_2_1, 0);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_2_2, 0);
+ alx_mem_w32(hw, L1F_2ND_TAR_IPV6_2_3, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_1_0, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_1_1, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_1_2, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_1_3, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_2_0, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_2_1, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_2_2, 0);
+ alx_mem_w32(hw, L1F_1ST_TAR_IPV6_2_3, 0);
+ if (!(L1F_PMOFLD_BY_HW & ctrl)) {
+ /* no pm offload, we need config. */
+ alx_mem_r32(hw, L1F_MAC_CTRL, &ctrl);
+ alx_mem_w32(hw, L1F_MAC_CTRL,
+ ctrl | L1F_MAC_CTRL_MULTIALL_EN);
+ } else {
+ if (!(L1F_PMOFLD_MULTI_SOLD & ctrl)) {
+ /* MS PM offload */
+ alx_mem_w32(hw, L1F_ARP_REMOTE_IPV4, 0);
+ alx_mem_w32(hw, L1F_1ST_REMOTE_IPV6_0, 0);
+ alx_mem_w32(hw, L1F_1ST_REMOTE_IPV6_1, 0);
+ alx_mem_w32(hw, L1F_1ST_REMOTE_IPV6_2, 0);
+ alx_mem_w32(hw, L1F_1ST_REMOTE_IPV6_3, 0);
+ alx_mem_w32(hw, L1F_2ND_REMOTE_IPV6_0, 0);
+ alx_mem_w32(hw, L1F_2ND_REMOTE_IPV6_1, 0);
+ alx_mem_w32(hw, L1F_2ND_REMOTE_IPV6_2, 0);
+ alx_mem_w32(hw, L1F_2ND_REMOTE_IPV6_3, 0);
+ }
+ }
+ } else {
+ alx_mem_w32(hw, L1F_HRTBT_REM_IPV4_ADDR, swoi->svr_ipv4_addr);
+ alx_mem_w32(hw, L1F_HRTBT_HOST_IPV4_ADDR,
+ swoi->host_ipv4_addr);
+ alx_mem_w32(hw, L1F_ARP_HOST_IPV4, swoi->host_ipv4_addr);
+ }
+
+ ctrl = 0;
+ FIELD_SETL(ctrl, L1F_HRTBT_WAKEUP_PORT_SRC, swoi->svr_port);
+ FIELD_SETL(ctrl, L1F_HRTBT_WAKEUP_PORT_DEST, swoi->host_port);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_PORT, ctrl);
+ /* NIC will compare the received encrypted nonce
+ * with the following data:
+ */
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA7, swoi->wakeup_data[0]);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA6, swoi->wakeup_data[1]);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA5, swoi->wakeup_data[2]);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA4, swoi->wakeup_data[3]);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA3, swoi->wakeup_data[4]);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA2, swoi->wakeup_data[5]);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA1, swoi->wakeup_data[6]);
+ alx_mem_w32(hw, L1F_HRTBT_WAKEUP_DATA0, swoi->wakeup_data[7]);
+ hw->annce_addr_off =
+ (hw->annce_addr_off + swoi->total_len + 0x0F) & 0xFFFFFFF0;
+}
+
+
+/*
+ * Desc: Teredo offload, configure the Teredo message in the adapter.
+ */
+static void alf_setup_teredo(struct alx_hw *hw)
+{
+ u32 ctrl;
+ u32 i;
+ struct alx_teredo *trd = &hw->teredo_offload;
+
+ if (!trd->len) {
+ alx_hw_err(hw, "TEREDO offload disabled.\n");
+ return;
+ }
+
+ /* Write TEREDO message to SRAM. */
+ for (i = 0; i < trd->len; i += 4) {
+ alx_mem_w32(hw, hw->annce_addr_off + i,
+ *(u32 *) &trd->msg[i]);
+ }
+
+ ctrl = 0;
+ ctrl |= (L1F_TRD_CTRL_EN | L1F_TRD_CTRL_BUBBLE_WAKE_EN);
+ FIELD_SETL(ctrl, L1F_TRD_CTRL_RSHDR_ADDR,
+ ((hw->annce_addr_off - PACKET_MAP_ADDR) >> 3));
+ FIELD_SETL(ctrl, L1F_TRD_CTRL_SINTV_MAX, trd->intv_max);
+ FIELD_SETL(ctrl, L1F_TRD_CTRL_SINTV_MIN, trd->intv_min);
+ alx_mem_w32(hw, L1F_TRD_CTRL, ctrl);
+ alx_hw_info(hw, "TEREDO message annce_base_addr[0x%08x], len%d\n",
+ (hw->annce_addr_off - PACKET_MAP_ADDR) >> 3,
+ trd->len);
+
+ ctrl = 0;
+ FIELD_SETL(ctrl, L1F_TRD_RS_SZ, trd->len);
+ /* FIELD_SETL(ctrl, L1F_TRD_RS_NONCE_OFS, trd->nonce_off); */
+ FIELD_SETL(ctrl, L1F_TRD_RS_SEQ_OFS, trd->ipv4_id_off);
+ alx_mem_w32(hw, L1F_TRD_RS, ctrl);
+
+ alx_mem_w32(hw, L1F_TRD_SRV_IP4, trd->svr_ip4);
+ alx_mem_w32(hw, L1F_TRD_CLNT_EXTNL_IP4, trd->clt_external_ip4);
+ ctrl = 0;
+ FIELD_SETL(ctrl, L1F_TRD_PORT_CLNT_EXTNL, trd->clt_external_port);
+ FIELD_SETL(ctrl, L1F_TRD_PORT_SRV, trd->svr_port);
+ alx_mem_w32(hw, L1F_TRD_PORT, ctrl);
+ alx_mem_w32(hw, L1F_TRD_PREFIX, trd->prefix);
+ alx_mem_w32(hw, L1F_TRD_BUBBLE_DA_IP4, trd->clt_ip4);
+ alx_mem_w32(hw, L1F_TRD_BUBBLE_DA_PORT, trd->clt_port);
+
+ hw->annce_addr_off =
+ (hw->annce_addr_off + trd->len + 0x0F) & 0xFFFFFFF0;
+}
+
+
+int alx_setup_annce(struct alx_adapter *adpt, u32 wire_speed)
+{
+ struct alx_hw *hw = &adpt->hw;
+ u32 ctrl;
+ struct alx_swoi *swoi = &hw->swoi_offload;
+ struct alx_teredo *trd = &hw->teredo_offload;
+
+ hw->annce_addr_off = PACKET_MAP_ADDR + (CIFS_ANNCE_ADDR << 3);
+ if (!swoi->len && !trd->len) {
+ alx_hw_err(hw, "No cifs or teredo offload.\n");
+ return -EINVAL;
+ }
+
+ if (swoi->ver == ALX_SWOI_VER_CIFS)
+ alf_setup_cifs(hw);
+ else if (swoi->ver == ALX_SWOI_VER_SWOI)
+ alf_setup_swoi(hw);
+ alf_setup_teredo(hw);
+
+ alx_mem_r32(hw, L1F_GAP, &ctrl);
+ FIELD_SETL(ctrl, L1F_GAP_IPGT, 0x60);
+ alx_mem_w32(hw, L1F_GAP, ctrl);
+
+ alx_mem_r32(hw, L1F_MAC_CTRL, &ctrl);
+ ctrl |= (L1F_MAC_CTRL_TX_EN | L1F_MAC_CTRL_PCRCE | L1F_MAC_CTRL_CRCE);
+ alx_mem_w32(hw, L1F_MAC_CTRL, ctrl);
+
+ /* The HW won't switch the clock even if we slow it down
+ * when the wire speed is 1G. When the wire speed is 100M,
+ * we shouldn't slow it down or the HW can't send frames
+ * correctly. If the wire speed is 10M, we should slow it
+ * down to save power, and the HW still works.
+ */
+ if (wire_speed == SPEED_100) {
+ alx_mem_r32(hw, L1F_SERDES, &ctrl);
+ ctrl &= ~L1F_SERDES_MACCLK_SLWDWN;
+ alx_mem_w32(hw, L1F_SERDES, ctrl);
+ }
+
+ return 0;
+}
diff --git a/drivers/net/ethernet/atheros/alx/alx_cifs.h b/drivers/net/ethernet/atheros/alx/alx_cifs.h
new file mode 100644
index 0000000..c6eb7eb
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_cifs.h
@@ -0,0 +1,69 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_CIFS_H_
+#define _ALX_CIFS_H_
+
+#define ALX_SWOI_VER_CIFS 0x01
+#define ALX_SWOI_VER_SWOI 0x02
+#define ALX_MAX_ANNCE_LEN 1522
+
+struct alx_teredo {
+ u32 ver:4;
+ u32 prefix_hw_compare:1;
+ u32 intv_max:8;
+ u32 intv_min:8;
+ u32 ipv4_id_off:8;
+ u32 s5_wakeup:1;
+ u32 svr_ip4; /* little endian order,
+ * compatible with our NIC. */
+ u16 svr_port; /* little endian order */
+ u32 clt_ip4; /* little endian order */
+ u16 clt_port; /* little endian order */
+ u32 clt_external_ip4; /* little endian order */
+ u16 clt_external_port; /* little endian order */
+ u32 prefix; /* little endian order */
+ u32 len;
+ char msg[ALX_MAX_ANNCE_LEN];
+};
+
+struct alx_swoi {
+ u32 ver:4;
+ u32 period:6;
+ u32 is_vlan:1;
+ u32 fraged:1;
+ u32 len:12;
+ u32 pkt_is_8023:1;
+ u32 pkt_is_ipv6:1;
+ u32 pkt_wakeup_en:1;
+ u32 frag_len:16;
+ u32 svr_ipv4_addr; /* The source ipv4 address
+ * for SWOI wakeup msg. */
+ u32 host_ipv4_addr; /* little endian order */
+ u32 svr_ipv6_addr[4]; /* little endian order */
+ u32 host_ipv6_addr[4]; /* little endian order */
+ u16 svr_port; /* SWOI wakeup packet UDP source port. */
+ u16 host_port; /* little endian order */
+ u32 wakeup_data[8]; /* little endian order */
+
+ u32 total_len; /* annce_len + padding, for SW use only */
+ char msg[ALX_MAX_ANNCE_LEN];
+};
+
+struct alx_adapter;
+int alx_setup_annce(struct alx_adapter *adpt, u32 wire_speed);
+
+#endif /*_ALX_CIFS_H_*/
diff --git a/drivers/net/ethernet/atheros/alx/alx_dfs.c b/drivers/net/ethernet/atheros/alx/alx_dfs.c
new file mode 100644
index 0000000..0fdeeac
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_dfs.c
@@ -0,0 +1,878 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/ip.h>
+#include <net/ipv6.h>
+
+#include "alx.h"
+#include "alx_hw.h"
+
+
+static void alx_mem_r8(const struct alx_hw *hw, int reg, u8 *val)
+{
+ if (unlikely(!hw->link_up))
+ readl(hw->hw_addr + reg);
+ *val = readb(hw->hw_addr + reg);
+}
+
+
+static void alx_io_r32(const struct alx_hw *hw, int reg, u32 *pval)
+{
+ *pval = inl(hw->io_addr + reg);
+}
+
+
+static void alx_io_w32(const struct alx_hw *hw, int reg, u32 val)
+{
+ outl(val, hw->io_addr + reg);
+}
+
+
+int alx_flush_mac_address(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ u8 sta_addr[ALX_ETH_LENGTH_OF_ADDRESS];
+ u8 *mac_addr = hw->mac_addr;
+ int retval = 0;
+
+ if (!adpt->dfs.cfg_new_mac)
+ return 0;
+ adpt->dfs.cfg_new_mac = false;
+
+ retval = alf_get_mac_addr(hw, sta_addr);
+ if (retval)
+ return retval;
+
+ retval = alx_validate_mac_addr(sta_addr);
+ if (retval)
+ return retval;
+
+ if (memcmp(sta_addr, mac_addr, ALX_ETH_LENGTH_OF_ADDRESS)) {
+ memcpy(hw->mac_perm_addr, sta_addr, ALX_ETH_LENGTH_OF_ADDRESS);
+ alf_set_mac_addr(hw, mac_addr);
+ }
+
+ return retval;
+}
+
+/*
+ * device sub-command
+ */
+static int alx_dfs_device_inactive(struct alx_adapter *adpt, bool new_mac)
+{
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+
+ adpt->dfs.cfg_new_mac = new_mac;
+
+ if (CHK_ADPT_FLAG(STATE_INACTIVE)) {
+ alx_err(adpt, "can't be inactive again\n");
+ return -EINVAL;
+ }
+
+ SET_ADPT_FLAG(STATE_INACTIVE);
+
+ netif_tx_stop_all_queues(netdev);
+ /* call carrier off first to avoid false dev_watchdog timeouts */
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+ alx_disable_intr(adpt);
+
+ alf_stop_mac(hw);
+ return 0;
+}
+
+
+static void alx_dfs_device_reinit(struct alx_adapter *adpt)
+{
+ if (CHK_ADPT_FLAG(STATE_DOWN) ||
+ CHK_ADPT_FLAG(STATE_RESETTING)) {
+ return;
+ }
+
+ while (CHK_ADPT_FLAG(STATE_RESETTING))
+ msleep(20);
+ SET_ADPT_FLAG(STATE_RESETTING);
+
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_RESET_ALL);
+ alx_open_internal(adpt, ALX_OPEN_CTRL_RESET_ALL);
+
+ CLI_ADPT_FLAG(STATE_INACTIVE);
+ CLI_ADPT_FLAG(STATE_RESETTING);
+}
+
+
+/*
+ * Diagnostic Support
+ */
+static void alx_dfs_diag_config_packet(struct alx_adapter *adpt,
+ struct alx_diag_packet *pkt,
+ union alx_sw_rrdesc *srrd)
+{
+ u32 pid = srrd->genr.pid;
+
+ pkt->type = 0;
+ switch (pid) {
+ case 0: /* non-ip */
+ break;
+ case 1: /* ipv4(only) */
+ pkt->type |= ALX_DIAG_PKTYPE_IPV4;
+ break;
+ case 2: /* tcp/ipv6 */
+ pkt->type |= ALX_DIAG_PKTYPE_IPV6;
+ pkt->type |= ALX_DIAG_PKTYPE_TCP;
+ break;
+ case 3: /* tcp/ipv4 */
+ pkt->type |= ALX_DIAG_PKTYPE_IPV4;
+ pkt->type |= ALX_DIAG_PKTYPE_TCP;
+ break;
+ case 4: /* udp/ipv6 */
+ pkt->type |= ALX_DIAG_PKTYPE_IPV6;
+ pkt->type |= ALX_DIAG_PKTYPE_UDP;
+ break;
+ case 5: /* udp/ipv4 */
+ pkt->type |= ALX_DIAG_PKTYPE_IPV4;
+ pkt->type |= ALX_DIAG_PKTYPE_UDP;
+ break;
+ case 6: /* ipv6(only) */
+ pkt->type |= ALX_DIAG_PKTYPE_IPV6;
+ break;
+ case 7:
+ pkt->type |= ALX_DIAG_PKTYPE_LLDP;
+ break;
+ case 8:
+ pkt->type |= ALX_DIAG_PKTYPE_PTP;
+ break;
+ default: /* invalid protocol */
+ break;
+ }
+
+ if (srrd->genr.type)
+ pkt->type |= ALX_DIAG_PKTYPE_802_3;
+ else
+ pkt->type |= ALX_DIAG_PKTYPE_EII;
+
+ /* error */
+ if (srrd->genr.res) {
+ pkt->type |= ALX_DIAG_PKTYPE_RX_ERR;
+ if (srrd->genr.crc)
+ pkt->type |= ALX_DIAG_PKTYPE_CRC_ERR;
+ }
+
+ if (srrd->genr.icmp)
+ pkt->type |= ALX_DIAG_PKTYPE_INCOMPLETE_ERR;
+
+ if (srrd->genr.l4f)
+ pkt->type |= ALX_DIAG_PKTYPE_L4XSUM_ERR;
+
+ if (srrd->genr.ipf)
+ pkt->type |= ALX_DIAG_PKTYPE_IPXSUM_ERR;
+
+ if (srrd->genr.lene)
+ pkt->type |= ALX_DIAG_PKTYPE_802_3_LEN_ERR;
+
+ pkt->rss_hash = srrd->genr.hash;
+ pkt->rss_cpu_num = srrd->genr.rss_cpu;
+ pkt->xsum = srrd->genr.xsum;
+ pkt->length = srrd->genr.pkt_len - 4; /* strip 4-byte FCS */
+
+ if (srrd->genr.vlan_flag)
+ ALX_TAG_TO_VLAN(srrd->genr.vlan_tag, pkt->vlanid);
+}
+
+
+void alx_dfs_diag_receive_skb(struct alx_adapter *adpt, struct sk_buff *skb,
+ union alx_sw_rrdesc *srrd)
+{
+ struct sk_buff_head *skb_list = &adpt->dfs.diag_skb_list;
+ struct alx_diag_packet *pkt;
+
+ if (skb_queue_len(skb_list) >= ALX_DIAG_MAX_RX_PACKETS) {
+ if (skb)
+ dev_kfree_skb_irq(skb);
+ return;
+ }
+
+ pkt = adpt->dfs.diag_pkt_info;
+ adpt->dfs.diag_pkt_info++;
+ memset(pkt, 0, sizeof(struct alx_diag_packet));
+ if (((u8 *)adpt->dfs.diag_pkt_info) >=
+ (adpt->dfs.diag_info_buf + adpt->dfs.diag_info_sz)) {
+ adpt->dfs.diag_pkt_info =
+ (struct alx_diag_packet *)adpt->dfs.diag_info_buf;
+ }
+ alx_dfs_diag_config_packet(adpt, pkt, srrd);
+
+ /* borrow the skb->sk pointer to store the struct alx_diag_packet */
+ skb->sk = (struct sock *)pkt;
+
+ skb_queue_tail(skb_list, skb);
+}
+
+
+/*
+ * Calculate the number of transmit packet descriptors needed
+ */
+static bool alx_dfs_diag_check_num_tpdescs(struct alx_tx_queue *txque,
+ struct alx_diag_packet *pkt)
+{
+ u16 num_required = 1;
+ u16 num_available = 0;
+ u16 produce_idx = txque->tpq.produce_idx;
+ u16 consume_idx = txque->tpq.consume_idx;
+
+ if (pkt->type & ALX_DIAG_PKTYPE_TSOV2)
+ num_required += 1;
+
+ num_available = (consume_idx > produce_idx) ?
+ (consume_idx - produce_idx - 1) :
+ (txque->tpq.count + consume_idx - produce_idx - 1);
+
+ return num_required < num_available;
+}
+
+
+/*
+ * configure TPDs according to the diag packet
+ */
+static void alx_dfs_diag_config_tpd_tsov2(struct alx_tx_queue *txque,
+ struct alx_diag_packet *pkt,
+ union alx_sw_tpdesc *stpd)
+{
+ /* LSO v2 needs an extension TPD */
+ if (!(pkt->type & ALX_DIAG_PKTYPE_TSOV2))
+ return;
+
+ stpd->tso.lso = 1;
+ stpd->tso.lso_v2 = 1;
+ stpd->tso.pkt_len = pkt->buf[0].length;
+ alx_set_tpdesc(txque, stpd);
+ memset(stpd, 0, sizeof(union alx_sw_tpdesc));
+}
+
+
+static void alx_dfs_diag_config_tpd(struct alx_tx_queue *txque,
+ struct alx_diag_packet *pkt,
+ union alx_sw_tpdesc *stpd)
+{
+ u16 machdr_offset = 0;
+ u16 iphdr_offset = 0;
+ struct iphdr *iph;
+
+ /* VLAN */
+ if (pkt->type & ALX_DIAG_PKTYPE_VLANINST) {
+ stpd->genr.instag = 1;
+ ALX_VLAN_TO_TAG(pkt->vlanid, stpd->genr.vlan_tag);
+ }
+
+ if (pkt->type & ALX_DIAG_PKTYPE_VLANTAGGED)
+ stpd->genr.vtagged = 1;
+
+ /* checksum offload */
+ if (!(pkt->type & ALX_DIAG_PKTYPE_EII))
+ stpd->genr.type = 1;
+
+ /* MAC header length */
+ machdr_offset = 14;
+ if (pkt->type & ALX_DIAG_PKTYPE_SNAP)
+ machdr_offset += 8;
+
+ if (pkt->type & ALX_DIAG_PKTYPE_VLANTAGGED)
+ machdr_offset += 4;
+
+ /* IP header length */
+ if (pkt->type & ALX_DIAG_PKTYPE_IPV4) {
+ iph = (struct iphdr *)(pkt->buf[0].addr + machdr_offset);
+ iphdr_offset = iph->ihl << 2;
+ } else if (pkt->type & ALX_DIAG_PKTYPE_IPV6) {
+ struct ipv6hdr *ipv6h;
+ struct ipv6_opt_hdr *opth;
+ u8 nexthdr;
+
+ ipv6h = (struct ipv6hdr *)(pkt->buf[0].addr + machdr_offset);
+ iphdr_offset = sizeof(struct ipv6hdr);
+ nexthdr = ipv6h->nexthdr;
+ while (nexthdr != NEXTHDR_TCP && nexthdr != NEXTHDR_UDP &&
+ nexthdr != NEXTHDR_NONE) {
+ /* have IPv6 extension header */
+ opth = (struct ipv6_opt_hdr *)
+ ((u8 *)ipv6h + iphdr_offset);
+
+ if (nexthdr == NEXTHDR_FRAGMENT)
+ iphdr_offset += 8;
+ else if (nexthdr == NEXTHDR_AUTH)
+ iphdr_offset += (opth->hdrlen + 2) << 2;
+ else
+ iphdr_offset += ipv6_optlen(opth);
+ nexthdr = opth->nexthdr;
+ }
+ }
+
+ /* checksum */
+ if (pkt->type & ALX_DIAG_PKTYPE_IPXSUM) {
+ /* IP checksum */
+ stpd->csum.ip_csum = 1;
+ }
+
+ if (pkt->type & ALX_DIAG_PKTYPE_L4XSUM) {
+ /* L4 checksum */
+ if (pkt->type & ALX_DIAG_PKTYPE_TCP)
+ stpd->csum.tcp_csum = 1;
+
+ if (pkt->type & ALX_DIAG_PKTYPE_UDP)
+ stpd->csum.udp_csum = 1;
+
+ stpd->csum.payld_offset = machdr_offset + iphdr_offset;
+ }
+
+ if (pkt->type & ALX_DIAG_PKTYPE_CXSUM) {
+ /* Custom checksum */
+ stpd->csum.c_csum = 1;
+ stpd->csum.payld_offset = pkt->csum_start >> 1;
+ stpd->csum.cxsum_offset = pkt->csum_pos >> 1;
+ }
+
+ /* TCP Large send offload */
+ if (pkt->type & ALX_DIAG_PKTYPE_TSOV1) {
+ stpd->tso.lso = 1;
+ stpd->tso.tcphdr_offset = machdr_offset + iphdr_offset;
+ stpd->tso.mss = pkt->mss;
+ }
+
+ if (pkt->type & ALX_DIAG_PKTYPE_TSOV2) {
+ stpd->tso.lso = 1;
+ stpd->tso.lso_v2 = 1;
+ stpd->tso.tcphdr_offset = machdr_offset + iphdr_offset;
+ stpd->tso.mss = pkt->mss;
+ }
+
+ if (pkt->type & ALX_DIAG_PKTYPE_IPV4)
+ stpd->genr.ipv4 = 1;
+}
+
+
+static void alx_dfs_diag_tx_map(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct alx_diag_packet *pkt,
+ union alx_sw_tpdesc *stpd)
+{
+ struct alx_buffer *tpbuf = NULL;
+
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.produce_idx);
+ tpbuf->length = pkt->buf[0].length;
+ tpbuf->dma = dma_map_single(txque->dev, pkt->buf[0].addr,
+ tpbuf->length, DMA_TO_DEVICE);
+ stpd->genr.addr = tpbuf->dma;
+ stpd->genr.buffer_len = tpbuf->length;
+ alx_set_tpdesc(txque, stpd);
+
+ /* The last tpd */
+ alx_set_tpdesc_lastfrag(txque);
+ /* diag does not use an skb here, so there is nothing to free */
+ tpbuf->skb = NULL;
+}
+
+
+static netdev_tx_t alx_dfs_diag_xmit_frame(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct alx_diag_packet *pkt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ unsigned long flags = 0;
+ union alx_sw_tpdesc stpd; /* normal*/
+
+ if (CHK_ADPT_FLAG(STATE_DOWN))
+ return NETDEV_TX_OK;
+
+ if (!spin_trylock_irqsave(&adpt->tx_lock, flags)) {
+ alx_err(adpt, "tx locked!\n");
+ return NETDEV_TX_LOCKED;
+ }
+
+ if (!alx_dfs_diag_check_num_tpdescs(txque, pkt)) {
+ /* not enough descriptors, just stop the queue */
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ return NETDEV_TX_BUSY;
+ }
+
+ memset(&stpd, 0, sizeof(union alx_sw_tpdesc));
+ alx_dfs_diag_config_tpd_tsov2(txque, pkt, &stpd);
+ alx_dfs_diag_config_tpd(txque, pkt, &stpd);
+ alx_dfs_diag_tx_map(adpt, txque, pkt, &stpd);
+
+ /* update produce idx */
+ wmb();
+ alx_mem_w16(hw, txque->produce_reg, txque->tpq.produce_idx);
+
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ return NETDEV_TX_OK;
+}
+
+
+static int alx_dfs_diag_begin(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 0;
+
+ if (CHK_ADPT_FLAG(MSIX_EN)) {
+ alx_err(adpt, "diag mode requires MSI or shared legacy interrupts\n");
+ return -EINVAL;
+ }
+
+ if (CHK_ADPT_FLAG(STATE_DOWN) ||
+ CHK_ADPT_FLAG(STATE_DIAG_RUNNING)) {
+ alx_err(adpt, "warning! Diag is running or nic is down\n");
+ return -EINVAL;
+ }
+
+ SET_ADPT_FLAG(STATE_DIAG_RUNNING);
+ SET_HW_FLAG(LOOPBACK_EN);
+ alf_update_mac_filter(hw);
+
+ alx_mem_w32(hw, ALX_CLK_GATE, 0x0);
+
+ skb_queue_head_init(&adpt->dfs.diag_skb_list);
+
+ adpt->dfs.diag_recv_sz = ALX_DIAG_MAX_DATA_BUFFER +
+ sizeof(struct alx_diag_packet) * ALX_DIAG_MAX_RX_PACKETS;
+ adpt->dfs.diag_send_sz = ALX_DIAG_MAX_DATA_BUFFER +
+ sizeof(struct alx_diag_packet) * ALX_DIAG_MAX_TX_PACKETS;
+ adpt->dfs.diag_info_sz =
+ sizeof(struct alx_diag_packet) * ALX_DIAG_MAX_RX_PACKETS;
+
+ netif_info(adpt, hw, adpt->netdev,
+ "send_buf_sz=0x%x, recv_buf_sz=0x%x, info_buf_sz=0x%x\n",
+ adpt->dfs.diag_send_sz, adpt->dfs.diag_recv_sz,
+ adpt->dfs.diag_info_sz);
+
+ adpt->dfs.diag_recv_buf = kmalloc(adpt->dfs.diag_recv_sz, GFP_KERNEL);
+ if (!adpt->dfs.diag_recv_buf) {
+ alx_err(adpt, "error alloc recv buff\n");
+ retval = -ENOMEM;
+ goto err_alloc_recv_mem;
+ }
+
+ adpt->dfs.diag_send_buf = kmalloc(adpt->dfs.diag_send_sz, GFP_KERNEL);
+ if (!adpt->dfs.diag_send_buf) {
+ alx_err(adpt, "error alloc send buff\n");
+ retval = -ENOMEM;
+ goto err_alloc_send_mem;
+ }
+
+ adpt->dfs.diag_info_buf = kmalloc(adpt->dfs.diag_info_sz, GFP_KERNEL);
+ if (!adpt->dfs.diag_info_buf) {
+ alx_err(adpt, "error alloc packet info buff\n");
+ retval = -ENOMEM;
+ goto err_alloc_info_mem;
+ }
+ adpt->dfs.diag_pkt_info =
+ (struct alx_diag_packet *)adpt->dfs.diag_info_buf;
+ return 0;
+
+err_alloc_info_mem:
+ kfree(adpt->dfs.diag_send_buf);
+err_alloc_send_mem:
+ kfree(adpt->dfs.diag_recv_buf);
+err_alloc_recv_mem:
+ CLI_HW_FLAG(LOOPBACK_EN);
+ alf_update_mac_filter(hw);
+ CLI_ADPT_FLAG(STATE_DIAG_RUNNING);
+ return retval;
+}
+
+
+static int alx_dfs_diag_end(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+
+ if (!CHK_ADPT_FLAG(STATE_DIAG_RUNNING)) {
+ alx_err(adpt, "can't end diag because diag isn't running\n");
+ return -EINVAL;
+ }
+
+ kfree(adpt->dfs.diag_recv_buf);
+ kfree(adpt->dfs.diag_send_buf);
+ kfree(adpt->dfs.diag_info_buf);
+ adpt->dfs.diag_recv_buf = NULL;
+ adpt->dfs.diag_send_buf = NULL;
+ adpt->dfs.diag_info_buf = NULL;
+ adpt->dfs.diag_pkt_info = NULL;
+ adpt->dfs.diag_info_sz = 0;
+ adpt->dfs.diag_recv_sz = 0;
+ adpt->dfs.diag_send_sz = 0;
+
+ skb_queue_purge(&adpt->dfs.diag_skb_list);
+
+ CLI_HW_FLAG(LOOPBACK_EN);
+ alf_update_mac_filter(hw);
+ CLI_ADPT_FLAG(STATE_DIAG_RUNNING);
+ return 0;
+}
+
+
+static int alx_dfs_diag_rx_pkts(struct alx_adapter *adpt, char *buf,
+ u32 size_in, u32 *size_out)
+{
+ struct sk_buff_head *list = &adpt->dfs.diag_skb_list;
+ struct alx_diag_packet *pkt, *fpkt;
+ struct sk_buff *skb;
+ u8 *data;
+ u32 i, count, offset;
+
+ if (CHK_ADPT_FLAG(STATE_DOWN) ||
+ !CHK_ADPT_FLAG(STATE_DIAG_RUNNING)) {
+ alx_err(adpt, "warning! Diag isn't running or nic is down\n");
+ return -EINVAL;
+ }
+
+ count = skb_queue_len(list);
+ if (!count) {
+ *size_out = 0;
+ return -EINVAL;
+ }
+
+ offset = count * sizeof(struct alx_diag_packet);
+ if (offset >= adpt->dfs.diag_recv_sz) {
+ alx_err(adpt, "used diag buffer exceeds the allocated size\n");
+ return -EINVAL;
+ }
+
+ memset(adpt->dfs.diag_recv_buf, 0, adpt->dfs.diag_recv_sz);
+ fpkt = pkt = (struct alx_diag_packet *)adpt->dfs.diag_recv_buf;
+ data = adpt->dfs.diag_recv_buf + offset;
+
+ for (i = 0; i < count; i++) {
+ skb = skb_dequeue(list);
+ if (!skb)
+ break;
+ memcpy(pkt, skb->sk, sizeof(struct alx_diag_packet));
+ pkt->buf[0].offset = offset;
+ pkt->buf[0].length = pkt->length;
+
+ if (pkt->length != skb->len) {
+ netif_warn(adpt, hw, adpt->netdev,
+ "pkt->length(0x%x) != skb->len(0x%x)\n",
+ pkt->length, skb->len);
+ }
+ memcpy(data, skb->data, pkt->length);
+
+ offset += pkt->length;
+ if (offset >= adpt->dfs.diag_recv_sz) {
+ alx_err(adpt,
+ "used diag buffer exceeds the allocated size\n");
+ }
+ kfree_skb(skb);
+ pkt->next = pkt + 1;
+ pkt++;
+ data = adpt->dfs.diag_recv_buf + offset;
+ }
+ if (pkt != fpkt)
+ (pkt - 1)->next = NULL;
+
+ if (!buf || offset > size_in) {
+ alx_err(adpt, "receive buf is null or too small\n");
+ return -EINVAL;
+ }
+
+ *size_out = offset;
+ if (copy_to_user((void __user *)buf, adpt->dfs.diag_recv_buf,
+ offset)) {
+ alx_err(adpt, "can't copy to user space\n");
+ return -EFAULT;
+ }
+ return 0;
+}
+
+static int alx_dfs_diag_tx_pkts(struct alx_adapter *adpt,
+ char *buf, u32 size_in)
+{
+ struct alx_diag_packet *pkt;
+ struct alx_tx_queue *txque = adpt->tx_queue[0];
+ int num_pkts = ALX_DIAG_MAX_TX_PACKETS;
+
+ if (CHK_ADPT_FLAG(STATE_DOWN) ||
+ !CHK_ADPT_FLAG(STATE_DIAG_RUNNING)) {
+ alx_err(adpt, "warning! Diag isn't running or nic is down\n");
+ return -EINVAL;
+ }
+
+ pkt = (struct alx_diag_packet *)adpt->dfs.diag_send_buf;
+ if (!buf || size_in > adpt->dfs.diag_send_sz) {
+ alx_err(adpt, "sending buf is null or too big\n");
+ return -EINVAL;
+ }
+
+ if (copy_from_user(pkt, (void __user *)buf, size_in)) {
+ alx_err(adpt, "can't copy from user\n");
+ return -EFAULT;
+ }
+
+ do {
+ /* fix buf[0].addr in alx_diag_packet */
+ if (pkt->buf[0].offset > size_in)
+ alx_err(adpt, "bad buf[0].offset in alx_diag_packet\n");
+
+ pkt->buf[0].addr = adpt->dfs.diag_send_buf + pkt->buf[0].offset;
+
+ alx_dfs_diag_xmit_frame(adpt, txque, pkt);
+
+ if (pkt->next == NULL)
+ break;
+ pkt++;
+ } while (--num_pkts);
+
+ return 0;
+}
+
+
+
+static int alx_dfs_cifs_annce_clear(struct alx_adapter *adpt,
+ struct alx_dfs_ioctl_data *did)
+{
+ adpt->hw.swoi_offload.len = 0;
+ return 0;
+}
+
+static int alx_dfs_cifs_annce_config(struct alx_adapter *adpt,
+ struct alx_dfs_ioctl_data *did)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int ret = 0;
+
+ if (!adpt->cifs ||
+ did->param.buf.addr == NULL ||
+ did->param.buf.size_in != sizeof(hw->swoi_offload)) {
+ hw->swoi_offload.len = 0;
+ return -EINVAL;
+ }
+
+ ret = copy_from_user(&hw->swoi_offload,
+ (void __user *)did->param.buf.addr,
+ sizeof(hw->swoi_offload));
+ if (ret != 0)
+ return -EINVAL;
+
+ if (hw->swoi_offload.ver == ALX_SWOI_VER_CIFS) {
+ if (FIELD_GETX(hw->pci_revid, L1F_PCI_REVID) > L1F_REV_A1) {
+ hw->swoi_offload.len = 0;
+ return -EINVAL;
+ }
+ } else if (hw->swoi_offload.ver == ALX_SWOI_VER_SWOI) {
+ if (FIELD_GETX(hw->pci_revid, L1F_PCI_REVID) < L1F_REV_B0) {
+ hw->swoi_offload.total_len = 0;
+ return -EINVAL;
+ }
+ } else {
+ hw->swoi_offload.total_len = 0;
+ return -EINVAL;
+ }
+
+ return 0;
+}
+
+
+static int alx_dfs_ioctl_command_general(struct alx_adapter *adpt,
+ struct alx_dfs_ioctl_data *did)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 0;
+
+ switch (did->sub_cmd) {
+ case ALX_DFS_IOCTL_SCMD_GMAC_REG_32:
+ alx_mem_r32(hw, did->param.mac.num, &did->param.mac.val32);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: read reg_32 %04x %08x\n",
+ did->param.mac.num, did->param.mac.val32);
+ break;
+ case ALX_DFS_IOCTL_SCMD_SMAC_REG_32:
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: write reg_32 %04x %08x\n",
+ did->param.mac.num, did->param.mac.val32);
+ alx_mem_w32(hw, did->param.mac.num, did->param.mac.val32);
+ break;
+
+ case ALX_DFS_IOCTL_SCMD_GMAC_REG_16:
+ alx_mem_r16(hw, did->param.mac.num, &did->param.mac.val16);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: read reg_16 %04x %04x\n",
+ did->param.mac.num, did->param.mac.val16);
+ break;
+ case ALX_DFS_IOCTL_SCMD_SMAC_REG_16:
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: write reg_16 %04x %04x\n",
+ did->param.mac.num, did->param.mac.val16);
+ alx_mem_w16(hw, did->param.mac.num, did->param.mac.val16);
+ break;
+ case ALX_DFS_IOCTL_SCMD_GMAC_REG_8:
+ alx_mem_r8(hw, did->param.mac.num, &did->param.mac.val8);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: read reg_8 %04x %02x\n",
+ did->param.mac.num, did->param.mac.val8);
+ break;
+ case ALX_DFS_IOCTL_SCMD_SMAC_REG_8:
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: write reg_8 %04x %02x\n",
+ did->param.mac.num, did->param.mac.val8);
+ alx_mem_w8(hw, did->param.mac.num, did->param.mac.val8);
+ break;
+
+ /* Read/Write MAC register via PCI config space */
+ case ALX_DFS_IOCTL_SCMD_GMAC_CFG_32:
+ alx_cfg_r32(hw, did->param.mac.num, &did->param.mac.val32);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: read cfg_32 %04x %08x\n",
+ did->param.mac.num, did->param.mac.val32);
+ break;
+ case ALX_DFS_IOCTL_SCMD_SMAC_CFG_32:
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: write cfg_32 %04x %08x\n",
+ did->param.mac.num, did->param.mac.val32);
+ alx_cfg_w32(hw, did->param.mac.num, did->param.mac.val32);
+ break;
+
+ /* Read/Write MAC register via I/O port */
+ case ALX_DFS_IOCTL_SCMD_GMAC_IO_32:
+ alx_io_r32(hw, did->param.mac.num, &did->param.mac.val32);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: read io_32 %04x %08x\n",
+ did->param.mac.num, did->param.mac.val32);
+ break;
+ case ALX_DFS_IOCTL_SCMD_SMAC_IO_32:
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: write io_32 %04x %08x\n",
+ did->param.mac.num, did->param.mac.val32);
+ alx_io_w32(hw, did->param.mac.num, did->param.mac.val32);
+ break;
+
+
+ /* Read/Write PHY Ext Register */
+ case ALX_DFS_IOCTL_SCMD_GMII_EXT_REG:
+ if (!capable(CAP_NET_ADMIN)) {
+ retval = -EPERM;
+ goto out;
+ }
+
+ retval = alf_read_ext_phy_reg(hw, (u8)did->param.mii.dev,
+ did->param.mii.num,
+ &did->param.mii.val);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: read phy_ext %02x:%02x %04x\n",
+ did->param.mii.dev, did->param.mii.num,
+ did->param.mii.val);
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ break;
+
+ case ALX_DFS_IOCTL_SCMD_SMII_EXT_REG:
+ if (!capable(CAP_NET_ADMIN)) {
+ retval = -EPERM;
+ goto out;
+ }
+
+ retval = alf_write_ext_phy_reg(hw, (u8)did->param.mii.dev,
+ did->param.mii.num,
+ did->param.mii.val);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: write phy_ext %02x:%02x %04x\n",
+ did->param.mii.dev, did->param.mii.num,
+ did->param.mii.val);
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ break;
+
+
+ /* Diag & Memcfg */
+ case ALX_DFS_IOCTL_SCMD_DEVICE_INACTIVE:
+ netif_dbg(adpt, hw, adpt->netdev, "DFS: Device Inactive\n");
+ retval = alx_dfs_device_inactive(adpt, did->param.dat.val0);
+ break;
+
+ case ALX_DFS_IOCTL_SCMD_DEVICE_REINIT:
+ netif_dbg(adpt, hw, adpt->netdev, "DFS: Device Reset\n");
+ alx_dfs_device_reinit(adpt);
+ break;
+
+ case ALX_DFS_IOCTL_SCMD_DIAG_BEGIN:
+ netif_dbg(adpt, hw, adpt->netdev, "DFS: Diag begin.\n");
+ retval = alx_dfs_diag_begin(adpt);
+ break;
+ case ALX_DFS_IOCTL_SCMD_DIAG_END:
+ netif_dbg(adpt, hw, adpt->netdev, "DFS: Diag end.\n");
+ retval = alx_dfs_diag_end(adpt);
+ break;
+ case ALX_DFS_IOCTL_SCMD_DIAG_TX_PKT:
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: Diag TX (%p:in-%08x)\n",
+ did->param.buf.addr, did->param.buf.size_in);
+ retval = alx_dfs_diag_tx_pkts(adpt, did->param.buf.addr,
+ did->param.buf.size_in);
+ break;
+ case ALX_DFS_IOCTL_SCMD_DIAG_RX_PKT:
+ retval = alx_dfs_diag_rx_pkts(adpt, did->param.buf.addr,
+ did->param.buf.size_in,
+ &did->param.buf.size_out);
+ netif_dbg(adpt, hw, adpt->netdev,
+ "DFS: Diag RX (%p:in-%08x, out-%08x)\n",
+ did->param.buf.addr, did->param.buf.size_in,
+ did->param.buf.size_out);
+ break;
+
+ /* SWOI */
+ case ALX_DFS_IOCTL_SCMD_ANNCE_CLEAR:
+ retval = alx_dfs_cifs_annce_clear(adpt, did);
+ break;
+
+ case ALX_DFS_IOCTL_SCMD_ANNCE_CONFIG:
+ retval = alx_dfs_cifs_annce_config(adpt, did);
+ break;
+
+ default:
+ retval = -EOPNOTSUPP;
+ break;
+ }
+out:
+ return retval;
+}
+
+/*
+ * debugfs file operation
+ */
+long alx_debugfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg)
+{
+ struct alx_adapter *adpt = (struct alx_adapter *)file->private_data;
+ struct alx_dfs_ioctl_data did;
+ long retval = 0;
+
+ if (copy_from_user(&did, (void __user *)arg, sizeof(did)))
+ return -EFAULT;
+
+ switch (cmd) {
+ case ALX_DFS_IOCTL_CMD_GENR:
+ retval = alx_dfs_ioctl_command_general(adpt, &did);
+ break;
+ default:
+ return -EINVAL;
+ }
+
+ if (copy_to_user((void __user *)arg, &did, sizeof(did)))
+ return -EFAULT;
+
+ return retval;
+}
diff --git a/drivers/net/ethernet/atheros/alx/alx_dfs.h b/drivers/net/ethernet/atheros/alx/alx_dfs.h
new file mode 100644
index 0000000..ccc83a7
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_dfs.h
@@ -0,0 +1,182 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_DFS_H_
+#define _ALX_DFS_H_
+
+/*
+ * debugfs parameter in alx_adapter
+ */
+struct alx_debugfs_param {
+ struct dentry *fent;
+
+ /* DIAG parameter */
+ u8 *diag_recv_buf;
+ u8 *diag_send_buf;
+ u8 *diag_info_buf;
+ u32 diag_recv_sz;
+ u32 diag_send_sz;
+ u32 diag_info_sz;
+ struct alx_diag_packet *diag_pkt_info;
+ struct sk_buff_head diag_skb_list;
+
+ /* MEMCFG parameter*/
+ bool cfg_new_mac;
+};
+
+/*
+ * debugfs IOCTL parameter
+ */
+struct alx_dfs_ioctl_param_buf {
+ void *addr;
+ __u32 size_in;
+ __u32 size_out;
+};
+
+struct alx_dfs_ioctl_param_dat {
+ __u32 val0;
+ __u32 val1;
+};
+
+struct alx_dfs_ioctl_param_mac {
+ __u32 num;
+ union {
+ __u32 val32;
+ __u16 val16;
+ __u8 val8;
+ };
+};
+
+struct alx_dfs_ioctl_param_mii {
+ __u16 dev;
+ __u16 num;
+ __u16 val;
+};
+
+
+struct alx_dfs_ioctl_data {
+ __u32 sub_cmd;
+ union {
+ struct alx_dfs_ioctl_param_buf buf;
+ struct alx_dfs_ioctl_param_dat dat;
+ struct alx_dfs_ioctl_param_mac mac;
+ struct alx_dfs_ioctl_param_mii mii;
+ } param;
+};
+
+
+#define ALX_DFS_IOCTL_CMD_GENR _IOWR('L', 0x80, struct alx_dfs_ioctl_data)
+
+#define ALX_DFS_IOCTL_SCMD_GMAC_REG_32 0x0001
+#define ALX_DFS_IOCTL_SCMD_SMAC_REG_32 0x0002
+#define ALX_DFS_IOCTL_SCMD_GMAC_REG_16 0x0003
+#define ALX_DFS_IOCTL_SCMD_SMAC_REG_16 0x0004
+#define ALX_DFS_IOCTL_SCMD_GMAC_REG_8 0x0005
+#define ALX_DFS_IOCTL_SCMD_SMAC_REG_8 0x0006
+
+#define ALX_DFS_IOCTL_SCMD_GMAC_CFG_32 0x0011
+#define ALX_DFS_IOCTL_SCMD_SMAC_CFG_32 0x0012
+
+#define ALX_DFS_IOCTL_SCMD_GMAC_IO_32 0x0021
+#define ALX_DFS_IOCTL_SCMD_SMAC_IO_32 0x0022
+
+#define ALX_DFS_IOCTL_SCMD_GMII_EXT_REG 0x0031
+#define ALX_DFS_IOCTL_SCMD_SMII_EXT_REG 0x0032
+#define ALX_DFS_IOCTL_SCMD_GMII_IDR_REG 0x0033
+#define ALX_DFS_IOCTL_SCMD_SMII_IDR_REG 0x0034
+
+#define ALX_DFS_IOCTL_SCMD_DEVICE_INACTIVE 0x10001
+#define ALX_DFS_IOCTL_SCMD_DEVICE_REINIT 0x10002
+#define ALX_DFS_IOCTL_SCMD_DIAG_BEGIN 0x10003
+#define ALX_DFS_IOCTL_SCMD_DIAG_END 0x10004
+#define ALX_DFS_IOCTL_SCMD_DIAG_TX_PKT 0x10005
+#define ALX_DFS_IOCTL_SCMD_DIAG_RX_PKT 0x10006
+
+#define ALX_DFS_IOCTL_SCMD_ANNCE_CLEAR 0x20001
+#define ALX_DFS_IOCTL_SCMD_ANNCE_CONFIG 0x20002
+
+
+/*
+ * Diag tool support
+ */
+#define ALX_DIAG_MAX_PACKET_BUFS 1
+#define ALX_DIAG_MAX_DATA_BUFFER (48 * 64 * 1024)
+#define ALX_DIAG_MAX_TX_PACKETS 64
+#define ALX_DIAG_MAX_RX_PACKETS 512
+
+struct alx_diag_buf {
+ u8 *addr;
+ u32 offset;
+ u32 length;
+};
+
+struct alx_diag_packet {
+ struct alx_diag_packet *next;
+ u32 length; /* total length of the packet(buf) */
+ u32 type; /* packet type, vlan, ip checksum */
+ struct alx_diag_buf buf[ALX_DIAG_MAX_PACKET_BUFS];
+ struct alx_diag_buf sglist[ALX_DIAG_MAX_PACKET_BUFS];
+ u16 vlanid;
+ u16 mss;
+ u32 rss_hash;
+ u16 rss_cpu_num;
+ u16 xsum; /* rx, ip-payload checksum */
+ u16 csum_start; /* custom checksum offset from the
+ * mac-header */
+ u16 csum_pos; /* custom checksum position
+ * (from the mac_header) */
+ u32 uplevel_reserved[4];
+ void *lowlevel_reserved[4];
+};
+#define ALX_DIAG_PKTYPE_IPXSUM 0x00000001L /* ip checksum offload
+ * (TO: task offload) */
+#define ALX_DIAG_PKTYPE_L4XSUM 0x00000002L /* tcp/udp checksum
+ * offload */
+#define ALX_DIAG_PKTYPE_VLANINST 0x00000004L /* insert vlan tag */
+#define ALX_DIAG_PKTYPE_TSOV1 0x00000008L /* tcp large send v1 */
+#define ALX_DIAG_PKTYPE_TSOV2 0x00000010L /* tcp large send v2 */
+#define ALX_DIAG_PKTYPE_CXSUM 0x00000020L /* checksum offload */
+#define ALX_DIAG_PKTYPE_VLANTAGGED 0x00000040L /* vlan tag */
+#define ALX_DIAG_PKTYPE_IPV4 0x00000080L /* ipv4 */
+#define ALX_DIAG_PKTYPE_IPV6 0x00000100L /* ipv6 */
+#define ALX_DIAG_PKTYPE_TCP 0x00000200L /* tcp */
+#define ALX_DIAG_PKTYPE_UDP 0x00000400L /* udp */
+#define ALX_DIAG_PKTYPE_EII 0x00000800L /* ethernet II */
+#define ALX_DIAG_PKTYPE_802_3 0x00001000L /* 802.3 */
+#define ALX_DIAG_PKTYPE_SNAP 0x00002000L /* 802.2/snap */
+#define ALX_DIAG_PKTYPE_FRAGMENT 0x00004000L /* fragment ip packet */
+#define ALX_DIAG_PKTYPE_SGLIST_VALID 0x00008000L /* SGList valid */
+#define ALX_DIAG_PKTYPE_HASH_VLAID 0x00010000L /* Hash valid */
+#define ALX_DIAG_PKTYPE_CPUNUM_VALID 0x00020000L /* CpuNum valid */
+#define ALX_DIAG_PKTYPE_XSUM_VALID 0x00040000L
+#define ALX_DIAG_PKTYPE_IPXSUM_ERR 0x00080000L
+#define ALX_DIAG_PKTYPE_L4XSUM_ERR 0x00100000L
+#define ALX_DIAG_PKTYPE_802_3_LEN_ERR 0x00200000L
+#define ALX_DIAG_PKTYPE_INCOMPLETE_ERR 0x00400000L
+#define ALX_DIAG_PKTYPE_CRC_ERR 0x00800000L
+#define ALX_DIAG_PKTYPE_RX_ERR 0x01000000L
+#define ALX_DIAG_PKTYPE_PTP 0x02000000L /* 1588 PTP */
+#define ALX_DIAG_PKTYPE_LLDP 0x04000000L /* IEEE LLDP */
+
+
+union alx_sw_rrdesc;
+int alx_flush_mac_address(struct alx_adapter *adpt);
+void alx_dfs_diag_receive_skb(struct alx_adapter *adpt, struct sk_buff *skb,
+ union alx_sw_rrdesc *srrd);
+
+long alx_debugfs_ioctl(struct file *file, unsigned int cmd, unsigned long arg);
+
+#endif
diff --git a/drivers/net/ethernet/atheros/alx/alx_ethtool.c b/drivers/net/ethernet/atheros/alx/alx_ethtool.c
new file mode 100644
index 0000000..92fb461
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_ethtool.c
@@ -0,0 +1,337 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+
+#include <linux/pci.h>
+#include <linux/ethtool.h>
+
+#include "alx.h"
+#include "alx_hwcom.h"
+
+static int alx_get_settings(struct net_device *netdev,
+ struct ethtool_cmd *ecmd)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+
+ ecmd->supported = (SUPPORTED_10baseT_Half |
+ SUPPORTED_10baseT_Full |
+ SUPPORTED_100baseT_Half |
+ SUPPORTED_100baseT_Full |
+ SUPPORTED_Autoneg |
+ SUPPORTED_TP);
+ if (CHK_HW_FLAG(GIGA_CAP))
+ ecmd->supported |= SUPPORTED_1000baseT_Full;
+
+ ecmd->advertising = ADVERTISED_TP;
+
+ ecmd->advertising |= ADVERTISED_Autoneg;
+ ecmd->advertising |= hw->autoneg_advertised;
+
+ ecmd->port = PORT_TP;
+ ecmd->phy_address = 0;
+ ecmd->autoneg = AUTONEG_ENABLE;
+ ecmd->transceiver = XCVR_INTERNAL;
+
+ if (hw->link_up) {
+ switch (hw->link_speed) {
+ case LX_LC_10H:
+ ethtool_cmd_speed_set(ecmd, SPEED_10);
+ ecmd->duplex = DUPLEX_HALF;
+ break;
+ case LX_LC_10F:
+ ethtool_cmd_speed_set(ecmd, SPEED_10);
+ ecmd->duplex = DUPLEX_FULL;
+ break;
+ case LX_LC_100H:
+ ethtool_cmd_speed_set(ecmd, SPEED_100);
+ ecmd->duplex = DUPLEX_HALF;
+ break;
+ case LX_LC_100F:
+ ethtool_cmd_speed_set(ecmd, SPEED_100);
+ ecmd->duplex = DUPLEX_FULL;
+ break;
+ case LX_LC_1000F:
+ ethtool_cmd_speed_set(ecmd, SPEED_1000);
+ ecmd->duplex = DUPLEX_FULL;
+ break;
+ default:
+ ethtool_cmd_speed_set(ecmd, -1);
+ ecmd->duplex = -1;
+ break;
+ }
+ } else {
+ ethtool_cmd_speed_set(ecmd, -1);
+ ecmd->duplex = -1;
+ }
+
+ return 0;
+}
+
+
+static int alx_set_settings(struct net_device *netdev,
+ struct ethtool_cmd *ecmd)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ u32 advertised, old;
+ int error = 0;
+
+ while (CHK_ADPT_FLAG(STATE_RESETTING))
+ msleep(20);
+ SET_ADPT_FLAG(STATE_RESETTING);
+
+ old = hw->autoneg_advertised;
+ advertised = 0;
+ if (ecmd->autoneg == AUTONEG_ENABLE) {
+ advertised = LX_LC_ALL;
+ } else {
+ u32 speed = ethtool_cmd_speed(ecmd);
+ if (speed == SPEED_1000) {
+ if (ecmd->duplex != DUPLEX_FULL) {
+ dev_warn(&adpt->pdev->dev,
+ "1000M half is invalid\n");
+ CLI_ADPT_FLAG(STATE_RESETTING);
+ return -EINVAL;
+ }
+ advertised = LX_LC_1000F;
+ } else if (speed == SPEED_100) {
+ if (ecmd->duplex == DUPLEX_FULL)
+ advertised = LX_LC_100F;
+ else
+ advertised = LX_LC_100H;
+ } else {
+ if (ecmd->duplex == DUPLEX_FULL)
+ advertised = LX_LC_10F;
+ else
+ advertised = LX_LC_10H;
+ }
+ }
+
+ if (hw->autoneg_advertised == advertised) {
+ CLI_ADPT_FLAG(STATE_RESETTING);
+ return error;
+ }
+
+ hw->autoneg_advertised = advertised;
+ error = alf_setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+ if (error) {
+ dev_err(&adpt->pdev->dev,
+ "setup link failed with code %d\n", error);
+ hw->autoneg_advertised = old;
+ alf_setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+ }
+ CLI_ADPT_FLAG(STATE_RESETTING);
+ return error;
+}
+
+
+static void alx_get_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+
+
+ if (hw->disable_fc_autoneg ||
+ hw->cur_fc_mode == alx_fc_none)
+ pause->autoneg = 0;
+ else
+ pause->autoneg = 1;
+
+ if (hw->cur_fc_mode == alx_fc_rx_pause) {
+ pause->rx_pause = 1;
+ } else if (hw->cur_fc_mode == alx_fc_tx_pause) {
+ pause->tx_pause = 1;
+ } else if (hw->cur_fc_mode == alx_fc_full) {
+ pause->rx_pause = 1;
+ pause->tx_pause = 1;
+ }
+}
+
+
+static int alx_set_pauseparam(struct net_device *netdev,
+ struct ethtool_pauseparam *pause)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ enum alx_fc_mode req_fc_mode;
+ bool disable_fc_autoneg;
+ int retval;
+
+ while (CHK_ADPT_FLAG(STATE_RESETTING))
+ msleep(20);
+ SET_ADPT_FLAG(STATE_RESETTING);
+
+ req_fc_mode = hw->req_fc_mode;
+ disable_fc_autoneg = hw->disable_fc_autoneg;
+
+
+ if (pause->autoneg != AUTONEG_ENABLE)
+ disable_fc_autoneg = true;
+ else
+ disable_fc_autoneg = false;
+
+ if ((pause->rx_pause && pause->tx_pause) || pause->autoneg)
+ req_fc_mode = alx_fc_full;
+ else if (pause->rx_pause && !pause->tx_pause)
+ req_fc_mode = alx_fc_rx_pause;
+ else if (!pause->rx_pause && pause->tx_pause)
+ req_fc_mode = alx_fc_tx_pause;
+ else
+ req_fc_mode = alx_fc_none;
+
+ if ((hw->req_fc_mode != req_fc_mode) ||
+ (hw->disable_fc_autoneg != disable_fc_autoneg)) {
+ hw->req_fc_mode = req_fc_mode;
+ hw->disable_fc_autoneg = disable_fc_autoneg;
+ if (!hw->disable_fc_autoneg) {
+ retval = alf_setup_phy_link(hw, hw->autoneg_advertised,
+ true, true);
+ if (retval) {
+ CLI_ADPT_FLAG(STATE_RESETTING);
+ return retval;
+ }
+ }
+
+ alf_config_fc(hw);
+ }
+
+ CLI_ADPT_FLAG(STATE_RESETTING);
+ return 0;
+}
+
+
+static u32 alx_get_msglevel(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ return adpt->msg_enable;
+}
+
+
+static void alx_set_msglevel(struct net_device *netdev, u32 data)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ adpt->msg_enable = data;
+}
+
+
+static int alx_get_regs_len(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ return hw->hwreg_sz * sizeof(u32);
+}
+
+
+static void alx_get_regs(struct net_device *netdev,
+ struct ethtool_regs *regs, void *buff)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+
+ regs->version = 0;
+
+ memset(buff, 0, hw->hwreg_sz * sizeof(u32));
+ alf_get_ethtool_regs(hw, buff);
+}
+
+
+static void alx_get_drvinfo(struct net_device *netdev,
+ struct ethtool_drvinfo *drvinfo)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ strlcpy(drvinfo->driver, alx_drv_name, sizeof(drvinfo->driver));
+ strlcpy(drvinfo->fw_version, "alx", sizeof(drvinfo->fw_version));
+ strlcpy(drvinfo->bus_info, pci_name(adpt->pdev),
+ sizeof(drvinfo->bus_info));
+ drvinfo->n_stats = 0;
+ drvinfo->testinfo_len = 0;
+ drvinfo->regdump_len = adpt->hw.hwreg_sz;
+ drvinfo->eedump_len = adpt->hw.eeprom_sz;
+}
+
+
+static void alx_get_wol(struct net_device *netdev,
+ struct ethtool_wolinfo *wol)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ wol->supported = WAKE_MAGIC | WAKE_PHY;
+ wol->wolopts = 0;
+
+ if (adpt->wol & ALX_WOL_MAGIC)
+ wol->wolopts |= WAKE_MAGIC;
+ if (adpt->wol & ALX_WOL_PHY)
+ wol->wolopts |= WAKE_PHY;
+
+ netif_info(adpt, wol, adpt->netdev,
+ "wol->wolopts = %x\n", wol->wolopts);
+}
+
+
+static int alx_set_wol(struct net_device *netdev, struct ethtool_wolinfo *wol)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ if (wol->wolopts & (WAKE_ARP | WAKE_MAGICSECURE |
+ WAKE_UCAST | WAKE_BCAST | WAKE_MCAST))
+ return -EOPNOTSUPP;
+
+ adpt->wol = 0;
+
+ if (wol->wolopts & WAKE_MAGIC)
+ adpt->wol |= ALX_WOL_MAGIC;
+ if (wol->wolopts & WAKE_PHY)
+ adpt->wol |= ALX_WOL_PHY;
+
+ device_set_wakeup_enable(&adpt->pdev->dev, adpt->wol);
+
+ return 0;
+}
+
+
+static int alx_nway_reset(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ if (netif_running(netdev))
+ alx_reinit_locked(adpt);
+ return 0;
+}
+
+
+static const struct ethtool_ops alx_ethtool_ops = {
+ .get_settings = alx_get_settings,
+ .set_settings = alx_set_settings,
+ .get_pauseparam = alx_get_pauseparam,
+ .set_pauseparam = alx_set_pauseparam,
+ .get_drvinfo = alx_get_drvinfo,
+ .get_regs_len = alx_get_regs_len,
+ .get_regs = alx_get_regs,
+ .get_wol = alx_get_wol,
+ .set_wol = alx_set_wol,
+ .get_msglevel = alx_get_msglevel,
+ .set_msglevel = alx_set_msglevel,
+ .nway_reset = alx_nway_reset,
+ .get_link = ethtool_op_get_link,
+};
+
+
+void alx_set_ethtool_ops(struct net_device *netdev)
+{
+ SET_ETHTOOL_OPS(netdev, &alx_ethtool_ops);
+}
diff --git a/drivers/net/ethernet/atheros/alx/alx_hw.c b/drivers/net/ethernet/atheros/alx/alx_hw.c
new file mode 100644
index 0000000..59045de
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_hw.c
@@ -0,0 +1,1042 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/pci_regs.h>
+#include <linux/mii.h>
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+
+#include "alx_hw.h"
+
+
+/* get permanent mac address from register/efuse/eeprom
+ * return
+ * 0: success
+ * non-0: fail
+ */
+int l1f_get_perm_macaddr(struct alx_hw *hw, u8 *addr)
+{
+ u32 val, mac0, mac1;
+ u16 flag, i;
+
+#define INTN_LOADED 0x1
+#define EXTN_LOADED 0x2
+
+ flag = 0;
+ val = 0;
+
+read_mcadr:
+
+ /* get it from register first */
+ alx_mem_r32(hw, L1F_STAD0, &mac0);
+ alx_mem_r32(hw, L1F_STAD1, &mac1);
+
+ /* addr should be big-endian */
+ *(__be32 *)(addr + 2) = cpu_to_be32(mac0);
+ *(__be16 *)addr = cpu_to_be16((u16)mac1);
+
+ if (is_valid_ether_addr(addr))
+ return 0;
+
+ if ((flag & INTN_LOADED) == 0) {
+ /* load from efuse ? */
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ alx_mem_r32(hw, L1F_SLD, &val);
+ if ((val & (L1F_SLD_STAT | L1F_SLD_START)) == 0)
+ break;
+ mdelay(1);
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ alx_mem_w32(hw, L1F_SLD, val | L1F_SLD_START);
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ mdelay(1);
+ alx_mem_r32(hw, L1F_SLD, &val);
+ if ((val & L1F_SLD_START) == 0)
+ break;
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ flag |= INTN_LOADED;
+ goto read_mcadr;
+ }
+
+ if ((flag & EXTN_LOADED) == 0) {
+ alx_mem_r32(hw, L1F_EFLD, &val);
+ if ((val & (L1F_EFLD_F_EXIST | L1F_EFLD_E_EXIST)) != 0) {
+ /* load from eeprom/flash ? */
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ alx_mem_r32(hw, L1F_EFLD, &val);
+ if ((val & (L1F_EFLD_STAT |
+ L1F_EFLD_START)) == 0) {
+ break;
+ }
+ mdelay(1);
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ alx_mem_w32(hw, L1F_EFLD, val | L1F_EFLD_START);
+ for (i = 0; i < L1F_SLD_MAX_TO; i++) {
+ mdelay(1);
+ alx_mem_r32(hw, L1F_EFLD, &val);
+ if ((val & L1F_EFLD_START) == 0)
+ break;
+ }
+ if (i == L1F_SLD_MAX_TO)
+ goto out;
+ flag |= EXTN_LOADED;
+ goto read_mcadr;
+ }
+ }
+
+out:
+ return LX_ERR_ALOAD;
+}
+
+
+/* reset mac & dma
+ * return
+ * 0: success
+ * non-0:fail
+ */
+int l1f_reset_mac(struct alx_hw *hw)
+{
+ u32 val, pmctrl = 0;
+ int ret;
+ u16 i;
+ u8 rev = (u8)(FIELD_GETX(hw->pci_revid, L1F_PCI_REVID));
+
+ /* disable all interrupts, RXQ/TXQ */
+ alx_mem_w32(hw, L1F_MSIX_MASK, BIT_ALL); /* mask all msi-x vectors */
+ alx_mem_w32(hw, L1F_IMR, 0);
+ alx_mem_w32(hw, L1F_ISR, L1F_ISR_DIS);
+
+ ret = l1f_enable_mac(hw, false, 0);
+ if (ret != 0)
+ return ret;
+
+ /* mac reset workaround */
+ alx_mem_w32(hw, L1F_RFD_PIDX, 1);
+
+ /* dis l0s/l1 before mac reset */
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ alx_mem_r32(hw, L1F_PMCTRL, &pmctrl);
+ if ((pmctrl & (L1F_PMCTRL_L1_EN | L1F_PMCTRL_L0S_EN)) != 0) {
+ alx_mem_w32(hw, L1F_PMCTRL,
+ pmctrl & ~(L1F_PMCTRL_L1_EN |
+ L1F_PMCTRL_L0S_EN));
+ }
+ }
+
+ /* reset whole mac safely */
+ alx_mem_r32(hw, L1F_MASTER, &val);
+ alx_mem_w32(hw, L1F_MASTER,
+ val | L1F_MASTER_DMA_MAC_RST | L1F_MASTER_OOB_DIS);
+
+ /* make sure it's really idle */
+ udelay(10);
+ for (i = 0; i < L1F_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1F_RFD_PIDX, &val);
+ if (val == 0)
+ break;
+ udelay(10);
+ }
+ for (; i < L1F_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1F_MASTER, &val);
+ if ((val & L1F_MASTER_DMA_MAC_RST) == 0)
+ break;
+ udelay(10);
+ }
+ if (i == L1F_DMA_MAC_RST_TO)
+ return LX_ERR_RSTMAC;
+ udelay(10);
+
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ /* set L1F_MASTER_PCLKSEL_SRDS (affect by soft-rst, PERST) */
+ alx_mem_w32(hw, L1F_MASTER, val | L1F_MASTER_PCLKSEL_SRDS);
+ /* restore l0s / l1 */
+ if ((pmctrl & (L1F_PMCTRL_L1_EN | L1F_PMCTRL_L0S_EN)) != 0)
+ alx_mem_w32(hw, L1F_PMCTRL, pmctrl);
+ }
+
+ /* clear internal OSC settings, switching OSC by hw itself,
+ * disable isolate for A0 */
+ alx_mem_r32(hw, L1F_MISC3, &val);
+ alx_mem_w32(hw, L1F_MISC3,
+ (val & ~L1F_MISC3_25M_BY_SW) | L1F_MISC3_25M_NOTO_INTNL);
+ alx_mem_r32(hw, L1F_MISC, &val);
+ val &= ~L1F_MISC_INTNLOSC_OPEN;
+ if (rev == L1F_REV_A0 || rev == L1F_REV_A1)
+ val &= ~L1F_MISC_ISO_EN;
+ alx_mem_w32(hw, L1F_MISC, val);
+ udelay(20);
+
+ /* driver control speed/duplex, hash-alg */
+ alx_mem_r32(hw, L1F_MAC_CTRL, &val);
+ alx_mem_w32(hw, L1F_MAC_CTRL, val | L1F_MAC_CTRL_WOLSPED_SWEN);
+
+ /* clk sw */
+ alx_mem_r32(hw, L1F_SERDES, &val);
+ alx_mem_w32(hw, L1F_SERDES,
+ val | L1F_SERDES_MACCLK_SLWDWN | L1F_SERDES_PHYCLK_SLWDWN);
+
+ return 0;
+}
+
+/* reset phy
+ * return
+ * 0: success
+ * non-0:fail
+ */
+int l1f_reset_phy(struct alx_hw *hw, bool pws_en, bool az_en, bool ptp_en)
+{
+ int val;
+ u16 i, phy_val;
+
+ /* reset PHY core */
+ alx_mem_r32(hw, L1F_PHY_CTRL, &val);
+ val &= ~(L1F_PHY_CTRL_DSPRST_OUT | L1F_PHY_CTRL_IDDQ |
+ L1F_PHY_CTRL_GATE_25M | L1F_PHY_CTRL_POWER_DOWN |
+ L1F_PHY_CTRL_CLS);
+ val |= L1F_PHY_CTRL_RST_ANALOG;
+
+ if (pws_en)
+ val |= (L1F_PHY_CTRL_HIB_PULSE | L1F_PHY_CTRL_HIB_EN);
+ else
+ val &= ~(L1F_PHY_CTRL_HIB_PULSE | L1F_PHY_CTRL_HIB_EN);
+ alx_mem_w32(hw, L1F_PHY_CTRL, val);
+ udelay(10); /* at least 5us */
+ alx_mem_w32(hw, L1F_PHY_CTRL, val | L1F_PHY_CTRL_DSPRST_OUT);
+
+ for (i = 0; i < L1F_PHY_CTRL_DSPRST_TO; i++) { /* delay 800us */
+ udelay(10);
+ }
+
+ /* ???? phy power saving */
+
+ l1f_write_phydbg(hw, true,
+ L1F_MIIDBG_TST10BTCFG, L1F_TST10BTCFG_DEF);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_SRDSYSMOD, L1F_SRDSYSMOD_DEF);
+ l1f_write_phydbg(hw, true,
+ L1F_MIIDBG_TST100BTCFG, L1F_TST100BTCFG_DEF);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_ANACTRL, L1F_ANACTRL_DEF);
+ l1f_read_phydbg(hw, true, L1F_MIIDBG_GREENCFG2, &phy_val);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_GREENCFG2,
+ phy_val & ~L1F_GREENCFG2_GATE_DFSE_EN);
+ /* rtl8139c, 120m */
+ l1f_write_phy(hw, true, L1F_MIIEXT_ANEG, true,
+ L1F_MIIEXT_NLP78, L1F_MIIEXT_NLP78_120M_DEF);
+ l1f_write_phy(hw, true, L1F_MIIEXT_ANEG, true,
+ L1F_MIIEXT_S3DIG10, L1F_MIIEXT_S3DIG10_DEF);
+
+ if (hw->msi_lnkpatch) {
+ /* Turn off half amplitude */
+ l1f_read_phy(hw, true, L1F_MIIEXT_PCS, true,
+ L1F_MIIEXT_CLDCTRL3, &phy_val);
+ l1f_write_phy(hw, true, L1F_MIIEXT_PCS, true,
+ L1F_MIIEXT_CLDCTRL3,
+ phy_val | L1F_CLDCTRL3_BP_CABLE1TH_DET_GT);
+ /* Turn off Green feature */
+ l1f_read_phydbg(hw, true, L1F_MIIDBG_GREENCFG2, &phy_val);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_GREENCFG2,
+ phy_val | L1F_GREENCFG2_BP_GREEN);
+ /* Turn off half Bias */
+ l1f_read_phy(hw, true, L1F_MIIEXT_PCS, true,
+ L1F_MIIEXT_CLDCTRL5, &phy_val);
+ l1f_write_phy(hw, true, L1F_MIIEXT_PCS, true,
+ L1F_MIIEXT_CLDCTRL5,
+ phy_val | L1F_CLDCTRL5_BP_VD_HLFBIAS);
+ }
+
+ /* set phy interrupt mask */
+ l1f_write_phy(hw, false, 0, true,
+ L1F_MII_IER, L1F_IER_LINK_UP | L1F_IER_LINK_DOWN);
+
+ return 0;
+}
+
+
+/* reset pcie
+ * just reset pcie related registers (pci command, clk, aspm...)
+ * return
+ * 0:success
+ * non-0:fail
+ */
+int l1f_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en)
+{
+ u32 val;
+ u16 val16;
+ int ret;
+ u8 rev = (u8)(FIELD_GETX(hw->pci_revid, L1F_PCI_REVID));
+
+ /* Workaround for PCI problem when BIOS sets MMRBC incorrectly. */
+ alx_cfg_r16(hw, PCI_COMMAND, &val16);
+ if (!(val16 & ALX_PCI_CMD) || (val16 & PCI_COMMAND_INTX_DISABLE)) {
+ val16 = (val16 | ALX_PCI_CMD) & ~PCI_COMMAND_INTX_DISABLE;
+ alx_cfg_w16(hw, PCI_COMMAND, val16);
+ }
+
+ /* Clear any PowerSaving Settings */
+ alx_cfg_w16(hw, L1F_PM_CSR, 0);
+
+ /* restore default value of PDLL D3PLLOFF */
+ alx_mem_r32(hw, L1F_PDLL_TRNS1, &val);
+ alx_mem_w32(hw, L1F_PDLL_TRNS1, val & ~L1F_PDLL_TRNS1_D3PLLOFF_EN);
+
+ /* mask some pcie error bits */
+ alx_mem_r32(hw, L1F_UE_SVRT, &val);
+ val &= ~(L1F_UE_SVRT_DLPROTERR | L1F_UE_SVRT_FCPROTERR);
+ alx_mem_w32(hw, L1F_UE_SVRT, val);
+
+ /* wol 25M & pclk */
+ alx_mem_r32(hw, L1F_MASTER, &val);
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ if ((val & L1F_MASTER_WAKEN_25M) == 0 ||
+ (val & L1F_MASTER_PCLKSEL_SRDS) == 0) {
+ alx_mem_w32(hw, L1F_MASTER,
+ val | L1F_MASTER_PCLKSEL_SRDS |
+ L1F_MASTER_WAKEN_25M);
+ }
+ } else {
+ if ((val & L1F_MASTER_WAKEN_25M) == 0 ||
+ (val & L1F_MASTER_PCLKSEL_SRDS) != 0) {
+ alx_mem_w32(hw, L1F_MASTER,
+ (val & ~L1F_MASTER_PCLKSEL_SRDS) |
+ L1F_MASTER_WAKEN_25M);
+ }
+ }
+
+ /* l0s, l1 setting */
+ ret = l1f_enable_aspm(hw, l0s_en, l1_en, 0);
+
+ udelay(10);
+
+ return ret;
+}
+
+
+/* disable/enable MAC/RXQ/TXQ
+ * en
+ * true: enable
+ * false: disable
+ * return
+ * 0: success
+ * non-0: fail
+ */
+int l1f_enable_mac(struct alx_hw *hw, bool en, u16 en_ctrl)
+{
+ u32 rxq, txq, mac, val;
+ u16 i;
+ u32 ctrl_hw_map[] = {
+ LX_MACDUPLEX_FULL, L1F_MAC_CTRL_FULLD,
+ LX_FLT_PROMISC, L1F_MAC_CTRL_PROMISC_EN,
+ LX_FLT_MULTI_ALL, L1F_MAC_CTRL_MULTIALL_EN,
+ LX_FLT_BROADCAST, L1F_MAC_CTRL_BRD_EN,
+ LX_FLT_DIRECT, L1F_MAC_CTRL_RX_EN,
+ LX_FC_TXEN, L1F_MAC_CTRL_TXFC_EN,
+ LX_FC_RXEN, L1F_MAC_CTRL_RXFC_EN,
+ LX_VLAN_STRIP, L1F_MAC_CTRL_VLANSTRIP,
+ LX_LOOPBACK, L1F_MAC_CTRL_LPBACK_EN,
+ LX_SINGLE_PAUSE, L1F_MAC_CTRL_SPAUSE_EN,
+ LX_ADD_FCS, (L1F_MAC_CTRL_PCRCE | L1F_MAC_CTRL_CRCE)
+ };
+
+ alx_mem_r32(hw, L1F_RXQ0, &rxq);
+ alx_mem_r32(hw, L1F_TXQ0, &txq);
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac);
+
+ if (en) { /* enable */
+ alx_mem_w32(hw, L1F_RXQ0, rxq | L1F_RXQ0_EN);
+ alx_mem_w32(hw, L1F_TXQ0, txq | L1F_TXQ0_EN);
+ if ((en_ctrl & LX_MACSPEED_1000) != 0) {
+ FIELD_SETL(mac, L1F_MAC_CTRL_SPEED,
+ L1F_MAC_CTRL_SPEED_1000);
+ } else {
+ FIELD_SETL(mac, L1F_MAC_CTRL_SPEED,
+ L1F_MAC_CTRL_SPEED_10_100);
+ }
+ for (i = 0; i < ARRAY_SIZE(ctrl_hw_map); i += 2) {
+ if (en_ctrl & ctrl_hw_map[i])
+ mac |= ctrl_hw_map[i + 1];
+ else
+ mac &= ~ctrl_hw_map[i + 1];
+ }
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac | L1F_MAC_CTRL_TX_EN);
+ } else { /* disable mac */
+ alx_mem_w32(hw, L1F_RXQ0, rxq & ~L1F_RXQ0_EN);
+ alx_mem_w32(hw, L1F_TXQ0, txq & ~L1F_TXQ0_EN);
+
+ /* wait for rxq/txq to become idle */
+ udelay(40);
+
+ /* stop mac tx/rx */
+ alx_mem_w32(hw, L1F_MAC_CTRL,
+ mac & ~(L1F_MAC_CTRL_RX_EN | L1F_MAC_CTRL_TX_EN));
+
+ for (i = 0; i < L1F_DMA_MAC_RST_TO; i++) {
+ alx_mem_r32(hw, L1F_MAC_STS, &val);
+ if ((val & L1F_MAC_STS_IDLE) == 0)
+ break;
+ udelay(10);
+ }
+ if (L1F_DMA_MAC_RST_TO == i)
+ return LX_ERR_RSTMAC;
+ }
+
+ return 0;
+}
+
+/* enable/disable aspm support
+ * that will change settings for phy/mac/pcie
+ */
+int l1f_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en, u8 lnk_stat)
+{
+ u32 pmctrl;
+ u8 rev = (u8)(FIELD_GETX(hw->pci_revid, L1F_PCI_REVID));
+
+
+ alx_mem_r32(hw, L1F_PMCTRL, &pmctrl);
+
+ /* default */
+ FIELD_SETL(pmctrl, L1F_PMCTRL_LCKDET_TIMER,
+ L1F_PMCTRL_LCKDET_TIMER_DEF);
+ pmctrl |= L1F_PMCTRL_RCVR_WT_1US | /* wait 1us */
+ L1F_PMCTRL_L1_CLKSW_EN | /* pcie clk sw */
+ L1F_PMCTRL_L1_SRDSRX_PWD; /* pwd serdes (default) */
+ /* default */
+ FIELD_SETL(pmctrl, L1F_PMCTRL_L1REQ_TO, L1F_PMCTRL_L1REG_TO_DEF);
+ FIELD_SETL(pmctrl, L1F_PMCTRL_L1_TIMER, L1F_PMCTRL_L1_TIMER_16US);
+ pmctrl &= ~(L1F_PMCTRL_L1_SRDS_EN |
+ L1F_PMCTRL_L1_SRDSPLL_EN |
+ L1F_PMCTRL_L1_BUFSRX_EN |
+ L1F_PMCTRL_SADLY_EN | /* default */
+ L1F_PMCTRL_HOTRST_WTEN |
+ L1F_PMCTRL_L0S_EN |
+ L1F_PMCTRL_L1_EN |
+ L1F_PMCTRL_ASPM_FCEN |
+ L1F_PMCTRL_TXL1_AFTER_L0S |
+ L1F_PMCTRL_RXL1_AFTER_L0S
+ );
+ if ((rev == L1F_REV_A0 || rev == L1F_REV_A1) &&
+ (hw->pci_revid & L1F_PCI_REVID_WTH_CR) != 0) {
+ pmctrl |= L1F_PMCTRL_L1_SRDS_EN | L1F_PMCTRL_L1_SRDSPLL_EN;
+ }
+
+ /* on/off l0s only if bios/system enable l0s */
+ if (/* sysl0s_en && */ l0s_en)
+ pmctrl |= (L1F_PMCTRL_L0S_EN | L1F_PMCTRL_ASPM_FCEN);
+ /* on/off l1 only if bios/system enable l1 */
+ if (/* sysl1_en && */ l1_en)
+ pmctrl |= (L1F_PMCTRL_L1_EN | L1F_PMCTRL_ASPM_FCEN);
+
+ alx_mem_w32(hw, L1F_PMCTRL, pmctrl);
+
+ return 0;
+}
+
+
+/* initialize phy for speed / flow control
+ * lnk_cap
+ * if autoNeg, is link capability to tell the peer
+ * if force mode, is forced speed/duplex
+ */
+int l1f_init_phy_spdfc(struct alx_hw *hw, bool auto_neg,
+ u8 lnk_cap, bool fc_en)
+{
+ u16 adv, giga, cr;
+ u32 val;
+ int ret;
+
+ /* clear flag */
+ l1f_write_phy(hw, false, 0, false, L1F_MII_DBG_ADDR, 0);
+ alx_mem_r32(hw, L1F_DRV, &val);
+ FIELD_SETL(val, LX_DRV_PHY, 0);
+
+ if (auto_neg) {
+ adv = L1F_ADVERTISE_DEFAULT_CAP & ~L1F_ADVERTISE_SPEED_MASK;
+ giga = L1F_GIGA_CR_1000T_DEFAULT_CAP &
+ ~L1F_GIGA_CR_1000T_SPEED_MASK;
+ val |= LX_DRV_PHY_AUTO;
+ if (!fc_en)
+ adv &= ~(ADVERTISE_PAUSE_CAP | ADVERTISE_PAUSE_ASYM);
+ else
+ val |= LX_DRV_PHY_FC;
+ if ((LX_LC_10H & lnk_cap) != 0) {
+ adv |= ADVERTISE_10HALF;
+ val |= LX_DRV_PHY_10;
+ }
+ if ((LX_LC_10F & lnk_cap) != 0) {
+ adv |= ADVERTISE_10HALF |
+ ADVERTISE_10FULL;
+ val |= LX_DRV_PHY_10 | LX_DRV_PHY_DUPLEX;
+ }
+ if ((LX_LC_100H & lnk_cap) != 0) {
+ adv |= ADVERTISE_100HALF;
+ val |= LX_DRV_PHY_100;
+ }
+ if ((LX_LC_100F & lnk_cap) != 0) {
+ adv |= ADVERTISE_100HALF |
+ ADVERTISE_100FULL;
+ val |= LX_DRV_PHY_100 | LX_DRV_PHY_DUPLEX;
+ }
+ if ((LX_LC_1000F & lnk_cap) != 0) {
+ giga |= L1F_GIGA_CR_1000T_FD_CAPS;
+ val |= LX_DRV_PHY_1000 | LX_DRV_PHY_DUPLEX;
+ }
+
+ ret = l1f_write_phy(hw, false, 0, false, MII_ADVERTISE, adv);
+ ret = l1f_write_phy(hw, false, 0, false, MII_CTRL1000, giga);
+
+ cr = BMCR_RESET | BMCR_ANENABLE | BMCR_ANRESTART;
+ ret = l1f_write_phy(hw, false, 0, false, MII_BMCR, cr);
+ } else { /* force mode */
+ cr = BMCR_RESET;
+ switch (lnk_cap) {
+ case LX_LC_10H:
+ val |= LX_DRV_PHY_10;
+ break;
+ case LX_LC_10F:
+ cr |= BMCR_FULLDPLX;
+ val |= LX_DRV_PHY_10 | LX_DRV_PHY_DUPLEX;
+ break;
+ case LX_LC_100H:
+ cr |= BMCR_SPEED100;
+ val |= LX_DRV_PHY_100;
+ break;
+ case LX_LC_100F:
+ cr |= BMCR_SPEED100 | BMCR_FULLDPLX;
+ val |= LX_DRV_PHY_100 | LX_DRV_PHY_DUPLEX;
+ break;
+ default:
+ return LX_ERR_PARM;
+ }
+ ret = l1f_write_phy(hw, false, 0, false, MII_BMCR, cr);
+ }
+
+ if (!ret) {
+ l1f_write_phy(hw, false, 0, false,
+ L1F_MII_DBG_ADDR, LX_PHY_INITED);
+ }
+ alx_mem_w32(hw, L1F_DRV, val);
+
+ return ret;
+}
+
+
+/*
+ * do post setting on phy if link up/down event occur
+ */
+int l1f_post_phy_link(struct alx_hw *hw, bool az_en, bool linkon, u8 wire_spd)
+{
+ u16 phy_val, len, agc;
+ u8 revid = FIELD_GETX(hw->pci_revid, L1F_PCI_REVID);
+ bool adj_th;
+
+
+ if (revid != L1F_REV_B0 &&
+ revid != L1F_REV_A1 &&
+ revid != L1F_REV_A0) {
+ return 0;
+ }
+ adj_th = revid == L1F_REV_B0;
+
+ /* 1000BT/AZ, wrong cable length */
+ if (linkon) {
+ l1f_read_phy(hw, true, L1F_MIIEXT_PCS, true,
+ L1F_MIIEXT_CLDCTRL6, &phy_val);
+ len = FIELD_GETX(phy_val, L1F_CLDCTRL6_CAB_LEN);
+ l1f_read_phydbg(hw, true, L1F_MIIDBG_AGC, &phy_val);
+ agc = FIELD_GETX(phy_val, L1F_AGC_2_VGA);
+
+ if ((wire_spd == LX_LC_1000F &&
+ (len > L1F_CLDCTRL6_CAB_LEN_SHORT1G ||
+ (0 == len && agc > L1F_AGC_LONG1G_LIMT))) ||
+ ((wire_spd == LX_LC_100F || wire_spd == LX_LC_100H) &&
+ (len > L1F_CLDCTRL6_CAB_LEN_SHORT100M ||
+ (0 == len && agc > L1F_AGC_LONG100M_LIMT)))) {
+ l1f_write_phydbg(hw, true,
+ L1F_MIIDBG_AZ_ANADECT,
+ L1F_AZ_ANADECT_LONG);
+ l1f_read_phy(hw, true,
+ L1F_MIIEXT_ANEG, true,
+ L1F_MIIEXT_AFE, &phy_val);
+ l1f_write_phy(hw, true,
+ L1F_MIIEXT_ANEG, true, L1F_MIIEXT_AFE,
+ phy_val | L1F_AFE_10BT_100M_TH);
+ } else {
+ l1f_write_phydbg(hw, true,
+ L1F_MIIDBG_AZ_ANADECT,
+ L1F_AZ_ANADECT_DEF);
+ l1f_read_phy(hw, true,
+ L1F_MIIEXT_ANEG, true,
+ L1F_MIIEXT_AFE, &phy_val);
+ l1f_write_phy(hw, true,
+ L1F_MIIEXT_ANEG, true, L1F_MIIEXT_AFE,
+ phy_val & ~L1F_AFE_10BT_100M_TH);
+ }
+
+ /* threshold adjust */
+ if (adj_th && hw->msi_lnkpatch) {
+ if (wire_spd == LX_LC_100F || wire_spd == LX_LC_100H) {
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_MSE16DB,
+ L1F_MSE16DB_UP);
+ } else if (wire_spd == LX_LC_1000F) {
+ /*
+ * Giga link threshold, raise the tolerance of
+ * noise 50%
+ */
+ l1f_read_phydbg(hw, true, L1F_MIIDBG_MSE20DB,
+ &phy_val);
+ FIELD_SETS(phy_val, L1F_MSE20DB_TH,
+ L1F_MSE20DB_TH_HI);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_MSE20DB,
+ phy_val);
+ }
+ }
+ /* phy link-down in 1000BT/AZ mode */
+ if (az_en && revid == L1F_REV_B0 && wire_spd == LX_LC_1000F) {
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_SRDSYSMOD,
+ L1F_SRDSYSMOD_DEF &
+ ~L1F_SRDSYSMOD_DEEMP_EN);
+ }
+ } else {
+ l1f_read_phy(hw, true,
+ L1F_MIIEXT_ANEG, false, L1F_MIIEXT_AFE, &phy_val);
+ l1f_write_phy(hw, true,
+ L1F_MIIEXT_ANEG, false, L1F_MIIEXT_AFE,
+ phy_val & ~L1F_AFE_10BT_100M_TH);
+
+ if (adj_th && hw->msi_lnkpatch) {
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_MSE16DB,
+ L1F_MSE16DB_DOWN);
+ l1f_read_phydbg(hw, true, L1F_MIIDBG_MSE20DB, &phy_val);
+ FIELD_SETS(phy_val, L1F_MSE20DB_TH, L1F_MSE20DB_TH_DEF);
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_MSE20DB, phy_val);
+ }
+ if (az_en && revid == L1F_REV_B0) {
+ l1f_write_phydbg(hw, true, L1F_MIIDBG_SRDSYSMOD,
+ L1F_SRDSYSMOD_DEF);
+ }
+ }
+
+ return 0;
+}
+
+
+/* do power saving setting before entering suspend mode
+ * NOTE:
+ * 1. phy link must be established before calling this function
+ * 2. wol option (pattern, magic, link, etc.) is configured before calling it.
+ */
+int l1f_powersaving(struct alx_hw *hw,
+ u8 wire_spd,
+ bool wol_en,
+ bool mactx_en,
+ bool macrx_en,
+ bool pws_en)
+{
+ u32 master_ctrl, mac_ctrl, phy_ctrl, val;
+ u16 pm_ctrl;
+ int ret = 0;
+
+ master_ctrl = 0;
+ mac_ctrl = 0;
+ phy_ctrl = 0;
+
+ alx_mem_r32(hw, L1F_MASTER, &master_ctrl);
+ master_ctrl &= ~L1F_MASTER_PCLKSEL_SRDS;
+
+ alx_mem_r32(hw, L1F_MAC_CTRL, &mac_ctrl);
+ /* 10/100 half */
+ FIELD_SETL(mac_ctrl, L1F_MAC_CTRL_SPEED, L1F_MAC_CTRL_SPEED_10_100);
+ mac_ctrl &= ~(L1F_MAC_CTRL_FULLD |
+ L1F_MAC_CTRL_RX_EN |
+ L1F_MAC_CTRL_TX_EN);
+
+ alx_mem_r32(hw, L1F_PHY_CTRL, &phy_ctrl);
+ phy_ctrl &= ~(L1F_PHY_CTRL_DSPRST_OUT | L1F_PHY_CTRL_CLS);
+ /* if (pws_en) { */
+ phy_ctrl |= (L1F_PHY_CTRL_RST_ANALOG | L1F_PHY_CTRL_HIB_PULSE |
+ L1F_PHY_CTRL_HIB_EN);
+
+ if (wol_en) { /* enable rx packet or tx packet */
+ if (macrx_en)
+ mac_ctrl |= (L1F_MAC_CTRL_RX_EN | L1F_MAC_CTRL_BRD_EN);
+ if (mactx_en)
+ mac_ctrl |= L1F_MAC_CTRL_TX_EN;
+ if (LX_LC_1000F == wire_spd) {
+ FIELD_SETL(mac_ctrl, L1F_MAC_CTRL_SPEED,
+ L1F_MAC_CTRL_SPEED_1000);
+ }
+ if (LX_LC_10F == wire_spd ||
+ LX_LC_100F == wire_spd ||
+ LX_LC_1000F == wire_spd) {
+ mac_ctrl |= L1F_MAC_CTRL_FULLD;
+ }
+ phy_ctrl |= L1F_PHY_CTRL_DSPRST_OUT;
+ ret = l1f_write_phy(hw, false, 0, false, L1F_MII_IER,
+ L1F_IER_LINK_UP);
+ ret = l1f_write_phy(hw, true, L1F_MIIEXT_ANEG, false,
+ L1F_MIIEXT_S3DIG10, L1F_MIIEXT_S3DIG10_SL);
+ } else {
+ ret = l1f_write_phy(hw, false, 0, false, L1F_MII_IER, 0);
+ phy_ctrl |= (L1F_PHY_CTRL_IDDQ | L1F_PHY_CTRL_POWER_DOWN);
+ }
+ alx_mem_w32(hw, L1F_MASTER, master_ctrl);
+ alx_mem_w32(hw, L1F_MAC_CTRL, mac_ctrl);
+ alx_mem_w32(hw, L1F_PHY_CTRL, phy_ctrl);
+
+ /* set val of PDLL D3PLLOFF */
+ alx_mem_r32(hw, L1F_PDLL_TRNS1, &val);
+ alx_mem_w32(hw, L1F_PDLL_TRNS1, val | L1F_PDLL_TRNS1_D3PLLOFF_EN);
+
+ /* set PME_EN */
+ if (wol_en) {
+ alx_cfg_r16(hw, L1F_PM_CSR, &pm_ctrl);
+ pm_ctrl |= L1F_PM_CSR_PME_EN;
+ alx_cfg_w16(hw, L1F_PM_CSR, pm_ctrl);
+ }
+
+ return ret;
+}
+
+
+/* read phy register */
+int l1f_read_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast,
+ u16 reg, u16 *data)
+{
+ u32 val;
+ u16 clk_sel, i;
+ int ret = 0;
+
+ *data = 0;
+ clk_sel = fast ?
+ (u16)L1F_MDIO_CLK_SEL_25MD4 : (u16)L1F_MDIO_CLK_SEL_25MD128;
+
+ if (ext) {
+ val = FIELDL(L1F_MDIO_EXTN_DEVAD, dev) |
+ FIELDL(L1F_MDIO_EXTN_REG, reg);
+ alx_mem_w32(hw, L1F_MDIO_EXTN, val);
+
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ L1F_MDIO_START |
+ L1F_MDIO_MODE_EXT |
+ L1F_MDIO_OP_READ;
+ } else {
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1F_MDIO_REG, reg) |
+ L1F_MDIO_START |
+ L1F_MDIO_OP_READ;
+ }
+
+ alx_mem_w32(hw, L1F_MDIO, val);
+
+ for (i = 0; i < L1F_MDIO_MAX_AC_TO; i++) {
+ alx_mem_r32(hw, L1F_MDIO, &val);
+ if ((val & L1F_MDIO_BUSY) == 0) {
+ *data = (u16)FIELD_GETX(val, L1F_MDIO_DATA);
+ break;
+ }
+ udelay(10);
+ }
+
+ if (L1F_MDIO_MAX_AC_TO == i)
+ ret = LX_ERR_MIIBUSY;
+
+ return ret;
+}
+
+/* write phy register */
+int l1f_write_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast,
+ u16 reg, u16 data)
+{
+ u32 val;
+ u16 clk_sel, i;
+ int ret = 0;
+
+ clk_sel = fast ?
+ (u16)L1F_MDIO_CLK_SEL_25MD4 : (u16)L1F_MDIO_CLK_SEL_25MD128;
+
+ if (ext) {
+ val = FIELDL(L1F_MDIO_EXTN_DEVAD, dev) |
+ FIELDL(L1F_MDIO_EXTN_REG, reg);
+ alx_mem_w32(hw, L1F_MDIO_EXTN, val);
+
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1F_MDIO_DATA, data) |
+ L1F_MDIO_START |
+ L1F_MDIO_MODE_EXT;
+ } else {
+ val = L1F_MDIO_SPRES_PRMBL |
+ FIELDL(L1F_MDIO_CLK_SEL, clk_sel) |
+ FIELDL(L1F_MDIO_REG, reg) |
+ FIELDL(L1F_MDIO_DATA, data) |
+ L1F_MDIO_START;
+ }
+
+ alx_mem_w32(hw, L1F_MDIO, val);
+
+ for (i = 0; i < L1F_MDIO_MAX_AC_TO; i++) {
+ alx_mem_r32(hw, L1F_MDIO, &val);
+ if ((val & L1F_MDIO_BUSY) == 0)
+ break;
+ udelay(10);
+ }
+
+ if (L1F_MDIO_MAX_AC_TO == i)
+ ret = LX_ERR_MIIBUSY;
+
+ return ret;
+}
+
+int l1f_read_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 *data)
+{
+ int ret;
+
+ ret = l1f_write_phy(hw, false, 0, fast, L1F_MII_DBG_ADDR, reg);
+ ret = l1f_read_phy(hw, false, 0, fast, L1F_MII_DBG_DATA, data);
+
+ return ret;
+}
+
+int l1f_write_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 data)
+{
+ int ret;
+
+ ret = l1f_write_phy(hw, false, 0, fast, L1F_MII_DBG_ADDR, reg);
+ ret = l1f_write_phy(hw, false, 0, fast, L1F_MII_DBG_DATA, data);
+
+ return ret;
+}
+
+
+void l1f_init_mac_misc(struct alx_hw *hw, u8 *addr, u16 smb_timer, u16 int_mod,
+ bool hash_legacy)
+{
+ u32 val;
+
+ /* set mac-address */
+ val = be32_to_cpu(*(__be32 *)(addr + 2));
+ alx_mem_w32(hw, L1F_STAD0, val);
+ val = be16_to_cpu(*(__be16 *)addr);
+ alx_mem_w32(hw, L1F_STAD1, val);
+
+ /* clear multicast hash table, hash algorithm */
+ alx_mem_w32(hw, L1F_HASH_TBL0, 0);
+ alx_mem_w32(hw, L1F_HASH_TBL1, 0);
+ alx_mem_r32(hw, L1F_MAC_CTRL, &val);
+ if (hash_legacy)
+ val |= L1F_MAC_CTRL_MHASH_ALG_HI5B;
+ else
+ val &= ~L1F_MAC_CTRL_MHASH_ALG_HI5B;
+ alx_mem_w32(hw, L1F_MAC_CTRL, val);
+
+ /* clear any wol setting/status */
+ alx_mem_r32(hw, L1F_WOL0, &val);
+ alx_mem_w32(hw, L1F_WOL0, 0);
+
+ /* clk gating */
+ alx_mem_w32(hw, L1F_CLK_GATE,
+ (FIELD_GETX(hw->pci_revid, L1F_PCI_REVID) == L1F_REV_B0) ?
+ L1F_CLK_GATE_ALL_B0 : L1F_CLK_GATE_ALL_A0);
+
+ /* idle timeout to switch clk_125M */
+ if (FIELD_GETX(hw->pci_revid, L1F_PCI_REVID) == L1F_REV_B0) {
+ alx_mem_w32(hw, L1F_IDLE_DECISN_TIMER,
+ L1F_IDLE_DECISN_TIMER_DEF);
+ }
+
+ alx_mem_w32(hw, L1F_SMB_TIMER, smb_timer * 500UL);
+
+ /* interrupt moderation */
+ alx_mem_r32(hw, L1F_MASTER, &val);
+ val = val | L1F_MASTER_IRQMOD2_EN |
+ L1F_MASTER_IRQMOD1_EN |
+ L1F_MASTER_SYSALVTIMER_EN; /* sysalive */
+ alx_mem_w32(hw, L1F_MASTER, val);
+ alx_mem_w32(hw, L1F_IRQ_MODU_TIMER,
+ FIELDL(L1F_IRQ_MODU_TIMER1, int_mod >> 1));
+
+ alx_mem_w32(hw, L1F_TINT_TIMER, int_mod);
+ /* re-send int */
+ alx_mem_w32(hw, L1F_INT_RETRIG, L1F_INT_RETRIG_TO);
+}
+
+
+int l1f_init_mac_rtx_ring_desc(struct alx_hw *hw, u32 rxmem_hi,
+ u32 rfdmem_lo, u32 rrdmem_lo,
+ u16 rxring_sz, u16 rxbuf_sz,
+ u32 txmem_hi, u32 *tx_mem_lo,
+ u8 tx_qnum, u16 txring_sz)
+{
+ alx_mem_w32(hw, L1F_TX_BASE_ADDR_HI, txmem_hi);
+ alx_mem_w32(hw, L1F_TPD_RING_SZ, txring_sz);
+ switch (tx_qnum) {
+ case 4:
+ alx_mem_w32(hw, L1F_TPD_PRI3_ADDR_LO, tx_mem_lo[3]);
+ /* fall through */
+ case 3:
+ alx_mem_w32(hw, L1F_TPD_PRI2_ADDR_LO, tx_mem_lo[2]);
+ /* fall through */
+ case 2:
+ alx_mem_w32(hw, L1F_TPD_PRI1_ADDR_LO, tx_mem_lo[1]);
+ /* fall through */
+ case 1:
+ alx_mem_w32(hw, L1F_TPD_PRI0_ADDR_LO, tx_mem_lo[0]);
+ break;
+ default:
+ return LX_ERR_PARM;
+ }
+ alx_mem_w32(hw, L1F_RX_BASE_ADDR_HI, rxmem_hi);
+ alx_mem_w32(hw, L1F_RFD_ADDR_LO, rfdmem_lo);
+ alx_mem_w32(hw, L1F_RRD_ADDR_LO, rrdmem_lo);
+ alx_mem_w32(hw, L1F_RFD_BUF_SZ, rxbuf_sz);
+ alx_mem_w32(hw, L1F_RRD_RING_SZ, rxring_sz);
+ alx_mem_w32(hw, L1F_RFD_RING_SZ, rxring_sz);
+ alx_mem_w32(hw, L1F_SRAM9, L1F_SRAM_LOAD_PTR);
+
+ /* tpd threshold to trig int */
+ alx_mem_w32(hw, L1F_TINT_TPD_THRSHLD, (u32)txring_sz / 3);
+
+ /*
+ * rfd producer index
+ * attention - rxq should be disabled before setting the rfd
+ * producer index
+ */
+ alx_mem_w32(hw, L1F_RFD_PIDX, (u32)rxring_sz - 1);
+
+ return 0;
+}
+
+void l1f_init_mac_rtx_queue(struct alx_hw *hw, u16 mtu)
+{
+ u32 val;
+ u16 val16, devid;
+ u8 dmar_len;
+
+ alx_cfg_r16(hw, PCI_DEVICE_ID, &devid);
+
+ /* mtu */
+ alx_mem_w32(hw, L1F_MTU, (u32)(mtu + 4 + 4)); /* crc + vlan */
+ if (mtu > L1F_MTU_JUMBO_TH) {
+ alx_mem_r32(hw, L1F_MAC_CTRL, &val);
+ alx_mem_w32(hw, L1F_MAC_CTRL, val & ~L1F_MAC_CTRL_FAST_PAUSE);
+ }
+
+ /* txq */
+ if ((mtu + 8) < L1F_TXQ1_JUMBO_TSO_TH)
+ val = (u32)(mtu + 8 + 7) >> 3; /* 7 for QWORD align */
+ else
+ val = L1F_TXQ1_JUMBO_TSO_TH >> 3;
+ alx_mem_w32(hw, L1F_TXQ1, val | L1F_TXQ1_ERRLGPKT_DROP_EN);
+ alx_mem_r32(hw, L1F_DEV_CTRL, &val);
+ dmar_len = (u8)FIELD_GETX(val, L1F_DEV_CTRL_MAXRRS);
+ /* if BIOS had changed the default dma read max length,
+ * restore it to default value */
+ if (dmar_len < L1F_DEV_CTRL_MAXRRS_MIN) {
+ FIELD_SETL(val, L1F_DEV_CTRL_MAXRRS, L1F_DEV_CTRL_MAXRRS_MIN);
+ alx_mem_w32(hw, L1F_DEV_CTRL, val);
+ }
+ val = FIELDL(L1F_TXQ0_TPD_BURSTPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ L1F_TXQ0_MODE_ENHANCE |
+ L1F_TXQ0_LSO_8023_EN |
+ L1F_TXQ0_SUPT_IPOPT |
+ FIELDL(L1F_TXQ0_TXF_BURST_PREF, L1F_TXQ_TXF_BURST_PREF_DEF);
+ alx_mem_w32(hw, L1F_TXQ0, val);
+ val = FIELDL(L1F_HQTPD_Q1_NUMPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ FIELDL(L1F_HQTPD_Q2_NUMPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ FIELDL(L1F_HQTPD_Q3_NUMPREF, L1F_TXQ_TPD_BURSTPREF_DEF) |
+ L1F_HQTPD_BURST_EN;
+ alx_mem_w32(hw, L1F_HQTPD, val);
+
+ /* rxq */
+ alx_mem_r32(hw, L1F_SRAM5, &val);
+ val = FIELD_GETX(val, L1F_SRAM_RXF_LEN) << 3; /* bytes */
+ if (val > L1F_SRAM_RXF_LEN_8K) {
+ val16 = L1F_MTU_STD_ALGN >> 3;
+ val = (val - L1F_RXQ2_RXF_FLOW_CTRL_RSVD) >> 3;
+ } else {
+ val16 = L1F_MTU_STD_ALGN >> 3;
+ val = (val - L1F_MTU_STD_ALGN) >> 3;
+ }
+ alx_mem_w32(hw, L1F_RXQ2,
+ FIELDL(L1F_RXQ2_RXF_XOFF_THRESH, val16) |
+ FIELDL(L1F_RXQ2_RXF_XON_THRESH, val));
+ val = FIELDL(L1F_RXQ0_NUM_RFD_PREF, L1F_RXQ0_NUM_RFD_PREF_DEF) |
+ FIELDL(L1F_RXQ0_RSS_MODE, L1F_RXQ0_RSS_MODE_DIS) |
+ FIELDL(L1F_RXQ0_IDT_TBL_SIZE, L1F_RXQ0_IDT_TBL_SIZE_DEF) |
+ L1F_RXQ0_RSS_HSTYP_ALL |
+ L1F_RXQ0_RSS_HASH_EN |
+ L1F_RXQ0_IPV6_PARSE_EN;
+
+ if ((devid & 1) != 0) {
+ FIELD_SETL(val, L1F_RXQ0_ASPM_THRESH,
+ L1F_RXQ0_ASPM_THRESH_100M);
+ }
+ alx_mem_w32(hw, L1F_RXQ0, val);
+}
+
+void l1f_init_mac_dma(struct alx_hw *hw)
+{
+ u32 val;
+ u8 dmar_len;
+
+ alx_mem_r32(hw, L1F_DEV_CTRL, &val);
+ dmar_len = (u8)FIELD_GETX(val, L1F_DEV_CTRL_MAXRRS);
+ /* DMA */
+ alx_mem_r32(hw, L1F_DMA, &val);
+ val = FIELDL(L1F_DMA_RORDER_MODE, L1F_DMA_RORDER_MODE_OUT) |
+ L1F_DMA_RREQ_PRI_DATA |
+ FIELDL(L1F_DMA_RREQ_BLEN, dmar_len) |
+ FIELDL(L1F_DMA_WDLY_CNT, L1F_DMA_WDLY_CNT_DEF) |
+ FIELDL(L1F_DMA_RDLY_CNT, L1F_DMA_RDLY_CNT_DEF) |
+ FIELDL(L1F_DMA_RCHNL_SEL, hw->dma_chnl - 1);
+ alx_mem_w32(hw, L1F_DMA, val);
+}
+
+
+int l1f_get_phy_config(struct alx_hw *hw)
+{
+ u32 val;
+ u16 phy_val;
+
+ alx_mem_r32(hw, L1F_PHY_CTRL, &val);
+ /* phy in rst */
+ if ((val & L1F_PHY_CTRL_DSPRST_OUT) == 0)
+ return LX_DRV_PHY_UNKNOWN;
+
+ alx_mem_r32(hw, L1F_DRV, &val);
+ val = FIELD_GETX(val, LX_DRV_PHY);
+ if (LX_DRV_PHY_UNKNOWN == val)
+ return LX_DRV_PHY_UNKNOWN;
+
+ l1f_read_phy(hw, false, 0, false, L1F_MII_DBG_ADDR, &phy_val);
+ if (LX_PHY_INITED == phy_val)
+ return (u16) val;
+
+ return LX_DRV_PHY_UNKNOWN;
+}
diff --git a/drivers/net/ethernet/atheros/alx/alx_hw.h b/drivers/net/ethernet/atheros/alx/alx_hw.h
new file mode 100644
index 0000000..284f6b4
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_hw.h
@@ -0,0 +1,2132 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef L1F_HW_H_
+#define L1F_HW_H_
+
+/*********************************************************************
+ * some reqs for l1f_sw.h
+ *
+ * 1. some basic types must be defined if they are not defined by
+ * your compiler:
+ * u8, u16, u32, bool
+ *
+ * 2. the PETHCONTEXT definition should be in l1x_sw.h and it must
+ * contain pci_devid & pci_venid
+ *
+ *********************************************************************/
+
+#include "alx_hwcom.h"
+
+/******************************************************************************/
+#define L1F_DEV_ID 0x1091
+#define L2F_DEV_ID 0x1090
+#define L1F_DEV_AR71ID 0x10A1
+#define L2F_DEV_AR71ID 0x10A0
+#define L1F_DEV_BFID 0xE091 /* for bigfoot */
+
+#define L1F_PCI_REVID_WTH_CR BIT(1)
+#define L1F_PCI_REVID_WTH_XD BIT(0)
+#define L1F_PCI_REVID_MASK 0x1FU
+#define L1F_PCI_REVID_SHIFT 3
+#define L1F_REV_A0 0
+#define L1F_REV_A1 1
+#define L1F_REV_B0 2
+
+#define L1F_PM_CSR 0x0044 /* 16bit */
+#define L1F_PM_CSR_PME_STAT BIT(15)
+#define L1F_PM_CSR_DSCAL_MASK 0x3U
+#define L1F_PM_CSR_DSCAL_SHIFT 13
+#define L1F_PM_CSR_DSEL_MASK 0xFU
+#define L1F_PM_CSR_DSEL_SHIFT 9
+#define L1F_PM_CSR_PME_EN BIT(8)
+#define L1F_PM_CSR_PWST_MASK 0x3U
+#define L1F_PM_CSR_PWST_SHIFT 0
+
+#define L1F_PM_DATA 0x0047 /* 8bit */
+
+
+#define L1F_DEV_CAP 0x005C
+#define L1F_DEV_CAP_SPLSL_MASK 0x3UL
+#define L1F_DEV_CAP_SPLSL_SHIFT 26
+#define L1F_DEV_CAP_SPLV_MASK 0xFFUL
+#define L1F_DEV_CAP_SPLV_SHIFT 18
+#define L1F_DEV_CAP_RBER BIT(15)
+#define L1F_DEV_CAP_PIPRS BIT(14)
+#define L1F_DEV_CAP_AIPRS BIT(13)
+#define L1F_DEV_CAP_ABPRS BIT(12)
+#define L1F_DEV_CAP_L1ACLAT_MASK 0x7UL
+#define L1F_DEV_CAP_L1ACLAT_SHIFT 9
+#define L1F_DEV_CAP_L0SACLAT_MASK 0x7UL
+#define L1F_DEV_CAP_L0SACLAT_SHIFT 6
+#define L1F_DEV_CAP_EXTAG BIT(5)
+#define L1F_DEV_CAP_PHANTOM BIT(4)
+#define L1F_DEV_CAP_MPL_MASK 0x7UL
+#define L1F_DEV_CAP_MPL_SHIFT 0
+#define L1F_DEV_CAP_MPL_128 1
+#define L1F_DEV_CAP_MPL_256 2
+#define L1F_DEV_CAP_MPL_512 3
+#define L1F_DEV_CAP_MPL_1024 4
+#define L1F_DEV_CAP_MPL_2048 5
+#define L1F_DEV_CAP_MPL_4096 6
+
+#define L1F_DEV_CTRL 0x0060 /* 16bit */
+#define L1F_DEV_CTRL_MAXRRS_MASK 0x7U
+#define L1F_DEV_CTRL_MAXRRS_SHIFT 12
+#define L1F_DEV_CTRL_MAXRRS_MIN 2
+#define L1F_DEV_CTRL_NOSNP_EN BIT(11)
+#define L1F_DEV_CTRL_AUXPWR_EN BIT(10)
+#define L1F_DEV_CTRL_PHANTOM_EN BIT(9)
+#define L1F_DEV_CTRL_EXTAG_EN BIT(8)
+#define L1F_DEV_CTRL_MPL_MASK 0x7U
+#define L1F_DEV_CTRL_MPL_SHIFT 5
+#define L1F_DEV_CTRL_RELORD_EN BIT(4)
+#define L1F_DEV_CTRL_URR_EN BIT(3)
+#define L1F_DEV_CTRL_FERR_EN BIT(2)
+#define L1F_DEV_CTRL_NFERR_EN BIT(1)
+#define L1F_DEV_CTRL_CERR_EN BIT(0)
+
+
+#define L1F_DEV_STAT 0x0062 /* 16bit */
+#define L1F_DEV_STAT_XS_PEND BIT(5)
+#define L1F_DEV_STAT_AUXPWR BIT(4)
+#define L1F_DEV_STAT_UR BIT(3)
+#define L1F_DEV_STAT_FERR BIT(2)
+#define L1F_DEV_STAT_NFERR BIT(1)
+#define L1F_DEV_STAT_CERR BIT(0)
+
+#define L1F_LNK_CAP 0x0064
+#define L1F_LNK_CAP_PRTNUM_MASK 0xFFUL
+#define L1F_LNK_CAP_PRTNUM_SHIFT 24
+#define L1F_LNK_CAP_CLK_PM BIT(18)
+#define L1F_LNK_CAP_L1EXTLAT_MASK 0x7UL
+#define L1F_LNK_CAP_L1EXTLAT_SHIFT 15
+#define L1F_LNK_CAP_L0SEXTLAT_MASK 0x7UL
+#define L1F_LNK_CAP_L0SEXTLAT_SHIFT 12
+#define L1F_LNK_CAP_ASPM_SUP_MASK 0x3UL
+#define L1F_LNK_CAP_ASPM_SUP_SHIFT 10
+#define L1F_LNK_CAP_ASPM_SUP_L0S 1
+#define L1F_LNK_CAP_ASPM_SUP_L0SL1 3
+#define L1F_LNK_CAP_MAX_LWH_MASK 0x3FUL
+#define L1F_LNK_CAP_MAX_LWH_SHIFT 4
+#define L1F_LNK_CAP_MAX_LSPD_MASK 0xFUL
+#define L1F_LNK_CAP_MAX_LSPD_SHIFT 0
+
+#define L1F_LNK_CTRL 0x0068 /* 16bit */
+#define L1F_LNK_CTRL_CLK_PM_EN BIT(8)
+#define L1F_LNK_CTRL_EXTSYNC BIT(7)
+#define L1F_LNK_CTRL_CMNCLK_CFG BIT(6)
+#define L1F_LNK_CTRL_RCB_128B BIT(3) /* 0:64b,1:128b */
+#define L1F_LNK_CTRL_ASPM_MASK 0x3U
+#define L1F_LNK_CTRL_ASPM_SHIFT 0
+#define L1F_LNK_CTRL_ASPM_DIS 0
+#define L1F_LNK_CTRL_ASPM_ENL0S 1
+#define L1F_LNK_CTRL_ASPM_ENL1 2
+#define L1F_LNK_CTRL_ASPM_ENL0SL1 3
+
+#define L1F_LNK_STAT 0x006A /* 16bit */
+#define L1F_LNK_STAT_SCLKCFG BIT(12)
+#define L1F_LNK_STAT_LNKTRAIN BIT(11)
+#define L1F_LNK_STAT_TRNERR BIT(10)
+#define L1F_LNK_STAT_LNKSPD_MASK 0xFU
+#define L1F_LNK_STAT_LNKSPD_SHIFT 0
+#define L1F_LNK_STAT_NEGLW_MASK 0x3FU
+#define L1F_LNK_STAT_NEGLW_SHIFT 4
+
+#define L1F_MSIX_MASK 0x0090
+#define L1F_MSIX_PENDING 0x0094
+
+#define L1F_UE_SVRT 0x010C
+#define L1F_UE_SVRT_UR BIT(20)
+#define L1F_UE_SVRT_ECRCERR BIT(19)
+#define L1F_UE_SVRT_MTLP BIT(18)
+#define L1F_UE_SVRT_RCVOVFL BIT(17)
+#define L1F_UE_SVRT_UNEXPCPL BIT(16)
+#define L1F_UE_SVRT_CPLABRT BIT(15)
+#define L1F_UE_SVRT_CPLTO BIT(14)
+#define L1F_UE_SVRT_FCPROTERR BIT(13)
+#define L1F_UE_SVRT_PTLP BIT(12)
+#define L1F_UE_SVRT_DLPROTERR BIT(4)
+#define L1F_UE_SVRT_TRNERR BIT(0)
+
+#define L1F_EFLD 0x0204 /* eeprom/flash load */
+#define L1F_EFLD_F_ENDADDR_MASK 0x3FFUL
+#define L1F_EFLD_F_ENDADDR_SHIFT 16
+#define L1F_EFLD_F_EXIST BIT(10)
+#define L1F_EFLD_E_EXIST BIT(9)
+#define L1F_EFLD_EXIST BIT(8)
+#define L1F_EFLD_STAT BIT(5) /* 0:finish,1:in progress */
+#define L1F_EFLD_IDLE BIT(4)
+#define L1F_EFLD_START BIT(0)
+
+#define L1F_SLD 0x0218 /* efuse load */
+#define L1F_SLD_FREQ_MASK 0x3UL
+#define L1F_SLD_FREQ_SHIFT 24
+#define L1F_SLD_FREQ_100K 0
+#define L1F_SLD_FREQ_200K 1
+#define L1F_SLD_FREQ_300K 2
+#define L1F_SLD_FREQ_400K 3
+#define L1F_SLD_EXIST BIT(23)
+#define L1F_SLD_SLVADDR_MASK 0x7FUL
+#define L1F_SLD_SLVADDR_SHIFT 16
+#define L1F_SLD_IDLE BIT(13)
+#define L1F_SLD_STAT BIT(12) /* 0:finish,1:in progress */
+#define L1F_SLD_START BIT(11)
+#define L1F_SLD_STARTADDR_MASK 0xFFUL
+#define L1F_SLD_STARTADDR_SHIFT 0
+#define L1F_SLD_MAX_TO 100
+
+#define L1F_PCIE_MSIC 0x021C
+#define L1F_PCIE_MSIC_MSIX_DIS BIT(22)
+#define L1F_PCIE_MSIC_MSI_DIS BIT(21)
+
+#define L1F_PPHY_MISC1 0x1000
+#define L1F_PPHY_MISC1_RCVDET BIT(2)
+#define L1F_PPHY_MISC1_NFTS_MASK 0xFFUL
+#define L1F_PPHY_MISC1_NFTS_SHIFT 16
+#define L1F_PPHY_MISC1_NFTS_HIPERF 0xA0 /* ???? */
+
+#define L1F_PPHY_MISC2 0x1004
+#define L1F_PPHY_MISC2_L0S_TH_MASK 0x3UL
+#define L1F_PPHY_MISC2_L0S_TH_SHIFT 18
+#define L1F_PPHY_MISC2_CDR_BW_MASK 0x3UL
+#define L1F_PPHY_MISC2_CDR_BW_SHIFT 16
+
+#define L1F_PDLL_TRNS1 0x1104
+#define L1F_PDLL_TRNS1_D3PLLOFF_EN BIT(11)
+#define L1F_PDLL_TRNS1_REGCLK_SEL_NORM BIT(10)
+#define L1F_PDLL_TRNS1_REPLY_TO_MASK 0x3FFUL
+#define L1F_PDLL_TRNS1_REPLY_TO_SHIFT 0
+
+
+#define L1F_TLEXTN_STATS 0x1208
+#define L1F_TLEXTN_STATS_DEVNO_MASK 0x1FUL
+#define L1F_TLEXTN_STATS_DEVNO_SHIFT 16
+#define L1F_TLEXTN_STATS_BUSNO_MASK 0xFFUL
+#define L1F_TLEXTN_STATS_BUSNO_SHIFT 8
+
+#define L1F_EFUSE_CTRL 0x12C0
+#define L1F_EFUSE_CTRL_FLAG BIT(31) /* 0:read,1:write */
+#define L1F_EUFSE_CTRL_ACK BIT(30)
+#define L1F_EFUSE_CTRL_ADDR_MASK 0x3FFUL
+#define L1F_EFUSE_CTRL_ADDR_SHIFT 16
+
+#define L1F_EFUSE_DATA 0x12C4
+
+#define L1F_SPI_OP1 0x12C8
+#define L1F_SPI_OP1_RDID_MASK 0xFFUL
+#define L1F_SPI_OP1_RDID_SHIFT 24
+#define L1F_SPI_OP1_CE_MASK 0xFFUL
+#define L1F_SPI_OP1_CE_SHIFT 16
+#define L1F_SPI_OP1_SE_MASK 0xFFUL
+#define L1F_SPI_OP1_SE_SHIFT 8
+#define L1F_SPI_OP1_PRGRM_MASK 0xFFUL
+#define L1F_SPI_OP1_PRGRM_SHIFT 0
+
+#define L1F_SPI_OP2 0x12CC
+#define L1F_SPI_OP2_READ_MASK 0xFFUL
+#define L1F_SPI_OP2_READ_SHIFT 24
+#define L1F_SPI_OP2_WRSR_MASK 0xFFUL
+#define L1F_SPI_OP2_WRSR_SHIFT 16
+#define L1F_SPI_OP2_RDSR_MASK 0xFFUL
+#define L1F_SPI_OP2_RDSR_SHIFT 8
+#define L1F_SPI_OP2_WREN_MASK 0xFFUL
+#define L1F_SPI_OP2_WREN_SHIFT 0
+
+#define L1F_SPI_OP3 0x12E4
+#define L1F_SPI_OP3_WRDI_MASK 0xFFUL
+#define L1F_SPI_OP3_WRDI_SHIFT 8
+#define L1F_SPI_OP3_EWSR_MASK 0xFFUL
+#define L1F_SPI_OP3_EWSR_SHIFT 0
+
+#define L1F_EF_CTRL 0x12D0
+#define L1F_EF_CTRL_FSTS_MASK 0xFFUL
+#define L1F_EF_CTRL_FSTS_SHIFT 20
+#define L1F_EF_CTRL_CLASS_MASK 0x7UL
+#define L1F_EF_CTRL_CLASS_SHIFT 16
+#define L1F_EF_CTRL_CLASS_F_UNKNOWN 0
+#define L1F_EF_CTRL_CLASS_F_STD 1
+#define L1F_EF_CTRL_CLASS_F_SST 2
+#define L1F_EF_CTRL_CLASS_E_UNKNOWN 0
+#define L1F_EF_CTRL_CLASS_E_1K 1
+#define L1F_EF_CTRL_CLASS_E_4K 2
+#define L1F_EF_CTRL_FRET BIT(15) /* 0:OK,1:fail */
+#define L1F_EF_CTRL_TYP_MASK 0x3UL
+#define L1F_EF_CTRL_TYP_SHIFT 12
+#define L1F_EF_CTRL_TYP_NONE 0
+#define L1F_EF_CTRL_TYP_F 1
+#define L1F_EF_CTRL_TYP_E 2
+#define L1F_EF_CTRL_TYP_UNKNOWN 3
+#define L1F_EF_CTRL_ONE_CLK BIT(10)
+#define L1F_EF_CTRL_ECLK_MASK 0x3UL
+#define L1F_EF_CTRL_ECLK_SHIFT 8
+#define L1F_EF_CTRL_ECLK_125K 0
+#define L1F_EF_CTRL_ECLK_250K 1
+#define L1F_EF_CTRL_ECLK_500K 2
+#define L1F_EF_CTRL_ECLK_1M 3
+#define L1F_EF_CTRL_FBUSY BIT(7)
+#define L1F_EF_CTRL_ACTION BIT(6) /* 1:start,0:stop */
+#define L1F_EF_CTRL_AUTO_OP BIT(5)
+#define L1F_EF_CTRL_SST_MODE BIT(4) /* force using sst */
+#define L1F_EF_CTRL_INST_MASK 0xFUL
+#define L1F_EF_CTRL_INST_SHIFT 0
+#define L1F_EF_CTRL_INST_NONE 0
+#define L1F_EF_CTRL_INST_READ 1 /* for flash & eeprom */
+#define L1F_EF_CTRL_INST_RDID 2
+#define L1F_EF_CTRL_INST_RDSR 3
+#define L1F_EF_CTRL_INST_WREN 4
+#define L1F_EF_CTRL_INST_PRGRM 5
+#define L1F_EF_CTRL_INST_SE 6
+#define L1F_EF_CTRL_INST_CE 7
+#define L1F_EF_CTRL_INST_WRSR 10
+#define L1F_EF_CTRL_INST_EWSR 11
+#define L1F_EF_CTRL_INST_WRDI 12
+#define L1F_EF_CTRL_INST_WRITE 2 /* only for eeprom */
+
+#define L1F_EF_ADDR 0x12D4
+#define L1F_EF_DATA 0x12D8
+#define L1F_SPI_ID 0x12DC
+
+#define L1F_SPI_CFG_START 0x12E0
+
+#define L1F_PMCTRL 0x12F8
+#define L1F_PMCTRL_HOTRST_WTEN BIT(31)
+#define L1F_PMCTRL_ASPM_FCEN BIT(30) /* L0s/L1 dis by MAC based on
+ * throughput(setting in 15A0) */
+#define L1F_PMCTRL_SADLY_EN BIT(29)
+#define L1F_PMCTRL_L0S_BUFSRX_EN BIT(28)
+#define L1F_PMCTRL_LCKDET_TIMER_MASK 0xFUL
+#define L1F_PMCTRL_LCKDET_TIMER_SHIFT 24
+#define L1F_PMCTRL_LCKDET_TIMER_DEF 0xC
+#define L1F_PMCTRL_L1REQ_TO_MASK 0xFUL
+#define L1F_PMCTRL_L1REQ_TO_SHIFT 20 /* pm_request_l1 time > @
+ * ->L0s not L1 */
+#define L1F_PMCTRL_L1REG_TO_DEF 0xF
+#define L1F_PMCTRL_TXL1_AFTER_L0S BIT(19)
+#define L1F_PMCTRL_L1_TIMER_MASK 0x7UL
+#define L1F_PMCTRL_L1_TIMER_SHIFT 16
+#define L1F_PMCTRL_L1_TIMER_DIS 0
+#define L1F_PMCTRL_L1_TIMER_2US 1
+#define L1F_PMCTRL_L1_TIMER_4US 2
+#define L1F_PMCTRL_L1_TIMER_8US 3
+#define L1F_PMCTRL_L1_TIMER_16US 4
+#define L1F_PMCTRL_L1_TIMER_24US 5
+#define L1F_PMCTRL_L1_TIMER_32US 6
+#define L1F_PMCTRL_L1_TIMER_63US 7
+#define L1F_PMCTRL_RCVR_WT_1US BIT(15) /* 1:1us, 0:2ms */
+#define L1F_PMCTRL_PWM_VER_11 BIT(14) /* 0:1.0a,1:1.1 */
+#define L1F_PMCTRL_L1_CLKSW_EN BIT(13) /* en pcie clk sw in L1 */
+#define L1F_PMCTRL_L0S_EN BIT(12)
+#define L1F_PMCTRL_RXL1_AFTER_L0S BIT(11)
+#define L1F_PMCTRL_L0S_TIMER_MASK 0x7UL
+#define L1F_PMCTRL_L0S_TIMER_SHIFT 8
+#define L1F_PMCTRL_L1_BUFSRX_EN BIT(7)
+#define L1F_PMCTRL_L1_SRDSRX_PWD BIT(6) /* power down serdes rx */
+#define L1F_PMCTRL_L1_SRDSPLL_EN BIT(5)
+#define L1F_PMCTRL_L1_SRDS_EN BIT(4)
+#define L1F_PMCTRL_L1_EN BIT(3)
+#define L1F_PMCTRL_CLKREQ_EN BIT(2)
+#define L1F_PMCTRL_RBER_EN BIT(1)
+#define L1F_PMCTRL_SPRSDWER_EN BIT(0)
+
+#define L1F_LTSSM_CTRL 0x12FC
+#define L1F_LTSSM_WRO_EN BIT(12)
+
+
+/******************************************************************************/
+
+#define L1F_MASTER 0x1400
+#define L1F_MASTER_OTP_FLG BIT(31)
+#define L1F_MASTER_DEV_NUM_MASK 0x7FUL
+#define L1F_MASTER_DEV_NUM_SHIFT 24
+#define L1F_MASTER_REV_NUM_MASK 0xFFUL
+#define L1F_MASTER_REV_NUM_SHIFT 16
+#define L1F_MASTER_DEASSRT BIT(15) /* ISSUE DE-ASSERT MSG */
+#define L1F_MASTER_RDCLR_INT BIT(14)
+#define L1F_MASTER_DMA_RST BIT(13)
+#define L1F_MASTER_PCLKSEL_SRDS BIT(12) /* 1:always sel pclk from
+ * serdes, not sw to 25M */
+#define L1F_MASTER_IRQMOD2_EN BIT(11) /* IRQ MODULATION FOR RX */
+#define L1F_MASTER_IRQMOD1_EN BIT(10) /* MODULATION FOR TX/RX */
+#define L1F_MASTER_MANU_INT BIT(9) /* SOFT MANUAL INT */
+#define L1F_MASTER_MANUTIMER_EN BIT(8)
+#define L1F_MASTER_SYSALVTIMER_EN BIT(7) /* SYS ALIVE TIMER EN */
+#define L1F_MASTER_OOB_DIS BIT(6) /* OUT OF BOX DIS */
+#define L1F_MASTER_WAKEN_25M BIT(5) /* WAKE WO. PCIE CLK */
+#define L1F_MASTER_BERT_START BIT(4)
+#define L1F_MASTER_PCIE_TSTMOD_MASK 0x3UL
+#define L1F_MASTER_PCIE_TSTMOD_SHIFT 2
+#define L1F_MASTER_PCIE_RST BIT(1)
+#define L1F_MASTER_DMA_MAC_RST BIT(0) /* RST MAC & DMA */
+#define L1F_DMA_MAC_RST_TO 50
+
+#define L1F_MANU_TIMER 0x1404
+
+#define L1F_IRQ_MODU_TIMER 0x1408
+#define L1F_IRQ_MODU_TIMER2_MASK 0xFFFFUL
+#define L1F_IRQ_MODU_TIMER2_SHIFT 16 /* ONLY FOR RX */
+#define L1F_IRQ_MODU_TIMER1_MASK 0xFFFFUL
+#define L1F_IRQ_MODU_TIMER1_SHIFT 0
+
+#define L1F_PHY_CTRL 0x140C
+#define L1F_PHY_CTRL_ADDR_MASK 0x1FUL
+#define L1F_PHY_CTRL_ADDR_SHIFT 19
+#define L1F_PHY_CTRL_BP_VLTGSW BIT(18)
+#define L1F_PHY_CTRL_100AB_EN BIT(17)
+#define L1F_PHY_CTRL_10AB_EN BIT(16)
+#define L1F_PHY_CTRL_PLL_BYPASS BIT(15)
+#define L1F_PHY_CTRL_POWER_DOWN BIT(14) /* affect MAC & PHY,
+ * go to low power sts */
+#define L1F_PHY_CTRL_PLL_ON BIT(13) /* 1:PLL ALWAYS ON
+ * 0:CAN SWITCH IN LPW */
+#define L1F_PHY_CTRL_RST_ANALOG BIT(12)
+#define L1F_PHY_CTRL_HIB_PULSE BIT(11)
+#define L1F_PHY_CTRL_HIB_EN BIT(10)
+#define L1F_PHY_CTRL_GIGA_DIS BIT(9)
+#define L1F_PHY_CTRL_IDDQ_DIS BIT(8) /* POWER ON RST */
+#define L1F_PHY_CTRL_IDDQ BIT(7) /* WHILE REBOOT, BIT8(1)
+ * AFFECTS BIT7 */
+#define L1F_PHY_CTRL_LPW_EXIT BIT(6)
+#define L1F_PHY_CTRL_GATE_25M BIT(5)
+#define L1F_PHY_CTRL_RVRS_ANEG BIT(4)
+#define L1F_PHY_CTRL_ANEG_NOW BIT(3)
+#define L1F_PHY_CTRL_LED_MODE BIT(2)
+#define L1F_PHY_CTRL_RTL_MODE BIT(1)
+#define L1F_PHY_CTRL_DSPRST_OUT BIT(0) /* OUT OF DSP RST STATE */
+#define L1F_PHY_CTRL_DSPRST_TO 80
+#define L1F_PHY_CTRL_CLS (\
+ L1F_PHY_CTRL_LED_MODE |\
+ L1F_PHY_CTRL_100AB_EN |\
+ L1F_PHY_CTRL_PLL_ON)
+
+#define L1F_MAC_STS 0x1410
+#define L1F_MAC_STS_SFORCE_MASK 0xFUL
+#define L1F_MAC_STS_SFORCE_SHIFT 14
+#define L1F_MAC_STS_CALIB_DONE BIT(13)
+#define L1F_MAC_STS_CALIB_RES_MASK 0x1FUL
+#define L1F_MAC_STS_CALIB_RES_SHIFT 8
+#define L1F_MAC_STS_CALIBERR_MASK 0xFUL
+#define L1F_MAC_STS_CALIBERR_SHIFT 4
+#define L1F_MAC_STS_TXQ_BUSY BIT(3)
+#define L1F_MAC_STS_RXQ_BUSY BIT(2)
+#define L1F_MAC_STS_TXMAC_BUSY BIT(1)
+#define L1F_MAC_STS_RXMAC_BUSY BIT(0)
+#define L1F_MAC_STS_IDLE (\
+ L1F_MAC_STS_TXQ_BUSY |\
+ L1F_MAC_STS_RXQ_BUSY |\
+ L1F_MAC_STS_TXMAC_BUSY |\
+ L1F_MAC_STS_RXMAC_BUSY)
+
+#define L1F_MDIO 0x1414
+#define L1F_MDIO_MODE_EXT BIT(30) /* 0:normal,1:ext */
+#define L1F_MDIO_POST_READ BIT(29)
+#define L1F_MDIO_AUTO_POLLING BIT(28)
+#define L1F_MDIO_BUSY BIT(27)
+#define L1F_MDIO_CLK_SEL_MASK 0x7UL
+#define L1F_MDIO_CLK_SEL_SHIFT 24
+#define L1F_MDIO_CLK_SEL_25MD4 0 /* 25M DIV 4 */
+#define L1F_MDIO_CLK_SEL_25MD6 2
+#define L1F_MDIO_CLK_SEL_25MD8 3
+#define L1F_MDIO_CLK_SEL_25MD10 4
+#define L1F_MDIO_CLK_SEL_25MD32 5
+#define L1F_MDIO_CLK_SEL_25MD64 6
+#define L1F_MDIO_CLK_SEL_25MD128 7
+#define L1F_MDIO_START BIT(23)
+#define L1F_MDIO_SPRES_PRMBL BIT(22)
+#define L1F_MDIO_OP_READ BIT(21) /* 1:read,0:write */
+#define L1F_MDIO_REG_MASK 0x1FUL
+#define L1F_MDIO_REG_SHIFT 16
+#define L1F_MDIO_DATA_MASK 0xFFFFUL
+#define L1F_MDIO_DATA_SHIFT 0
+#define L1F_MDIO_MAX_AC_TO 120
+
+#define L1F_MDIO_EXTN 0x1448
+#define L1F_MDIO_EXTN_PORTAD_MASK 0x1FUL
+#define L1F_MDIO_EXTN_PORTAD_SHIFT 21
+#define L1F_MDIO_EXTN_DEVAD_MASK 0x1FUL
+#define L1F_MDIO_EXTN_DEVAD_SHIFT 16
+#define L1F_MDIO_EXTN_REG_MASK 0xFFFFUL
+#define L1F_MDIO_EXTN_REG_SHIFT 0
+
+#define L1F_PHY_STS 0x1418
+#define L1F_PHY_STS_LPW BIT(31)
+#define L1F_PHY_STS_LPI BIT(30)
+#define L1F_PHY_STS_PWON_STRIP_MASK 0xFFFUL
+#define L1F_PHY_STS_PWON_STRIP_SHIFT 16
+
+#define L1F_PHY_STS_DUPLEX BIT(3)
+#define L1F_PHY_STS_LINKUP BIT(2)
+#define L1F_PHY_STS_SPEED_MASK 0x3UL
+#define L1F_PHY_STS_SPEED_SHIFT 0
+#define L1F_PHY_STS_SPEED_1000M 2
+#define L1F_PHY_STS_SPEED_100M 1
+#define L1F_PHY_STS_SPEED_10M 0
+
+#define L1F_BIST0 0x141C
+#define L1F_BIST0_COL_MASK 0x3FUL
+#define L1F_BIST0_COL_SHIFT 24
+#define L1F_BIST0_ROW_MASK 0xFFFUL
+#define L1F_BIST0_ROW_SHIFT 12
+#define L1F_BIST0_STEP_MASK 0xFUL
+#define L1F_BIST0_STEP_SHIFT 8
+#define L1F_BIST0_PATTERN_MASK 0x7UL
+#define L1F_BIST0_PATTERN_SHIFT 4
+#define L1F_BIST0_CRIT BIT(3)
+#define L1F_BIST0_FIXED BIT(2)
+#define L1F_BIST0_FAIL BIT(1)
+#define L1F_BIST0_START BIT(0)
+
+#define L1F_BIST1 0x1420
+#define L1F_BIST1_COL_MASK 0x3FUL
+#define L1F_BIST1_COL_SHIFT 24
+#define L1F_BIST1_ROW_MASK 0xFFFUL
+#define L1F_BIST1_ROW_SHIFT 12
+#define L1F_BIST1_STEP_MASK 0xFUL
+#define L1F_BIST1_STEP_SHIFT 8
+#define L1F_BIST1_PATTERN_MASK 0x7UL
+#define L1F_BIST1_PATTERN_SHIFT 4
+#define L1F_BIST1_CRIT BIT(3)
+#define L1F_BIST1_FIXED BIT(2)
+#define L1F_BIST1_FAIL BIT(1)
+#define L1F_BIST1_START BIT(0)
+
+#define L1F_SERDES 0x1424
+#define L1F_SERDES_PHYCLK_SLWDWN BIT(18)
+#define L1F_SERDES_MACCLK_SLWDWN BIT(17)
+#define L1F_SERDES_SELFB_PLL_MASK 0x3UL
+#define L1F_SERDES_SELFB_PLL_SHIFT 14
+#define L1F_SERDES_PHYCLK_SEL_GTX BIT(13) /* 1:gtx_clk, 0:25M */
+#define L1F_SERDES_PCIECLK_SEL_SRDS BIT(12) /* 1:serdes,0:25M */
+#define L1F_SERDES_BUFS_RX_EN BIT(11)
+#define L1F_SERDES_PD_RX BIT(10)
+#define L1F_SERDES_PLL_EN BIT(9)
+#define L1F_SERDES_EN BIT(8)
+#define L1F_SERDES_SELFB_PLL_SEL_CSR BIT(6) /* 0:state-machine,1:csr */
+#define L1F_SERDES_SELFB_PLL_CSR_MASK 0x3UL
+#define L1F_SERDES_SELFB_PLL_CSR_SHIFT 4
+#define L1F_SERDES_SELFB_PLL_CSR_4 3 /* 4-12% OV-CLK */
+#define L1F_SERDES_SELFB_PLL_CSR_0 2 /* 0-4% OV-CLK */
+#define L1F_SERDES_SELFB_PLL_CSR_12 1 /* 12-18% OV-CLK */
+#define L1F_SERDES_SELFB_PLL_CSR_18 0 /* 18-25% OV-CLK */
+#define L1F_SERDES_VCO_SLOW BIT(3)
+#define L1F_SERDES_VCO_FAST BIT(2)
+#define L1F_SERDES_LOCKDCT_EN BIT(1)
+#define L1F_SERDES_LOCKDCTED BIT(0)
+
+#define L1F_LED_CTRL 0x1428
+#define L1F_LED_CTRL_PATMAP2_MASK 0x3UL
+#define L1F_LED_CTRL_PATMAP2_SHIFT 8
+#define L1F_LED_CTRL_PATMAP1_MASK 0x3UL
+#define L1F_LED_CTRL_PATMAP1_SHIFT 6
+#define L1F_LED_CTRL_PATMAP0_MASK 0x3UL
+#define L1F_LED_CTRL_PATMAP0_SHIFT 4
+#define L1F_LED_CTRL_D3_MODE_MASK 0x3UL
+#define L1F_LED_CTRL_D3_MODE_SHIFT 2
+#define L1F_LED_CTRL_D3_MODE_NORMAL 0
+#define L1F_LED_CTRL_D3_MODE_WOL_DIS 1
+#define L1F_LED_CTRL_D3_MODE_WOL_ANY 2
+#define L1F_LED_CTRL_D3_MODE_WOL_EN 3
+#define L1F_LED_CTRL_DUTY_CYCL_MASK 0x3UL
+#define L1F_LED_CTRL_DUTY_CYCL_SHIFT 0
+#define L1F_LED_CTRL_DUTY_CYCL_50 0 /* 50% */
+#define L1F_LED_CTRL_DUTY_CYCL_125 1 /* 12.5% */
+#define L1F_LED_CTRL_DUTY_CYCL_25 2 /* 25% */
+#define L1F_LED_CTRL_DUTY_CYCL_75 3 /* 75% */
+
+#define L1F_LED_PATN 0x142C
+#define L1F_LED_PATN1_MASK 0xFFFFUL
+#define L1F_LED_PATN1_SHIFT 16
+#define L1F_LED_PATN0_MASK 0xFFFFUL
+#define L1F_LED_PATN0_SHIFT 0
+
+#define L1F_LED_PATN2 0x1430
+#define L1F_LED_PATN2_MASK 0xFFFFUL
+#define L1F_LED_PATN2_SHIFT 0
+
+#define L1F_SYSALV 0x1434
+#define L1F_SYSALV_FLAG BIT(0)
+
+#define L1F_PCIERR_INST 0x1438
+#define L1F_PCIERR_INST_TX_RATE_MASK 0xFUL
+#define L1F_PCIERR_INST_TX_RATE_SHIFT 4
+#define L1F_PCIERR_INST_RX_RATE_MASK 0xFUL
+#define L1F_PCIERR_INST_RX_RATE_SHIFT 0
+
+#define L1F_LPI_DECISN_TIMER 0x143C
+
+#define L1F_LPI_CTRL 0x1440
+#define L1F_LPI_CTRL_CHK_DA BIT(31)
+#define L1F_LPI_CTRL_ENH_TO_MASK 0x1FFFUL
+#define L1F_LPI_CTRL_ENH_TO_SHIFT 12
+#define L1F_LPI_CTRL_ENH_TH_MASK 0x1FUL
+#define L1F_LPI_CTRL_ENH_TH_SHIFT 6
+#define L1F_LPI_CTRL_ENH_EN BIT(5)
+#define L1F_LPI_CTRL_CHK_RX BIT(4)
+#define L1F_LPI_CTRL_CHK_STATE BIT(3)
+#define L1F_LPI_CTRL_GMII BIT(2)
+#define L1F_LPI_CTRL_TO_PHY BIT(1)
+#define L1F_LPI_CTRL_EN BIT(0)
+
+#define L1F_LPI_WAIT 0x1444
+#define L1F_LPI_WAIT_TIMER_MASK 0xFFFFUL
+#define L1F_LPI_WAIT_TIMER_SHIFT 0
+
+#define L1F_HRTBT_VLAN 0x1450 /* HEARTBEAT, FOR CIFS */
+#define L1F_HRTBT_VLANID_MASK 0xFFFFUL /* OR CLOUD */
+#define L1F_HRRBT_VLANID_SHIFT 0
+
+#define L1F_HRTBT_CTRL 0x1454
+#define L1F_HRTBT_CTRL_EN BIT(31)
+#define L1F_HRTBT_CTRL_PERIOD_MASK 0x3FUL
+#define L1F_HRTBT_CTRL_PERIOD_SHIFT 25
+#define L1F_HRTBT_CTRL_HASVLAN BIT(24)
+#define L1F_HRTBT_CTRL_HDRADDR_MASK 0xFFFUL /* A0 */
+#define L1F_HRTBT_CTRL_HDRADDR_SHIFT 12
+#define L1F_HRTBT_CTRL_HDRADDRB0_MASK 0x7FFUL /* B0 */
+#define L1F_HRTBT_CTRL_HDRADDRB0_SHIFT 13
+#define L1F_HRTBT_CTRL_PKT_FRAG BIT(12) /* B0 */
+#define L1F_HRTBT_CTRL_PKTLEN_MASK 0xFFFUL
+#define L1F_HRTBT_CTRL_PKTLEN_SHIFT 0
+
+#define L1F_HRTBT_EXT_CTRL 0x1AD0 /* B0 */
+#define L1F_HRTBT_EXT_CTRL_NS_EN BIT(12)
+#define L1F_HRTBT_EXT_CTRL_FRAG_LEN_MASK 0xFFUL
+#define L1F_HRTBT_EXT_CTRL_FRAG_LEN_SHIFT 4
+#define L1F_HRTBT_EXT_CTRL_IS_8023 BIT(3)
+#define L1F_HRTBT_EXT_CTRL_IS_IPV6 BIT(2)
+#define L1F_HRTBT_EXT_CTRL_WAKEUP_EN BIT(1)
+#define L1F_HRTBT_EXT_CTRL_ARP_EN BIT(0)
+
+#define L1F_HRTBT_REM_IPV4_ADDR 0x1AD4
+#define L1F_HRTBT_HOST_IPV4_ADDR 0x1478 /* use L1F_TRD_BUBBLE_DA_IP4 */
+#define L1F_HRTBT_REM_IPV6_ADDR3 0x1AD8
+#define L1F_HRTBT_REM_IPV6_ADDR2 0x1ADC
+#define L1F_HRTBT_REM_IPV6_ADDR1 0x1AE0
+#define L1F_HRTBT_REM_IPV6_ADDR0 0x1AE4
+/* SWOI_HOST_IPV6_ADDR reuses reg 1a60-1a6c, 1a70-1a7c, 1aa0-1aac, 1ab0-1abc. */
+#define L1F_HRTBT_WAKEUP_PORT 0x1AE8
+#define L1F_HRTBT_WAKEUP_PORT_SRC_MASK 0xFFFFUL
+#define L1F_HRTBT_WAKEUP_PORT_SRC_SHIFT 16
+#define L1F_HRTBT_WAKEUP_PORT_DEST_MASK 0xFFFFUL
+#define L1F_HRTBT_WAKEUP_PORT_DEST_SHIFT 0
+
+#define L1F_HRTBT_WAKEUP_DATA7 0x1AEC
+#define L1F_HRTBT_WAKEUP_DATA6 0x1AF0
+#define L1F_HRTBT_WAKEUP_DATA5 0x1AF4
+#define L1F_HRTBT_WAKEUP_DATA4 0x1AF8
+#define L1F_HRTBT_WAKEUP_DATA3 0x1AFC
+#define L1F_HRTBT_WAKEUP_DATA2 0x1B80
+#define L1F_HRTBT_WAKEUP_DATA1 0x1B84
+#define L1F_HRTBT_WAKEUP_DATA0 0x1B88
+
+#define L1F_RXPARSE 0x1458
+#define L1F_RXPARSE_FLT6_L4_MASK 0x3UL
+#define L1F_RXPARSE_FLT6_L4_SHIFT 30
+#define L1F_RXPARSE_FLT6_L3_MASK 0x3UL
+#define L1F_RXPARSE_FLT6_L3_SHIFT 28
+#define L1F_RXPARSE_FLT5_L4_MASK 0x3UL
+#define L1F_RXPARSE_FLT5_L4_SHIFT 26
+#define L1F_RXPARSE_FLT5_L3_MASK 0x3UL
+#define L1F_RXPARSE_FLT5_L3_SHIFT 24
+#define L1F_RXPARSE_FLT4_L4_MASK 0x3UL
+#define L1F_RXPARSE_FLT4_L4_SHIFT 22
+#define L1F_RXPARSE_FLT4_L3_MASK 0x3UL
+#define L1F_RXPARSE_FLT4_L3_SHIFT 20
+#define L1F_RXPARSE_FLT3_L4_MASK 0x3UL
+#define L1F_RXPARSE_FLT3_L4_SHIFT 18
+#define L1F_RXPARSE_FLT3_L3_MASK 0x3UL
+#define L1F_RXPARSE_FLT3_L3_SHIFT 16
+#define L1F_RXPARSE_FLT2_L4_MASK 0x3UL
+#define L1F_RXPARSE_FLT2_L4_SHIFT 14
+#define L1F_RXPARSE_FLT2_L3_MASK 0x3UL
+#define L1F_RXPARSE_FLT2_L3_SHIFT 12
+#define L1F_RXPARSE_FLT1_L4_MASK 0x3UL
+#define L1F_RXPARSE_FLT1_L4_SHIFT 10
+#define L1F_RXPARSE_FLT1_L3_MASK 0x3UL
+#define L1F_RXPARSE_FLT1_L3_SHIFT 8
+#define L1F_RXPARSE_FLT6_EN BIT(5)
+#define L1F_RXPARSE_FLT5_EN BIT(4)
+#define L1F_RXPARSE_FLT4_EN BIT(3)
+#define L1F_RXPARSE_FLT3_EN BIT(2)
+#define L1F_RXPARSE_FLT2_EN BIT(1)
+#define L1F_RXPARSE_FLT1_EN BIT(0)
+#define L1F_RXPARSE_FLT_L4_UDP 0
+#define L1F_RXPARSE_FLT_L4_TCP 1
+#define L1F_RXPARSE_FLT_L4_BOTH 2
+#define L1F_RXPARSE_FLT_L4_NONE 3
+#define L1F_RXPARSE_FLT_L3_IPV6 0
+#define L1F_RXPARSE_FLT_L3_IPV4 1
+#define L1F_RXPARSE_FLT_L3_BOTH 2
+
+/* Teredo support */
+#define L1F_TRD_CTRL 0x145C
+#define L1F_TRD_CTRL_EN BIT(31)
+#define L1F_TRD_CTRL_BUBBLE_WAKE_EN BIT(30)
+#define L1F_TRD_CTRL_PREFIX_CMP_HW BIT(28)
+#define L1F_TRD_CTRL_RSHDR_ADDR_MASK 0xFFFUL
+#define L1F_TRD_CTRL_RSHDR_ADDR_SHIFT 16
+#define L1F_TRD_CTRL_SINTV_MAX_MASK 0xFFUL
+#define L1F_TRD_CTRL_SINTV_MAX_SHIFT 8
+#define L1F_TRD_CTRL_SINTV_MIN_MASK 0xFFUL
+#define L1F_TRD_CTRL_SINTV_MIN_SHIFT 0
+
+#define L1F_TRD_RS 0x1460
+#define L1F_TRD_RS_SZ_MASK 0xFFFUL
+#define L1F_TRD_RS_SZ_SHIFT 20
+#define L1F_TRD_RS_NONCE_OFS_MASK 0xFFFUL
+#define L1F_TRD_RS_NONCE_OFS_SHIFT 8
+#define L1F_TRD_RS_SEQ_OFS_MASK 0xFFUL
+#define L1F_TRD_RS_SEQ_OFS_SHIFT 0
+
+#define L1F_TRD_SRV_IP4 0x1464
+
+#define L1F_TRD_CLNT_EXTNL_IP4 0x1468
+
+#define L1F_TRD_PORT 0x146C
+#define L1F_TRD_PORT_CLNT_EXTNL_MASK 0xFFFFUL
+#define L1F_TRD_PORT_CLNT_EXTNL_SHIFT 16
+#define L1F_TRD_PORT_SRV_MASK 0xFFFFUL
+#define L1F_TRD_PORT_SRV_SHIFT 0
+
+#define L1F_TRD_PREFIX 0x1470
+
+#define L1F_TRD_BUBBLE_DA_IP4 0x1478
+
+#define L1F_TRD_BUBBLE_DA_PORT 0x147C
+
+
+#define L1F_IDLE_DECISN_TIMER 0x1474 /* B0 */
+#define L1F_IDLE_DECISN_TIMER_DEF 0x400 /* 1ms */
+
+
+#define L1F_MAC_CTRL 0x1480
+#define L1F_MAC_CTRL_FAST_PAUSE BIT(31)
+#define L1F_MAC_CTRL_WOLSPED_SWEN BIT(30)
+#define L1F_MAC_CTRL_MHASH_ALG_HI5B BIT(29) /* 1:legacy, 0:marvl(low5b)*/
+#define L1F_MAC_CTRL_SPAUSE_EN BIT(28)
+#define L1F_MAC_CTRL_DBG_EN BIT(27)
+#define L1F_MAC_CTRL_BRD_EN BIT(26)
+#define L1F_MAC_CTRL_MULTIALL_EN BIT(25)
+#define L1F_MAC_CTRL_RX_XSUM_EN BIT(24)
+#define L1F_MAC_CTRL_THUGE BIT(23)
+#define L1F_MAC_CTRL_MBOF BIT(22)
+#define L1F_MAC_CTRL_SPEED_MASK 0x3UL
+#define L1F_MAC_CTRL_SPEED_SHIFT 20
+#define L1F_MAC_CTRL_SPEED_10_100 1
+#define L1F_MAC_CTRL_SPEED_1000 2
+#define L1F_MAC_CTRL_SIMR BIT(19)
+#define L1F_MAC_CTRL_SSTCT BIT(17)
+#define L1F_MAC_CTRL_TPAUSE BIT(16)
+#define L1F_MAC_CTRL_PROMISC_EN BIT(15)
+#define L1F_MAC_CTRL_VLANSTRIP BIT(14)
+#define L1F_MAC_CTRL_PRMBLEN_MASK 0xFUL
+#define L1F_MAC_CTRL_PRMBLEN_SHIFT 10
+#define L1F_MAC_CTRL_RHUGE_EN BIT(9)
+#define L1F_MAC_CTRL_FLCHK BIT(8)
+#define L1F_MAC_CTRL_PCRCE BIT(7)
+#define L1F_MAC_CTRL_CRCE BIT(6)
+#define L1F_MAC_CTRL_FULLD BIT(5)
+#define L1F_MAC_CTRL_LPBACK_EN BIT(4)
+#define L1F_MAC_CTRL_RXFC_EN BIT(3)
+#define L1F_MAC_CTRL_TXFC_EN BIT(2)
+#define L1F_MAC_CTRL_RX_EN BIT(1)
+#define L1F_MAC_CTRL_TX_EN BIT(0)
+
+#define L1F_GAP 0x1484
+#define L1F_GAP_IPGR2_MASK 0x7FUL
+#define L1F_GAP_IPGR2_SHIFT 24
+#define L1F_GAP_IPGR1_MASK 0x7FUL
+#define L1F_GAP_IPGR1_SHIFT 16
+#define L1F_GAP_MIN_IFG_MASK 0xFFUL
+#define L1F_GAP_MIN_IFG_SHIFT 8
+#define L1F_GAP_IPGT_MASK 0x7FUL /* A0 diff with B0 */
+#define L1F_GAP_IPGT_SHIFT 0
+
+#define L1F_STAD0 0x1488
+#define L1F_STAD1 0x148C
+
+#define L1F_HASH_TBL0 0x1490
+#define L1F_HASH_TBL1 0x1494
+
+#define L1F_HALFD 0x1498
+#define L1F_HALFD_JAM_IPG_MASK 0xFUL
+#define L1F_HALFD_JAM_IPG_SHIFT 24
+#define L1F_HALFD_ABEBT_MASK 0xFUL
+#define L1F_HALFD_ABEBT_SHIFT 20
+#define L1F_HALFD_ABEBE BIT(19)
+#define L1F_HALFD_BPNB BIT(18)
+#define L1F_HALFD_NOBO BIT(17)
+#define L1F_HALFD_EDXSDFR BIT(16)
+#define L1F_HALFD_RETRY_MASK 0xFUL
+#define L1F_HALFD_RETRY_SHIFT 12
+#define L1F_HALFD_LCOL_MASK 0x3FFUL
+#define L1F_HALFD_LCOL_SHIFT 0
+
+#define L1F_MTU 0x149C
+#define L1F_MTU_JUMBO_TH 1514
+#define L1F_MTU_STD_ALGN 1536
+#define L1F_MTU_MIN 64
+
+#define L1F_SRAM0 0x1500
+#define L1F_SRAM_RFD_TAIL_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_RFD_TAIL_ADDR_SHIFT 16
+#define L1F_SRAM_RFD_HEAD_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_RFD_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM1 0x1510
+#define L1F_SRAM_RFD_LEN_MASK 0xFFFUL /* 8BYTES UNIT */
+#define L1F_SRAM_RFD_LEN_SHIFT 0
+
+#define L1F_SRAM2 0x1518
+#define L1F_SRAM_TRD_TAIL_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_TRD_TAIL_ADDR_SHIFT 16
+#define L1F_SRMA_TRD_HEAD_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_TRD_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM3 0x151C
+#define L1F_SRAM_TRD_LEN_MASK 0xFFFUL /* 8BYTES UNIT */
+#define L1F_SRAM_TRD_LEN_SHIFT 0
+
+#define L1F_SRAM4 0x1520
+#define L1F_SRAM_RXF_TAIL_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_RXF_TAIL_ADDR_SHIFT 16
+#define L1F_SRAM_RXF_HEAD_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_RXF_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM5 0x1524
+#define L1F_SRAM_RXF_LEN_MASK 0xFFFUL /* 8BYTES UNIT */
+#define L1F_SRAM_RXF_LEN_SHIFT 0
+#define L1F_SRAM_RXF_LEN_8K (8*1024)
+
+#define L1F_SRAM6 0x1528
+#define L1F_SRAM_TXF_TAIL_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_TXF_TAIL_ADDR_SHIFT 16
+#define L1F_SRAM_TXF_HEAD_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_TXF_HEAD_ADDR_SHIFT 0
+
+#define L1F_SRAM7 0x152C
+#define L1F_SRAM_TXF_LEN_MASK 0xFFFUL /* 8BYTES UNIT */
+#define L1F_SRAM_TXF_LEN_SHIFT 0
+
+#define L1F_SRAM8 0x1530
+#define L1F_SRAM_PATTERN_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_PATTERN_ADDR_SHIFT 16
+#define L1F_SRAM_TSO_ADDR_MASK 0xFFFUL
+#define L1F_SRAM_TSO_ADDR_SHIFT 0
+
+#define L1F_SRAM9 0x1534
+#define L1F_SRAM_LOAD_PTR BIT(0)
+
+#define L1F_RX_BASE_ADDR_HI 0x1540
+
+#define L1F_TX_BASE_ADDR_HI 0x1544
+
+#define L1F_RFD_ADDR_LO 0x1550
+#define L1F_RFD_RING_SZ 0x1560
+#define L1F_RFD_BUF_SZ 0x1564
+#define L1F_RFD_BUF_SZ_MASK 0xFFFFUL
+#define L1F_RFD_BUF_SZ_SHIFT 0
+
+#define L1F_RRD_ADDR_LO 0x1568
+#define L1F_RRD_RING_SZ 0x1578
+#define L1F_RRD_RING_SZ_MASK 0xFFFUL
+#define L1F_RRD_RING_SZ_SHIFT 0
+
+#define L1F_TPD_PRI3_ADDR_LO 0x14E4 /* HIGHEST PRIORITY */
+#define L1F_TPD_PRI2_ADDR_LO 0x14E0
+#define L1F_TPD_PRI1_ADDR_LO 0x157C
+#define L1F_TPD_PRI0_ADDR_LO 0x1580 /* LOWEST PRIORITY */
+
+#define L1F_TPD_PRI3_PIDX 0x1618 /* 16BIT */
+#define L1F_TPD_PRI2_PIDX 0x161A /* 16BIT */
+#define L1F_TPD_PRI1_PIDX 0x15F0 /* 16BIT */
+#define L1F_TPD_PRI0_PIDX 0x15F2 /* 16BIT */
+
+#define L1F_TPD_PRI3_CIDX 0x161C /* 16BIT */
+#define L1F_TPD_PRI2_CIDX 0x161E /* 16BIT */
+#define L1F_TPD_PRI1_CIDX 0x15F4 /* 16BIT */
+#define L1F_TPD_PRI0_CIDX 0x15F6 /* 16BIT */
+
+#define L1F_TPD_RING_SZ 0x1584
+#define L1F_TPD_RING_SZ_MASK 0xFFFFUL
+#define L1F_TPD_RING_SZ_SHIFT 0
+
+#define L1F_CMB_ADDR_LO 0x1588 /* NOT USED */
+
+#define L1F_TXQ0 0x1590
+#define L1F_TXQ0_TXF_BURST_PREF_MASK 0xFFFFUL
+#define L1F_TXQ0_TXF_BURST_PREF_SHIFT 16
+#define L1F_TXQ_TXF_BURST_PREF_DEF 0x200
+#define L1F_TXQ0_PEDING_CLR BIT(8)
+#define L1F_TXQ0_LSO_8023_EN BIT(7)
+#define L1F_TXQ0_MODE_ENHANCE BIT(6)
+#define L1F_TXQ0_EN BIT(5)
+#define L1F_TXQ0_SUPT_IPOPT BIT(4)
+#define L1F_TXQ0_TPD_BURSTPREF_MASK 0xFUL
+#define L1F_TXQ0_TPD_BURSTPREF_SHIFT 0
+#define L1F_TXQ_TPD_BURSTPREF_DEF 5
+
+#define L1F_TXQ1 0x1594
+#define L1F_TXQ1_ERRLGPKT_DROP_EN BIT(11) /* drop error large
+ * (>rfd buf) packet */
+#define L1F_TXQ1_JUMBO_TSOTHR_MASK 0x7FFUL /* 8BYTES UNIT */
+#define L1F_TXQ1_JUMBO_TSOTHR_SHIFT 0
+#define L1F_TXQ1_JUMBO_TSO_TH (7*1024) /* byte */
+
+#define L1F_TXQ2 0x1598 /* ENTER L1 CONTROL */
+#define L1F_TXQ2_BURST_EN BIT(31)
+#define L1F_TXQ2_BURST_HI_WM_MASK 0xFFFUL
+#define L1F_TXQ2_BURST_HI_WM_SHIFT 16
+#define L1F_TXQ2_BURST_LO_WM_MASK 0xFFFUL
+#define L1F_TXQ2_BURST_LO_WM_SHIFT 0
+
+#define L1F_RXQ0 0x15A0
+#define L1F_RXQ0_EN BIT(31)
+#define L1F_RXQ0_CUT_THRU_EN BIT(30)
+#define L1F_RXQ0_RSS_HASH_EN BIT(29)
+#define L1F_RXQ0_NON_IP_QTBL BIT(28) /* 0:q0,1:table */
+#define L1F_RXQ0_RSS_MODE_MASK 0x3UL
+#define L1F_RXQ0_RSS_MODE_SHIFT 26
+#define L1F_RXQ0_RSS_MODE_DIS 0
+#define L1F_RXQ0_RSS_MODE_SQSI 1
+#define L1F_RXQ0_RSS_MODE_MQSI 2
+#define L1F_RXQ0_RSS_MODE_MQMI 3
+#define L1F_RXQ0_NUM_RFD_PREF_MASK 0x3FUL
+#define L1F_RXQ0_NUM_RFD_PREF_SHIFT 20
+#define L1F_RXQ0_NUM_RFD_PREF_DEF 8
+#define L1F_RXQ0_IDT_TBL_SIZE_MASK 0x1FFUL
+#define L1F_RXQ0_IDT_TBL_SIZE_SHIFT 8
+#define L1F_RXQ0_IDT_TBL_SIZE_DEF 0x100
+#define L1F_RXQ0_IPV6_PARSE_EN BIT(7)
+#define L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN BIT(5)
+#define L1F_RXQ0_RSS_HSTYP_IPV6_EN BIT(4)
+#define L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN BIT(3)
+#define L1F_RXQ0_RSS_HSTYP_IPV4_EN BIT(2)
+#define L1F_RXQ0_RSS_HSTYP_ALL (\
+ L1F_RXQ0_RSS_HSTYP_IPV6_TCP_EN |\
+ L1F_RXQ0_RSS_HSTYP_IPV4_TCP_EN |\
+ L1F_RXQ0_RSS_HSTYP_IPV6_EN |\
+ L1F_RXQ0_RSS_HSTYP_IPV4_EN)
+#define L1F_RXQ0_ASPM_THRESH_MASK 0x3UL
+#define L1F_RXQ0_ASPM_THRESH_SHIFT 0
+#define L1F_RXQ0_ASPM_THRESH_NO 0
+#define L1F_RXQ0_ASPM_THRESH_1M 1
+#define L1F_RXQ0_ASPM_THRESH_10M 2
+#define L1F_RXQ0_ASPM_THRESH_100M 3
+
+#define L1F_RXQ1 0x15A4
+#define L1F_RXQ1_JUMBO_LKAH_MASK 0xFUL /* 32BYTES UNIT */
+#define L1F_RXQ1_JUMBO_LKAH_SHIFT 12
+#define L1F_RXQ1_RFD_PREF_DOWN_MASK 0x3FUL
+#define L1F_RXQ1_RFD_PREF_DOWN_SHIFT 6
+#define L1F_RXQ1_RFD_PREF_UP_MASK 0x3FUL
+#define L1F_RXQ1_RFD_PREF_UP_SHIFT 0
+
+#define L1F_RXQ2 0x15A8
+/* XOFF: USED SRAM LOWER THAN IT, THEN NOTIFY THE PEER TO SEND AGAIN */
+#define L1F_RXQ2_RXF_XOFF_THRESH_MASK 0xFFFUL
+#define L1F_RXQ2_RXF_XOFF_THRESH_SHIFT 16
+#define L1F_RXQ2_RXF_XON_THRESH_MASK 0xFFFUL
+#define L1F_RXQ2_RXF_XON_THRESH_SHIFT 0
+/*
+ * Size = tx-packet(1522) + IPG(12) + SOF(8) + 64(Pause) + IPG(12) + SOF(8) +
+ * rx-packet(1522) + delay-of-link(64)
+ * = 3212.
+ */
+#define L1F_RXQ2_RXF_FLOW_CTRL_RSVD 3212
+
+#define L1F_RXQ3 0x15AC
+#define L1F_RXQ3_RXD_TIMER_MASK 0x7FFFUL
+#define L1F_RXQ3_RXD_TIMER_SHIFT 16
+#define L1F_RXQ3_RXD_THRESH_MASK 0xFFFUL /* 8BYTES UNIT */
+#define L1F_RXQ3_RXD_THRESH_SHIFT 0
+
+#define L1F_DMA 0x15C0
+#define L1F_DMA_SMB_NOW BIT(31)
+#define L1F_DMA_WPEND_CLR BIT(30)
+#define L1F_DMA_RPEND_CLR BIT(29)
+#define L1F_DMA_WSRAM_RDCTRL BIT(28)
+#define L1F_DMA_RCHNL_SEL_MASK 0x3UL
+#define L1F_DMA_RCHNL_SEL_SHIFT 26
+#define L1F_DMA_RCHNL_SEL_1 0
+#define L1F_DMA_RCHNL_SEL_2 1
+#define L1F_DMA_RCHNL_SEL_3 2
+#define L1F_DMA_RCHNL_SEL_4 3
+#define L1F_DMA_SMB_EN BIT(21) /* smb dma enable */
+#define L1F_DMA_WDLY_CNT_MASK 0xFUL
+#define L1F_DMA_WDLY_CNT_SHIFT 16
+#define L1F_DMA_WDLY_CNT_DEF 4
+#define L1F_DMA_RDLY_CNT_MASK 0x1FUL
+#define L1F_DMA_RDLY_CNT_SHIFT 11
+#define L1F_DMA_RDLY_CNT_DEF 15
+#define L1F_DMA_RREQ_PRI_DATA BIT(10) /* 0:tpd, 1:data */
+#define L1F_DMA_WREQ_BLEN_MASK 0x7UL
+#define L1F_DMA_WREQ_BLEN_SHIFT 7
+#define L1F_DMA_RREQ_BLEN_MASK 0x7UL
+#define L1F_DMA_RREQ_BLEN_SHIFT 4
+#define L1F_DMA_PENDING_AUTO_RST BIT(3)
+#define L1F_DMA_RORDER_MODE_MASK 0x7UL
+#define L1F_DMA_RORDER_MODE_SHIFT 0
+#define L1F_DMA_RORDER_MODE_OUT 4
+#define L1F_DMA_RORDER_MODE_ENHANCE 2
+#define L1F_DMA_RORDER_MODE_IN 1
+
+#define L1F_WOL0 0x14A0
+#define L1F_WOL0_PT7_MATCH BIT(31)
+#define L1F_WOL0_PT6_MATCH BIT(30)
+#define L1F_WOL0_PT5_MATCH BIT(29)
+#define L1F_WOL0_PT4_MATCH BIT(28)
+#define L1F_WOL0_PT3_MATCH BIT(27)
+#define L1F_WOL0_PT2_MATCH BIT(26)
+#define L1F_WOL0_PT1_MATCH BIT(25)
+#define L1F_WOL0_PT0_MATCH BIT(24)
+#define L1F_WOL0_PT7_EN BIT(23)
+#define L1F_WOL0_PT6_EN BIT(22)
+#define L1F_WOL0_PT5_EN BIT(21)
+#define L1F_WOL0_PT4_EN BIT(20)
+#define L1F_WOL0_PT3_EN BIT(19)
+#define L1F_WOL0_PT2_EN BIT(18)
+#define L1F_WOL0_PT1_EN BIT(17)
+#define L1F_WOL0_PT0_EN BIT(16)
+#define L1F_WOL0_IPV4_SYNC_EVT BIT(14)
+#define L1F_WOL0_IPV6_SYNC_EVT BIT(13)
+#define L1F_WOL0_LINK_EVT BIT(10)
+#define L1F_WOL0_MAGIC_EVT BIT(9)
+#define L1F_WOL0_PATTERN_EVT BIT(8)
+#define L1F_WOL0_OOB_EN BIT(6)
+#define L1F_WOL0_PME_LINK BIT(5)
+#define L1F_WOL0_LINK_EN BIT(4)
+#define L1F_WOL0_PME_MAGIC_EN BIT(3)
+#define L1F_WOL0_MAGIC_EN BIT(2)
+#define L1F_WOL0_PME_PATTERN_EN BIT(1)
+#define L1F_WOL0_PATTERN_EN BIT(0)
+
+#define L1F_WOL1 0x14A4
+#define L1F_WOL1_PT3_LEN_MASK 0xFFUL
+#define L1F_WOL1_PT3_LEN_SHIFT 24
+#define L1F_WOL1_PT2_LEN_MASK 0xFFUL
+#define L1F_WOL1_PT2_LEN_SHIFT 16
+#define L1F_WOL1_PT1_LEN_MASK 0xFFUL
+#define L1F_WOL1_PT1_LEN_SHIFT 8
+#define L1F_WOL1_PT0_LEN_MASK 0xFFUL
+#define L1F_WOL1_PT0_LEN_SHIFT 0
+
+#define L1F_WOL2 0x14A8
+#define L1F_WOL2_PT7_LEN_MASK 0xFFUL
+#define L1F_WOL2_PT7_LEN_SHIFT 24
+#define L1F_WOL2_PT6_LEN_MASK 0xFFUL
+#define L1F_WOL2_PT6_LEN_SHIFT 16
+#define L1F_WOL2_PT5_LEN_MASK 0xFFUL
+#define L1F_WOL2_PT5_LEN_SHIFT 8
+#define L1F_WOL2_PT4_LEN_MASK 0xFFUL
+#define L1F_WOL2_PT4_LEN_SHIFT 0
+
+#define L1F_RFD_PIDX 0x15E0
+#define L1F_RFD_PIDX_MASK 0xFFFUL
+#define L1F_RFD_PIDX_SHIFT 0
+
+#define L1F_RFD_CIDX 0x15F8
+#define L1F_RFD_CIDX_MASK 0xFFFUL
+#define L1F_RFD_CIDX_SHIFT 0
+
+/* MIB */
+#define L1F_MIB_BASE 0x1700
+#define L1F_MIB_RX_OK (L1F_MIB_BASE + 0)
+#define L1F_MIB_RX_BC (L1F_MIB_BASE + 4)
+#define L1F_MIB_RX_MC (L1F_MIB_BASE + 8)
+#define L1F_MIB_RX_PAUSE (L1F_MIB_BASE + 12)
+#define L1F_MIB_RX_CTRL (L1F_MIB_BASE + 16)
+#define L1F_MIB_RX_FCS (L1F_MIB_BASE + 20)
+#define L1F_MIB_RX_LENERR (L1F_MIB_BASE + 24)
+#define L1F_MIB_RX_BYTCNT (L1F_MIB_BASE + 28)
+#define L1F_MIB_RX_RUNT (L1F_MIB_BASE + 32)
+#define L1F_MIB_RX_FRAGMENT (L1F_MIB_BASE + 36)
+#define L1F_MIB_RX_64B (L1F_MIB_BASE + 40)
+#define L1F_MIB_RX_127B (L1F_MIB_BASE + 44)
+#define L1F_MIB_RX_255B (L1F_MIB_BASE + 48)
+#define L1F_MIB_RX_511B (L1F_MIB_BASE + 52)
+#define L1F_MIB_RX_1023B (L1F_MIB_BASE + 56)
+#define L1F_MIB_RX_1518B (L1F_MIB_BASE + 60)
+#define L1F_MIB_RX_SZMAX (L1F_MIB_BASE + 64)
+#define L1F_MIB_RX_OVSZ (L1F_MIB_BASE + 68)
+#define L1F_MIB_RXF_OV (L1F_MIB_BASE + 72)
+#define L1F_MIB_RRD_OV (L1F_MIB_BASE + 76)
+#define L1F_MIB_RX_ALIGN (L1F_MIB_BASE + 80)
+#define L1F_MIB_RX_BCCNT (L1F_MIB_BASE + 84)
+#define L1F_MIB_RX_MCCNT (L1F_MIB_BASE + 88)
+#define L1F_MIB_RX_ERRADDR (L1F_MIB_BASE + 92)
+#define L1F_MIB_TX_OK (L1F_MIB_BASE + 96)
+#define L1F_MIB_TX_BC (L1F_MIB_BASE + 100)
+#define L1F_MIB_TX_MC (L1F_MIB_BASE + 104)
+#define L1F_MIB_TX_PAUSE (L1F_MIB_BASE + 108)
+#define L1F_MIB_TX_EXCDEFER (L1F_MIB_BASE + 112)
+#define L1F_MIB_TX_CTRL (L1F_MIB_BASE + 116)
+#define L1F_MIB_TX_DEFER (L1F_MIB_BASE + 120)
+#define L1F_MIB_TX_BYTCNT (L1F_MIB_BASE + 124)
+#define L1F_MIB_TX_64B (L1F_MIB_BASE + 128)
+#define L1F_MIB_TX_127B (L1F_MIB_BASE + 132)
+#define L1F_MIB_TX_255B (L1F_MIB_BASE + 136)
+#define L1F_MIB_TX_511B (L1F_MIB_BASE + 140)
+#define L1F_MIB_TX_1023B (L1F_MIB_BASE + 144)
+#define L1F_MIB_TX_1518B (L1F_MIB_BASE + 148)
+#define L1F_MIB_TX_SZMAX (L1F_MIB_BASE + 152)
+#define L1F_MIB_TX_1COL (L1F_MIB_BASE + 156)
+#define L1F_MIB_TX_2COL (L1F_MIB_BASE + 160)
+#define L1F_MIB_TX_LATCOL (L1F_MIB_BASE + 164)
+#define L1F_MIB_TX_ABRTCOL (L1F_MIB_BASE + 168)
+#define L1F_MIB_TX_UNDRUN (L1F_MIB_BASE + 172)
+#define L1F_MIB_TX_TRDBEOP (L1F_MIB_BASE + 176)
+#define L1F_MIB_TX_LENERR (L1F_MIB_BASE + 180)
+#define L1F_MIB_TX_TRUNC (L1F_MIB_BASE + 184)
+#define L1F_MIB_TX_BCCNT (L1F_MIB_BASE + 188)
+#define L1F_MIB_TX_MCCNT (L1F_MIB_BASE + 192)
+#define L1F_MIB_UPDATE (L1F_MIB_BASE + 196)
+
+/******************************************************************************/
+
+#define L1F_ISR 0x1600
+#define L1F_ISR_DIS BIT(31)
+#define L1F_ISR_RX_Q7 BIT(30)
+#define L1F_ISR_RX_Q6 BIT(29)
+#define L1F_ISR_RX_Q5 BIT(28)
+#define L1F_ISR_RX_Q4 BIT(27)
+#define L1F_ISR_PCIE_LNKDOWN BIT(26)
+#define L1F_ISR_PCIE_CERR BIT(25)
+#define L1F_ISR_PCIE_NFERR BIT(24)
+#define L1F_ISR_PCIE_FERR BIT(23)
+#define L1F_ISR_PCIE_UR BIT(22)
+#define L1F_ISR_MAC_TX BIT(21)
+#define L1F_ISR_MAC_RX BIT(20)
+#define L1F_ISR_RX_Q3 BIT(19)
+#define L1F_ISR_RX_Q2 BIT(18)
+#define L1F_ISR_RX_Q1 BIT(17)
+#define L1F_ISR_RX_Q0 BIT(16)
+#define L1F_ISR_TX_Q0 BIT(15)
+#define L1F_ISR_TXQ_TO BIT(14)
+#define L1F_ISR_PHY_LPW BIT(13)
+#define L1F_ISR_PHY BIT(12)
+#define L1F_ISR_TX_CREDIT BIT(11)
+#define L1F_ISR_DMAW BIT(10)
+#define L1F_ISR_DMAR BIT(9)
+#define L1F_ISR_TXF_UR BIT(8)
+#define L1F_ISR_TX_Q3 BIT(7)
+#define L1F_ISR_TX_Q2 BIT(6)
+#define L1F_ISR_TX_Q1 BIT(5)
+#define L1F_ISR_RFD_UR BIT(4)
+#define L1F_ISR_RXF_OV BIT(3)
+#define L1F_ISR_MANU BIT(2)
+#define L1F_ISR_TIMER BIT(1)
+#define L1F_ISR_SMB BIT(0)
+
+#define L1F_IMR 0x1604
+
+#define L1F_INT_RETRIG 0x1608 /* re-send deassert/assert
+ * if sw doesn't respond */
+#define L1F_INT_RETRIG_TIMER_MASK 0xFFFFUL
+#define L1F_INT_RETRIG_TIMER_SHIFT 0
+#define L1F_INT_RETRIG_TO 20000 /* 40ms */
+
+#define L1F_INT_DEASST_TIMER 0x1614 /* re-send deassert
+ * if sw doesn't respond */
+
+#define L1F_PATTERN_MASK 0x1620 /* 128bytes, sleep state */
+#define L1F_PATTERN_MASK_LEN 128
+
+
+#define L1F_FLT1_SRC_IP0 0x1A00
+#define L1F_FLT1_SRC_IP1 0x1A04
+#define L1F_FLT1_SRC_IP2 0x1A08
+#define L1F_FLT1_SRC_IP3 0x1A0C
+#define L1F_FLT1_DST_IP0 0x1A10
+#define L1F_FLT1_DST_IP1 0x1A14
+#define L1F_FLT1_DST_IP2 0x1A18
+#define L1F_FLT1_DST_IP3 0x1A1C
+#define L1F_FLT1_PORT 0x1A20
+#define L1F_FLT1_PORT_DST_MASK 0xFFFFUL
+#define L1F_FLT1_PORT_DST_SHIFT 16
+#define L1F_FLT1_PORT_SRC_MASK 0xFFFFUL
+#define L1F_FLT1_PORT_SRC_SHIFT 0
+
+#define L1F_FLT2_SRC_IP0 0x1A24
+#define L1F_FLT2_SRC_IP1 0x1A28
+#define L1F_FLT2_SRC_IP2 0x1A2C
+#define L1F_FLT2_SRC_IP3 0x1A30
+#define L1F_FLT2_DST_IP0 0x1A34
+#define L1F_FLT2_DST_IP1 0x1A38
+#define L1F_FLT2_DST_IP2 0x1A40
+#define L1F_FLT2_DST_IP3 0x1A44
+#define L1F_FLT2_PORT 0x1A48
+#define L1F_FLT2_PORT_DST_MASK 0xFFFFUL
+#define L1F_FLT2_PORT_DST_SHIFT 16
+#define L1F_FLT2_PORT_SRC_MASK 0xFFFFUL
+#define L1F_FLT2_PORT_SRC_SHIFT 0
+
+#define L1F_FLT3_SRC_IP0 0x1A4C
+#define L1F_FLT3_SRC_IP1 0x1A50
+#define L1F_FLT3_SRC_IP2 0x1A54
+#define L1F_FLT3_SRC_IP3 0x1A58
+#define L1F_FLT3_DST_IP0 0x1A5C
+#define L1F_FLT3_DST_IP1 0x1A60
+#define L1F_FLT3_DST_IP2 0x1A64
+#define L1F_FLT3_DST_IP3 0x1A68
+#define L1F_FLT3_PORT 0x1A6C
+#define L1F_FLT3_PORT_DST_MASK 0xFFFFUL
+#define L1F_FLT3_PORT_DST_SHIFT 16
+#define L1F_FLT3_PORT_SRC_MASK 0xFFFFUL
+#define L1F_FLT3_PORT_SRC_SHIFT 0
+
+#define L1F_FLT4_SRC_IP0 0x1A70
+#define L1F_FLT4_SRC_IP1 0x1A74
+#define L1F_FLT4_SRC_IP2 0x1A78
+#define L1F_FLT4_SRC_IP3 0x1A7C
+#define L1F_FLT4_DST_IP0 0x1A80
+#define L1F_FLT4_DST_IP1 0x1A84
+#define L1F_FLT4_DST_IP2 0x1A88
+#define L1F_FLT4_DST_IP3 0x1A8C
+#define L1F_FLT4_PORT 0x1A90
+#define L1F_FLT4_PORT_DST_MASK 0xFFFFUL
+#define L1F_FLT4_PORT_DST_SHIFT 16
+#define L1F_FLT4_PORT_SRC_MASK 0xFFFFUL
+#define L1F_FLT4_PORT_SRC_SHIFT 0
+
+#define L1F_FLT5_SRC_IP0 0x1A94
+#define L1F_FLT5_SRC_IP1 0x1A98
+#define L1F_FLT5_SRC_IP2 0x1A9C
+#define L1F_FLT5_SRC_IP3 0x1AA0
+#define L1F_FLT5_DST_IP0 0x1AA4
+#define L1F_FLT5_DST_IP1 0x1AA8
+#define L1F_FLT5_DST_IP2 0x1AAC
+#define L1F_FLT5_DST_IP3 0x1AB0
+#define L1F_FLT5_PORT 0x1AB4
+#define L1F_FLT5_PORT_DST_MASK 0xFFFFUL
+#define L1F_FLT5_PORT_DST_SHIFT 16
+#define L1F_FLT5_PORT_SRC_MASK 0xFFFFUL
+#define L1F_FLT5_PORT_SRC_SHIFT 0
+
+#define L1F_FLT6_SRC_IP0 0x1AB8
+#define L1F_FLT6_SRC_IP1 0x1ABC
+#define L1F_FLT6_SRC_IP2 0x1AC0
+#define L1F_FLT6_SRC_IP3 0x1AC8
+#define L1F_FLT6_DST_IP0 0x1620 /* only S0 state */
+#define L1F_FLT6_DST_IP1 0x1624
+#define L1F_FLT6_DST_IP2 0x1628
+#define L1F_FLT6_DST_IP3 0x162C
+#define L1F_FLT6_PORT 0x1630
+#define L1F_FLT6_PORT_DST_MASK 0xFFFFUL
+#define L1F_FLT6_PORT_DST_SHIFT 16
+#define L1F_FLT6_PORT_SRC_MASK 0xFFFFUL
+#define L1F_FLT6_PORT_SRC_SHIFT 0
+
+#define L1F_FLTCTRL 0x1634
+#define L1F_FLTCTRL_PSTHR_TIMER_MASK 0xFFUL
+#define L1F_FLTCTRL_PSTHR_TIMER_SHIFT 24
+#define L1F_FLTCTRL_CHK_DSTPRT6 BIT(23)
+#define L1F_FLTCTRL_CHK_SRCPRT6 BIT(22)
+#define L1F_FLTCTRL_CHK_DSTIP6 BIT(21)
+#define L1F_FLTCTRL_CHK_SRCIP6 BIT(20)
+#define L1F_FLTCTRL_CHK_DSTPRT5 BIT(19)
+#define L1F_FLTCTRL_CHK_SRCPRT5 BIT(18)
+#define L1F_FLTCTRL_CHK_DSTIP5 BIT(17)
+#define L1F_FLTCTRL_CHK_SRCIP5 BIT(16)
+#define L1F_FLTCTRL_CHK_DSTPRT4 BIT(15)
+#define L1F_FLTCTRL_CHK_SRCPRT4 BIT(14)
+#define L1F_FLTCTRL_CHK_DSTIP4 BIT(13)
+#define L1F_FLTCTRL_CHK_SRCIP4 BIT(12)
+#define L1F_FLTCTRL_CHK_DSTPRT3 BIT(11)
+#define L1F_FLTCTRL_CHK_SRCPRT3 BIT(10)
+#define L1F_FLTCTRL_CHK_DSTIP3 BIT(9)
+#define L1F_FLTCTRL_CHK_SRCIP3 BIT(8)
+#define L1F_FLTCTRL_CHK_DSTPRT2 BIT(7)
+#define L1F_FLTCTRL_CHK_SRCPRT2 BIT(6)
+#define L1F_FLTCTRL_CHK_DSTIP2 BIT(5)
+#define L1F_FLTCTRL_CHK_SRCIP2 BIT(4)
+#define L1F_FLTCTRL_CHK_DSTPRT1 BIT(3)
+#define L1F_FLTCTRL_CHK_SRCPRT1 BIT(2)
+#define L1F_FLTCTRL_CHK_DSTIP1 BIT(1)
+#define L1F_FLTCTRL_CHK_SRCIP1 BIT(0)
+
+#define L1F_DROP_ALG1 0x1638
+#define L1F_DROP_ALG1_BWCHGVAL_MASK 0xFFFFFUL
+#define L1F_DROP_ALG1_BWCHGVAL_SHIFT 12
+#define L1F_DROP_ALG1_BWCHGSCL_6 BIT(11) /* 0:3.125%, 1:6.25% */
+#define L1F_DROP_ALG1_ASUR_LWQ_EN BIT(10)
+#define L1F_DROP_ALG1_BWCHGVAL_EN BIT(9)
+#define L1F_DROP_ALG1_BWCHGSCL_EN BIT(8)
+#define L1F_DROP_ALG1_PSTHR_AUTO BIT(7) /* 0:manual, 1:auto */
+#define L1F_DROP_ALG1_MIN_PSTHR_MASK 0x3UL
+#define L1F_DROP_ALG1_MIN_PSTHR_SHIFT 5
+#define L1F_DROP_ALG1_MIN_PSTHR_1_16 0
+#define L1F_DROP_ALG1_MIN_PSTHR_1_8 1
+#define L1F_DROP_ALG1_MIN_PSTHR_1_4 2
+#define L1F_DROP_ALG1_MIN_PSTHR_1_2 3
+#define L1F_DROP_ALG1_PSCL_MASK 0x3UL
+#define L1F_DROP_ALG1_PSCL_SHIFT 3
+#define L1F_DROP_ALG1_PSCL_1_4 0
+#define L1F_DROP_ALG1_PSCL_1_8 1
+#define L1F_DROP_ALG1_PSCL_1_16 2
+#define L1F_DROP_ALG1_PSCL_1_32 3
+#define L1F_DROP_ALG1_TIMESLOT_MASK 0x7UL
+#define L1F_DROP_ALG1_TIMESLOT_SHIFT 0
+#define L1F_DROP_ALG1_TIMESLOT_4MS 0
+#define L1F_DROP_ALG1_TIMESLOT_8MS 1
+#define L1F_DROP_ALG1_TIMESLOT_16MS 2
+#define L1F_DROP_ALG1_TIMESLOT_32MS 3
+#define L1F_DROP_ALG1_TIMESLOT_64MS 4
+#define L1F_DROP_ALG1_TIMESLOT_128MS 5
+#define L1F_DROP_ALG1_TIMESLOT_256MS 6
+#define L1F_DROP_ALG1_TIMESLOT_512MS 7
+
+#define L1F_DROP_ALG2 0x163C
+#define L1F_DROP_ALG2_SMPLTIME_MASK 0xFUL
+#define L1F_DROP_ALG2_SMPLTIME_SHIFT 24
+#define L1F_DROP_ALG2_LWQBW_MASK 0xFFFFFFUL
+#define L1F_DROP_ALG2_LWQBW_SHIFT 0
+
+#define L1F_SMB_TIMER 0x15C4
+
+#define L1F_TINT_TPD_THRSHLD 0x15C8
+
+#define L1F_TINT_TIMER 0x15CC
+
+#define L1F_CLK_GATE 0x1814
+#define L1F_CLK_GATE_125M_SW_DIS_CR BIT(8) /* B0 */
+#define L1F_CLK_GATE_125M_SW_AZ BIT(7) /* B0 */
+#define L1F_CLK_GATE_125M_SW_IDLE BIT(6) /* B0 */
+#define L1F_CLK_GATE_RXMAC BIT(5)
+#define L1F_CLK_GATE_TXMAC BIT(4)
+#define L1F_CLK_GATE_RXQ BIT(3)
+#define L1F_CLK_GATE_TXQ BIT(2)
+#define L1F_CLK_GATE_DMAR BIT(1)
+#define L1F_CLK_GATE_DMAW BIT(0)
+#define L1F_CLK_GATE_ALL_A0 (\
+ L1F_CLK_GATE_RXMAC |\
+ L1F_CLK_GATE_TXMAC |\
+ L1F_CLK_GATE_RXQ |\
+ L1F_CLK_GATE_TXQ |\
+ L1F_CLK_GATE_DMAR |\
+ L1F_CLK_GATE_DMAW)
+#define L1F_CLK_GATE_ALL_B0 (\
+ L1F_CLK_GATE_ALL_A0 |\
+ L1F_CLK_GATE_125M_SW_AZ |\
+ L1F_CLK_GATE_125M_SW_IDLE)
+
+#define L1F_BTROM_CFG 0x1800 /* pwon rst */
+
+#define L1F_DRV 0x1804
+/* bit definition is in lx_hwcomm.h */
+
+#define L1F_DRV_ERR1 0x1808 /* perst */
+#define L1F_DRV_ERR1_GEN BIT(31) /* general err */
+#define L1F_DRV_ERR1_NOR BIT(30) /* rrd.nor */
+#define L1F_DRV_ERR1_TRUNC BIT(29)
+#define L1F_DRV_ERR1_RES BIT(28)
+#define L1F_DRV_ERR1_INTFATAL BIT(27)
+#define L1F_DRV_ERR1_TXQPEND BIT(26)
+#define L1F_DRV_ERR1_DMAW BIT(25)
+#define L1F_DRV_ERR1_DMAR BIT(24)
+#define L1F_DRV_ERR1_PCIELNKDWN BIT(23)
+#define L1F_DRV_ERR1_PKTSIZE BIT(22)
+#define L1F_DRV_ERR1_FIFOFUL BIT(21)
+#define L1F_DRV_ERR1_RFDUR BIT(20)
+#define L1F_DRV_ERR1_RRDSI BIT(19)
+#define L1F_DRV_ERR1_UPDATE BIT(18)
+
+#define L1F_DRV_ERR2 0x180C
+
+#define L1F_DBG_ADDR 0x1900 /* DWORD reg */
+#define L1F_DBG_DATA 0x1904 /* DWORD reg */
+
+#define L1F_SYNC_IPV4_SA 0x1A00
+#define L1F_SYNC_IPV4_DA 0x1A04
+
+#define L1F_SYNC_V4PORT 0x1A08
+#define L1F_SYNC_V4PORT_DST_MASK 0xFFFFUL
+#define L1F_SYNC_V4PORT_DST_SHIFT 16
+#define L1F_SYNC_V4PORT_SRC_MASK 0xFFFFUL
+#define L1F_SYNC_V4PORT_SRC_SHIFT 0
+
+#define L1F_SYNC_IPV6_SA0 0x1A0C
+#define L1F_SYNC_IPV6_SA1 0x1A10
+#define L1F_SYNC_IPV6_SA2 0x1A14
+#define L1F_SYNC_IPV6_SA3 0x1A18
+#define L1F_SYNC_IPV6_DA0 0x1A1C
+#define L1F_SYNC_IPV6_DA1 0x1A20
+#define L1F_SYNC_IPV6_DA2 0x1A24
+#define L1F_SYNC_IPV6_DA3 0x1A28
+
+#define L1F_SYNC_V6PORT 0x1A2C
+#define L1F_SYNC_V6PORT_DST_MASK 0xFFFFUL
+#define L1F_SYNC_V6PORT_DST_SHIFT 16
+#define L1F_SYNC_V6PORT_SRC_MASK 0xFFFFUL
+#define L1F_SYNC_V6PORT_SRC_SHIFT 0
+
+#define L1F_ARP_REMOTE_IPV4 0x1A30
+#define L1F_ARP_HOST_IPV4 0x1A34
+#define L1F_ARP_MAC0 0x1A38
+#define L1F_ARP_MAC1 0x1A3C
+
+#define L1F_1ST_REMOTE_IPV6_0 0x1A40
+#define L1F_1ST_REMOTE_IPV6_1 0x1A44
+#define L1F_1ST_REMOTE_IPV6_2 0x1A48
+#define L1F_1ST_REMOTE_IPV6_3 0x1A4C
+
+#define L1F_1ST_SN_IPV6_0 0x1A50
+#define L1F_1ST_SN_IPV6_1 0x1A54
+#define L1F_1ST_SN_IPV6_2 0x1A58
+#define L1F_1ST_SN_IPV6_3 0x1A5C
+
+#define L1F_1ST_TAR_IPV6_1_0 0x1A60
+#define L1F_1ST_TAR_IPV6_1_1 0x1A64
+#define L1F_1ST_TAR_IPV6_1_2 0x1A68
+#define L1F_1ST_TAR_IPV6_1_3 0x1A6C
+#define L1F_1ST_TAR_IPV6_2_0 0x1A70
+#define L1F_1ST_TAR_IPV6_2_1 0x1A74
+#define L1F_1ST_TAR_IPV6_2_2 0x1A78
+#define L1F_1ST_TAR_IPV6_2_3 0x1A7C
+
+#define L1F_2ND_REMOTE_IPV6_0 0x1A80
+#define L1F_2ND_REMOTE_IPV6_1 0x1A84
+#define L1F_2ND_REMOTE_IPV6_2 0x1A88
+#define L1F_2ND_REMOTE_IPV6_3 0x1A8C
+
+#define L1F_2ND_SN_IPV6_0 0x1A90
+#define L1F_2ND_SN_IPV6_1 0x1A94
+#define L1F_2ND_SN_IPV6_2 0x1A98
+#define L1F_2ND_SN_IPV6_3 0x1A9C
+
+#define L1F_2ND_TAR_IPV6_1_0 0x1AA0
+#define L1F_2ND_TAR_IPV6_1_1 0x1AA4
+#define L1F_2ND_TAR_IPV6_1_2 0x1AA8
+#define L1F_2ND_TAR_IPV6_1_3 0x1AAC
+#define L1F_2ND_TAR_IPV6_2_0 0x1AB0
+#define L1F_2ND_TAR_IPV6_2_1 0x1AB4
+#define L1F_2ND_TAR_IPV6_2_2 0x1AB8
+#define L1F_2ND_TAR_IPV6_2_3 0x1ABC
+
+#define L1F_1ST_NS_MAC0 0x1AC0
+#define L1F_1ST_NS_MAC1 0x1AC4
+
+#define L1F_2ND_NS_MAC0 0x1AC8
+#define L1F_2ND_NS_MAC1 0x1ACC
+
+#define L1F_PMOFLD 0x144C
+#define L1F_PMOFLD_ECMA_IGNR_FRG_SSSR BIT(11) /* B0 */
+#define L1F_PMOFLD_ARP_CNFLCT_WAKEUP BIT(10) /* B0 */
+#define L1F_PMOFLD_MULTI_SOLD BIT(9)
+#define L1F_PMOFLD_ICMP_XSUM BIT(8)
+#define L1F_PMOFLD_GARP_REPLY BIT(7)
+#define L1F_PMOFLD_SYNCV6_ANY BIT(6)
+#define L1F_PMOFLD_SYNCV4_ANY BIT(5)
+#define L1F_PMOFLD_BY_HW BIT(4)
+#define L1F_PMOFLD_NS_EN BIT(3)
+#define L1F_PMOFLD_ARP_EN BIT(2)
+#define L1F_PMOFLD_SYNCV6_EN BIT(1)
+#define L1F_PMOFLD_SYNCV4_EN BIT(0)
+
+#define L1F_RSS_KEY0 0x14B0
+#define L1F_RSS_KEY1 0x14B4
+#define L1F_RSS_KEY2 0x14B8
+#define L1F_RSS_KEY3 0x14BC
+#define L1F_RSS_KEY4 0x14C0
+#define L1F_RSS_KEY5 0x14C4
+#define L1F_RSS_KEY6 0x14C8
+#define L1F_RSS_KEY7 0x14CC
+#define L1F_RSS_KEY8 0x14D0
+#define L1F_RSS_KEY9 0x14D4
+
+#define L1F_RSS_IDT_TBL0 0x1B00
+#define L1F_RSS_IDT_TBL1 0x1B04
+#define L1F_RSS_IDT_TBL2 0x1B08
+#define L1F_RSS_IDT_TBL3 0x1B0C
+#define L1F_RSS_IDT_TBL4 0x1B10
+#define L1F_RSS_IDT_TBL5 0x1B14
+#define L1F_RSS_IDT_TBL6 0x1B18
+#define L1F_RSS_IDT_TBL7 0x1B1C
+#define L1F_RSS_IDT_TBL8 0x1B20
+#define L1F_RSS_IDT_TBL9 0x1B24
+#define L1F_RSS_IDT_TBL10 0x1B28
+#define L1F_RSS_IDT_TBL11 0x1B2C
+#define L1F_RSS_IDT_TBL12 0x1B30
+#define L1F_RSS_IDT_TBL13 0x1B34
+#define L1F_RSS_IDT_TBL14 0x1B38
+#define L1F_RSS_IDT_TBL15 0x1B3C
+#define L1F_RSS_IDT_TBL16 0x1B40
+#define L1F_RSS_IDT_TBL17 0x1B44
+#define L1F_RSS_IDT_TBL18 0x1B48
+#define L1F_RSS_IDT_TBL19 0x1B4C
+#define L1F_RSS_IDT_TBL20 0x1B50
+#define L1F_RSS_IDT_TBL21 0x1B54
+#define L1F_RSS_IDT_TBL22 0x1B58
+#define L1F_RSS_IDT_TBL23 0x1B5C
+#define L1F_RSS_IDT_TBL24 0x1B60
+#define L1F_RSS_IDT_TBL25 0x1B64
+#define L1F_RSS_IDT_TBL26 0x1B68
+#define L1F_RSS_IDT_TBL27 0x1B6C
+#define L1F_RSS_IDT_TBL28 0x1B70
+#define L1F_RSS_IDT_TBL29 0x1B74
+#define L1F_RSS_IDT_TBL30 0x1B78
+#define L1F_RSS_IDT_TBL31 0x1B7C
+
+#define L1F_RSS_HASH_VAL 0x15B0
+#define L1F_RSS_HASH_FLAG 0x15B4
+
+#define L1F_RSS_BASE_CPU_NUM 0x15B8
+
+#define L1F_MSI_MAP_TBL1 0x15D0
+#define L1F_MSI_MAP_TBL1_ALERT_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_ALERT_SHIFT 28
+#define L1F_MSI_MAP_TBL1_TIMER_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_TIMER_SHIFT 24
+#define L1F_MSI_MAP_TBL1_TXQ1_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_TXQ1_SHIFT 20
+#define L1F_MSI_MAP_TBL1_TXQ0_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_TXQ0_SHIFT 16
+#define L1F_MSI_MAP_TBL1_RXQ3_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_RXQ3_SHIFT 12
+#define L1F_MSI_MAP_TBL1_RXQ2_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_RXQ2_SHIFT 8
+#define L1F_MSI_MAP_TBL1_RXQ1_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_RXQ1_SHIFT 4
+#define L1F_MSI_MAP_TBL1_RXQ0_MASK 0xFUL
+#define L1F_MSI_MAP_TBL1_RXQ0_SHIFT 0
+
+#define L1F_MSI_MAP_TBL2 0x15D8
+#define L1F_MSI_MAP_TBL2_PHY_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_PHY_SHIFT 28
+#define L1F_MSI_MAP_TBL2_SMB_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_SMB_SHIFT 24
+#define L1F_MSI_MAP_TBL2_TXQ3_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_TXQ3_SHIFT 20
+#define L1F_MSI_MAP_TBL2_TXQ2_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_TXQ2_SHIFT 16
+#define L1F_MSI_MAP_TBL2_RXQ7_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_RXQ7_SHIFT 12
+#define L1F_MSI_MAP_TBL2_RXQ6_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_RXQ6_SHIFT 8
+#define L1F_MSI_MAP_TBL2_RXQ5_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_RXQ5_SHIFT 4
+#define L1F_MSI_MAP_TBL2_RXQ4_MASK 0xFUL
+#define L1F_MSI_MAP_TBL2_RXQ4_SHIFT 0
+
+#define L1F_MSI_ID_MAP 0x15D4
+#define L1F_MSI_ID_MAP_RXQ7 BIT(30)
+#define L1F_MSI_ID_MAP_RXQ6 BIT(29)
+#define L1F_MSI_ID_MAP_RXQ5 BIT(28)
+#define L1F_MSI_ID_MAP_RXQ4 BIT(27)
+#define L1F_MSI_ID_MAP_PCIELNKDW BIT(26) /* 0:common,1:timer */
+#define L1F_MSI_ID_MAP_PCIECERR BIT(25)
+#define L1F_MSI_ID_MAP_PCIENFERR BIT(24)
+#define L1F_MSI_ID_MAP_PCIEFERR BIT(23)
+#define L1F_MSI_ID_MAP_PCIEUR BIT(22)
+#define L1F_MSI_ID_MAP_MACTX BIT(21)
+#define L1F_MSI_ID_MAP_MACRX BIT(20)
+#define L1F_MSI_ID_MAP_RXQ3 BIT(19)
+#define L1F_MSI_ID_MAP_RXQ2 BIT(18)
+#define L1F_MSI_ID_MAP_RXQ1 BIT(17)
+#define L1F_MSI_ID_MAP_RXQ0 BIT(16)
+#define L1F_MSI_ID_MAP_TXQ0 BIT(15)
+#define L1F_MSI_ID_MAP_TXQTO BIT(14)
+#define L1F_MSI_ID_MAP_LPW BIT(13)
+#define L1F_MSI_ID_MAP_PHY BIT(12)
+#define L1F_MSI_ID_MAP_TXCREDIT BIT(11)
+#define L1F_MSI_ID_MAP_DMAW BIT(10)
+#define L1F_MSI_ID_MAP_DMAR BIT(9)
+#define L1F_MSI_ID_MAP_TXFUR BIT(8)
+#define L1F_MSI_ID_MAP_TXQ3 BIT(7)
+#define L1F_MSI_ID_MAP_TXQ2 BIT(6)
+#define L1F_MSI_ID_MAP_TXQ1 BIT(5)
+#define L1F_MSI_ID_MAP_RFDUR BIT(4)
+#define L1F_MSI_ID_MAP_RXFOV BIT(3)
+#define L1F_MSI_ID_MAP_MANU BIT(2)
+#define L1F_MSI_ID_MAP_TIMER BIT(1)
+#define L1F_MSI_ID_MAP_SMB BIT(0)
+
+#define L1F_MSI_RETRANS_TIMER 0x1920
+#define L1F_MSI_MASK_SEL_LINE BIT(16) /* 1:line,0:standard*/
+#define L1F_MSI_RETRANS_TM_MASK 0xFFFFUL
+#define L1F_MSI_RETRANS_TM_SHIFT 0
+
+#define L1F_CR_DMA_CTRL 0x1930
+#define L1F_CR_DMA_CTRL_PRI BIT(22)
+#define L1F_CR_DMA_CTRL_RRDRXD_JOINT BIT(21)
+#define L1F_CR_DMA_CTRL_BWCREDIT_MASK 0x3UL
+#define L1F_CR_DMA_CTRL_BWCREDIT_SHIFT 19
+#define L1F_CR_DMA_CTRL_BWCREDIT_2KB 0
+#define L1F_CR_DMA_CTRL_BWCREDIT_1KB 1
+#define L1F_CR_DMA_CTRL_BWCREDIT_4KB 2
+#define L1F_CR_DMA_CTRL_BWCREDIT_8KB 3
+#define L1F_CR_DMA_CTRL_BW_EN BIT(18)
+#define L1F_CR_DMA_CTRL_BW_RATIO_MASK 0x3UL
+#define L1F_CR_DMA_CTRL_BW_RATIO_1_2 0
+#define L1F_CR_DMA_CTRL_BW_RATIO_1_4 1
+#define L1F_CR_DMA_CTRL_BW_RATIO_1_8 2
+#define L1F_CR_DMA_CTRL_BW_RATIO_2_1 3
+#define L1F_CR_DMA_CTRL_SOFT_RST BIT(11)
+#define L1F_CR_DMA_CTRL_TXEARLY_EN BIT(10)
+#define L1F_CR_DMA_CTRL_RXEARLY_EN BIT(9)
+#define L1F_CR_DMA_CTRL_WEARLY_EN BIT(8)
+#define L1F_CR_DMA_CTRL_RXTH_MASK 0xFUL
+#define L1F_CR_DMA_CTRL_WTH_MASK 0xFUL
+
+#define L1F_EFUSE_BIST 0x1934
+#define L1F_EFUSE_BIST_COL_MASK 0x3FUL
+#define L1F_EFUSE_BIST_COL_SHIFT 24
+#define L1F_EFUSE_BIST_ROW_MASK 0x7FUL
+#define L1F_EFUSE_BIST_ROW_SHIFT 12
+#define L1F_EFUSE_BIST_STEP_MASK 0xFUL
+#define L1F_EFUSE_BIST_STEP_SHIFT 8
+#define L1F_EFUSE_BIST_PAT_MASK 0x7UL
+#define L1F_EFUSE_BIST_PAT_SHIFT 4
+#define L1F_EFUSE_BIST_CRITICAL BIT(3)
+#define L1F_EFUSE_BIST_FIXED BIT(2)
+#define L1F_EFUSE_BIST_FAIL BIT(1)
+#define L1F_EFUSE_BIST_NOW BIT(0)
+
+
+/* TX QoS */
+#define L1F_WRR 0x1938
+#define L1F_WRR_PRI_MASK 0x3UL
+#define L1F_WRR_PRI_SHIFT 29
+#define L1F_WRR_PRI_RESTRICT_ALL 0
+#define L1F_WRR_PRI_RESTRICT_HI 1
+#define L1F_WRR_PRI_RESTRICT_HI2 2
+#define L1F_WRR_PRI_RESTRICT_NONE 3
+#define L1F_WRR_PRI3_MASK 0x1FUL
+#define L1F_WRR_PRI3_SHIFT 24
+#define L1F_WRR_PRI2_MASK 0x1FUL
+#define L1F_WRR_PRI2_SHIFT 16
+#define L1F_WRR_PRI1_MASK 0x1FUL
+#define L1F_WRR_PRI1_SHIFT 8
+#define L1F_WRR_PRI0_MASK 0x1FUL
+#define L1F_WRR_PRI0_SHIFT 0
+
+#define L1F_HQTPD 0x193C
+#define L1F_HQTPD_BURST_EN BIT(31)
+#define L1F_HQTPD_Q3_NUMPREF_MASK 0xFUL
+#define L1F_HQTPD_Q3_NUMPREF_SHIFT 8
+#define L1F_HQTPD_Q2_NUMPREF_MASK 0xFUL
+#define L1F_HQTPD_Q2_NUMPREF_SHIFT 4
+#define L1F_HQTPD_Q1_NUMPREF_MASK 0xFUL
+#define L1F_HQTPD_Q1_NUMPREF_SHIFT 0
+
+#define L1F_CPUMAP1 0x19A0
+#define L1F_CPUMAP1_VCT7_MASK 0xFUL
+#define L1F_CPUMAP1_VCT7_SHIFT 28
+#define L1F_CPUMAP1_VCT6_MASK 0xFUL
+#define L1F_CPUMAP1_VCT6_SHIFT 24
+#define L1F_CPUMAP1_VCT5_MASK 0xFUL
+#define L1F_CPUMAP1_VCT5_SHIFT 20
+#define L1F_CPUMAP1_VCT4_MASK 0xFUL
+#define L1F_CPUMAP1_VCT4_SHIFT 16
+#define L1F_CPUMAP1_VCT3_MASK 0xFUL
+#define L1F_CPUMAP1_VCT3_SHIFT 12
+#define L1F_CPUMAP1_VCT2_MASK 0xFUL
+#define L1F_CPUMAP1_VCT2_SHIFT 8
+#define L1F_CPUMAP1_VCT1_MASK 0xFUL
+#define L1F_CPUMAP1_VCT1_SHIFT 4
+#define L1F_CPUMAP1_VCT0_MASK 0xFUL
+#define L1F_CPUMAP1_VCT0_SHIFT 0
+
+#define L1F_CPUMAP2 0x19A4
+#define L1F_CPUMAP2_VCT15_MASK 0xFUL
+#define L1F_CPUMAP2_VCT15_SHIFT 28
+#define L1F_CPUMAP2_VCT14_MASK 0xFUL
+#define L1F_CPUMAP2_VCT14_SHIFT 24
+#define L1F_CPUMAP2_VCT13_MASK 0xFUL
+#define L1F_CPUMAP2_VCT13_SHIFT 20
+#define L1F_CPUMAP2_VCT12_MASK 0xFUL
+#define L1F_CPUMAP2_VCT12_SHIFT 16
+#define L1F_CPUMAP2_VCT11_MASK 0xFUL
+#define L1F_CPUMAP2_VCT11_SHIFT 12
+#define L1F_CPUMAP2_VCT10_MASK 0xFUL
+#define L1F_CPUMAP2_VCT10_SHIFT 8
+#define L1F_CPUMAP2_VCT9_MASK 0xFUL
+#define L1F_CPUMAP2_VCT9_SHIFT 4
+#define L1F_CPUMAP2_VCT8_MASK 0xFUL
+#define L1F_CPUMAP2_VCT8_SHIFT 0
+
+#define L1F_MISC 0x19C0
+#define L1F_MISC_MODU BIT(31) /* 0:vector,1:cpu */
+#define L1F_MISC_OVERCUR BIT(29)
+#define L1F_MISC_PSWR_EN BIT(28)
+#define L1F_MISC_PSW_CTRL_MASK 0xFUL
+#define L1F_MISC_PSW_CTRL_SHIFT 24
+#define L1F_MISC_PSW_OCP_MASK 0x7UL
+#define L1F_MISC_PSW_OCP_SHIFT 21
+#define L1F_MISC_V18_HIGH BIT(20)
+#define L1F_MISC_LPO_CTRL_MASK 0xFUL
+#define L1F_MISC_LPO_CTRL_SHIFT 16
+#define L1F_MISC_ISO_EN BIT(12)
+#define L1F_MISC_XSTANA_ALWAYS_ON BIT(11)
+#define L1F_MISC_SYS25M_SEL_ADAPTIVE BIT(10)
+#define L1F_MISC_SPEED_SIM BIT(9)
+#define L1F_MISC_S1_LWP_EN BIT(8)
+#define L1F_MISC_MACLPW BIT(7) /* pcie/mac do power saving
+ * while phy in low-power state */
+#define L1F_MISC_125M_SW BIT(6)
+#define L1F_MISC_INTNLOSC_OFF_EN BIT(5)
+#define L1F_MISC_EXTN25M_SEL BIT(4) /* 0:chipset,1:crystal */
+#define L1F_MISC_INTNLOSC_OPEN BIT(3)
+#define L1F_MISC_SMBUS_AT_LED BIT(2)
+#define L1F_MISC_PPS_AT_LED_MASK 0x3UL
+#define L1F_MISC_PPS_AT_LED_SHIFT 0
+#define L1F_MISC_PPS_AT_LED_ACT 1
+#define L1F_MISC_PPS_AT_LED_10_100 2
+#define L1F_MISC_PPS_AT_LED_1000 3
+
+#define L1F_MISC1 0x19C4
+#define L1F_MSC1_BLK_CRASPM_REQ BIT(15)
+
+#define L1F_MISC3 0x19CC
+#define L1F_MISC3_25M_BY_SW BIT(1) /* 1:Software control 25M */
+#define L1F_MISC3_25M_NOTO_INTNL BIT(0) /* 0:25M switch to intnl OSC */
+
+/***************************** IO mapping registers ***************************/
+#define L1F_IO_ADDR 0x00 /* DWORD reg */
+#define L1F_IO_DATA 0x04 /* DWORD reg */
+#define L1F_IO_MASTER 0x08 /* DWORD same as reg0x1400 */
+#define L1F_IO_MAC_CTRL 0x0C /* DWORD same as reg0x1480*/
+#define L1F_IO_ISR 0x10 /* DWORD same as reg0x1600 */
+#define L1F_IO_IMR 0x14 /* DWORD same as reg0x1604 */
+#define L1F_IO_TPD_PRI1_PIDX 0x18 /* WORD same as reg0x15F0 */
+#define L1F_IO_TPD_PRI0_PIDX 0x1A /* WORD same as reg0x15F2 */
+#define L1F_IO_TPD_PRI1_CIDX 0x1C /* WORD same as reg0x15F4 */
+#define L1F_IO_TPD_PRI0_CIDX 0x1E /* WORD same as reg0x15F6 */
+#define L1F_IO_RFD_PIDX 0x20 /* WORD same as reg0x15E0 */
+#define L1F_IO_RFD_CIDX 0x30 /* WORD same as reg0x15F8 */
+#define L1F_IO_MDIO 0x38 /* WORD same as reg0x1414 */
+#define L1F_IO_PHY_CTRL 0x3C /* DWORD same as reg0x140C */
+
+
+/********************* PHY regs definition ***************************/
+
+/* Autoneg Advertisement Register */
+#define L1F_ADVERTISE_SPEED_MASK 0x01E0
+#define L1F_ADVERTISE_DEFAULT_CAP 0x1DE0 /* differs from L1C */
+
+/* 1000BASE-T Control Register (0x9) */
+#define L1F_GIGA_CR_1000T_HD_CAPS 0x0100
+#define L1F_GIGA_CR_1000T_FD_CAPS 0x0200
+#define L1F_GIGA_CR_1000T_REPEATER_DTE 0x0400
+
+#define L1F_GIGA_CR_1000T_MS_VALUE 0x0800
+
+#define L1F_GIGA_CR_1000T_MS_ENABLE 0x1000
+
+#define L1F_GIGA_CR_1000T_TEST_MODE_NORMAL 0x0000
+#define L1F_GIGA_CR_1000T_TEST_MODE_1 0x2000
+#define L1F_GIGA_CR_1000T_TEST_MODE_2 0x4000
+#define L1F_GIGA_CR_1000T_TEST_MODE_3 0x6000
+#define L1F_GIGA_CR_1000T_TEST_MODE_4 0x8000
+#define L1F_GIGA_CR_1000T_SPEED_MASK 0x0300
+#define L1F_GIGA_CR_1000T_DEFAULT_CAP 0x0300
+
+/* 1000BASE-T Status Register */
+#define L1F_MII_GIGA_SR 0x0A
+
+/* PHY Specific Status Register */
+#define L1F_MII_GIGA_PSSR 0x11
+#define L1F_GIGA_PSSR_FC_RXEN 0x0004
+#define L1F_GIGA_PSSR_FC_TXEN 0x0008
+#define L1F_GIGA_PSSR_SPD_DPLX_RESOLVED 0x0800
+#define L1F_GIGA_PSSR_DPLX 0x2000
+#define L1F_GIGA_PSSR_SPEED 0xC000
+#define L1F_GIGA_PSSR_10MBS 0x0000
+#define L1F_GIGA_PSSR_100MBS 0x4000
+#define L1F_GIGA_PSSR_1000MBS 0x8000
+
+/* PHY Interrupt Enable Register */
+#define L1F_MII_IER 0x12
+#define L1F_IER_LINK_UP 0x0400
+#define L1F_IER_LINK_DOWN 0x0800
+
+/* PHY Interrupt Status Register */
+#define L1F_MII_ISR 0x13
+#define L1F_ISR_LINK_UP 0x0400
+#define L1F_ISR_LINK_DOWN 0x0800
+
+/* Cable-Detect-Test Control Register */
+#define L1F_MII_CDTC 0x16
+#define L1F_CDTC_EN 1 /* self-clearing */
+#define L1F_CDTC_PAIR_MASK 0x3U
+#define L1F_CDTC_PAIR_SHIFT 8
+
+
+/* Cable-Detect-Test Status Register */
+#define L1F_MII_CDTS 0x1C
+#define L1F_CDTS_STATUS_MASK 0x3U
+#define L1F_CDTS_STATUS_SHIFT 8
+#define L1F_CDTS_STATUS_NORMAL 0
+#define L1F_CDTS_STATUS_SHORT 1
+#define L1F_CDTS_STATUS_OPEN 2
+#define L1F_CDTS_STATUS_INVALID 3
+
+#define L1F_MII_DBG_ADDR 0x1D
+#define L1F_MII_DBG_DATA 0x1E
+
+/***************************** debug port *************************************/
+
+#define L1F_MIIDBG_ANACTRL 0x00
+#define L1F_ANACTRL_CLK125M_DELAY_EN BIT(15)
+#define L1F_ANACTRL_VCO_FAST BIT(14)
+#define L1F_ANACTRL_VCO_SLOW BIT(13)
+#define L1F_ANACTRL_AFE_MODE_EN BIT(12)
+#define L1F_ANACTRL_LCKDET_PHY BIT(11)
+#define L1F_ANACTRL_LCKDET_EN BIT(10)
+#define L1F_ANACTRL_OEN_125M BIT(9)
+#define L1F_ANACTRL_HBIAS_EN BIT(8)
+#define L1F_ANACTRL_HB_EN BIT(7)
+#define L1F_ANACTRL_SEL_HSP BIT(6)
+#define L1F_ANACTRL_CLASSA_EN BIT(5)
+#define L1F_ANACTRL_MANUSWON_SWR_MASK 0x3U
+#define L1F_ANACTRL_MANUSWON_SWR_SHIFT 2
+#define L1F_ANACTRL_MANUSWON_SWR_2V 0
+#define L1F_ANACTRL_MANUSWON_SWR_1P9V 1
+#define L1F_ANACTRL_MANUSWON_SWR_1P8V 2
+#define L1F_ANACTRL_MANUSWON_SWR_1P7V 3
+#define L1F_ANACTRL_MANUSWON_BW3_4M BIT(1)
+#define L1F_ANACTRL_RESTART_CAL BIT(0)
+#define L1F_ANACTRL_DEF 0x02EF
+
+
+#define L1F_MIIDBG_SYSMODCTRL 0x04
+#define L1F_SYSMODCTRL_IECHOADJ_PFMH_PHY BIT(15)
+#define L1F_SYSMODCTRL_IECHOADJ_BIASGEN BIT(14)
+#define L1F_SYSMODCTRL_IECHOADJ_PFML_PHY BIT(13)
+#define L1F_SYSMODCTRL_IECHOADJ_PS_MASK 0x3U
+#define L1F_SYSMODCTRL_IECHOADJ_PS_SHIFT 10
+#define L1F_SYSMODCTRL_IECHOADJ_PS_40 3
+#define L1F_SYSMODCTRL_IECHOADJ_PS_20 2
+#define L1F_SYSMODCTRL_IECHOADJ_PS_0 1
+#define L1F_SYSMODCTRL_IECHOADJ_10BT_100MV BIT(6) /* 1:100mv, 0:200mv */
+#define L1F_SYSMODCTRL_IECHOADJ_HLFAP_MASK 0x3U
+#define L1F_SYSMODCTRL_IECHOADJ_HLFAP_SHIFT 4
+#define L1F_SYSMODCTRL_IECHOADJ_VDFULBW BIT(3)
+#define L1F_SYSMODCTRL_IECHOADJ_VDBIASHLF BIT(2)
+#define L1F_SYSMODCTRL_IECHOADJ_VDAMPHLF BIT(1)
+#define L1F_SYSMODCTRL_IECHOADJ_VDLANSW BIT(0)
+#define L1F_SYSMODCTRL_IECHOADJ_DEF 0xBB8B /* en half bias */
+
+
+#define L1F_MIIDBG_SRDSYSMOD 0x05
+#define L1F_SRDSYSMOD_LCKDET_EN BIT(13)
+#define L1F_SRDSYSMOD_PLL_EN BIT(11)
+#define L1F_SRDSYSMOD_SEL_HSP BIT(10)
+#define L1F_SRDSYSMOD_HLFTXDR BIT(9)
+#define L1F_SRDSYSMOD_TXCLK_DELAY_EN BIT(8)
+#define L1F_SRDSYSMOD_TXELECIDLE BIT(7)
+#define L1F_SRDSYSMOD_DEEMP_EN BIT(6)
+#define L1F_SRDSYSMOD_MS_PAD BIT(2)
+#define L1F_SRDSYSMOD_CDR_ADC_VLTG BIT(1)
+#define L1F_SRDSYSMOD_CDR_DAC_1MA BIT(0)
+#define L1F_SRDSYSMOD_DEF 0x2C46
+
+
+#define L1F_MIIDBG_HIBNEG 0x0B
+#define L1F_HIBNEG_PSHIB_EN BIT(15)
+#define L1F_HIBNEG_WAKE_BOTH BIT(14)
+#define L1F_HIBNEG_ONOFF_ANACHG_SUDEN BIT(13)
+#define L1F_HIBNEG_HIB_PULSE BIT(12)
+#define L1F_HIBNEG_GATE_25M_EN BIT(11)
+#define L1F_HIBNEG_RST_80U BIT(10)
+#define L1F_HIBNEG_RST_TIMER_MASK 0x3U
+#define L1F_HIBNEG_RST_TIMER_SHIFT 8
+#define L1F_HIBNEG_GTX_CLK_DELAY_MASK 0x3U
+#define L1F_HIBNEG_GTX_CLK_DELAY_SHIFT 5
+#define L1F_HIBNEG_BYPSS_BRKTIMER BIT(4)
+#define L1F_HIBNEG_DEF 0xBC40
+
+#define L1F_MIIDBG_TST10BTCFG 0x12
+#define L1F_TST10BTCFG_INTV_TIMER_MASK 0x3U
+#define L1F_TST10BTCFG_INTV_TIMER_SHIFT 14
+#define L1F_TST10BTCFG_TRIGER_TIMER_MASK 0x3U
+#define L1F_TST10BTCFG_TRIGER_TIMER_SHIFT 12
+#define L1F_TST10BTCFG_DIV_MAN_MLT3_EN BIT(11)
+#define L1F_TST10BTCFG_OFF_DAC_IDLE BIT(10)
+#define L1F_TST10BTCFG_LPBK_DEEP BIT(2) /* 1:deep,0:shallow */
+#define L1F_TST10BTCFG_DEF 0x4C04
+
+#define L1F_MIIDBG_AZ_ANADECT 0x15
+#define L1F_AZ_ANADECT_10BTRX_TH BIT(15)
+#define L1F_AZ_ANADECT_BOTH_01CHNL BIT(14)
+#define L1F_AZ_ANADECT_INTV_MASK 0x3FU
+#define L1F_AZ_ANADECT_INTV_SHIFT 8
+#define L1F_AZ_ANADECT_THRESH_MASK 0xFU
+#define L1F_AZ_ANADECT_THRESH_SHIFT 4
+#define L1F_AZ_ANADECT_CHNL_MASK 0xFU
+#define L1F_AZ_ANADECT_CHNL_SHIFT 0
+#define L1F_AZ_ANADECT_DEF 0x3220
+#define L1F_AZ_ANADECT_LONG 0x3210
+
+#define L1F_MIIDBG_MSE16DB 0x18
+#define L1F_MSE16DB_UP 0x05EA
+#define L1F_MSE16DB_DOWN 0x02EA
+
+#define L1F_MIIDBG_MSE20DB 0x1C
+#define L1F_MSE20DB_TH_MASK 0x7F
+#define L1F_MSE20DB_TH_SHIFT 2
+#define L1F_MSE20DB_TH_DEF 0x2E
+#define L1F_MSE20DB_TH_HI 0x54
+
+#define L1F_MIIDBG_AGC 0x23
+#define L1F_AGC_2_VGA_MASK 0x3FU
+#define L1F_AGC_2_VGA_SHIFT 8
+#define L1F_AGC_LONG1G_LIMT 40
+#define L1F_AGC_LONG100M_LIMT 44
+
+#define L1F_MIIDBG_LEGCYPS 0x29
+#define L1F_LEGCYPS_EN BIT(15)
+#define L1F_LEGCYPS_DAC_AMP1000_MASK 0x7U
+#define L1F_LEGCYPS_DAC_AMP1000_SHIFT 12
+#define L1F_LEGCYPS_DAC_AMP100_MASK 0x7U
+#define L1F_LEGCYPS_DAC_AMP100_SHIFT 9
+#define L1F_LEGCYPS_DAC_AMP10_MASK 0x7U
+#define L1F_LEGCYPS_DAC_AMP10_SHIFT 6
+#define L1F_LEGCYPS_UNPLUG_TIMER_MASK 0x7U
+#define L1F_LEGCYPS_UNPLUG_TIMER_SHIFT 3
+#define L1F_LEGCYPS_UNPLUG_DECT_EN BIT(2)
+#define L1F_LEGCYPS_ECNC_PS_EN BIT(0)
+#define L1F_LEGCYPS_DEF 0x129D
+
+#define L1F_MIIDBG_TST100BTCFG 0x36
+#define L1F_TST100BTCFG_NORMAL_BW_EN BIT(15)
+#define L1F_TST100BTCFG_BADLNK_BYPASS BIT(14)
+#define L1F_TST100BTCFG_SHORTCABL_TH_MASK 0x3FU
+#define L1F_TST100BTCFG_SHORTCABL_TH_SHIFT 8
+#define L1F_TST100BTCFG_LITCH_EN BIT(7)
+#define L1F_TST100BTCFG_VLT_SW BIT(6)
+#define L1F_TST100BTCFG_LONGCABL_TH_MASK 0x3FU
+#define L1F_TST100BTCFG_LONGCABL_TH_SHIFT 0
+#define L1F_TST100BTCFG_DEF 0xE12C
+
+#define L1F_MIIDBG_GREENCFG 0x3B
+#define L1F_GREENCFG_MSTPS_MSETH2_MASK 0xFFU
+#define L1F_GREENCFG_MSTPS_MSETH2_SHIFT 8
+#define L1F_GREENCFG_MSTPS_MSETH1_MASK 0xFFU
+#define L1F_GREENCFG_MSTPS_MSETH1_SHIFT 0
+#define L1F_GREENCFG_DEF 0x7078
+
+#define L1F_MIIDBG_GREENCFG2 0x3D
+#define L1F_GREENCFG2_BP_GREEN BIT(15)
+#define L1F_GREENCFG2_GATE_DFSE_EN BIT(7)
+
+
+/***************************** extension **************************************/
+
+/******* dev 3 *********/
+#define L1F_MIIEXT_PCS 3
+
+#define L1F_MIIEXT_CLDCTRL3 0x8003
+#define L1F_CLDCTRL3_BP_CABLE1TH_DET_GT BIT(15)
+#define L1F_CLDCTRL3_AZ_DISAMP BIT(12)
+
+#define L1F_MIIEXT_CLDCTRL5 0x8005
+#define L1F_CLDCTRL5_BP_VD_HLFBIAS BIT(14)
+
+#define L1F_MIIEXT_CLDCTRL6 0x8006
+#define L1F_CLDCTRL6_CAB_LEN_MASK 0xFFU
+#define L1F_CLDCTRL6_CAB_LEN_SHIFT 0
+#define L1F_CLDCTRL6_CAB_LEN_SHORT1G 116
+#define L1F_CLDCTRL6_CAB_LEN_SHORT100M 152
+
+#define L1F_MIIEXT_CLDCTRL7 0x8007
+#define L1F_CLDCTRL7_VDHLF_BIAS_TH_MASK 0x7FU
+#define L1F_CLDCTRL7_VDHLF_BIAS_TH_SHIFT 9
+#define L1F_CLDCTRL7_AFE_AZ_MASK 0x1FU
+#define L1F_CLDCTRL7_AFE_AZ_SHIFT 4
+#define L1F_CLDCTRL7_SIDE_PEAK_TH_MASK 0xFU
+#define L1F_CLDCTRL7_SIDE_PEAK_TH_SHIFT 0
+#define L1F_CLDCTRL7_DEF 0x6BF6 /* ???? */
+
+#define L1F_MIIEXT_AZCTRL 0x8008
+#define L1F_AZCTRL_SHORT_TH_MASK 0xFFU
+#define L1F_AZCTRL_SHORT_TH_SHIFT 8
+#define L1F_AZCTRL_LONG_TH_MASK 0xFFU
+#define L1F_AZCTRL_LONG_TH_SHIFT 0
+#define L1F_AZCTRL_DEF 0x1629
+
+#define L1F_MIIEXT_AZCTRL2 0x8009
+#define L1F_AZCTRL2_WAKETRNING_MASK 0xFFU
+#define L1F_AZCTRL2_WAKETRNING_SHIFT 8
+#define L1F_AZCTRL2_QUIET_TIMER_MASK 0x3U
+#define L1F_AZCTRL2_QUIET_TIMER_SHIFT 6
+#define L1F_AZCTRL2_PHAS_JMP2 BIT(4)
+#define L1F_AZCTRL2_CLKTRCV_125MD16 BIT(3)
+#define L1F_AZCTRL2_GATE1000_EN BIT(2)
+#define L1F_AZCTRL2_AVRG_FREQ BIT(1)
+#define L1F_AZCTRL2_PHAS_JMP4 BIT(0)
+#define L1F_AZCTRL2_DEF 0x32C0
+
+#define L1F_MIIEXT_AZCTRL6 0x800D
+
+#define L1F_MIIEXT_VDRVBIAS 0x8062
+#define L1F_VDRVBIAS_SEL_MASK 0x3U
+#define L1F_VDRVBIAS_SEL_SHIFT 0
+#define L1F_VDRVBIAS_DEF 0x3
+
+/********* dev 7 **********/
+#define L1F_MIIEXT_ANEG 7
+
+#define L1F_MIIEXT_LOCAL_EEEADV 0x3C
+#define L1F_LOCAL_EEEADV_1000BT BIT(2)
+#define L1F_LOCAL_EEEADV_100BT BIT(1)
+
+#define L1F_MIIEXT_REMOTE_EEEADV 0x3D
+#define L1F_REMOTE_EEEADV_1000BT BIT(2)
+#define L1F_REMOTE_EEEADV_100BT BIT(1)
+
+#define L1F_MIIEXT_EEE_ANEG 0x8000
+#define L1F_EEE_ANEG_1000M BIT(2)
+#define L1F_EEE_ANEG_100M BIT(1)
+
+#define L1F_MIIEXT_AFE 0x801A
+#define L1F_AFE_10BT_100M_TH BIT(6)
+
+#define L1F_MIIEXT_S3DIG10 0x8023
+#define L1F_MIIEXT_S3DIG10_SL BIT(0) /* 1=bypass 10BT rx fifo */
+#define L1F_MIIEXT_S3DIG10_DEF 0 /* 0= original 10BT rx */
+
+#define L1F_MIIEXT_NLP34 0x8025
+#define L1F_MIIEXT_NLP34_DEF 0x1010 /* for 160m */
+
+#define L1F_MIIEXT_NLP56 0x8026
+#define L1F_MIIEXT_NLP56_DEF 0x1010 /* for 160m */
+
+#define L1F_MIIEXT_NLP78 0x8027
+#define L1F_MIIEXT_NLP78_160M_DEF 0x8D05 /* for 160m */
+#define L1F_MIIEXT_NLP78_120M_DEF 0x8A05 /* for 120m */
+
+
+
+/******************************************************************************/
+
+/* functions */
+
+
+/* get permanent mac address
+ * return
+ * 0: success
+ * non-0: fail
+ */
+int l1f_get_perm_macaddr(struct alx_hw *hw, u8 *addr);
+
+
+/* reset mac & dma
+ * return
+ * 0: success
+ * non-0: fail
+ */
+int l1f_reset_mac(struct alx_hw *hw);
+
+/* reset phy
+ * return
+ * 0: success
+ * non-0: fail
+ */
+int l1f_reset_phy(struct alx_hw *hw, bool pws_en, bool az_en, bool ptp_en);
+
+
+/* reset pcie
+ * only resets pcie-related registers (pci command, clk, aspm...)
+ * return
+ * 0: success
+ * non-0: fail
+ */
+int l1f_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en);
+
+
+/* disable/enable MAC/RXQ/TXQ
+ * en
+ * true: enable
+ * false: disable
+ * return
+ * 0: success
+ * non-0: fail
+ */
+int l1f_enable_mac(struct alx_hw *hw, bool en, u16 en_ctrl);
+
+
+/* enable/disable aspm support
+ * that will change settings for phy/mac/pcie
+ */
+int l1f_enable_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en, u8 lnk_stat);
+
+
+/* initialize phy for speed / flow control
+ * lnk_cap
+ * in autoneg mode: the link capabilities to advertise to the peer
+ * in force mode: the forced speed/duplex
+ */
+int l1f_init_phy_spdfc(struct alx_hw *hw, bool auto_neg,
+ u8 lnk_cap, bool fc_en);
+
+/* apply post-link settings to the phy when a link up/down event occurs
+ */
+int l1f_post_phy_link(struct alx_hw *hw, bool az_en, bool linkon, u8 wire_spd);
+
+
+/* apply power-saving settings before entering suspend mode
+ * NOTE:
+ * 1. the phy link must be established before calling this function
+ * 2. wol options (pattern, magic, link, etc.) must be configured before
+ *    calling it
+ */
+int l1f_powersaving(struct alx_hw *hw, u8 wire_spd, bool wol_en,
+ bool mahw_en, bool macrx_en, bool pws_en);
+
+/* read phy register */
+int l1f_read_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast, u16 reg,
+ u16 *data);
+
+/* write phy register */
+int l1f_write_phy(struct alx_hw *hw, bool ext, u8 dev, bool fast, u16 reg,
+ u16 data);
+
+/* phy debug port */
+int l1f_read_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 *data);
+int l1f_write_phydbg(struct alx_hw *hw, bool fast, u16 reg, u16 data);
+
+
+/* check the configuration of the PHY */
+int l1f_get_phy_config(struct alx_hw *hw);
+
+/*
+ * basic initialization of the mac
+ * most advanced features are left uninitialized
+ * MAC/PHY should be reset before calling this function
+ */
+void l1f_init_mac_misc(struct alx_hw *hw, u8 *addr, u16 smb_timer,
+ u16 int_mod, bool hash_legacy);
+
+int l1f_init_mac_rtx_ring_desc(struct alx_hw *hw, u32 rxmem_hi,
+ u32 rfdmem_lo, u32 rrdmem_lo,
+ u16 rxring_sz, u16 rxbuf_sz,
+ u32 txmem_hi, u32 *tx_mem_lo,
+ u8 tx_qnum, u16 txring_sz);
+
+void l1f_init_mac_rtx_queue(struct alx_hw *hw, u16 mtu);
+
+void l1f_init_mac_dma(struct alx_hw *hw);
+
+
+#endif/*L1F_HW_H_*/
diff --git a/drivers/net/ethernet/atheros/alx/alx_hwcom.h b/drivers/net/ethernet/atheros/alx/alx_hwcom.h
new file mode 100644
index 0000000..260a63f
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_hwcom.h
@@ -0,0 +1,128 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_HWCOMMON_H_
+#define _ALX_HWCOMMON_H_
+
+#include <linux/bitops.h>
+#include "alx_sw.h"
+
+
+#define BIT_ALL 0xffffffffUL
+
+#define FIELD_GETX(_x, _name) (((_x) >> (_name##_SHIFT)) & (_name##_MASK))
+#define FIELD_SETS(_x, _name, _v) (\
+(_x) = \
+((_x) & ~((_name##_MASK) << (_name##_SHIFT))) |\
+(((u16)(_v) & (_name##_MASK)) << (_name##_SHIFT)))
+#define FIELD_SETL(_x, _name, _v) (\
+(_x) = \
+((_x) & ~((_name##_MASK) << (_name##_SHIFT))) |\
+(((u32)(_v) & (_name##_MASK)) << (_name##_SHIFT)))
+#define FIELDL(_name, _v) (((u32)(_v) & (_name##_MASK)) << (_name##_SHIFT))
+#define FIELDS(_name, _v) (((u16)(_v) & (_name##_MASK)) << (_name##_SHIFT))
+
+
+#define LX_ERR_SUCCESS 0x0000
+#define LX_ERR_ALOAD 0x0001
+#define LX_ERR_RSTMAC 0x0002
+#define LX_ERR_PARM 0x0003
+#define LX_ERR_MIIBUSY 0x0004
+
+/* link capability */
+#define LX_LC_10H 0x01
+#define LX_LC_10F 0x02
+#define LX_LC_100H 0x04
+#define LX_LC_100F 0x08
+#define LX_LC_1000H 0x10
+#define LX_LC_1000F 0x20
+#define LX_LC_ALL \
+ (LX_LC_10H|LX_LC_10F|LX_LC_100H|LX_LC_100F|LX_LC_1000F)
+
+/* options for MAC control */
+#define LX_FLT_DIRECT BIT(0)
+#define LX_FLT_BROADCAST BIT(1) /* 1:enable rx-broadcast */
+#define LX_FLT_MULTI_ALL BIT(2)
+#define LX_FLT_PROMISC BIT(3)
+#define LX_VLAN_STRIP BIT(4)
+#define LX_LOOPBACK BIT(5)
+#define LX_FC_TXEN BIT(6)
+#define LX_FC_RXEN BIT(7)
+#define LX_ADD_FCS BIT(8)
+#define LX_SINGLE_PAUSE BIT(9)
+#define LX_MACSPEED_1000 BIT(10) /* 1:1000M, 0:10/100M */
+#define LX_MACDUPLEX_FULL BIT(11) /* 1:full, 0:half */
+
+
+/* interop between drivers */
+#define LX_DRV_TYPE_MASK 0x1FUL
+#define LX_DRV_TYPE_SHIFT 27
+#define LX_DRV_TYPE_UNKNOWN 0
+#define LX_DRV_TYPE_BIOS 1
+#define LX_DRV_TYPE_BTROM 2
+#define LX_DRV_TYPE_PKT 3
+#define LX_DRV_TYPE_NDS2 4
+#define LX_DRV_TYPE_UEFI 5
+#define LX_DRV_TYPE_NDS5 6
+#define LX_DRV_TYPE_NDS62 7
+#define LX_DRV_TYPE_NDS63 8
+#define LX_DRV_TYPE_LNX 9
+#define LX_DRV_TYPE_ODI16 10
+#define LX_DRV_TYPE_ODI32 11
+#define LX_DRV_TYPE_FRBSD 12
+#define LX_DRV_TYPE_NTBSD 13
+#define LX_DRV_TYPE_WCE 14
+#define LX_DRV_PHY_AUTO BIT(26) /* 1:auto, 0:force */
+#define LX_DRV_PHY_1000 BIT(25)
+#define LX_DRV_PHY_100 BIT(24)
+#define LX_DRV_PHY_10 BIT(23)
+#define LX_DRV_PHY_DUPLEX BIT(22) /* 1:full, 0:half */
+#define LX_DRV_PHY_FC BIT(21) /* 1:en flow control */
+#define LX_DRV_PHY_MASK 0x1FUL
+#define LX_DRV_PHY_SHIFT 21
+#define LX_DRV_PHY_UNKNOWN 0
+#define LX_DRV_DISABLE BIT(18)
+#define LX_DRV_WOLS5_EN BIT(17)
+#define LX_DRV_WOLS5_BIOS_EN BIT(16)
+#define LX_DRV_AZ_EN BIT(12)
+#define LX_DRV_WOLPATTERN_EN BIT(11)
+#define LX_DRV_WOLLINKUP_EN BIT(10)
+#define LX_DRV_WOLMAGIC_EN BIT(9)
+#define LX_DRV_WOLCAP_BIOS_EN BIT(8)
+#define LX_DRV_ASPM_SPD1000LMT_MASK 0x3UL
+#define LX_DRV_ASPM_SPD1000LMT_SHIFT 4
+#define LX_DRV_ASPM_SPD1000LMT_100M 0
+#define LX_DRV_ASPM_SPD1000LMT_NO 1
+#define LX_DRV_ASPM_SPD1000LMT_1M 2
+#define LX_DRV_ASPM_SPD1000LMT_10M 3
+#define LX_DRV_ASPM_SPD100LMT_MASK 0x3UL
+#define LX_DRV_ASPM_SPD100LMT_SHIFT 2
+#define LX_DRV_ASPM_SPD100LMT_1M 0
+#define LX_DRV_ASPM_SPD100LMT_10M 1
+#define LX_DRV_ASPM_SPD100LMT_100M 2
+#define LX_DRV_ASPM_SPD100LMT_NO 3
+#define LX_DRV_ASPM_SPD10LMT_MASK 0x3UL
+#define LX_DRV_ASPM_SPD10LMT_SHIFT 0
+#define LX_DRV_ASPM_SPD10LMT_1M 0
+#define LX_DRV_ASPM_SPD10LMT_10M 1
+#define LX_DRV_ASPM_SPD10LMT_100M 2
+#define LX_DRV_ASPM_SPD10LMT_NO 3
+
+/* flag of phy inited */
+#define LX_PHY_INITED 0x003F
+
+
+#endif/*_ALX_HWCOMMON_H_*/
diff --git a/drivers/net/ethernet/atheros/alx/alx_main.c b/drivers/net/ethernet/atheros/alx/alx_main.c
new file mode 100644
index 0000000..17b4b9d
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_main.c
@@ -0,0 +1,3830 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <linux/module.h>
+#include <linux/pci.h>
+#include <linux/interrupt.h>
+#include <linux/ip.h>
+#include <linux/ipv6.h>
+#include <linux/if_vlan.h>
+#include <linux/mii.h>
+#include <linux/aer.h>
+#include <linux/debugfs.h>
+
+#include "alx.h"
+#include "alx_hwcom.h"
+
+char alx_drv_name[] = "alx";
+static const char alx_drv_description[] =
+"Qualcomm Atheros(R) AR816x PCI-E Ethernet Network Driver";
+
+/* alx_pci_tbl - PCI Device ID Table
+ *
+ * Wildcard entries (PCI_ANY_ID) should come last
+ * Last entry must be all 0s
+ *
+ * { Vendor ID, Device ID, SubVendor ID, SubDevice ID,
+ * Class, Class Mask, private data (not used) }
+ */
+#define ALX_ETHER_DEVICE(device_id) {\
+ PCI_DEVICE(ALX_VENDOR_ID, device_id)}
+static DEFINE_PCI_DEVICE_TABLE(alx_pci_tbl) = {
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8161),
+ ALX_ETHER_DEVICE(ALX_DEV_ID_AR8162),
+ {0,}
+};
+MODULE_DEVICE_TABLE(pci, alx_pci_tbl);
+MODULE_AUTHOR("Qualcomm Corporation, <nic-devel@...lcomm.com>");
+MODULE_DESCRIPTION("Qualcomm Atheros Gigabit Ethernet Driver");
+MODULE_LICENSE("Dual BSD/GPL");
+
+
+int alx_cfg_r32(const struct alx_hw *hw, int reg, u32 *pval)
+{
+ return pci_read_config_dword(hw->adpt->pdev, reg, pval);
+}
+
+
+int alx_cfg_w32(const struct alx_hw *hw, int reg, u32 val)
+{
+ return pci_write_config_dword(hw->adpt->pdev, reg, val);
+}
+
+
+int alx_cfg_r16(const struct alx_hw *hw, int reg, u16 *pval)
+{
+ return pci_read_config_word(hw->adpt->pdev, reg, pval);
+}
+
+
+int alx_cfg_w16(const struct alx_hw *hw, int reg, u16 val)
+{
+ return pci_write_config_word(hw->adpt->pdev, reg, val);
+}
+
+
+void alx_mem_flush(const struct alx_hw *hw)
+{
+ readl(hw->hw_addr);
+}
+
+
+void alx_mem_r32(const struct alx_hw *hw, int reg, u32 *val)
+{
+ if (unlikely(!hw->link_up))
+ readl(hw->hw_addr + reg);
+ *val = readl(hw->hw_addr + reg);
+}
+
+
+void alx_mem_w32(const struct alx_hw *hw, int reg, u32 val)
+{
+ writel(val, hw->hw_addr + reg);
+}
+
+
+void alx_mem_r16(const struct alx_hw *hw, int reg, u16 *val)
+{
+ if (unlikely(!hw->link_up))
+ readl(hw->hw_addr + reg);
+ *val = readw(hw->hw_addr + reg);
+}
+
+
+void alx_mem_w16(const struct alx_hw *hw, int reg, u16 val)
+{
+ writew(val, hw->hw_addr + reg);
+}
+
+
+void alx_mem_w8(const struct alx_hw *hw, int reg, u8 val)
+{
+ writeb(val, hw->hw_addr + reg);
+}
+
+
+/*
+ * alx_hw_printk
+ */
+void alx_hw_printk(const char *level, const struct alx_hw *hw,
+ const char *fmt, ...)
+{
+ struct va_format vaf;
+ va_list args;
+
+ va_start(args, fmt);
+ vaf.fmt = fmt;
+ vaf.va = &args;
+
+ if (hw && hw->adpt && hw->adpt->netdev)
+ __netdev_printk(level, hw->adpt->netdev, &vaf);
+ else
+ printk("%salx_hw: %pV", level, &vaf);
+
+ va_end(args);
+}
+
+
+/*
+ * alx_validate_mac_addr - Validate MAC address
+ */
+int alx_validate_mac_addr(u8 *mac_addr)
+{
+ int retval = 0;
+
+ if (is_broadcast_ether_addr(mac_addr)) {
+ pr_debug("MAC address is broadcast\n");
+ retval = -EADDRNOTAVAIL;
+ } else if (is_multicast_ether_addr(mac_addr)) {
+ pr_debug("MAC address is multicast\n");
+ retval = -EADDRNOTAVAIL;
+ } else if (is_zero_ether_addr(mac_addr)) {
+ pr_debug("MAC address is all zeros\n");
+ retval = -EADDRNOTAVAIL;
+ }
+ return retval;
+}
+
+
+/*
+ * alx_set_mac_type - Sets MAC type
+ */
+static int alx_set_mac_type(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 0;
+
+ if (hw->pci_venid == ALX_VENDOR_ID) {
+ switch (hw->pci_devid) {
+ case ALX_DEV_ID_AR8161:
+ hw->mac_type = alx_mac_l1f;
+ break;
+ case ALX_DEV_ID_AR8162:
+ hw->mac_type = alx_mac_l2f;
+ break;
+ default:
+ retval = -EINVAL;
+ break;
+ }
+ } else {
+ retval = -EINVAL;
+ }
+
+ netif_info(adpt, hw, adpt->netdev,
+ "found mac: %d, returns: %d\n", hw->mac_type, retval);
+ return retval;
+}
+
+
+/*
+ * alx_init_hw
+ */
+static void alx_init_hw(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+
+ hw->flags = ALX_HW_FLAG_BROADCAST_EN;
+
+ alx_set_mac_type(adpt);
+ alf_init_hw(hw);
+}
+
+
+void alx_reinit_locked(struct alx_adapter *adpt)
+{
+ WARN_ON(in_interrupt());
+
+ while (CHK_ADPT_FLAG(STATE_RESETTING))
+ msleep(20);
+ SET_ADPT_FLAG(STATE_RESETTING);
+
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_RESET_MAC);
+ alx_open_internal(adpt, ALX_OPEN_CTRL_RESET_MAC);
+
+ CLI_ADPT_FLAG(STATE_RESETTING);
+}
+
+
+static void alx_task_schedule(struct alx_adapter *adpt)
+{
+ if (!CHK_ADPT_FLAG(STATE_DOWN) &&
+ !CHK_ADPT_FLAG(STATE_WATCH_DOG)) {
+ SET_ADPT_FLAG(STATE_WATCH_DOG);
+ schedule_work(&adpt->alx_task);
+ }
+}
+
+
+static void alx_check_lsc(struct alx_adapter *adpt)
+{
+ SET_ADPT_FLAG(TASK_LSC_REQ);
+ adpt->link_jiffies = jiffies + ALX_TRY_LINK_TIMEOUT;
+
+ if (!CHK_ADPT_FLAG(STATE_DOWN))
+ alx_task_schedule(adpt);
+}
+
+
+/*
+ * alx_tx_timeout - Respond to a Tx Hang
+ */
+static void alx_tx_timeout(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ /* Do the reset outside of interrupt context */
+ if (!CHK_ADPT_FLAG(STATE_DOWN)) {
+ SET_ADPT_FLAG(TASK_REINIT_REQ);
+ alx_task_schedule(adpt);
+ }
+}
+
+
+/*
+ * alx_set_multicase_list - Multicast and Promiscuous mode set
+ */
+static void alx_set_multicase_list(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct netdev_hw_addr *ha;
+
+ /* Check for Promiscuous and All Multicast modes */
+ if (netdev->flags & IFF_PROMISC) {
+ SET_HW_FLAG(PROMISC_EN);
+ } else if (netdev->flags & IFF_ALLMULTI) {
+ SET_HW_FLAG(MULTIALL_EN);
+ CLI_HW_FLAG(PROMISC_EN);
+ } else {
+ CLI_HW_FLAG(MULTIALL_EN);
+ CLI_HW_FLAG(PROMISC_EN);
+ }
+ alf_update_mac_filter(hw);
+
+ /* clear the old settings from the multicast hash table */
+ alf_clear_mc_addr(hw);
+
+	/* compute each mc address's hash value and put it into the hash table */
+ netdev_for_each_mc_addr(ha, netdev)
+ alf_set_mc_addr(hw, ha->addr);
+}
+
+
+/*
+ * alx_set_mac - Change the Ethernet Address of the NIC
+ */
+static int alx_set_mac_address(struct net_device *netdev, void *data)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct sockaddr *addr = data;
+
+ if (!is_valid_ether_addr(addr->sa_data))
+ return -EADDRNOTAVAIL;
+
+ if (netif_running(netdev))
+ return -EBUSY;
+
+ if (netdev->addr_assign_type & NET_ADDR_RANDOM)
+ netdev->addr_assign_type ^= NET_ADDR_RANDOM;
+
+ memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
+ memcpy(hw->mac_addr, addr->sa_data, netdev->addr_len);
+
+ alf_set_mac_addr(hw, hw->mac_addr);
+ return 0;
+}
+
+
+/*
+ * Read / Write Ptr Initialize:
+ */
+static void alx_init_ring_ptrs(struct alx_adapter *adpt)
+{
+ int i, j;
+
+ for (i = 0; i < adpt->num_txques; i++) {
+ struct alx_tx_queue *txque = adpt->tx_queue[i];
+ struct alx_buffer *tpbuf = txque->tpq.tpbuff;
+ txque->tpq.produce_idx = 0;
+ txque->tpq.consume_idx = 0;
+ for (j = 0; j < txque->tpq.count; j++)
+ tpbuf[j].dma = 0;
+ }
+
+ for (i = 0; i < adpt->num_hw_rxques; i++) {
+ struct alx_rx_queue *rxque = adpt->rx_queue[i];
+ struct alx_buffer *rfbuf = rxque->rfq.rfbuff;
+ rxque->rrq.produce_idx = 0;
+ rxque->rrq.consume_idx = 0;
+ rxque->rfq.produce_idx = 0;
+ rxque->rfq.consume_idx = 0;
+ for (j = 0; j < rxque->rfq.count; j++)
+ rfbuf[j].dma = 0;
+ }
+
+	if (!CHK_ADPT_FLAG(SRSS_EN))
+		return;
+
+ for (i = 0; i < adpt->num_sw_rxques; i++) {
+ struct alx_rx_queue *rxque = adpt->rx_queue[i];
+ rxque->swq.produce_idx = 0;
+ rxque->swq.consume_idx = 0;
+ }
+}
+
+
+static void alx_config_rss(struct alx_adapter *adpt)
+{
+ static const u8 key[40] = {
+ 0xE2, 0x91, 0xD7, 0x3D, 0x18, 0x05, 0xEC, 0x6C,
+ 0x2A, 0x94, 0xB3, 0x0D, 0xA5, 0x4F, 0x2B, 0xEC,
+ 0xEA, 0x49, 0xAF, 0x7C, 0xE2, 0x14, 0xAD, 0x3D,
+ 0xB8, 0x55, 0xAA, 0xBE, 0x6A, 0x3E, 0x67, 0xEA,
+ 0x14, 0x36, 0x4D, 0x17, 0x3B, 0xED, 0x20, 0x0D};
+
+ struct alx_hw *hw = &adpt->hw;
+ u32 reta = 0;
+ int i, j;
+
+ /* initialize rss hash type and idt table size */
+ hw->rss_hstype = ALX_RSS_HSTYP_ALL_EN;
+ hw->rss_idt_size = 0x100;
+
+	/* Fill out the RSS hash key */
+ memcpy(hw->rss_key, key, sizeof(hw->rss_key));
+
+ /* Fill out redirection table */
+ memset(hw->rss_idt, 0x0, sizeof(hw->rss_idt));
+ for (i = 0, j = 0; i < 256; i++, j++) {
+ if (j == adpt->max_rxques)
+ j = 0;
+ reta |= (j << ((i & 7) * 4));
+ if ((i & 7) == 7) {
+ hw->rss_idt[i>>3] = reta;
+ reta = 0;
+ }
+ }
+
+ alf_config_rss(hw, CHK_ADPT_FLAG(SRSS_EN));
+}
+
+
+/*
+ * alx_receive_skb
+ */
+static void alx_receive_skb(struct alx_adapter *adpt,
+ struct sk_buff *skb,
+ u16 vlan_tag, bool vlan_flag)
+{
+ if (vlan_flag) {
+ u16 vlan;
+ ALX_TAG_TO_VLAN(vlan_tag, vlan);
+ __vlan_hwaccel_put_tag(skb, vlan);
+ }
+ netif_receive_skb(skb);
+}
+
+
+static bool alx_get_rrdesc(struct alx_rx_queue *rxque,
+ union alx_sw_rrdesc *srrd)
+{
+ union alx_hw_rrdesc *hrrd =
+ ALX_RRD(rxque, rxque->rrq.consume_idx);
+
+ srrd->dfmt.dw3 = le32_to_cpu(hrrd->dfmt.dw3);
+ if (!srrd->genr.update)
+ return false;
+
+ srrd->dfmt.dw2 = le32_to_cpu(hrrd->dfmt.dw2);
+ srrd->dfmt.dw1 = le32_to_cpu(hrrd->dfmt.dw1);
+ srrd->dfmt.dw0 = le32_to_cpu(hrrd->dfmt.dw0);
+
+ srrd->genr.update = 0;
+ hrrd->dfmt.dw3 = cpu_to_le32(srrd->dfmt.dw3);
+ if (++rxque->rrq.consume_idx == rxque->rrq.count)
+ rxque->rrq.consume_idx = 0;
+
+ return true;
+}
+
+
+static bool alx_set_rfdesc(struct alx_rx_queue *rxque,
+ union alx_sw_rfdesc *srfd)
+{
+ union alx_hw_rfdesc *hrfd =
+ ALX_RFD(rxque, rxque->rfq.produce_idx);
+
+ hrfd->qfmt.qw0 = cpu_to_le64(srfd->qfmt.qw0);
+
+ if (++rxque->rfq.produce_idx == rxque->rfq.count)
+ rxque->rfq.produce_idx = 0;
+
+ return true;
+}
+
+
+bool alx_set_tpdesc(struct alx_tx_queue *txque, union alx_sw_tpdesc *stpd)
+{
+ union alx_hw_tpdesc *htpd;
+
+ txque->tpq.last_produce_idx = txque->tpq.produce_idx;
+ htpd = ALX_TPD(txque, txque->tpq.produce_idx);
+
+ if (++txque->tpq.produce_idx == txque->tpq.count)
+ txque->tpq.produce_idx = 0;
+
+ htpd->dfmt.dw0 = cpu_to_le32(stpd->dfmt.dw0);
+ htpd->dfmt.dw1 = cpu_to_le32(stpd->dfmt.dw1);
+ htpd->qfmt.qw1 = cpu_to_le64(stpd->qfmt.qw1);
+
+ return true;
+}
+
+
+void alx_set_tpdesc_lastfrag(struct alx_tx_queue *txque)
+{
+ union alx_hw_tpdesc *htpd;
+#define ALX_TPD_LAST_FRAGMENT 0x80000000
+	htpd = ALX_TPD(txque, txque->tpq.last_produce_idx);
+	htpd->dfmt.dw1 |= cpu_to_le32(ALX_TPD_LAST_FRAGMENT);
+}
+
+
+static int alx_refresh_rx_buffer(struct alx_rx_queue *rxque)
+{
+ struct alx_adapter *adpt = netdev_priv(rxque->netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_buffer *curr_rxbuf;
+ struct alx_buffer *next_rxbuf;
+ union alx_sw_rfdesc srfd;
+ struct sk_buff *skb;
+ void *skb_data = NULL;
+ u16 count = 0;
+ u16 next_produce_idx;
+
+ next_produce_idx = rxque->rfq.produce_idx;
+ if (++next_produce_idx == rxque->rfq.count)
+ next_produce_idx = 0;
+ curr_rxbuf = GET_RF_BUFFER(rxque, rxque->rfq.produce_idx);
+ next_rxbuf = GET_RF_BUFFER(rxque, next_produce_idx);
+
+	/* the ring always keeps at least one blank rx_buffer */
+ while (next_rxbuf->dma == 0) {
+ skb = dev_alloc_skb(adpt->rxbuf_size);
+ if (unlikely(!skb)) {
+ alx_err(adpt, "alloc rx buffer failed\n");
+ break;
+ }
+
+ /*
+ * Make buffer alignment 2 beyond a 16 byte boundary
+ * this will result in a 16 byte aligned IP header after
+ * the 14 byte MAC header is removed
+ */
+ skb_data = skb->data;
+ /*skb_reserve(skb, NET_IP_ALIGN);*/
+ curr_rxbuf->skb = skb;
+ curr_rxbuf->length = adpt->rxbuf_size;
+ curr_rxbuf->dma = dma_map_single(rxque->dev,
+ skb_data,
+ curr_rxbuf->length,
+ DMA_FROM_DEVICE);
+ srfd.genr.addr = curr_rxbuf->dma;
+ alx_set_rfdesc(rxque, &srfd);
+
+ next_produce_idx = rxque->rfq.produce_idx;
+ if (++next_produce_idx == rxque->rfq.count)
+ next_produce_idx = 0;
+ curr_rxbuf = GET_RF_BUFFER(rxque, rxque->rfq.produce_idx);
+ next_rxbuf = GET_RF_BUFFER(rxque, next_produce_idx);
+ count++;
+ }
+
+ if (count) {
+ wmb();
+ alx_mem_w16(hw, rxque->produce_reg, rxque->rfq.produce_idx);
+ netif_info(adpt, rx_err, adpt->netdev,
+ "RX[%d]: prod_reg[%x] = 0x%x, rfq.prod_idx = 0x%x\n",
+ rxque->que_idx, rxque->produce_reg,
+ rxque->rfq.produce_idx, rxque->rfq.produce_idx);
+ }
+ return count;
+}
+
+
+static void alx_clean_rfdesc(struct alx_rx_queue *rxque,
+ union alx_sw_rrdesc *srrd)
+{
+ struct alx_buffer *rfbuf = rxque->rfq.rfbuff;
+ u32 consume_idx = srrd->genr.si;
+ u32 i;
+
+ for (i = 0; i < srrd->genr.nor; i++) {
+ rfbuf[consume_idx].skb = NULL;
+ if (++consume_idx == rxque->rfq.count)
+ consume_idx = 0;
+ }
+ rxque->rfq.consume_idx = consume_idx;
+}
+
+
+static bool alx_dispatch_rx_irq(struct alx_msix_param *msix,
+ struct alx_rx_queue *rxque)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ struct pci_dev *pdev = adpt->pdev;
+ struct net_device *netdev = adpt->netdev;
+ union alx_sw_rrdesc srrd;
+ struct alx_buffer *rfbuf;
+ struct sk_buff *skb;
+ struct alx_rx_queue *swque;
+ struct alx_sw_buffer *curr_swbuf;
+ struct alx_sw_buffer *next_swbuf;
+ u16 next_produce_idx;
+ u16 hw_consume_idx, num_consume_pkts;
+ u16 count = 0;
+
+ alx_mem_r16(hw, rxque->consume_reg, &hw_consume_idx);
+ num_consume_pkts = (hw_consume_idx >= rxque->rrq.consume_idx) ?
+ (hw_consume_idx - rxque->rrq.consume_idx) :
+ (hw_consume_idx + rxque->rrq.count - rxque->rrq.consume_idx);
+
+ while (1) {
+ if (!num_consume_pkts)
+ break;
+
+ if (!alx_get_rrdesc(rxque, &srrd))
+ break;
+
+ if (srrd.genr.res || srrd.genr.lene) {
+ alx_clean_rfdesc(rxque, &srrd);
+ netif_warn(adpt, rx_err, adpt->netdev,
+ "wrong packet! rrd->word3 is 0x%08x\n",
+ srrd.dfmt.dw3);
+ continue;
+ }
+
+ /* Bad Receive */
+ if (srrd.genr.nor != 1) {
+			alx_err(adpt, "Multiple rfd not supported yet!\n");
+ break;
+ }
+
+ rfbuf = GET_RF_BUFFER(rxque, srrd.genr.si);
+ pci_unmap_single(pdev, rfbuf->dma, rfbuf->length,
+ DMA_FROM_DEVICE);
+ rfbuf->dma = 0;
+ skb = rfbuf->skb;
+ alx_clean_rfdesc(rxque, &srrd);
+
+ skb_put(skb, srrd.genr.pkt_len - ETH_FCS_LEN);
+ skb->protocol = eth_type_trans(skb, netdev);
+ skb_checksum_none_assert(skb);
+
+ /* start to dispatch */
+ swque = adpt->rx_queue[srrd.genr.rss_cpu];
+ next_produce_idx = swque->swq.produce_idx;
+ if (++next_produce_idx == swque->swq.count)
+ next_produce_idx = 0;
+
+ curr_swbuf = GET_SW_BUFFER(swque, swque->swq.produce_idx);
+ next_swbuf = GET_SW_BUFFER(swque, next_produce_idx);
+
+		/*
+		 * if the queue is full, the packet is discarded;
+		 * the ring always keeps at least one blank sw_buffer.
+		 */
+ if (!next_swbuf->skb) {
+ curr_swbuf->skb = skb;
+ curr_swbuf->vlan_tag = srrd.genr.vlan_tag;
+ curr_swbuf->vlan_flag = srrd.genr.vlan_flag;
+ if (++swque->swq.produce_idx == swque->swq.count)
+ swque->swq.produce_idx = 0;
+ }
+
+ count++;
+ if (count == 32)
+ break;
+ }
+ if (count)
+ alx_refresh_rx_buffer(rxque);
+ return true;
+}
+
+
+static bool alx_handle_srx_irq(struct alx_msix_param *msix,
+ struct alx_rx_queue *rxque,
+ int *num_pkts, int max_pkts)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_sw_buffer *swbuf;
+ bool retval = true;
+
+ while (rxque->swq.consume_idx != rxque->swq.produce_idx) {
+ swbuf = GET_SW_BUFFER(rxque, rxque->swq.consume_idx);
+
+ alx_receive_skb(adpt, swbuf->skb, (u16)swbuf->vlan_tag,
+ (bool)swbuf->vlan_flag);
+ swbuf->skb = NULL;
+
+ if (++rxque->swq.consume_idx == rxque->swq.count)
+ rxque->swq.consume_idx = 0;
+
+ (*num_pkts)++;
+ if (*num_pkts >= max_pkts) {
+ retval = false;
+ break;
+ }
+ }
+ return retval;
+}
+
+
+static bool alx_handle_rx_irq(struct alx_msix_param *msix,
+ struct alx_rx_queue *rxque,
+ int *num_pkts, int max_pkts)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ struct pci_dev *pdev = adpt->pdev;
+ struct net_device *netdev = adpt->netdev;
+ union alx_sw_rrdesc srrd;
+ struct alx_buffer *rfbuf;
+ struct sk_buff *skb;
+ u16 hw_consume_idx, num_consume_pkts;
+ u16 count = 0;
+
+ alx_mem_r16(hw, rxque->consume_reg, &hw_consume_idx);
+ num_consume_pkts = (hw_consume_idx >= rxque->rrq.consume_idx) ?
+ (hw_consume_idx - rxque->rrq.consume_idx) :
+ (hw_consume_idx + rxque->rrq.count - rxque->rrq.consume_idx);
+
+ while (1) {
+ if (!num_consume_pkts)
+ break;
+
+ if (!alx_get_rrdesc(rxque, &srrd))
+ break;
+
+ if (srrd.genr.res || srrd.genr.lene) {
+ alx_clean_rfdesc(rxque, &srrd);
+ netif_warn(adpt, rx_err, adpt->netdev,
+ "wrong packet! rrd->word3 is 0x%08x\n",
+ srrd.dfmt.dw3);
+ continue;
+ }
+
+ /* Bad Receive */
+ if (srrd.genr.nor != 1) {
+			alx_err(adpt, "Multiple rfd not supported yet!\n");
+ break;
+ }
+
+ rfbuf = GET_RF_BUFFER(rxque, srrd.genr.si);
+ pci_unmap_single(pdev, rfbuf->dma, rfbuf->length,
+ DMA_FROM_DEVICE);
+ rfbuf->dma = 0;
+ skb = rfbuf->skb;
+ alx_clean_rfdesc(rxque, &srrd);
+
+#ifdef CONFIG_ALX_DEBUGFS
+ if (CHK_ADPT_FLAG(STATE_DIAG_RUNNING)) {
+ skb_put(skb, srrd.genr.pkt_len - ETH_FCS_LEN);
+ alx_dfs_diag_receive_skb(adpt, skb, &srrd);
+ } else {
+#endif
+ skb_put(skb, srrd.genr.pkt_len - ETH_FCS_LEN);
+ skb->protocol = eth_type_trans(skb, netdev);
+ skb_checksum_none_assert(skb);
+ alx_receive_skb(adpt, skb, (u16)srrd.genr.vlan_tag,
+ (bool)srrd.genr.vlan_flag);
+#ifdef CONFIG_ALX_DEBUGFS
+ }
+#endif
+ num_consume_pkts--;
+ count++;
+ (*num_pkts)++;
+ if (*num_pkts >= max_pkts)
+ break;
+ }
+ if (count)
+ alx_refresh_rx_buffer(rxque);
+
+ return true;
+}
+
+
+static bool alx_handle_tx_irq(struct alx_msix_param *msix,
+ struct alx_tx_queue *txque)
+{
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_buffer *tpbuf;
+ u16 consume_data;
+
+ alx_mem_r16(hw, txque->consume_reg, &consume_data);
+ netif_info(adpt, tx_err, adpt->netdev,
+ "TX[%d]: consume_reg[0x%x] = 0x%x, tpq.consume_idx = 0x%x\n",
+ txque->que_idx, txque->consume_reg, consume_data,
+ txque->tpq.consume_idx);
+
+
+ while (txque->tpq.consume_idx != consume_data) {
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.consume_idx);
+ if (tpbuf->dma) {
+ dma_unmap_page(&adpt->pdev->dev, tpbuf->dma, tpbuf->length,
+ DMA_TO_DEVICE);
+ tpbuf->dma = 0;
+ }
+
+ if (tpbuf->skb) {
+ dev_kfree_skb_irq(tpbuf->skb);
+ tpbuf->skb = NULL;
+ }
+
+ if (++txque->tpq.consume_idx == txque->tpq.count)
+ txque->tpq.consume_idx = 0;
+ }
+
+ if (netif_queue_stopped(adpt->netdev) &&
+ netif_carrier_ok(adpt->netdev)) {
+ netif_wake_queue(adpt->netdev);
+ }
+ return true;
+}
+
+
+static irqreturn_t alx_msix_timer(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ u32 isr;
+
+ alf_disable_msix_intr(hw, msix->vec_idx);
+
+ alx_mem_r32(hw, ALX_ISR, &isr);
+ isr = isr & (ALX_ISR_TIMER | ALX_ISR_MANU);
+
+ if (isr == 0) {
+ alf_enable_msix_intr(hw, msix->vec_idx);
+ return IRQ_NONE;
+ }
+
+ /* Ack ISR */
+ alx_mem_w32(hw, ALX_ISR, isr);
+
+ if (isr & ALX_ISR_MANU) {
+ adpt->net_stats.tx_carrier_errors++;
+ alx_check_lsc(adpt);
+ }
+
+ alf_enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+static irqreturn_t alx_msix_alert(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ u32 isr;
+
+ alf_disable_msix_intr(hw, msix->vec_idx);
+
+ alx_mem_r32(hw, ALX_ISR, &isr);
+ isr = isr & ALX_ISR_ALERT_MASK;
+
+ if (isr == 0) {
+ alf_enable_msix_intr(hw, msix->vec_idx);
+ return IRQ_NONE;
+ }
+ alx_mem_w32(hw, ALX_ISR, isr);
+
+ alf_enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+static irqreturn_t alx_msix_smb(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+
+ alf_disable_msix_intr(hw, msix->vec_idx);
+
+ alf_enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+static irqreturn_t alx_msix_phy(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+
+ alf_disable_msix_intr(hw, msix->vec_idx);
+
+ alf_ack_phy_intr(hw);
+
+ adpt->net_stats.tx_carrier_errors++;
+ alx_check_lsc(adpt);
+
+ alf_enable_msix_intr(hw, msix->vec_idx);
+
+ return IRQ_HANDLED;
+}
+
+
+/*
+ * alx_msix_rtx - MSI-X vector handler for RX/TX queue interrupts
+ */
+static irqreturn_t alx_msix_rtx(int irq, void *data)
+{
+ struct alx_msix_param *msix = data;
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+
+ netif_info(adpt, intr, adpt->netdev,
+ "msix vec_idx = %d\n", msix->vec_idx);
+
+ alf_disable_msix_intr(hw, msix->vec_idx);
+ if (!msix->rx_count && !msix->tx_count) {
+ alf_enable_msix_intr(hw, msix->vec_idx);
+ return IRQ_HANDLED;
+ }
+
+ napi_schedule(&msix->napi);
+ return IRQ_HANDLED;
+}
+
+
+/*
+ * alx_napi_msix_rtx - NAPI poll callback for MSI-X RX/TX vectors
+ */
+static int alx_napi_msix_rtx(struct napi_struct *napi, int max_pkts)
+{
+ struct alx_msix_param *msix =
+ container_of(napi, struct alx_msix_param, napi);
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_rx_queue *rxque;
+ struct alx_rx_queue *swque;
+ struct alx_tx_queue *txque;
+ unsigned long flags = 0;
+ bool complete = true;
+ int num_pkts = 0;
+ int rque_idx, tque_idx;
+ int i, j;
+
+ netif_info(adpt, intr, adpt->netdev,
+ "NAPI: msix vec_idx = %d\n", msix->vec_idx);
+
+ /* RX */
+ for (i = 0; i < msix->rx_count; i++) {
+ rque_idx = msix->rx_map[i];
+ num_pkts = 0;
+ if (CHK_ADPT_FLAG(SRSS_EN)) {
+ if (!spin_trylock_irqsave(&adpt->rx_lock, flags))
+ goto clean_sw_irq;
+
+ for (j = 0; j < adpt->num_hw_rxques; j++)
+ alx_dispatch_rx_irq(msix, adpt->rx_queue[j]);
+
+ spin_unlock_irqrestore(&adpt->rx_lock, flags);
+clean_sw_irq:
+ swque = adpt->rx_queue[rque_idx];
+ complete &= alx_handle_srx_irq(msix, swque, &num_pkts,
+ max_pkts);
+
+ } else {
+ rxque = adpt->rx_queue[rque_idx];
+ complete &= alx_handle_rx_irq(msix, rxque, &num_pkts,
+ max_pkts);
+ }
+ }
+
+ /* Handle TX */
+ for (i = 0; i < msix->tx_count; i++) {
+ tque_idx = msix->tx_map[i];
+ txque = adpt->tx_queue[tque_idx];
+ complete &= alx_handle_tx_irq(msix, txque);
+ }
+
+ if (!complete) {
+ netif_info(adpt, intr, adpt->netdev,
+ "Some packets in the queue are not handled!\n");
+ num_pkts = max_pkts;
+ }
+
+ netif_info(adpt, intr, adpt->netdev,
+ "num_pkts = %d, max_pkts = %d\n", num_pkts, max_pkts);
+ /* If all work done, exit the polling mode */
+ if (num_pkts < max_pkts) {
+ napi_complete(napi);
+ if (!CHK_ADPT_FLAG(STATE_DOWN))
+ alf_enable_msix_intr(hw, msix->vec_idx);
+ }
+
+ return num_pkts;
+}
+
+
+/*
+ * alx_napi_legacy_rtx - NAPI Rx/Tx polling callback for legacy/MSI mode
+ */
+static int alx_napi_legacy_rtx(struct napi_struct *napi, int max_pkts)
+{
+ struct alx_msix_param *msix =
+ container_of(napi, struct alx_msix_param, napi);
+ struct alx_adapter *adpt = msix->adpt;
+ struct alx_hw *hw = &adpt->hw;
+ int complete = true;
+ int num_pkts = 0;
+ int que_idx;
+
+ netif_info(adpt, intr, adpt->netdev,
+ "NAPI: msix vec_idx = %d\n", msix->vec_idx);
+
+ /* Keep link state information with original netdev */
+#ifdef CONFIG_ALX_DEBUGFS
+ if (!netif_carrier_ok(adpt->netdev) &&
+ !CHK_ADPT_FLAG(STATE_DIAG_RUNNING))
+ goto enable_rtx_irq;
+#else
+ if (!netif_carrier_ok(adpt->netdev))
+ goto enable_rtx_irq;
+#endif
+
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++)
+ complete &= alx_handle_tx_irq(msix, adpt->tx_queue[que_idx]);
+
+ for (que_idx = 0; que_idx < adpt->num_hw_rxques; que_idx++) {
+ num_pkts = 0;
+ complete &= alx_handle_rx_irq(msix, adpt->rx_queue[que_idx],
+ &num_pkts, max_pkts);
+ }
+
+ if (!complete)
+ num_pkts = max_pkts;
+
+ if (num_pkts < max_pkts) {
+enable_rtx_irq:
+ napi_complete(napi);
+ hw->intr_mask |= (ALX_ISR_RXQ | ALX_ISR_TXQ);
+ alx_mem_w32(hw, ALX_IMR, hw->intr_mask);
+ }
+ return num_pkts;
+}
+
+
+static void alx_set_msix_flags(struct alx_msix_param *msix,
+ enum alx_msix_type type, int index)
+{
+ if (type == alx_msix_type_rx) {
+ switch (index) {
+ case 0:
+ SET_MSIX_FLAG(RX0);
+ break;
+ case 1:
+ SET_MSIX_FLAG(RX1);
+ break;
+ case 2:
+ SET_MSIX_FLAG(RX2);
+ break;
+ case 3:
+ SET_MSIX_FLAG(RX3);
+ break;
+ case 4:
+ SET_MSIX_FLAG(RX4);
+ break;
+ case 5:
+ SET_MSIX_FLAG(RX5);
+ break;
+ case 6:
+ SET_MSIX_FLAG(RX6);
+ break;
+ case 7:
+ SET_MSIX_FLAG(RX7);
+ break;
+ default:
+ pr_err("%s: invalid RX queue index %d\n", __func__, index);
+ break;
+ }
+ } else if (type == alx_msix_type_tx) {
+ switch (index) {
+ case 0:
+ SET_MSIX_FLAG(TX0);
+ break;
+ case 1:
+ SET_MSIX_FLAG(TX1);
+ break;
+ case 2:
+ SET_MSIX_FLAG(TX2);
+ break;
+ case 3:
+ SET_MSIX_FLAG(TX3);
+ break;
+ default:
+ pr_err("%s: invalid TX queue index %d\n", __func__, index);
+ break;
+ }
+ } else if (type == alx_msix_type_other) {
+ switch (index) {
+ case ALX_MSIX_TYPE_OTH_TIMER:
+ SET_MSIX_FLAG(TIMER);
+ break;
+ case ALX_MSIX_TYPE_OTH_ALERT:
+ SET_MSIX_FLAG(ALERT);
+ break;
+ case ALX_MSIX_TYPE_OTH_SMB:
+ SET_MSIX_FLAG(SMB);
+ break;
+ case ALX_MSIX_TYPE_OTH_PHY:
+ SET_MSIX_FLAG(PHY);
+ break;
+ default:
+ pr_err("%s: invalid non-queue vector index %d\n", __func__, index);
+ break;
+ }
+ }
+}
+
+
+/* alx_setup_msix_maps - map RX/TX/other interrupts onto MSI-X vectors */
+static int alx_setup_msix_maps(struct alx_adapter *adpt)
+{
+ int msix_idx = 0;
+ int que_idx = 0;
+ int num_rxques = adpt->num_rxques;
+ int num_txques = adpt->num_txques;
+ int num_msix_rxques = adpt->num_msix_rxques;
+ int num_msix_txques = adpt->num_msix_txques;
+ int num_msix_noques = adpt->num_msix_noques;
+
+ if (CHK_ADPT_FLAG(FIXED_MSIX))
+ goto fixed_msix_map;
+
+ netif_warn(adpt, ifup, adpt->netdev,
+ "non-fixed MSI-X mapping is not supported\n");
+ return -EINVAL;
+
+fixed_msix_map:
+ /*
+ * For RX queue msix map
+ */
+ msix_idx = 0;
+ for (que_idx = 0; que_idx < num_msix_rxques; que_idx++, msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ if (que_idx < num_rxques) {
+ adpt->rx_queue[que_idx]->msix = msix;
+ msix->rx_map[msix->rx_count] = que_idx;
+ msix->rx_count++;
+ alx_set_msix_flags(msix, alx_msix_type_rx, que_idx);
+ }
+ }
+ if (msix_idx != num_msix_rxques)
+ netif_warn(adpt, ifup, adpt->netdev, "msix_idx is wrong\n");
+
+ /*
+ * For TX queue msix map
+ */
+ for (que_idx = 0; que_idx < num_msix_txques; que_idx++, msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ if (que_idx < num_txques) {
+ adpt->tx_queue[que_idx]->msix = msix;
+ msix->tx_map[msix->tx_count] = que_idx;
+ msix->tx_count++;
+ alx_set_msix_flags(msix, alx_msix_type_tx, que_idx);
+ }
+ }
+ if (msix_idx != (num_msix_rxques + num_msix_txques))
+ netif_warn(adpt, ifup, adpt->netdev, "msix_idx is wrong\n");
+
+ /*
+ * For NON queue msix map
+ */
+ for (que_idx = 0; que_idx < num_msix_noques; que_idx++, msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ alx_set_msix_flags(msix, alx_msix_type_other, que_idx);
+ }
+ return 0;
+}
+
+
+static inline void alx_reset_msix_maps(struct alx_adapter *adpt)
+{
+ int que_idx, msix_idx;
+
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++)
+ adpt->rx_queue[que_idx]->msix = NULL;
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++)
+ adpt->tx_queue[que_idx]->msix = NULL;
+
+ for (msix_idx = 0; msix_idx < adpt->num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ memset(msix->rx_map, 0, sizeof(msix->rx_map));
+ memset(msix->tx_map, 0, sizeof(msix->tx_map));
+ msix->rx_count = 0;
+ msix->tx_count = 0;
+ CLI_MSIX_FLAG(ALL);
+ }
+}
+
+
+/*
+ * alx_enable_intr - Enable default interrupt generation settings
+ */
+static void alx_enable_intr(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int i;
+
+ if (!atomic_dec_and_test(&adpt->irq_sem))
+ return;
+
+ alf_enable_legacy_intr(hw);
+
+ /* enable all MSIX IRQs */
+ for (i = 0; i < adpt->num_msix_intrs; i++) {
+ alf_disable_msix_intr(hw, i);
+ alf_enable_msix_intr(hw, i);
+ }
+}
+
+
+/* alx_disable_intr - Mask off interrupt generation on the NIC */
+void alx_disable_intr(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ atomic_inc(&adpt->irq_sem);
+
+ alf_disable_legacy_intr(hw);
+
+ if (CHK_ADPT_FLAG(MSIX_EN)) {
+ int i;
+ for (i = 0; i < adpt->num_msix_intrs; i++) {
+ synchronize_irq(adpt->msix_entries[i].vector);
+ alf_disable_msix_intr(hw, i);
+ }
+ } else {
+ synchronize_irq(adpt->pdev->irq);
+ }
+}
+
+
+/*
+ * alx_interrupt - Interrupt Handler
+ */
+static irqreturn_t alx_interrupt(int irq, void *data)
+{
+ struct net_device *netdev = data;
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_msix_param *msix = adpt->msix[0];
+ int max_intrs = ALX_MAX_HANDLED_INTRS;
+ u32 isr, status;
+
+ do {
+ alx_mem_r32(hw, ALX_ISR, &isr);
+ status = isr & hw->intr_mask;
+
+ if (status == 0) {
+ alx_mem_w32(hw, ALX_ISR, 0);
+ if (max_intrs != ALX_MAX_HANDLED_INTRS)
+ return IRQ_HANDLED;
+ return IRQ_NONE;
+ }
+
+ /* ack ISR to PHY register */
+ if (status & ALX_ISR_PHY)
+ alf_ack_phy_intr(hw);
+ /* ack ISR to MAC register */
+ alx_mem_w32(hw, ALX_ISR, status | ALX_ISR_DIS);
+
+ if (status & ALX_ISR_ERROR) {
+ netif_warn(adpt, intr, adpt->netdev,
+ "isr error (status = 0x%x)\n",
+ status & ALX_ISR_ERROR);
+ if (status & ALX_ISR_PCIE_FERR) {
+ alx_mem_w16(hw, ALX_PCI_DEV_STAT,
+ ALX_PCI_DEV_STAT_FERR |
+ ALX_PCI_DEV_STAT_NFERR |
+ ALX_PCI_DEV_STAT_CERR);
+ }
+ /* reset MAC */
+ SET_ADPT_FLAG(TASK_REINIT_REQ);
+ alx_task_schedule(adpt);
+ return IRQ_HANDLED;
+ }
+
+ if (status & (ALX_ISR_RXQ | ALX_ISR_TXQ)) {
+ if (napi_schedule_prep(&(msix->napi))) {
+ hw->intr_mask &= ~(ALX_ISR_RXQ | ALX_ISR_TXQ);
+ alx_mem_w32(hw, ALX_IMR, hw->intr_mask);
+ __napi_schedule(&(msix->napi));
+ }
+ }
+
+ if (status & ALX_ISR_OVER) {
+ netif_warn(adpt, intr, adpt->netdev,
+ "TX/RX overflow (status = 0x%x)\n",
+ status & ALX_ISR_OVER);
+ }
+
+ /* link event */
+ if (status & (ALX_ISR_PHY | ALX_ISR_MANU)) {
+ netdev->stats.tx_carrier_errors++;
+ alx_check_lsc(adpt);
+ break;
+ }
+
+ } while (--max_intrs > 0);
+ /* re-enable interrupts */
+ alx_mem_w32(hw, ALX_ISR, 0);
+ return IRQ_HANDLED;
+}
+
+
+/*
+ * alx_request_msix_irq - initialize MSI-X interrupts
+ */
+static int alx_request_msix_irq(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ irqreturn_t (*handler)(int, void *);
+ int msix_idx;
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int rx_idx = 0, tx_idx = 0;
+ int i;
+ int retval;
+
+ retval = alx_setup_msix_maps(adpt);
+ if (retval)
+ return retval;
+
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+
+ if (CHK_MSIX_FLAG(RXS) && CHK_MSIX_FLAG(TXS)) {
+ handler = alx_msix_rtx;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s%d",
+ netdev->name, "rtx", rx_idx);
+ rx_idx++;
+ tx_idx++;
+ } else if (CHK_MSIX_FLAG(RXS)) {
+ handler = alx_msix_rtx;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s%d",
+ netdev->name, "rx", rx_idx);
+ rx_idx++;
+ } else if (CHK_MSIX_FLAG(TXS)) {
+ handler = alx_msix_rtx;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s%d",
+ netdev->name, "tx", tx_idx);
+ tx_idx++;
+ } else if (CHK_MSIX_FLAG(TIMER)) {
+ handler = alx_msix_timer;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "timer");
+ } else if (CHK_MSIX_FLAG(ALERT)) {
+ handler = alx_msix_alert;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "alert");
+ } else if (CHK_MSIX_FLAG(SMB)) {
+ handler = alx_msix_smb;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "smb");
+ } else if (CHK_MSIX_FLAG(PHY)) {
+ handler = alx_msix_phy;
+ snprintf(msix->name, sizeof(msix->name), "%s:%s",
+ netdev->name, "phy");
+ } else {
+ netif_info(adpt, ifup, adpt->netdev,
+ "MSIX entry [%d] is blank\n",
+ msix->vec_idx);
+ continue;
+ }
+ netif_info(adpt, ifup, adpt->netdev,
+ "MSIX entry [%d] is %s\n",
+ msix->vec_idx, msix->name);
+ retval = request_irq(adpt->msix_entries[msix_idx].vector,
+ handler, 0, msix->name, msix);
+ if (retval)
+ goto free_msix_irq;
+
+ /* assign the mask for this irq */
+ irq_set_affinity_hint(adpt->msix_entries[msix_idx].vector,
+ msix->affinity_mask);
+ }
+ return retval;
+
+free_msix_irq:
+ for (i = 0; i < msix_idx; i++) {
+ irq_set_affinity_hint(adpt->msix_entries[i].vector, NULL);
+ free_irq(adpt->msix_entries[i].vector, adpt->msix[i]);
+ }
+ CLI_ADPT_FLAG(MSIX_EN);
+ pci_disable_msix(adpt->pdev);
+ kfree(adpt->msix_entries);
+ adpt->msix_entries = NULL;
+ return retval;
+}
+
+
+/*
+ * alx_request_irq - initialize interrupts
+ */
+static int alx_request_irq(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ int retval;
+
+ /* request MSIX irq */
+ if (CHK_ADPT_FLAG(MSIX_EN)) {
+ retval = alx_request_msix_irq(adpt);
+ if (retval) {
+ alx_err(adpt, "request msix irq failed, error = %d\n",
+ retval);
+ }
+ goto out;
+ }
+
+ /* request MSI irq */
+ if (CHK_ADPT_FLAG(MSI_EN)) {
+ retval = request_irq(adpt->pdev->irq, alx_interrupt, 0,
+ netdev->name, netdev);
+ if (retval) {
+ alx_err(adpt, "request msi irq failed, error = %d\n",
+ retval);
+ }
+ goto out;
+ }
+
+ /* request shared irq */
+ retval = request_irq(adpt->pdev->irq, alx_interrupt, IRQF_SHARED,
+ netdev->name, netdev);
+ if (retval) {
+ alx_err(adpt, "request shared irq failed, error = %d\n",
+ retval);
+ }
+out:
+ return retval;
+}
+
+
+static void alx_free_irq(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ int i;
+
+ if (CHK_ADPT_FLAG(MSIX_EN)) {
+ for (i = 0; i < adpt->num_msix_intrs; i++) {
+ struct alx_msix_param *msix = adpt->msix[i];
+ netif_info(adpt, ifdown, adpt->netdev,
+ "msix entry = %d\n", i);
+ if (!CHK_MSIX_FLAG(ALL))
+ continue;
+ if (CHK_MSIX_FLAG(RXS) || CHK_MSIX_FLAG(TXS)) {
+ irq_set_affinity_hint(
+ adpt->msix_entries[i].vector, NULL);
+ }
+ free_irq(adpt->msix_entries[i].vector, msix);
+ }
+ alx_reset_msix_maps(adpt);
+ } else {
+ free_irq(adpt->pdev->irq, netdev);
+ }
+}
+
+
+static void alx_vlan_mode(struct net_device *netdev,
+ netdev_features_t features)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+
+ if (!CHK_ADPT_FLAG(STATE_DOWN))
+ alx_disable_intr(adpt);
+
+ if (features & NETIF_F_HW_VLAN_RX) {
+ /* enable VLAN tag stripping */
+ SET_HW_FLAG(VLANSTRIP_EN);
+ } else {
+ /* disable VLAN tag stripping */
+ CLI_HW_FLAG(VLANSTRIP_EN);
+ }
+ alf_update_mac_filter(hw);
+
+ if (!CHK_ADPT_FLAG(STATE_DOWN))
+ alx_enable_intr(adpt);
+}
+
+
+static void alx_restore_vlan(struct alx_adapter *adpt)
+{
+ alx_vlan_mode(adpt->netdev, adpt->netdev->features);
+}
+
+
+static void alx_napi_enable_all(struct alx_adapter *adpt)
+{
+ struct alx_msix_param *msix;
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int msix_idx;
+
+ if (!CHK_ADPT_FLAG(MSIX_EN))
+ num_msix_intrs = 1;
+
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ struct napi_struct *napi;
+ msix = adpt->msix[msix_idx];
+ napi = &msix->napi;
+ napi_enable(napi);
+ }
+}
+
+
+static void alx_napi_disable_all(struct alx_adapter *adpt)
+{
+ struct alx_msix_param *msix;
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int msix_idx;
+
+ if (!CHK_ADPT_FLAG(MSIX_EN))
+ num_msix_intrs = 1;
+
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ msix = adpt->msix[msix_idx];
+ napi_disable(&msix->napi);
+ }
+}
+
+
+static void alx_clean_tx_queue(struct alx_tx_queue *txque)
+{
+ struct device *dev = txque->dev;
+ unsigned long size;
+ u16 i;
+
+ /* ring already cleared, nothing to do */
+ if (!txque->tpq.tpbuff)
+ return;
+
+ for (i = 0; i < txque->tpq.count; i++) {
+ struct alx_buffer *tpbuf;
+ tpbuf = GET_TP_BUFFER(txque, i);
+ if (tpbuf->dma) {
+ dma_unmap_single(dev,
+ tpbuf->dma,
+ tpbuf->length,
+ DMA_TO_DEVICE);
+ tpbuf->dma = 0;
+ }
+ if (tpbuf->skb) {
+ dev_kfree_skb_any(tpbuf->skb);
+ tpbuf->skb = NULL;
+ }
+ }
+
+ size = sizeof(struct alx_buffer) * txque->tpq.count;
+ memset(txque->tpq.tpbuff, 0, size);
+
+ /* Zero out Tx-buffers */
+ memset(txque->tpq.tpdesc, 0, txque->tpq.size);
+
+ txque->tpq.consume_idx = 0;
+ txque->tpq.produce_idx = 0;
+}
+
+
+/*
+ * alx_clean_all_tx_queues
+ */
+static void alx_clean_all_tx_queues(struct alx_adapter *adpt)
+{
+ int i;
+
+ for (i = 0; i < adpt->num_txques; i++)
+ alx_clean_tx_queue(adpt->tx_queue[i]);
+}
+
+
+static void alx_clean_rx_queue(struct alx_rx_queue *rxque)
+{
+ struct device *dev = rxque->dev;
+ unsigned long size;
+ int i;
+
+ if (CHK_RX_FLAG(HW_QUE)) {
+ /* ring already cleared, nothing to do */
+ if (!rxque->rfq.rfbuff)
+ goto clean_sw_queue;
+
+ for (i = 0; i < rxque->rfq.count; i++) {
+ struct alx_buffer *rfbuf;
+ rfbuf = GET_RF_BUFFER(rxque, i);
+
+ if (rfbuf->dma) {
+ dma_unmap_single(dev,
+ rfbuf->dma,
+ rfbuf->length,
+ DMA_FROM_DEVICE);
+ rfbuf->dma = 0;
+ }
+ if (rfbuf->skb) {
+ dev_kfree_skb(rfbuf->skb);
+ rfbuf->skb = NULL;
+ }
+ }
+ size = sizeof(struct alx_buffer) * rxque->rfq.count;
+ memset(rxque->rfq.rfbuff, 0, size);
+
+ /* zero out the descriptor ring */
+ memset(rxque->rrq.rrdesc, 0, rxque->rrq.size);
+ rxque->rrq.produce_idx = 0;
+ rxque->rrq.consume_idx = 0;
+
+ memset(rxque->rfq.rfdesc, 0, rxque->rfq.size);
+ rxque->rfq.produce_idx = 0;
+ rxque->rfq.consume_idx = 0;
+ }
+clean_sw_queue:
+ if (CHK_RX_FLAG(SW_QUE)) {
+ /* ring already cleared, nothing to do */
+ if (!rxque->swq.swbuff)
+ return;
+
+ for (i = 0; i < rxque->swq.count; i++) {
+ struct alx_sw_buffer *swbuf;
+ swbuf = GET_SW_BUFFER(rxque, i);
+
+ /* swq buffers are not DMA mapped */
+
+ if (swbuf->skb) {
+ dev_kfree_skb(swbuf->skb);
+ swbuf->skb = NULL;
+ }
+ }
+ size = sizeof(struct alx_sw_buffer) * rxque->swq.count;
+ memset(rxque->swq.swbuff, 0, size);
+
+ /* swq doesn't have any descriptor rings */
+ rxque->swq.produce_idx = 0;
+ rxque->swq.consume_idx = 0;
+ }
+}
+
+
+/*
+ * alx_clean_all_rx_queues
+ */
+static void alx_clean_all_rx_queues(struct alx_adapter *adpt)
+{
+ int i;
+ for (i = 0; i < adpt->num_rxques; i++)
+ alx_clean_rx_queue(adpt->rx_queue[i]);
+}
+
+
+/*
+ * alx_set_num_txques: set the number of Tx queues
+ */
+static inline void alx_set_num_txques(struct alx_adapter *adpt)
+{
+ adpt->num_txques = 4;
+}
+
+
+/*
+ * alx_set_num_rxques: set the number of Rx queues (RSS-aware)
+ */
+static inline void alx_set_num_rxques(struct alx_adapter *adpt)
+{
+ if (CHK_ADPT_FLAG(SRSS_CAP)) {
+ adpt->num_hw_rxques = 1;
+ adpt->num_sw_rxques = adpt->max_rxques;
+ adpt->num_rxques =
+ max_t(u16, adpt->num_hw_rxques, adpt->num_sw_rxques);
+ }
+}
+
+
+/*
+ * alx_set_num_queues: Allocate queues for device, feature dependent
+ */
+static void alx_set_num_queues(struct alx_adapter *adpt)
+{
+ /* Start with default case */
+ adpt->num_txques = 1;
+ adpt->num_rxques = 1;
+ adpt->num_hw_rxques = 1;
+ adpt->num_sw_rxques = 0;
+
+ alx_set_num_rxques(adpt);
+ alx_set_num_txques(adpt);
+}
+
+
+/* alx_alloc_all_rtx_queue - allocate all queues */
+static int alx_alloc_all_rtx_queue(struct alx_adapter *adpt)
+{
+ int que_idx;
+
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++) {
+ struct alx_tx_queue *txque;
+
+ txque = kzalloc(sizeof(struct alx_tx_queue), GFP_KERNEL);
+ if (!txque)
+ goto err_alloc_tx_queue;
+ txque->tpq.count = adpt->num_txdescs;
+ txque->que_idx = que_idx;
+ txque->dev = &adpt->pdev->dev;
+ txque->netdev = adpt->netdev;
+
+ adpt->tx_queue[que_idx] = txque;
+ }
+
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++) {
+ struct alx_rx_queue *rxque;
+
+ rxque = kzalloc(sizeof(struct alx_rx_queue), GFP_KERNEL);
+ if (!rxque)
+ goto err_alloc_rx_queue;
+ rxque->rrq.count = adpt->num_rxdescs;
+ rxque->rfq.count = adpt->num_rxdescs;
+ rxque->swq.count = adpt->num_rxdescs;
+ rxque->que_idx = que_idx;
+ rxque->dev = &adpt->pdev->dev;
+ rxque->netdev = adpt->netdev;
+
+ if (CHK_ADPT_FLAG(SRSS_EN)) {
+ if (que_idx < adpt->num_hw_rxques)
+ SET_RX_FLAG(HW_QUE);
+ if (que_idx < adpt->num_sw_rxques)
+ SET_RX_FLAG(SW_QUE);
+ } else {
+ SET_RX_FLAG(HW_QUE);
+ }
+ adpt->rx_queue[que_idx] = rxque;
+ }
+ netif_dbg(adpt, probe, adpt->netdev,
+ "num_tx_descs = %d, num_rx_descs = %d\n",
+ adpt->num_txdescs, adpt->num_rxdescs);
+ return 0;
+
+err_alloc_rx_queue:
+ alx_err(adpt, "failed to allocate rx queue\n");
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++)
+ kfree(adpt->rx_queue[que_idx]);
+err_alloc_tx_queue:
+ alx_err(adpt, "failed to allocate tx queue\n");
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++)
+ kfree(adpt->tx_queue[que_idx]);
+ return -ENOMEM;
+}
+
+
+/* alx_free_all_rtx_queue */
+static void alx_free_all_rtx_queue(struct alx_adapter *adpt)
+{
+ int que_idx;
+
+ for (que_idx = 0; que_idx < adpt->num_txques; que_idx++) {
+ kfree(adpt->tx_queue[que_idx]);
+ adpt->tx_queue[que_idx] = NULL;
+ }
+ for (que_idx = 0; que_idx < adpt->num_rxques; que_idx++) {
+ kfree(adpt->rx_queue[que_idx]);
+ adpt->rx_queue[que_idx] = NULL;
+ }
+}
+
+
+/* alx_set_interrupt_param - set interrupt parameter */
+static int alx_set_interrupt_param(struct alx_adapter *adpt)
+{
+ struct alx_msix_param *msix;
+ int (*poll)(struct napi_struct *, int);
+ int msix_idx;
+
+ if (CHK_ADPT_FLAG(MSIX_EN)) {
+ poll = &alx_napi_msix_rtx;
+ } else {
+ adpt->num_msix_intrs = 1;
+ poll = &alx_napi_legacy_rtx;
+ }
+
+ for (msix_idx = 0; msix_idx < adpt->num_msix_intrs; msix_idx++) {
+ msix = kzalloc(sizeof(struct alx_msix_param),
+ GFP_KERNEL);
+ if (!msix)
+ goto err_alloc_msix;
+ msix->adpt = adpt;
+ msix->vec_idx = msix_idx;
+ /* Allocate the affinity_hint cpumask, configure the mask */
+ if (!alloc_cpumask_var(&msix->affinity_mask, GFP_KERNEL))
+ goto err_alloc_cpumask;
+
+ cpumask_set_cpu((msix_idx % num_online_cpus()),
+ msix->affinity_mask);
+
+ netif_napi_add(adpt->netdev, &msix->napi, poll, 64);
+ adpt->msix[msix_idx] = msix;
+ }
+ return 0;
+
+err_alloc_cpumask:
+ kfree(msix);
+ adpt->msix[msix_idx] = NULL;
+err_alloc_msix:
+ for (msix_idx--; msix_idx >= 0; msix_idx--) {
+ msix = adpt->msix[msix_idx];
+ netif_napi_del(&msix->napi);
+ free_cpumask_var(msix->affinity_mask);
+ kfree(msix);
+ adpt->msix[msix_idx] = NULL;
+ }
+ alx_err(adpt, "can't allocate memory\n");
+ return -ENOMEM;
+}
+
+
+/*
+ * alx_reset_interrupt_param - Free memory allocated for interrupt vectors
+ */
+static void alx_reset_interrupt_param(struct alx_adapter *adpt)
+{
+ int msix_idx;
+
+ for (msix_idx = 0; msix_idx < adpt->num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ netif_napi_del(&msix->napi);
+ free_cpumask_var(msix->affinity_mask);
+ kfree(msix);
+ adpt->msix[msix_idx] = NULL;
+ }
+}
+
+
+/* set msix interrupt mode */
+static int alx_set_msix_interrupt_mode(struct alx_adapter *adpt)
+{
+ int msix_intrs, msix_idx;
+ int retval = 0;
+
+ adpt->msix_entries = kcalloc(adpt->max_msix_intrs,
+ sizeof(struct msix_entry), GFP_KERNEL);
+ if (!adpt->msix_entries) {
+ netif_info(adpt, probe, adpt->netdev,
+ "can't allocate msix entry\n");
+ CLI_ADPT_FLAG(MSIX_EN);
+ goto try_msi_mode;
+ }
+
+ for (msix_idx = 0; msix_idx < adpt->max_msix_intrs; msix_idx++)
+ adpt->msix_entries[msix_idx].entry = msix_idx;
+
+ msix_intrs = adpt->max_msix_intrs;
+ while (msix_intrs >= adpt->min_msix_intrs) {
+ retval = pci_enable_msix(adpt->pdev, adpt->msix_entries,
+ msix_intrs);
+ if (!retval) /* Success in acquiring all requested vectors. */
+ break;
+ else if (retval < 0)
+ msix_intrs = 0; /* Nasty failure, quit now */
+ else /* error == number of vectors we should try again with */
+ msix_intrs = retval;
+ }
+ if (msix_intrs < adpt->min_msix_intrs) {
+ netif_info(adpt, probe, adpt->netdev,
+ "can't enable MSI-X interrupts\n");
+ CLI_ADPT_FLAG(MSIX_EN);
+ kfree(adpt->msix_entries);
+ adpt->msix_entries = NULL;
+ goto try_msi_mode;
+ }
+
+ netif_info(adpt, probe, adpt->netdev,
+ "enable MSI-X interrupts, num_msix_intrs = %d\n",
+ msix_intrs);
+ SET_ADPT_FLAG(MSIX_EN);
+ if (CHK_ADPT_FLAG(SRSS_CAP))
+ SET_ADPT_FLAG(SRSS_EN);
+
+ adpt->num_msix_intrs = min_t(int, msix_intrs, adpt->max_msix_intrs);
+ return 0;
+
+try_msi_mode:
+ CLI_ADPT_FLAG(SRSS_CAP);
+ CLI_ADPT_FLAG(SRSS_EN);
+ alx_set_num_queues(adpt);
+ return -EINVAL;
+}
+
+
+/* set msi interrupt mode */
+static int alx_set_msi_interrupt_mode(struct alx_adapter *adpt)
+{
+ int retval;
+
+ retval = pci_enable_msi(adpt->pdev);
+ if (retval) {
+ netif_info(adpt, probe, adpt->netdev,
+ "can't enable MSI interrupt, error = %d\n", retval);
+ return retval;
+ }
+ SET_ADPT_FLAG(MSI_EN);
+ return retval;
+}
+
+
+/* set interrupt mode */
+static int alx_set_interrupt_mode(struct alx_adapter *adpt)
+{
+ int retval = 0;
+
+ if (CHK_ADPT_FLAG(MSIX_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "try to set MSIX interrupt\n");
+ retval = alx_set_msix_interrupt_mode(adpt);
+ if (!retval)
+ return retval;
+ }
+
+ if (CHK_ADPT_FLAG(MSI_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "try to set MSI interrupt\n");
+ retval = alx_set_msi_interrupt_mode(adpt);
+ if (!retval)
+ return retval;
+ }
+
+ netif_info(adpt, probe, adpt->netdev,
+ "can't enable MSI-X or MSI, falling back to shared legacy interrupt\n");
+ return 0;
+}
+
+
+static void alx_reset_interrupt_mode(struct alx_adapter *adpt)
+{
+ if (CHK_ADPT_FLAG(MSIX_EN)) {
+ CLI_ADPT_FLAG(MSIX_EN);
+ pci_disable_msix(adpt->pdev);
+ kfree(adpt->msix_entries);
+ adpt->msix_entries = NULL;
+ } else if (CHK_ADPT_FLAG(MSI_EN)) {
+ CLI_ADPT_FLAG(MSI_EN);
+ pci_disable_msi(adpt->pdev);
+ }
+}
+
+
+static void __devinit alx_init_adapter_special(struct alx_adapter *adpt)
+{
+ if (CHK_ADPT_FLAG(MSIX_CAP)) {
+ /* MSI-X vectors for tx, rx and non-queue interrupts */
+ adpt->num_msix_txques = 4;
+ adpt->num_msix_rxques = 8;
+ adpt->num_msix_noques = ALF_MAX_MSIX_NOQUE_INTRS;
+
+ /* msix vector range */
+ adpt->max_msix_intrs = ALF_MAX_MSIX_INTRS;
+ adpt->min_msix_intrs = ALF_MIN_MSIX_INTRS;
+ } else {
+ /* no MSI-X vectors in MSI/legacy mode */
+ adpt->num_msix_txques = 0;
+ adpt->num_msix_rxques = 0;
+ adpt->num_msix_noques = 0;
+ }
+}
+
+
+/*
+ * alx_init_adapter
+ */
+static int __devinit alx_init_adapter(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ struct pci_dev *pdev = adpt->pdev;
+ u16 revision;
+ int max_frame;
+
+ /* PCI config space info */
+ hw->pci_venid = pdev->vendor;
+ hw->pci_devid = pdev->device;
+ alx_cfg_r16(hw, PCI_CLASS_REVISION, &revision);
+ hw->pci_revid = revision & 0xFF;
+ hw->pci_sub_venid = pdev->subsystem_vendor;
+ hw->pci_sub_devid = pdev->subsystem_device;
+
+ alx_init_hw(adpt);
+
+ if (alf_identify_nic(hw) != 0) {
+ alx_err(adpt, "HW is disabled\n");
+ return -EINVAL;
+ }
+
+ /* Set adapter flags */
+#ifdef CONFIG_ALX_MSI
+ SET_ADPT_FLAG(MSI_CAP);
+#endif
+#ifdef CONFIG_ALX_MSIX
+ SET_ADPT_FLAG(MSIX_CAP);
+#endif
+ if (CHK_ADPT_FLAG(MSIX_CAP)) {
+ SET_ADPT_FLAG(FIXED_MSIX);
+ SET_ADPT_FLAG(MRQ_CAP);
+#ifdef CONFIG_ALX_RSS
+ SET_ADPT_FLAG(SRSS_CAP);
+#endif
+ }
+ pdev->dev_flags |= PCI_DEV_FLAGS_MSI_INTX_DISABLE_BUG;
+
+ /* set default for alx_adapter */
+ adpt->max_msix_intrs = 1;
+ adpt->min_msix_intrs = 1;
+ max_frame = adpt->netdev->mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+ adpt->rxbuf_size = adpt->netdev->mtu > ALX_DEF_RX_BUF_SIZE ?
+ ALIGN(max_frame, 8) : ALX_DEF_RX_BUF_SIZE;
+ adpt->wol = 0;
+ device_set_wakeup_enable(&pdev->dev, false);
+
+ /* set default for alx_hw */
+ hw->msi_lnkpatch = false;
+ hw->link_up = false;
+ hw->link_speed = 0;
+ hw->preamble = 7;
+ hw->intr_mask = ALX_IMR_NORMAL_MASK;
+ hw->smb_timer = 400; /* 400ms */
+ hw->mtu = adpt->netdev->mtu;
+ hw->imt_mod = 100; /* in 2us units: 200us */
+
+ /* set default for wrr */
+ hw->wrr_prio0 = 4;
+ hw->wrr_prio1 = 4;
+ hw->wrr_prio2 = 4;
+ hw->wrr_prio3 = 4;
+ hw->wrr_mode = alx_wrr_mode_none;
+
+ /* set default flow control settings */
+ hw->req_fc_mode = alx_fc_full;
+ hw->cur_fc_mode = alx_fc_full; /* init for ethtool output */
+ hw->disable_fc_autoneg = false;
+ hw->fc_was_autonegged = false;
+ hw->fc_single_pause = true;
+
+ /* set default for rss info*/
+ hw->rss_hstype = 0;
+ hw->rss_mode = alx_rss_mode_disable;
+ hw->rss_idt_size = 0;
+ hw->rss_base_cpu = 0;
+ memset(hw->rss_idt, 0x0, sizeof(hw->rss_idt));
+ memset(hw->rss_key, 0x0, sizeof(hw->rss_key));
+
+ atomic_set(&adpt->irq_sem, 1);
+ spin_lock_init(&adpt->tx_lock);
+ spin_lock_init(&adpt->rx_lock);
+
+ alx_init_adapter_special(adpt);
+
+ if (alf_init_phy(hw))
+ return -EINVAL;
+
+ SET_ADPT_FLAG(STATE_DOWN);
+ return 0;
+}
+
+
+static void alx_set_register_info_special(struct alx_adapter *adpt)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int num_txques = adpt->num_txques;
+
+ /* setting for Produce Index and Consume Index */
+ adpt->rx_queue[0]->produce_reg = hw->rx_prod_reg[0];
+ adpt->rx_queue[0]->consume_reg = hw->rx_cons_reg[0];
+
+ switch (num_txques) {
+ case 4:
+ adpt->tx_queue[3]->produce_reg = hw->tx_prod_reg[3];
+ adpt->tx_queue[3]->consume_reg = hw->tx_cons_reg[3];
+ /* fall through */
+ case 3:
+ adpt->tx_queue[2]->produce_reg = hw->tx_prod_reg[2];
+ adpt->tx_queue[2]->consume_reg = hw->tx_cons_reg[2];
+ /* fall through */
+ case 2:
+ adpt->tx_queue[1]->produce_reg = hw->tx_prod_reg[1];
+ adpt->tx_queue[1]->consume_reg = hw->tx_cons_reg[1];
+ /* fall through */
+ case 1:
+ adpt->tx_queue[0]->produce_reg = hw->tx_prod_reg[0];
+ adpt->tx_queue[0]->consume_reg = hw->tx_cons_reg[0];
+ }
+}
+
+
+/* alx_alloc_tx_descriptor - allocate Tx Descriptors */
+static int alx_alloc_tx_descriptor(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque)
+{
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+ struct alx_hw *hw = &adpt->hw;
+ u16 que_idx = txque->que_idx;
+ int size;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "tpq.count = %d\n", txque->tpq.count);
+
+ size = sizeof(struct alx_buffer) * txque->tpq.count;
+ txque->tpq.tpbuff = kzalloc(size, GFP_KERNEL);
+ if (!txque->tpq.tpbuff)
+ goto err_alloc_tpq_buffer;
+
+ /* tpdesc ring size; aligned to an 8-byte boundary below */
+ txque->tpq.size = txque->tpq.count * sizeof(union alx_hw_tpdesc);
+
+ txque->tpq.tpdma = ring_header->dma + ring_header->used;
+ txque->tpq.tpdesc = ring_header->desc + ring_header->used;
+ ring_header->used += ALIGN(txque->tpq.size, 8);
+
+ hw->dma.tpdmem_hi[que_idx] = ALX_DMA_ADDR_HI(txque->tpq.tpdma);
+ hw->dma.tpdmem_lo[que_idx] = ALX_DMA_ADDR_LO(txque->tpq.tpdma);
+
+ txque->tpq.produce_idx = 0;
+ txque->tpq.consume_idx = 0;
+ txque->max_packets = txque->tpq.count;
+ return 0;
+
+err_alloc_tpq_buffer:
+ alx_err(adpt, "Unable to allocate memory for the Tx descriptor\n");
+ return -ENOMEM;
+}
+
+
+/* alx_alloc_all_tx_descriptor - allocate all Tx Descriptors */
+static int alx_alloc_all_tx_descriptor(struct alx_adapter *adpt)
+{
+ int i, retval = 0;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "num_txques = %d\n", adpt->num_txques);
+
+ for (i = 0; i < adpt->num_txques; i++) {
+ retval = alx_alloc_tx_descriptor(adpt, adpt->tx_queue[i]);
+ if (!retval)
+ continue;
+
+ alx_err(adpt, "Allocation for Tx Queue %u failed\n", i);
+ break;
+ }
+
+ return retval;
+}
+
+
+/* alx_alloc_rx_descriptor - allocate Rx Descriptors */
+static int alx_alloc_rx_descriptor(struct alx_adapter *adpt,
+ struct alx_rx_queue *rxque)
+{
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+ struct alx_hw *hw = &adpt->hw;
+ u16 que_idx = rxque->que_idx;
+ int size;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "RRD.count = %d, RFD.count = %d, SWD.count = %d\n",
+ rxque->rrq.count, rxque->rfq.count, rxque->swq.count);
+
+ if (CHK_RX_FLAG(HW_QUE)) {
+ /* alloc buffer info */
+ size = sizeof(struct alx_buffer) * rxque->rfq.count;
+ rxque->rfq.rfbuff = kzalloc(size, GFP_KERNEL);
+ if (!rxque->rfq.rfbuff)
+ goto err_alloc_rfq_buffer;
+ /*
+ * set the DMA addresses of rrq and rfq; descriptors are
+ * placed in the ring header block, 8-byte aligned
+ */
+ rxque->rrq.size =
+ rxque->rrq.count * sizeof(union alx_hw_rrdesc);
+ rxque->rfq.size =
+ rxque->rfq.count * sizeof(union alx_hw_rfdesc);
+
+ rxque->rrq.rrdma = ring_header->dma + ring_header->used;
+ rxque->rrq.rrdesc = ring_header->desc + ring_header->used;
+ ring_header->used += ALIGN(rxque->rrq.size, 8);
+
+ rxque->rfq.rfdma = ring_header->dma + ring_header->used;
+ rxque->rfq.rfdesc = ring_header->desc + ring_header->used;
+ ring_header->used += ALIGN(rxque->rfq.size, 8);
+
+ hw->dma.rrdmem_hi[que_idx] = ALX_DMA_ADDR_HI(rxque->rrq.rrdma);
+ hw->dma.rrdmem_lo[que_idx] = ALX_DMA_ADDR_LO(rxque->rrq.rrdma);
+ hw->dma.rfdmem_hi[que_idx] = ALX_DMA_ADDR_HI(rxque->rfq.rfdma);
+ hw->dma.rfdmem_lo[que_idx] = ALX_DMA_ADDR_LO(rxque->rfq.rfdma);
+
+ /* clean all counts within rxque */
+ rxque->rrq.produce_idx = 0;
+ rxque->rrq.consume_idx = 0;
+
+ rxque->rfq.produce_idx = 0;
+ rxque->rfq.consume_idx = 0;
+ }
+
+ if (CHK_RX_FLAG(SW_QUE)) {
+ size = sizeof(struct alx_sw_buffer) * rxque->swq.count;
+ rxque->swq.swbuff = kzalloc(size, GFP_KERNEL);
+ if (!rxque->swq.swbuff)
+ goto err_alloc_swq_buffer;
+
+ rxque->swq.consume_idx = 0;
+ rxque->swq.produce_idx = 0;
+ }
+
+ rxque->max_packets = rxque->rrq.count / 2;
+ return 0;
+
+err_alloc_swq_buffer:
+ kfree(rxque->rfq.rfbuff);
+ rxque->rfq.rfbuff = NULL;
+err_alloc_rfq_buffer:
+ alx_err(adpt, "Unable to allocate memory for the Rx descriptor\n");
+ return -ENOMEM;
+}
+
+
+/* alx_alloc_all_rx_descriptor - allocate all Rx Descriptors */
+static int alx_alloc_all_rx_descriptor(struct alx_adapter *adpt)
+{
+ int i, error = 0;
+
+ for (i = 0; i < adpt->num_rxques; i++) {
+ error = alx_alloc_rx_descriptor(adpt, adpt->rx_queue[i]);
+ if (!error)
+ continue;
+ alx_err(adpt, "Allocation for Rx Queue %u failed\n", i);
+ break;
+ }
+
+ return error;
+}
+
+
+/* alx_free_tx_descriptor - Free Tx Descriptor */
+static void alx_free_tx_descriptor(struct alx_tx_queue *txque)
+{
+ alx_clean_tx_queue(txque);
+
+ kfree(txque->tpq.tpbuff);
+ txque->tpq.tpbuff = NULL;
+
+ /* descriptor memory lives in the ring header block; just clear it */
+ if (!txque->tpq.tpdesc)
+ return;
+ txque->tpq.tpdesc = NULL;
+}
+
+
+/* alx_free_all_tx_descriptor - Free all Tx Descriptor */
+static void alx_free_all_tx_descriptor(struct alx_adapter *adpt)
+{
+ int i;
+
+ for (i = 0; i < adpt->num_txques; i++)
+ alx_free_tx_descriptor(adpt->tx_queue[i]);
+}
+
+
+/* alx_free_rx_descriptor - Free Rx Descriptor */
+static void alx_free_rx_descriptor(struct alx_rx_queue *rxque)
+{
+ alx_clean_rx_queue(rxque);
+
+ if (CHK_RX_FLAG(HW_QUE)) {
+ kfree(rxque->rfq.rfbuff);
+ rxque->rfq.rfbuff = NULL;
+
+ /* descriptor memory lives in the ring header block; just clear it */
+ if (!rxque->rrq.rrdesc)
+ return;
+ rxque->rrq.rrdesc = NULL;
+
+ if (!rxque->rfq.rfdesc)
+ return;
+ rxque->rfq.rfdesc = NULL;
+ }
+
+ if (CHK_RX_FLAG(SW_QUE)) {
+ kfree(rxque->swq.swbuff);
+ rxque->swq.swbuff = NULL;
+ }
+}
+
+
+/* alx_free_all_rx_descriptor - Free all Rx Descriptor */
+static void alx_free_all_rx_descriptor(struct alx_adapter *adpt)
+{
+ int i;
+
+ for (i = 0; i < adpt->num_rxques; i++)
+ alx_free_rx_descriptor(adpt->rx_queue[i]);
+}
+
+
+/*
+ * alx_alloc_all_rtx_descriptor - allocate Tx / Rx descriptor queues
+ */
+static int alx_alloc_all_rtx_descriptor(struct alx_adapter *adpt)
+{
+ struct device *dev = &adpt->pdev->dev;
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+ int num_tques = adpt->num_txques;
+ int num_rques = adpt->num_hw_rxques;
+ unsigned int num_tx_descs = adpt->num_txdescs;
+ unsigned int num_rx_descs = adpt->num_rxdescs;
+ int retval;
+
+ /*
+ * real ring DMA buffer
+ * each ring/block may need up to 8 bytes for alignment, hence the
+ * additional bytes tacked onto the end.
+ */
+ ring_header->size =
+ num_tques * num_tx_descs * sizeof(union alx_hw_tpdesc) +
+ num_rques * num_rx_descs * sizeof(union alx_hw_rfdesc) +
+ num_rques * num_rx_descs * sizeof(union alx_hw_rrdesc) +
+ num_tques * 8 + num_rques * 2 * 8;
+ netif_info(adpt, ifup, adpt->netdev,
+ "num_tques = %d, num_tx_descs = %d\n",
+ num_tques, num_tx_descs);
+ netif_info(adpt, ifup, adpt->netdev,
+ "num_rques = %d, num_rx_descs = %d\n",
+ num_rques, num_rx_descs);
+
+ ring_header->used = 0;
+ ring_header->desc = dma_alloc_coherent(dev, ring_header->size,
+ &ring_header->dma, GFP_KERNEL);
+
+ if (!ring_header->desc) {
+ alx_err(adpt, "dma_alloc_coherent failed\n");
+ retval = -ENOMEM;
+ goto err_alloc_dma;
+ }
+ memset(ring_header->desc, 0, ring_header->size);
+ ring_header->used = ALIGN(ring_header->dma, 8) - ring_header->dma;
+
+ netif_info(adpt, ifup, adpt->netdev,
+ "ring header: size = %d, used = %d\n",
+ ring_header->size, ring_header->used);
+
+ /* allocate transmit descriptors */
+ retval = alx_alloc_all_tx_descriptor(adpt);
+ if (retval)
+ goto err_alloc_tx;
+
+ /* allocate receive descriptors */
+ retval = alx_alloc_all_rx_descriptor(adpt);
+ if (retval)
+ goto err_alloc_rx;
+
+ return 0;
+
+err_alloc_rx:
+ alx_free_all_rx_descriptor(adpt);
+err_alloc_tx:
+ alx_free_all_tx_descriptor(adpt);
+err_alloc_dma:
+ return retval;
+}
+
+
+/*
+ * alx_free_all_rtx_descriptor - free Tx / Rx descriptor queues
+ */
+static void alx_free_all_rtx_descriptor(struct alx_adapter *adpt)
+{
+ struct pci_dev *pdev = adpt->pdev;
+ struct alx_ring_header *ring_header = &adpt->ring_header;
+
+ alx_free_all_tx_descriptor(adpt);
+ alx_free_all_rx_descriptor(adpt);
+
+ pci_free_consistent(pdev, ring_header->size, ring_header->desc,
+ ring_header->dma);
+ ring_header->desc = NULL;
+ ring_header->size = ring_header->used = 0;
+}
+
+static netdev_features_t alx_fix_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ /*
+ * Since there is no support for separate rx/tx vlan accel
+ * enable/disable make sure tx flag is always in same state as rx.
+ */
+ if (features & NETIF_F_HW_VLAN_RX)
+ features |= NETIF_F_HW_VLAN_TX;
+ else
+ features &= ~NETIF_F_HW_VLAN_TX;
+
+ if (netdev->mtu > ALX_MAX_TSO_PKT_SIZE)
+ features &= ~(NETIF_F_TSO | NETIF_F_TSO6);
+
+ return features;
+}
+
+
+static int alx_set_features(struct net_device *netdev,
+ netdev_features_t features)
+{
+ netdev_features_t changed = netdev->features ^ features;
+
+ if (changed & NETIF_F_HW_VLAN_RX)
+ alx_vlan_mode(netdev, features);
+ return 0;
+}
+
+/*
+ * alx_change_mtu - Change the Maximum Transfer Unit
+ */
+static int alx_change_mtu(struct net_device *netdev, int new_mtu)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ int old_mtu = netdev->mtu;
+ int max_frame = new_mtu + ETH_HLEN + ETH_FCS_LEN + VLAN_HLEN;
+
+ if ((max_frame < ALX_MIN_ETH_FRAME_SIZE) ||
+ (max_frame > ALX_MAX_ETH_FRAME_SIZE)) {
+ alx_err(adpt, "invalid MTU setting\n");
+ return -EINVAL;
+ }
+ /* set MTU */
+ if (old_mtu != new_mtu && netif_running(netdev)) {
+ netif_info(adpt, hw, adpt->netdev,
+ "changing MTU from %d to %d\n",
+ netdev->mtu, new_mtu);
+ netdev->mtu = new_mtu;
+ adpt->hw.mtu = new_mtu;
+ adpt->rxbuf_size = new_mtu > ALX_DEF_RX_BUF_SIZE ?
+ ALIGN(max_frame, 8) : ALX_DEF_RX_BUF_SIZE;
+ netdev_update_features(netdev);
+ alx_reinit_locked(adpt);
+ }
+
+ return 0;
+}
+
+
+int alx_open_internal(struct alx_adapter *adpt, u32 ctrl)
+{
+ struct alx_hw *hw = &adpt->hw;
+ int retval = 0;
+ int i;
+
+ alx_init_ring_ptrs(adpt);
+
+#ifdef CONFIG_ALX_DEBUGFS
+ alx_flush_mac_address(adpt);
+#endif
+
+ alx_set_multicase_list(adpt->netdev);
+ alx_restore_vlan(adpt);
+
+ alf_init_mac(hw, adpt->rxbuf_size, adpt->num_hw_rxques,
+ adpt->num_rxdescs, adpt->num_txques, adpt->num_txdescs);
+
+ alf_config_tx(hw);
+
+ alx_config_rss(adpt);
+
+ for (i = 0; i < adpt->num_hw_rxques; i++)
+ alx_refresh_rx_buffer(adpt->rx_queue[i]);
+
+ /* configure hardware registers for MSI-X */
+ retval = alf_config_msix(hw, adpt->num_msix_intrs,
+ CHK_ADPT_FLAG(MSIX_EN),
+ CHK_ADPT_FLAG(MSI_EN));
+
+ if (ctrl & ALX_OPEN_CTRL_IRQ_EN) {
+ retval = alx_request_irq(adpt);
+ if (retval)
+ goto err_request_irq;
+ }
+
+ /* enable NAPI, INTR and TX */
+ alx_napi_enable_all(adpt);
+
+ alx_enable_intr(adpt);
+
+ netif_tx_start_all_queues(adpt->netdev);
+
+ CLI_ADPT_FLAG(STATE_DOWN);
+
+ /* check link status */
+ SET_ADPT_FLAG(TASK_LSC_REQ);
+ adpt->link_jiffies = jiffies + ALX_TRY_LINK_TIMEOUT;
+ mod_timer(&adpt->alx_timer, jiffies);
+
+ return retval;
+
+err_request_irq:
+ alx_clean_all_rx_queues(adpt);
+ return retval;
+}
+
+
+void alx_stop_internal(struct alx_adapter *adpt, u32 ctrl)
+{
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+
+ SET_ADPT_FLAG(STATE_DOWN);
+
+ netif_tx_stop_all_queues(netdev);
+ /* call carrier off first to avoid false dev_watchdog timeouts */
+ netif_carrier_off(netdev);
+ netif_tx_disable(netdev);
+
+ alx_disable_intr(adpt);
+
+ alx_napi_disable_all(adpt);
+
+ if (ctrl & ALX_OPEN_CTRL_IRQ_EN)
+ alx_free_irq(adpt);
+
+ CLI_ADPT_FLAG(TASK_LSC_REQ);
+ CLI_ADPT_FLAG(TASK_REINIT_REQ);
+ del_timer_sync(&adpt->alx_timer);
+
+ if (ctrl & ALX_OPEN_CTRL_RESET_PHY)
+ alf_reset_phy(hw);
+
+ if (ctrl & ALX_OPEN_CTRL_RESET_MAC)
+ alf_reset_mac(hw);
+
+ adpt->hw.link_speed = 0;
+
+ alx_clean_all_tx_queues(adpt);
+ alx_clean_all_rx_queues(adpt);
+}
+
+
+/*
+ * alx_open - Called when a network interface is made active
+ */
+static int alx_open(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ int retval;
+
+ /* disallow open during test */
+ if (CHK_ADPT_FLAG(STATE_TESTING) ||
+ CHK_ADPT_FLAG(STATE_DIAG_RUNNING))
+ return -EBUSY;
+
+ netif_carrier_off(netdev);
+
+ /* allocate rx/tx dma buffer & descriptors */
+ retval = alx_alloc_all_rtx_descriptor(adpt);
+ if (retval) {
+ alx_err(adpt, "error in alx_alloc_all_rtx_descriptor\n");
+ goto err_alloc_rtx;
+ }
+
+ retval = alx_open_internal(adpt, ALX_OPEN_CTRL_IRQ_EN);
+ if (retval)
+ goto err_open_internal;
+
+ return retval;
+
+err_open_internal:
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_IRQ_EN);
+err_alloc_rtx:
+ alx_free_all_rtx_descriptor(adpt);
+ alf_reset_mac(hw);
+ return retval;
+}
+
+
+/*
+ * alx_stop - Disables a network interface
+ */
+static int alx_stop(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ if (CHK_ADPT_FLAG(STATE_RESETTING))
+ netif_warn(adpt, ifdown, adpt->netdev,
+ "flag STATE_RESETTING has already been set\n");
+
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_IRQ_EN |
+ ALX_OPEN_CTRL_RESET_MAC);
+ alx_free_all_rtx_descriptor(adpt);
+
+ return 0;
+}
+
+
+static int alx_shutdown_internal(struct pci_dev *pdev, bool *wakeup)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+ u32 misc, wufc = adpt->wol;
+ u16 lpa;
+ u8 speed, adv_speed;
+ bool link_up;
+ bool swoi_en = false;
+ int i;
+ int retval = 0;
+
+ alf_config_aspm(hw, false, false);
+
+ netif_device_detach(netdev);
+ if (netif_running(netdev))
+ alx_stop_internal(adpt, 0);
+
+#ifdef CONFIG_PM_SLEEP
+ retval = pci_save_state(pdev);
+ if (retval)
+ return retval;
+#endif
+ alf_check_phy_link(hw, &speed, &link_up);
+
+ if (link_up) {
+ alx_mem_r32(hw, ALX_MISC, &misc);
+ misc |= ALX_MISC_INTNLOSC_OPEN;
+ alx_mem_w32(hw, ALX_MISC, misc);
+
+ retval = alf_read_phy_reg(hw, MII_LPA, &lpa);
+ if (retval)
+ return retval;
+
+ adv_speed = LX_LC_10H;
+ if (lpa & LPA_10FULL)
+ adv_speed = LX_LC_10F;
+ else if (lpa & LPA_10HALF)
+ adv_speed = LX_LC_10H;
+ else if (lpa & LPA_100FULL)
+ adv_speed = LX_LC_100F;
+ else if (lpa & LPA_100HALF)
+ adv_speed = LX_LC_100H;
+
+ retval = alf_setup_phy_link(hw, adv_speed, true,
+ !hw->disable_fc_autoneg);
+ if (retval)
+ return retval;
+
+ for (i = 0; i < ALX_MAX_SETUP_LNK_CYCLE; i++) {
+ mdelay(100);
+ retval = alf_check_phy_link(hw, &speed, &link_up);
+ if (retval)
+ continue;
+ if (link_up)
+ break;
+ }
+ } else {
+ speed = LX_LC_10H;
+ link_up = false;
+ }
+ hw->link_speed = speed;
+ hw->link_up = link_up;
+
+ retval = alf_config_wol(hw, wufc);
+ if (retval)
+ return retval;
+
+ /* clear phy interrupt */
+ retval = alf_ack_phy_intr(hw);
+ if (retval)
+ return retval;
+
+#ifdef CONFIG_ALX_DEBUGFS
+ if (adpt->cifs && adpt->hw.swoi_offload.len != 0) {
+ netif_info(adpt, wol, adpt->netdev, "swoi enabled\n");
+ alx_setup_annce(adpt, adpt->hw.link_speed);
+ swoi_en = true;
+ }
+#endif
+
+ if (wufc || swoi_en) {
+ /* pcie patch */
+ device_set_wakeup_enable(&pdev->dev, 1);
+ }
+
+ retval = alf_config_pow_save(hw, adpt->hw.link_speed,
+ (wufc ? true : false) || swoi_en, swoi_en,
+ (wufc & ALX_WOL_MAGIC ? true : false) || swoi_en, true);
+ if (retval)
+ return retval;
+
+ *wakeup = (wufc ? true : false) || swoi_en;
+ pci_disable_device(pdev);
+ return 0;
+}
+
+
+static void alx_shutdown(struct pci_dev *pdev)
+{
+ bool wakeup;
+
+ alx_shutdown_internal(pdev, &wakeup);
+
+ pci_wake_from_d3(pdev, wakeup);
+ pci_set_power_state(pdev, PCI_D3hot);
+}
+
+
+#ifdef CONFIG_PM_SLEEP
+static int alx_suspend(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ int retval;
+ bool wakeup;
+
+ retval = alx_shutdown_internal(pdev, &wakeup);
+ if (retval)
+ return retval;
+
+ if (wakeup) {
+ pci_prepare_to_sleep(pdev);
+ } else {
+ pci_wake_from_d3(pdev, false);
+ pci_set_power_state(pdev, PCI_D3hot);
+ }
+
+ return 0;
+}
+
+
+static int alx_resume(struct device *dev)
+{
+ struct pci_dev *pdev = to_pci_dev(dev);
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+ u32 retval;
+
+ pci_set_power_state(pdev, PCI_D0);
+ pci_restore_state(pdev);
+ /*
+ * pci_restore_state clears dev->state_saved so call
+ * pci_save_state to restore it.
+ */
+ pci_save_state(pdev);
+
+ pci_enable_wake(pdev, PCI_D3hot, 0);
+ pci_enable_wake(pdev, PCI_D3cold, 0);
+
+ alf_reset_pcie(hw, true, true);
+ alf_reset_phy(hw);
+ alf_reset_mac(hw);
+ alf_setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+
+ alf_config_wol(hw, 0);
+
+ if (netif_running(netdev)) {
+ retval = alx_open_internal(adpt, 0);
+ if (retval)
+ return retval;
+ }
+
+ netif_device_attach(netdev);
+ return 0;
+}
+#endif
+
+
+/*
+ * alx_update_hw_stats - Update the board statistics counters.
+ */
+static void alx_update_hw_stats(struct alx_adapter *adpt)
+{
+ struct net_device_stats *net_stats;
+ struct alx_hw *hw = &adpt->hw;
+ struct alx_hw_stats *hwstats = &adpt->hw_stats;
+ unsigned long *hwstat_item = NULL;
+ u32 hwstat_reg;
+ u32 hwstat_data;
+
+ if (CHK_ADPT_FLAG(STATE_DOWN) || CHK_ADPT_FLAG(STATE_RESETTING))
+ return;
+
+ /* update RX status */
+ hwstat_reg = hw->rxstat_reg;
+ hwstat_item = &hwstats->rx_ok;
+ while (hwstat_reg < hw->rxstat_reg + hw->rxstat_sz) {
+ alx_mem_r32(hw, hwstat_reg, &hwstat_data);
+ *hwstat_item += hwstat_data;
+ hwstat_reg += 4;
+ hwstat_item++;
+ }
+
+ /* update TX status */
+ hwstat_reg = hw->txstat_reg;
+ hwstat_item = &hwstats->tx_ok;
+ while (hwstat_reg < hw->txstat_reg + hw->txstat_sz) {
+ alx_mem_r32(hw, hwstat_reg, &hwstat_data);
+ *hwstat_item += hwstat_data;
+ hwstat_reg += 4;
+ hwstat_item++;
+ }
+
+ net_stats = &adpt->netdev->stats;
+ net_stats->rx_packets = hwstats->rx_ok;
+ net_stats->tx_packets = hwstats->tx_ok;
+ net_stats->rx_bytes = hwstats->rx_byte_cnt;
+ net_stats->tx_bytes = hwstats->tx_byte_cnt;
+ net_stats->multicast = hwstats->rx_mcast;
+ net_stats->collisions = hwstats->tx_single_col +
+ hwstats->tx_multi_col * 2 +
+ hwstats->tx_late_col + hwstats->tx_abort_col;
+
+ net_stats->rx_errors = hwstats->rx_frag + hwstats->rx_fcs_err +
+ hwstats->rx_len_err + hwstats->rx_ov_sz +
+ hwstats->rx_ov_rrd + hwstats->rx_align_err;
+
+ net_stats->rx_fifo_errors = hwstats->rx_ov_rxf;
+ net_stats->rx_length_errors = hwstats->rx_len_err;
+ net_stats->rx_crc_errors = hwstats->rx_fcs_err;
+ net_stats->rx_frame_errors = hwstats->rx_align_err;
+ net_stats->rx_over_errors = hwstats->rx_ov_rrd + hwstats->rx_ov_rxf;
+
+ net_stats->rx_missed_errors = hwstats->rx_ov_rrd + hwstats->rx_ov_rxf;
+
+ net_stats->tx_errors = hwstats->tx_late_col + hwstats->tx_abort_col +
+ hwstats->tx_underrun + hwstats->tx_trunc;
+ net_stats->tx_fifo_errors = hwstats->tx_underrun;
+ net_stats->tx_aborted_errors = hwstats->tx_abort_col;
+ net_stats->tx_window_errors = hwstats->tx_late_col;
+}
+
+
+/*
+ * alx_get_stats - Get System Network Statistics
+ *
+ * Returns the address of the device statistics structure.
+ * The statistics are actually updated from the timer callback.
+ */
+static struct net_device_stats *alx_get_stats(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ alx_update_hw_stats(adpt);
+ return &netdev->stats;
+}
+
+
+static void alx_link_task_routine(struct alx_adapter *adpt)
+{
+ struct net_device *netdev = adpt->netdev;
+ struct alx_hw *hw = &adpt->hw;
+ char *link_desc;
+
+ if (!CHK_ADPT_FLAG(TASK_LSC_REQ))
+ return;
+ CLI_ADPT_FLAG(TASK_LSC_REQ);
+
+ if (CHK_ADPT_FLAG(STATE_DOWN))
+ return;
+
+ alf_check_phy_link(hw, &hw->link_speed, &hw->link_up);
+ netif_info(adpt, timer, adpt->netdev,
+ "link_speed = %d, link_up = %d\n",
+ hw->link_speed, hw->link_up);
+
+ if (!hw->link_up && time_after(adpt->link_jiffies, jiffies))
+ SET_ADPT_FLAG(TASK_LSC_REQ);
+
+ if (hw->link_up) {
+ if (netif_carrier_ok(netdev))
+ return;
+
+ link_desc = hw->link_speed == LX_LC_1000F ?
+ "1 Gbps Duplex Full" :
+ hw->link_speed == LX_LC_100F ?
+ "100 Mbps Duplex Full" :
+ hw->link_speed == LX_LC_100H ?
+ "100 Mbps Duplex Half" :
+ hw->link_speed == LX_LC_10F ?
+ "10 Mbps Duplex Full" :
+ hw->link_speed == LX_LC_10H ?
+ "10 Mbps Duplex Half" :
+ "unknown speed";
+ netif_info(adpt, timer, adpt->netdev,
+ "NIC Link is Up %s\n", link_desc);
+
+ alf_post_phy_link(hw, CHK_HW_FLAG(AZ_EN), hw->link_up,
+ hw->link_speed);
+ alf_config_aspm(hw, true, true);
+ alf_start_mac(hw);
+ netif_carrier_on(netdev);
+ } else {
+ /* only continue if link was up previously */
+ if (!netif_carrier_ok(netdev))
+ return;
+
+ hw->link_speed = 0;
+ netif_info(adpt, timer, adpt->netdev, "NIC Link is Down\n");
+ netif_carrier_off(netdev);
+
+ alf_post_phy_link(hw, CHK_HW_FLAG(AZ_EN), hw->link_up,
+ hw->link_speed);
+ alf_stop_mac(hw);
+ alf_config_aspm(hw, false, true);
+ alf_setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+ }
+}
+
+
+static void alx_reinit_task_routine(struct alx_adapter *adpt)
+{
+ if (!CHK_ADPT_FLAG(TASK_REINIT_REQ))
+ return;
+ CLI_ADPT_FLAG(TASK_REINIT_REQ);
+
+ if (CHK_ADPT_FLAG(STATE_DOWN) || CHK_ADPT_FLAG(STATE_RESETTING))
+ return;
+
+ alx_reinit_locked(adpt);
+}
+
+
+/*
+ * alx_timer_routine - Timer Call-back
+ */
+static void alx_timer_routine(unsigned long data)
+{
+ struct alx_adapter *adpt = (struct alx_adapter *)data;
+ unsigned long delay;
+
+ /* poll faster when waiting for link */
+ if (CHK_ADPT_FLAG(TASK_LSC_REQ))
+ delay = HZ / 10;
+ else
+ delay = HZ * 2;
+
+ /* Reset the timer */
+ mod_timer(&adpt->alx_timer, delay + jiffies);
+
+ alx_task_schedule(adpt);
+}
+
+
+/*
+ * alx_task_routine - manages and runs subtasks
+ */
+static void alx_task_routine(struct work_struct *work)
+{
+ struct alx_adapter *adpt = container_of(work,
+ struct alx_adapter, alx_task);
+ /* test state of adapter */
+ if (!CHK_ADPT_FLAG(STATE_WATCH_DOG))
+ netif_warn(adpt, timer, adpt->netdev,
+ "flag STATE_WATCH_DOG isn't set\n");
+
+ /* reinit task */
+ alx_reinit_task_routine(adpt);
+
+ /* link task */
+ alx_link_task_routine(adpt);
+
+ /* flush memory to make sure state is correct before next watchdog */
+ smp_mb__before_clear_bit();
+
+ CLI_ADPT_FLAG(STATE_WATCH_DOG);
+}
+
+
+/* Check whether enough transmit packet descriptors are available */
+static bool alx_check_num_tpdescs(struct alx_tx_queue *txque,
+ const struct sk_buff *skb)
+{
+ u16 num_required = 1;
+ u16 num_available = 0;
+ u16 produce_idx = txque->tpq.produce_idx;
+ u16 consume_idx = txque->tpq.consume_idx;
+ int i = 0;
+
+ u16 proto_hdr_len = 0;
+ if (skb_is_gso(skb)) {
+ proto_hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ if (proto_hdr_len < skb_headlen(skb))
+ num_required++;
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6)
+ num_required++;
+ }
+ for (i = 0; i < skb_shinfo(skb)->nr_frags; i++)
+ num_required++;
+ num_available = (consume_idx > produce_idx) ?
+ (consume_idx - produce_idx - 1) :
+ (txque->tpq.count + consume_idx - produce_idx - 1);
+
+ return num_required < num_available;
+}
+
+
+static int alx_tso_csum(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct sk_buff *skb,
+ union alx_sw_tpdesc *stpd)
+{
+ struct pci_dev *pdev = adpt->pdev;
+ u8 hdr_len;
+ int retval;
+
+ if (skb_is_gso(skb)) {
+ if (skb_header_cloned(skb)) {
+ retval = pskb_expand_head(skb, 0, 0, GFP_ATOMIC);
+ if (unlikely(retval))
+ return retval;
+ }
+
+ if (skb->protocol == htons(ETH_P_IP)) {
+ u32 pkt_len =
+ ((unsigned char *)ip_hdr(skb) - skb->data) +
+ ntohs(ip_hdr(skb)->tot_len);
+ if (skb->len > pkt_len)
+ pskb_trim(skb, pkt_len);
+ }
+
+ hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+ if (unlikely(skb->len == hdr_len)) {
+ /* we only need to do csum */
+ dev_warn(&pdev->dev,
+ "TSO not needed for a packet with no payload\n");
+ goto do_csum;
+ }
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV4) {
+ ip_hdr(skb)->check = 0;
+ tcp_hdr(skb)->check = ~csum_tcpudp_magic(
+ ip_hdr(skb)->saddr,
+ ip_hdr(skb)->daddr,
+ 0, IPPROTO_TCP, 0);
+ stpd->genr.ipv4 = 1;
+ }
+
+ if (skb_shinfo(skb)->gso_type & SKB_GSO_TCPV6) {
+ /* ipv6 tso need an extra tpd */
+ union alx_sw_tpdesc extra_tpd;
+
+ memset(stpd, 0, sizeof(union alx_sw_tpdesc));
+ memset(&extra_tpd, 0, sizeof(union alx_sw_tpdesc));
+
+ ipv6_hdr(skb)->payload_len = 0;
+ tcp_hdr(skb)->check = ~csum_ipv6_magic(
+ &ipv6_hdr(skb)->saddr,
+ &ipv6_hdr(skb)->daddr,
+ 0, IPPROTO_TCP, 0);
+ extra_tpd.tso.pkt_len = skb->len;
+ extra_tpd.tso.lso = 0x1;
+ extra_tpd.tso.lso_v2 = 0x1;
+ alx_set_tpdesc(txque, &extra_tpd);
+ stpd->tso.lso_v2 = 0x1;
+ }
+
+ stpd->tso.lso = 0x1;
+ stpd->tso.tcphdr_offset = skb_transport_offset(skb);
+ stpd->tso.mss = skb_shinfo(skb)->gso_size;
+ return 0;
+ }
+
+do_csum:
+ if (likely(skb->ip_summed == CHECKSUM_PARTIAL)) {
+ u8 css, cso;
+ cso = skb_checksum_start_offset(skb);
+
+ if (unlikely(cso & 0x1)) {
+ dev_err(&pdev->dev,
+ "payload offset can't be an odd number\n");
+ return -1;
+ } else {
+ css = cso + skb->csum_offset;
+
+ stpd->csum.payld_offset = cso >> 1;
+ stpd->csum.cxsum_offset = css >> 1;
+ stpd->csum.c_csum = 0x1;
+ }
+ }
+ return 0;
+}
+
+
+static void alx_tx_map(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct sk_buff *skb,
+ union alx_sw_tpdesc *stpd)
+{
+ struct alx_buffer *tpbuf = NULL;
+
+ unsigned int nr_frags = skb_shinfo(skb)->nr_frags;
+
+ unsigned int len = skb_headlen(skb);
+
+ u16 map_len = 0;
+ u16 mapped_len = 0;
+ u16 hdr_len = 0;
+ u16 f;
+ u32 tso = stpd->tso.lso;
+
+ if (tso) {
+ /* TSO */
+ map_len = hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.produce_idx);
+ tpbuf->length = map_len;
+ tpbuf->dma = dma_map_single(txque->dev,
+ skb->data, hdr_len, DMA_TO_DEVICE);
+ mapped_len += map_len;
+ stpd->genr.addr = tpbuf->dma;
+ stpd->genr.buffer_len = tpbuf->length;
+
+ alx_set_tpdesc(txque, stpd);
+ }
+
+ if (mapped_len < len) {
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.produce_idx);
+ tpbuf->length = len - mapped_len;
+ tpbuf->dma =
+ dma_map_single(txque->dev, skb->data + mapped_len,
+ tpbuf->length, DMA_TO_DEVICE);
+ stpd->genr.addr = tpbuf->dma;
+ stpd->genr.buffer_len = tpbuf->length;
+ alx_set_tpdesc(txque, stpd);
+ }
+
+ for (f = 0; f < nr_frags; f++) {
+ struct skb_frag_struct *frag;
+
+ frag = &skb_shinfo(skb)->frags[f];
+
+ tpbuf = GET_TP_BUFFER(txque, txque->tpq.produce_idx);
+ tpbuf->length = skb_frag_size(frag);
+ tpbuf->dma = skb_frag_dma_map(txque->dev, frag, 0,
+ tpbuf->length, DMA_TO_DEVICE);
+ stpd->genr.addr = tpbuf->dma;
+ stpd->genr.buffer_len = tpbuf->length;
+ alx_set_tpdesc(txque, stpd);
+ }
+
+
+ /* The last tpd */
+ alx_set_tpdesc_lastfrag(txque);
+ /*
+ * The last buffer info contains the skb address,
+ * so the skb is freed after it is unmapped
+ */
+ tpbuf->skb = skb;
+}
+
+
+static netdev_tx_t alx_start_xmit_frame(struct alx_adapter *adpt,
+ struct alx_tx_queue *txque,
+ struct sk_buff *skb)
+{
+ struct alx_hw *hw = &adpt->hw;
+ unsigned long flags = 0;
+ union alx_sw_tpdesc stpd; /* normal */
+
+ if (CHK_ADPT_FLAG(STATE_DOWN) ||
+ CHK_ADPT_FLAG(STATE_DIAG_RUNNING)) {
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+
+ if (!spin_trylock_irqsave(&adpt->tx_lock, flags)) {
+ alx_err(adpt, "tx locked!\n");
+ return NETDEV_TX_LOCKED;
+ }
+
+ if (!alx_check_num_tpdescs(txque, skb)) {
+ /* not enough descriptors, stop the queue */
+ netif_stop_queue(adpt->netdev);
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ return NETDEV_TX_BUSY;
+ }
+
+ memset(&stpd, 0, sizeof(union alx_sw_tpdesc));
+ /* do TSO and checksum */
+ if (alx_tso_csum(adpt, txque, skb, &stpd) != 0) {
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ dev_kfree_skb_any(skb);
+ return NETDEV_TX_OK;
+ }
+
+ if (unlikely(vlan_tx_tag_present(skb))) {
+ u16 vlan = vlan_tx_tag_get(skb);
+ u16 tag;
+ ALX_VLAN_TO_TAG(vlan, tag);
+ stpd.genr.vlan_tag = tag;
+ stpd.genr.instag = 0x1;
+ }
+
+ if (skb_network_offset(skb) != ETH_HLEN)
+ stpd.genr.type = 0x1; /* Ethernet frame */
+
+ alx_tx_map(adpt, txque, skb, &stpd);
+
+
+ /* update produce idx */
+ wmb();
+ alx_mem_w16(hw, txque->produce_reg, txque->tpq.produce_idx);
+ netif_info(adpt, tx_err, adpt->netdev,
+ "TX[%d]: tpq.consume_idx = 0x%x, tpq.produce_idx = 0x%x\n",
+ txque->que_idx, txque->tpq.consume_idx,
+ txque->tpq.produce_idx);
+ netif_info(adpt, tx_err, adpt->netdev,
+ "TX[%d]: Produce Reg[%x] = 0x%x\n",
+ txque->que_idx, txque->produce_reg, txque->tpq.produce_idx);
+
+ spin_unlock_irqrestore(&adpt->tx_lock, flags);
+ return NETDEV_TX_OK;
+}
+
+
+static netdev_tx_t alx_start_xmit(struct sk_buff *skb,
+ struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_tx_queue *txque;
+
+ txque = adpt->tx_queue[0];
+ return alx_start_xmit_frame(adpt, txque, skb);
+}
+
+
+/*
+ * alx_mii_ioctl
+ */
+static int alx_mii_ioctl(struct net_device *netdev,
+ struct ifreq *ifr, int cmd)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct mii_ioctl_data *data = if_mii(ifr);
+ int retval = 0;
+
+ if (!netif_running(netdev))
+ return -EINVAL;
+
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ data->phy_id = 0;
+ break;
+
+ case SIOCGMIIREG:
+ if (data->reg_num & ~(0x1F)) {
+ retval = -EFAULT;
+ goto out;
+ }
+
+ retval = alf_read_phy_reg(hw, data->reg_num, &data->val_out);
+ netif_dbg(adpt, hw, adpt->netdev, "read phy %02x %04x\n",
+ data->reg_num, data->val_out);
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ break;
+
+ case SIOCSMIIREG:
+ if (data->reg_num & ~(0x1F)) {
+ retval = -EFAULT;
+ goto out;
+ }
+
+ retval = alf_write_phy_reg(hw, data->reg_num, data->val_in);
+ netif_dbg(adpt, hw, adpt->netdev, "write phy %02x %04x\n",
+ data->reg_num, data->val_in);
+ if (retval) {
+ retval = -EIO;
+ goto out;
+ }
+ break;
+ default:
+ retval = -EOPNOTSUPP;
+ break;
+ }
+out:
+ return retval;
+
+}
+
+
+static int alx_ioctl(struct net_device *netdev, struct ifreq *ifr, int cmd)
+{
+ switch (cmd) {
+ case SIOCGMIIPHY:
+ case SIOCGMIIREG:
+ case SIOCSMIIREG:
+ return alx_mii_ioctl(netdev, ifr, cmd);
+ default:
+ return -EOPNOTSUPP;
+ }
+}
+
+static bool alx_enable_ioport(struct pci_dev *pdev)
+{
+#ifdef CONFIG_ALX_DEBUGFS
+ return true;
+#else
+ return false;
+#endif
+}
+
+
+#ifdef CONFIG_NET_POLL_CONTROLLER
+static void alx_poll_controller(struct net_device *netdev)
+{
+ struct alx_adapter *adpt = netdev_priv(netdev);
+ int num_msix_intrs = adpt->num_msix_intrs;
+ int msix_idx;
+
+ /* if interface is down do nothing */
+ if (CHK_ADPT_FLAG(STATE_DOWN))
+ return;
+
+ if (CHK_ADPT_FLAG(MSIX_EN)) {
+ for (msix_idx = 0; msix_idx < num_msix_intrs; msix_idx++) {
+ struct alx_msix_param *msix = adpt->msix[msix_idx];
+ if (CHK_MSIX_FLAG(RXS) || CHK_MSIX_FLAG(TXS))
+ alx_msix_rtx(0, msix);
+ else if (CHK_MSIX_FLAG(TIMER))
+ alx_msix_timer(0, msix);
+ else if (CHK_MSIX_FLAG(ALERT))
+ alx_msix_alert(0, msix);
+ else if (CHK_MSIX_FLAG(SMB))
+ alx_msix_smb(0, msix);
+ else if (CHK_MSIX_FLAG(PHY))
+ alx_msix_phy(0, msix);
+ }
+ } else {
+ alx_interrupt(adpt->pdev->irq, netdev);
+ }
+}
+#endif
+
+#ifdef CONFIG_ALX_DEBUGFS
+
+static struct dentry *alx_debugfs_dir;
+
+static const struct file_operations alx_fops_debugfs = {
+ .open = simple_open,
+ .unlocked_ioctl = alx_debugfs_ioctl,
+ .compat_ioctl = alx_debugfs_ioctl,
+};
+
+/*
+ * Use network device events to create/remove/rename
+ * debugfs file entries
+ */
+static int alx_debugfs_event(struct notifier_block *nb,
+ unsigned long event, void *ptr)
+{
+ struct net_device *netdev = ptr;
+ struct alx_adapter *adpt = netdev_priv(netdev);
+
+ if (netdev->netdev_ops->ndo_open != alx_open || !alx_debugfs_dir)
+ return NOTIFY_DONE;
+
+ switch (event) {
+ case NETDEV_CHANGENAME:
+ if (adpt->dfs.fent) {
+ adpt->dfs.fent = debugfs_rename(alx_debugfs_dir,
+ adpt->dfs.fent,
+ alx_debugfs_dir,
+ netdev->name);
+ }
+ break;
+ case NETDEV_GOING_DOWN:
+ if (adpt->dfs.fent) {
+ debugfs_remove(adpt->dfs.fent);
+ adpt->dfs.fent = NULL;
+ }
+ break;
+ case NETDEV_UP:
+ adpt->dfs.fent = debugfs_create_file(netdev->name, S_IRUGO,
+ alx_debugfs_dir, adpt,
+ &alx_fops_debugfs);
+ if (IS_ERR(adpt->dfs.fent))
+ adpt->dfs.fent = NULL;
+ }
+
+ return NOTIFY_DONE;
+}
+
+
+static struct notifier_block alx_notify_debugfs = {
+ .notifier_call = alx_debugfs_event,
+};
+
+static void __init alx_debug_init(void)
+{
+ struct dentry *dent;
+
+ dent = debugfs_create_dir("alx", NULL);
+ if (!dent || IS_ERR(dent))
+ return;
+ alx_debugfs_dir = dent;
+
+ register_netdevice_notifier(&alx_notify_debugfs);
+}
+
+static void __exit alx_debug_exit(void)
+{
+ if (alx_debugfs_dir) {
+ unregister_netdevice_notifier(&alx_notify_debugfs);
+ debugfs_remove(alx_debugfs_dir);
+ alx_debugfs_dir = NULL;
+ }
+}
+
+#else
+
+#define alx_debug_init()
+#define alx_debug_exit()
+
+#endif
+
+
+static const struct net_device_ops alx_netdev_ops = {
+ .ndo_open = alx_open,
+ .ndo_stop = alx_stop,
+ .ndo_start_xmit = alx_start_xmit,
+ .ndo_get_stats = alx_get_stats,
+ .ndo_set_rx_mode = alx_set_multicase_list,
+ .ndo_validate_addr = eth_validate_addr,
+ .ndo_set_mac_address = alx_set_mac_address,
+ .ndo_change_mtu = alx_change_mtu,
+ .ndo_do_ioctl = alx_ioctl,
+ .ndo_tx_timeout = alx_tx_timeout,
+ .ndo_fix_features = alx_fix_features,
+ .ndo_set_features = alx_set_features,
+#ifdef CONFIG_NET_POLL_CONTROLLER
+ .ndo_poll_controller = alx_poll_controller,
+#endif
+};
+
+
+/*
+ * alx_init - Device Initialization Routine
+ */
+static int __devinit alx_init(struct pci_dev *pdev,
+ const struct pci_device_id *ent)
+{
+ struct net_device *netdev;
+ struct alx_adapter *adpt = NULL;
+ struct alx_hw *hw = NULL;
+ static int cards_found;
+ bool enable_ioport;
+ int i, bars, retval;
+
+ /* enable device (incl. PCI PM wakeup and hotplug setup) */
+ enable_ioport = alx_enable_ioport(pdev);
+ if (enable_ioport) {
+ bars = pci_select_bars(pdev, IORESOURCE_MEM | IORESOURCE_IO);
+ retval = pci_enable_device(pdev);
+ } else {
+ bars = pci_select_bars(pdev, IORESOURCE_MEM);
+ retval = pci_enable_device_mem(pdev);
+ }
+
+ if (retval) {
+ dev_err(&pdev->dev, "cannot enable PCI device\n");
+ goto err_alloc_device;
+ }
+
+ /*
+ * The alx chip can DMA to 64-bit addresses, but it uses a single
+ * shared register for the high 32 bits, so only a single, aligned,
+ * 4 GB physical address range can be used at a time.
+ */
+ if (!dma_set_mask(&pdev->dev, DMA_BIT_MASK(64)) &&
+ !dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(64))) {
+ dev_info(&pdev->dev, "DMA to 64-bit addresses\n");
+ } else {
+ retval = dma_set_mask(&pdev->dev, DMA_BIT_MASK(32));
+ if (retval) {
+ retval = dma_set_coherent_mask(&pdev->dev,
+ DMA_BIT_MASK(32));
+ if (retval) {
+ dev_err(&pdev->dev,
+ "No usable DMA config, aborting\n");
+ goto err_alloc_pci_res_mem;
+ }
+ }
+ }
+
+ retval = pci_request_selected_regions(pdev, bars, alx_drv_name);
+ if (retval) {
+ dev_err(&pdev->dev,
+ "pci_request_selected_regions failed(bars:%d)\n", bars);
+ goto err_alloc_pci_res_mem;
+ }
+
+ pci_enable_pcie_error_reporting(pdev);
+ pci_set_master(pdev);
+
+ netdev = alloc_etherdev(sizeof(struct alx_adapter));
+ if (!netdev) {
+ dev_err(&pdev->dev, "etherdev alloc failed\n");
+ retval = -ENOMEM;
+ goto err_alloc_netdev;
+ }
+
+ SET_NETDEV_DEV(netdev, &pdev->dev);
+ netdev->irq = pdev->irq;
+
+ adpt = netdev_priv(netdev);
+ pci_set_drvdata(pdev, adpt);
+ adpt->netdev = netdev;
+ adpt->pdev = pdev;
+ hw = &adpt->hw;
+ hw->adpt = adpt;
+ adpt->msg_enable = ALX_MSG_DEFAULT;
+ adpt->ioport = enable_ioport;
+ adpt->bars = bars;
+
+ hw->hw_addr = pci_ioremap_bar(pdev, BAR_0);
+ if (!hw->hw_addr) {
+ alx_err(adpt, "cannot map device registers\n");
+ retval = -EIO;
+ goto err_iomap;
+ }
+ netdev->base_addr = (unsigned long)adpt->hw.hw_addr;
+
+ if (adpt->ioport) {
+ for (i = BAR_1; i <= BAR_5; i++) {
+ if (pci_resource_len(pdev, i) == 0)
+ continue;
+ if (pci_resource_flags(pdev, i) & IORESOURCE_IO) {
+ hw->io_addr = pci_resource_start(pdev, i);
+ break;
+ }
+ }
+ }
+
+ /* set the callback members of the netdev structure */
+ netdev->netdev_ops = &alx_netdev_ops;
+ alx_set_ethtool_ops(netdev);
+ netdev->watchdog_timeo = ALX_WATCHDOG_TIME;
+ strncpy(netdev->name, pci_name(pdev), sizeof(netdev->name) - 1);
+
+ adpt->bd_number = cards_found;
+
+ /* init the alx_adapter structure */
+ retval = alx_init_adapter(adpt);
+ if (retval) {
+ alx_err(adpt, "net device private data init failed\n");
+ goto err_init_adapter;
+ }
+
+ /* reset PCIe */
+ retval = alf_reset_pcie(hw, true, true);
+ if (retval) {
+ alx_err(adpt, "PCIe reset failed, error = %d\n", retval);
+ retval = -EIO;
+ goto err_init_adapter;
+ }
+
+ /* Init GPHY as early as possible due to power saving issue */
+ retval = alf_reset_phy(hw);
+ if (retval) {
+ alx_err(adpt, "PHY Reset failed, error = %d\n", retval);
+ retval = -EIO;
+ goto err_init_adapter;
+ }
+
+ /* reset mac */
+ retval = alf_reset_mac(hw);
+ if (retval) {
+ alx_err(adpt, "MAC Reset failed, error = %d\n", retval);
+ retval = -EIO;
+ goto err_init_adapter;
+ }
+
+ /* setup link to put it in a known good starting state */
+ retval = alf_setup_phy_link(hw, hw->autoneg_advertised, true,
+ !hw->disable_fc_autoneg);
+
+ /* set default ring and queue parameters */
+ adpt->num_txdescs = 1024;
+ adpt->num_rxdescs = 512;
+ adpt->max_rxques = min_t(int, ALX_MAX_RX_QUEUES, num_online_cpus());
+ adpt->max_txques = min_t(int, ALX_MAX_TX_QUEUES, num_online_cpus());
+
+ netdev->hw_features = NETIF_F_SG |
+ NETIF_F_HW_CSUM |
+ NETIF_F_HW_VLAN_RX |
+ NETIF_F_TSO |
+ NETIF_F_TSO6;
+ netdev->features = netdev->hw_features |
+ NETIF_F_HW_VLAN_TX;
+
+ /* read the permanent MAC address and program it into the registers */
+ retval = alf_get_mac_addr(hw, hw->mac_perm_addr);
+ if (retval) {
+ eth_hw_addr_random(netdev);
+ memcpy(hw->mac_perm_addr, netdev->dev_addr, netdev->addr_len);
+ }
+
+ memcpy(hw->mac_addr, hw->mac_perm_addr, netdev->addr_len);
+ alf_set_mac_addr(hw, hw->mac_addr);
+
+ memcpy(netdev->dev_addr, hw->mac_perm_addr, netdev->addr_len);
+ memcpy(netdev->perm_addr, hw->mac_perm_addr, netdev->addr_len);
+ retval = alx_validate_mac_addr(netdev->perm_addr);
+ if (retval) {
+ alx_err(adpt, "invalid MAC address\n");
+ goto err_init_adapter;
+ }
+
+ setup_timer(&adpt->alx_timer, &alx_timer_routine,
+ (unsigned long)adpt);
+ INIT_WORK(&adpt->alx_task, alx_task_routine);
+
+ /* Number of supported queues */
+ alx_set_num_queues(adpt);
+ retval = alx_set_interrupt_mode(adpt);
+ if (retval) {
+ alx_err(adpt, "can't set interrupt mode\n");
+ goto err_set_interrupt_mode;
+ }
+
+ retval = alx_set_interrupt_param(adpt);
+ if (retval) {
+ alx_err(adpt, "can't set interrupt parameter\n");
+ goto err_set_interrupt_param;
+ }
+
+ retval = alx_alloc_all_rtx_queue(adpt);
+ if (retval) {
+ alx_err(adpt, "can't allocate memory for queues\n");
+ goto err_alloc_rtx_queue;
+ }
+
+ alx_set_register_info_special(adpt);
+
+ netif_dbg(adpt, probe, adpt->netdev,
+ "num_msix_noque_intrs = %d, num_msix_rxque_intrs = %d, num_msix_txque_intrs = %d\n",
+ adpt->num_msix_noques, adpt->num_msix_rxques,
+ adpt->num_msix_txques);
+ netif_dbg(adpt, probe, adpt->netdev, "num_msix_all_intrs = %d\n",
+ adpt->num_msix_intrs);
+
+ netif_dbg(adpt, probe, adpt->netdev,
+ "RX Queue Count = %u, HRX Queue Count = %u, SRX Queue Count = %u, TX Queue Count = %u\n",
+ adpt->num_rxques, adpt->num_hw_rxques, adpt->num_sw_rxques,
+ adpt->num_txques);
+
+ /* only the following WOL types are supported */
+ adpt->wol = (ALX_WOL_MAGIC | ALX_WOL_PHY);
+ adpt->cifs = true;
+ device_set_wakeup_enable(&adpt->pdev->dev, adpt->wol);
+
+ SET_ADPT_FLAG(STATE_DOWN);
+ strcpy(netdev->name, "eth%d");
+ retval = register_netdev(netdev);
+ if (retval) {
+ alx_err(adpt, "register netdevice failed\n");
+ goto err_register_netdev;
+ }
+ adpt->netdev_registered = true;
+
+ /* carrier off reporting is important to ethtool even BEFORE open */
+ netif_carrier_off(netdev);
+ /* keep all transmit queues stopped until the device is opened */
+ netif_tx_stop_all_queues(netdev);
+
+ /* print the MAC address */
+ netif_info(adpt, probe, adpt->netdev, "%pM\n", netdev->dev_addr);
+
+ /* print the adapter capability */
+ if (CHK_ADPT_FLAG(MSI_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MSI Capable: %s\n",
+ CHK_ADPT_FLAG(MSI_EN) ? "enabled" : "disabled");
+ }
+ if (CHK_ADPT_FLAG(MSIX_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MSIX Capable: %s\n",
+ CHK_ADPT_FLAG(MSIX_EN) ? "enabled" : "disabled");
+ }
+ if (CHK_ADPT_FLAG(MRQ_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MRQ Capable: %s\n",
+ CHK_ADPT_FLAG(MRQ_EN) ? "enabled" : "disabled");
+ }
+ if (CHK_ADPT_FLAG(MTQ_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "MTQ Capable: %s\n",
+ CHK_ADPT_FLAG(MTQ_EN) ? "enabled" : "disabled");
+ }
+ if (CHK_ADPT_FLAG(SRSS_CAP)) {
+ netif_info(adpt, probe, adpt->netdev,
+ "RSS(SW) Capable: %s\n",
+ CHK_ADPT_FLAG(SRSS_EN) ? "enabled" : "disabled");
+ }
+
+ pr_info("alx: Atheros Gigabit Network Connection\n");
+ cards_found++;
+ return 0;
+
+err_register_netdev:
+ alx_free_all_rtx_queue(adpt);
+err_alloc_rtx_queue:
+ alx_reset_interrupt_param(adpt);
+err_set_interrupt_param:
+ alx_reset_interrupt_mode(adpt);
+err_set_interrupt_mode:
+err_init_adapter:
+ iounmap(adpt->hw.hw_addr);
+err_iomap:
+ free_netdev(netdev);
+err_alloc_netdev:
+ pci_release_selected_regions(pdev, bars);
+err_alloc_pci_res_mem:
+ pci_disable_device(pdev);
+err_alloc_device:
+ dev_err(&pdev->dev, "device probe failed, error = %d\n", retval);
+ return retval;
+}
+
+
+/*
+ * alx_remove - Device Removal Routine
+ */
+static void __devexit alx_remove(struct pci_dev *pdev)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct alx_hw *hw = &adpt->hw;
+ struct net_device *netdev = adpt->netdev;
+
+ SET_ADPT_FLAG(STATE_DOWN);
+ cancel_work_sync(&adpt->alx_task);
+
+ alf_config_pow_save(hw, 0, false, false, false, false);
+
+ /* restore the permanent MAC address */
+ alf_set_mac_addr(hw, hw->mac_perm_addr);
+
+ if (adpt->netdev_registered) {
+ unregister_netdev(netdev);
+ adpt->netdev_registered = false;
+ }
+
+ alx_free_all_rtx_queue(adpt);
+ alx_reset_interrupt_param(adpt);
+ alx_reset_interrupt_mode(adpt);
+
+ iounmap(adpt->hw.hw_addr);
+ pci_release_selected_regions(pdev, adpt->bars);
+
+ netif_info(adpt, probe, adpt->netdev, "complete\n");
+ free_netdev(netdev);
+
+ pci_disable_pcie_error_reporting(pdev);
+
+ pci_disable_device(pdev);
+}
+
+
+/*
+ * alx_pci_error_detected
+ */
+static pci_ers_result_t alx_pci_error_detected(struct pci_dev *pdev,
+ pci_channel_state_t state)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+ pci_ers_result_t retval = PCI_ERS_RESULT_NEED_RESET;
+
+ netif_device_detach(netdev);
+
+ if (state == pci_channel_io_perm_failure) {
+ retval = PCI_ERS_RESULT_DISCONNECT;
+ goto out;
+ }
+
+ if (netif_running(netdev))
+ alx_stop_internal(adpt, ALX_OPEN_CTRL_RESET_MAC);
+ pci_disable_device(pdev);
+out:
+ return retval;
+}
+
+
+/*
+ * alx_pci_error_slot_reset
+ */
+static pci_ers_result_t alx_pci_error_slot_reset(struct pci_dev *pdev)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ pci_ers_result_t retval = PCI_ERS_RESULT_DISCONNECT;
+
+ if (pci_enable_device(pdev)) {
+ alx_err(adpt, "cannot re-enable PCI device after reset\n");
+ goto out;
+ }
+
+ pci_set_master(pdev);
+ pci_enable_wake(pdev, PCI_D3hot, 0);
+ pci_enable_wake(pdev, PCI_D3cold, 0);
+ alf_reset_mac(&adpt->hw);
+ retval = PCI_ERS_RESULT_RECOVERED;
+out:
+ pci_cleanup_aer_uncorrect_error_status(pdev);
+ return retval;
+}
+
+
+/*
+ * alx_pci_error_resume
+ */
+static void alx_pci_error_resume(struct pci_dev *pdev)
+{
+ struct alx_adapter *adpt = pci_get_drvdata(pdev);
+ struct net_device *netdev = adpt->netdev;
+
+ if (netif_running(netdev)) {
+ if (alx_open_internal(adpt, 0))
+ return;
+ }
+
+ netif_device_attach(netdev);
+}
+
+
+static struct pci_error_handlers alx_err_handler = {
+ .error_detected = alx_pci_error_detected,
+ .slot_reset = alx_pci_error_slot_reset,
+ .resume = alx_pci_error_resume,
+};
+
+
+#ifdef CONFIG_PM_SLEEP
+static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
+#define ALX_PM_OPS (&alx_pm_ops)
+#else
+#define ALX_PM_OPS NULL
+#endif
+
+
+static struct pci_driver alx_driver = {
+ .name = alx_drv_name,
+ .id_table = alx_pci_tbl,
+ .probe = alx_init,
+ .remove = __devexit_p(alx_remove),
+ .shutdown = alx_shutdown,
+ .err_handler = &alx_err_handler,
+ .driver.pm = ALX_PM_OPS,
+};
+
+
+static int __init alx_init_module(void)
+{
+ pr_info("%s\n", alx_drv_description);
+
+ alx_debug_init();
+ return pci_register_driver(&alx_driver);
+}
+module_init(alx_init_module);
+
+
+static void __exit alx_exit_module(void)
+{
+ pci_unregister_driver(&alx_driver);
+ alx_debug_exit();
+}
+module_exit(alx_exit_module);
diff --git a/drivers/net/ethernet/atheros/alx/alx_sw.h b/drivers/net/ethernet/atheros/alx/alx_sw.h
new file mode 100644
index 0000000..9780813
--- /dev/null
+++ b/drivers/net/ethernet/atheros/alx/alx_sw.h
@@ -0,0 +1,478 @@
+/*
+ * Copyright (c) 2012 Qualcomm Atheros, Inc.
+ *
+ * Permission to use, copy, modify, and/or distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#ifndef _ALX_SW_H_
+#define _ALX_SW_H_
+
+#include "alx_cifs.h"
+
+/* cifs */
+#define CIFS_ANNCE_ADDR 0x700 /* ANNOUNCEMENT msg address */
+#define PACKET_MAP_ADDR 0x8000
+
+/* Vendor ID */
+#define ALX_VENDOR_ID 0x1969
+
+/* Device IDs */
+#define ALX_DEV_ID_AR8161 0x1091 /* l1f */
+#define ALX_DEV_ID_AR8162 0x1090 /* l2f */
+
+/* revision IDs */
+#define ALX_REV_ID_AR8161_V2_0 0x10 /* B0 */
+
+
+#define ALX_PCI_CMD (PCI_COMMAND_IO | PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER)
+
+/* Generic Registers */
+#define ALX_PCI_DEV_STAT 0x62 /* 16 bits */
+#define ALX_PCI_DEV_STAT_CERR 0x0001
+#define ALX_PCI_DEV_STAT_NFERR 0x0002
+#define ALX_PCI_DEV_STAT_FERR 0x0004
+
+#define ALX_PCI_IND_ACC_ADDR 0x80 /* 32 bits */
+#define ALX_PCI_IND_ACC_DATA 0x84 /* 32 bits */
+
+#define ALX_PCI_DEV_MISC_CTRL 0x21C
+
+#define ALX_ISR 0x1600
+#define ALX_IMR 0x1604
+#define ALX_ISR_SMB 0x00000001
+#define ALX_ISR_TIMER 0x00000002
+#define ALX_ISR_MANU 0x00000004
+#define ALX_ISR_RXF_OV 0x00000008
+#define ALX_ISR_RFD_UR 0x00000010
+#define ALX_ISR_TX_Q1 0x00000020
+#define ALX_ISR_TX_Q2 0x00000040
+#define ALX_ISR_TX_Q3 0x00000080
+#define ALX_ISR_TXF_UR 0x00000100
+#define ALX_ISR_DMAR 0x00000200
+#define ALX_ISR_DMAW 0x00000400
+#define ALX_ISR_TX_CREDIT 0x00000800
+#define ALX_ISR_PHY 0x00001000
+#define ALX_ISR_PHY_LPW 0x00002000
+#define ALX_ISR_TXQ_TO 0x00004000
+#define ALX_ISR_TX_Q0 0x00008000
+#define ALX_ISR_RX_Q0 0x00010000
+#define ALX_ISR_RX_Q1 0x00020000
+#define ALX_ISR_RX_Q2 0x00040000
+#define ALX_ISR_RX_Q3 0x00080000
+#define ALX_ISR_MAC_RX 0x00100000
+#define ALX_ISR_MAC_TX 0x00200000
+#define ALX_ISR_PCIE_UR 0x00400000
+#define ALX_ISR_PCIE_FERR 0x00800000
+#define ALX_ISR_PCIE_NFERR 0x01000000
+#define ALX_ISR_PCIE_CERR 0x02000000
+#define ALX_ISR_PCIE_LNKDOWN 0x04000000
+#define ALX_ISR_RX_Q4 0x08000000
+#define ALX_ISR_RX_Q5 0x10000000
+#define ALX_ISR_RX_Q6 0x20000000
+#define ALX_ISR_RX_Q7 0x40000000
+#define ALX_ISR_DIS 0x80000000
+
+
+#define ALX_IMR_NORMAL_MASK (\
+ ALX_ISR_MANU |\
+ ALX_ISR_OVER |\
+ ALX_ISR_TXQ |\
+ ALX_ISR_RXQ |\
+ ALX_ISR_PHY_LPW |\
+ ALX_ISR_PHY |\
+ ALX_ISR_ERROR)
+
+#define ALX_ISR_ALERT_MASK (\
+ ALX_ISR_DMAR |\
+ ALX_ISR_DMAW |\
+ ALX_ISR_TXQ_TO |\
+ ALX_ISR_PCIE_FERR |\
+ ALX_ISR_PCIE_LNKDOWN |\
+ ALX_ISR_RFD_UR |\
+ ALX_ISR_RXF_OV)
+
+#define ALX_ISR_TXQ (\
+ ALX_ISR_TX_Q0 |\
+ ALX_ISR_TX_Q1 |\
+ ALX_ISR_TX_Q2 |\
+ ALX_ISR_TX_Q3)
+
+#define ALX_ISR_RXQ (\
+ ALX_ISR_RX_Q0 |\
+ ALX_ISR_RX_Q1 |\
+ ALX_ISR_RX_Q2 |\
+ ALX_ISR_RX_Q3 |\
+ ALX_ISR_RX_Q4 |\
+ ALX_ISR_RX_Q5 |\
+ ALX_ISR_RX_Q6 |\
+ ALX_ISR_RX_Q7)
+
+#define ALX_ISR_OVER (\
+ ALX_ISR_RFD_UR |\
+ ALX_ISR_RXF_OV |\
+ ALX_ISR_TXF_UR)
+
+#define ALX_ISR_ERROR (\
+ ALX_ISR_DMAR |\
+ ALX_ISR_TXQ_TO |\
+ ALX_ISR_DMAW |\
+ ALX_ISR_PCIE_ERROR)
+
+#define ALX_ISR_PCIE_ERROR (\
+ ALX_ISR_PCIE_FERR |\
+ ALX_ISR_PCIE_LNKDOWN)
+
+/* MISC Register */
+#define ALX_MISC 0x19C0
+#define ALX_MISC_INTNLOSC_OPEN 0x00000008
+
+#define ALX_CLK_GATE 0x1814
+
+/* DMA address */
+#define DMA_ADDR_HI_MASK 0xffffffff00000000ULL
+#define DMA_ADDR_LO_MASK 0x00000000ffffffffULL
+
+#define ALX_DMA_ADDR_HI(_addr) \
+ ((u32)(((u64)(_addr) & DMA_ADDR_HI_MASK) >> 32))
+#define ALX_DMA_ADDR_LO(_addr) \
+ ((u32)((u64)(_addr) & DMA_ADDR_LO_MASK))
+
+#define ALX_MAX_HW_TXQ 4
+#define ALX_MAX_HW_RXQ 1
+
+/* mac address length */
+#define ALX_ETH_LENGTH_OF_ADDRESS ETH_ALEN
+#define ALX_ETH_LENGTH_OF_HEADER ETH_HLEN
+
+#define ALX_ETH_CRC(_addr, _len) ether_crc((_len), (_addr))
+
+#define ALX_MAX_SETUP_LNK_CYCLE 100
+
+/* Device Type definitions for new protocol MDIO commands */
+#define ALX_MDIO_DEV_TYPE_NORM 0
+
+/* Wake On Lan */
+#define ALX_WOL_PHY 0x00000001 /* PHY Status Change */
+#define ALX_WOL_MAGIC 0x00000002 /* Magic Packet */
+
+#define ALX_MAX_EEPROM_LEN 0x200
+#define ALX_MAX_HWREG_LEN 0x200
+
+/* RSS Settings */
+enum alx_rss_mode {
+ alx_rss_mode_disable = 0,
+ alx_rss_sig_que = 1,
+ alx_rss_mul_que_sig_int = 2,
+ alx_rss_mul_que_mul_int = 4,
+};
+
+/* Flow Control Settings */
+enum alx_fc_mode {
+ alx_fc_none = 0,
+ alx_fc_rx_pause,
+ alx_fc_tx_pause,
+ alx_fc_full,
+ alx_fc_default
+};
+
+/* WRR Restrict Settings */
+enum alx_wrr_mode {
+ alx_wrr_mode_none = 0,
+ alx_wrr_mode_high,
+ alx_wrr_mode_high2,
+ alx_wrr_mode_all
+};
+
+enum alx_mac_type {
+ alx_mac_unknown = 0,
+ alx_mac_l1f,
+ alx_mac_l2f,
+};
+
+struct alx_hw_dma {
+ u32 rfdmem_hi[ALX_MAX_HW_RXQ];
+ u32 rfdmem_lo[ALX_MAX_HW_RXQ];
+ u32 rrdmem_hi[ALX_MAX_HW_RXQ];
+ u32 rrdmem_lo[ALX_MAX_HW_RXQ];
+
+ u32 tpdmem_hi[ALX_MAX_HW_TXQ];
+ u32 tpdmem_lo[ALX_MAX_HW_TXQ];
+};
+
+/* Statistics counters collected by the MAC */
+struct alx_hw_stats {
+ /* rx */
+ unsigned long rx_ok;
+ unsigned long rx_bcast;
+ unsigned long rx_mcast;
+ unsigned long rx_pause;
+ unsigned long rx_ctrl;
+ unsigned long rx_fcs_err;
+ unsigned long rx_len_err;
+ unsigned long rx_byte_cnt;
+ unsigned long rx_runt;
+ unsigned long rx_frag;
+ unsigned long rx_sz_64B;
+ unsigned long rx_sz_127B;
+ unsigned long rx_sz_255B;
+ unsigned long rx_sz_511B;
+ unsigned long rx_sz_1023B;
+ unsigned long rx_sz_1518B;
+ unsigned long rx_sz_max;
+ unsigned long rx_ov_sz;
+ unsigned long rx_ov_rxf;
+ unsigned long rx_ov_rrd;
+ unsigned long rx_align_err;
+ unsigned long rx_bc_byte_cnt;
+ unsigned long rx_mc_byte_cnt;
+ unsigned long rx_err_addr;
+
+ /* tx */
+ unsigned long tx_ok;
+ unsigned long tx_bcast;
+ unsigned long tx_mcast;
+ unsigned long tx_pause;
+ unsigned long tx_exc_defer;
+ unsigned long tx_ctrl;
+ unsigned long tx_defer;
+ unsigned long tx_byte_cnt;
+ unsigned long tx_sz_64B;
+ unsigned long tx_sz_127B;
+ unsigned long tx_sz_255B;
+ unsigned long tx_sz_511B;
+ unsigned long tx_sz_1023B;
+ unsigned long tx_sz_1518B;
+ unsigned long tx_sz_max;
+ unsigned long tx_single_col;
+ unsigned long tx_multi_col;
+ unsigned long tx_late_col;
+ unsigned long tx_abort_col;
+ unsigned long tx_underrun;
+ unsigned long tx_trd_eop;
+ unsigned long tx_len_err;
+ unsigned long tx_trunc;
+ unsigned long tx_bc_byte_cnt;
+ unsigned long tx_mc_byte_cnt;
+ unsigned long update;
+};
+
+struct alx_hw {
+ struct alx_adapter *adpt;
+ u8 __iomem *hw_addr; /* mapped device register base */
+ u16 io_addr; /* ioport base address */
+ u16 pci_venid;
+ u16 pci_devid;
+ u16 pci_sub_devid;
+ u16 pci_sub_venid;
+ u8 pci_revid;
+
+ bool msi_lnkpatch;
+ u32 dma_chnl;
+ u32 hwreg_sz;
+ u32 eeprom_sz;
+
+ /* PHY parameter */
+ u32 phy_id;
+ u32 autoneg_advertised;
+ u8 link_speed;
+ bool link_up;
+ spinlock_t mdio_lock;
+
+ /* MAC parameter */
+ enum alx_mac_type mac_type;
+ u8 mac_addr[ALX_ETH_LENGTH_OF_ADDRESS];
+ u8 mac_perm_addr[ALX_ETH_LENGTH_OF_ADDRESS];
+
+ u32 mtu;
+ u32 raw_mtu;
+ u16 rxstat_reg;
+ u16 rxstat_sz;
+ u16 txstat_reg;
+ u16 txstat_sz;
+
+ u16 tx_prod_reg[4];
+ u16 tx_cons_reg[4];
+ u16 rx_prod_reg[2];
+ u16 rx_cons_reg[2];
+
+ struct alx_hw_dma dma;
+
+ /* WRR parameter */
+ enum alx_wrr_mode wrr_mode;
+ u32 wrr_prio0;
+ u32 wrr_prio1;
+ u32 wrr_prio2;
+ u32 wrr_prio3;
+
+ /* RSS parameter */
+ enum alx_rss_mode rss_mode;
+ u8 rss_hstype;
+ u8 rss_base_cpu;
+ u16 rss_idt_size;
+ u32 rss_idt[32];
+ u8 rss_key[40];
+
+ /* flow control parameter */
+ enum alx_fc_mode cur_fc_mode; /* FC mode in effect */
+ enum alx_fc_mode req_fc_mode; /* FC mode requested by caller */
+ bool disable_fc_autoneg; /* Do not autonegotiate FC */
+ bool fc_was_autonegged; /* the result of autonegging */
+ bool fc_single_pause;
+
+ /* Others */
+ u32 preamble;
+ u32 intr_mask;
+ u16 smb_timer;
+ u16 imt_mod; /* interrupt moderation timer (in 2us units) */
+ u32 flags;
+
+#ifdef CONFIG_ALX_DEBUGFS
+ /* SWOI parameter */
+ u32 annce_addr_off;
+ struct alx_swoi swoi_offload;
+ struct alx_teredo teredo_offload;
+#endif
+};
+
+
+#define ALX_HW_FLAG_LX_MASK 0x3F
+#define ALX_HW_FLAG_BROADCAST_EN LX_FLT_BROADCAST
+#define ALX_HW_FLAG_MULTIALL_EN LX_FLT_MULTI_ALL
+#define ALX_HW_FLAG_PROMISC_EN LX_FLT_PROMISC
+#define ALX_HW_FLAG_VLANSTRIP_EN LX_VLAN_STRIP
+#define ALX_HW_FLAG_LOOPBACK_EN LX_LOOPBACK
+
+#define ALX_HW_FLAG_L0S_CAP 0x00010000
+#define ALX_HW_FLAG_L0S_EN 0x00020000
+#define ALX_HW_FLAG_L1_CAP 0x00040000
+#define ALX_HW_FLAG_L1_EN 0x00080000
+#define ALX_HW_FLAG_PWSAVE_CAP 0x00100000
+#define ALX_HW_FLAG_PWSAVE_EN 0x00200000
+#define ALX_HW_FLAG_AZ_CAP 0x00400000
+#define ALX_HW_FLAG_AZ_EN 0x00800000
+#define ALX_HW_FLAG_PTP_CAP 0x01000000
+#define ALX_HW_FLAG_PTP_EN 0x02000000
+#define ALX_HW_FLAG_GIGA_CAP 0x04000000
+
+
+#define CHK_HW_FLAG(_flag) CHK_FLAG(hw, HW, _flag)
+#define SET_HW_FLAG(_flag) SET_FLAG(hw, HW, _flag)
+#define CLI_HW_FLAG(_flag) CLI_FLAG(hw, HW, _flag)
+
+
+/* RSS hstype Definitions */
+#define ALX_RSS_HSTYP_IPV4_EN 0x00000001
+#define ALX_RSS_HSTYP_TCP4_EN 0x00000002
+#define ALX_RSS_HSTYP_IPV6_EN 0x00000004
+#define ALX_RSS_HSTYP_TCP6_EN 0x00000008
+#define ALX_RSS_HSTYP_ALL_EN (\
+ ALX_RSS_HSTYP_IPV4_EN |\
+ ALX_RSS_HSTYP_TCP4_EN |\
+ ALX_RSS_HSTYP_IPV6_EN |\
+ ALX_RSS_HSTYP_TCP6_EN)
+
+
+/* definitions for flags */
+#define CHK_FLAG(_st, _type, _flag) \
+ ((_st)->flags & (ALX_##_type##_FLAG_##_flag))
+
+#define SET_FLAG(_st, _type, _flag) \
+ ((_st)->flags |= (ALX_##_type##_FLAG_##_flag))
+
+#define CLI_FLAG(_st, _type, _flag) \
+ ((_st)->flags &= ~(ALX_##_type##_FLAG_##_flag))
+
+
+int alx_cfg_r32(const struct alx_hw *hw, int reg, u32 *pval);
+int alx_cfg_w32(const struct alx_hw *hw, int reg, u32 val);
+int alx_cfg_r16(const struct alx_hw *hw, int reg, u16 *pval);
+int alx_cfg_w16(const struct alx_hw *hw, int reg, u16 val);
+
+
+void alx_mem_flush(const struct alx_hw *hw);
+void alx_mem_r32(const struct alx_hw *hw, int reg, u32 *val);
+void alx_mem_w32(const struct alx_hw *hw, int reg, u32 val);
+void alx_mem_r16(const struct alx_hw *hw, int reg, u16 *val);
+void alx_mem_w16(const struct alx_hw *hw, int reg, u16 val);
+void alx_mem_w8(const struct alx_hw *hw, int reg, u8 val);
+
+
+/* special definitions for hw */
+#define ALF_MAX_MSIX_NOQUE_INTRS 4
+#define ALF_MIN_MSIX_NOQUE_INTRS 4
+#define ALF_MAX_MSIX_QUEUE_INTRS 12
+#define ALF_MIN_MSIX_QUEUE_INTRS 12
+#define ALF_MAX_MSIX_INTRS \
+ (ALF_MAX_MSIX_QUEUE_INTRS + ALF_MAX_MSIX_NOQUE_INTRS)
+#define ALF_MIN_MSIX_INTRS \
+ (ALF_MIN_MSIX_NOQUE_INTRS + ALF_MIN_MSIX_QUEUE_INTRS)
+
+
+/* function prototypes */
+extern int alf_identify_nic(struct alx_hw *hw);
+extern int alf_read_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 *phy_data);
+extern int alf_write_phy_reg(struct alx_hw *hw, u16 reg_addr, u16 phy_data);
+#ifdef CONFIG_ALX_DEBUGFS
+extern int alf_read_ext_phy_reg(struct alx_hw *hw, u8 type, u16 reg_addr,
+ u16 *phy_data);
+extern int alf_write_ext_phy_reg(struct alx_hw *hw, u8 type, u16 reg_addr,
+ u16 phy_data);
+#endif
+extern int alf_init_phy(struct alx_hw *hw);
+extern int alf_reset_phy(struct alx_hw *hw);
+extern int alf_setup_phy_link(struct alx_hw *hw, u8 speed, bool autoneg,
+ bool fc);
+extern int alf_check_phy_link(struct alx_hw *hw, u8 *speed, bool *link_up);
+extern int alf_post_phy_link(struct alx_hw *hw, bool az_en, bool link_up,
+ u8 speed);
+extern int alf_reset_mac(struct alx_hw *hw);
+extern int alf_start_mac(struct alx_hw *hw);
+extern int alf_stop_mac(struct alx_hw *hw);
+extern int alf_init_mac(struct alx_hw *hw, u16 rxbuf_sz, u16 rx_qnum,
+ u16 rxring_sz, u16 tx_qnum, u16 txring_sz);
+extern int alf_get_mac_addr(struct alx_hw *hw, u8 *addr);
+extern void alf_set_mac_addr(struct alx_hw *hw, u8 *addr);
+extern int alf_reset_pcie(struct alx_hw *hw, bool l0s_en, bool l1_en);
+extern int alf_config_aspm(struct alx_hw *hw, bool l0s_en, bool l1_en);
+extern int alf_config_wol(struct alx_hw *hw, u32 wufc);
+extern void alf_update_mac_filter(struct alx_hw *hw);
+extern int alf_config_pow_save(struct alx_hw *hw, u8 speed, bool wol_en,
+ bool tx_en, bool rx_en, bool pws_en);
+extern int alf_set_mc_addr(struct alx_hw *hw, u8 *addr);
+extern int alf_clear_mc_addr(struct alx_hw *hw);
+extern void alf_config_tx(struct alx_hw *hw);
+extern int alf_config_msix(struct alx_hw *hw, u16 num_intrs,
+ bool msix_en, bool msi_en);
+extern int alf_ack_phy_intr(struct alx_hw *hw);
+extern int alf_enable_legacy_intr(struct alx_hw *hw);
+extern int alf_disable_legacy_intr(struct alx_hw *hw);
+extern int alf_enable_msix_intr(struct alx_hw *hw, u8 entry_idx);
+
+extern int alf_disable_msix_intr(struct alx_hw *hw, u8 entry_idx);
+extern int alf_config_rss(struct alx_hw *hw, bool rss_en);
+extern int alf_config_fc(struct alx_hw *hw);
+extern void alf_get_ethtool_regs(struct alx_hw *hw, void *buff);
+extern void alf_init_hw(struct alx_hw *hw);
+
+/* Logging message functions */
+void __printf(3, 4)
+alx_hw_printk(const char *level, const struct alx_hw *hw, const char *fmt, ...);
+
+#define alx_hw_err(_hw, _format, ...) \
+ alx_hw_printk(KERN_ERR, _hw, _format, ##__VA_ARGS__)
+#define alx_hw_warn(_hw, _format, ...) \
+ alx_hw_printk(KERN_WARNING, _hw, _format, ##__VA_ARGS__)
+#define alx_hw_info(_hw, _format, ...) \
+ alx_hw_printk(KERN_INFO, _hw, _format, ##__VA_ARGS__)
+
+#endif /* _ALX_SW_H_ */
--
1.7.7