Message-ID: <20090604232610.GA16053@intel.com>
Date:	Fri, 5 Jun 2009 07:26:10 +0800
From:	Yunpeng Gao <yunpeng.gao@...el.com>
To:	Greg KH <gregkh@...e.de>
Cc:	linux-kernel@...r.kernel.org
Subject: Re: linux-next: Tree for June 3 (staging)

On Thu, Jun 04, 2009 at 07:11:52AM +0800, Greg KH wrote:
> On Wed, Jun 03, 2009 at 08:40:33AM -0700, Randy Dunlap wrote:
> > Greg KH wrote:
> > > On Wed, Jun 03, 2009 at 08:24:49AM -0700, Randy Dunlap wrote:
> > >> Stephen Rothwell wrote:
> > >>> Hi all,
> > >>>
> > >>> Looks like a real rush since -rc7 ...
> > >>
> > >> drivers/staging/comedi/drivers/adq12b.c:328: error: implicit declaration of function 'udelay'
> > >>
> > >>
> > >> drivers/staging/mrst_nand/ffsport.c:360: error: 'struct request' has no member named 'sector'
> > >> drivers/staging/mrst_nand/ffsport.c:376: error: 'struct request' has no member named 'sector'
> > >> drivers/staging/mrst_nand/ffsport.c:376: error: 'struct request' has no member named 'current_nr_sectors'
> > >> drivers/staging/mrst_nand/ffsport.c:380: error: 'struct request' has no member named 'sector'
> > >> drivers/staging/mrst_nand/ffsport.c:381: error: 'struct request' has no member named 'current_nr_sectors'
> > >> drivers/staging/mrst_nand/ffsport.c:392: error: 'struct request' has no member named 'current_nr_sectors'
> > >> drivers/staging/mrst_nand/ffsport.c:499: error: implicit declaration of function 'elv_next_request'
> > >> drivers/staging/mrst_nand/ffsport.c:517: error: implicit declaration of function 'end_request'
> > >> drivers/staging/mrst_nand/ffsport.c:738: error: implicit declaration of function 'blk_queue_hardsect_size'
> > > 
> > > Can I get your .config for this?  It works here with a "default" one.
> > 
> > attached (for mrst_nand/ffsport.c build errors)
> 
> Ah, thanks, figured it out.
> 
> Gao, this looks like a problem in your ffsport.c code.  It isn't up to
> date with the 2.6.30-rc8 kernel tree.  Care to send me a follow-on patch
> to fix this up?
> 
> thanks,
> 
> greg k-h

Hi Greg,
   I have fixed these compile errors in ffsport.c.

   But I just found that my email client (Outlook) automatically
   replaces TABs with spaces. So, unfortunately, all the TABs
   in my first patch submission were converted to spaces by
   Outlook.

   So I'd like to switch to the Mutt email client and re-submit
   the patch below (the ffsport.c bug fixes are included).
   Does that make sense?

   Thanks.

   Rgds,
   Yunpeng

-------------------------------------------------------------------------

From 99a8d7b9e9fe1c0cf11d72b7ece44fb924417dae Mon Sep 17 00:00:00 2001
From: Gao Yunpeng <yunpeng.gao@...el.com>
Date: Fri, 5 Jun 2009 06:21:38 +0800
Subject: [PATCH] First submission of Intel Moorestown platform NAND flash driver

This driver is a standalone Linux block device driver. It acts as
a normal hard disk. Please refer to the README file for details.

Signed-off-by: Gao Yunpeng <yunpeng.gao@...el.com>
---
 drivers/staging/Kconfig                     |    2 +
 drivers/staging/Makefile                    |    1 +
 drivers/staging/mrst_nand/Kconfig           |   27 +
 drivers/staging/mrst_nand/Makefile          |    7 +
 drivers/staging/mrst_nand/NAND_Regs_4.h     |  619 ++++
 drivers/staging/mrst_nand/README            |   29 +
 drivers/staging/mrst_nand/ffsdefs.h         |   58 +
 drivers/staging/mrst_nand/ffsport.c         |  954 ++++++
 drivers/staging/mrst_nand/ffsport.h         |   89 +
 drivers/staging/mrst_nand/flash.c           | 4150 +++++++++++++++++++++++++++
 drivers/staging/mrst_nand/flash.h           |  158 +
 drivers/staging/mrst_nand/lld.c             |  492 ++++
 drivers/staging/mrst_nand/lld.h             |  123 +
 drivers/staging/mrst_nand/lld_cdma.c        | 2736 ++++++++++++++++++
 drivers/staging/mrst_nand/lld_cdma.h        |  138 +
 drivers/staging/mrst_nand/lld_emu.c         |  788 +++++
 drivers/staging/mrst_nand/lld_emu.h         |   51 +
 drivers/staging/mrst_nand/lld_nand.c        | 3113 ++++++++++++++++++++
 drivers/staging/mrst_nand/lld_nand.h        |  116 +
 drivers/staging/mrst_nand/spectraswconfig.h |   86 +
 20 files changed, 13737 insertions(+), 0 deletions(-)
 create mode 100644 drivers/staging/mrst_nand/Kconfig
 create mode 100644 drivers/staging/mrst_nand/Makefile
 create mode 100644 drivers/staging/mrst_nand/NAND_Regs_4.h
 create mode 100644 drivers/staging/mrst_nand/README
 create mode 100644 drivers/staging/mrst_nand/ffsdefs.h
 create mode 100644 drivers/staging/mrst_nand/ffsport.c
 create mode 100644 drivers/staging/mrst_nand/ffsport.h
 create mode 100644 drivers/staging/mrst_nand/flash.c
 create mode 100644 drivers/staging/mrst_nand/flash.h
 create mode 100644 drivers/staging/mrst_nand/lld.c
 create mode 100644 drivers/staging/mrst_nand/lld.h
 create mode 100644 drivers/staging/mrst_nand/lld_cdma.c
 create mode 100644 drivers/staging/mrst_nand/lld_cdma.h
 create mode 100644 drivers/staging/mrst_nand/lld_emu.c
 create mode 100644 drivers/staging/mrst_nand/lld_emu.h
 create mode 100644 drivers/staging/mrst_nand/lld_nand.c
 create mode 100644 drivers/staging/mrst_nand/lld_nand.h
 create mode 100644 drivers/staging/mrst_nand/spectraswconfig.h

diff --git a/drivers/staging/Kconfig b/drivers/staging/Kconfig
index 0dcf9ca..3fb8b75 100644
--- a/drivers/staging/Kconfig
+++ b/drivers/staging/Kconfig
@@ -115,5 +115,7 @@ source "drivers/staging/line6/Kconfig"
 
 source "drivers/staging/serqt_usb/Kconfig"
 
+source "drivers/staging/mrst_nand/Kconfig"
+
 endif # !STAGING_EXCLUDE_BUILD
 endif # STAGING
diff --git a/drivers/staging/Makefile b/drivers/staging/Makefile
index 47dfd5b..4da30fe 100644
--- a/drivers/staging/Makefile
+++ b/drivers/staging/Makefile
@@ -40,3 +40,4 @@ obj-$(CONFIG_PLAN9AUTH)		+= p9auth/
 obj-$(CONFIG_HECI)		+= heci/
 obj-$(CONFIG_LINE6_USB)		+= line6/
 obj-$(CONFIG_USB_SERIAL_QUATECH_ESU100)	+= serqt_usb/
+obj-$(CONFIG_MRST_NAND)		+= mrst_nand/
diff --git a/drivers/staging/mrst_nand/Kconfig b/drivers/staging/mrst_nand/Kconfig
new file mode 100644
index 0000000..a2017df
--- /dev/null
+++ b/drivers/staging/mrst_nand/Kconfig
@@ -0,0 +1,27 @@
+
+menuconfig MRST_NAND
+	tristate "Moorestown NAND Flash controller"
+	depends on BLOCK
+	default m
+	---help---
+	  Enable the driver for the NAND Flash controller on the Intel
+	  Moorestown platform.
+
+choice
+	prompt "Compile for"
+	depends on MRST_NAND
+	default MRST_NAND_HW
+
+config MRST_NAND_HW
+	bool "Actual hardware mode"
+	help
+	  The driver communicates with the actual hardware's register
+	  interface, in DMA mode.
+
+config MRST_NAND_EMU
+	bool "RAM emulator testing"
+	help
+	  The driver emulates flash on a RAM buffer and/or a disk file. Useful for testing the FTL layer.
+
+endchoice
+
diff --git a/drivers/staging/mrst_nand/Makefile b/drivers/staging/mrst_nand/Makefile
new file mode 100644
index 0000000..261891c
--- /dev/null
+++ b/drivers/staging/mrst_nand/Makefile
@@ -0,0 +1,7 @@
+#
+# Makefile of Intel Moorestown NAND controller driver
+#
+
+obj-$(CONFIG_MRST_NAND) += spectra.o
+spectra-objs := ffsport.o flash.o lld.o lld_emu.o lld_nand.o lld_cdma.o
+
diff --git a/drivers/staging/mrst_nand/NAND_Regs_4.h b/drivers/staging/mrst_nand/NAND_Regs_4.h
new file mode 100644
index 0000000..e192e4a
--- /dev/null
+++ b/drivers/staging/mrst_nand/NAND_Regs_4.h
@@ -0,0 +1,619 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#define DEVICE_RESET				0x0
+#define     DEVICE_RESET__BANK0				0x0001
+#define     DEVICE_RESET__BANK1				0x0002
+#define     DEVICE_RESET__BANK2				0x0004
+#define     DEVICE_RESET__BANK3				0x0008
+
+#define TRANSFER_SPARE_REG			0x10
+#define     TRANSFER_SPARE_REG__FLAG			0x0001
+
+#define LOAD_WAIT_CNT				0x20
+#define     LOAD_WAIT_CNT__VALUE				0xffff
+
+#define PROGRAM_WAIT_CNT			0x30
+#define     PROGRAM_WAIT_CNT__VALUE			0xffff
+
+#define ERASE_WAIT_CNT				0x40
+#define     ERASE_WAIT_CNT__VALUE			0xffff
+
+#define INT_MON_CYCCNT				0x50
+#define     INT_MON_CYCCNT__VALUE			0xffff
+
+#define RB_PIN_ENABLED				0x60
+#define     RB_PIN_ENABLED__BANK0			0x0001
+#define     RB_PIN_ENABLED__BANK1			0x0002
+#define     RB_PIN_ENABLED__BANK2			0x0004
+#define     RB_PIN_ENABLED__BANK3			0x0008
+
+#define MULTIPLANE_OPERATION			0x70
+#define     MULTIPLANE_OPERATION__FLAG			0x0001
+
+#define MULTIPLANE_READ_ENABLE			0x80
+#define     MULTIPLANE_READ_ENABLE__FLAG		0x0001
+
+#define COPYBACK_DISABLE			0x90
+#define     COPYBACK_DISABLE__FLAG			0x0001
+
+#define CACHE_WRITE_ENABLE			0xa0
+#define     CACHE_WRITE_ENABLE__FLAG			0x0001
+
+#define CACHE_READ_ENABLE			0xb0
+#define     CACHE_READ_ENABLE__FLAG			0x0001
+
+#define PREFETCH_MODE				0xc0
+#define     PREFETCH_MODE__PREFETCH_EN			0x0001
+#define     PREFETCH_MODE__PREFETCH_BURST_LENGTH	0xfff0
+
+#define CHIP_ENABLE_DONT_CARE			0xd0
+#define     CHIP_EN_DONT_CARE__FLAG			0x01
+
+#define ECC_ENABLE				0xe0
+#define     ECC_ENABLE__FLAG				0x0001
+
+#define GLOBAL_INT_ENABLE			0xf0
+#define     GLOBAL_INT_EN_FLAG				0x01
+
+#define WE_2_RE					0x100
+#define     WE_2_RE__VALUE				0x003f
+
+#define ADDR_2_DATA				0x110
+#define     ADDR_2_DATA__VALUE				0x003f
+
+#define RE_2_WE					0x120
+#define     RE_2_WE__VALUE				0x003f
+
+#define ACC_CLKS    				0x130
+#define     ACC_CLKS__VALUE				0x000f
+
+#define NUMBER_OF_PLANES			0x140
+#define     NUMBER_OF_PLANES__VALUE			0x0007
+
+#define PAGES_PER_BLOCK				0x150
+#define     PAGES_PER_BLOCK__VALUE			0xffff
+
+#define DEVICE_WIDTH				0x160
+#define     DEVICE_WIDTH__VALUE				0x0003
+
+#define DEVICE_MAIN_AREA_SIZE			0x170
+#define     DEVICE_MAIN_AREA_SIZE__VALUE		0xffff
+
+#define DEVICE_SPARE_AREA_SIZE			0x180
+#define     DEVICE_SPARE_AREA_SIZE__VALUE		0xffff
+
+#define TWO_ROW_ADDR_CYCLES			0x190
+#define     TWO_ROW_ADDR_CYCLES__FLAG			0x0001
+
+#define MULTIPLANE_ADDR_RESTRICT		0x1a0
+#define     MULTIPLANE_ADDR_RESTRICT__FLAG		0x0001
+
+#define ECC_CORRECTION				0x1b0
+#define     ECC_CORRECTION__VALUE			0x001f
+
+#define READ_MODE				0x1c0
+#define     READ_MODE__VALUE				0x000f
+
+#define WRITE_MODE				0x1d0
+#define     WRITE_MODE__VALUE				0x000f
+
+#define COPYBACK_MODE				0x1e0
+#define     COPYBACK_MODE__VALUE			0x000f
+
+#define RDWR_EN_LO_CNT				0x1f0
+#define     RDWR_EN_LO_CNT__VALUE			0x001f
+
+#define RDWR_EN_HI_CNT				0x200
+#define     RDWR_EN_HI_CNT__VALUE			0x001f
+
+#define MAX_RD_DELAY				0x210
+#define     MAX_RD_DELAY__VALUE				0x000f
+
+#define CS_SETUP_CNT				0x220
+#define     CS_SETUP_CNT__VALUE				0x001f
+
+#define SPARE_AREA_SKIP_BYTES			0x230
+#define     SPARE_AREA_SKIP_BYTES__VALUE		0x003f
+
+#define SPARE_AREA_MARKER			0x240
+#define     SPARE_AREA_MARKER__VALUE			0xffff
+
+#define DEVICES_CONNECTED			0x250
+#define     DEVICES_CONNECTED__VALUE			0x0007
+
+#define DIE_MASK					0x260
+#define     DIE_MASK__VALUE				0x00ff
+
+#define FIRST_BLOCK_OF_NEXT_PLANE		0x270
+#define     FIRST_BLOCK_OF_NEXT_PLANE__VALUE		0xffff
+
+#define WRITE_PROTECT				0x280
+#define     WRITE_PROTECT__FLAG				0x0001
+
+#define RE_2_RE					0x290
+#define     RE_2_RE__VALUE				0x003f
+
+#define MANUFACTURER_ID			0x300
+#define     MANUFACTURER_ID__VALUE			0x00ff
+
+#define DEVICE_ID				0x310
+#define     DEVICE_ID__VALUE				0x00ff
+
+#define DEVICE_PARAM_0				0x320
+#define     DEVICE_PARAM_0__VALUE			0x00ff
+
+#define DEVICE_PARAM_1				0x330
+#define     DEVICE_PARAM_1__VALUE			0x00ff
+
+#define DEVICE_PARAM_2				0x340
+#define     DEVICE_PARAM_2__VALUE			0x00ff
+
+#define LOGICAL_PAGE_DATA_SIZE			0x350
+#define     LOGICAL_PAGE_DATA_SIZE__VALUE		0xffff
+
+#define LOGICAL_PAGE_SPARE_SIZE			0x360
+#define     LOGICAL_PAGE_SPARE_SIZE__VALUE		0xffff
+
+#define REVISION					0x370
+#define     REVISION__VALUE				0xffff
+
+#define ONFI_DEVICE_FEATURES			0x380
+#define     ONFI_DEVICE_FEATURES__VALUE			0x003f
+
+#define ONFI_OPTIONAL_COMMANDS		0x390
+#define     ONFI_OPTIONAL_COMMANDS__VALUE		0x003f
+
+#define ONFI_TIMING_MODE			0x3a0
+#define     ONFI_TIMING_MODE__VALUE			0x003f
+
+#define ONFI_PGM_CACHE_TIMING_MODE		0x3b0
+#define     ONFI_PGM_CACHE_TIMING_MODE__VALUE		0x003f
+
+#define ONFI_DEVICE_NO_OF_LUNS			0x3c0
+#define     ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS		0x00ff
+#define     ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE		0x0100
+
+#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L	0x3d0
+#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L__VALUE	0xffff
+
+#define ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U	0x3e0
+#define     ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U__VALUE	0xffff
+
+#define FEATURES					0x3f0
+#define     FEATURES__N_BANKS				0x0003
+#define     FEATURES__ECC_MAX_ERR			0x003c
+#define     FEATURES__DMA					0x0040
+#define     FEATURES__CMD_DMA				0x0080
+#define     FEATURES__PARTITION				0x0100
+#define     FEATURES__XDMA_SIDEBAND			0x0200
+#define     FEATURES__GPREG				0x0400
+#define     FEATURES__INDEX_ADDR				0x0800
+
+#define TRANSFER_MODE				0x400
+#define     TRANSFER_MODE__VALUE			0x0003
+
+#define INTR_STATUS0				0x410
+#define     INTR_STATUS0__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_STATUS0__ECC_ERR			0x0002
+#define     INTR_STATUS0__DMA_CMD_COMP			0x0004
+#define     INTR_STATUS0__TIME_OUT			0x0008
+#define     INTR_STATUS0__PROGRAM_FAIL			0x0010
+#define     INTR_STATUS0__ERASE_FAIL			0x0020
+#define     INTR_STATUS0__LOAD_COMP			0x0040
+#define     INTR_STATUS0__PROGRAM_COMP			0x0080
+#define     INTR_STATUS0__ERASE_COMP			0x0100
+#define     INTR_STATUS0__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_STATUS0__LOCKED_BLK			0x0400
+#define     INTR_STATUS0__UNSUP_CMD			0x0800
+#define     INTR_STATUS0__INT_ACT			0x1000
+#define     INTR_STATUS0__RST_COMP			0x2000
+#define     INTR_STATUS0__PIPE_CMD_ERR			0x4000
+#define     INTR_STATUS0__PAGE_XFER_INC			0x8000
+
+#define INTR_EN0					0x420
+#define     INTR_EN0__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_EN0__ECC_ERR				0x0002
+#define     INTR_EN0__DMA_CMD_COMP			0x0004
+#define     INTR_EN0__TIME_OUT				0x0008
+#define     INTR_EN0__PROGRAM_FAIL			0x0010
+#define     INTR_EN0__ERASE_FAIL				0x0020
+#define     INTR_EN0__LOAD_COMP				0x0040
+#define     INTR_EN0__PROGRAM_COMP			0x0080
+#define     INTR_EN0__ERASE_COMP				0x0100
+#define     INTR_EN0__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_EN0__LOCKED_BLK				0x0400
+#define     INTR_EN0__UNSUP_CMD				0x0800
+#define     INTR_EN0__INT_ACT				0x1000
+#define     INTR_EN0__RST_COMP				0x2000
+#define     INTR_EN0__PIPE_CMD_ERR			0x4000
+#define     INTR_EN0__PAGE_XFER_INC			0x8000
+
+#define PAGE_CNT0				0x430
+#define     PAGE_CNT0__VALUE				0x00ff
+
+#define ERR_PAGE_ADDR0				0x440
+#define     ERR_PAGE_ADDR0__VALUE			0xffff
+
+#define ERR_BLOCK_ADDR0			0x450
+#define     ERR_BLOCK_ADDR0__VALUE			0xffff
+
+#define INTR_STATUS1				0x460
+#define     INTR_STATUS1__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_STATUS1__ECC_ERR			0x0002
+#define     INTR_STATUS1__DMA_CMD_COMP			0x0004
+#define     INTR_STATUS1__TIME_OUT			0x0008
+#define     INTR_STATUS1__PROGRAM_FAIL			0x0010
+#define     INTR_STATUS1__ERASE_FAIL			0x0020
+#define     INTR_STATUS1__LOAD_COMP			0x0040
+#define     INTR_STATUS1__PROGRAM_COMP			0x0080
+#define     INTR_STATUS1__ERASE_COMP			0x0100
+#define     INTR_STATUS1__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_STATUS1__LOCKED_BLK			0x0400
+#define     INTR_STATUS1__UNSUP_CMD			0x0800
+#define     INTR_STATUS1__INT_ACT			0x1000
+#define     INTR_STATUS1__RST_COMP			0x2000
+#define     INTR_STATUS1__PIPE_CMD_ERR			0x4000
+#define     INTR_STATUS1__PAGE_XFER_INC			0x8000
+
+#define INTR_EN1					0x470
+#define     INTR_EN1__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_EN1__ECC_ERR				0x0002
+#define     INTR_EN1__DMA_CMD_COMP			0x0004
+#define     INTR_EN1__TIME_OUT				0x0008
+#define     INTR_EN1__PROGRAM_FAIL			0x0010
+#define     INTR_EN1__ERASE_FAIL				0x0020
+#define     INTR_EN1__LOAD_COMP				0x0040
+#define     INTR_EN1__PROGRAM_COMP			0x0080
+#define     INTR_EN1__ERASE_COMP				0x0100
+#define     INTR_EN1__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_EN1__LOCKED_BLK				0x0400
+#define     INTR_EN1__UNSUP_CMD				0x0800
+#define     INTR_EN1__INT_ACT				0x1000
+#define     INTR_EN1__RST_COMP				0x2000
+#define     INTR_EN1__PIPE_CMD_ERR			0x4000
+#define     INTR_EN1__PAGE_XFER_INC			0x8000
+
+#define PAGE_CNT1				0x480
+#define     PAGE_CNT1__VALUE				0x00ff
+
+#define ERR_PAGE_ADDR1				0x490
+#define     ERR_PAGE_ADDR1__VALUE			0xffff
+
+#define ERR_BLOCK_ADDR1			0x4a0
+#define     ERR_BLOCK_ADDR1__VALUE			0xffff
+
+#define INTR_STATUS2				0x4b0
+#define     INTR_STATUS2__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_STATUS2__ECC_ERR			0x0002
+#define     INTR_STATUS2__DMA_CMD_COMP			0x0004
+#define     INTR_STATUS2__TIME_OUT			0x0008
+#define     INTR_STATUS2__PROGRAM_FAIL			0x0010
+#define     INTR_STATUS2__ERASE_FAIL			0x0020
+#define     INTR_STATUS2__LOAD_COMP			0x0040
+#define     INTR_STATUS2__PROGRAM_COMP			0x0080
+#define     INTR_STATUS2__ERASE_COMP			0x0100
+#define     INTR_STATUS2__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_STATUS2__LOCKED_BLK			0x0400
+#define     INTR_STATUS2__UNSUP_CMD			0x0800
+#define     INTR_STATUS2__INT_ACT			0x1000
+#define     INTR_STATUS2__RST_COMP			0x2000
+#define     INTR_STATUS2__PIPE_CMD_ERR			0x4000
+#define     INTR_STATUS2__PAGE_XFER_INC			0x8000
+
+#define INTR_EN2					0x4c0
+#define     INTR_EN2__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_EN2__ECC_ERR				0x0002
+#define     INTR_EN2__DMA_CMD_COMP			0x0004
+#define     INTR_EN2__TIME_OUT				0x0008
+#define     INTR_EN2__PROGRAM_FAIL			0x0010
+#define     INTR_EN2__ERASE_FAIL				0x0020
+#define     INTR_EN2__LOAD_COMP				0x0040
+#define     INTR_EN2__PROGRAM_COMP			0x0080
+#define     INTR_EN2__ERASE_COMP				0x0100
+#define     INTR_EN2__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_EN2__LOCKED_BLK				0x0400
+#define     INTR_EN2__UNSUP_CMD				0x0800
+#define     INTR_EN2__INT_ACT				0x1000
+#define     INTR_EN2__RST_COMP				0x2000
+#define     INTR_EN2__PIPE_CMD_ERR			0x4000
+#define     INTR_EN2__PAGE_XFER_INC			0x8000
+
+#define PAGE_CNT2				0x4d0
+#define     PAGE_CNT2__VALUE				0x00ff
+
+#define ERR_PAGE_ADDR2				0x4e0
+#define     ERR_PAGE_ADDR2__VALUE			0xffff
+
+#define ERR_BLOCK_ADDR2			0x4f0
+#define     ERR_BLOCK_ADDR2__VALUE			0xffff
+
+#define INTR_STATUS3				0x500
+#define     INTR_STATUS3__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_STATUS3__ECC_ERR			0x0002
+#define     INTR_STATUS3__DMA_CMD_COMP			0x0004
+#define     INTR_STATUS3__TIME_OUT			0x0008
+#define     INTR_STATUS3__PROGRAM_FAIL			0x0010
+#define     INTR_STATUS3__ERASE_FAIL			0x0020
+#define     INTR_STATUS3__LOAD_COMP			0x0040
+#define     INTR_STATUS3__PROGRAM_COMP			0x0080
+#define     INTR_STATUS3__ERASE_COMP			0x0100
+#define     INTR_STATUS3__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_STATUS3__LOCKED_BLK			0x0400
+#define     INTR_STATUS3__UNSUP_CMD			0x0800
+#define     INTR_STATUS3__INT_ACT			0x1000
+#define     INTR_STATUS3__RST_COMP			0x2000
+#define     INTR_STATUS3__PIPE_CMD_ERR			0x4000
+#define     INTR_STATUS3__PAGE_XFER_INC			0x8000
+
+#define INTR_EN3					0x510
+#define     INTR_EN3__ECC_TRANSACTION_DONE		0x0001
+#define     INTR_EN3__ECC_ERR				0x0002
+#define     INTR_EN3__DMA_CMD_COMP			0x0004
+#define     INTR_EN3__TIME_OUT				0x0008
+#define     INTR_EN3__PROGRAM_FAIL			0x0010
+#define     INTR_EN3__ERASE_FAIL				0x0020
+#define     INTR_EN3__LOAD_COMP				0x0040
+#define     INTR_EN3__PROGRAM_COMP			0x0080
+#define     INTR_EN3__ERASE_COMP				0x0100
+#define     INTR_EN3__PIPE_CPYBCK_CMD_COMP		0x0200
+#define     INTR_EN3__LOCKED_BLK				0x0400
+#define     INTR_EN3__UNSUP_CMD				0x0800
+#define     INTR_EN3__INT_ACT				0x1000
+#define     INTR_EN3__RST_COMP				0x2000
+#define     INTR_EN3__PIPE_CMD_ERR			0x4000
+#define     INTR_EN3__PAGE_XFER_INC			0x8000
+
+#define PAGE_CNT3				0x520
+#define     PAGE_CNT3__VALUE				0x00ff
+
+#define ERR_PAGE_ADDR3				0x530
+#define     ERR_PAGE_ADDR3__VALUE			0xffff
+
+#define ERR_BLOCK_ADDR3			0x540
+#define     ERR_BLOCK_ADDR3__VALUE			0xffff
+
+#define DATA_INTR				0x550
+#define     DATA_INTR__WRITE_SPACE_AV			0x0001
+#define     DATA_INTR__READ_DATA_AV			0x0002
+
+#define DATA_INTR_EN				0x560
+#define     DATA_INTR_EN__WRITE_SPACE_AV		0x0001
+#define     DATA_INTR_EN__READ_DATA_AV			0x0002
+
+#define GPREG_0					0x570
+#define     GPREG_0__VALUE				0xffff
+
+#define GPREG_1					0x580
+#define     GPREG_1__VALUE				0xffff
+
+#define GPREG_2					0x590
+#define     GPREG_2__VALUE				0xffff
+
+#define GPREG_3					0x5a0
+#define     GPREG_3__VALUE				0xffff
+
+#define ECC_THRESHOLD				0x600
+#define     ECC_THRESHOLD__VALUE				0x03ff
+
+#define ECC_ERROR_BLOCK_ADDRESS		0x610
+#define     ECC_ERROR_BLOCK_ADDRESS__VALUE		0xffff
+
+#define ECC_ERROR_PAGE_ADDRESS			0x620
+#define     ECC_ERROR_PAGE_ADDRESS__VALUE		0x0fff
+#define     ECC_ERROR_PAGE_ADDRESS__BANK		0xf000
+
+#define ECC_ERROR_ADDRESS			0x630
+#define     ECC_ERROR_ADDRESS__OFFSET			0x0fff
+#define     ECC_ERROR_ADDRESS__SECTOR_NR		0xf000
+
+#define ERR_CORRECTION_INFO			0x640
+#define     ERR_CORRECTION_INFO__BYTEMASK		0x00ff
+#define     ERR_CORRECTION_INFO__DEVICE_NR		0x0f00
+#define     ERR_CORRECTION_INFO__ERROR_TYPE		0x4000
+#define     ERR_CORRECTION_INFO__LAST_ERR_INFO		0x8000
+
+#define DMA_ENABLE				0x700
+#define     DMA_ENABLE__FLAG				0x0001
+
+#define IGNORE_ECC_DONE				0x710
+#define     IGNORE_ECC_DONE__FLAG			0x0001
+
+#define DMA_INTR				0x720
+#define     DMA_INTR__TARGET_ERROR			0x0001
+#define     DMA_INTR__DESC_COMP_CHANNEL0		0x0002
+#define     DMA_INTR__DESC_COMP_CHANNEL1		0x0004
+#define     DMA_INTR__DESC_COMP_CHANNEL2		0x0008
+#define     DMA_INTR__DESC_COMP_CHANNEL3		0x0010
+#define     DMA_INTR__MEMCOPY_DESC_COMP		0x0020
+
+#define DMA_INTR_EN				0x730
+#define     DMA_INTR_EN__TARGET_ERROR			0x0001
+#define     DMA_INTR_EN__DESC_COMP_CHANNEL0		0x0002
+#define     DMA_INTR_EN__DESC_COMP_CHANNEL1		0x0004
+#define     DMA_INTR_EN__DESC_COMP_CHANNEL2		0x0008
+#define     DMA_INTR_EN__DESC_COMP_CHANNEL3		0x0010
+#define     DMA_INTR_EN__MEMCOPY_DESC_COMP		0x0020
+
+#define TARGET_ERR_ADDR_LO			0x740
+#define     TARGET_ERR_ADDR_LO__VALUE			0xffff
+
+#define TARGET_ERR_ADDR_HI			0x750
+#define     TARGET_ERR_ADDR_HI__VALUE			0xffff
+
+#define CHNL_ACTIVE				0x760
+#define     CHNL_ACTIVE__CHANNEL0			0x0001
+#define     CHNL_ACTIVE__CHANNEL1			0x0002
+#define     CHNL_ACTIVE__CHANNEL2			0x0004
+#define     CHNL_ACTIVE__CHANNEL3			0x0008
+
+#define ACTIVE_SRC_ID				0x800
+#define     ACTIVE_SRC_ID__VALUE				0x00ff
+
+#define PTN_INTR					0x810
+#define     PTN_INTR__CONFIG_ERROR			0x0001
+#define     PTN_INTR__ACCESS_ERROR_BANK0		0x0002
+#define     PTN_INTR__ACCESS_ERROR_BANK1		0x0004
+#define     PTN_INTR__ACCESS_ERROR_BANK2		0x0008
+#define     PTN_INTR__ACCESS_ERROR_BANK3		0x0010
+#define     PTN_INTR__REG_ACCESS_ERROR			0x0020
+
+#define PTN_INTR_EN				0x820
+#define     PTN_INTR_EN__CONFIG_ERROR			0x0001
+#define     PTN_INTR_EN__ACCESS_ERROR_BANK0		0x0002
+#define     PTN_INTR_EN__ACCESS_ERROR_BANK1		0x0004
+#define     PTN_INTR_EN__ACCESS_ERROR_BANK2		0x0008
+#define     PTN_INTR_EN__ACCESS_ERROR_BANK3		0x0010
+#define     PTN_INTR_EN__REG_ACCESS_ERROR		0x0020
+
+#define PERM_SRC_ID_0				0x830
+#define     PERM_SRC_ID_0__SRCID				0x00ff
+#define     PERM_SRC_ID_0__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_0__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_0__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_0__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_0				0x840
+#define     MIN_BLK_ADDR_0__VALUE			0xffff
+
+#define MAX_BLK_ADDR_0				0x850
+#define     MAX_BLK_ADDR_0__VALUE			0xffff
+
+#define MIN_MAX_BANK_0				0x860
+#define     MIN_MAX_BANK_0__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_0__MAX_VALUE			0x000c
+
+#define PERM_SRC_ID_1				0x870
+#define     PERM_SRC_ID_1__SRCID				0x00ff
+#define     PERM_SRC_ID_1__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_1__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_1__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_1__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_1				0x880
+#define     MIN_BLK_ADDR_1__VALUE			0xffff
+
+#define MAX_BLK_ADDR_1				0x890
+#define     MAX_BLK_ADDR_1__VALUE			0xffff
+
+#define MIN_MAX_BANK_1				0x8a0
+#define     MIN_MAX_BANK_1__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_1__MAX_VALUE			0x000c
+
+#define PERM_SRC_ID_2				0x8b0
+#define     PERM_SRC_ID_2__SRCID				0x00ff
+#define     PERM_SRC_ID_2__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_2__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_2__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_2__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_2				0x8c0
+#define     MIN_BLK_ADDR_2__VALUE			0xffff
+
+#define MAX_BLK_ADDR_2				0x8d0
+#define     MAX_BLK_ADDR_2__VALUE			0xffff
+
+#define MIN_MAX_BANK_2				0x8e0
+#define     MIN_MAX_BANK_2__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_2__MAX_VALUE			0x000c
+
+#define PERM_SRC_ID_3				0x8f0
+#define     PERM_SRC_ID_3__SRCID				0x00ff
+#define     PERM_SRC_ID_3__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_3__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_3__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_3__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_3				0x900
+#define     MIN_BLK_ADDR_3__VALUE			0xffff
+
+#define MAX_BLK_ADDR_3				0x910
+#define     MAX_BLK_ADDR_3__VALUE			0xffff
+
+#define MIN_MAX_BANK_3				0x920
+#define     MIN_MAX_BANK_3__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_3__MAX_VALUE			0x000c
+
+#define PERM_SRC_ID_4				0x930
+#define     PERM_SRC_ID_4__SRCID				0x00ff
+#define     PERM_SRC_ID_4__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_4__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_4__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_4__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_4				0x940
+#define     MIN_BLK_ADDR_4__VALUE			0xffff
+
+#define MAX_BLK_ADDR_4				0x950
+#define     MAX_BLK_ADDR_4__VALUE			0xffff
+
+#define MIN_MAX_BANK_4				0x960
+#define     MIN_MAX_BANK_4__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_4__MAX_VALUE			0x000c
+
+#define PERM_SRC_ID_5				0x970
+#define     PERM_SRC_ID_5__SRCID				0x00ff
+#define     PERM_SRC_ID_5__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_5__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_5__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_5__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_5				0x980
+#define     MIN_BLK_ADDR_5__VALUE			0xffff
+
+#define MAX_BLK_ADDR_5				0x990
+#define     MAX_BLK_ADDR_5__VALUE			0xffff
+
+#define MIN_MAX_BANK_5				0x9a0
+#define     MIN_MAX_BANK_5__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_5__MAX_VALUE			0x000c
+
+#define PERM_SRC_ID_6				0x9b0
+#define     PERM_SRC_ID_6__SRCID				0x00ff
+#define     PERM_SRC_ID_6__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_6__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_6__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_6__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_6				0x9c0
+#define     MIN_BLK_ADDR_6__VALUE			0xffff
+
+#define MAX_BLK_ADDR_6				0x9d0
+#define     MAX_BLK_ADDR_6__VALUE			0xffff
+
+#define MIN_MAX_BANK_6				0x9e0
+#define     MIN_MAX_BANK_6__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_6__MAX_VALUE			0x000c
+
+#define PERM_SRC_ID_7				0x9f0
+#define     PERM_SRC_ID_7__SRCID				0x00ff
+#define     PERM_SRC_ID_7__DIRECT_ACCESS_ACTIVE		0x0800
+#define     PERM_SRC_ID_7__WRITE_ACTIVE			0x2000
+#define     PERM_SRC_ID_7__READ_ACTIVE			0x4000
+#define     PERM_SRC_ID_7__PARTITION_VALID		0x8000
+
+#define MIN_BLK_ADDR_7				0xa00
+#define     MIN_BLK_ADDR_7__VALUE			0xffff
+
+#define MAX_BLK_ADDR_7				0xa10
+#define     MAX_BLK_ADDR_7__VALUE			0xffff
+
+#define MIN_MAX_BANK_7				0xa20
+#define     MIN_MAX_BANK_7__MIN_VALUE			0x0003
+#define     MIN_MAX_BANK_7__MAX_VALUE			0x000c
diff --git a/drivers/staging/mrst_nand/README b/drivers/staging/mrst_nand/README
new file mode 100644
index 0000000..ecba559
--- /dev/null
+++ b/drivers/staging/mrst_nand/README
@@ -0,0 +1,29 @@
+This is a driver for the NAND controller of the Intel Moorestown platform.
+
+This driver is a standalone Linux block device driver; it acts as a normal hard disk.
+It includes three layers:
+	Block layer interface - file ffsport.c
+	Flash Translation Layer (FTL) - file flash.c (implements the NAND Flash Translation Layer, including address mapping, garbage collection, wear-leveling and so on)
+	Low level layer - files lld_nand.c/lld_cdma.c/lld_emu.c (which implement the actual access to the controller hardware registers)
+
+This driver can be built as a module or built-in.
+
+Dependency:
+This driver depends on the IA Firmware of the Intel Moorestown platform.
+It needs the IA Firmware to create the block table the first time.
+To validate this driver code without the IA Firmware, you can change the
+macro AUTO_FORMAT_FLASH from 0 to 1 in file spectraswconfig.h. The driver
+will then erase the whole NAND flash and create a new block table.
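+For example, after the change the relevant line of spectraswconfig.h
+would look like this (illustrative snippet only):
+
+	#define AUTO_FORMAT_FLASH	1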
+
+TODO:
+	- Enable Command DMA feature support
+	- Lower the memory footprint
+	- Remove most of the unnecessary global variables
+	- Change all the uppercase variable/function names to lowercase
+	- Fix some other misc bugs
+
+Please send patches to:
+	Greg Kroah-Hartman <gregkh@...e.de>
+
+And Cc to: Gao Yunpeng <yunpeng.gao@...el.com>
+
diff --git a/drivers/staging/mrst_nand/ffsdefs.h b/drivers/staging/mrst_nand/ffsdefs.h
new file mode 100644
index 0000000..a9e9cd2
--- /dev/null
+++ b/drivers/staging/mrst_nand/ffsdefs.h
@@ -0,0 +1,58 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#ifndef _FFSDEFS_
+#define _FFSDEFS_
+
+#define CLEAR 0			/*use this to clear a field instead of "fail"*/
+#define SET   1			/*use this to set a field instead of "pass"*/
+#define FAIL 1			/*failed flag*/
+#define PASS 0			/*success flag*/
+#define ERR -1			/*error flag*/
+
+#define   ERASE_CMD             10
+#define   WRITE_MAIN_CMD        11
+#define   READ_MAIN_CMD         12
+#define   WRITE_SPARE_CMD       13
+#define   READ_SPARE_CMD        14
+#define   WRITE_MAIN_SPARE_CMD  15
+#define   READ_MAIN_SPARE_CMD   16
+#define   MEMCOPY_CMD           17
+#define   DUMMY_CMD             99
+
+#define     EVENT_PASS                                  0x00
+#define     EVENT_CORRECTABLE_DATA_ERROR_FIXED         0x01
+#define     EVENT_UNCORRECTABLE_DATA_ERROR              0x02
+#define     EVENT_TIME_OUT                              0x03
+#define     EVENT_PROGRAM_FAILURE                       0x04
+#define     EVENT_ERASE_FAILURE                         0x05
+#define     EVENT_MEMCOPY_FAILURE                       0x06
+#define     EVENT_FAIL                                  0x07
+
+#define     EVENT_NONE                                  0x22
+#define     EVENT_DMA_CMD_COMP                          0x77
+#define     EVENT_ECC_TRANSACTION_DONE                  0x88
+#define     EVENT_DMA_CMD_FAIL                          0x99
+
+#define CMD_PASS        0
+#define CMD_FAIL        1
+#define CMD_ABORT       2
+#define CMD_NOT_DONE    3
+
+#endif /* _FFSDEFS_ */
diff --git a/drivers/staging/mrst_nand/ffsport.c b/drivers/staging/mrst_nand/ffsport.c
new file mode 100644
index 0000000..68640bd
--- /dev/null
+++ b/drivers/staging/mrst_nand/ffsport.c
@@ -0,0 +1,954 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#define SBDBG        0
+
+#include "ffsport.h"
+#include "flash.h"
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/wait.h>
+#include <linux/mutex.h>
+#include <linux/kthread.h>
+#include <linux/log2.h>
+
+/**** Helper functions used for Div, Remainder operation on u64 ****/
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_Calc_Used_Bits
+* Inputs:       Power of 2 number
+* Outputs:      Shift count of the power of 2 number (i.e. log2(n))
+*               0, if the argument is 0
+* Description:  Calculate the shift count corresponding to a given power
+*               of 2 number, which can be up to 32 bits
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_Calc_Used_Bits(u32 n)
+{
+	int tot_bits = 0;
+
+	if (n >= 1 << 16) {
+		n >>= 16;
+		tot_bits += 16;
+	}
+
+	if (n >= 1 << 8) {
+		n >>=  8;
+		tot_bits +=  8;
+	}
+
+	if (n >= 1 << 4) {
+		n >>=  4;
+		tot_bits +=  4;
+	}
+
+	if (n >= 1 << 2) {
+		n >>=  2;
+		tot_bits +=  2;
+	}
+
+	if (n >= 1 << 1)
+		tot_bits +=  1;
+
+	return ((n == 0) ? (0) : tot_bits);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_u64_Div
+* Inputs:       A u64 number
+*               A power of 2 number as divisor
+* Outputs:      Quotient of the division operation
+* Description:  It divides the address by the divisor using a bit shift
+*               operation (essentially without explicitly using "/").
+*               The divisor is a power of 2 number and the dividend is a u64
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u64 GLOB_u64_Div(u64 addr, u32 divisor)
+{
+	return  (u64)(addr >> GLOB_Calc_Used_Bits(divisor));
+}
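+
+/*
+ * Sanity check of the two helpers above, with illustrative values only:
+ * GLOB_Calc_Used_Bits(512) returns 9 since 512 == 1 << 9, so
+ * GLOB_u64_Div(addr, 512) computes addr >> 9 == addr / 512 for any
+ * u64 addr. This shortcut only holds for power-of-2 divisors.
+ */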
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_u64_Remainder
+* Inputs:       A u64 number
+*               Divisor type (1 - PageAddress, 2 - BlockAddress)
+* Outputs:      Remainder of the division operation
+* Description:  It calculates the remainder of a u64 number by the
+*               divisor (a power of 2 number) using bit shift and multiply
+*               operations (essentially without explicitly using "/").
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u64 GLOB_u64_Remainder(u64 addr, u32 divisor_type)
+{
+	u64 result = 0;
+
+	if (divisor_type == 1) { /* Remainder -- Page */
+		result = (addr >> DeviceInfo.nBitsInPageDataSize);
+		result = result * DeviceInfo.wPageDataSize;
+	} else if (divisor_type == 2) { /* Remainder -- Block */
+		result = (addr >> DeviceInfo.nBitsInBlockDataSize);
+		result = result * DeviceInfo.wBlockDataSize;
+	}
+
+	result = addr - result;
+
+	return result;
+}
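+
+/*
+ * Worked example with hypothetical device values: if wPageDataSize is
+ * 2048 and nBitsInPageDataSize is 11 (2048 == 1 << 11), then for
+ * addr == 5000: (5000 >> 11) * 2048 == 4096, and the function returns
+ * 5000 - 4096 == 904, which equals 5000 % 2048.
+ */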
+
+
+void glob_udelay(unsigned long usecs)
+{
+	udelay(usecs);
+}
+
+void glob_mdelay(unsigned long msecs)
+{
+	mdelay(msecs);
+}
+
+u32 *GLOB_MEMMAP_NOCACHE(unsigned long addr, unsigned long size)
+{
+#if (FLASH_NAND || FLASH_CDMA)
+	return (u32 *)ioremap_nocache(addr, (size + 0xfff) & (~0xfff));
+#else
+	return (u32 *)addr;
+#endif
+}
+
+u32 *GLOB_MEMMAP_TOBUS(u32 *ptr)
+{
+#if (FLASH_NAND || FLASH_CDMA)
+	return (u32 *)virt_to_bus(ptr);
+#else
+	return ptr;
+#endif
+}
+
+
+#define NUM_DEVICES             1
+#define PARTITIONS              4
+#define NUM_OUTSTANDING_FTL_REQUESTS    8
+
+#define GLOB_SBD_NAME          "nd"
+#define GLOB_SBD_IRQ_NUM       (29)
+#define GLOB_VERSION		"driver version 20090312"
+
+#define GLOB_SBD_IOCTL_GC				(0x7701)
+#define GLOB_SBD_IOCTL_WL				(0x7702)
+#define GLOB_SBD_IOCTL_FORMAT				(0x7703)
+#define GLOB_SBD_IOCTL_ERASE_FLASH			(0x7704)
+#define GLOB_SBD_IOCTL_FLUSH_CACHE			(0x7705)
+#define GLOB_SBD_IOCTL_COPY_BLK_TABLE			(0x7706)
+#define GLOB_SBD_IOCTL_COPY_WEAR_LEVELING_TABLE		(0x7707)
+#define GLOB_SBD_IOCTL_GET_NAND_INFO			(0x7708)
+#define GLOB_SBD_IOCTL_WRITE_DATA			(0x7709)
+#define GLOB_SBD_IOCTL_READ_DATA			(0x770A)
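+
+/*
+ * Illustrative user-space sketch (not part of this driver): the page
+ * read/write ioctls expect a pointer to a struct ioctl_rw_page_info
+ * (defined later in this file), with info.data pointing to a buffer of
+ * at least PageDataSize bytes, e.g.:
+ *
+ *	struct ioctl_rw_page_info info = { .data = buf, .page = 0 };
+ *	ioctl(fd, GLOB_SBD_IOCTL_READ_DATA, (unsigned long)&info);
+ */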
+
+
+#define RESERVED_MB_FOR_OS_IMAGE    25
+
+int nand_debug_level;
+module_param(nand_debug_level, int, 0644);
+MODULE_PARM_DESC(nand_debug_level, "debug level value: 1-3");
+
+MODULE_LICENSE("GPL");
+
+struct spectra_nand_dev {
+	struct pci_dev *dev;
+	u64 size;
+	u16 users;
+	spinlock_t qlock;
+	void __iomem *ioaddr;  /* Mapped address */
+	struct request_queue *queue;
+	struct task_struct *thread;
+	struct gendisk *gd;
+#if CMD_DMA
+	wait_queue_head_t irq_wait_queue;
+	int irq_count;
+#endif
+	u8 *tmp_buf;
+};
+
+
+static int GLOB_SBD_majornum;
+
+static char *GLOB_version = GLOB_VERSION;
+
+static struct spectra_nand_dev nand_device[NUM_DEVICES];
+
+/* Because the driver will allocate a lot of memory and kmalloc can not */
+/* allocate more than 4 MB at a time, we use a static array as the */
+/* memory pool here. This is simple but ugly. It should only be used */
+/* during development. */
+#define LOCAL_MEM_POOL_SIZE        (1024 * 1024 * 8)
+static u8 local_mem_pool[LOCAL_MEM_POOL_SIZE];
+
+
+/* static int dev_num; */
+
+static struct mutex spectra_lock;
+
+static int res_blks_os = 1;
+
+struct spectra_indentfy_dev_tag IdentifyDeviceData;
+
+#define SBD_SECTOR_SIZE         (IdentifyDeviceData.PageDataSize)
+#define SBD_BLOCK_SIZE          (IdentifyDeviceData.PageDataSize *\
+				IdentifyDeviceData.PagesPerBlock)
+
+u8 *mem_pool_ptr;
+
+#if CMD_DMA
+struct GLOB_SBD_CMDDMA_Q {
+	struct request *req;
+	int num_sbd_sects;
+} glob_sbd_cmddma_q;
+
+struct GLOB_SBD_CMDDMA_Q cmddma_request_queue[NUM_OUTSTANDING_FTL_REQUESTS];
+unsigned int cmddma_num_requests;
+unsigned int cmddma_num_ftl_requests;
+
+#if SBDBG
+u64 SBDBG_cdma_address[NUM_OUTSTANDING_FTL_REQUESTS];
+#endif
+
+#endif
+
+static int force_flush_cache(void)
+{
+	if (ERR == GLOB_FTL_Flush_Cache()) {
+		printk(KERN_ERR "Failed to flush FTL cache!\n");
+		return -EFAULT;
+	}
+
+	return 0;
+}
+
+struct ioctl_rw_page_info {
+	u8 *data;
+	unsigned int page;
+};
+
+static int ioctl_read_page_data(unsigned long arg)
+{
+	u8 *buf;
+	struct ioctl_rw_page_info info;
+	int result = PASS;
+
+	if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
+		return -EFAULT;
+
+	buf = kmalloc(IdentifyDeviceData.PageDataSize, GFP_ATOMIC);
+	if (!buf) {
+		printk(KERN_ERR "ioctl_read_page_data: "
+		       "failed to allocate memory\n");
+		return -ENOMEM;
+	}
+
+	mutex_lock(&spectra_lock);
+	result = GLOB_FTL_Page_Read(buf,
+		(u64)info.page * IdentifyDeviceData.PageDataSize);
+	mutex_unlock(&spectra_lock);
+
+	if (copy_to_user((void __user *)info.data, buf,
+			   IdentifyDeviceData.PageDataSize)) {
+		printk(KERN_ERR "ioctl_read_page_data: "
+		       "failed to copy user data\n");
+		kfree(buf);
+		return -EFAULT;
+	}
+
+
+	kfree(buf);
+	return result;
+}
+
+static int ioctl_write_page_data(unsigned long arg)
+{
+	u8 *buf;
+	struct ioctl_rw_page_info info;
+	int result = PASS;
+
+	if (copy_from_user(&info, (void __user *)arg, sizeof(info)))
+		return -EFAULT;
+
+	buf = kmalloc(IdentifyDeviceData.PageDataSize, GFP_ATOMIC);
+	if (!buf) {
+		printk(KERN_ERR "ioctl_write_page_data: "
+		       "failed to allocate memory\n");
+		return -ENOMEM;
+	}
+
+	if (copy_from_user(buf, (void __user *)info.data,
+			   IdentifyDeviceData.PageDataSize)) {
+		printk(KERN_ERR "ioctl_write_page_data: "
+		       "failed to copy user data\n");
+		kfree(buf);
+		return -EFAULT;
+	}
+
+	mutex_lock(&spectra_lock);
+	result = GLOB_FTL_Page_Write(buf,
+		(u64)info.page * IdentifyDeviceData.PageDataSize);
+	mutex_unlock(&spectra_lock);
+
+	kfree(buf);
+	return result;
+}
+
+
+/* Static Function Declarations */
+static void GLOB_SBD_request(struct request_queue *q);
+
+/* Return how many blocks should be reserved for bad block replacement (10%) */
+static int get_res_blk_num_bad_blk(void)
+{
+	return IdentifyDeviceData.wDataBlockNum / 10;
+}
+
+/* Return how many blocks should be reserved for OS image */
+static int get_res_blk_num_os(void)
+{
+	int res_blks, blk_size;
+
+	blk_size = IdentifyDeviceData.PageDataSize *
+		IdentifyDeviceData.PagesPerBlock;
+
+	res_blks = (RESERVED_MB_FOR_OS_IMAGE *
+		1024 * 1024) / blk_size;
+
+	if (res_blks >= IdentifyDeviceData.wDataBlockNum) {
+		printk(KERN_ERR "Too many reserved blocks (%d) "
+			"for OS image. Will use default value 1\n",
+			res_blks);
+		res_blks = 1; /* Reserved 1 block for block table */
+	}
+
+	return res_blks;
+}
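+
+/*
+ * E.g. with a hypothetical 2048-byte page and 64 pages per block,
+ * blk_size is 128 KB, so the 25 MB of RESERVED_MB_FOR_OS_IMAGE
+ * reserves 200 blocks.
+ */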
+
+static void SBD_prepare_flush(struct request_queue *q, struct request *rq)
+{
+	rq->cmd_type = REQ_TYPE_LINUX_BLOCK;
+	/* rq->timeout = 5 * HZ; */
+	rq->cmd[0] = REQ_LB_OP_FLUSH;
+}
+
+/* Transfer a full request. */
+static int do_transfer(struct spectra_nand_dev *tr, struct request *req)
+{
+	u64 start_addr, addr;
+	u32 logical_start_sect, hd_start_sect;
+	u32 nsect, hd_sects;
+	u32 rsect, tsect = 0;
+	char *buf;
+	u32 ratio = IdentifyDeviceData.PageDataSize >> 9;
+
+	start_addr = (u64)(blk_rq_pos(req)) << 9;
+	/* Add a big enough offset to prevent the OS Image from
+	*  being accessed or damaged by file system */
+	start_addr += (SBD_BLOCK_SIZE * res_blks_os);
+
+	if (req->cmd_type == REQ_TYPE_LINUX_BLOCK &&
+			req->cmd[0] == REQ_LB_OP_FLUSH) {
+		if (force_flush_cache()) /* Fail to flush cache */
+			return -EIO;
+		else
+			return 0;
+	}
+
+	if (!blk_fs_request(req))
+		return -EIO;
+
+	if (blk_rq_pos(req) + blk_rq_cur_sectors(req) > get_capacity(tr->gd)) {
+		printk(KERN_ERR "Spectra error: request over the NAND "
+			"capacity! sector %d, current_nr_sectors %d, "
+			"while capacity is %d\n",
+			(int)blk_rq_pos(req),
+			blk_rq_cur_sectors(req),
+			(int)get_capacity(tr->gd));
+		return -EIO;
+	}
+
+	logical_start_sect = start_addr >> 9;
+	hd_start_sect = logical_start_sect / ratio;
+	rsect = logical_start_sect - hd_start_sect * ratio;
+
+	addr = (u64)hd_start_sect * ratio * 512;
+	buf = req->buffer;
+	nsect = blk_rq_cur_sectors(req);
+
+	if (rsect)
+		tsect = (ratio - rsect) < nsect ? (ratio - rsect) : nsect;
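+	/*
+	 * Example of the math above, with hypothetical values: if
+	 * PageDataSize is 2048, ratio is 4. A request starting at
+	 * logical sector 6 with nsect == 7 gives hd_start_sect == 1,
+	 * rsect == 2 and tsect == 2: a partial page first, then one
+	 * full page, then a final partial page of one sector.
+	 */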
+
+	switch (rq_data_dir(req)) {
+	case READ:
+		/* Read the first NAND page */
+		if (rsect) {
+			if (GLOB_FTL_Page_Read(tr->tmp_buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+			memcpy(buf, tr->tmp_buf + (rsect << 9), tsect << 9);
+			addr += IdentifyDeviceData.PageDataSize;
+			buf += tsect << 9;
+			nsect -= tsect;
+		}
+
+		/* Read the other NAND pages */
+		for (hd_sects = nsect / ratio; hd_sects > 0; hd_sects--) {
+			if (GLOB_FTL_Page_Read(buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+			addr += IdentifyDeviceData.PageDataSize;
+			buf += IdentifyDeviceData.PageDataSize;
+		}
+
+		/* Read the last (partial) NAND page */
+		if (nsect % ratio) {
+			if (GLOB_FTL_Page_Read(tr->tmp_buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+			memcpy(buf, tr->tmp_buf, (nsect % ratio) << 9);
+		}
+		return 0;
+
+	case WRITE:
+		/* Write the first NAND page */
+		if (rsect) {
+			if (GLOB_FTL_Page_Read(tr->tmp_buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+			memcpy(tr->tmp_buf + (rsect << 9), buf, tsect << 9);
+			if (GLOB_FTL_Page_Write(tr->tmp_buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+			addr += IdentifyDeviceData.PageDataSize;
+			buf += tsect << 9;
+			nsect -= tsect;
+		}
+
+		/* Write the other NAND pages */
+		for (hd_sects = nsect / ratio; hd_sects > 0; hd_sects--) {
+			if (GLOB_FTL_Page_Write(buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+			addr += IdentifyDeviceData.PageDataSize;
+			buf += IdentifyDeviceData.PageDataSize;
+		}
+
+		/* Write the last (partial) NAND page */
+		if (nsect % ratio) {
+			if (GLOB_FTL_Page_Read(tr->tmp_buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+			memcpy(tr->tmp_buf, buf, (nsect % ratio) << 9);
+			if (GLOB_FTL_Page_Write(tr->tmp_buf, addr)) {
+				printk(KERN_ERR "Error in %s, Line %d\n",
+					__FILE__, __LINE__);
+				return -EIO;
+			}
+		}
+		return 0;
+
+	default:
+		printk(KERN_NOTICE "Unknown request %u\n", rq_data_dir(req));
+		return -EIO;
+	}
+}
+
+static int spectra_trans_thread(void *arg)
+{
+	struct spectra_nand_dev *tr = arg;
+	struct request_queue *rq = tr->queue;
+	struct request *req = NULL;
+
+	/* we might get involved when memory gets low, so use PF_MEMALLOC */
+	current->flags |= PF_MEMALLOC;
+
+	spin_lock_irq(rq->queue_lock);
+	while (!kthread_should_stop()) {
+		int res;
+
+		if (!req && !(req = blk_fetch_request(rq))) {
+			set_current_state(TASK_INTERRUPTIBLE);
+			spin_unlock_irq(rq->queue_lock);
+			schedule();
+			spin_lock_irq(rq->queue_lock);
+			continue;
+		}
+
+		spin_unlock_irq(rq->queue_lock);
+
+		mutex_lock(&spectra_lock);
+		res = do_transfer(tr, req);
+		mutex_unlock(&spectra_lock);
+
+		spin_lock_irq(rq->queue_lock);
+
+		if (!__blk_end_request_cur(req, res))
+			req = NULL;
+	}
+
+	if (req)
+		__blk_end_request_all(req, -EIO);
+
+	spin_unlock_irq(rq->queue_lock);
+
+	return 0;
+}
+
+
+/* The request function. It just wakes up the transfer thread. */
+static void GLOB_SBD_request(struct request_queue *rq)
+{
+	struct spectra_nand_dev *pdev = rq->queuedata;
+	wake_up_process(pdev->thread);
+}
+
+
+static int GLOB_SBD_open(struct block_device *bdev, fmode_t mode)
+
+{
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+	return 0;
+}
+
+static int GLOB_SBD_release(struct gendisk *disk, fmode_t mode)
+{
+#if CMD_DMA
+	struct spectra_nand_dev *dev = disk->private_data;
+#endif
+	int ret;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+#if CMD_DMA
+	if (cmddma_num_requests) {
+		dev->irq_count = 0;
+		if (cmddma_num_requests)
+			wait_event_interruptible(dev->irq_wait_queue,
+						 dev->irq_count);
+		else
+			dev->irq_count = 1;
+	}
+	cmddma_num_requests = 0;
+	cmddma_num_ftl_requests = 0;
+#endif
+	mutex_lock(&spectra_lock);
+	ret = force_flush_cache();
+	mutex_unlock(&spectra_lock);
+
+#if CMD_DMA
+	if (ret == 0) {
+		dev->irq_count = 0;
+		GLOB_FTL_Execute_CMDS();
+		if (!dev->irq_count)
+			wait_event_interruptible(dev->irq_wait_queue,
+						 dev->irq_count);
+	}
+
+	if (dev->users)
+		free_irq(GLOB_SBD_IRQ_NUM, (void *)dev);
+#endif
+
+	return 0;
+}
+
+static int GLOB_SBD_getgeo(struct block_device *bdev, struct hd_geometry *geo)
+{
+	geo->heads = 4;
+	geo->sectors = 16;
+	geo->cylinders = get_capacity(bdev->bd_disk) / (4 * 16);
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"heads: %d, sectors: %d, cylinders: %d\n",
+		geo->heads, geo->sectors, geo->cylinders);
+
+	return 0;
+}
+
+int GLOB_SBD_ioctl(struct block_device *bdev, fmode_t mode,
+		unsigned int cmd, unsigned long arg)
+{
+	int ret;
+#if CMD_DMA
+	struct spectra_nand_dev *dev = bdev->bd_disk->private_data;
+#endif
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	switch (cmd) {
+	case GLOB_SBD_IOCTL_GC:
+		nand_dbg_print(NAND_DBG_DEBUG,
+			       "Spectra IOCTL: Garbage Collection "
+			       "being performed\n");
+		if (PASS != GLOB_FTL_Garbage_Collection())
+			return -EFAULT;
+		return 0;
+
+	case GLOB_SBD_IOCTL_WL:
+		nand_dbg_print(NAND_DBG_DEBUG,
+			       "Spectra IOCTL: Static Wear Leveling "
+			       "being performed\n");
+		if (PASS != GLOB_FTL_Wear_Leveling())
+			return -EFAULT;
+		return 0;
+
+	case GLOB_SBD_IOCTL_FORMAT:
+		nand_dbg_print(NAND_DBG_DEBUG, "Spectra IOCTL: Flash format "
+			       "being performed\n");
+		if (PASS != GLOB_FTL_Flash_Format())
+			return -EFAULT;
+		return 0;
+
+	case GLOB_SBD_IOCTL_FLUSH_CACHE:
+		nand_dbg_print(NAND_DBG_DEBUG, "Spectra IOCTL: Cache flush "
+			       "being performed\n");
+		mutex_lock(&spectra_lock);
+		ret = force_flush_cache();
+		mutex_unlock(&spectra_lock);
+#if CMD_DMA
+		if (!ret) {
+			dev->irq_count = 0;
+			GLOB_FTL_Execute_CMDS();
+			if (!dev->irq_count)
+				wait_event_interruptible(dev->irq_wait_queue,
+							 dev->irq_count);
+		}
+#endif
+		return ret;
+
+	case GLOB_SBD_IOCTL_COPY_BLK_TABLE:
+		nand_dbg_print(NAND_DBG_DEBUG, "Spectra IOCTL: "
+			       "Copy block table\n");
+		if (copy_to_user((void __user *)arg,
+			get_blk_table_start_addr(),
+			get_blk_table_len()))
+			return -EFAULT;
+		return 0;
+
+	case GLOB_SBD_IOCTL_COPY_WEAR_LEVELING_TABLE:
+		nand_dbg_print(NAND_DBG_DEBUG, "Spectra IOCTL: "
+			       "Copy wear leveling table\n");
+		if (copy_to_user((void __user *)arg,
+			get_wear_leveling_table_start_addr(),
+			get_wear_leveling_table_len()))
+			return -EFAULT;
+		return 0;
+
+	case GLOB_SBD_IOCTL_GET_NAND_INFO:
+		nand_dbg_print(NAND_DBG_DEBUG, "Spectra IOCTL: "
+			       "Get NAND info\n");
+		if (copy_to_user((void __user *)arg, &IdentifyDeviceData,
+			sizeof(IdentifyDeviceData)))
+			return -EFAULT;
+		return 0;
+
+	case GLOB_SBD_IOCTL_WRITE_DATA:
+		nand_dbg_print(NAND_DBG_DEBUG, "Spectra IOCTL: "
+			       "Write one page data\n");
+		return ioctl_write_page_data(arg);
+
+	case GLOB_SBD_IOCTL_READ_DATA:
+		nand_dbg_print(NAND_DBG_DEBUG, "Spectra IOCTL: "
+			       "Read one page data\n");
+		return ioctl_read_page_data(arg);
+	}
+
+	return -ENOTTY;
+}
+
+static struct block_device_operations GLOB_SBD_ops = {
+	.owner = THIS_MODULE,
+	.open = GLOB_SBD_open,
+	.release = GLOB_SBD_release,
+	.locked_ioctl = GLOB_SBD_ioctl,
+	.getgeo = GLOB_SBD_getgeo,
+};
+
+static int SBD_setup_device(struct spectra_nand_dev *dev, int which)
+{
+	int res_blks;
+	u32 sects;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	memset(dev, 0, sizeof(struct spectra_nand_dev));
+
+	nand_dbg_print(NAND_DBG_WARN, "Reserved %d blocks "
+		"for OS image, %d blocks for bad block replacement.\n",
+		get_res_blk_num_os(),
+		get_res_blk_num_bad_blk());
+
+	res_blks = get_res_blk_num_bad_blk() + get_res_blk_num_os();
+
+	dev->size = (u64)IdentifyDeviceData.PageDataSize *
+		IdentifyDeviceData.PagesPerBlock *
+		(IdentifyDeviceData.wDataBlockNum - res_blks);
+
+	res_blks_os = get_res_blk_num_os();
+
+	spin_lock_init(&dev->qlock);
+
+	dev->tmp_buf = kmalloc(IdentifyDeviceData.PageDataSize, GFP_ATOMIC);
+	if (!dev->tmp_buf) {
+		printk(KERN_ERR "Failed to kmalloc memory in %s Line %d, exit.\n",
+			__FILE__, __LINE__);
+		goto out_vfree;
+	}
+
+	dev->queue = blk_init_queue(GLOB_SBD_request, &dev->qlock);
+	if (dev->queue == NULL) {
+		printk(KERN_ERR
+		       "Spectra: Request queue could not be initialized."
+		       " Aborting\n");
+		goto out_vfree;
+	}
+	dev->queue->queuedata = dev;
+
+	/* blk_queue_logical_block_size(dev->queue, SBD_SECTOR_SIZE); */
+	blk_queue_logical_block_size(dev->queue, 512);
+	blk_queue_ordered(dev->queue, QUEUE_ORDERED_DRAIN_FLUSH,
+						SBD_prepare_flush);
+
+	dev->thread = kthread_run(spectra_trans_thread, dev, "nand_thd");
+	if (IS_ERR(dev->thread)) {
+		blk_cleanup_queue(dev->queue);
+		unregister_blkdev(GLOB_SBD_majornum, GLOB_SBD_NAME);
+		return PTR_ERR(dev->thread);
+	}
+
+	dev->gd = alloc_disk(PARTITIONS);
+	if (!dev->gd) {
+		printk(KERN_ERR
+		       "Spectra: Could not allocate disk. Aborting\n");
+		goto out_vfree;
+	}
+	dev->gd->major = GLOB_SBD_majornum;
+	dev->gd->first_minor = which * PARTITIONS;
+	dev->gd->fops = &GLOB_SBD_ops;
+	dev->gd->queue = dev->queue;
+	dev->gd->private_data = dev;
+	snprintf(dev->gd->disk_name, 32, "%s%c", GLOB_SBD_NAME, which + 'a');
+
+	sects = dev->size >> 9;
+	nand_dbg_print(NAND_DBG_WARN, "Capacity sects: %d\n", sects);
+	set_capacity(dev->gd, sects);
+
+#if CMD_DMA
+	cmddma_request_queue[0].req = NULL;
+	cmddma_num_requests = 0;
+	cmddma_num_ftl_requests = 0;
+	init_waitqueue_head(&dev->irq_wait_queue);
+	dev->irq_count = 1;
+#endif
+	add_disk(dev->gd);
+
+	return 0;
+out_vfree:
+	return -ENOMEM;
+}
+
+/*
+static ssize_t show_nand_block_num(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	return snprintf(buf, PAGE_SIZE, "%d\n",
+		(int)IdentifyDeviceData.wDataBlockNum);
+}
+
+static ssize_t show_nand_pages_per_block(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	return snprintf(buf, PAGE_SIZE, "%d\n",
+		(int)IdentifyDeviceData.PagesPerBlock);
+}
+
+static ssize_t show_nand_page_size(struct device *dev,
+	struct device_attribute *attr, char *buf)
+{
+	return snprintf(buf, PAGE_SIZE, "%d\n",
+		(int)IdentifyDeviceData.PageDataSize);
+}
+
+static DEVICE_ATTR(nand_block_num, 0444, show_nand_block_num, NULL);
+static DEVICE_ATTR(nand_pages_per_block, 0444, show_nand_pages_per_block, NULL);
+static DEVICE_ATTR(nand_page_size, 0444, show_nand_page_size, NULL);
+
+static void create_sysfs_entry(struct device *dev)
+{
+	if (device_create_file(dev, &dev_attr_nand_block_num))
+		printk(KERN_ERR "Spectra: "
+			"failed to create sysfs entry nand_block_num.\n");
+	if (device_create_file(dev, &dev_attr_nand_pages_per_block))
+		printk(KERN_ERR "Spectra: "
+		"failed to create sysfs entry nand_pages_per_block.\n");
+	if (device_create_file(dev, &dev_attr_nand_page_size))
+		printk(KERN_ERR "Spectra: "
+		"failed to create sysfs entry nand_page_size.\n");
+}
+*/
+
+static int GLOB_SBD_init(void)
+{
+	int i;
+
+	nand_debug_level = 0; /* Set level value for debug output */
+
+	printk(KERN_ALERT "Spectra: %s\n", GLOB_version);
+
+	mutex_init(&spectra_lock);
+
+	GLOB_SBD_majornum = register_blkdev(0, GLOB_SBD_NAME);
+	if (GLOB_SBD_majornum <= 0) {
+		printk(KERN_ERR "Unable to get the major %d for Spectra\n",
+		       GLOB_SBD_majornum);
+		return -EBUSY;
+	}
+
+	if (PASS != GLOB_FTL_Flash_Init()) {
+		printk(KERN_ERR "Spectra: Unable to Initialize Flash Device. "
+		       "Aborting\n");
+		goto out_flash_register;
+	}
+
+	/* create_sysfs_entry(&dev->dev); */
+
+	if (PASS != GLOB_FTL_IdentifyDevice(&IdentifyDeviceData)) {
+		printk(KERN_ERR "Spectra: Unable to Read Flash Device. "
+		       "Aborting\n");
+		goto out_flash_register;
+	} else {
+		nand_dbg_print(NAND_DBG_WARN, "In GLOB_SBD_init: "
+			       "Num blocks=%d, pagesperblock=%d, "
+			       "pagedatasize=%d, ECCBytesPerSector=%d, "
+			       "SizeofGlobalMem=%d\n",
+		       (int)IdentifyDeviceData.NumBlocks,
+		       (int)IdentifyDeviceData.PagesPerBlock,
+		       (int)IdentifyDeviceData.PageDataSize,
+		       (int)IdentifyDeviceData.wECCBytesPerSector,
+		       (int)IdentifyDeviceData.SizeOfGlobalMem);
+	}
+
+	if (IdentifyDeviceData.SizeOfGlobalMem >= LOCAL_MEM_POOL_SIZE) {
+		printk(KERN_ERR "Spectra: Unable to Initialize Memory Pool. "
+		       "Aborting\n");
+		goto out_mempool_flash_register;
+	}
+
+	/* mem_pool_ptr = (u8 *)kmalloc(IdentifyDeviceData.SizeOfGlobalMem,
+	 * GFP_KERNEL);
+	if (!mem_pool_ptr) {
+		printk(KERN_ERR "Spectra: Unable to Initialize Memory Pool. "
+		       "Aborting\n");
+		goto out_mempool_flash_register;
+	}
+	 */
+
+	mem_pool_ptr = local_mem_pool;
+
+	if (PASS != GLOB_FTL_Mem_Config(mem_pool_ptr)) {
+		printk(KERN_ERR "Spectra: Unable to Read Flash Device. "
+		       "Aborting\n");
+		goto out_mempool_flash_register;
+	}
+
+	printk(KERN_ALERT "Spectra: searching block table, please wait ...\n");
+	if (GLOB_FTL_Init() != PASS) {
+		printk(KERN_ERR "Spectra: Unable to Initialize FTL Layer. "
+		       "Aborting\n");
+		goto out_ftl_flash_register;
+	}
+
+	for (i = 0; i < NUM_DEVICES; i++)
+		if (SBD_setup_device(&nand_device[i], i) == -ENOMEM)
+			goto out_ftl_flash_register;
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		       "Spectra: module loaded with major number %d\n",
+		       GLOB_SBD_majornum);
+
+	return 0;
+
+out_ftl_flash_register:
+	GLOB_FTL_Cache_Release();
+out_mempool_flash_register:
+	/* kfree(mem_pool_ptr); */
+out_flash_register:
+	GLOB_FTL_Flash_Release();
+	unregister_blkdev(GLOB_SBD_majornum, GLOB_SBD_NAME);
+	/* pci_unregister_driver(&nand_pci_driver); */
+
+	printk(KERN_ERR "Spectra: Module load failed.\n");
+	return -ENOMEM;
+}
+
+static void __exit GLOB_SBD_exit(void)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	for (i = 0; i < NUM_DEVICES; i++) {
+		struct spectra_nand_dev *dev = &nand_device[i];
+		if (dev->gd) {
+			del_gendisk(dev->gd);
+			put_disk(dev->gd);
+		}
+		if (dev->queue)
+			blk_cleanup_queue(dev->queue);
+	}
+
+	unregister_blkdev(GLOB_SBD_majornum, GLOB_SBD_NAME);
+
+	mutex_lock(&spectra_lock);
+	force_flush_cache();
+	mutex_unlock(&spectra_lock);
+
+	GLOB_FTL_Cache_Release();
+
+	/* kfree(mem_pool_ptr); */
+
+	GLOB_FTL_Flash_Release();
+	/* pci_unregister_driver(&nand_pci_driver); */
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		       "Spectra FTL module (major number %d) unloaded.\n",
+		       GLOB_SBD_majornum);
+}
+
+module_init(GLOB_SBD_init);
+module_exit(GLOB_SBD_exit);
diff --git a/drivers/staging/mrst_nand/ffsport.h b/drivers/staging/mrst_nand/ffsport.h
new file mode 100644
index 0000000..58ede43
--- /dev/null
+++ b/drivers/staging/mrst_nand/ffsport.h
@@ -0,0 +1,89 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#ifndef _FFSPORT_
+#define _FFSPORT_
+
+#include "ffsdefs.h"
+
+#if defined __GNUC__
+#define PACKED
+#define PACKED_GNU __attribute__ ((packed))
+#define UNALIGNED
+#endif
+
+#include <linux/semaphore.h>
+#include <linux/string.h>	/* for strcpy(), strnicmp(), etc */
+#include <linux/mm.h>		/* for kmalloc(), kfree() */
+#include <linux/vmalloc.h>
+#include <linux/module.h>
+#include <linux/moduleparam.h>
+#include <linux/init.h>
+
+#include <linux/kernel.h>	/* printk() */
+#include <linux/fs.h>		/* everything... */
+#include <linux/errno.h>	/* error codes */
+#include <linux/types.h>	/* size_t */
+#include <linux/genhd.h>
+#include <linux/blkdev.h>
+#include <linux/hdreg.h>
+#include <linux/pci.h>
+#include "flash.h"
+
+#define VERBOSE    1
+
+#define NAND_DBG_WARN  1
+#define NAND_DBG_DEBUG 2
+#define NAND_DBG_TRACE 3
+
+extern int nand_debug_level;
+
+#ifdef VERBOSE
+#define nand_dbg_print(level, args...)			\
+	do {						\
+		if (level <= nand_debug_level)		\
+			printk(KERN_ALERT args);	\
+	} while (0)
+#else
+#define nand_dbg_print(level, args...)
+#endif
+
+#ifdef SUPPORT_BIG_ENDIAN
+#define INVERTUINT16(w)   ((u16)(((u16)(w)) << 8) | \
+			   (u16)((u16)(w) >> 8))
+
+#define INVERTUINT32(dw)  (((u32)(dw) << 24) | \
+			   (((u32)(dw) << 8) & 0x00ff0000) | \
+			   (((u32)(dw) >> 8) & 0x0000ff00) | \
+			   ((u32)(dw) >> 24))
+#else
+#define INVERTUINT16(w)   (w)
+#define INVERTUINT32(dw)  (dw)
+#endif
+
+extern int GLOB_Calc_Used_Bits(u32 n);
+extern u64 GLOB_u64_Div(u64 addr, u32 divisor);
+extern u64 GLOB_u64_Remainder(u64 addr, u32 divisor_type);
+
+extern void glob_udelay(unsigned long usecs);
+extern void glob_mdelay(unsigned long msecs);
+extern u32 *GLOB_MEMMAP_NOCACHE(unsigned long addr, unsigned long size);
+extern u32 *GLOB_MEMMAP_TOBUS(u32 *ptr);
+
+#endif /* _FFSPORT_ */
diff --git a/drivers/staging/mrst_nand/flash.c b/drivers/staging/mrst_nand/flash.c
new file mode 100644
index 0000000..a2a8f36
--- /dev/null
+++ b/drivers/staging/mrst_nand/flash.c
@@ -0,0 +1,4150 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#include "flash.h"
+#include "ffsdefs.h"
+#include "lld.h"
+#if CMD_DMA
+#include "lld_cdma.h"
+#endif
+
+#define NAND_CACHE_INIT_ADDR    0xffffffffffffffffULL
+
+#define BLK_FROM_ADDR(addr)  ((u32)(addr >> DeviceInfo.nBitsInBlockDataSize))
+#define PAGE_FROM_ADDR(addr, Block)  ((u16)((addr - (u64)Block * \
+	DeviceInfo.wBlockDataSize) >> DeviceInfo.nBitsInPageDataSize))
+
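+/*
+ * Each 32-bit block table (pbt) entry holds a physical block number with
+ * status flags or'ed into the high bits, so a block's state is tested by
+ * masking, and its number is recovered with (pbt[blk] & ~BAD_BLOCK).
+ */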
+#define IS_SPARE_BLOCK(blk)     (BAD_BLOCK != (pbt[blk] &\
+	BAD_BLOCK) && SPARE_BLOCK == (pbt[blk] & SPARE_BLOCK))
+
+#define IS_DATA_BLOCK(blk)      (0 == (pbt[blk] & BAD_BLOCK))
+
+#define IS_DISCARDED_BLOCK(blk) (BAD_BLOCK != (pbt[blk] &\
+	BAD_BLOCK) && DISCARD_BLOCK == (pbt[blk] & DISCARD_BLOCK))
+
+#define IS_BAD_BLOCK(blk)       (BAD_BLOCK == (pbt[blk] & BAD_BLOCK))
+
+#define NUM_MEMPOOL_ALLOCS            (22 + CACHE_BLOCK_NUMBER)
+
+#if DEBUG_BNDRY
+void debug_boundary_lineno_error(int chnl, int limit, int no,
+				int lineno, char *filename)
+{
+	if (chnl >= limit)
+		printk(KERN_ERR "Boundary Check Fail value %d >= limit %d, "
+		"at  %s:%d. Other info:%d. Aborting...\n",
+		chnl, limit, filename, lineno, no);
+}
+/* static int globalmemsize; */
+#endif
+
+static u8 FTL_Cache_If_Hit(u64 dwPageAddr);
+static int FTL_Cache_Read(u64 dwPageAddr);
+static void FTL_Cache_Read_Page(u8 *pData, u64 dwPageAddr,
+				u8 cache_blk);
+static void FTL_Cache_Write_Page(u8 *pData, u64 dwPageAddr,
+				 u8 cache_blk, u16 flag);
+static int FTL_Cache_Write(void);
+static int FTL_Cache_Write_Back(u8 *pData, u64 blk_addr);
+static void FTL_Calculate_LRU(void);
+static u32 FTL_Get_Block_Index(u32 wBlockNum);
+
+static int FTL_Search_Block_Table_IN_Block(u32 BT_Block,
+					   u8 BT_Tag, u16 *Page);
+static int FTL_Read_Block_Table(void);
+static int FTL_Write_Block_Table(int wForce);
+static int FTL_Write_Block_Table_Data(void);
+static int FTL_Check_Block_Table(int wOldTable);
+static int FTL_Static_Wear_Leveling(void);
+static u32 FTL_Replace_Block_Table(void);
+static int FTL_Write_IN_Progress_Block_Table_Page(void);
+
+static u32 FTL_Get_Page_Num(u64 length);
+static u64 FTL_Get_Physical_Block_Addr(u64 blk_addr);
+
+static u32 FTL_Replace_OneBlock(u32 wBlockNum,
+				      u32 wReplaceNum);
+static u32 FTL_Replace_LWBlock(u32 wBlockNum,
+				     int *pGarbageCollect);
+static u32 FTL_Replace_MWBlock(void);
+static int FTL_Replace_Block(u64 blk_addr);
+static int FTL_Adjust_Relative_Erase_Count(u32 Index_of_MAX);
+
+static int FTL_Flash_Error_Handle(u8 *pData,
+	u64 old_page_addr, u64 blk_addr);
+
+struct device_info_tag DeviceInfo;
+static u8 *g_pTempBuf;
+u8 *g_pBlockTable;
+u8 *g_pWearCounter;
+u16 *g_pReadCounter;
+static u16 g_wBlockTableOffset;
+static u32 g_wBlockTableIndex;
+static u8 g_cBlockTableStatus;
+u32 *g_pBTBlocks;
+struct flash_cache_tag Cache;
+
+int g_wNumFreeBlocks;
+#if CMD_DMA
+u8 g_SBDCmdIndex = 0;
+#endif
+static u8 *g_pIPF;
+static u8 bt_flag = FIRST_BT_ID;
+static u8 bt_block_changed;
+
+#if READBACK_VERIFY
+static u8 *g_pCheckBuf;
+#endif
+
+static u8 cache_block_to_write;
+static u8 last_erased = FIRST_BT_ID;
+
+static u8 *g_pMemPool;
+static u8 *g_pMemPoolFree;
+static u8 *g_temp_buf;
+
+static int globalMemSize;
+
+static u8 GC_Called;
+static u8 BT_GC_Called;
+
+#if CMD_DMA
+static u8 FTLCommandCount;  /* Init value is 0 */
+u8 *g_pBTDelta;
+u8 *g_pBTDelta_Free;
+u8 *g_pBTStartingCopy;
+u8 *g_pWearCounterCopy;
+u16 *g_pReadCounterCopy;
+u8 *g_pBlockTableCopies;
+u8 *g_pNextBlockTable;
+u8 *g_pCopyBackBufferCopies;
+u8 *g_pCopyBackBufferStart;
+
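+/*
+ * Journal entry recording one block table mutation per tagged CDMA
+ * command. After an error interrupt, save_blk_table_changes() replays
+ * these entries against the snapshot copies to rebuild a consistent
+ * block table. ValidFields is a bitmask selecting the meaningful
+ * members: 0x01 offset, 0x03 offset + index, 0x0C block table entry,
+ * 0x30 wear counter, 0xC0 read counter.
+ */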
+#pragma pack(push, 1)
+struct BTableChangesDelta {
+	u8 FTLCommandCount;
+	u8 ValidFields;
+	u16 g_wBlockTableOffset;
+	u32 g_wBlockTableIndex;
+	u32 BT_Index;
+	u32 BT_Entry_Value;
+	u32 WC_Index;
+	u8 WC_Entry_Value;
+	u32 RC_Index;
+	u16 RC_Entry_Value;
+};
+
+#pragma pack(pop)
+
+struct BTableChangesDelta *p_BTableChangesDelta;
+#endif
+
+
+#define MARK_BLOCK_AS_BAD(blocknode)      (blocknode |= BAD_BLOCK)
+#define MARK_BLK_AS_DISCARD(blk)  (blk = (blk & ~SPARE_BLOCK) | DISCARD_BLOCK)
+
+#define FTL_Get_LBAPBA_Table_Mem_Size_Bytes() (DeviceInfo.wDataBlockNum *\
+						sizeof(u32))
+#define FTL_Get_WearCounter_Table_Mem_Size_Bytes() (DeviceInfo.wDataBlockNum *\
+						sizeof(u8))
+#define FTL_Get_ReadCounter_Table_Mem_Size_Bytes() (DeviceInfo.wDataBlockNum *\
+						sizeof(u16))
+#if SUPPORT_LARGE_BLOCKNUM
+#define FTL_Get_LBAPBA_Table_Flash_Size_Bytes() (DeviceInfo.wDataBlockNum *\
+						sizeof(u8) * 3)
+#else
+#define FTL_Get_LBAPBA_Table_Flash_Size_Bytes() (DeviceInfo.wDataBlockNum *\
+						sizeof(u32))
+#endif
+#define FTL_Get_WearCounter_Table_Flash_Size_Bytes \
+	FTL_Get_WearCounter_Table_Mem_Size_Bytes
+#define FTL_Get_ReadCounter_Table_Flash_Size_Bytes \
+	FTL_Get_ReadCounter_Table_Mem_Size_Bytes
+
+static u32 FTL_Get_Block_Table_Flash_Size_Bytes(void)
+{
+	u32 byte_num;
+
+	if (DeviceInfo.MLCDevice) {
+		byte_num = FTL_Get_LBAPBA_Table_Flash_Size_Bytes() +
+			DeviceInfo.wDataBlockNum * sizeof(u8) +
+			DeviceInfo.wDataBlockNum * sizeof(u16);
+	} else {
+		byte_num = FTL_Get_LBAPBA_Table_Flash_Size_Bytes() +
+			DeviceInfo.wDataBlockNum * sizeof(u8);
+	}
+
+	byte_num += 4 * sizeof(u8);
+
+	return byte_num;
+}
+
+static u16  FTL_Get_Block_Table_Flash_Size_Pages(void)
+{
+	return (u16)FTL_Get_Page_Num(FTL_Get_Block_Table_Flash_Size_Bytes());
+}
+
+static int FTL_Copy_Block_Table_To_Flash(u8 *flashBuf, u32 sizeToTx,
+					u32 sizeTxed)
+{
+	u32 wBytesCopied, blk_tbl_size, wBytes;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	blk_tbl_size = FTL_Get_LBAPBA_Table_Flash_Size_Bytes();
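+	/*
+	 * Serialize the LBA->PBA table. With SUPPORT_LARGE_BLOCKNUM each
+	 * 32-bit entry is packed into 3 bytes (big-endian, 24 significant
+	 * bits); otherwise into 2 big-endian bytes.
+	 */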
+	for (wBytes = 0;
+	(wBytes < sizeToTx) && ((wBytes + sizeTxed) < blk_tbl_size);
+	wBytes++) {
+#if SUPPORT_LARGE_BLOCKNUM
+		flashBuf[wBytes] = (u8)(pbt[(wBytes + sizeTxed) / 3]
+		>> (((wBytes + sizeTxed) % 3) ?
+		((((wBytes + sizeTxed) % 3) == 2) ? 0 : 8) : 16)) & 0xFF;
+#else
+		flashBuf[wBytes] = (u8)(pbt[(wBytes + sizeTxed) / 2]
+		>> (((wBytes + sizeTxed) % 2) ? 0 : 8)) & 0xFF;
+#endif
+	}
+
+	sizeTxed = (sizeTxed > blk_tbl_size) ? (sizeTxed - blk_tbl_size) : 0;
+	blk_tbl_size = FTL_Get_WearCounter_Table_Flash_Size_Bytes();
+	wBytesCopied = wBytes;
+	wBytes = ((blk_tbl_size - sizeTxed) > (sizeToTx - wBytesCopied)) ?
+		(sizeToTx - wBytesCopied) : (blk_tbl_size - sizeTxed);
+	memcpy(flashBuf + wBytesCopied, g_pWearCounter + sizeTxed, wBytes);
+
+	sizeTxed = (sizeTxed > blk_tbl_size) ? (sizeTxed - blk_tbl_size) : 0;
+
+	if (DeviceInfo.MLCDevice) {
+		blk_tbl_size = FTL_Get_ReadCounter_Table_Flash_Size_Bytes();
+		wBytesCopied += wBytes;
+		for (wBytes = 0; ((wBytes + wBytesCopied) < sizeToTx) &&
+			((wBytes + sizeTxed) < blk_tbl_size); wBytes++)
+			flashBuf[wBytes + wBytesCopied] =
+			(g_pReadCounter[(wBytes + sizeTxed) / 2] >>
+			(((wBytes + sizeTxed) % 2) ? 0 : 8)) & 0xFF;
+	}
+
+	return wBytesCopied + wBytes;
+}
+
+static int FTL_Copy_Block_Table_From_Flash(u8 *flashBuf,
+				u32 sizeToTx, u32 sizeTxed)
+{
+	u32 wBytesCopied, blk_tbl_size, wBytes;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	blk_tbl_size = FTL_Get_LBAPBA_Table_Flash_Size_Bytes();
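+	/*
+	 * Inverse of FTL_Copy_Block_Table_To_Flash: reassemble the 3-byte
+	 * (or 2-byte) big-endian entries read back from flash.
+	 */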
+	for (wBytes = 0; (wBytes < sizeToTx) &&
+		((wBytes + sizeTxed) < blk_tbl_size); wBytes++) {
+#if SUPPORT_LARGE_BLOCKNUM
+		if (!((wBytes + sizeTxed) % 3))
+			pbt[(wBytes + sizeTxed) / 3] = 0;
+		pbt[(wBytes + sizeTxed) / 3] |=
+			(flashBuf[wBytes] << (((wBytes + sizeTxed) % 3) ?
+			((((wBytes + sizeTxed) % 3) == 2) ? 0 : 8) : 16));
+#else
+		if (!((wBytes + sizeTxed) % 2))
+			pbt[(wBytes + sizeTxed) / 2] = 0;
+		pbt[(wBytes + sizeTxed) / 2] |=
+			(flashBuf[wBytes] << (((wBytes + sizeTxed) % 2) ?
+			0 : 8));
+#endif
+	}
+
+	sizeTxed = (sizeTxed > blk_tbl_size) ? (sizeTxed - blk_tbl_size) : 0;
+	blk_tbl_size = FTL_Get_WearCounter_Table_Flash_Size_Bytes();
+	wBytesCopied = wBytes;
+	wBytes = ((blk_tbl_size - sizeTxed) > (sizeToTx - wBytesCopied)) ?
+		(sizeToTx - wBytesCopied) : (blk_tbl_size - sizeTxed);
+	memcpy(g_pWearCounter + sizeTxed, flashBuf + wBytesCopied, wBytes);
+	sizeTxed = (sizeTxed > blk_tbl_size) ? (sizeTxed - blk_tbl_size) : 0;
+
+	if (DeviceInfo.MLCDevice) {
+		wBytesCopied += wBytes;
+		blk_tbl_size = FTL_Get_ReadCounter_Table_Flash_Size_Bytes();
+		for (wBytes = 0; ((wBytes + wBytesCopied) < sizeToTx) &&
+			((wBytes + sizeTxed) < blk_tbl_size); wBytes++) {
+			if (!((wBytes + sizeTxed) % 2))
+				g_pReadCounter[(wBytes + sizeTxed) / 2] = 0;
+			g_pReadCounter[(wBytes + sizeTxed) / 2] |=
+				(flashBuf[wBytes] <<
+				(((wBytes + sizeTxed) % 2) ? 0 : 8));
+		}
+	}
+
+	return wBytesCopied + wBytes;
+}
+
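+/*
+ * The block table signature occupies BTSIG_BYTES bytes at BTSIG_OFFSET;
+ * byte i carries (tag + i * BTSIG_DELTA), wrapped into the range
+ * [FIRST_BT_ID, LAST_BT_ID]. Extraction compares byte pairs whose
+ * difference is a multiple of BTSIG_DELTA, so the tag can still be
+ * recovered when some signature bytes are corrupted.
+ */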
+static int FTL_Insert_Block_Table_Signature(u8 *buf, u8 tag)
+{
+	int i;
+
+	for (i = 0; i < BTSIG_BYTES; i++)
+		buf[BTSIG_OFFSET + i] =
+		((tag + (i * BTSIG_DELTA) - FIRST_BT_ID) %
+		(1 + LAST_BT_ID-FIRST_BT_ID)) + FIRST_BT_ID;
+
+	return PASS;
+}
+
+static int FTL_Extract_Block_Table_Tag(u8 *buf, u8 **tagarray)
+{
+	static u8 tag[BTSIG_BYTES >> 1];
+	int i, j, k, tagi, tagtemp, status;
+
+	*tagarray = (u8 *)tag;
+	tagi = 0;
+
+	for (i = 0; i < (BTSIG_BYTES - 1); i++) {
+		for (j = i + 1; (j < BTSIG_BYTES) &&
+			(tagi < (BTSIG_BYTES >> 1)); j++) {
+			tagtemp = buf[BTSIG_OFFSET + j] -
+				buf[BTSIG_OFFSET + i];
+			if (tagtemp && !(tagtemp % BTSIG_DELTA)) {
+				tagtemp = (buf[BTSIG_OFFSET + i] +
+					(1 + LAST_BT_ID - FIRST_BT_ID) -
+					(i * BTSIG_DELTA)) %
+					(1 + LAST_BT_ID - FIRST_BT_ID);
+				status = FAIL;
+				for (k = 0; k < tagi; k++) {
+					if (tagtemp == tag[k])
+						status = PASS;
+				}
+
+				if (status == FAIL) {
+					tag[tagi++] = tagtemp;
+					i = (j == (i + 1)) ? i + 1 : i;
+					j = (j == (i + 1)) ? i + 1 : i;
+				}
+			}
+		}
+	}
+
+	return tagi;
+}
+
+
+static int FTL_Execute_SPL_Recovery(void)
+{
+	u32 j, block, blks;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	int ret;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+				__FILE__, __LINE__, __func__);
+
+	blks = DeviceInfo.wSpectraEndBlock - DeviceInfo.wSpectraStartBlock;
+	for (j = 0; j <= blks; j++) {
+		block = (pbt[j]);
+		if (((block & BAD_BLOCK) != BAD_BLOCK) &&
+			((block & SPARE_BLOCK) == SPARE_BLOCK)) {
+#if CMD_DMA
+			ret =  GLOB_LLD_Erase_Block(block & ~BAD_BLOCK,
+			FTLCommandCount, LLD_CMD_FLAG_MODE_POLL);
+#else
+			ret =  GLOB_LLD_Erase_Block(block & ~BAD_BLOCK);
+#endif
+			if (FAIL == ret) {
+				nand_dbg_print(NAND_DBG_WARN,
+					"NAND Program fail in %s, Line %d, "
+					"Function: %s, new Bad Block %d "
+					"generated!\n",
+					__FILE__, __LINE__, __func__,
+					(int)(block & ~BAD_BLOCK));
+				MARK_BLOCK_AS_BAD(pbt[j]);
+			}
+		}
+	}
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_IdentifyDevice
+* Inputs:       pointer to identify data structure
+* Outputs:      PASS / FAIL
+* Description:  the identify data structure is filled in with
+*                   information for the block driver.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_IdentifyDevice(struct spectra_indentfy_dev_tag *dev_data)
+{
+	int status = PASS;
+	int bufMem;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+				__FILE__, __LINE__, __func__);
+
+	bufMem = (DeviceInfo.wPageDataSize -
+		  ((DeviceInfo.wDataBlockNum *
+		    (sizeof(u32) + sizeof(u8)
+		     + (DeviceInfo.MLCDevice ? sizeof(u16) : 0))) %
+		   DeviceInfo.wPageDataSize)) %
+		   DeviceInfo.wPageDataSize;
+
+	dev_data->NumBlocks = DeviceInfo.wTotalBlocks;
+	dev_data->PagesPerBlock = DeviceInfo.wPagesPerBlock;
+	dev_data->PageDataSize = DeviceInfo.wPageDataSize;
+	dev_data->wECCBytesPerSector =
+	    DeviceInfo.wECCBytesPerSector;
+	dev_data->wDataBlockNum = DeviceInfo.wDataBlockNum;
+
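+	/*
+	 * Worst-case size of the pool that GLOB_FTL_Init() will carve up:
+	 * block table and wear/read counters (plus, with CMD_DMA, their
+	 * snapshot copies and the delta journal), cache blocks, temp
+	 * buffers, the LLD's own pool, and 4 bytes per sub-allocation
+	 * (NUM_MEMPOOL_ALLOCS) to cover dword alignment.
+	 */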
+	dev_data->SizeOfGlobalMem =
+	    (DeviceInfo.wDataBlockNum  * sizeof(u32) * 2) +
+	    (DeviceInfo.wDataBlockNum * sizeof(u8) + 2) +
+	    (DeviceInfo.MLCDevice ?
+	     (DeviceInfo.wDataBlockNum * sizeof(u16)
+#if CMD_DMA
+	      * (1+1+1)
+#endif
+	      ) : 0) + bufMem +
+#if (PAGES_PER_CACHE_BLOCK > 0)
+	((CACHE_BLOCK_NUMBER + 1) * PAGES_PER_CACHE_BLOCK *
+	DeviceInfo.wPageDataSize * sizeof(u8)) +
+#else
+	((CACHE_BLOCK_NUMBER+1) * DeviceInfo.wPagesPerBlock *
+	DeviceInfo.wPageDataSize * sizeof(u8)) +
+#endif
+	(DeviceInfo.wPageSize*sizeof(u8)) +
+	(DeviceInfo.wPagesPerBlock * DeviceInfo.wPageDataSize * sizeof(u8))
+	+
+#if CMD_DMA
+	(DeviceInfo.wDataBlockNum * sizeof(u32)) +
+	(DeviceInfo.wDataBlockNum * sizeof(u8)) +
+	(5 * ((DeviceInfo.wDataBlockNum * sizeof(u32)) +
+	(DeviceInfo.wDataBlockNum * sizeof(u8)) +
+	(DeviceInfo.wDataBlockNum * sizeof(u16)))) +
+	(MAX_DESCS * sizeof(struct BTableChangesDelta)) +
+	(10 * DeviceInfo.wPagesPerBlock * DeviceInfo.wPageDataSize) +
+#endif
+	((1 + LAST_BT_ID - FIRST_BT_ID) * sizeof(u32)) +
+	(DeviceInfo.wDataBlockNum) +
+	(DeviceInfo.wPageDataSize * sizeof(u8) * 2) +
+	(((DeviceInfo.wPageSize - DeviceInfo.wPageDataSize) *
+	sizeof(u8)) * 2) +
+	(DeviceInfo.wDataBlockNum) +
+#if !CMD_DMA
+	(DeviceInfo.wPageDataSize * DeviceInfo.wPagesPerBlock *
+	sizeof(u8) * 2) +
+#endif
+	DeviceInfo.wBlockSize + GLOB_LLD_Memory_Pool_Size() +
+	(NUM_MEMPOOL_ALLOCS * sizeof(u8) * 4);
+
+	globalMemSize = dev_data->SizeOfGlobalMem;
+
+	return status;
+}
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:        GLOB_FTL_Mem_Config
+* Inputs:          pointer to the memory that is allocated
+* Outputs:         PASS / FAIL
+* Description:     This allows the Block Driver to do the memory allocation
+*                  and is used in place of the FTL doing mallocs. The
+*                  Block Driver assigns the length based on data passed
+*                  to it in the GLOB_FTL_IdentifyDevice function.
+*                  There is sanity checking that the pointers are not NULL.
+*                  There is no sanity checking for the length. If this
+*                  becomes necessary, an additional parameter will
+*                  be needed.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Mem_Config(u8 *pMem)
+{
+	int status = FAIL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	if (pMem != NULL) {
+		g_pMemPool = pMem;
+		status = GLOB_LLD_Mem_Config(pMem + globalMemSize -
+			GLOB_LLD_Memory_Pool_Size());
+	}
+
+	return status;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Init
+* Inputs:       none
+* Outputs:      PASS=0 / FAIL=1
+* Description:  allocates the memory for the cache array and other
+*               important data structures,
+*               clears the cache array,
+*               and reads the block table from flash into the array
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Init(void)
+{
+	int i;
+	int status = PASS;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+#if (PAGES_PER_CACHE_BLOCK > 0)
+	Cache.wCachePageNum = PAGES_PER_CACHE_BLOCK;
+#else
+	Cache.wCachePageNum = DeviceInfo.wPagesPerBlock;
+#endif
+	Cache.dwCacheDataSize = (u32)Cache.wCachePageNum *
+		DeviceInfo.wPageDataSize;
+
+	g_pMemPoolFree = (u8 *)g_pMemPool;
+
+	g_pBlockTable = (u8 *)g_pMemPoolFree;
+	memset(g_pBlockTable, 0, DeviceInfo.wDataBlockNum * sizeof(u32));
+	g_pMemPoolFree += DeviceInfo.wDataBlockNum * sizeof(u32);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	g_pWearCounter = (u8 *)g_pMemPoolFree;
+	memset(g_pWearCounter, 0, DeviceInfo.wDataBlockNum * sizeof(u8));
+	g_pMemPoolFree += DeviceInfo.wDataBlockNum * sizeof(u8);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	if (DeviceInfo.MLCDevice) {
+		g_pReadCounter = (u16 *)g_pMemPoolFree;
+		g_pMemPoolFree += DeviceInfo.wDataBlockNum * sizeof(u16);
+		memset(g_pReadCounter, 0,
+			DeviceInfo.wDataBlockNum * sizeof(u16));
+		ALIGN_DWORD_FWD(g_pMemPoolFree);
+	}
+
+	for (i = 0; i < CACHE_BLOCK_NUMBER; i++) {
+		Cache.ItemArray[i].dwAddress = NAND_CACHE_INIT_ADDR;
+		Cache.ItemArray[i].bLRUCount = 0;
+		Cache.ItemArray[i].bChanged = CLEAR;
+		Cache.ItemArray[i].pContent = (u8 *)g_pMemPoolFree;
+		g_pMemPoolFree += Cache.dwCacheDataSize * sizeof(u8);
+		ALIGN_DWORD_FWD(g_pMemPoolFree);
+	}
+
+	g_pIPF = (u8 *)g_pMemPoolFree;
+	g_pMemPoolFree += DeviceInfo.wPageSize * sizeof(u8);
+	memset(g_pIPF, 0, DeviceInfo.wPageSize);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	g_pTempBuf = (u8 *)g_pMemPoolFree;
+	g_pMemPoolFree += Cache.dwCacheDataSize * sizeof(u8);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	g_temp_buf = (u8 *)g_pMemPoolFree;
+	g_pMemPoolFree += DeviceInfo.wPagesPerBlock *
+			DeviceInfo.wPageDataSize * sizeof(u8);
+	memset(g_temp_buf, 0xFF,
+		DeviceInfo.wPagesPerBlock * DeviceInfo.wPageDataSize);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+#if CMD_DMA
+	g_pBTStartingCopy = (u8 *)g_pMemPoolFree;
+	g_pMemPoolFree += (DeviceInfo.wDataBlockNum * sizeof(u32));
+	memset(g_pBTStartingCopy, 0, DeviceInfo.wDataBlockNum * sizeof(u32));
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	g_pWearCounterCopy = (u8 *)g_pMemPoolFree;
+	memset(g_pWearCounterCopy, 0, DeviceInfo.wDataBlockNum * sizeof(u8));
+	g_pMemPoolFree += DeviceInfo.wDataBlockNum * sizeof(u8);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	if (DeviceInfo.MLCDevice) {
+		g_pReadCounterCopy = (u16 *)g_pMemPoolFree;
+		g_pMemPoolFree += DeviceInfo.wDataBlockNum * sizeof(u16);
+		memset(g_pReadCounterCopy, 0,
+			DeviceInfo.wDataBlockNum * sizeof(u16));
+		ALIGN_DWORD_FWD(g_pMemPoolFree);
+	}
+
+	g_pBlockTableCopies = (u8 *)g_pMemPoolFree;
+	g_pNextBlockTable = g_pBlockTableCopies;
+
+	if (DeviceInfo.MLCDevice)
+		g_pMemPoolFree += 5 *
+			(DeviceInfo.wDataBlockNum * sizeof(u32) +
+			DeviceInfo.wDataBlockNum * sizeof(u8) +
+			DeviceInfo.wDataBlockNum * sizeof(u16));
+	else
+		g_pMemPoolFree += 5 *
+			(DeviceInfo.wDataBlockNum * sizeof(u32) +
+			DeviceInfo.wDataBlockNum * sizeof(u8));
+
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	g_pBTDelta = (u8 *)g_pMemPoolFree;
+	g_pMemPoolFree += (MAX_DESCS * sizeof(struct BTableChangesDelta));
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	FTLCommandCount = 0;
+	g_pBTDelta_Free = (u8 *)g_pBTDelta;
+	g_pCopyBackBufferCopies = (u8 *)g_pMemPoolFree;
+	g_pMemPoolFree += 10 * DeviceInfo.wPagesPerBlock *
+		DeviceInfo.wPageDataSize;
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+
+	g_pCopyBackBufferStart = g_pCopyBackBufferCopies;
+#endif
+	g_pBTBlocks = (u32 *)g_pMemPoolFree;
+	g_pMemPoolFree += (1 + LAST_BT_ID - FIRST_BT_ID) * sizeof(u32);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	memset(g_pBTBlocks, 0xFF,
+		(1 + LAST_BT_ID - FIRST_BT_ID) * sizeof(u32));
+	debug_boundary_error((int)(g_pMemPoolFree - g_pMemPool) - 1,
+		globalMemSize, 0);
+
+	status = FTL_Read_Block_Table();
+
+#if CMD_DMA
+	FTLCommandCount = 0;
+#endif
+
+	return status;
+}
+
+#if CMD_DMA
+int GLOB_FTL_cdma_int(void)
+{
+	return GLOB_LLD_is_cdma_int();
+}
+
+static void save_blk_table_changes(u16 idx)
+{
+	u8 ftl_cmd;
+	u32 *pbt = (u32 *)g_pBTStartingCopy;
+
+	ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+
+	while (ftl_cmd <= PendingCMD[idx].Tag) {
+		if (p_BTableChangesDelta->ValidFields == 0x01) {
+			g_wBlockTableOffset =
+				p_BTableChangesDelta->g_wBlockTableOffset;
+		} else if (p_BTableChangesDelta->ValidFields == 0x0C) {
+			pbt[p_BTableChangesDelta->BT_Index] =
+				p_BTableChangesDelta->BT_Entry_Value;
+			debug_boundary_error(((
+				p_BTableChangesDelta->BT_Index)),
+				DeviceInfo.wDataBlockNum, 0);
+		} else if (p_BTableChangesDelta->ValidFields == 0x03) {
+			g_wBlockTableOffset =
+				p_BTableChangesDelta->g_wBlockTableOffset;
+			g_wBlockTableIndex =
+				p_BTableChangesDelta->g_wBlockTableIndex;
+		} else if (p_BTableChangesDelta->ValidFields == 0x30) {
+			g_pWearCounterCopy[p_BTableChangesDelta->WC_Index] =
+				p_BTableChangesDelta->WC_Entry_Value;
+		} else if ((DeviceInfo.MLCDevice) &&
+			(p_BTableChangesDelta->ValidFields == 0xC0)) {
+			g_pReadCounterCopy[p_BTableChangesDelta->RC_Index] =
+				p_BTableChangesDelta->RC_Entry_Value;
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"In event status setting read counter "
+				"FTLCommandCount %u Count %u Index %u\n",
+				ftl_cmd,
+				p_BTableChangesDelta->RC_Entry_Value,
+				(unsigned int)p_BTableChangesDelta->RC_Index);
+		} else {
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"This should never occur\n");
+		}
+		p_BTableChangesDelta += 1;
+		ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+	}
+}
+
+static void discard_cmds(u16 n)
+{
+	u32 *pbt = (u32 *)g_pBTStartingCopy;
+	u8 ftl_cmd;
+	unsigned long k, cn;
+
+	if ((PendingCMD[n].CMD == WRITE_MAIN_CMD) ||
+		(PendingCMD[n].CMD == WRITE_MAIN_SPARE_CMD)) {
+		for (k = 0; k < DeviceInfo.wDataBlockNum; k++) {
+			if (PendingCMD[n].Block == (pbt[k] & (~BAD_BLOCK)))
+				MARK_BLK_AS_DISCARD(pbt[k]);
+		}
+	}
+
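+	/* Consume this command's journal entries without applying them. */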
+	ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+	while (ftl_cmd <= PendingCMD[n].Tag) {
+		p_BTableChangesDelta += 1;
+		ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+	}
+
+	cn = UNHIT_BLOCK;
+	for (k = 0; k < CACHE_BLOCK_NUMBER; k++) {
+		if (PendingCMD[n].DataAddr == Cache.ItemArray[k].pContent) {
+			cn = k;
+			break;
+		}
+	}
+	if (cn < UNHIT_BLOCK) {
+		Cache.ItemArray[cn].dwAddress = NAND_CACHE_INIT_ADDR;
+		Cache.ItemArray[cn].bLRUCount = 0;
+		Cache.ItemArray[cn].bChanged  = CLEAR;
+	}
+}
+
+static void process_cmd_pass(int *first_failed_cmd, u16 idx)
+{
+	int is_rw_cmd;
+
+	is_rw_cmd = (PendingCMD[idx].CMD == WRITE_MAIN_CMD) ||
+		(PendingCMD[idx].CMD == WRITE_MAIN_SPARE_CMD) ||
+		(PendingCMD[idx].CMD == READ_MAIN_CMD) ||
+		(PendingCMD[idx].CMD == READ_MAIN_SPARE_CMD);
+
+	if (0 == *first_failed_cmd)
+		save_blk_table_changes(idx);
+	else if (is_rw_cmd)
+		discard_cmds(idx);
+}
+
+static void process_cmd_fail_abort(int *first_failed_cmd,
+				u16 idx, int event)
+{
+	u32 *pbt = (u32 *)g_pBTStartingCopy;
+	u8 ftl_cmd;
+	unsigned long i, k, cn;
+	int erase_fail, program_fail;
+
+	if (0 == *first_failed_cmd)
+		*first_failed_cmd = PendingCMD[idx].SBDCmdIndex;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Uncorrectable error has occurred "
+		"while executing %u Command %u accessing Block %u\n",
+		(unsigned int)p_BTableChangesDelta->FTLCommandCount,
+		PendingCMD[idx].CMD,
+		(unsigned int)PendingCMD[idx].Block);
+
+	ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+	while (ftl_cmd <= PendingCMD[idx].Tag) {
+		p_BTableChangesDelta += 1;
+		ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+	}
+
+	if ((PendingCMD[idx].CMD == READ_MAIN_CMD) ||
+		(PendingCMD[idx].CMD == READ_MAIN_SPARE_CMD)) {
+		for (i = 0; i < CACHE_BLOCK_NUMBER; i++) {
+			Cache.ItemArray[i].dwAddress = NAND_CACHE_INIT_ADDR;
+			Cache.ItemArray[i].bLRUCount = 0;
+			Cache.ItemArray[i].bChanged  = CLEAR;
+		}
+	} else if ((PendingCMD[idx].CMD == WRITE_MAIN_CMD) ||
+		(PendingCMD[idx].CMD == WRITE_MAIN_SPARE_CMD)) {
+		cn = 0;
+		for (k = 0; k < DeviceInfo.wDataBlockNum; k++) {
+			if (PendingCMD[idx].Block == (pbt[k] & (~BAD_BLOCK))) {
+				Cache.ItemArray[0].dwAddress = (u64)k *
+					DeviceInfo.wBlockDataSize;
+				Cache.ItemArray[0].bLRUCount = 0;
+				Cache.ItemArray[0].bChanged  = SET;
+				break;
+			}
+		}
+
+		if (k == DeviceInfo.wDataBlockNum)
+			cn = 0;
+		else
+			cn = 1;
+
+		for (i = cn; i < CACHE_BLOCK_NUMBER; i++) {
+			Cache.ItemArray[i].dwAddress = NAND_CACHE_INIT_ADDR;
+			Cache.ItemArray[i].bLRUCount = 0;
+			Cache.ItemArray[i].bChanged  = CLEAR;
+		}
+	}
+
+	erase_fail = (event == EVENT_ERASE_FAILURE) &&
+			(PendingCMD[idx].CMD == ERASE_CMD);
+
+	program_fail = (event == EVENT_PROGRAM_FAILURE) &&
+			((PendingCMD[idx].CMD == WRITE_MAIN_CMD) ||
+			(PendingCMD[idx].CMD == WRITE_MAIN_SPARE_CMD));
+
+	if (erase_fail || program_fail) {
+		for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+			if (PendingCMD[idx].Block ==
+				(pbt[i] & (~BAD_BLOCK)))
+				MARK_BLOCK_AS_BAD(pbt[i]);
+		}
+	}
+}
+
+static void process_cmd(int *first_failed_cmd, u16 idx, int event)
+{
+	u8 ftl_cmd;
+
+	if (PendingCMD[idx].Status == CMD_PASS) {
+		process_cmd_pass(first_failed_cmd, idx);
+	} else if ((PendingCMD[idx].Status == CMD_FAIL) ||
+			(PendingCMD[idx].Status == CMD_ABORT)) {
+		process_cmd_fail_abort(first_failed_cmd, idx, event);
+	} else if ((PendingCMD[idx].Status == CMD_NOT_DONE) &&
+					PendingCMD[idx].Tag) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Command no. %u is not executed\n",
+			(unsigned int)PendingCMD[idx].Tag);
+		ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+		while (ftl_cmd <= PendingCMD[idx].Tag) {
+			p_BTableChangesDelta += 1;
+			ftl_cmd = p_BTableChangesDelta->FTLCommandCount;
+		}
+	}
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Event_Status
+* Inputs:       none
+* Outputs:      Event Code
+* Description:  It is called by the SBD after a hardware interrupt signals
+*               completion of a command chain.
+*               It does the following:
+*               gets the event status from the LLD,
+*               analyzes the command chain status,
+*               determines the last command executed,
+*               analyzes the results,
+*               rebuilds the block table in case of an uncorrectable error,
+*               and returns the event code
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Event_Status(int *first_failed_cmd)
+{
+	int event_code = PASS;
+	u16 i_P;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	*first_failed_cmd = 0;
+
+	event_code = GLOB_LLD_Event_Status();
+	nand_dbg_print(NAND_DBG_DEBUG, "Event code from LLD: %d\n",
+		event_code);
+
+	switch (event_code) {
+	case EVENT_PASS:
+		nand_dbg_print(NAND_DBG_DEBUG, "Handling EVENT_PASS\n");
+		break;
+	case EVENT_CORRECTABLE_DATA_ERROR_FIXED:
+		nand_dbg_print(NAND_DBG_DEBUG, "Handling "
+			       "EVENT_CORRECTABLE_DATA_ERROR_FIXED\n");
+		return event_code;
+	case EVENT_UNCORRECTABLE_DATA_ERROR:
+	case EVENT_PROGRAM_FAILURE:
+	case EVENT_ERASE_FAILURE:
+		nand_dbg_print(NAND_DBG_DEBUG, "Handling Ugly case\n");
+		nand_dbg_print(NAND_DBG_DEBUG, "UNCORRECTABLE "
+			       "DATA ERROR HAS HAPPENED\n");
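+		/*
+		 * Replay the delta journal against the snapshot copies,
+		 * then promote the snapshot to the live block table.
+		 */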
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta;
+		for (i_P = MAX_CHANS; i_P < (FTLCommandCount + MAX_CHANS);
+				i_P++)
+			process_cmd(first_failed_cmd, i_P, event_code);
+		memcpy(g_pBlockTable, g_pBTStartingCopy,
+			DeviceInfo.wDataBlockNum * sizeof(u32));
+		memcpy(g_pWearCounter, g_pWearCounterCopy,
+			DeviceInfo.wDataBlockNum * sizeof(u8));
+		if (DeviceInfo.MLCDevice)
+			memcpy(g_pReadCounter, g_pReadCounterCopy,
+				DeviceInfo.wDataBlockNum * sizeof(u16));
+		FTL_Write_Block_Table(FAIL);
+		break;
+	default:
+		nand_dbg_print(NAND_DBG_DEBUG, "Handling default case\n");
+		event_code = FAIL;
+		break;
+	}
+
+	memcpy(g_pBTStartingCopy, g_pBlockTable,
+		DeviceInfo.wDataBlockNum * sizeof(u32));
+	memcpy(g_pWearCounterCopy, g_pWearCounter,
+		DeviceInfo.wDataBlockNum * sizeof(u8));
+	if (DeviceInfo.MLCDevice)
+		memcpy(g_pReadCounterCopy, g_pReadCounter,
+			DeviceInfo.wDataBlockNum * sizeof(u16));
+
+	g_pBTDelta_Free = g_pBTDelta;
+	FTLCommandCount = 0;
+	g_pNextBlockTable = g_pBlockTableCopies;
+	g_pCopyBackBufferStart = g_pCopyBackBufferCopies;
+
+	return event_code;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Enable_Disable_Interrupts
+* Inputs:       enable or disable
+* Outputs:      none
+* Description:  pass through to LLD
+**************************************************************/
+void GLOB_FTL_Enable_Disable_Interrupts(u16 int_enable)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	GLOB_LLD_Enable_Disable_Interrupts(int_enable);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Execute_CMDS
+* Inputs:       none
+* Outputs:      none
+* Description:  pass through to LLD
+***************************************************************/
+void GLOB_FTL_Execute_CMDS(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE,
+		"GLOB_FTL_Execute_CMDS: FTLCommandCount %u\n",
+		(unsigned int)FTLCommandCount);
+	g_SBDCmdIndex = 0;
+	GLOB_LLD_Execute_CMDs(FTLCommandCount);
+}
+
+#endif
+
+#if !CMD_DMA
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Read_Immediate
+* Inputs:       pointer to data
+*               address of data
+* Outputs:      PASS / FAIL
+* Description:  Reads one page of data into RAM directly from flash without
+*       using or disturbing the cache. It is assumed this function is called
+*       with CMD-DMA disabled.
+*****************************************************************/
+int GLOB_FTL_Read_Immediate(u8 *read_data, u64 addr)
+{
+	int wResult = FAIL;
+	u32 Block;
+	u16 Page;
+	u32 phy_blk;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	Block = BLK_FROM_ADDR(addr);
+	Page = PAGE_FROM_ADDR(addr, Block);
+
+	if (!IS_SPARE_BLOCK(Block))
+		return FAIL;
+
+	phy_blk = pbt[Block];
+	wResult = GLOB_LLD_Read_Page_Main(read_data, phy_blk, Page, 1);
+
+	if (DeviceInfo.MLCDevice) {
+		g_pReadCounter[phy_blk - DeviceInfo.wSpectraStartBlock]++;
+		if (g_pReadCounter[phy_blk - DeviceInfo.wSpectraStartBlock]
+			>= MAX_READ_COUNTER)
+			FTL_Read_Disturbance(phy_blk);
+		if (g_cBlockTableStatus != IN_PROGRESS_BLOCK_TABLE) {
+			g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+			FTL_Write_IN_Progress_Block_Table_Page();
+		}
+	}
+
+	return wResult;
+}
+#endif
+
+#ifdef SUPPORT_BIG_ENDIAN
+/*********************************************************************
+* Function:     FTL_Invert_Block_Table
+* Inputs:       none
+* Outputs:      none
+* Description:  Re-format the block table in RAM based on BIG_ENDIAN and
+*                     LARGE_BLOCKNUM if necessary
+**********************************************************************/
+static void FTL_Invert_Block_Table(void)
+{
+	u32 i;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+#if SUPPORT_LARGE_BLOCKNUM
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+		pbt[i] = INVERTUINT32(pbt[i]);
+		g_pWearCounter[i] = INVERTUINT32(g_pWearCounter[i]);
+	}
+#else
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+		pbt[i] = INVERTUINT16(pbt[i]);
+		g_pWearCounter[i] = INVERTUINT16(g_pWearCounter[i]);
+	}
+#endif
+}
+#endif
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Flash_Init
+* Inputs:       none
+* Outputs:      PASS=0 / FAIL=0x01 (based on read ID)
+* Description:  The flash controller is initialized
+*               The flash device is reset
+*               Perform a flash READ ID command to confirm that a
+*                   valid device is attached and active.
+*                   The DeviceInfo structure gets filled in
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Flash_Init(void)
+{
+	int status = FAIL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+#if CMD_DMA
+	GLOB_LLD_Flash_Init(LLD_CMD_FLAG_MODE_POLL);
+#else
+	GLOB_LLD_Flash_Init();
+#endif
+	status = GLOB_LLD_Read_Device_ID();
+
+	return status;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Flash_Release
+* Inputs:       none
+* Outputs:      PASS=0 / FAIL=0x01
+* Description:  The flash controller is released
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Flash_Release(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	return GLOB_LLD_Flash_Release();
+}
+
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Cache_Release
+* Inputs:       none
+* Outputs:      none
+* Description:  release all memory allocated in GLOB_FTL_Init
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+void GLOB_FTL_Cache_Release(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_If_Hit
+* Inputs:       Page Address
+* Outputs:      Cache block number / UNHIT_BLOCK
+* Description:  Determines if the addressed page is in cache
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u8 FTL_Cache_If_Hit(u64 page_addr)
+{
+	u8 i, blk;
+	u64 addr;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	blk = UNHIT_BLOCK;
+	for (i = 0; i < CACHE_BLOCK_NUMBER; i++) {
+		addr = Cache.ItemArray[i].dwAddress;
+		if ((addr <= page_addr) &&
+			(addr + Cache.dwCacheDataSize > page_addr)) {
+			blk = i;
+			break;
+		}
+	}
+
+	return blk;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Calculate_LRU
+* Inputs:       None
+* Outputs:      None
+* Description:  Calculate the least recently used block in the cache and
+*               record its index in the bLRU field.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static void FTL_Calculate_LRU(void)
+{
+	u8 i, bCurrentLRU, bTempCount;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	bCurrentLRU = 0;
+	bTempCount = MAX_BYTE_VALUE;
+
+	for (i = 0; i < CACHE_BLOCK_NUMBER; i++) {
+		if (Cache.ItemArray[i].bLRUCount < bTempCount) {
+			bCurrentLRU = i;
+			bTempCount = Cache.ItemArray[i].bLRUCount;
+		}
+	}
+
+	Cache.bLRU = bCurrentLRU;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Read_Page
+* Inputs:       pointer to read buffer, page address, and cache block number
+* Outputs:      None
+* Description:  Read the page from the cached block addressed by the cache
+*               block number
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static void FTL_Cache_Read_Page(u8 *pData, u64 dwPageAddr,
+					u8 cache_blk)
+{
+	u8 *pSrc;
+	u64 addr;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	addr = Cache.ItemArray[cache_blk].dwAddress;
+	pSrc = Cache.ItemArray[cache_blk].pContent;
+	pSrc += (unsigned long)(((dwPageAddr - addr) >>
+		DeviceInfo.nBitsInPageDataSize) * DeviceInfo.wPageDataSize);
+
+#if CMD_DMA
+	GLOB_LLD_MemCopy_CMD(FTLCommandCount, pData, pSrc,
+			DeviceInfo.wPageDataSize, 0);
+	FTLCommandCount++;
+#else
+	memcpy(pData, pSrc, DeviceInfo.wPageDataSize);
+#endif
+
+	if (Cache.ItemArray[cache_blk].bLRUCount < MAX_BYTE_VALUE)
+		Cache.ItemArray[cache_blk].bLRUCount++;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Read_All
+* Inputs:       pointer to read buffer, block address
+* Outputs:      PASS=0 / FAIL=1
+* Description:  Reads one cache block's worth of pages from flash
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Cache_Read_All(u8 *pData, u64 blk_addr)
+{
+	int wResult;
+	u32 Block;
+	u32 lba = BAD_BLOCK;
+	u16 Page;
+	u16 PageCount;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	wResult = PASS;
+
+	Block = BLK_FROM_ADDR(blk_addr);
+	Page = PAGE_FROM_ADDR(blk_addr, Block);
+
+	PageCount = Cache.wCachePageNum;
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"FTL_Cache_Read_All: Reading Block %u\n",
+		(unsigned int)Block);
+
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+		if (Block == (pbt[i] & (~BAD_BLOCK))) {
+			lba = i;
+			if (IS_SPARE_BLOCK(i) || IS_BAD_BLOCK(i) ||
+				IS_DISCARDED_BLOCK(i)) {
+				/* Add by yunpeng -2008.12.3 */
+#if CMD_DMA
+				GLOB_LLD_MemCopy_CMD(FTLCommandCount,
+				pData, g_temp_buf,
+				PageCount * DeviceInfo.wPageDataSize, 0);
+				FTLCommandCount++;
+#else
+				memset(pData, 0xFF,
+					PageCount * DeviceInfo.wPageDataSize);
+#endif
+				return wResult;
+			} else {
+				continue;
+			}
+		}
+	}
+
+	if (lba == BAD_BLOCK)
+		printk(KERN_ERR "FTL_Cache_Read_All: Block is not found in BT\n");
+
+#if CMD_DMA
+	wResult = GLOB_LLD_Read_Page_Main(pData, Block, Page, PageCount,
+					    FTLCommandCount,
+					    LLD_CMD_FLAG_MODE_CDMA);
+	if (DeviceInfo.MLCDevice) {
+		g_pReadCounter[Block - DeviceInfo.wSpectraStartBlock]++;
+		nand_dbg_print(NAND_DBG_DEBUG,
+			       "Read Counter modified in FTLCommandCount %u"
+				" Block %u Counter%u\n",
+			       FTLCommandCount, (unsigned int)Block,
+			       g_pReadCounter[Block -
+			       DeviceInfo.wSpectraStartBlock]);
+
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+		p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+		p_BTableChangesDelta->RC_Index =
+			Block - DeviceInfo.wSpectraStartBlock;
+		p_BTableChangesDelta->RC_Entry_Value =
+			g_pReadCounter[Block - DeviceInfo.wSpectraStartBlock];
+		p_BTableChangesDelta->ValidFields = 0xC0;
+
+		FTLCommandCount++;
+
+		if (g_pReadCounter[Block - DeviceInfo.wSpectraStartBlock] >=
+		    MAX_READ_COUNTER)
+			FTL_Read_Disturbance(Block);
+		if (g_cBlockTableStatus != IN_PROGRESS_BLOCK_TABLE) {
+			g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+			FTL_Write_IN_Progress_Block_Table_Page();
+		}
+	} else {
+		FTLCommandCount++;
+	}
+#else
+	wResult = GLOB_LLD_Read_Page_Main(pData, Block, Page, PageCount);
+	if (wResult == FAIL)
+		return wResult;
+
+	if (DeviceInfo.MLCDevice) {
+		g_pReadCounter[Block - DeviceInfo.wSpectraStartBlock]++;
+		if (g_pReadCounter[Block - DeviceInfo.wSpectraStartBlock] >=
+						MAX_READ_COUNTER)
+			FTL_Read_Disturbance(Block);
+		if (g_cBlockTableStatus != IN_PROGRESS_BLOCK_TABLE) {
+			g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+			FTL_Write_IN_Progress_Block_Table_Page();
+		}
+	}
+#endif
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Write_All
+* Inputs:       pointer to cache in sys memory
+*               address of free block in flash
+* Outputs:      PASS=0 / FAIL=1
+* Description:  writes all the pages of the block in cache to flash
+*
+*               NOTE: need to make sure this works OK when the cache is
+*               limited to a partial block. This is where copy-back would
+*               be activated. This would require knowing which pages in the
+*               cached block are clean/dirty. Right now we only know if
+*               the whole block is clean/dirty.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Cache_Write_All(u8 *pData, u64 blk_addr)
+{
+	u16 wResult = PASS;
+	u32 Block;
+	u16 Page;
+	u16 PageCount;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Cache block %d is going to be "
+		"written to block %d\n", cache_block_to_write,
+		(u32)(blk_addr >> DeviceInfo.nBitsInBlockDataSize));
+
+	Block = BLK_FROM_ADDR(blk_addr);
+	Page = PAGE_FROM_ADDR(blk_addr, Block);
+	PageCount = Cache.wCachePageNum;
+
+#if CMD_DMA
+	if (FAIL == GLOB_LLD_Write_Page_Main(pData, Block, Page, PageCount,
+					     FTLCommandCount)) {
+		nand_dbg_print(NAND_DBG_WARN,
+			"NAND Program fail in %s, Line %d, "
+			"Function: %s, new Bad Block %d generated! "
+			"Need Bad Block replacing.\n",
+			__FILE__, __LINE__, __func__, Block);
+		wResult = FAIL;
+	}
+	FTLCommandCount++;
+#else
+	if (FAIL == GLOB_LLD_Write_Page_Main(pData, Block, Page, PageCount)) {
+		nand_dbg_print(NAND_DBG_WARN, "NAND Program fail in %s,"
+			" Line %d, Function %s, new Bad Block %d generated!"
+			"Need Bad Block replacing.\n",
+			__FILE__, __LINE__, __func__, Block);
+		wResult = FAIL;
+	}
+#endif
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Update_Block
+* Inputs:       pointer to buffer,page address,block address
+* Outputs:      PASS=0 / FAIL=1
+* Description:  It updates the cache
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Cache_Update_Block(u8 *pData,
+			u64 old_page_addr, u64 blk_addr)
+{
+	int i, j;
+	u8 *buf = pData;
+	int wResult = PASS;
+	int wFoundInCache;
+	u64 page_addr;
+	u64 addr;
+	u64 old_blk_addr;
+	u16 page_offset;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+				__FILE__, __LINE__, __func__);
+
+	old_blk_addr = (u64)(old_page_addr >>
+		DeviceInfo.nBitsInBlockDataSize) * DeviceInfo.wBlockDataSize;
+	page_offset = (u16)(GLOB_u64_Remainder(old_page_addr, 2) >>
+		DeviceInfo.nBitsInPageDataSize);
+
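+	/*
+	 * Merge one cache unit at a time: the page being updated comes
+	 * from pData, every other page is taken from the cache if present
+	 * (or read into g_pTempBuf), and the result is written to the new
+	 * block.
+	 */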
+	for (i = 0; i < DeviceInfo.wPagesPerBlock; i += Cache.wCachePageNum) {
+		page_addr = old_blk_addr + i * DeviceInfo.wPageDataSize;
+		if (i != page_offset) {
+			wFoundInCache = FAIL;
+			for (j = 0; j < CACHE_BLOCK_NUMBER; j++) {
+				addr = Cache.ItemArray[j].dwAddress;
+				addr = FTL_Get_Physical_Block_Addr(addr) +
+					GLOB_u64_Remainder(addr, 2);
+				if ((addr >= page_addr) && addr <
+					(page_addr + Cache.dwCacheDataSize)) {
+					wFoundInCache = PASS;
+					buf = Cache.ItemArray[j].pContent;
+					Cache.ItemArray[j].bChanged = SET;
+					break;
+				}
+			}
+			if (FAIL == wFoundInCache) {
+				if (ERR == FTL_Cache_Read_All(g_pTempBuf,
+					page_addr)) {
+					wResult = FAIL;
+					break;
+				}
+				buf = g_pTempBuf;
+			}
+		} else {
+			buf = pData;
+		}
+
+		if (FAIL == FTL_Cache_Write_All(buf,
+			blk_addr + (page_addr - old_blk_addr))) {
+			wResult = FAIL;
+			break;
+		}
+	}
+
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Copy_Block
+* Inputs:       source block address
+*               Destination block address
+* Outputs:      PASS=0 / FAIL=1
+* Description:  used only for static wear leveling to move the block
+*               containing static data to new (more worn) blocks
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int FTL_Copy_Block(u64 old_blk_addr, u64 blk_addr)
+{
+	int i, r1, r2, wResult = PASS;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	for (i = 0; i < DeviceInfo.wPagesPerBlock; i += Cache.wCachePageNum) {
+		r1 = FTL_Cache_Read_All(g_pTempBuf, old_blk_addr +
+					i * DeviceInfo.wPageDataSize);
+		r2 = FTL_Cache_Write_All(g_pTempBuf, blk_addr +
+					i * DeviceInfo.wPageDataSize);
+		if ((ERR == r1) || (FAIL == r2)) {
+			wResult = FAIL;
+			break;
+		}
+	}
+
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Write_Back
+* Inputs:       pointer to data cached in sys memory
+*               address of free block in flash
+* Outputs:      PASS=0 / FAIL=1
+* Description:  writes all the pages of Cache Block to flash
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Cache_Write_Back(u8 *pData, u64 blk_addr)
+{
+	int i, j, iErase;
+	u64 old_page_addr, addr, phy_addr;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 lba;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	old_page_addr = FTL_Get_Physical_Block_Addr(blk_addr) +
+		GLOB_u64_Remainder(blk_addr, 2);
+
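+	/*
+	 * If no spare block is available to replace this one, fall back
+	 * to erasing the current physical block and rewriting it in place.
+	 */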
+	iErase = (FAIL == FTL_Replace_Block(blk_addr)) ? PASS : FAIL;
+
+	pbt[BLK_FROM_ADDR(blk_addr)] &= (~SPARE_BLOCK);
+
+#if CMD_DMA
+	p_BTableChangesDelta = (struct BTableChangesDelta *)g_pBTDelta_Free;
+	g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+	p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+	p_BTableChangesDelta->BT_Index = (u32)(blk_addr >>
+		DeviceInfo.nBitsInBlockDataSize);
+	p_BTableChangesDelta->BT_Entry_Value =
+		pbt[(u32)(blk_addr >> DeviceInfo.nBitsInBlockDataSize)];
+	p_BTableChangesDelta->ValidFields = 0x0C;
+#endif
+
+	if (IN_PROGRESS_BLOCK_TABLE != g_cBlockTableStatus) {
+		g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+		FTL_Write_IN_Progress_Block_Table_Page();
+	}
+
+	for (i = 0; i < RETRY_TIMES; i++) {
+		if (PASS == iErase) {
+			phy_addr = FTL_Get_Physical_Block_Addr(blk_addr);
+			if (FAIL == GLOB_FTL_Block_Erase(phy_addr)) {
+				lba = BLK_FROM_ADDR(blk_addr);
+				MARK_BLOCK_AS_BAD(pbt[lba]);
+				i = RETRY_TIMES;
+				break;
+			}
+		}
+
+		for (j = 0; j < CACHE_BLOCK_NUMBER; j++) {
+			addr = Cache.ItemArray[j].dwAddress;
+			if ((addr <= blk_addr) &&
+				((addr + Cache.dwCacheDataSize) > blk_addr))
+				cache_block_to_write = j;
+		}
+
+		phy_addr = FTL_Get_Physical_Block_Addr(blk_addr);
+		if (PASS == FTL_Cache_Update_Block(pData,
+					old_page_addr, phy_addr)) {
+			cache_block_to_write = UNHIT_BLOCK;
+			break;
+		} else {
+			iErase = PASS;
+		}
+	}
+
+	if (i >= RETRY_TIMES) {
+		if (ERR == FTL_Flash_Error_Handle(pData,
+					old_page_addr, blk_addr))
+			return ERR;
+		else
+			return FAIL;
+	}
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Write_Page
+* Inputs:       Pointer to buffer, page address, cache block number
+* Outputs:      PASS=0 / FAIL=1
+* Description:  It writes the data into the cache block
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static void FTL_Cache_Write_Page(u8 *pData, u64 page_addr,
+				u8 cache_blk, u16 flag)
+{
+	u8 *pDest;
+	u64 addr;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	addr = Cache.ItemArray[cache_blk].dwAddress;
+	pDest = Cache.ItemArray[cache_blk].pContent;
+
+	pDest += (unsigned long)(page_addr - addr);
+	Cache.ItemArray[cache_blk].bChanged = SET;
+#if CMD_DMA
+	GLOB_LLD_MemCopy_CMD(FTLCommandCount, pDest, pData,
+			DeviceInfo.wPageDataSize, flag);
+	FTLCommandCount++;
+#else
+	memcpy(pDest, pData, DeviceInfo.wPageDataSize);
+#endif
+	if (Cache.ItemArray[cache_blk].bLRUCount < MAX_BYTE_VALUE)
+		Cache.ItemArray[cache_blk].bLRUCount++;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Write
+* Inputs:       none
+* Outputs:      PASS=0 / FAIL=1
+* Description:  It writes the least recently used cache block to flash if
+*               it has been changed
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Cache_Write(void)
+{
+	int i, bResult = PASS;
+	u8 bNO, least_count = 0xFF;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	FTL_Calculate_LRU();
+
+	bNO = Cache.bLRU;
+	nand_dbg_print(NAND_DBG_DEBUG, "FTL_Cache_Write: "
+		"Least used cache block is %d\n", bNO);
+
+	if (SET == Cache.ItemArray[bNO].bChanged) {
+		nand_dbg_print(NAND_DBG_DEBUG, "FTL_Cache_Write: Cache"
+			" Block %d containing logical block %d is dirty\n",
+			bNO,
+			(u32)(Cache.ItemArray[bNO].dwAddress >>
+			DeviceInfo.nBitsInBlockDataSize));
+		bResult = FTL_Cache_Write_Back(Cache.ItemArray[bNO].pContent,
+				Cache.ItemArray[bNO].dwAddress);
+		if (bResult != ERR)
+			Cache.ItemArray[bNO].bChanged = CLEAR;
+
+		least_count = Cache.ItemArray[bNO].bLRUCount;
+
+		for (i = 0; i < CACHE_BLOCK_NUMBER; i++) {
+			if (i == bNO)
+				continue;
+			if (Cache.ItemArray[i].bLRUCount > 0)
+				Cache.ItemArray[i].bLRUCount -= least_count;
+		}
+	}
+
+	return bResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Cache_Read
+* Inputs:       Page address
+* Outputs:      PASS=0 / FAIL=1
+* Description:  It reads the block from the device into a cache block,
+*               sets the LRU count to 1,
+*               and marks the cache block as clean
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Cache_Read(u64 page_addr)
+{
+	u64 addr;
+	u8 bNO = Cache.bLRU;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
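+	/* Round the page address down to a cache block boundary. */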
+	addr = (u64)GLOB_u64_Div(page_addr, Cache.dwCacheDataSize)
+		* Cache.dwCacheDataSize;
+	Cache.ItemArray[bNO].bLRUCount = 1;
+	Cache.ItemArray[bNO].dwAddress = addr;
+	Cache.ItemArray[bNO].bChanged = CLEAR;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "FTL_Cache_Read: Logical Block %d "
+		"is read into cache block no. %d\n",
+		(u32)GLOB_u64_Div(Cache.ItemArray[bNO].dwAddress,
+			Cache.dwCacheDataSize),
+		bNO);
+
+	return FTL_Cache_Read_All(Cache.ItemArray[bNO].pContent,
+		FTL_Get_Physical_Block_Addr(addr) +
+		GLOB_u64_Remainder(addr, 2));
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Check_Block_Table
+* Inputs:       wOldTable flag (currently unused)
+* Outputs:      PASS=0 / FAIL=1
+* Description:  It checks the correctness of each block table entry
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Check_Block_Table(int wOldTable)
+{
+	u32 i;
+	int wResult = PASS;
+	u32 blk_idx;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	u8 *pFlag = g_pMemPoolFree;
+	g_pMemPoolFree += (DeviceInfo.wDataBlockNum);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	debug_boundary_error((int)(g_pMemPoolFree - g_pMemPool) - 1,
+		globalMemSize, 0);
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (NULL != pFlag) {
+		memset(pFlag, FAIL, DeviceInfo.wDataBlockNum);
+		for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+			blk_idx = (u32)(pbt[i] & (~BAD_BLOCK));
+
+			/*
+			 * 20081006/KBV - Changed to pFlag[i] reference
+			 * to avoid buffer overflow
+			 */
+
+			/*
+			 * 2008-10-20 Yunpeng Note: This change avoid
+			 * buffer overflow, but changed function of
+			 * the code, so it should be re-write later
+			 */
+			if ((blk_idx > DeviceInfo.wSpectraEndBlock) ||
+				PASS == pFlag[i]) {
+				wResult = FAIL;
+				break;
+			} else {
+				pFlag[i] = PASS;
+			}
+		}
+		g_pMemPoolFree -= (DeviceInfo.wDataBlockNum);
+		ALIGN_DWORD_BWD(g_pMemPoolFree);
+	}
+	return wResult;
+}
+
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Write_Block_Table
+* Inputs:       wForce flag
+* Outputs:      0=Block table not in progress, no write done. 1=Block table
+*               was written. ERR=Error
+* Description:  It writes the block table
+*               The block table is always mapped to LBA 0, which in turn is
+*               mapped to any physical block
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Write_Block_Table(int wForce)
+{
+	u32 *pbt = (u32 *)g_pBlockTable;
+	int wSuccess = PASS;
+	u32 wTempBlockTableIndex;
+	u16 bt_pages, new_bt_offset;
+	u8 blockchangeoccured = 0;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	bt_pages = FTL_Get_Block_Table_Flash_Size_Pages();
+
+	if (IN_PROGRESS_BLOCK_TABLE != g_cBlockTableStatus)
+		return 0;
+
+	if (PASS == wForce) {
+		g_wBlockTableOffset =
+			(u16)(DeviceInfo.wPagesPerBlock - bt_pages);
+#if CMD_DMA
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+		p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+		p_BTableChangesDelta->g_wBlockTableOffset =
+			g_wBlockTableOffset;
+		p_BTableChangesDelta->ValidFields = 0x01;
+#endif
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"Inside FTL_Write_Block_Table: block %d Page:%d\n",
+		g_wBlockTableIndex, g_wBlockTableOffset);
+
+	do {
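+		/*
+		 * If the next block table image would hit or cross the
+		 * block boundary, or the previous write attempt failed,
+		 * relocate the block table to a fresh block first.
+		 */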
+		new_bt_offset = g_wBlockTableOffset + bt_pages + 1;
+		if ((0 == (new_bt_offset % DeviceInfo.wPagesPerBlock)) ||
+			(new_bt_offset > DeviceInfo.wPagesPerBlock) ||
+			(FAIL == wSuccess)) {
+			wTempBlockTableIndex = FTL_Replace_Block_Table();
+			if (BAD_BLOCK == wTempBlockTableIndex)
+				return ERR;
+			if (!blockchangeoccured) {
+				bt_block_changed = 1;
+				blockchangeoccured = 1;
+			}
+
+			g_wBlockTableIndex = wTempBlockTableIndex;
+			g_wBlockTableOffset = 0;
+			pbt[BLOCK_TABLE_INDEX] = g_wBlockTableIndex;
+#if CMD_DMA
+			p_BTableChangesDelta =
+				(struct BTableChangesDelta *)g_pBTDelta_Free;
+			g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+			p_BTableChangesDelta->FTLCommandCount =
+				    FTLCommandCount;
+			p_BTableChangesDelta->g_wBlockTableOffset =
+				    g_wBlockTableOffset;
+			p_BTableChangesDelta->g_wBlockTableIndex =
+				    g_wBlockTableIndex;
+			p_BTableChangesDelta->ValidFields = 0x03;
+
+			p_BTableChangesDelta =
+				(struct BTableChangesDelta *)g_pBTDelta_Free;
+			g_pBTDelta_Free +=
+				sizeof(struct BTableChangesDelta);
+
+			p_BTableChangesDelta->FTLCommandCount =
+				    FTLCommandCount;
+			p_BTableChangesDelta->BT_Index =
+				    BLOCK_TABLE_INDEX;
+			p_BTableChangesDelta->BT_Entry_Value =
+				    pbt[BLOCK_TABLE_INDEX];
+			p_BTableChangesDelta->ValidFields = 0x0C;
+#endif
+		}
+
+		wSuccess = FTL_Write_Block_Table_Data();
+		if (FAIL == wSuccess)
+			MARK_BLOCK_AS_BAD(pbt[BLOCK_TABLE_INDEX]);
+	} while (FAIL == wSuccess);
+
+	g_cBlockTableStatus = CURRENT_BLOCK_TABLE;
+
+	return 1;
+}
+
+/******************************************************************
+* Function:     FTL_Format_Flash
+* Inputs:       none
+* Outputs:      PASS
+* Description:  The block table stores bad block info, including the
+*               MDF plus blocks gone bad over the ages. Therefore, if
+*               we have a block table in place, use it to scan for bad
+*               blocks. If not, scan for the MDF.
+*               Now, a block table will only be found if Spectra was
+*               already being used. For a fresh flash, we'll go through
+*               scanning for the MDF. If Spectra was being used, there
+*               is a chance that the MDF has been corrupted. Spectra
+*               avoids writing to the first 2 bytes of the spare area
+*               of all pages in a block. This covers all known flash
+*               devices. However, since flash manufacturers have no
+*               standard for where the MDF is stored, this cannot
+*               guarantee that the MDF is protected for future devices
+*               too. The initial scanning for the block table assures
+*               this. It is OK even if the block table is outdated, as
+*               all we're looking for are bad block markers.
+*               Use this when mounting a file system or starting a
+*               new flash.
+*
+*********************************************************************/
+static int  FTL_Format_Flash(u8 valid_block_table)
+{
+	u32 i, j;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 tempNode;
+	int ret;
+
+#if CMD_DMA
+	u32 *pbtStartingCopy = (u32 *)g_pBTStartingCopy;
+	if (FTLCommandCount)
+		return FAIL;
+#endif
+
+	if (FAIL == FTL_Check_Block_Table(FAIL))
+		valid_block_table = 0;
+
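+	/*
+	 * Sort the block table entries into physical block order; if
+	 * the sort has not converged after a full pass per block, the
+	 * table is inconsistent and is treated as invalid.
+	 */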
+	if (valid_block_table) {
+		u8 switched = 1;
+		u32 block, k;
+
+		k = DeviceInfo.wSpectraStartBlock;
+		while (switched && (k < DeviceInfo.wSpectraEndBlock)) {
+			switched = 0;
+			k++;
+			for (j = DeviceInfo.wSpectraStartBlock, i = 0;
+			j <= DeviceInfo.wSpectraEndBlock;
+			j++, i++) {
+				block = (pbt[i] & ~BAD_BLOCK) -
+					DeviceInfo.wSpectraStartBlock;
+				if (block != i) {
+					switched = 1;
+					tempNode = pbt[i];
+					pbt[i] = pbt[block];
+					pbt[block] = tempNode;
+				}
+			}
+		}
+		if ((k == DeviceInfo.wSpectraEndBlock) && switched)
+			valid_block_table = 0;
+	}
+
+	if (!valid_block_table) {
+		memset(g_pBlockTable, 0,
+			DeviceInfo.wDataBlockNum * sizeof(u32));
+		memset(g_pWearCounter, 0,
+			DeviceInfo.wDataBlockNum * sizeof(u8));
+		if (DeviceInfo.MLCDevice)
+			memset(g_pReadCounter, 0,
+				DeviceInfo.wDataBlockNum * sizeof(u16));
+#if CMD_DMA
+		memset(g_pBTStartingCopy, 0,
+			DeviceInfo.wDataBlockNum * sizeof(u32));
+		memset(g_pWearCounterCopy, 0,
+				DeviceInfo.wDataBlockNum * sizeof(u8));
+		if (DeviceInfo.MLCDevice)
+			memset(g_pReadCounterCopy, 0,
+				DeviceInfo.wDataBlockNum * sizeof(u16));
+#endif
+
+#if READ_BADBLOCK_INFO
+		for (j = DeviceInfo.wSpectraStartBlock, i = 0;
+			j <= DeviceInfo.wSpectraEndBlock;
+			j++, i++) {
+			if (GLOB_LLD_Get_Bad_Block((u32)j))
+				pbt[i] = (u32)(BAD_BLOCK | j);
+		}
+#endif
+	}
+
+	nand_dbg_print(NAND_DBG_WARN, "Erasing all blocks in the NAND\n");
+
+	for (j = DeviceInfo.wSpectraStartBlock, i = 0;
+		j <= DeviceInfo.wSpectraEndBlock;
+		j++, i++) {
+		if ((pbt[i] & BAD_BLOCK) != BAD_BLOCK) {
+#if CMD_DMA
+			ret = GLOB_LLD_Erase_Block(j, FTLCommandCount,
+						LLD_CMD_FLAG_MODE_POLL);
+#else
+			ret = GLOB_LLD_Erase_Block(j);
+#endif
+			if (FAIL == ret) {
+				pbt[i] = (u32)(j);
+				MARK_BLOCK_AS_BAD(pbt[i]);
+				nand_dbg_print(NAND_DBG_WARN,
+			       "NAND Erase fail in %s, Line %d, "
+			       "Function: %s, new Bad Block %d generated!\n",
+			       __FILE__, __LINE__, __func__, (int)j);
+			} else {
+				pbt[i] = (u32)(SPARE_BLOCK | j);
+			}
+		}
+#if CMD_DMA
+		pbtStartingCopy[i] = pbt[i];
+#endif
+	}
+
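+	/*
+	 * Skip leading bad blocks; the first good block found will
+	 * hold the block table.
+	 */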
+	g_wBlockTableOffset = 0;
+	for (i = 0; (i <= (DeviceInfo.wSpectraEndBlock -
+			DeviceInfo.wSpectraStartBlock))
+			&& ((pbt[i] & BAD_BLOCK) == BAD_BLOCK); i++)
+		;
+	if (i > (DeviceInfo.wSpectraEndBlock - DeviceInfo.wSpectraStartBlock)) {
+		printk(KERN_ERR "All blocks bad!\n");
+		return FAIL;
+	} else {
+		g_wBlockTableIndex = pbt[i] & ~BAD_BLOCK;
+		if (i != BLOCK_TABLE_INDEX) {
+			tempNode = pbt[i];
+			pbt[i] = pbt[BLOCK_TABLE_INDEX];
+			pbt[BLOCK_TABLE_INDEX] = tempNode;
+		}
+	}
+	pbt[BLOCK_TABLE_INDEX] &= (~SPARE_BLOCK);
+
+#if CMD_DMA
+	pbtStartingCopy[BLOCK_TABLE_INDEX] &= (~SPARE_BLOCK);
+#endif
+
+	g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+	memset(g_pBTBlocks, 0xFF,
+			(1 + LAST_BT_ID - FIRST_BT_ID) * sizeof(u32));
+	g_pBTBlocks[FIRST_BT_ID-FIRST_BT_ID] = g_wBlockTableIndex;
+	FTL_Write_Block_Table(FAIL);
+
+	for (i = 0; i < CACHE_BLOCK_NUMBER; i++) {
+		Cache.ItemArray[i].dwAddress = NAND_CACHE_INIT_ADDR;
+		Cache.ItemArray[i].bLRUCount = 0;
+		Cache.ItemArray[i].bChanged  = CLEAR;
+	}
+
+	return PASS;
+}
+
+int GLOB_FTL_Flash_Format(void)
+{
+	return FTL_Format_Flash(1);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Search_Block_Table_IN_Block
+* Inputs:       Block Number
+*               Pointer to page
+* Outputs:      PASS / FAIL
+*               Page containing the block table
+* Description:  It searches the block table in the block
+*               passed as an argument.
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Search_Block_Table_IN_Block(u32 BT_Block,
+						u8 BT_Tag, u16 *Page)
+{
+	u16 i, j, k;
+	u16 Result = PASS;
+	u16 Last_IPF = 0;
+	u8   BT_Found = 0;
+	u8 *tempbuf, *tagarray;
+	u8 *pSpareBuf;
+	u8 *pSpareBufBTLastPage;
+	u8 bt_flag_last_page = 0xFF;
+	u8 search_in_previous_pages = 0;
+	u16 bt_pages;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	bt_pages = FTL_Get_Block_Table_Flash_Size_Pages();
+
+	tempbuf = g_pMemPoolFree;
+	g_pMemPoolFree += (DeviceInfo.wPageDataSize*sizeof(u8));
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	pSpareBuf = g_pMemPoolFree;
+	g_pMemPoolFree += (DeviceInfo.wPageSize - DeviceInfo.wPageDataSize) *
+	    sizeof(u8);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	pSpareBufBTLastPage = g_pMemPoolFree;
+	g_pMemPoolFree += (DeviceInfo.wPageSize - DeviceInfo.wPageDataSize) *
+	    sizeof(u8);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	debug_boundary_error(((int)g_pMemPoolFree - (int)g_pMemPool) - 1,
+		globalMemSize, 0);
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		       "FTL_Search_Block_Table_IN_Block: "
+		       "Searching block table in block %u\n",
+		       (unsigned int)BT_Block);
+
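+	/*
+	 * Walk the block in strides of (bt_pages + 1) pages, looking
+	 * for the last in-progress flag (IPF) page written before the
+	 * most recent copy of the block table.
+	 */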
+	for (i = bt_pages; i < DeviceInfo.wPagesPerBlock;
+				i += (bt_pages + 1)) {
+#if CMD_DMA
+		nand_dbg_print(NAND_DBG_DEBUG,
+			       "Searching last IPF: %d\n", i);
+		Result = GLOB_LLD_Read_Page_Main(tempbuf,
+			BT_Block, i, 1, FTLCommandCount,
+			LLD_CMD_FLAG_MODE_POLL);
+#else
+		nand_dbg_print(NAND_DBG_DEBUG,
+			       "Searching last IPF: %d\n", i);
+		Result = GLOB_LLD_Read_Page_Main_Polling(tempbuf,
+							BT_Block, i, 1);
+#endif
+		if (0 == memcmp(tempbuf, g_pIPF, DeviceInfo.wPageDataSize)) {
+			if ((i + bt_pages + 1) < DeviceInfo.wPagesPerBlock) {
+				continue;
+			} else {
+				search_in_previous_pages = 1;
+				Last_IPF = i;
+			}
+		}
+
+		if (!search_in_previous_pages) {
+			if (i != bt_pages) {
+				i -= (bt_pages + 1);
+				Last_IPF = i;
+			}
+		}
+
+		if (0 == Last_IPF)
+			break;
+
+		if (!search_in_previous_pages) {
+			i = i + 1;
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"Reading the spare area of Block %u Page %u",
+				(unsigned int)BT_Block, i);
+			Result = GLOB_LLD_Read_Page_Spare(pSpareBuf,
+							BT_Block, i, 1);
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"Reading the spare area of Block %u Page %u",
+				(unsigned int)BT_Block, i + bt_pages - 1);
+			Result = GLOB_LLD_Read_Page_Spare(pSpareBufBTLastPage,
+				BT_Block, i + bt_pages - 1, 1);
+
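+			/*
+			 * A table copy is only accepted when the spare
+			 * area tags of its first and last pages both
+			 * match BT_Tag.
+			 */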
+			k = 0;
+			j = FTL_Extract_Block_Table_Tag(pSpareBuf, &tagarray);
+			if (j) {
+				for (; k < j; k++) {
+					if (tagarray[k] == BT_Tag)
+						break;
+				}
+			}
+
+			if (k < j)
+				bt_flag = tagarray[k];
+			else
+				Result = FAIL;
+
+			if (Result == PASS) {
+				k = 0;
+				j = FTL_Extract_Block_Table_Tag(
+					pSpareBufBTLastPage, &tagarray);
+				if (j) {
+					for (; k < j; k++) {
+						if (tagarray[k] == BT_Tag)
+							break;
+					}
+				}
+
+				if (k < j)
+					bt_flag_last_page = tagarray[k];
+				else
+					Result = FAIL;
+
+				if (Result == PASS) {
+					if (bt_flag == bt_flag_last_page) {
+						nand_dbg_print(NAND_DBG_DEBUG,
+							"Block table is found"
+							" in page after IPF "
+							"at block %d "
+							"page %d\n",
+							(int)BT_Block, i);
+						BT_Found = 1;
+						*Page  = i;
+						g_cBlockTableStatus =
+							CURRENT_BLOCK_TABLE;
+						break;
+					} else {
+						Result = FAIL;
+					}
+				}
+			}
+		}
+
+		if (search_in_previous_pages)
+			i = i - bt_pages;
+		else
+			i = i - (bt_pages + 1);
+
+		Result = PASS;
+
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Reading the spare area of Block %d Page %d",
+			(int)BT_Block, i);
+
+		Result = GLOB_LLD_Read_Page_Spare(pSpareBuf, BT_Block, i, 1);
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Reading the spare area of Block %u Page %u",
+			(unsigned int)BT_Block, i + bt_pages - 1);
+
+		Result = GLOB_LLD_Read_Page_Spare(pSpareBufBTLastPage,
+					BT_Block, i + bt_pages - 1, 1);
+
+		k = 0;
+		j = FTL_Extract_Block_Table_Tag(pSpareBuf, &tagarray);
+		if (j) {
+			for (; k < j; k++) {
+				if (tagarray[k] == BT_Tag)
+					break;
+			}
+		}
+
+		if (k < j)
+			bt_flag = tagarray[k];
+		else
+			Result = FAIL;
+
+		if (Result == PASS) {
+			k = 0;
+			j = FTL_Extract_Block_Table_Tag(pSpareBufBTLastPage,
+						&tagarray);
+			if (j) {
+				for (; k < j; k++) {
+					if (tagarray[k] == BT_Tag)
+						break;
+				}
+			}
+
+			if (k < j) {
+				bt_flag_last_page = tagarray[k];
+			} else {
+				Result = FAIL;
+				break;
+			}
+
+			if (Result == PASS) {
+				if (bt_flag == bt_flag_last_page) {
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"Block table is found "
+						"in page prior to IPF "
+						"at block %u page %d\n",
+						(unsigned int)BT_Block, i);
+					BT_Found = 1;
+					*Page  = i;
+					g_cBlockTableStatus =
+						IN_PROGRESS_BLOCK_TABLE;
+					break;
+				} else {
+					Result = FAIL;
+					break;
+				}
+			}
+		}
+	}
+
+	if (Result == FAIL) {
+		if ((Last_IPF > bt_pages) && (i < Last_IPF) && (!BT_Found)) {
+			BT_Found = 1;
+			*Page = i - (bt_pages + 1);
+		}
+		if ((Last_IPF == bt_pages) && (i < Last_IPF) && (!BT_Found))
+			goto func_return;
+	}
+
+	if (Last_IPF == 0) {
+		i = 0;
+		Result = PASS;
+		nand_dbg_print(NAND_DBG_DEBUG, "Reading the spare area of "
+			"Block %u Page %u", (unsigned int)BT_Block, i);
+
+		Result = GLOB_LLD_Read_Page_Spare(pSpareBuf, BT_Block, i, 1);
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Reading the spare area of Block %u Page %u",
+			(unsigned int)BT_Block, i + bt_pages - 1);
+		Result = GLOB_LLD_Read_Page_Spare(pSpareBufBTLastPage,
+					BT_Block, i + bt_pages - 1, 1);
+
+		k = 0;
+		j = FTL_Extract_Block_Table_Tag(pSpareBuf, &tagarray);
+		if (j) {
+			for (; k < j; k++) {
+				if (tagarray[k] == BT_Tag)
+					break;
+			}
+		}
+
+		if (k < j)
+			bt_flag = tagarray[k];
+		else
+			Result = FAIL;
+
+		if (Result == PASS) {
+			k = 0;
+			j = FTL_Extract_Block_Table_Tag(pSpareBufBTLastPage,
+							&tagarray);
+			if (j) {
+				for (; k < j; k++) {
+					if (tagarray[k] == BT_Tag)
+						break;
+				}
+			}
+
+			if (k < j)
+				bt_flag_last_page = tagarray[k];
+			else
+				Result = FAIL;
+
+			if (Result == PASS) {
+				if (bt_flag == bt_flag_last_page) {
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"Block table is found "
+						"in page after IPF at "
+						"block %u page %u\n",
+						(unsigned int)BT_Block,
+						(unsigned int)i);
+					BT_Found = 1;
+					*Page  = i;
+					g_cBlockTableStatus =
+						CURRENT_BLOCK_TABLE;
+					goto func_return;
+				} else {
+					Result = FAIL;
+				}
+			}
+		}
+
+		if (Result == FAIL)
+			goto func_return;
+	}
+func_return:
+	g_pMemPoolFree -= ((DeviceInfo.wPageSize - DeviceInfo.wPageDataSize) *
+							sizeof(u8));
+	ALIGN_DWORD_BWD(g_pMemPoolFree);
+
+	g_pMemPoolFree -= ((DeviceInfo.wPageSize - DeviceInfo.wPageDataSize) *
+							sizeof(u8));
+	ALIGN_DWORD_BWD(g_pMemPoolFree);
+
+	g_pMemPoolFree -= ((DeviceInfo.wPageDataSize * sizeof(u8)));
+	ALIGN_DWORD_BWD(g_pMemPoolFree);
+
+	return Result;
+}
+
+u8 *get_blk_table_start_addr(void)
+{
+	return g_pBlockTable;
+}
+
+unsigned long get_blk_table_len(void)
+{
+	return DeviceInfo.wDataBlockNum * sizeof(u32);
+}
+
+u8 *get_wear_leveling_table_start_addr(void)
+{
+	return g_pWearCounter;
+}
+
+unsigned long get_wear_leveling_table_len(void)
+{
+	return DeviceInfo.wDataBlockNum * sizeof(u8);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Read_Block_Table
+* Inputs:       none
+* Outputs:      PASS / FAIL
+* Description:  read the flash spare area and find a block containing
+*               the most recent block table (i.e. having the largest
+*               block_table_counter).
+*               Find the last written Block table in this block.
+*               Check the correctness of Block Table
+*               If CDMA is enabled, this function is called in
+*               polling mode.
+*               We don't need to store changes in Block table in this
+*               function as it is called only at initialization
+*
+*               Note: Currently this function is called at initialization
+*               before any read/erase/write command issued to flash so,
+*               there is no need to wait for CDMA list to complete as of now
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Read_Block_Table(void)
+{
+	int k;
+	u16 i;
+	int j;
+	u8 *tempBuf, *tagarray;
+	int wResult = FAIL;
+	int status = FAIL;
+	u8 block_table_found = 0;
+	int search_result;
+	u32 Block;
+	u16 Page = 0;
+	u16 PageCount;
+	u16 bt_pages;
+	int wBytesCopied = 0, tempvar;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	bt_pages = FTL_Get_Block_Table_Flash_Size_Pages();
+
+	tempBuf = g_pMemPoolFree;
+	g_pMemPoolFree += DeviceInfo.wPageDataSize * sizeof(u8);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	debug_boundary_error(((int)g_pMemPoolFree - (int)g_pMemPool) - 1,
+		globalMemSize, 0);
+
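+	/*
+	 * Scan the spare area of page 0 of every Spectra block for a
+	 * block table tag and record where each tagged copy lives.
+	 */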
+	for (j = DeviceInfo.wSpectraStartBlock;
+		j <= (int)DeviceInfo.wSpectraEndBlock;
+			j++) {
+		status = GLOB_LLD_Read_Page_Spare(tempBuf, j, 0, 1);
+		k = 0;
+		i = FTL_Extract_Block_Table_Tag(tempBuf, &tagarray);
+		if (i) {
+#if CMD_DMA
+			status = GLOB_LLD_Read_Page_Main(tempBuf, j, 0, 1,
+				FTLCommandCount, LLD_CMD_FLAG_MODE_POLL);
+#else
+			status  = GLOB_LLD_Read_Page_Main_Polling(tempBuf,
+								j, 0, 1);
+#endif
+			for (; k < i; k++) {
+				if (tagarray[k] == tempBuf[3])
+					break;
+			}
+		}
+
+		if (k < i)
+			k = tagarray[k];
+		else
+			continue;
+
+		nand_dbg_print(NAND_DBG_DEBUG,
+				"Block table is contained in block %u (tag %u)\n",
+				       (unsigned int)j, (unsigned int)k);
+
+		if (g_pBTBlocks[k-FIRST_BT_ID] == BTBLOCK_INVAL) {
+			g_pBTBlocks[k-FIRST_BT_ID] = j;
+			block_table_found = 1;
+		} else {
+			printk(KERN_ERR "FTL_Read_Block_Table - "
+				"This should never happen. "
+				"Two block tables have the same counter %u!\n", k);
+		}
+	}
+
+	g_pMemPoolFree -= DeviceInfo.wPageDataSize * sizeof(u8);
+	ALIGN_DWORD_BWD(g_pMemPoolFree);
+
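+	/*
+	 * The block table ID is a wrapping counter, so work out which
+	 * copy was erased last and, from that, which surviving copy is
+	 * the most recent one.
+	 */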
+	if (block_table_found) {
+		if (g_pBTBlocks[FIRST_BT_ID - FIRST_BT_ID] != BTBLOCK_INVAL &&
+		g_pBTBlocks[LAST_BT_ID - FIRST_BT_ID] != BTBLOCK_INVAL) {
+			j = LAST_BT_ID;
+			while ((j > FIRST_BT_ID) &&
+			(g_pBTBlocks[j - FIRST_BT_ID] != BTBLOCK_INVAL))
+				j--;
+			if (j == FIRST_BT_ID) {
+				j = LAST_BT_ID;
+				last_erased = LAST_BT_ID;
+			} else {
+				last_erased = (u8)j + 1;
+				while ((j > FIRST_BT_ID) && (BTBLOCK_INVAL ==
+					g_pBTBlocks[j - FIRST_BT_ID]))
+					j--;
+			}
+		} else {
+			j = FIRST_BT_ID;
+			while (g_pBTBlocks[j - FIRST_BT_ID] == BTBLOCK_INVAL)
+				j++;
+			last_erased = (u8)j;
+			while ((j < LAST_BT_ID) && (BTBLOCK_INVAL !=
+				g_pBTBlocks[j - FIRST_BT_ID]))
+				j++;
+			if (g_pBTBlocks[j-FIRST_BT_ID] == BTBLOCK_INVAL)
+				j--;
+		}
+
+		if (last_erased > j)
+			j += (1 + LAST_BT_ID - FIRST_BT_ID);
+
+		for (; (j >= last_erased) && (FAIL == wResult); j--) {
+			i = (j - FIRST_BT_ID) %
+				(1 + LAST_BT_ID - FIRST_BT_ID);
+			search_result =
+			FTL_Search_Block_Table_IN_Block(g_pBTBlocks[i],
+						i + FIRST_BT_ID, &Page);
+			if (g_cBlockTableStatus == IN_PROGRESS_BLOCK_TABLE)
+				block_table_found = 0;
+
+			while ((search_result == PASS) && (FAIL == wResult)) {
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"FTL_Read_Block_Table:"
+					"Block: %u Page: %u "
+					"contains block table\n",
+					(unsigned int)g_pBTBlocks[i],
+					(unsigned int)Page);
+
+				tempBuf = g_pMemPoolFree;
+				g_pMemPoolFree += DeviceInfo.wPageDataSize *
+						sizeof(u8);
+				ALIGN_DWORD_FWD(g_pMemPoolFree);
+				debug_boundary_error(((int)g_pMemPoolFree -
+					(int)g_pMemPool) - 1,
+					globalMemSize, 0);
+
+				for (k = 0; k < bt_pages; k++) {
+					Block = g_pBTBlocks[i];
+					PageCount = 1;
+#if CMD_DMA
+					status  = GLOB_LLD_Read_Page_Main(
+					tempBuf, Block, Page, PageCount,
+					FTLCommandCount,
+					LLD_CMD_FLAG_MODE_POLL);
+#else
+					status  =
+					GLOB_LLD_Read_Page_Main_Polling(
+					tempBuf, Block, Page, PageCount);
+#endif
+					tempvar = k ? 0 : 4;
+
+					wBytesCopied +=
+					FTL_Copy_Block_Table_From_Flash(
+					tempBuf + tempvar,
+					DeviceInfo.wPageDataSize - tempvar,
+					wBytesCopied);
+
+					Page++;
+				}
+
+				g_pMemPoolFree -= DeviceInfo.wPageDataSize *
+							sizeof(u8);
+				ALIGN_DWORD_BWD(g_pMemPoolFree);
+
+				wResult = FTL_Check_Block_Table(FAIL);
+				if (FAIL == wResult) {
+					block_table_found = 0;
+					if (Page > bt_pages)
+						Page -= ((bt_pages<<1) + 1);
+					else
+						search_result = FAIL;
+				}
+			}
+		}
+	}
+
+	if (PASS == wResult) {
+		if (!block_table_found)
+			FTL_Execute_SPL_Recovery();
+
+		if (g_cBlockTableStatus == IN_PROGRESS_BLOCK_TABLE)
+			g_wBlockTableOffset = (u16)Page + 1;
+		else
+			g_wBlockTableOffset = (u16)Page - bt_pages;
+
+		g_wBlockTableIndex = (u32)g_pBTBlocks[i];
+
+#if CMD_DMA
+		if (DeviceInfo.MLCDevice)
+			memcpy(g_pBTStartingCopy, g_pBlockTable,
+				DeviceInfo.wDataBlockNum * sizeof(u32)
+				+ DeviceInfo.wDataBlockNum * sizeof(u8)
+				+ DeviceInfo.wDataBlockNum * sizeof(u16));
+		else
+			memcpy(g_pBTStartingCopy, g_pBlockTable,
+				DeviceInfo.wDataBlockNum * sizeof(u32)
+				+ DeviceInfo.wDataBlockNum * sizeof(u8));
+#endif
+	}
+
+	if (FAIL == wResult)
+		printk(KERN_ERR "Yunpeng - "
+		"Cannot find valid Spectra block table!\n");
+
+#if CMD_DMA
+	GLOB_LLD_Flash_Init(LLD_CMD_FLAG_MODE_CDMA);
+#endif
+
+#if AUTO_FORMAT_FLASH
+	if (FAIL == wResult) {
+		nand_dbg_print(NAND_DBG_DEBUG, "doing auto-format\n");
+		wResult = FTL_Format_Flash(0);
+	}
+#endif
+
+	return wResult;
+}
+
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Flash_Error_Handle
+* Inputs:       Pointer to data
+*               Page address
+*               Block address
+* Outputs:      PASS=0 / FAIL=1
+* Description:  It handles any error that occurred during a Spectra operation
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Flash_Error_Handle(u8 *pData, u64 old_page_addr,
+				u64 blk_addr)
+{
+	u32 i;
+	int j;
+	u32 tmp_node, blk_node = BLK_FROM_ADDR(blk_addr);
+	u64 phy_addr;
+	int wErase = FAIL;
+	int wResult = FAIL;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (ERR == GLOB_FTL_Garbage_Collection())
+		return ERR;
+
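+	/*
+	 * Swap the failing block with a spare one, then retry the
+	 * write-back; on a retry the replacement block is erased
+	 * before being rewritten.
+	 */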
+	do {
+		for (i = DeviceInfo.wSpectraEndBlock -
+			DeviceInfo.wSpectraStartBlock;
+					i > 0; i--) {
+			if (IS_SPARE_BLOCK(i)) {
+				tmp_node = (u32)(BAD_BLOCK |
+					pbt[blk_node]);
+				pbt[blk_node] = (u32)(pbt[i] &
+					(~SPARE_BLOCK));
+				pbt[i] = tmp_node;
+#if CMD_DMA
+				p_BTableChangesDelta =
+				    (struct BTableChangesDelta *)
+				    g_pBTDelta_Free;
+				g_pBTDelta_Free +=
+				    sizeof(struct BTableChangesDelta);
+
+				p_BTableChangesDelta->FTLCommandCount =
+				    FTLCommandCount;
+				p_BTableChangesDelta->BT_Index =
+				    blk_node;
+				p_BTableChangesDelta->BT_Entry_Value =
+				    pbt[blk_node];
+				p_BTableChangesDelta->ValidFields = 0x0C;
+
+				p_BTableChangesDelta =
+				    (struct BTableChangesDelta *)
+				    g_pBTDelta_Free;
+				g_pBTDelta_Free +=
+				    sizeof(struct BTableChangesDelta);
+
+				p_BTableChangesDelta->FTLCommandCount =
+				    FTLCommandCount;
+				p_BTableChangesDelta->BT_Index = i;
+				p_BTableChangesDelta->BT_Entry_Value = pbt[i];
+				p_BTableChangesDelta->ValidFields = 0x0C;
+#endif
+				wResult = PASS;
+				break;
+			}
+		}
+
+		if (FAIL == wResult) {
+			if (FAIL == GLOB_FTL_Garbage_Collection())
+				break;
+			else
+				continue;
+		}
+
+		if (IN_PROGRESS_BLOCK_TABLE != g_cBlockTableStatus) {
+			g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+			FTL_Write_IN_Progress_Block_Table_Page();
+		}
+
+		phy_addr = FTL_Get_Physical_Block_Addr(blk_addr);
+
+		for (j = 0; j < RETRY_TIMES; j++) {
+			if (PASS == wErase) {
+				if (FAIL == GLOB_FTL_Block_Erase(phy_addr)) {
+					MARK_BLOCK_AS_BAD(pbt[blk_node]);
+					break;
+				}
+			}
+			if (PASS == FTL_Cache_Update_Block(pData,
+							   old_page_addr,
+							   phy_addr)) {
+				wResult = PASS;
+				break;
+			} else {
+				wResult = FAIL;
+				wErase = PASS;
+			}
+		}
+	} while (FAIL == wResult);
+
+	FTL_Write_Block_Table(FAIL);
+
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Get_Page_Num
+* Inputs:       Size in bytes
+* Outputs:      Size in pages
+* Description:  It calculates the pages required for the length passed
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u32 FTL_Get_Page_Num(u64 length)
+{
+	return (u32)((length >> DeviceInfo.nBitsInPageDataSize) +
+		(GLOB_u64_Remainder(length, 1) > 0 ? 1 : 0));
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Get_Physical_Block_Addr
+* Inputs:       Block Address (byte format)
+* Outputs:      Physical address of the block.
+* Description:  It translates LBA to PBA by returning address stored
+*               at the LBA location in the block table
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u64 FTL_Get_Physical_Block_Addr(u64 blk_addr)
+{
+	u32 *pbt;
+	u64 physical_addr;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	pbt = (u32 *)g_pBlockTable;
+	physical_addr = (u64) DeviceInfo.wBlockDataSize *
+		(pbt[BLK_FROM_ADDR(blk_addr)] & (~BAD_BLOCK));
+
+	return physical_addr;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Get_Block_Index
+* Inputs:       Physical Block no.
+* Outputs:      Logical block no. /BAD_BLOCK
+* Description:  It returns the logical block no. for the PBA passed
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u32 FTL_Get_Block_Index(u32 wBlockNum)
+{
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++)
+		if (wBlockNum == (pbt[i] & (~BAD_BLOCK)))
+			return i;
+
+	return BAD_BLOCK;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Wear_Leveling
+* Inputs:       none
+* Outputs:      PASS=0
+* Description:  This is static wear leveling (done by explicit call)
+*               do complete static wear leveling
+*               do complete garbage collection
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Wear_Leveling(void)
+{
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	FTL_Static_Wear_Leveling();
+	GLOB_FTL_Garbage_Collection();
+
+	return PASS;
+}
+
+static void find_least_most_worn(u8 *chg,
+	u32 *least_idx, u8 *least_cnt,
+	u32 *most_idx, u8 *most_cnt)
+{
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 idx;
+	u8 cnt;
+	int i;
+
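+	/*
+	 * One pass over the block table: track the most worn spare
+	 * block (candidate target) and the least worn data block
+	 * (candidate source) for wear leveling.
+	 */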
+	for (i = BLOCK_TABLE_INDEX + 1; i < DeviceInfo.wDataBlockNum; i++) {
+		if (IS_BAD_BLOCK(i) || PASS == chg[i])
+			continue;
+
+		idx = (u32) ((~BAD_BLOCK) & pbt[i]);
+		cnt = g_pWearCounter[idx - DeviceInfo.wSpectraStartBlock];
+
+		if (IS_SPARE_BLOCK(i)) {
+			if (cnt > *most_cnt) {
+				*most_cnt = cnt;
+				*most_idx = idx;
+			}
+		}
+
+		if (IS_DATA_BLOCK(i)) {
+			if (cnt < *least_cnt) {
+				*least_cnt = cnt;
+				*least_idx = idx;
+			}
+		}
+
+		if (PASS == chg[*most_idx] || PASS == chg[*least_idx]) {
+			debug_boundary_error(*most_idx,
+				DeviceInfo.wDataBlockNum, 0);
+			debug_boundary_error(*least_idx,
+				DeviceInfo.wDataBlockNum, 0);
+			continue;
+		}
+	}
+}
+
+static int move_blks_for_wear_leveling(u8 *chg,
+	u32 *least_idx, u32 *rep_blk_num, int *result)
+{
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 rep_blk;
+	int j, ret_cp_blk, ret_erase;
+	int ret = PASS;
+
+	chg[*least_idx] = PASS;
+	debug_boundary_error(*least_idx, DeviceInfo.wDataBlockNum, 0);
+
+	rep_blk = FTL_Replace_MWBlock();
+	if (rep_blk != BAD_BLOCK) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"More than two spare blocks exist, so do the swap\n");
+		nand_dbg_print(NAND_DBG_DEBUG, "Block Replaced is %d\n",
+				rep_blk);
+
+		chg[rep_blk] = PASS;
+
+		if (IN_PROGRESS_BLOCK_TABLE != g_cBlockTableStatus) {
+			g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+			FTL_Write_IN_Progress_Block_Table_Page();
+		}
+
+		for (j = 0; j < RETRY_TIMES; j++) {
+			ret_cp_blk = FTL_Copy_Block((u64)(*least_idx) *
+				DeviceInfo.wBlockDataSize,
+				(u64)rep_blk * DeviceInfo.wBlockDataSize);
+			if (FAIL == ret_cp_blk) {
+				ret_erase = GLOB_FTL_Block_Erase((u64)rep_blk
+					* DeviceInfo.wBlockDataSize);
+				if (FAIL == ret_erase)
+					MARK_BLOCK_AS_BAD(pbt[rep_blk]);
+			} else {
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"FTL_Copy_Block == OK\n");
+				break;
+			}
+		}
+
+		if (j < RETRY_TIMES) {
+			u32 tmp;
+			u32 old_idx = FTL_Get_Block_Index(*least_idx);
+			u32 rep_idx = FTL_Get_Block_Index(rep_blk);
+			tmp = (u32)(DISCARD_BLOCK | pbt[old_idx]);
+			pbt[old_idx] = (u32)((~SPARE_BLOCK) &
+							pbt[rep_idx]);
+			pbt[rep_idx] = tmp;
+#if CMD_DMA
+			p_BTableChangesDelta = (struct BTableChangesDelta *)
+						g_pBTDelta_Free;
+			g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+			p_BTableChangesDelta->FTLCommandCount =
+						FTLCommandCount;
+			p_BTableChangesDelta->BT_Index = old_idx;
+			p_BTableChangesDelta->BT_Entry_Value = pbt[old_idx];
+			p_BTableChangesDelta->ValidFields = 0x0C;
+
+			p_BTableChangesDelta = (struct BTableChangesDelta *)
+						g_pBTDelta_Free;
+			g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+			p_BTableChangesDelta->FTLCommandCount =
+						FTLCommandCount;
+			p_BTableChangesDelta->BT_Index = rep_idx;
+			p_BTableChangesDelta->BT_Entry_Value = pbt[rep_idx];
+			p_BTableChangesDelta->ValidFields = 0x0C;
+#endif
+		} else {
+			pbt[FTL_Get_Block_Index(rep_blk)] |= BAD_BLOCK;
+#if CMD_DMA
+			p_BTableChangesDelta = (struct BTableChangesDelta *)
+						g_pBTDelta_Free;
+			g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+			p_BTableChangesDelta->FTLCommandCount =
+						FTLCommandCount;
+			p_BTableChangesDelta->BT_Index =
+					FTL_Get_Block_Index(rep_blk);
+			p_BTableChangesDelta->BT_Entry_Value =
+					pbt[FTL_Get_Block_Index(rep_blk)];
+			p_BTableChangesDelta->ValidFields = 0x0C;
+#endif
+			*result = FAIL;
+			ret = FAIL;
+		}
+
+		/* Note: increment the counter itself, not the pointer */
+		if (((*rep_blk_num)++) > WEAR_LEVELING_BLOCK_NUM)
+			ret = FAIL;
+	} else {
+		printk(KERN_ERR "Less than 3 spare blocks exist so quit\n");
+		ret = FAIL;
+	}
+
+	return ret;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Static_Wear_Leveling
+* Inputs:       none
+* Outputs:      PASS=0 / FAIL=1
+* Description:  This is static wear leveling (done by explicit call)
+*               search for the most & least used blocks
+*               if difference > GATE:
+*                   update the block table with the exchange
+*                   mark block table in flash as IN_PROGRESS
+*                   copy flash block
+*               the caller should handle GC clean up after calling this function
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int FTL_Static_Wear_Leveling(void)
+{
+	u8 most_worn_cnt;
+	u8 least_worn_cnt;
+	u32 most_worn_idx;
+	u32 least_worn_idx;
+	int result = PASS;
+	int go_on = PASS;
+	u32 replaced_blks = 0;
+	u8 *chang_flag;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	chang_flag = g_pMemPoolFree;
+	g_pMemPoolFree += (DeviceInfo.wDataBlockNum);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	debug_boundary_error(((int)g_pMemPoolFree - (int)g_pMemPool) - 1,
+			     globalMemSize, 0);
+
+	if (!chang_flag)
+		return FAIL;
+
+	memset(chang_flag, FAIL, DeviceInfo.wDataBlockNum);
+	while (go_on == PASS) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"starting static wear leveling\n");
+		most_worn_cnt = 0;
+		least_worn_cnt = 0xFF;
+		least_worn_idx = BLOCK_TABLE_INDEX;
+		most_worn_idx = BLOCK_TABLE_INDEX;
+
+		find_least_most_worn(chang_flag, &least_worn_idx,
+			&least_worn_cnt, &most_worn_idx, &most_worn_cnt);
+
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Used and least worn is block %u, whose count is %u\n",
+			(unsigned int)least_worn_idx,
+			(unsigned int)least_worn_cnt);
+
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Free and most worn is block %u, whose count is %u\n",
+			(unsigned int)most_worn_idx,
+			(unsigned int)most_worn_cnt);
+
+		if ((most_worn_cnt > least_worn_cnt) &&
+			(most_worn_cnt - least_worn_cnt > WEAR_LEVELING_GATE))
+			go_on = move_blks_for_wear_leveling(chang_flag,
+				&least_worn_idx, &replaced_blks, &result);
+	}
+
+	g_pMemPoolFree -= (DeviceInfo.wDataBlockNum);
+	ALIGN_DWORD_BWD(g_pMemPoolFree);
+
+	return result;
+}
+
+#if CMD_DMA
+static int do_garbage_collection(u32 discard_cnt)
+{
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 pba;
+	u8 bt_block_erased = 0;
+	int i, cnt, ret = FAIL;
+	u64 addr;
+
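+	/*
+	 * Erase discarded blocks and return them to the spare pool.
+	 * The loop also stops early so the pending CDMA command list
+	 * stays below 256 entries.
+	 */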
+	i = 0;
+	while ((i < DeviceInfo.wDataBlockNum) && (discard_cnt > 0) &&
+			((FTLCommandCount + 28) < 256)) {
+		if (((pbt[i] & BAD_BLOCK) != BAD_BLOCK) &&
+				(pbt[i] & DISCARD_BLOCK)) {
+			if (IN_PROGRESS_BLOCK_TABLE != g_cBlockTableStatus) {
+				g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+				FTL_Write_IN_Progress_Block_Table_Page();
+			}
+
+			addr = FTL_Get_Physical_Block_Addr((u64)i *
+						DeviceInfo.wBlockDataSize);
+			pba = BLK_FROM_ADDR(addr);
+
+			for (cnt = FIRST_BT_ID; cnt <= LAST_BT_ID; cnt++) {
+				if (pba == g_pBTBlocks[cnt - FIRST_BT_ID]) {
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"GC will erase BT block %u\n",
+						(unsigned int)pba);
+					discard_cnt--;
+					i++;
+					bt_block_erased = 1;
+					break;
+				}
+			}
+
+			if (bt_block_erased) {
+				bt_block_erased = 0;
+				continue;
+			}
+
+			addr = FTL_Get_Physical_Block_Addr((u64)i *
+						DeviceInfo.wBlockDataSize);
+
+			if (PASS == GLOB_FTL_Block_Erase(addr)) {
+				pbt[i] &= (u32)(~DISCARD_BLOCK);
+				pbt[i] |= (u32)(SPARE_BLOCK);
+				p_BTableChangesDelta =
+					(struct BTableChangesDelta *)
+					g_pBTDelta_Free;
+				g_pBTDelta_Free +=
+					sizeof(struct BTableChangesDelta);
+				p_BTableChangesDelta->FTLCommandCount =
+					FTLCommandCount - 1;
+				p_BTableChangesDelta->BT_Index = i;
+				p_BTableChangesDelta->BT_Entry_Value = pbt[i];
+				p_BTableChangesDelta->ValidFields = 0x0C;
+				discard_cnt--;
+				ret = PASS;
+			} else {
+				MARK_BLOCK_AS_BAD(pbt[i]);
+			}
+		}
+
+		i++;
+	}
+
+	return ret;
+}
+
+#else
+static int do_garbage_collection(u32 discard_cnt)
+{
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 pba;
+	u8 bt_block_erased = 0;
+	int i, cnt, ret = FAIL;
+	u64 addr;
+
+	i = 0;
+	while ((i < DeviceInfo.wDataBlockNum) && (discard_cnt > 0)) {
+		if (((pbt[i] & BAD_BLOCK) != BAD_BLOCK) &&
+				(pbt[i] & DISCARD_BLOCK)) {
+			if (IN_PROGRESS_BLOCK_TABLE != g_cBlockTableStatus) {
+				g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+				FTL_Write_IN_Progress_Block_Table_Page();
+			}
+
+			addr = FTL_Get_Physical_Block_Addr((u64)i *
+						DeviceInfo.wBlockDataSize);
+			pba = BLK_FROM_ADDR(addr);
+
+			for (cnt = FIRST_BT_ID; cnt <= LAST_BT_ID; cnt++) {
+				if (pba == g_pBTBlocks[cnt - FIRST_BT_ID]) {
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"GC will erase BT block %d\n",
+						pba);
+					discard_cnt--;
+					i++;
+					bt_block_erased = 1;
+					break;
+				}
+			}
+
+			if (bt_block_erased) {
+				bt_block_erased = 0;
+				continue;
+			}
+
+			addr = FTL_Get_Physical_Block_Addr((u64)i *
+						DeviceInfo.wBlockDataSize);
+
+			if (PASS == GLOB_FTL_Block_Erase(addr)) {
+				pbt[i] &= (u32)(~DISCARD_BLOCK);
+				pbt[i] |= (u32)(SPARE_BLOCK);
+				discard_cnt--;
+				ret = PASS;
+			} else {
+				MARK_BLOCK_AS_BAD(pbt[i]);
+			}
+		}
+
+		i++;
+	}
+
+	return ret;
+}
+#endif
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Garbage_Collection
+* Inputs:       none
+* Outputs:      PASS / FAIL
+* Description:  search the block table for all discarded blocks to erase
+*               for each discarded block:
+*                   set the flash block to IN_PROGRESS
+*                   erase the block
+*                   update the block table
+*                   write the block table to flash
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Garbage_Collection(void)
+{
+	u32 i;
+	u32 wDiscard = 0;
+	int wResult = FAIL;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	if (GC_Called) {
+		printk(KERN_ALERT "GLOB_FTL_Garbage_Collection() "
+			"has been re-entered! Exit.\n");
+		return PASS;
+	}
+
+	GC_Called = 1;
+
+	GLOB_FTL_BT_Garbage_Collection();
+
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+		if (IS_DISCARDED_BLOCK(i))
+			wDiscard++;
+	}
+
+	if (0 == wDiscard) {
+		GC_Called = 0;
+		return wResult;
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"Found %d discarded blocks\n", wDiscard);
+
+	FTL_Write_Block_Table(FAIL);
+
+	wResult = do_garbage_collection(wDiscard);
+
+	FTL_Write_Block_Table(FAIL);
+
+	GC_Called = 0;
+
+	return wResult;
+}
+
+
+#if CMD_DMA
+static int do_bt_garbage_collection(void)
+{
+	u32 pba, lba;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 *pBTBlocksNode = (u32 *)g_pBTBlocks;
+	u64 addr;
+	int i, ret = FAIL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	if (BT_GC_Called)
+		return PASS;
+
+	BT_GC_Called = 1;
+
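+	/*
+	 * Reclaim old block table copies, oldest first; a copy is only
+	 * erased while the entry two IDs ahead of it is still valid,
+	 * so newer copies always survive.
+	 */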
+	for (i = last_erased; (i <= LAST_BT_ID) &&
+		(g_pBTBlocks[((i + 2) % (1 + LAST_BT_ID - FIRST_BT_ID)) +
+		FIRST_BT_ID - FIRST_BT_ID] != BTBLOCK_INVAL) &&
+		((FTLCommandCount + 28)) < 256; i++) {
+		pba = pBTBlocksNode[i - FIRST_BT_ID];
+		lba = FTL_Get_Block_Index(pba);
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"do_bt_garbage_collection: pba %d, lba %d\n",
+			pba, lba);
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Block Table Entry: %d", pbt[lba]);
+
+		if (((pbt[lba] & BAD_BLOCK) != BAD_BLOCK) &&
+			(pbt[lba] & DISCARD_BLOCK)) {
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"do_bt_garbage_collection_cdma: "
+				"Erasing Block tables present in block %d\n",
+				pba);
+			addr = FTL_Get_Physical_Block_Addr((u64)lba *
+						DeviceInfo.wBlockDataSize);
+			if (PASS == GLOB_FTL_Block_Erase(addr)) {
+				pbt[lba] &= (u32)(~DISCARD_BLOCK);
+				pbt[lba] |= (u32)(SPARE_BLOCK);
+
+				p_BTableChangesDelta =
+					(struct BTableChangesDelta *)
+					g_pBTDelta_Free;
+				g_pBTDelta_Free +=
+					sizeof(struct BTableChangesDelta);
+
+				p_BTableChangesDelta->FTLCommandCount =
+					FTLCommandCount - 1;
+				p_BTableChangesDelta->BT_Index = lba;
+				p_BTableChangesDelta->BT_Entry_Value =
+								pbt[lba];
+
+				p_BTableChangesDelta->ValidFields = 0x0C;
+
+				ret = PASS;
+				pBTBlocksNode[last_erased - FIRST_BT_ID] =
+							BTBLOCK_INVAL;
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"resetting bt entry at index %d "
+					"value %d\n", i,
+					pBTBlocksNode[i - FIRST_BT_ID]);
+				if (last_erased == LAST_BT_ID)
+					last_erased = FIRST_BT_ID;
+				else
+					last_erased++;
+			} else {
+				MARK_BLOCK_AS_BAD(pbt[lba]);
+			}
+		}
+	}
+
+	BT_GC_Called = 0;
+
+	return ret;
+}
+
+#else
+static int do_bt_garbage_collection(void)
+{
+	u32 pba, lba;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 *pBTBlocksNode = (u32 *)g_pBTBlocks;
+	u64 addr;
+	int i, ret = FAIL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	if (BT_GC_Called)
+		return PASS;
+
+	BT_GC_Called = 1;
+
+	for (i = last_erased; (i <= LAST_BT_ID) &&
+		(g_pBTBlocks[((i + 2) % (1 + LAST_BT_ID - FIRST_BT_ID)) +
+		FIRST_BT_ID - FIRST_BT_ID] != BTBLOCK_INVAL); i++) {
+		pba = pBTBlocksNode[i - FIRST_BT_ID];
+		lba = FTL_Get_Block_Index(pba);
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"do_bt_garbage_collection: pba %d, lba %d\n",
+			pba, lba);
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Block Table Entry: %d", pbt[lba]);
+
+		if (((pbt[lba] & BAD_BLOCK) != BAD_BLOCK) &&
+			(pbt[lba] & DISCARD_BLOCK)) {
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"do_bt_garbage_collection: "
+				"Erasing Block tables present in block %d\n",
+				pba);
+			addr = FTL_Get_Physical_Block_Addr((u64)lba *
+						DeviceInfo.wBlockDataSize);
+			if (PASS == GLOB_FTL_Block_Erase(addr)) {
+				pbt[lba] &= (u32)(~DISCARD_BLOCK);
+				pbt[lba] |= (u32)(SPARE_BLOCK);
+				ret = PASS;
+				pBTBlocksNode[last_erased - FIRST_BT_ID] =
+							BTBLOCK_INVAL;
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"resetting bt entry at index %d "
+					"value %d\n", i,
+					pBTBlocksNode[i - FIRST_BT_ID]);
+				if (last_erased == LAST_BT_ID)
+					last_erased = FIRST_BT_ID;
+				else
+					last_erased++;
+			} else {
+				MARK_BLOCK_AS_BAD(pbt[lba]);
+			}
+		}
+	}
+
+	BT_GC_Called = 0;
+
+	return ret;
+}
+
+#endif
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_BT_Garbage_Collection
+* Inputs:       none
+* Outputs:      PASS / FAIL
+* Description:  Erases discarded blocks containing Block table
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_BT_Garbage_Collection(void)
+{
+	return do_bt_garbage_collection();
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Replace_OneBlock
+* Inputs:       Block number 1
+*               Block number 2
+* Outputs:      Replaced Block Number
+* Description:  Interchange the block table entries at blk and rep_blk
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u32 FTL_Replace_OneBlock(u32 blk, u32 rep_blk)
+{
+	u32 tmp_blk;
+	u32 replace_node = BAD_BLOCK;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	if (rep_blk != BAD_BLOCK) {
+		if (IS_BAD_BLOCK(blk))
+			tmp_blk = (u32)(pbt[blk]);
+		else
+			tmp_blk = (u32)(DISCARD_BLOCK |
+					(~SPARE_BLOCK & pbt[blk]));
+		replace_node = (u32) ((~SPARE_BLOCK) & pbt[rep_blk]);
+		pbt[blk] = replace_node;
+		pbt[rep_blk] = tmp_blk;
+
+#if CMD_DMA
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+		p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+		p_BTableChangesDelta->BT_Index = blk;
+		p_BTableChangesDelta->BT_Entry_Value = pbt[blk];
+
+		p_BTableChangesDelta->ValidFields = 0x0C;
+
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+		p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+		p_BTableChangesDelta->BT_Index = rep_blk;
+		p_BTableChangesDelta->BT_Entry_Value = pbt[rep_blk];
+		p_BTableChangesDelta->ValidFields = 0x0C;
+#endif
+	}
+
+	return replace_node;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Write_Block_Table_Data
+* Inputs:       Block table size in pages
+* Outputs:      PASS=0 / FAIL=1
+* Description:  Write block table data in flash
+*               If first page and last page
+*                  Write data+BT flag
+*               else
+*                  Write data
+*               BT flag is a counter. Its value is incremented for block table
+*               write in a new Block
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Write_Block_Table_Data(void)
+{
+	u64 dwBlockTableAddr, pTempAddr;
+	u32 Block;
+	u16 Page, PageCount;
+	u8 *tempBuf;
+	int wBytesCopied;
+	u16 bt_pages;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	dwBlockTableAddr =
+		(u64)((u64)g_wBlockTableIndex * DeviceInfo.wBlockDataSize +
+		(u64)g_wBlockTableOffset * DeviceInfo.wPageDataSize);
+	pTempAddr = dwBlockTableAddr;
+
+	bt_pages = FTL_Get_Block_Table_Flash_Size_Pages();
+
+	nand_dbg_print(NAND_DBG_DEBUG, "FTL_Write_Block_Table_Data: "
+			       "page= %d BlockTableIndex= %d "
+			       "BlockTableOffset=%d\n", bt_pages,
+			       g_wBlockTableIndex, g_wBlockTableOffset);
+
+	Block = BLK_FROM_ADDR(pTempAddr);
+	Page = PAGE_FROM_ADDR(pTempAddr, Block);
+	PageCount = 1;
+
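+	/*
+	 * When the table has moved to a new block, bump the wrapping
+	 * block table counter (bt_flag) and record the new block; once
+	 * most IDs are in use, trigger block table garbage collection.
+	 */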
+	if (bt_block_changed) {
+		if (bt_flag == LAST_BT_ID) {
+			bt_flag = FIRST_BT_ID;
+			g_pBTBlocks[bt_flag - FIRST_BT_ID] = Block;
+		} else if (bt_flag < LAST_BT_ID) {
+			bt_flag++;
+			g_pBTBlocks[bt_flag - FIRST_BT_ID] = Block;
+		}
+
+		if ((bt_flag > (LAST_BT_ID-4)) &&
+			g_pBTBlocks[FIRST_BT_ID - FIRST_BT_ID] !=
+						BTBLOCK_INVAL) {
+			bt_block_changed = 0;
+			GLOB_FTL_BT_Garbage_Collection();
+		}
+
+		bt_block_changed = 0;
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Block Table Counter is %u Block %u\n",
+			bt_flag, (unsigned int)Block);
+	}
+
+	tempBuf = g_pMemPoolFree;
+	g_pMemPoolFree += (bt_pages > 3) ?
+	    (FTL_Get_Block_Table_Flash_Size_Bytes() -
+	     (DeviceInfo.wPageSize << 1)) : DeviceInfo.wPageSize;
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	debug_boundary_error(((int)g_pMemPoolFree - (int)g_pMemPool) - 1,
+			     globalMemSize, 0);
+
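+	/*
+	 * First page layout: bytes 0-2 are zero, byte 3 carries the
+	 * block table counter, the table data itself starts at byte 4
+	 * and the spare area gets the block table signature.
+	 */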
+	memset(tempBuf, 0, 3);
+	tempBuf[3] = bt_flag;
+	wBytesCopied = FTL_Copy_Block_Table_To_Flash(tempBuf + 4,
+			DeviceInfo.wPageDataSize - 4, 0);
+	memset(&tempBuf[wBytesCopied + 4], 0xff,
+		DeviceInfo.wPageSize - (wBytesCopied + 4));
+	FTL_Insert_Block_Table_Signature(&tempBuf[DeviceInfo.wPageDataSize],
+					bt_flag);
+
+#if CMD_DMA
+	memcpy(g_pNextBlockTable, tempBuf,
+		DeviceInfo.wPageSize * sizeof(u8));
+	nand_dbg_print(NAND_DBG_DEBUG, "Writing First Page of Block Table "
+		"Block %u Page %u\n", (unsigned int)Block, Page);
+	if (FAIL == GLOB_LLD_Write_Page_Main_Spare(g_pNextBlockTable,
+		Block, Page, 1, FTLCommandCount,
+		LLD_CMD_FLAG_MODE_CDMA | LLD_CMD_FLAG_ORDER_BEFORE_REST)) {
+		nand_dbg_print(NAND_DBG_WARN, "NAND Program fail in "
+			"%s, Line %d, Function: %s, "
+			"new Bad Block %d generated!\n",
+			__FILE__, __LINE__, __func__, Block);
+		goto func_return;
+	}
+
+	FTLCommandCount++;
+	g_pNextBlockTable += ((DeviceInfo.wPageSize * sizeof(u8)));
+#else
+	if (FAIL == GLOB_LLD_Write_Page_Main_Spare(tempBuf, Block, Page, 1)) {
+		nand_dbg_print(NAND_DBG_WARN,
+			"NAND Program fail in %s, Line %d, Function: %s, "
+			"new Bad Block %d generated!\n",
+			__FILE__, __LINE__, __func__, Block);
+		goto func_return;
+	}
+#endif
+
+	if (bt_pages > 1) {
+		PageCount = bt_pages - 1;
+		if (PageCount > 1) {
+			wBytesCopied += FTL_Copy_Block_Table_To_Flash(tempBuf,
+				DeviceInfo.wPageDataSize * (PageCount - 1),
+				wBytesCopied);
+
+#if CMD_DMA
+			memcpy(g_pNextBlockTable, tempBuf,
+				(PageCount - 1) * DeviceInfo.wPageDataSize);
+			if (FAIL == GLOB_LLD_Write_Page_Main(
+				g_pNextBlockTable, Block, Page + 1,
+				PageCount - 1, FTLCommandCount)) {
+				nand_dbg_print(NAND_DBG_WARN,
+					"NAND Program fail in %s, Line %d, "
+					"Function: %s, "
+					"new Bad Block %d generated!\n",
+					__FILE__, __LINE__, __func__,
+					(int)Block);
+				goto func_return;
+			}
+
+			FTLCommandCount++;
+			g_pNextBlockTable += (PageCount - 1) *
+				DeviceInfo.wPageDataSize * sizeof(u8);
+#else
+			if (FAIL == GLOB_LLD_Write_Page_Main(tempBuf,
+					Block, Page + 1, PageCount - 1)) {
+				nand_dbg_print(NAND_DBG_WARN,
+					"NAND Program fail in %s, Line %d, "
+					"Function: %s, "
+					"new Bad Block %d generated!\n",
+					__FILE__, __LINE__, __func__,
+					(int)Block);
+				goto func_return;
+			}
+#endif
+		}
+
+		wBytesCopied = FTL_Copy_Block_Table_To_Flash(tempBuf,
+				DeviceInfo.wPageDataSize, wBytesCopied);
+		memset(&tempBuf[wBytesCopied], 0xff,
+			DeviceInfo.wPageSize-wBytesCopied);
+		FTL_Insert_Block_Table_Signature(
+			&tempBuf[DeviceInfo.wPageDataSize], bt_flag);
+#if CMD_DMA
+		memcpy(g_pNextBlockTable, tempBuf,
+				DeviceInfo.wPageSize * sizeof(u8));
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Writing the last Page of Block Table "
+			"Block %u Page %u\n",
+			(unsigned int)Block, Page + bt_pages - 1);
+		if (FAIL == GLOB_LLD_Write_Page_Main_Spare(g_pNextBlockTable,
+			Block, Page + bt_pages - 1, 1, FTLCommandCount,
+			LLD_CMD_FLAG_MODE_CDMA |
+			LLD_CMD_FLAG_ORDER_BEFORE_REST)) {
+			nand_dbg_print(NAND_DBG_WARN,
+				"NAND Program fail in %s, Line %d, "
+				"Function: %s, new Bad Block %d generated!\n",
+				__FILE__, __LINE__, __func__, Block);
+			goto func_return;
+		}
+		FTLCommandCount++;
+#else
+		if (FAIL == GLOB_LLD_Write_Page_Main_Spare(tempBuf,
+					Block, Page+bt_pages - 1, 1)) {
+			nand_dbg_print(NAND_DBG_WARN,
+				"NAND Program fail in %s, Line %d, "
+				"Function: %s, "
+				"new Bad Block %d generated!\n",
+				__FILE__, __LINE__, __func__, Block);
+			goto func_return;
+		}
+#endif
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, "FTL_Write_Block_Table_Data: done\n");
+
+func_return:
+	g_pMemPoolFree -= (bt_pages > 3) ?
+		(FTL_Get_Block_Table_Flash_Size_Bytes() -
+		(DeviceInfo.wPageSize << 1)) : DeviceInfo.wPageSize;
+	ALIGN_DWORD_BWD(g_pMemPoolFree);
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Replace_Block_Table
+* Inputs:       None
+* Outputs:      PASS=0 / FAIL=1
+* Description:  Get a new block to write block table
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u32 FTL_Replace_Block_Table(void)
+{
+	u32 blk;
+	int gc;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	blk = FTL_Replace_LWBlock(BLOCK_TABLE_INDEX, &gc);
+
+	if ((BAD_BLOCK == blk) && (PASS == gc)) {
+		GLOB_FTL_Garbage_Collection();
+		blk = FTL_Replace_LWBlock(BLOCK_TABLE_INDEX, &gc);
+	}
+	if (BAD_BLOCK == blk)
+		printk(KERN_ERR "%s, %s: There is no spare block. "
+			"It should never happen\n",
+			__FILE__, __func__);
+
+	nand_dbg_print(NAND_DBG_DEBUG, "New Block table Block is %d\n", blk);
+
+	return blk;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Replace_LWBlock
+* Inputs:       Block number
+*               Pointer to Garbage Collect flag
+* Outputs:      Block table entry of the replacement block / BAD_BLOCK
+* Description:  Determine the least worn block by traversing the
+*               block table
+*               Set garbage collection to be called if the number of
+*               spare blocks is less than the Free Block Gate count
+*               Change the block table entry to map the least worn
+*               block for the current operation
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u32 FTL_Replace_LWBlock(u32 wBlockNum, int *pGarbageCollect)
+{
+	u32 i;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u8 wLeastWornCounter = 0xFF;
+	u32 wLeastWornIndex = BAD_BLOCK;
+	u32 wSpareBlockNum = 0;
+	u32 wDiscardBlockNum = 0;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	if (IS_SPARE_BLOCK(wBlockNum)) {
+		*pGarbageCollect = FAIL;
+		pbt[wBlockNum] = (u32)(pbt[wBlockNum] & (~SPARE_BLOCK));
+#if CMD_DMA
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+		p_BTableChangesDelta->FTLCommandCount =
+						FTLCommandCount;
+		p_BTableChangesDelta->BT_Index = (u32)(wBlockNum);
+		p_BTableChangesDelta->BT_Entry_Value = pbt[wBlockNum];
+		p_BTableChangesDelta->ValidFields = 0x0C;
+#endif
+		return pbt[wBlockNum];
+	}
+
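+	/*
+	 * Scan the whole table: find the least worn spare block, and
+	 * count spare and discarded blocks to decide whether garbage
+	 * collection should follow.
+	 */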
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+		if (IS_DISCARDED_BLOCK(i))
+			wDiscardBlockNum++;
+
+		if (IS_SPARE_BLOCK(i)) {
+			u32 wPhysicalIndex = (u32)((~BAD_BLOCK) & pbt[i]);
+			if (wPhysicalIndex > DeviceInfo.wSpectraEndBlock)
+				printk(KERN_ERR "FTL_Replace_LWBlock: "
+					"This should never occur!\n");
+			if (g_pWearCounter[wPhysicalIndex -
+				DeviceInfo.wSpectraStartBlock] <
+				wLeastWornCounter) {
+				wLeastWornCounter =
+					g_pWearCounter[wPhysicalIndex -
+					DeviceInfo.wSpectraStartBlock];
+				wLeastWornIndex = i;
+			}
+			wSpareBlockNum++;
+		}
+	}
+
+	nand_dbg_print(NAND_DBG_WARN,
+		"FTL_Replace_LWBlock: Least Worn Counter %d\n",
+		(int)wLeastWornCounter);
+
+	if ((wDiscardBlockNum >= NUM_FREE_BLOCKS_GATE) ||
+		(wSpareBlockNum <= NUM_FREE_BLOCKS_GATE))
+		*pGarbageCollect = PASS;
+	else
+		*pGarbageCollect = FAIL;
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"FTL_Replace_LWBlock: Discarded Blocks %u Spare"
+		" Blocks %u\n",
+		(unsigned int)wDiscardBlockNum,
+		(unsigned int)wSpareBlockNum);
+
+	return FTL_Replace_OneBlock(wBlockNum, wLeastWornIndex);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Replace_MWBlock
+* Inputs:       None
+* Outputs:      most worn spare block no./BAD_BLOCK
+* Description:  It finds most worn spare block.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u32 FTL_Replace_MWBlock(void)
+{
+	u32 i;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u8 wMostWornCounter = 0;
+	u32 wMostWornIndex = BAD_BLOCK;
+	u32 wSpareBlockNum = 0;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+		if (IS_SPARE_BLOCK(i)) {
+			u32 wPhysicalIndex = (u32)((~SPARE_BLOCK) & pbt[i]);
+			if (g_pWearCounter[wPhysicalIndex -
+			    DeviceInfo.wSpectraStartBlock] >
+			    wMostWornCounter) {
+				wMostWornCounter =
+				    g_pWearCounter[wPhysicalIndex -
+				    DeviceInfo.wSpectraStartBlock];
+				wMostWornIndex = wPhysicalIndex;
+			}
+			wSpareBlockNum++;
+		}
+	}
+
+	if (wSpareBlockNum <= 2)
+		return BAD_BLOCK;
+
+	return wMostWornIndex;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Replace_Block
+* Inputs:       Block Address
+* Outputs:      PASS=0 / FAIL=1
+* Description:  If block specified by blk_addr parameter is not free,
+*               replace it with the least worn block.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Replace_Block(u64 blk_addr)
+{
+	u32 current_blk = BLK_FROM_ADDR(blk_addr);
+	u32 *pbt = (u32 *)g_pBlockTable;
+	int wResult = PASS;
+	int GarbageCollect = FAIL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	if (IS_SPARE_BLOCK(current_blk)) {
+		pbt[current_blk] = (~SPARE_BLOCK) & pbt[current_blk];
+#if CMD_DMA
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+		p_BTableChangesDelta->FTLCommandCount =
+			FTLCommandCount;
+		p_BTableChangesDelta->BT_Index = current_blk;
+		p_BTableChangesDelta->BT_Entry_Value = pbt[current_blk];
+		p_BTableChangesDelta->ValidFields = 0x0C ;
+#endif
+		return wResult;
+	}
+
+	FTL_Replace_LWBlock(current_blk, &GarbageCollect);
+
+	if (PASS == GarbageCollect)
+		wResult = GLOB_FTL_Garbage_Collection();
+
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Is_BadBlock
+* Inputs:       block number to test
+* Outputs:      PASS (block is BAD) / FAIL (block is not bad)
+* Description:  test if this block number is flagged as bad
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Is_BadBlock(u32 wBlockNum)
+{
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	if (wBlockNum >= DeviceInfo.wSpectraStartBlock
+		&& BAD_BLOCK == (pbt[wBlockNum] & BAD_BLOCK))
+		return PASS;
+	else
+		return FAIL;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Flush_Cache
+* Inputs:       none
+* Outputs:      PASS=0 / FAIL=1
+* Description:  flush all the cache blocks to flash
+*               if a cache block is not dirty, don't do anything with it
+*               else, write the block and update the block table
+* Note:         This function should be called at shutdown/power down
+*               to write important data to the device
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Flush_Cache(void)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
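+	/* Write back every dirty cache block, then persist the block table */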
+	for (i = 0; i < CACHE_BLOCK_NUMBER; i++) {
+		if (SET == Cache.ItemArray[i].bChanged) {
+			if (FTL_Cache_Write_Back(Cache.ItemArray[i].pContent,
+					Cache.ItemArray[i].dwAddress) != ERR)
+				Cache.ItemArray[i].bChanged = CLEAR;
+			else
+				return ERR;
+		}
+	}
+
+	return FTL_Write_Block_Table(FAIL);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Page_Read
+* Inputs:       pointer to data
+*               address of data (u64 is LBA * Bytes/Page)
+* Outputs:      PASS=0 / FAIL=1
+* Description:  reads a page of data into RAM from the cache
+*               if the data is not already in cache, read from flash to cache
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Page_Read(u8 *pData, u64 dwPageAddr)
+{
+	u8 cache_blk;
+	int wResult = PASS;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "GLOB_FTL_Page_Read - "
+		"dwPageAddr: %llu\n", dwPageAddr);
+
+#if CMD_DMA
+	g_SBDCmdIndex++;
+#endif
+
+	cache_blk = FTL_Cache_If_Hit(dwPageAddr);
+
+	if (UNHIT_BLOCK == cache_blk) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			       "GLOB_FTL_Page_Read: Cache not hit\n");
+		wResult = FTL_Cache_Write();
+		if (ERR == FTL_Cache_Read(dwPageAddr))
+			wResult = ERR;
+		cache_blk = Cache.bLRU;
+	}
+
+	FTL_Cache_Read_Page(pData, dwPageAddr, cache_blk);
+
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Page_Write
+* Inputs:       pointer to data
+*               address of data (u64 is LBA * Bytes/Page)
+* Outputs:      PASS=0 / FAIL=1
+* Description:  writes a page of data from RAM to the cache
+*               if the data is not already in cache, write back the
+*               least recently used block and read the addressed block
+*               from flash to cache
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Page_Write(u8 *pData, u64 dwPageAddr)
+{
+	u8 cache_blk;
+	int wResult = PASS;
+	u32 *pbt = (u32 *)g_pBlockTable;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "GLOB_FTL_Page_Write - "
+		"dwPageAddr: %llu\n", dwPageAddr);
+
+#if CMD_DMA
+	g_SBDCmdIndex++;
+#endif
+
+	cache_blk = FTL_Cache_If_Hit(dwPageAddr);
+
+	if (UNHIT_BLOCK == cache_blk) {
+		wResult = FTL_Cache_Write();
+		if (IS_BAD_BLOCK(BLK_FROM_ADDR(dwPageAddr))) {
+			if (FAIL == FTL_Replace_Block(dwPageAddr))
+				return FAIL;
+		}
+		if (ERR == FTL_Cache_Read(dwPageAddr))
+			wResult = ERR;
+		cache_blk = Cache.bLRU;
+		FTL_Cache_Write_Page(pData, dwPageAddr, cache_blk, 0);
+	} else {
+#if CMD_DMA
+		FTL_Cache_Write_Page(pData, dwPageAddr, cache_blk,
+				LLD_CMD_FLAG_ORDER_BEFORE_REST);
+#else
+		FTL_Cache_Write_Page(pData, dwPageAddr, cache_blk, 0);
+#endif
+	}
+
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     GLOB_FTL_Block_Erase
+* Inputs:       address of block to erase (now in byte format, should change to
+* block format)
+* Outputs:      PASS=0 / FAIL=1
+* Description:  erases the specified block
+*               increments the erase count
+*               If the erase count reaches its upper limit, call a
+*               function to do the adjustment as per the relative
+*               erase count values
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+int GLOB_FTL_Block_Erase(u64 blk_addr)
+{
+	int status;
+	u32 BlkIdx;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	BlkIdx = (u32)(blk_addr >> DeviceInfo.nBitsInBlockDataSize);
+
+	if (BlkIdx < DeviceInfo.wSpectraStartBlock) {
+		printk(KERN_ERR "GLOB_FTL_Block_Erase: "
+			"This should never occur\n");
+		return FAIL;
+	}
+
+#if CMD_DMA
+	status = GLOB_LLD_Erase_Block(BlkIdx,
+		FTLCommandCount, LLD_CMD_FLAG_MODE_CDMA);
+	if (status == FAIL)
+		nand_dbg_print(NAND_DBG_WARN,
+			       "NAND Program fail in %s, Line %d, "
+			       "Function: %s, new Bad Block %d generated!\n",
+			       __FILE__, __LINE__, __func__, BlkIdx);
+#else
+	status = GLOB_LLD_Erase_Block(BlkIdx);
+	if (status == FAIL) {
+		nand_dbg_print(NAND_DBG_WARN,
+			       "NAND Program fail in %s, Line %d, "
+			       "Function: %s, new Bad Block %d generated!\n",
+			       __FILE__, __LINE__, __func__, BlkIdx);
+		return status;
+	}
+#endif
+
+	if (DeviceInfo.MLCDevice) {
+		g_pReadCounter[BlkIdx - DeviceInfo.wSpectraStartBlock] = 0;
+		if (g_cBlockTableStatus != IN_PROGRESS_BLOCK_TABLE) {
+			g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+			FTL_Write_IN_Progress_Block_Table_Page();
+		}
+	}
+
+	g_pWearCounter[BlkIdx - DeviceInfo.wSpectraStartBlock]++;
+
+#if CMD_DMA
+	p_BTableChangesDelta =
+		(struct BTableChangesDelta *)g_pBTDelta_Free;
+	g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+	p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+	p_BTableChangesDelta->WC_Index =
+		BlkIdx - DeviceInfo.wSpectraStartBlock;
+	p_BTableChangesDelta->WC_Entry_Value =
+		g_pWearCounter[BlkIdx - DeviceInfo.wSpectraStartBlock];
+	p_BTableChangesDelta->ValidFields = 0x30;
+
+	if (DeviceInfo.MLCDevice) {
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+		p_BTableChangesDelta->FTLCommandCount =
+			FTLCommandCount;
+		p_BTableChangesDelta->RC_Index =
+			BlkIdx - DeviceInfo.wSpectraStartBlock;
+		p_BTableChangesDelta->RC_Entry_Value =
+			g_pReadCounter[BlkIdx -
+				DeviceInfo.wSpectraStartBlock];
+		p_BTableChangesDelta->ValidFields = 0xC0;
+	}
+
+	FTLCommandCount++;
+#endif
+
+	if (g_pWearCounter[BlkIdx - DeviceInfo.wSpectraStartBlock] == 0xFE)
+		FTL_Adjust_Relative_Erase_Count(BlkIdx);
+
+	return status;
+}
+
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Adjust_Relative_Erase_Count
+* Inputs:       index to block that was just incremented and is at the max
+* Outputs:      PASS=0 / FAIL=1
+* Description:  If any erase count is at MAX, adjust the erase count of
+*               every block by subtracting the least-worn counter value
+*               from every entry in the wear table
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
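+/*
+ * Worked example: wear counters {7, 3, 5} have a least-worn value of 3, so
+ * the table is rewritten as {4, 0, 2}, preserving relative wear while
+ * restoring headroom below MAX_BYTE_VALUE. If the least-worn value is
+ * already 0 there is nothing to subtract; the special case below instead
+ * decrements the counter of the block that hit the limit and runs
+ * FTL_Static_Wear_Leveling().
+ */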
+static int FTL_Adjust_Relative_Erase_Count(u32 Index_of_MAX)
+{
+	u8 wLeastWornCounter = MAX_BYTE_VALUE;
+	u8 wWearCounter;
+	u32 i, wWearIndex;
+	u32 *pbt = (u32 *)g_pBlockTable;
+	int wResult = PASS;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		__FILE__, __LINE__, __func__);
+
+	for (i = 0; i < DeviceInfo.wDataBlockNum; i++) {
+		if (IS_BAD_BLOCK(i))
+			continue;
+		wWearIndex = (u32)(pbt[i] & (~BAD_BLOCK));
+
+		/* Both values are unsigned, so the old subtraction test
+		 * could never be negative; compare directly instead. */
+		if (wWearIndex < DeviceInfo.wSpectraStartBlock)
+			printk(KERN_ERR "FTL_Adjust_Relative_Erase_Count:"
+					"This should never occur\n");
+		wWearCounter = g_pWearCounter[wWearIndex -
+			DeviceInfo.wSpectraStartBlock];
+		if (wWearCounter < wLeastWornCounter)
+			wLeastWornCounter = wWearCounter;
+	}
+
+	if (wLeastWornCounter == 0) {
+		nand_dbg_print(NAND_DBG_WARN,
+			"Adjusting Wear Levelling Counters: Special Case\n");
+		g_pWearCounter[Index_of_MAX -
+			DeviceInfo.wSpectraStartBlock]--;
+#if CMD_DMA
+		p_BTableChangesDelta =
+			(struct BTableChangesDelta *)g_pBTDelta_Free;
+		g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+		p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+		p_BTableChangesDelta->WC_Index =
+			Index_of_MAX - DeviceInfo.wSpectraStartBlock;
+		p_BTableChangesDelta->WC_Entry_Value =
+			g_pWearCounter[Index_of_MAX -
+				DeviceInfo.wSpectraStartBlock];
+		p_BTableChangesDelta->ValidFields = 0x30;
+#endif
+		FTL_Static_Wear_Leveling();
+	} else {
+		for (i = 0; i < DeviceInfo.wDataBlockNum; i++)
+			if (!IS_BAD_BLOCK(i)) {
+				wWearIndex = (u32)(pbt[i] & (~BAD_BLOCK));
+				g_pWearCounter[wWearIndex -
+					DeviceInfo.wSpectraStartBlock] =
+					(u8)(g_pWearCounter
+					[wWearIndex -
+					DeviceInfo.wSpectraStartBlock] -
+					wLeastWornCounter);
+#if CMD_DMA
+				p_BTableChangesDelta =
+				(struct BTableChangesDelta *)g_pBTDelta_Free;
+				g_pBTDelta_Free +=
+					sizeof(struct BTableChangesDelta);
+
+				p_BTableChangesDelta->FTLCommandCount =
+					FTLCommandCount;
+				p_BTableChangesDelta->WC_Index = wWearIndex -
+					DeviceInfo.wSpectraStartBlock;
+				p_BTableChangesDelta->WC_Entry_Value =
+					g_pWearCounter[wWearIndex -
+					DeviceInfo.wSpectraStartBlock];
+				p_BTableChangesDelta->ValidFields = 0x30;
+#endif
+			}
+	}
+
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Write_IN_Progress_Block_Table_Page
+* Inputs:       None
+* Outputs:      None
+* Description:  It writes in-progress flag page to the page next to
+*               block table
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static int FTL_Write_IN_Progress_Block_Table_Page(void)
+{
+	int wResult = PASS;
+	u16 bt_pages;
+	u16 dwIPFPageAddr;
+#if !CMD_DMA
+	u32 *pbt = (u32 *)g_pBlockTable;
+	u32 wTempBlockTableIndex;
+#endif
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+	bt_pages = FTL_Get_Block_Table_Flash_Size_Pages();
+
+	dwIPFPageAddr = g_wBlockTableOffset + bt_pages;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Writing IPF at "
+			       "Block %d Page %d\n",
+			       g_wBlockTableIndex, dwIPFPageAddr);
+
+#if CMD_DMA
+	wResult = GLOB_LLD_Write_Page_Main_Spare(g_pIPF,
+		g_wBlockTableIndex, dwIPFPageAddr, 1, FTLCommandCount,
+		LLD_CMD_FLAG_MODE_CDMA | LLD_CMD_FLAG_ORDER_BEFORE_REST);
+
+	if (wResult == FAIL) {
+		nand_dbg_print(NAND_DBG_WARN,
+			       "NAND Program fail in %s, Line %d, "
+			       "Function: %s, new Bad Block %d generated!\n",
+			       __FILE__, __LINE__, __func__,
+			       g_wBlockTableIndex);
+	}
+	g_wBlockTableOffset = dwIPFPageAddr + 1;
+	p_BTableChangesDelta = (struct BTableChangesDelta *)g_pBTDelta_Free;
+	g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+	p_BTableChangesDelta->FTLCommandCount = FTLCommandCount;
+	p_BTableChangesDelta->g_wBlockTableOffset = g_wBlockTableOffset;
+	p_BTableChangesDelta->ValidFields = 0x01;
+	FTLCommandCount++;
+#else
+	wResult = GLOB_LLD_Write_Page_Main_Spare(g_pIPF,
+		g_wBlockTableIndex, dwIPFPageAddr, 1);
+	if (wResult == FAIL) {
+		nand_dbg_print(NAND_DBG_WARN,
+			       "NAND Program fail in %s, Line %d, "
+			       "Function: %s, new Bad Block %d generated!\n",
+			       __FILE__, __LINE__, __func__,
+			       (int)g_wBlockTableIndex);
+		MARK_BLOCK_AS_BAD(pbt[BLOCK_TABLE_INDEX]);
+		wTempBlockTableIndex = FTL_Replace_Block_Table();
+		bt_block_changed = 1;
+		if (BAD_BLOCK == wTempBlockTableIndex)
+			return ERR;
+		g_wBlockTableIndex = wTempBlockTableIndex;
+		g_wBlockTableOffset = 0;
+		pbt[BLOCK_TABLE_INDEX] = g_wBlockTableIndex;
+		return FAIL;
+	}
+	g_wBlockTableOffset = dwIPFPageAddr + 1;
+#endif
+	return wResult;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     FTL_Read_Disturbance
+* Inputs:       block address
+* Outputs:      PASS=0 / FAIL=1
+* Description:  used to handle read disturbance. Data in block that
+*               reaches its read limit is moved to new block
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
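+/*
+ * Outline of the loop below: find the spare block with the smallest read
+ * counter; if the number of spares is at or below NUM_FREE_BLOCKS_GATE,
+ * run garbage collection and retry; otherwise copy the disturbed block's
+ * pages into the least-read spare and swap the two block table entries,
+ * marking the old entry DISCARD_BLOCK so it can be reclaimed later.
+ */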
+int FTL_Read_Disturbance(u32 blk_addr)
+{
+	int wResult = FAIL;
+	u32 *pbt = (u32 *) g_pBlockTable;
+	u32 dwOldBlockAddr = blk_addr;
+	u32 wBlockNum;
+	u32 i;
+	u32 wLeastReadCounter = 0xFFFF;
+	u32 wLeastReadIndex = BAD_BLOCK;
+	u32 wSpareBlockNum = 0;
+	u32 wTempNode;
+	u32 wReplacedNode;
+	u8 *g_pTempBuf;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+			       __FILE__, __LINE__, __func__);
+
+#if CMD_DMA
+	g_pTempBuf = (u8 *)g_pCopyBackBufferStart;
+	g_pCopyBackBufferStart += DeviceInfo.wPageDataSize *
+		DeviceInfo.wPagesPerBlock * sizeof(u8);
+#else
+	g_pTempBuf = (u8 *)g_pMemPoolFree;
+	g_pMemPoolFree += DeviceInfo.wPageDataSize *
+		DeviceInfo.wPagesPerBlock * sizeof(u8);
+	ALIGN_DWORD_FWD(g_pMemPoolFree);
+	debug_boundary_error(((int)g_pMemPoolFree - (int)g_pMemPool) - 1,
+			     globalMemSize, 0);
+#endif
+
+	wBlockNum = FTL_Get_Block_Index(blk_addr);
+
+	do {
+		/* This was a bug: 'i' should be the logical block number
+		 * and start from 1 (0 is reserved for the block table).
+		 * Fixed.        - Yunpeng 2008.12.19
+		 */
+		for (i = 1; i < DeviceInfo.wDataBlockNum; i++) {
+			if (IS_SPARE_BLOCK(i)) {
+				u32 wPhysicalIndex =
+					(u32)((~SPARE_BLOCK) & pbt[i]);
+				if (g_pReadCounter[wPhysicalIndex -
+					DeviceInfo.wSpectraStartBlock] <
+					wLeastReadCounter) {
+					wLeastReadCounter =
+						g_pReadCounter[wPhysicalIndex -
+						DeviceInfo.wSpectraStartBlock];
+					wLeastReadIndex = i;
+				}
+				wSpareBlockNum++;
+			}
+		}
+
+		if (wSpareBlockNum <= NUM_FREE_BLOCKS_GATE) {
+			wResult = GLOB_FTL_Garbage_Collection();
+			if (PASS == wResult)
+				continue;
+			else
+				break;
+		} else {
+			wTempNode = (u32)(DISCARD_BLOCK | pbt[wBlockNum]);
+			wReplacedNode = (u32)((~SPARE_BLOCK) &
+					pbt[wLeastReadIndex]);
+#if CMD_DMA
+			pbt[wBlockNum] = wReplacedNode;
+			pbt[wLeastReadIndex] = wTempNode;
+			p_BTableChangesDelta =
+				(struct BTableChangesDelta *)g_pBTDelta_Free;
+			g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+			p_BTableChangesDelta->FTLCommandCount =
+					FTLCommandCount;
+			p_BTableChangesDelta->BT_Index = wBlockNum;
+			p_BTableChangesDelta->BT_Entry_Value = pbt[wBlockNum];
+			p_BTableChangesDelta->ValidFields = 0x0C;
+
+			p_BTableChangesDelta =
+				(struct BTableChangesDelta *)g_pBTDelta_Free;
+			g_pBTDelta_Free += sizeof(struct BTableChangesDelta);
+
+			p_BTableChangesDelta->FTLCommandCount =
+					FTLCommandCount;
+			p_BTableChangesDelta->BT_Index = wLeastReadIndex;
+			p_BTableChangesDelta->BT_Entry_Value =
+					pbt[wLeastReadIndex];
+			p_BTableChangesDelta->ValidFields = 0x0C;
+
+			wResult = GLOB_LLD_Read_Page_Main(g_pTempBuf,
+				dwOldBlockAddr, 0, DeviceInfo.wPagesPerBlock,
+				FTLCommandCount, LLD_CMD_FLAG_MODE_CDMA);
+			if (wResult == FAIL)
+				return wResult;
+
+			FTLCommandCount++;
+
+			if (wResult != FAIL) {
+				if (FAIL == GLOB_LLD_Write_Page_Main(
+					g_pTempBuf, pbt[wBlockNum], 0,
+					DeviceInfo.wPagesPerBlock,
+					FTLCommandCount)) {
+					nand_dbg_print(NAND_DBG_WARN,
+						"NAND Program fail in "
+						"%s, Line %d, Function: %s, "
+						"new Bad Block %d "
+						"generated!\n",
+						__FILE__, __LINE__, __func__,
+						(int)pbt[wBlockNum]);
+					wResult = FAIL;
+					MARK_BLOCK_AS_BAD(pbt[wBlockNum]);
+				}
+				FTLCommandCount++;
+			}
+#else
+			wResult = GLOB_LLD_Read_Page_Main(g_pTempBuf,
+				dwOldBlockAddr, 0, DeviceInfo.wPagesPerBlock);
+			if (wResult == FAIL) {
+				g_pMemPoolFree -= (DeviceInfo.wPageDataSize *
+					DeviceInfo.wPagesPerBlock *
+					sizeof(u8));
+				ALIGN_DWORD_BWD(g_pMemPoolFree);
+				return wResult;
+			}
+
+			if (wResult != FAIL) {
+				/* This was a bug: at this point,
+				pbt[wBlockNum] still holds the physical
+				address of the discarded block and must
+				not be written to. Fixed as below.
+					-- Yunpeng 2008.12.19
+				*/
+				wResult = GLOB_LLD_Write_Page_Main(g_pTempBuf,
+					wReplacedNode, 0,
+					DeviceInfo.wPagesPerBlock);
+				if (wResult == FAIL) {
+					nand_dbg_print(NAND_DBG_WARN,
+						"NAND Program fail in "
+						"%s, Line %d, Function: %s, "
+						"new Bad Block %d "
+						"generated!\n",
+						__FILE__, __LINE__, __func__,
+						(int)wReplacedNode);
+					MARK_BLOCK_AS_BAD(wReplacedNode);
+				} else {
+					pbt[wBlockNum] = wReplacedNode;
+					pbt[wLeastReadIndex] = wTempNode;
+				}
+			}
+
+			if ((wResult == PASS) && (g_cBlockTableStatus !=
+				IN_PROGRESS_BLOCK_TABLE)) {
+				g_cBlockTableStatus = IN_PROGRESS_BLOCK_TABLE;
+				FTL_Write_IN_Progress_Block_Table_Page();
+			}
+#endif
+		}
+	} while (wResult != PASS);
+
+#if CMD_DMA
+	/* ... */
+#else
+	g_pMemPoolFree -= (DeviceInfo.wPageDataSize *
+			DeviceInfo.wPagesPerBlock * sizeof(u8));
+	ALIGN_DWORD_BWD(g_pMemPoolFree);
+#endif
+
+	return wResult;
+}
+
diff --git a/drivers/staging/mrst_nand/flash.h b/drivers/staging/mrst_nand/flash.h
new file mode 100644
index 0000000..d0adf5c
--- /dev/null
+++ b/drivers/staging/mrst_nand/flash.h
@@ -0,0 +1,158 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#ifndef _FLASH_INTERFACE_
+#define _FLASH_INTERFACE_
+
+#include "ffsport.h"
+#include "spectraswconfig.h"
+
+#define MAX_BLOCKNODE_VALUE     0xFFFFFF
+#define DISCARD_BLOCK           0x800000
+#define SPARE_BLOCK             0x400000
+#define BAD_BLOCK               0xC00000
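+/*
+ * Block table entries are 24-bit values (see MAX_BLOCKNODE_VALUE): the low
+ * bits hold the physical block number and the top two bits are status
+ * flags, so (pbt[i] & ~BAD_BLOCK) recovers the block number. BAD_BLOCK is
+ * the union of the DISCARD_BLOCK and SPARE_BLOCK bits.
+ */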
+
+#define MAX_BYTE_VALUE      0xFF
+#define UNHIT_BLOCK         0xFF
+
+#define IN_PROGRESS_BLOCK_TABLE   0x00
+#define CURRENT_BLOCK_TABLE       0x01
+
+
+#define BTSIG_OFFSET   (0)
+#define BTSIG_BYTES    (5)
+#define BTSIG_DELTA    (3)
+
+#define MAX_TWO_BYTE_VALUE 0xFFFF
+#define MAX_READ_COUNTER  0x2710
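+/* 0x2710 == 10000 reads; see FTL_Read_Disturbance() for how blocks that
+ * reach the read limit are handled */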
+
+#define FIRST_BT_ID		(1)
+#define LAST_BT_ID    (254)
+#define BTBLOCK_INVAL  ((u32)0xFFFFFFFF)
+
+#define ALIGN_DWORD_FWD(ptr)  (ptr = (u8 *)((unsigned long)(ptr+3) & ~0x3))
+#define ALIGN_DWORD_BWD(ptr)  (ptr = (u8 *)((unsigned long)ptr & ~0x3))
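+/*
+ * Example: a pointer at 0x1005 becomes 0x1008 after ALIGN_DWORD_FWD and
+ * 0x1004 after ALIGN_DWORD_BWD; pointers that are already 4-byte aligned
+ * are left unchanged by both macros.
+ */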
+
+struct device_info_tag {
+	u16 wDeviceMaker;
+	u32 wDeviceType;
+	u32 wSpectraStartBlock;
+	u32 wSpectraEndBlock;
+	u32 wTotalBlocks;
+	u16 wPagesPerBlock;
+	u16 wPageSize;
+	u16 wPageDataSize;
+	u16 wPageSpareSize;
+	u16 wNumPageSpareFlag;
+	u16 wECCBytesPerSector;
+	u32 wBlockSize;
+	u32 wBlockDataSize;
+	u32 wDataBlockNum;
+	u8 bPlaneNum;
+	u16 wDeviceMainAreaSize;
+	u16 wDeviceSpareAreaSize;
+	u16 wDevicesConnected;
+	u16 wDeviceWidth;
+	u16 wHWRevision;
+	u16 wHWFeatures;
+
+	u16 wONFIDevFeatures;
+	u16 wONFIOptCommands;
+	u16 wONFITimingMode;
+	u16 wONFIPgmCacheTimingMode;
+
+	u16 MLCDevice;
+	u16 wSpareSkipBytes;
+
+	u8 nBitsInPageNumber;
+	u8 nBitsInPageDataSize;
+	u8 nBitsInBlockDataSize;
+};
+
+extern struct device_info_tag DeviceInfo;
+
+/* Cache item format */
+struct flash_cache_item_tag {
+	u64 dwAddress;
+	u8 bLRUCount;
+	u8 bChanged;
+	u8 *pContent;
+};
+
+struct flash_cache_tag {
+	u8 bLRU;
+	u32 dwCacheDataSize;
+	u16 wCachePageNum;
+	struct flash_cache_item_tag ItemArray[CACHE_BLOCK_NUMBER];
+};
+
+extern struct flash_cache_tag Cache;
+
+/* structure used for the IdentifyDevice function */
+struct spectra_indentfy_dev_tag {
+	u32 NumBlocks;
+	u16 PagesPerBlock;
+	u16 PageDataSize;
+	u16 wECCBytesPerSector;
+	u32 wDataBlockNum;
+	u32 SizeOfGlobalMem;
+};
+
+int GLOB_FTL_Flash_Init(void);
+int GLOB_FTL_Flash_Release(void);
+/*void GLOB_FTL_Erase_Flash(void);*/
+int GLOB_FTL_Block_Erase(u64 block_addr);
+int GLOB_FTL_Is_BadBlock(u32 block_num);
+int GLOB_FTL_IdentifyDevice(struct spectra_indentfy_dev_tag *IdentfyDeviceData);
+int GLOB_FTL_Mem_Config(u8 *pMem);
+int GLOB_FTL_cdma_int(void);
+int GLOB_FTL_Event_Status(int *);
+void GLOB_FTL_Enable_Disable_Interrupts(u16 INT_ENABLE);
+#if CMD_DMA
+void GLOB_FTL_Execute_CMDS(void);
+#endif
+
+/*int FTL_Read_Disturbance(ADDRESSTYPE dwBlockAddr);*/
+int FTL_Read_Disturbance(u32 dwBlockAddr);
+
+/*Flash r/w based on cache*/
+int GLOB_FTL_Page_Read(u8 *read_data, u64 page_addr);
+int GLOB_FTL_Page_Write(u8 *write_data, u64 page_addr);
+int GLOB_FTL_Wear_Leveling(void);
+int GLOB_FTL_Flash_Format(void);
+int GLOB_FTL_Init(void);
+int GLOB_FTL_Flush_Cache(void);
+int GLOB_FTL_Garbage_Collection(void);
+int GLOB_FTL_BT_Garbage_Collection(void);
+void GLOB_FTL_Cache_Release(void);
+u8 *get_blk_table_start_addr(void);
+u8 *get_wear_leveling_table_start_addr(void);
+unsigned long get_blk_table_len(void);
+unsigned long get_wear_leveling_table_len(void);
+
+#if DEBUG_BNDRY
+void debug_boundary_lineno_error(int chnl, int limit, int no, int lineno,
+				char *filename);
+#define debug_boundary_error(chnl, limit, no) debug_boundary_lineno_error(chnl,\
+						limit, no, __LINE__, __FILE__)
+#else
+#define debug_boundary_error(chnl, limit, no) do {} while (0)
+#endif
+
+#endif /*_FLASH_INTERFACE_*/
diff --git a/drivers/staging/mrst_nand/lld.c b/drivers/staging/mrst_nand/lld.c
new file mode 100644
index 0000000..bed1c3c
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld.c
@@ -0,0 +1,492 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#ifdef ELDORA
+#include "defs.h"
+#include "lld.h"
+#else
+#include "spectraswconfig.h"
+#include "ffsport.h"
+#include "ffsdefs.h"
+#include "lld.h"
+
+#ifdef NEW_LLD_API
+#include "flash.h"
+#endif
+
+#endif
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+#if FLASH_EMU		/* vector all the LLD calls to the LLD_EMU code */
+#include "lld_emu.h"
+#include "lld_cdma.h"
+
+/* common functions: */
+u16 GLOB_LLD_Flash_Reset(void)
+{
+	return emu_Flash_Reset();
+}
+
+u16 GLOB_LLD_Read_Device_ID(void)
+{
+	return emu_Read_Device_ID();
+}
+
+u16 GLOB_LLD_Flash_Release(void)
+{
+	return emu_Flash_Release();
+}
+
+#if CMD_DMA			/* new APIs with tags */
+u16 GLOB_LLD_Flash_Init(u16 Flags)
+{
+	if (Flags & LLD_CMD_FLAG_MODE_POLL)
+		return emu_Flash_Init();
+	else
+		return emu_CDMA_Flash_Init();
+}
+
+u16 GLOB_LLD_Erase_Block(u32 block, u8 TagCount, u16 Flags)
+{
+	if (Flags & LLD_CMD_FLAG_MODE_POLL)
+		return emu_Erase_Block(block);
+	else
+		return CDMA_Data_CMD(TagCount, ERASE_CMD, 0, block, 0, 0,
+				     Flags);
+}
+
+u16 GLOB_LLD_Write_Page_Main(u8 *data, u32 block, u16 page,
+				u16 count, u8 TagCount)
+{
+	return CDMA_Data_CMD(TagCount, WRITE_MAIN_CMD, data, block, page, count,
+			     0);
+}
+
+u16 GLOB_LLD_Read_Page_Main(u8 *data, u32 block, u16 page,
+			       u16 count, u8 TagCount, u16 Flags)
+{
+	if (Flags & LLD_CMD_FLAG_MODE_POLL)
+		return emu_Read_Page_Main(data, block, page, count);
+	else
+		return CDMA_Data_CMD(TagCount, READ_MAIN_CMD, data, block, page,
+				     count, Flags);
+}
+
+u16 GLOB_LLD_MemCopy_CMD(u8 TagCount, u8 *dest, u8 *src,
+			u16 ByteCount, u16 flag)
+{
+	return CDMA_MemCopy_CMD(TagCount, dest, src, ByteCount, flag);
+}
+
+u16 GLOB_LLD_Execute_CMDs(u16 count)
+{
+	return emu_CDMA_Execute_CMDs(count);
+}
+
+u16 GLOB_LLD_Event_Status(void)
+{
+	return emu_CDMA_Event_Status();
+}
+
+#ifndef ELDORA
+void GLOB_LLD_Enable_Disable_Interrupts(u16 INT_ENABLE)
+{
+	emu_Enable_Disable_Interrupts(INT_ENABLE);
+}
+
+u16 GLOB_LLD_Write_Page_Main_Spare(u8 *write_data, u32 block,
+				      u16 Page, u16 PageCount,
+				      u8 TagCount, u16 Flags)
+{
+	if (Flags & LLD_CMD_FLAG_MODE_POLL)
+		return emu_Write_Page_Main_Spare(write_data, block, Page,
+						 PageCount);
+	else
+		return CDMA_Data_CMD(TagCount, WRITE_MAIN_SPARE_CMD, write_data,
+				     block, Page, PageCount, Flags);
+}
+
+u16 GLOB_LLD_Read_Page_Main_Spare(u8 *read_data, u32 Block,
+				     u16 Page, u16 PageCount,
+				     u8 TagCount)
+{
+	return CDMA_Data_CMD(TagCount, READ_MAIN_SPARE_CMD,
+		read_data, Block, Page, PageCount,
+		LLD_CMD_FLAG_MODE_CDMA);
+}
+
+u16 GLOB_LLD_Write_Page_Spare(u8 *write_data, u32 Block, u16 Page,
+				 u16 PageCount)
+{
+	return emu_Write_Page_Spare(write_data, Block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Read_Page_Spare(u8 *read_data, u32 Block, u16 Page,
+				u16 PageCount)
+{
+	return emu_Read_Page_Spare(read_data, Block, Page, PageCount);
+}
+
+u32 GLOB_LLD_Memory_Pool_Size(void)
+{
+	return CDMA_Memory_Pool_Size();
+}
+
+int GLOB_LLD_Mem_Config(u8 *pMem)
+{
+	return CDMA_Mem_Config(pMem);
+}
+#endif /* !ELDORA */
+
+#else /* if not CMD_DMA, use old style parameters without tags */
+u16 GLOB_LLD_Flash_Init(void)
+{
+	return emu_Flash_Init();
+}
+
+u16 GLOB_LLD_Erase_Block(u32 block_add)
+{
+	return emu_Erase_Block(block_add);
+}
+
+u16 GLOB_LLD_Write_Page_Main(u8 *write_data, u32 block, u16 Page,
+				u16 PageCount)
+{
+	return emu_Write_Page_Main(write_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Read_Page_Main(u8 *read_data, u32 block, u16 Page,
+			       u16 PageCount)
+{
+	return emu_Read_Page_Main(read_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Read_Page_Main_Polling(u8 *read_data,
+			u32 block, u16 page, u16 page_count)
+{
+	return emu_Read_Page_Main(read_data, block, page, page_count);
+}
+#ifndef ELDORA
+void GLOB_LLD_Enable_Disable_Interrupts(u16 INT_ENABLE)
+{
+	emu_Enable_Disable_Interrupts(INT_ENABLE);
+}
+
+u16 GLOB_LLD_Write_Page_Main_Spare(u8 *write_data, u32 block,
+				      u16 Page, u16 PageCount)
+{
+	return emu_Write_Page_Main_Spare(write_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Read_Page_Main_Spare(u8 *read_data, u32 block,
+				     u16 Page, u16 PageCount)
+{
+	return emu_Read_Page_Main_Spare(read_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Write_Page_Spare(u8 *write_data, u32 block, u16 Page,
+				 u16 PageCount)
+{
+	return emu_Write_Page_Spare(write_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Read_Page_Spare(u8 *read_data, u32 block, u16 Page,
+				u16 PageCount)
+{
+	return emu_Read_Page_Spare(read_data, block, Page, PageCount);
+}
+
+u32 GLOB_LLD_Memory_Pool_Size(void)
+{
+	return 0;
+}
+
+int GLOB_LLD_Mem_Config(u8 *pMem)
+{
+	return 0;
+}
+
+#endif /* !ELDORA */
+#endif /* CMD_DMA or not */
+
+#ifndef ELDORA
+u16 GLOB_LLD_Get_Bad_Block(u32 block)
+{
+	return emu_Get_Bad_Block(block);
+}
+#endif /* !ELDORA */
+
+#endif /* FLASH_EMU */
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+#if FLASH_NAND	/* vector all the LLD calls to the NAND controller code */
+#include "lld_nand.h"
+#ifndef ELDORA
+#include "flash.h"
+#endif
+
+/* common functions for LLD_NAND */
+void GLOB_LLD_ECC_Control(int enable)
+{
+	NAND_ECC_Ctrl(enable);
+}
+
+/* common functions for LLD_NAND */
+u16 GLOB_LLD_Flash_Reset(void)
+{
+	return NAND_Flash_Reset();
+}
+
+u16 GLOB_LLD_Read_Device_ID(void)
+{
+	return NAND_Read_Device_ID();
+}
+
+u16 GLOB_LLD_UnlockArrayAll(void)
+{
+	return NAND_UnlockArrayAll();
+}
+
+void GLOB_LLD_Enable_Disable_Interrupts(u16 INT_ENABLE)
+{
+	NAND_LLD_Enable_Disable_Interrupts(INT_ENABLE);
+}
+
+u16 GLOB_LLD_Flash_Init(void)
+{
+	return NAND_Flash_Init();
+}
+
+u16 GLOB_LLD_Flash_Release(void)
+{
+	return 0;
+}
+
+u16 GLOB_LLD_Event_Status(void)
+{
+	return NAND_LLD_Event_Status();
+}
+
+u16 GLOB_LLD_Erase_Block(u32 block_add)
+{
+	return NAND_Erase_Block(block_add);
+}
+
+
+u16 GLOB_LLD_Write_Page_Main(u8 *write_data, u32 block, u16 Page,
+				u16 PageCount)
+{
+	return NAND_Write_Page_Main(write_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Read_Page_Main(u8 *read_data, u32 block, u16 page,
+			       u16 page_count)
+{
+	return NAND_Read_Page_Main(read_data, block, page, page_count);
+}
+
+u16 GLOB_LLD_Read_Page_Main_Polling(u8 *read_data,
+			u32 block, u16 page, u16 page_count)
+{
+	return NAND_Read_Page_Main_Polling(read_data,
+			block, page, page_count);
+}
+
+#ifndef ELDORA
+u16 GLOB_LLD_Write_Page_Main_Spare(u8 *write_data, u32 block,
+				      u16 Page, u16 PageCount)
+{
+	return NAND_Write_Page_Main_Spare(write_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Write_Page_Spare(u8 *write_data, u32 block, u16 Page,
+				 u16 PageCount)
+{
+	return NAND_Write_Page_Spare(write_data, block, Page, PageCount);
+}
+
+u16 GLOB_LLD_Read_Page_Main_Spare(u8 *read_data, u32 block,
+				     u16 page, u16 page_count)
+{
+	return NAND_Read_Page_Main_Spare(read_data, block, page, page_count);
+}
+
+u16 GLOB_LLD_Read_Page_Spare(u8 *read_data, u32 block, u16 Page,
+				u16 PageCount)
+{
+	return NAND_Read_Page_Spare(read_data, block, Page, PageCount);
+}
+
+u16  GLOB_LLD_Get_Bad_Block(u32 block)
+{
+	return  NAND_Get_Bad_Block(block);
+}
+
+u32  GLOB_LLD_Memory_Pool_Size(void)
+{
+	return NAND_Memory_Pool_Size();
+}
+
+int GLOB_LLD_Mem_Config(u8 *pMem)
+{
+	return NAND_Mem_Config(pMem);
+}
+
+#endif /* !ELDORA */
+
+#endif /* FLASH_NAND */
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+
+/* CMD DMA is not applicable for Eldora */
+#ifndef ELDORA
+
+#if FLASH_CDMA	/* vector all the LLD data calls to the LLD_CDMA module */
+		/* vector some other LLD calls to the LLD_CDMA module */
+		/* vector the common LLD calls to the LLD_NAND module */
+#include "lld_cdma.h"
+#include "lld_nand.h"
+
+u16 GLOB_LLD_Flash_Reset(void)
+{
+	return NAND_Flash_Reset();
+}
+
+u16 GLOB_LLD_Read_Device_ID(void)
+{
+	return NAND_Read_Device_ID();
+}
+
+u16 GLOB_LLD_UnlockArrayAll(void)
+{
+	return NAND_UnlockArrayAll();
+}
+
+void GLOB_LLD_Enable_Disable_Interrupts(u16 INT_ENABLE)
+{
+	NAND_LLD_Enable_Disable_Interrupts(INT_ENABLE);
+}
+
+u16 GLOB_LLD_Flash_Release(void)	/* not used; NOP */
+{
+	return 0;
+}
+
+u16 GLOB_LLD_Flash_Init(u16 Flags)
+{
+	if (Flags & LLD_CMD_FLAG_MODE_POLL)
+		return NAND_Flash_Init();
+	else
+		return CDMA_Flash_Init();
+}
+
+int GLOB_LLD_is_cdma_int(void)
+{
+	return is_cdma_interrupt();
+}
+
+u16 GLOB_LLD_Event_Status(void)
+{
+	return CDMA_Event_Status();
+}
+
+u16 GLOB_LLD_MemCopy_CMD(u8 TagCount, u8 *dest, u8 *src,
+			u16 ByteCount, u16 flag)
+{
+	return CDMA_MemCopy_CMD(TagCount, dest, src, ByteCount, flag);
+}
+
+u16 GLOB_LLD_Execute_CMDs(u16 count)
+{
+	return CDMA_Execute_CMDs(count);
+}
+
+u16 GLOB_LLD_Erase_Block(u32 block, u8 TagCount, u16 Flags)
+{
+	if (Flags & LLD_CMD_FLAG_MODE_POLL)
+		return NAND_Erase_Block(block);
+	else
+		return CDMA_Data_CMD(TagCount, ERASE_CMD, 0, block, 0, 0,
+				     Flags);
+}
+
+u16 GLOB_LLD_Write_Page_Main(u8 *data, u32 block, u16 page,
+				u16 count, u8 TagCount)
+{
+	return CDMA_Data_CMD(TagCount, WRITE_MAIN_CMD, data, block, page, count,
+			     0);
+}
+
+u16 GLOB_LLD_Read_Page_Main(u8 *data, u32 block, u16 page,
+			       u16 count, u8 TagCount, u16 Flags)
+{
+	if (Flags & LLD_CMD_FLAG_MODE_POLL) {
+		return NAND_Read_Page_Main(data, block, page, count);
+	} else
+		return CDMA_Data_CMD(TagCount, READ_MAIN_CMD, data, block, page,
+				     count, Flags);
+}
+
+u16 GLOB_LLD_Write_Page_Spare(u8 *data, u32 block, u16 page,
+				 u16 count)
+{
+	return NAND_Write_Page_Spare(data, block, page, count);
+}
+
+u16 GLOB_LLD_Read_Page_Spare(u8 *data, u32 block, u16 page,
+				u16 count)
+{
+	return NAND_Read_Page_Spare(data, block, page, count);
+}
+
+u16 GLOB_LLD_Write_Page_Main_Spare(u8 *data, u32 block, u16 page,
+				      u16 count, u8 TagCount, u16 Flags)
+{
+	return CDMA_Data_CMD(TagCount, WRITE_MAIN_SPARE_CMD, data, block, page,
+			     count, Flags);
+}
+
+u16 GLOB_LLD_Read_Page_Main_Spare(u8 *data, u32 block, u16 page,
+				     u16 count, u8 TagCount)
+{
+	return CDMA_Data_CMD(TagCount, READ_MAIN_SPARE_CMD, data, block, page,
+			     count, LLD_CMD_FLAG_MODE_CDMA);
+}
+
+u16 GLOB_LLD_Get_Bad_Block(u32 block)
+{
+	return NAND_Get_Bad_Block(block);
+}
+
+u32 GLOB_LLD_Memory_Pool_Size(void)
+{
+	return CDMA_Memory_Pool_Size();
+}
+
+int GLOB_LLD_Mem_Config(u8 *pMem)
+{
+	return CDMA_Mem_Config(pMem);
+}
+#endif /* FLASH_CDMA */
+
+#endif /* !ELDORA */
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+
+/* end of LLD.c */
diff --git a/drivers/staging/mrst_nand/lld.h b/drivers/staging/mrst_nand/lld.h
new file mode 100644
index 0000000..5889c40
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld.h
@@ -0,0 +1,123 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+
+
+#ifndef _LLD_
+#define _LLD_
+
+#include "ffsport.h"
+#include "spectraswconfig.h"
+#include "flash.h"
+
+#define GOOD_BLOCK 0
+#define DEFECTIVE_BLOCK 1
+#define READ_ERROR 2
+
+#define CLK_X  5
+#define CLK_MULTI 4
+
+/* Max main & spare sizes supported in LLD */
+#define MAX_PAGE_MAIN_AREA          8192
+#define MAX_PAGE_SPARE_AREA          512
+#define MAX_PAGE_MAINSPARE_AREA     8704
+
+/* for GLOB_LLD_Enable_Disable_Interrupts */
+#define ENABLE_INTERRUPTS           0x0001
+#define DISABLE_INTERRUPTS          0x0000
+
+
+
+/* Typedefs */
+
+/*  prototypes: API for LLD */
+/* Currently, Write_Page_Main, MemCopy and Read_Page_Main_Spare do not
+ * take a flags parameter because they were not implemented prior to this.
+ * Flags are not being added now, to keep changes to a minimum; at present
+ * a flag is only required for Write_Page_Main_Spare.
+ * Later on, these NEED to be changed.
+ */
+extern void GLOB_LLD_ECC_Control(int enable);
+extern u16   GLOB_LLD_Flash_Release(void);
+extern u16   GLOB_LLD_Flash_Reset(void);
+extern u16   GLOB_LLD_Read_Device_ID(void);
+#if CMD_DMA
+extern u16   GLOB_LLD_Flash_Init(u16 Flags);
+extern u16   GLOB_LLD_Execute_CMDs(u16 count);
+extern u16   GLOB_LLD_Erase_Block(u32 block, u8 TagCount,
+					u16 Flags);
+extern u16   GLOB_LLD_Write_Page_Main(u8 *write_data, u32 block,
+					u16 Page, u16
+					PageCount, u8 CommandCount);
+extern u16   GLOB_LLD_Read_Page_Main(u8 *read_data, u32 block,
+					u16 Page, u16
+				PageCount, u8 CommandCount, u16 Flags);
+extern u16  GLOB_LLD_MemCopy_CMD(u8 tag, u8 *dest, u8 *src,
+				u16 ByteCount, u16 flag);
+#else
+extern u16   GLOB_LLD_Flash_Init(void);
+extern u16   GLOB_LLD_Erase_Block(u32 block_add);
+extern u16   GLOB_LLD_Write_Page_Main(u8 *write_data, u32 block,
+					u16 Page, u16 PageCount);
+extern u16   GLOB_LLD_Read_Page_Main(u8 *read_data, u32 block,
+					u16 Page, u16 PageCount);
+extern u16 GLOB_LLD_Read_Page_Main_Polling(u8 *read_data,
+			u32 block, u16 page, u16 page_count);
+#endif
+
+extern int GLOB_LLD_is_cdma_int(void);
+extern u16   GLOB_LLD_Event_Status(void);
+extern void     GLOB_LLD_Enable_Disable_Interrupts(u16 INT_ENABLE);
+
+extern u16   GLOB_LLD_UnlockArrayAll(void);
+extern u16   GLOB_LLD_Read_Page_Spare(u8 *read_data, u32 block,
+					u16 Page, u16 PageCount);
+extern u16   GLOB_LLD_Write_Page_Spare(u8 *write_data, u32 block,
+					u16 Page, u16 PageCount);
+extern u16   GLOB_LLD_Get_Bad_Block(u32 block);
+#if CMD_DMA
+extern u16   GLOB_LLD_Write_Page_Main_Spare(u8 *write_data,
+					u32 block, u16 Page, u16
+				PageCount, u8 CommandCount, u16 Flags);
+extern u16   GLOB_LLD_Read_Page_Main_Spare(u8 *read_data,
+					u32 block, u16 Page, u16
+					PageCount, u8 CommandCount);
+#else
+extern u16   GLOB_LLD_Write_Page_Main_Spare(u8 *write_data,
+			u32 block, u16 Page, u16 PageCount);
+extern u16   GLOB_LLD_Read_Page_Main_Spare(u8 *read_data,
+					u32 block, u16 Page, u16
+					PageCount);
+#endif /* CMD_DMA */
+
+extern u32  GLOB_LLD_Memory_Pool_Size(void);
+extern int GLOB_LLD_Mem_Config(u8 *pMem);
+
+#if CMD_DMA
+#define LLD_CMD_FLAG_ORDER_BEFORE_REST		(0x1)
+#define LLD_CMD_FLAG_MODE_POLL			(0x4)
+#define LLD_CMD_FLAG_MODE_CDMA			(0x8)
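+/*
+ * Example: GLOB_LLD_Erase_Block(blk, tag, LLD_CMD_FLAG_MODE_POLL) performs
+ * the erase synchronously, while LLD_CMD_FLAG_MODE_CDMA queues it as a
+ * CDMA descriptor. LLD_CMD_FLAG_ORDER_BEFORE_REST, by its name, orders a
+ * command ahead of the rest of the chain; it is used when writing the
+ * in-progress block table page.
+ */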
+#endif /* CMD_DMA */
+
+
+#endif /*_LLD_ */
+
+
diff --git a/drivers/staging/mrst_nand/lld_cdma.c b/drivers/staging/mrst_nand/lld_cdma.c
new file mode 100644
index 0000000..19650c2
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld_cdma.c
@@ -0,0 +1,2736 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+
+/* note: compile with LLD_NAND.C as it contains some common functions */
+#include "spectraswconfig.h"
+#include "lld.h"
+#include "lld_nand.h"
+#include "lld_cdma.h"
+#include "lld_emu.h"
+#include "flash.h"
+#include "NAND_Regs_4.h"
+
+#define DBG_SNC_PRINTEVERY    1000000
+
+#if CMD_DMA
+#define MODE_02             (0x2 << 26)
+#define MAX_DESC_PER_CHANNEL     (MAX_DESCS + 2)
+
+#if FLASH_CDMA
+static void ResetSyncModule(void);
+#endif
+
+/* Status of each command after it is sent; global so the FTL can check
+ * the final cmd results */
+struct pending_cmd    PendingCMD[MAX_DESCS + MAX_CHANS];
+
+struct cdma_descriptor (*cdma_desc)[MAX_DESC_PER_CHANNEL];
+struct memcpy_descriptor (*memcp_desc)[MAX_DESCS];
+
+u16 dcount[MAX_CHANS];
+
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_Data_Cmd
+* Inputs:       tag (0-255)
+*               cmd code (aligned for hw)
+*               data: pointer to source or destination
+*               block: block address
+*               page: page address
+*               count: num pages to transfer
+* Outputs:      PASS
+* Description:  This function takes the parameters and puts them
+*                   into the "pending commands" array.
+*               It does not parse or validate the parameters.
+*               The array index is same as the tag.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
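+/*
+ * Example: the first MAX_CHANS entries of PendingCMD[] are reserved, so an
+ * FTL tag t lands at PendingCMD[t + MAX_CHANS] with .Tag holding the
+ * original t; 0xB0B below is an initial placeholder status.
+ */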
+u16 CDMA_Data_CMD(u8 tag, u8 CMD, u8 *data,
+	u32 block, u16 page, u16 count, u16 flags)
+{
+	int i;
+
+	debug_boundary_error(block, DeviceInfo.wTotalBlocks, tag);
+	debug_boundary_error(count, DeviceInfo.wPagesPerBlock+1, tag);
+	debug_boundary_error(tag, 252, 0);
+
+	tag += MAX_CHANS;
+	PendingCMD[tag].Tag = tag - MAX_CHANS;
+	PendingCMD[tag].CMD = CMD;
+	PendingCMD[tag].DataAddr = data;
+	PendingCMD[tag].Block = block;
+	PendingCMD[tag].Page = page;
+	PendingCMD[tag].PageCount = count;
+	PendingCMD[tag].DataDestAddr = 0;
+	PendingCMD[tag].DataSrcAddr = 0;
+	PendingCMD[tag].MemCopyByteCnt = 0;
+	PendingCMD[tag].Flags = flags;
+	PendingCMD[tag].SBDCmdIndex = g_SBDCmdIndex;
+
+	for (i = 0; i <= MAX_CHANS; i++)
+		PendingCMD[tag].ChanSync[i] = 0;
+
+	PendingCMD[tag].Status = 0xB0B;
+
+#if FLASH_CDMA
+	switch (CMD) {
+	case WRITE_MAIN_SPARE_CMD:
+		NAND_Conv_Main_Spare_Data_Log2Phy_Format(data, count);
+		break;
+	case WRITE_SPARE_CMD:
+		NAND_Conv_Spare_Data_Log2Phy_Format(data);
+		break;
+	default:
+		break;
+	}
+#endif
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_MemCopy_CMD
+* Inputs:       tag (0-255)
+*               dest: pointer to destination
+*               src:  pointer to source
+*               count: num bytes to transfer
+* Outputs:      PASS
+* Description:  This function takes the parameters and puts them
+*                   into the "pending commands" array.
+*               It does not parse or validate the parameters.
+*               The array index is same as the tag.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 CDMA_MemCopy_CMD(u8 tag, u8 *dest, u8 *src,
+			u16 ByteCount, u16 flags)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"CDMA MemC Command called tag=%u\n", tag);
+
+	debug_boundary_error(tag, 252, 0);
+
+	tag += MAX_CHANS;
+	PendingCMD[tag].Tag = tag - MAX_CHANS;
+	PendingCMD[tag].CMD = MEMCOPY_CMD;
+	PendingCMD[tag].DataAddr = 0;
+	PendingCMD[tag].Block = 0;
+	PendingCMD[tag].Page = 0;
+	PendingCMD[tag].PageCount = 0;
+	PendingCMD[tag].DataDestAddr = dest;
+	PendingCMD[tag].DataSrcAddr = src;
+	PendingCMD[tag].MemCopyByteCnt = ByteCount;
+	PendingCMD[tag].Flags = flags;
+	PendingCMD[tag].SBDCmdIndex = g_SBDCmdIndex;
+
+	for (i = 0; i <= MAX_CHANS; i++)
+		PendingCMD[tag].ChanSync[i] = 0;
+
+	PendingCMD[tag].Status = 0xB0B;
+
+	return PASS;
+}
+
+
+#if DEBUG_SYNC || VERBOSE
+/* Built under DEBUG_SYNC as well, because CheckSyncPoints() also uses it */
+static void pcmd_per_ch(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+		u16 tag_count, int *chIndexes)
+{
+	u32 i, j, chnl;
+
+	for (i = 0; i < MAX_CHANS; i++)
+		chIndexes[i] = 0;
+
+	for (i = 0; i < (tag_count + MAX_CHANS); i++) {
+		chnl = PendingCMD[i].Block /
+			(DeviceInfo.wTotalBlocks / totalUsedBanks);
+		debug_boundary_error(chnl, totalUsedBanks, i);
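+		/* Blocks are striped evenly across the used banks: with,
+		 * say, 2048 total blocks on 4 banks, blocks 0-511 map to
+		 * channel 0, 512-1023 to channel 1, and so on. */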
+
+		p[chnl][chIndexes[chnl]].Tag = PendingCMD[i].Tag;
+		p[chnl][chIndexes[chnl]].CMD = PendingCMD[i].CMD;
+		p[chnl][chIndexes[chnl]].DataAddr = PendingCMD[i].DataAddr;
+		p[chnl][chIndexes[chnl]].Block = PendingCMD[i].Block;
+		p[chnl][chIndexes[chnl]].Page = PendingCMD[i].Page;
+		p[chnl][chIndexes[chnl]].DataDestAddr =
+				PendingCMD[i].DataDestAddr;
+		p[chnl][chIndexes[chnl]].PageCount = PendingCMD[i].PageCount;
+		p[chnl][chIndexes[chnl]].DataSrcAddr =
+				PendingCMD[i].DataSrcAddr;
+		p[chnl][chIndexes[chnl]].MemCopyByteCnt =
+				PendingCMD[i].MemCopyByteCnt;
+		p[chnl][chIndexes[chnl]].ChanSync[0] =
+				PendingCMD[i].ChanSync[0];
+		p[chnl][chIndexes[chnl]].Status = PendingCMD[i].Status;
+		chIndexes[chnl]++;
+
+		for (j = 1; (j <= MAX_CHANS) && (PendingCMD[i].ChanSync[j]);
+								j++) {
+			p[chnl][chIndexes[chnl]].Tag = 0xFF;
+			p[chnl][chIndexes[chnl]].CMD = DUMMY_CMD;
+			p[chnl][chIndexes[chnl]].Block = PendingCMD[i].Block;
+			p[chnl][chIndexes[chnl]].ChanSync[0] =
+					PendingCMD[i].ChanSync[j];
+			chIndexes[chnl]++;
+		}
+	}
+}
+#endif
+
+#if VERBOSE
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     PrintPendingCMDs
+* Inputs:       none
+* Outputs:      none
+* Description:  prints the PendingCMDs array
+*               number of elements to print needs manual control
+*               to keep it small
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+void PrintPendingCMDs(u16 tag_count)
+{
+	u16  i;
+	u16  not_print;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Printing PendingCMDs Table\n");
+	nand_dbg_print(NAND_DBG_DEBUG, "-------------------------------"
+		"------------------------------------------|\n");
+	nand_dbg_print(NAND_DBG_DEBUG, "           | Cache  |     Flash      "
+		"|        MemCopy       |        |    |\n");
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"Tag Command DataAddr Block Page PgCnt DestAddr SrcAddr  "
+		"BCnt ChanSync Stat|\n");
+
+	for (i = 0; i < (tag_count + MAX_CHANS); i++) {
+		not_print = 0;
+
+		switch (PendingCMD[i].CMD) {
+		case ERASE_CMD:
+			nand_dbg_print(NAND_DBG_DEBUG, "%03d",
+					PendingCMD[i].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG, " ERASE  ");
+			break;
+		case WRITE_MAIN_CMD:
+			nand_dbg_print(NAND_DBG_DEBUG, "%03d",
+					PendingCMD[i].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG, " WRITE  ");
+			break;
+		case WRITE_MAIN_SPARE_CMD:
+			nand_dbg_print(NAND_DBG_DEBUG, "%03d",
+					PendingCMD[i].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG,
+					" WRITE MAIN+SPARE  ");
+			break;
+		case READ_MAIN_SPARE_CMD:
+			nand_dbg_print(NAND_DBG_DEBUG, "%03d",
+					PendingCMD[i].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG,
+					" READ MAIN+SPARE  ");
+			break;
+		case READ_MAIN_CMD:
+			nand_dbg_print(NAND_DBG_DEBUG, "%03d",
+					PendingCMD[i].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG, " READ   ");
+			break;
+		case MEMCOPY_CMD:
+			nand_dbg_print(NAND_DBG_DEBUG, "%03d",
+					PendingCMD[i].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG, " MemCpy ");
+			break;
+		case DUMMY_CMD:
+			nand_dbg_print(NAND_DBG_DEBUG, "%03d",
+					PendingCMD[i].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG, "  DUMMY ");
+			break;
+		default:
+			if (i)
+				not_print = 1;
+		}
+
+		if (!not_print) {
+			nand_dbg_print(NAND_DBG_DEBUG, " %p",
+					PendingCMD[i].DataAddr);
+			nand_dbg_print(NAND_DBG_DEBUG, "  %04X",
+					(unsigned int)PendingCMD[i].Block);
+			nand_dbg_print(NAND_DBG_DEBUG, " %04X",
+					PendingCMD[i].Page);
+			nand_dbg_print(NAND_DBG_DEBUG, " %04X",
+					PendingCMD[i].PageCount);
+			nand_dbg_print(NAND_DBG_DEBUG, "  %p",
+					PendingCMD[i].DataDestAddr);
+			nand_dbg_print(NAND_DBG_DEBUG, " %p",
+					PendingCMD[i].DataSrcAddr);
+			nand_dbg_print(NAND_DBG_DEBUG, " %04X",
+					PendingCMD[i].MemCopyByteCnt);
+			nand_dbg_print(NAND_DBG_DEBUG, " %04X",
+					PendingCMD[i].ChanSync[0]);
+			nand_dbg_print(NAND_DBG_DEBUG, " %04X",
+					PendingCMD[i].Status);
+			nand_dbg_print(NAND_DBG_DEBUG, "|\n");
+		}
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, " ----------------------------"
+		"---------------------------------------------|\n");
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     PrintPendingCMDsPerChannel
+* Inputs:       none
+* Outputs:      none
+* Description:  prints the PendingCMDs array on a per channel basis
+*               number of elements to print needs manual control
+*               to keep it small
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+void PrintPendingCMDsPerChannel(u16 tag_count)
+{
+	u16 i, chnl;
+	u16 not_print = 0;
+	struct pending_cmd p_cmd_ch[MAX_CHANS][MAX_CHANS + MAX_DESCS];
+	int chIndexes[MAX_CHANS], maxChIndexes;
+
+	pcmd_per_ch(p_cmd_ch, tag_count, chIndexes);
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"Printing PendingCMDsPerChannel Table\n");
+
+	for (i = 0; i < MAX_CHANS; i++)
+		nand_dbg_print(NAND_DBG_DEBUG,
+		" -------------------------------------|");
+	nand_dbg_print(NAND_DBG_DEBUG, "\n");
+
+	for (i = 0; i < MAX_CHANS; i++)
+		nand_dbg_print(NAND_DBG_DEBUG,
+			" Ch%1d                                  |", i);
+	nand_dbg_print(NAND_DBG_DEBUG, "\n");
+
+	maxChIndexes = 0;
+	for (i = 0; i < MAX_CHANS; i++) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Tag Command  FromAddr   DestAddr  Sync|");
+		if (maxChIndexes < chIndexes[i])
+			maxChIndexes = chIndexes[i];
+	}
+	nand_dbg_print(NAND_DBG_DEBUG, "\n");
+
+	for (i = 0; i <= maxChIndexes; i++) {
+		for (chnl = 0; chnl < MAX_CHANS; chnl++) {
+			not_print = 0;
+			if (chIndexes[chnl] > i) {
+				switch (p_cmd_ch[chnl][i].CMD) {
+				case ERASE_CMD:
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"%03d",
+						p_cmd_ch[chnl][i].Tag);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  ERASE ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"         ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"   %04X:0000",
+						(unsigned int)
+						p_cmd_ch[chnl][i].Block);
+					break;
+				case WRITE_MAIN_CMD:
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"%03d",
+						p_cmd_ch[chnl][i].Tag);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  WR_MN ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  %p",
+						p_cmd_ch[chnl][i].DataAddr);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  %04X",
+						(unsigned int)
+						p_cmd_ch[chnl][i].Block);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						":%04X",
+						p_cmd_ch[chnl][i].Page);
+					break;
+				case WRITE_MAIN_SPARE_CMD:
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"%03d",
+						p_cmd_ch[chnl][i].Tag);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						" WR_M+S ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  %p",
+						p_cmd_ch[chnl][i].DataAddr);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  %04X",
+						(unsigned int)
+						p_cmd_ch[chnl][i].Block);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						":%04X",
+						p_cmd_ch[chnl][i].Page);
+					break;
+				case READ_MAIN_SPARE_CMD:
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"%03d",
+						p_cmd_ch[chnl][i].Tag);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						" RD_M+S ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  %04X",
+						(unsigned int)
+						p_cmd_ch[chnl][i].Block);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						":%04X",
+						p_cmd_ch[chnl][i].Page);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"  %p",
+						p_cmd_ch[chnl][i].DataAddr);
+					break;
+				case READ_MAIN_CMD:
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"%03d",
+						p_cmd_ch[chnl][i].Tag);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"   READ ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+						" %04X",
+						(unsigned int)
+						p_cmd_ch[chnl][i].Block);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						":%04X",
+						p_cmd_ch[chnl][i].Page);
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"   %p",
+						p_cmd_ch[chnl][i].DataAddr);
+					break;
+				case MEMCOPY_CMD:
+					nand_dbg_print(NAND_DBG_DEBUG,
+					"%03d",
+					p_cmd_ch[chnl][i].Tag);
+					nand_dbg_print(NAND_DBG_DEBUG,
+					" MemCpy ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+					"  %p",
+					p_cmd_ch[chnl][i].DataSrcAddr);
+					nand_dbg_print(NAND_DBG_DEBUG,
+					"  %p",
+					p_cmd_ch[chnl][i].DataDestAddr);
+					break;
+				case DUMMY_CMD:
+					nand_dbg_print(NAND_DBG_DEBUG,
+					"%03d", p_cmd_ch[chnl][i].Tag);
+					nand_dbg_print(NAND_DBG_DEBUG,
+					"  DUMMY ");
+					nand_dbg_print(NAND_DBG_DEBUG,
+					"            %04X:0000",
+					(unsigned int)
+					p_cmd_ch[chnl][i].Block);
+					break;
+				default:
+					not_print = 1;
+				}
+			} else {
+				not_print = 1;
+			}
+
+			if (!not_print)
+				nand_dbg_print(NAND_DBG_DEBUG,
+				"  %04X|",
+				p_cmd_ch[chnl][i].ChanSync[0]);
+			else
+				nand_dbg_print(NAND_DBG_DEBUG,
+				"                                      |");
+
+			if (chnl == MAX_CHANS - 1)
+				nand_dbg_print(NAND_DBG_DEBUG, "\n");
+		}
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, " ----------------------------"
+		"---------------------------------------------|\n");
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     PrintCDMA_Descriptors
+* Inputs:       none
+* Outputs:      none
+* Description:  prints the CDMA_Descriptors array
+*               number of elements to print needs manual control
+*               to keep it small
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+void PrintCDMA_Descriptors(void)
+{
+	u16 i;
+	struct cdma_descriptor *pch[MAX_CHANS];
+	struct cdma_descriptor *pchTotal = NULL;
+	struct memcpy_descriptor *mcpyPtr;
+
+	char str[MAX_CHANS * 50 + 2];
+	char *strp;
+
+	for (i = 0; i < MAX_CHANS; i++) {
+		pch[i] = &(cdma_desc[i][0]);
+		pchTotal += (u32)pch[i];
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		" Printing CDMA_Descriptors Table \n");
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"----------------------------------------------------"
+		"----------------------------------------------------"
+		"----------------------------------------------------"
+		"-----------------\n");
+	nand_dbg_print(NAND_DBG_DEBUG,
+		" CMD | FromAddr |   ToAddr | Siz | Channel | CMD | "
+		"FromAddr |   ToAddr | Siz | Channel | CMD | FromAddr |   "
+		"ToAddr | Siz | Channel | CMD | FromAddr |   ToAddr "
+		"| Siz | Channel\n");
+
+	while (pchTotal) {
+		pchTotal = NULL;
+		for (i = 0; i < MAX_CHANS; i++) {
+			strp = &str[i * (5 + 22 + 6 + 11)];
+			if (pch[i]) {
+				switch ((pch[i]->CommandType) >> 8) {
+				case 0x21:
+					sprintf(strp, " FWr ");
+					strp += 5;
+					sprintf(strp, " 0x%04x%04x",
+						(unsigned)pch[i]->MemAddrHi,
+						(u16)pch[i]->MemAddrLo);
+					strp += 11;
+					sprintf(strp, " 0x%04x%04x",
+						(unsigned)
+						pch[i]->FlashPointerHi,
+						(u16)
+						pch[i]->FlashPointerLo);
+					strp += 11;
+					break;
+				case 0x20:
+					if ((pch[i]->CommandFlags >> 10)) {
+						sprintf(strp, " Mcp ");
+						strp += 5;
+						mcpyPtr =
+						(struct memcpy_descriptor *)
+					((pch[i]->MemCopyPointerHi << 16) |
+					pch[i]->MemCopyPointerLo);
+						sprintf(strp, " 0x%04x%04x",
+						(unsigned)mcpyPtr->SrcAddrHi,
+						(u16)mcpyPtr->SrcAddrLo);
+						strp += 11;
+						sprintf(strp, " 0x%04x%04x",
+						(unsigned)mcpyPtr->DestAddrHi,
+						(u16)mcpyPtr->DestAddrLo);
+						strp += 11;
+					} else {
+						sprintf(strp, " FRd ");
+						strp += 5;
+						sprintf(strp, " 0x%04x%04x",
+						(unsigned)
+						pch[i]->FlashPointerHi,
+						(u16)
+						pch[i]->FlashPointerLo);
+						strp += 11;
+						sprintf(strp, " 0x%04x%04x",
+						(unsigned)pch[i]->MemAddrHi,
+						(u16)pch[i]->MemAddrLo);
+						strp += 11;
+					}
+					break;
+				default:
+					if (pch[i]->CommandType == 1) {
+						sprintf(strp, " Ers ");
+						strp += 5;
+					} else {
+						sprintf(strp, " INV ");
+						strp += 5;
+					}
+					sprintf(strp,
+					"                          ");
+					strp += 22;
+					break;
+				}
+
+				sprintf(strp, "  %3d ",
+					(int)(pch[i]->CommandType & 0xFFF));
+				strp += 6;
+				sprintf(strp, "  0x%04x ||",
+					(unsigned)pch[i]->Channel);
+				strp += 11;
+
+				pch[i] = (struct cdma_descriptor *)
+					((pch[i]->NxtPointerHi << 16) |
+					pch[i]->NxtPointerLo);
+				pchTotal += (u32)pch[i];
+			} else {
+				sprintf(strp,
+				"                                       |");
+				strp += 44;
+			}
+		}
+
+		sprintf(strp, "\n");
+		nand_dbg_print(NAND_DBG_DEBUG, "%s", str);
+	}
+	nand_dbg_print(NAND_DBG_DEBUG,
+		" ---------------------------------------------------"
+		"----------------------|\n");
+}
+#endif
+
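+/*
+ * Bus address of the descriptor that follows (c, d) in channel c's chain.
+ * Descriptors are laid out as cdma_desc[channel][desc], so the next one
+ * sits at base + sizeof(struct cdma_descriptor) *
+ * (c * MAX_DESC_PER_CHANNEL + d + 1), translated via GLOB_MEMMAP_TOBUS.
+ */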
+static u32 calc_next_desc_ptr(u16 c, u16 d)
+{
+	u32 offset, addr;
+
+	offset = sizeof(struct cdma_descriptor) *
+		(c * MAX_DESC_PER_CHANNEL + d + 1);
+	addr = (unsigned long)cdma_desc + offset;
+
+	return (unsigned long)GLOB_MEMMAP_TOBUS((u32 *)addr);
+}
+
+static u32 calc_desc_ptr(u16 c)
+{
+	u32 offset, addr;
+
+	offset = sizeof(struct cdma_descriptor) * c * MAX_DESC_PER_CHANNEL;
+	addr = (u32)GLOB_MEMMAP_TOBUS((u32 *)cdma_desc) + offset;
+
+	return addr;
+}
+
+/* Reset cdma_desc d in channel c to 0 */
+static void reset_cdma_desc(u16 c, u16 d)
+{
+	cdma_desc[c][d].NxtPointerHi = 0;
+	cdma_desc[c][d].NxtPointerLo = 0;
+	cdma_desc[c][d].FlashPointerHi = 0;
+	cdma_desc[c][d].FlashPointerLo = 0;
+	cdma_desc[c][d].CommandType = 0;
+	cdma_desc[c][d].MemAddrHi = 0;
+	cdma_desc[c][d].MemAddrLo = 0;
+	cdma_desc[c][d].CommandFlags = 0;
+	cdma_desc[c][d].Channel = 0;
+	cdma_desc[c][d].Status = 0;
+	cdma_desc[c][d].MemCopyPointerHi = 0;
+	cdma_desc[c][d].MemCopyPointerLo = 0;
+	cdma_desc[c][d].Tag = 0;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_AddDummyDesc
+* Inputs:       Channel number
+* Outputs:      None
+* Description:  This function adds a dummy descriptor at the descriptor
+*               location (from dcount structure) in the given channel.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static void     CDMA_AddDummyDesc(u16 channel)
+{
+	u16 c, d;
+	u32 *ptr;
+	u32 cont;
+	unsigned long next_ptr;
+
+	c = channel;
+	d = dcount[c];
+
+	debug_boundary_error(d, MAX_DESC_PER_CHANNEL, 0);
+
+	reset_cdma_desc(c, d);
+
+	next_ptr = calc_next_desc_ptr(c, d);
+	cdma_desc[c][d].NxtPointerHi = next_ptr >> 16;
+	cdma_desc[c][d].NxtPointerLo = next_ptr;
+
+	ptr = (u32 *)(u32)(MODE_10 | (c << 24));
+	cdma_desc[c][d].FlashPointerHi = (u32)((u32)ptr >> 16);
+	cdma_desc[c][d].FlashPointerLo = (u32)ptr;
+
+	cdma_desc[c][d].CommandType = 0x42;
+
+	cont = 1;
+	cdma_desc[c][d].CommandFlags = (0 << 10) | (cont << 9) |
+						(0 << 8) | 0x40;
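+	/* CommandFlags bits as used in this file: bit 10 marks a memcopy
+	 * descriptor (tested via CommandFlags >> 10 in the print code),
+	 * bit 9 continues the chain, and bit 8 raises an interrupt on
+	 * completion (set only by the end-of-chain dummy descriptor). */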
+
+	cdma_desc[c][d].Status = 0;
+	cdma_desc[c][d].Tag  = 0xFF;
+}
+
+
+#if FLASH_ESL
+#else
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_AddDummyDescAtEnd
+* Inputs:       Channel number
+* Outputs:      None
+* Description:  This function adds a dummy descriptor at the end of the
+*               descriptor chain for the given channel.
+*               The purpose of these descriptors is to get a single
+*               interrupt on cmd dma chain completion using sync.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static void CDMA_AddDummyDescAtEnd(u16 channel)
+{
+	u16 c, d;
+	u32 *ptr;
+	u32 cont;
+
+	c = channel;
+	d = dcount[c];
+	debug_boundary_error(d, MAX_DESC_PER_CHANNEL, 0);
+
+	reset_cdma_desc(c, d);
+
+	ptr = (u32 *)(u32)(MODE_10 | (c << 24));
+	cdma_desc[c][d].FlashPointerHi = (u32)((u32)ptr >> 16);
+	cdma_desc[c][d].FlashPointerLo = (u32)ptr;
+
+	cdma_desc[c][d].CommandType = 0xFFFF;
+
+	cont = 0;
+	cdma_desc[c][d].CommandFlags = (0 << 10) | (cont << 9) |
+						(1 << 8) | 0x40;
+
+	cdma_desc[c][d].Channel = ((1 << 15) | (1 << 14) |
+				(c << CHANNEL_ID_OFFSET) |
+				((GLOB_valid_banks[3] << 7) |
+				(GLOB_valid_banks[2] << 6) |
+				(GLOB_valid_banks[1] << 5) |
+				(GLOB_valid_banks[0] << 4)) | 0);
+
+	cdma_desc[c][d].Status = 0;
+	cdma_desc[c][d].Tag = 0xFF;
+}
+
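+/*
+ * Worst-case pool size: the +6 covers the two ALIGN_DWORD_FWD adjustments
+ * in CDMA_Mem_Config() below, each of which can skip up to 3 bytes.
+ */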
+u32 CDMA_Memory_Pool_Size(void)
+{
+	return (sizeof(struct cdma_descriptor) * MAX_CHANS *
+		MAX_DESC_PER_CHANNEL) +
+		(sizeof(struct memcpy_descriptor) * MAX_CHANS *
+		MAX_DESCS) + 6;
+}
+
+int CDMA_Mem_Config(u8 *pMem)
+{
+	ALIGN_DWORD_FWD(pMem);
+	cdma_desc = (struct cdma_descriptor (*)[MAX_DESC_PER_CHANNEL])pMem;
+	pMem += (sizeof(struct cdma_descriptor) * MAX_CHANS *
+		MAX_DESC_PER_CHANNEL);
+	ALIGN_DWORD_FWD(pMem);
+	memcp_desc = (struct memcpy_descriptor (*)[MAX_DESCS])pMem;
+
+	return PASS;
+}
+
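+/*
+ * Illustrative usage (editor's sketch; the actual caller lives
+ * elsewhere in the driver): query the pool size, allocate at least
+ * that much DMA-able memory, then hand it back so the descriptor
+ * tables can be carved out of it:
+ *
+ *	u8 *pool = kmalloc(CDMA_Memory_Pool_Size(), GFP_KERNEL);
+ *	if (pool)
+ *		CDMA_Mem_Config(pool);
+ *
+ * kmalloc() is only an example allocator; the extra 6 bytes in the
+ * pool size leave room for the ALIGN_DWORD_FWD adjustments.
+ */
+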
+#if FLASH_CDMA
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_Flash_Init
+* Inputs:       none
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:  This should be called at power up.
+*               It disables interrupts and clears status bits
+*               issues flash reset command
+*               configures the controller registers
+*               It sets the interrupt mask and enables interrupts
+*               It pre-builds special descriptors
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 CDMA_Flash_Init(void)
+{
+	u16 i, j;
+	u16 int_en_mask;
+	u16 cdma_int_en_mask;
+
+	NAND_Flash_Reset();
+
+	/* Set the global Enable masks for only those interrupts
+	 * that are supported */
+	cdma_int_en_mask = (DMA_INTR__DESC_COMP_CHANNEL0 |
+			DMA_INTR__DESC_COMP_CHANNEL1 |
+			DMA_INTR__DESC_COMP_CHANNEL2 |
+			DMA_INTR__DESC_COMP_CHANNEL3 |
+			DMA_INTR__MEMCOPY_DESC_COMP);
+
+	int_en_mask = (INTR_STATUS0__ECC_ERR |
+		INTR_STATUS0__PROGRAM_FAIL |
+		INTR_STATUS0__ERASE_FAIL);
+
+	/* Disable all interrupts */
+	iowrite32(0, FlashReg + GLOBAL_INT_ENABLE);
+	iowrite32(0, FlashReg + INTR_EN0);
+	iowrite32(0, FlashReg + INTR_EN1);
+	iowrite32(0, FlashReg + INTR_EN2);
+	iowrite32(0, FlashReg + INTR_EN3);
+
+	/* Clear all status bits */
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS0);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS1);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS2);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS3);
+
+	iowrite32(0, FlashReg + DMA_INTR_EN);
+	iowrite32(0xFFFF, FlashReg + DMA_INTR);
+
+	iowrite32(cdma_int_en_mask, FlashReg + DMA_INTR_EN);
+
+	iowrite32(int_en_mask, FlashReg + INTR_EN0);
+	iowrite32(int_en_mask, FlashReg + INTR_EN1);
+	iowrite32(int_en_mask, FlashReg + INTR_EN2);
+	iowrite32(int_en_mask, FlashReg + INTR_EN3);
+
+	/* Enable global interrupt to host */
+	iowrite32(GLOBAL_INT_EN_FLAG, FlashReg + GLOBAL_INT_ENABLE);
+
+	/* clear the pending CMD array */
+	for (i = 0; i < (MAX_DESCS + MAX_CHANS); i++) {
+		PendingCMD[i].CMD = 0;
+		PendingCMD[i].Tag = 0;
+		PendingCMD[i].DataAddr = 0;
+		PendingCMD[i].Block = 0;
+		PendingCMD[i].Page = 0;
+		PendingCMD[i].PageCount = 0;
+		PendingCMD[i].DataDestAddr = 0;
+		PendingCMD[i].DataSrcAddr = 0;
+		PendingCMD[i].MemCopyByteCnt = 0;
+
+		for (j = 0; j <= MAX_CHANS; j++)
+			PendingCMD[i].ChanSync[j] = 0;
+
+		PendingCMD[i].Status = 0;
+		PendingCMD[i].SBDCmdIndex = 0;
+	}
+
+	return PASS;
+}
+
+static u16 abort_chnl_helper(u16 ch)
+{
+	u16 desc;
+
+	for (desc = 0; desc < dcount[ch]; desc++) {
+		if ((cdma_desc[ch][desc].Status & CMD_DMA_DESC_COMP) !=
+				CMD_DMA_DESC_COMP) {
+			if (cdma_desc[ch][desc].Tag != 0xFF)
+				PendingCMD[cdma_desc[ch][desc].Tag].Status =
+								CMD_PASS;
+			break;
+		} else {
+			if (cdma_desc[ch][desc].Tag != 0xFF)
+				PendingCMD[cdma_desc[ch][desc].Tag].Status =
+								CMD_PASS;
+		}
+	}
+
+	return desc;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_AbortChannels
+* Inputs:       channel with failed descriptor
+* Outputs:      PASS/FAIL status
+* Description:  This function is called to Abort all the other active channels
+*               when a channel gets an error.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16  CDMA_AbortChannels(u16 chan)
+{
+	u16  c, d;
+	u16  aborts_comp;
+	u16  DescB4Abort[MAX_CHANS];
+	u16  status = PASS;
+	u32  active_chnl = 0;
+
+	debug_boundary_error(chan, totalUsedBanks, 0);
+
+	/* If status not complete, Abort the channel */
+	for (c = 0; c < MAX_CHANS; c++) {
+		/* Initialize the descriptor to be aborted */
+		DescB4Abort[c] = 0xFF;
+		if ((c != chan) && (1 == GLOB_valid_banks[c])) {
+			d = abort_chnl_helper(c);
+			if ((ioread32(FlashReg + CHNL_ACTIVE) & (1 << c)) ==
+							(1 << c)) {
+				DescB4Abort[c] = d;
+				aborts_comp = 0;
+				iowrite32(MODE_02 | (0 << 4), FlashMem);
+				iowrite32((0xF << 4) | c, FlashMem + 0x10);
+			}
+		}
+	}
+
+	/* Check if aborts (of all active channels) are done */
+	while (1) {
+		aborts_comp = 1;
+		for (c = 0; c < MAX_CHANS; c++) {
+			if ((DescB4Abort[c] != 0xFF) && (c != chan)) {
+				if (0 == c)
+					active_chnl = CHNL_ACTIVE__CHANNEL0;
+				else if (1 == c)
+					active_chnl = CHNL_ACTIVE__CHANNEL1;
+				else if (2 == c)
+					active_chnl = CHNL_ACTIVE__CHANNEL2;
+				else if (3 == c)
+					active_chnl = CHNL_ACTIVE__CHANNEL3;
+
+				if (!(ioread32(FlashReg + CHNL_ACTIVE) &
+							active_chnl))
+					DescB4Abort[c] = 0xFF;
+				else
+					aborts_comp = 0;
+			}
+		}
+
+		if (1 == aborts_comp)
+			break;
+	}
+
+	ResetSyncModule();
+
+	return status;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_UpdateEventStatus
+* Inputs:       none
+* Outputs:      none
+* Description:  This function updates the event status of all the channels
+*               when an error condition is reported.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 CDMA_UpdateEventStatus(void)
+{
+	u16 i, j, c, d, status = PASS;
+
+	for (c = 0; c < MAX_CHANS; c++) {
+		if (!GLOB_valid_banks[c])
+			continue;
+
+		d = dcount[c];
+		debug_boundary_error(d, MAX_DESC_PER_CHANNEL, 0);
+		for (j = 0; j < d; j++) {
+			/* Check for a descriptor with a failure
+			 * (not just desc_complete) */
+			if (!(cdma_desc[c][j].Status & CMD_DMA_DESC_FAIL))
+				continue;
+
+			/* All the previous commands' statuses for this
+			 * channel must be good (no errors reported) */
+			for (i = 0; i < j; i++) {
+				if (cdma_desc[c][i].Tag != 0xFF)
+					PendingCMD[cdma_desc[c][i].Tag].Status
+							= CMD_PASS;
+			}
+
+			status = CDMA_AbortChannels(c);
+
+			return status;
+		}
+	}
+
+	return status;
+}
+#endif
+
+
+static void cdma_trans(u16 chan)
+{
+	iowrite32(MODE_10 | (chan << 24), FlashMem);
+	iowrite32((1 << 7) | chan, FlashMem + 0x10);
+
+	iowrite32(MODE_10 | (chan << 24) |
+		((0x0FFFF & ((u32)(calc_desc_ptr(chan)) >> 16)) << 8),
+		FlashMem);
+	iowrite32((1 << 7) | (1 << 4) | 0, FlashMem + 0x10);
+
+	iowrite32(MODE_10 | (chan << 24) |
+		((0x0FFFF & ((u32)(calc_desc_ptr(chan)))) << 8),
+		FlashMem);
+	iowrite32((1 << 7) | (1 << 5) | 0, FlashMem + 0x10);
+
+	iowrite32(MODE_10 | (chan << 24), FlashMem);
+	iowrite32((1 << 7) | (1 << 5) | (1 << 4) | 0, FlashMem + 0x10);
+}
+
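+/*
+ * Editor's note (inferred from the writes above): each iowrite32()
+ * pair is an indexed transaction -- the write to FlashMem selects a
+ * MODE_10 target for the channel and the write to FlashMem + 0x10
+ * supplies the data.  The four pairs appear to (1) enable the channel,
+ * (2) load the upper 16 bits of the descriptor-table bus address,
+ * (3) load the lower 16 bits, and (4) start descriptor processing.
+ */
+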
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_Execute_CMDs (for use with CMD_DMA)
+* Inputs:       tag_count:  the number of pending cmds to do
+* Outputs:      PASS/FAIL
+* Description:  Build the SDMA chain(s) by making one CMD-DMA descriptor
+*               for each pending command, start the CDMA engine, and return.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 CDMA_Execute_CMDs(u16 tag_count)
+{
+	u16 i, j;
+	u8 cont;
+	u64 flash_add;
+	u32 *ptr;
+	u32 mapped_addr;
+	u16 status = PASS;
+	u16 c, d;
+	u16 tmp_c;
+	unsigned long next_ptr;
+
+	if (tag_count >= MAX_DESCS)
+		return FAIL;
+
+	c = 0;
+	d = 0;
+
+	for (c = 0; c < MAX_CHANS; c++)
+		for (d = 0; d < MAX_DESC_PER_CHANNEL; d++)
+			reset_cdma_desc(c, d);
+
+	debug_boundary_error(totalUsedBanks - 1, MAX_CHANS, 0);
+
+	for (c = 0; c < totalUsedBanks; c++) {
+		dcount[c] = 0;
+		PendingCMD[c].CMD = DUMMY_CMD;
+		PendingCMD[c].SBDCmdIndex = 0xFF;
+		PendingCMD[c].Tag = 0xFF;
+		PendingCMD[c].Block = c * (DeviceInfo.wTotalBlocks /
+				totalUsedBanks);
+
+		for (i = 0; i <= MAX_CHANS; i++)
+			PendingCMD[c].ChanSync[i] = 0;
+	}
+
+	c = 0;
+
+	CDMA_AddSyncPoints(tag_count);
+#if DEBUG_SYNC
+	CDMA_CheckSyncPoints(tag_count);
+#endif
+
+	for (i = 0; i < (tag_count + MAX_CHANS); i++) {
+		if ((i >= totalUsedBanks) && (i < MAX_CHANS))
+			continue;
+
+		if (PendingCMD[i].Block >= DeviceInfo.wTotalBlocks) {
+			PendingCMD[i].Status = CMD_NOT_DONE;
+			continue;
+		}
+
+		c = 0;
+		tmp_c = PendingCMD[i].Block /
+			(DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+		debug_boundary_error(tmp_c, totalUsedBanks, 0);
+
+		if (0 == tmp_c) {
+			c = tmp_c;
+		} else {
+			for (j = 1; j < MAX_CHANS; j++) {
+				if (GLOB_valid_banks[j]) {
+					tmp_c--;
+					if (0 == tmp_c) {
+						c = j;
+						break;
+					}
+				}
+			}
+		}
+
+		if (GLOB_valid_banks[c] == 1) {
+			d = dcount[c];
+			dcount[c]++;
+		} else {
+			continue;
+		}
+
+		next_ptr = calc_next_desc_ptr(c, d);
+		cdma_desc[c][d].NxtPointerHi = next_ptr >> 16;
+		cdma_desc[c][d].NxtPointerLo = next_ptr;
+
+		/* Use the Block offset within a bank */
+		tmp_c = PendingCMD[i].Block /
+			(DeviceInfo.wTotalBlocks / totalUsedBanks);
+		debug_boundary_error(tmp_c, totalUsedBanks, i);
+		flash_add = (u64)(PendingCMD[i].Block - tmp_c *
+			(DeviceInfo.wTotalBlocks / totalUsedBanks)) *
+			DeviceInfo.wBlockDataSize +
+			(u64)(PendingCMD[i].Page) * DeviceInfo.wPageDataSize;
+
+#if FLASH_CDMA
+		ptr = (u32 *)(MODE_10 | (c << 24) |
+			(u32)GLOB_u64_Div(flash_add,
+				DeviceInfo.wPageDataSize));
+		cdma_desc[c][d].FlashPointerHi = (u32)ptr >> 16;
+		cdma_desc[c][d].FlashPointerLo = (u32)ptr;
+#endif
+		/* set continue flag except if last cmd-descriptor */
+		cont = 1;
+
+		if ((PendingCMD[i].CMD == WRITE_MAIN_SPARE_CMD) ||
+			(PendingCMD[i].CMD == READ_MAIN_SPARE_CMD)) {
+			/* Descriptor to set Main+Spare Access Mode */
+			cdma_desc[c][d].CommandType   = 0x43;
+			cdma_desc[c][d].CommandFlags  =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			cdma_desc[c][d].MemAddrHi = 0;
+			cdma_desc[c][d].MemAddrLo = 0;
+
+			cdma_desc[c][d].Channel = 0;
+			cdma_desc[c][d].Status = 0;
+			cdma_desc[c][d].Tag = i;
+
+			dcount[c]++;
+			d++;
+
+			reset_cdma_desc(c, d);
+
+			next_ptr = calc_next_desc_ptr(c, d);
+			cdma_desc[c][d].NxtPointerHi = next_ptr >> 16;
+			cdma_desc[c][d].NxtPointerLo = next_ptr;
+
+#if FLASH_CDMA
+			cdma_desc[c][d].FlashPointerHi = (u32)ptr >> 16;
+			cdma_desc[c][d].FlashPointerLo = (u32)ptr;
+#endif
+		}
+
+		switch (PendingCMD[i].CMD) {
+		case ERASE_CMD:
+			cdma_desc[c][d].CommandType = 1;
+			cdma_desc[c][d].CommandFlags =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			cdma_desc[c][d].MemAddrHi = 0;
+			cdma_desc[c][d].MemAddrLo = 0;
+			break;
+
+		case WRITE_MAIN_CMD:
+			cdma_desc[c][d].CommandType =
+				0x2100 | PendingCMD[i].PageCount;
+			cdma_desc[c][d].CommandFlags =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			mapped_addr = (u32)GLOB_MEMMAP_TOBUS
+				((u32 *)PendingCMD[i].DataAddr);
+			cdma_desc[c][d].MemAddrHi = mapped_addr >> 16;
+			cdma_desc[c][d].MemAddrLo = mapped_addr;
+			break;
+
+		case READ_MAIN_CMD:
+			cdma_desc[c][d].CommandType =
+				0x2000 | (PendingCMD[i].PageCount);
+			cdma_desc[c][d].CommandFlags =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			mapped_addr = (u32)GLOB_MEMMAP_TOBUS
+				((u32 *)PendingCMD[i].DataAddr);
+			cdma_desc[c][d].MemAddrHi = mapped_addr >> 16;
+			cdma_desc[c][d].MemAddrLo = mapped_addr;
+			break;
+
+		case WRITE_MAIN_SPARE_CMD:
+			cdma_desc[c][d].CommandType =
+				0x2100 | (PendingCMD[i].PageCount);
+			cdma_desc[c][d].CommandFlags =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			mapped_addr = (u32)GLOB_MEMMAP_TOBUS
+				((u32 *)PendingCMD[i].DataAddr);
+			cdma_desc[c][d].MemAddrHi = mapped_addr >> 16;
+			cdma_desc[c][d].MemAddrLo = mapped_addr;
+			break;
+
+		case READ_MAIN_SPARE_CMD:
+			cdma_desc[c][d].CommandType =
+				0x2000 | (PendingCMD[i].PageCount);
+			cdma_desc[c][d].CommandFlags =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			mapped_addr = (u32)GLOB_MEMMAP_TOBUS
+				((u32 *)PendingCMD[i].DataAddr);
+			cdma_desc[c][d].MemAddrHi = mapped_addr >> 16;
+			cdma_desc[c][d].MemAddrLo = mapped_addr;
+			break;
+
+		case MEMCOPY_CMD:
+			cdma_desc[c][d].CommandType =
+				0x2000 | (PendingCMD[i].PageCount);
+			cdma_desc[c][d].CommandFlags =
+				(1 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			mapped_addr = (unsigned int)GLOB_MEMMAP_TOBUS
+				((u32 *)&memcp_desc[c][d]);
+			cdma_desc[c][d].MemCopyPointerHi = mapped_addr >> 16;
+			cdma_desc[c][d].MemCopyPointerLo = mapped_addr;
+
+			memcp_desc[c][d].NxtPointerHi = 0;
+			memcp_desc[c][d].NxtPointerLo = 0;
+
+			mapped_addr = (u32)GLOB_MEMMAP_TOBUS
+				((u32 *)PendingCMD[i].DataSrcAddr);
+			memcp_desc[c][d].SrcAddrHi = mapped_addr >> 16;
+			memcp_desc[c][d].SrcAddrLo = mapped_addr;
+			mapped_addr = (u32)GLOB_MEMMAP_TOBUS
+				((u32 *)PendingCMD[i].DataDestAddr);
+			memcp_desc[c][d].DestAddrHi = mapped_addr >> 16;
+			memcp_desc[c][d].DestAddrLo = mapped_addr;
+
+			memcp_desc[c][d].XferSize =
+				PendingCMD[i].MemCopyByteCnt;
+			memcp_desc[c][d].MemCopyFlags =
+				(0 << 15 | 0 << 14 | 27 << 8 | 0x40);
+			memcp_desc[c][d].MemCopyStatus = 0;
+			break;
+
+		case DUMMY_CMD:
+		default:
+			cdma_desc[c][d].CommandType = 0xFFFF;
+			cdma_desc[c][d].CommandFlags =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			cdma_desc[c][d].MemAddrHi = 0;
+			cdma_desc[c][d].MemAddrLo = 0;
+			break;
+		}
+
+		cdma_desc[c][d].Channel = PendingCMD[i].ChanSync[0];
+		cdma_desc[c][d].Status = 0;
+		cdma_desc[c][d].Tag = i;
+
+		for (j = 1; j <= MAX_CHANS; j++) {
+			if (PendingCMD[i].ChanSync[j]) {
+				if (1 == GLOB_valid_banks[c]) {
+					CDMA_AddDummyDesc(c);
+					d = dcount[c]++;
+					cdma_desc[c][d].Channel =
+						PendingCMD[i].ChanSync[j];
+				}
+			}
+		}
+
+		if ((PendingCMD[i].CMD == WRITE_MAIN_SPARE_CMD) ||
+			(PendingCMD[i].CMD == READ_MAIN_SPARE_CMD)) {
+			/* Descriptor to set back Main Area Access Mode */
+			dcount[c]++;
+			d++;
+			debug_boundary_error(d, MAX_DESC_PER_CHANNEL, 0);
+			next_ptr = calc_next_desc_ptr(c, d);
+			cdma_desc[c][d].NxtPointerHi = next_ptr >> 16;
+			cdma_desc[c][d].NxtPointerLo = next_ptr;
+#if FLASH_CDMA
+			cdma_desc[c][d].FlashPointerHi = (u32)ptr >> 16;
+			cdma_desc[c][d].FlashPointerLo = (u32)ptr;
+#endif
+			cdma_desc[c][d].CommandType = 0x42;
+			cdma_desc[c][d].CommandFlags =
+				(0 << 10) | (cont << 9) | (0 << 8) | 0x40;
+			cdma_desc[c][d].MemAddrHi = 0;
+			cdma_desc[c][d].MemAddrLo = 0;
+
+			cdma_desc[c][d].Channel = PendingCMD[i].ChanSync[0];
+			cdma_desc[c][d].Status = 0;
+			cdma_desc[c][d].Tag = i;
+		}
+	}
+
+	for (c = 0; c < MAX_CHANS; c++) {
+		if (GLOB_valid_banks[c])
+			CDMA_AddDummyDescAtEnd(c);
+	}
+
+#if FLASH_CDMA
+	iowrite32(1, FlashReg + DMA_ENABLE);
+	/* Wait for DMA to be enabled before issuing the next command */
+	while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	for (c = 0; c < MAX_CHANS; c++) {
+		if (!GLOB_valid_banks[c])
+			continue;
+		cdma_trans(c);
+	}
+#endif
+
+	return status;
+}
+
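+/*
+ * Typical call flow (editor's sketch, based on the APIs declared in
+ * lld_cdma.h): the FTL queues work into PendingCMD with
+ * CDMA_Data_CMD()/CDMA_MemCopy_CMD(), then starts the engine and
+ * later reaps status from the completion interrupt:
+ *
+ *	CDMA_Data_CMD(tag, WRITE_MAIN_CMD, buf, block, page, count, flags);
+ *	CDMA_Execute_CMDs(tag_count);
+ *	...
+ *	if (is_cdma_interrupt())
+ *		event = CDMA_Event_Status();
+ */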
+
+#if FLASH_CDMA
+static void ResetSyncModule(void)
+{
+	u16 c, d;
+	u32 *ptr;
+	u32 cont;
+	unsigned long next_ptr;
+
+	/* Disable all interrupts */
+	iowrite32(0, FlashReg + GLOBAL_INT_ENABLE);
+
+	/* Clear all DMA interrupt bits before starting the chains */
+	iowrite32(ioread32(FlashReg + DMA_INTR), FlashReg + DMA_INTR);
+
+	for (c = 0; c < MAX_CHANS; c++) {
+		for (d = 0; d < MAX_SYNC_POINTS; d++) {
+			reset_cdma_desc(c, d);
+
+			next_ptr = calc_next_desc_ptr(c, d);
+			cdma_desc[c][d].NxtPointerHi = next_ptr >> 16;
+			cdma_desc[c][d].NxtPointerLo = next_ptr;
+
+			ptr = (u32 *)(u32)(MODE_10 | (c << 24));
+			cdma_desc[c][d].FlashPointerHi =
+				(u32)((u32)ptr >> 16);
+			cdma_desc[c][d].FlashPointerLo =
+				(u32)ptr;
+
+			cdma_desc[c][d].CommandType = 0xFFFF;
+
+			if (d == (MAX_SYNC_POINTS - 1)) {
+				cont = 0;
+				cdma_desc[c][d].CommandFlags = (0 << 10) |
+					(cont << 9) | (1 << 8) | 0x40;
+			} else {
+				cont = 1;
+				cdma_desc[c][d].CommandFlags = (0 << 10) |
+					(cont << 9) | (0 << 8) | 0x40;
+			}
+
+			cdma_desc[c][d].Channel = ((0 << 15) | (1 << 14) |
+				(c << CHANNEL_ID_OFFSET) |
+				(1 << (4 + c)) | d);
+
+			cdma_desc[c][d].Status = 0;
+			cdma_desc[c][d].Tag = c * MAX_SYNC_POINTS + d;
+		}
+	}
+
+	for (c = 0; c < MAX_CHANS; c++)
+		cdma_trans(c);
+
+	while ((ioread32(FlashReg + DMA_INTR) &
+		(DMA_INTR__DESC_COMP_CHANNEL0 |
+		DMA_INTR__DESC_COMP_CHANNEL1 |
+		DMA_INTR__DESC_COMP_CHANNEL2 |
+		DMA_INTR__DESC_COMP_CHANNEL3)) !=
+		(DMA_INTR__DESC_COMP_CHANNEL0 |
+		DMA_INTR__DESC_COMP_CHANNEL1 |
+		DMA_INTR__DESC_COMP_CHANNEL2 |
+		DMA_INTR__DESC_COMP_CHANNEL3))
+		;
+
+	iowrite32(ioread32(FlashReg + DMA_INTR), FlashReg + DMA_INTR);
+	iowrite32(GLOBAL_INT_EN_FLAG, FlashReg + GLOBAL_INT_ENABLE);
+}
+
+int is_cdma_interrupt(void)
+{
+	u32 ints_b0, ints_b1, ints_b2, ints_b3, ints_cdma;
+	u32 int_en_mask;
+	u32 cdma_int_en_mask;
+
+	/* Set the global Enable masks for only those interrupts
+	 * that are supported */
+	cdma_int_en_mask = (DMA_INTR__DESC_COMP_CHANNEL0 |
+			DMA_INTR__DESC_COMP_CHANNEL1 |
+			DMA_INTR__DESC_COMP_CHANNEL2 |
+			DMA_INTR__DESC_COMP_CHANNEL3 |
+			DMA_INTR__MEMCOPY_DESC_COMP);
+
+	int_en_mask = (INTR_STATUS0__ECC_ERR |
+		INTR_STATUS0__PROGRAM_FAIL |
+		INTR_STATUS0__ERASE_FAIL);
+
+	ints_b0 = ioread32(FlashReg + INTR_STATUS0) & int_en_mask;
+	ints_b1 = ioread32(FlashReg + INTR_STATUS1) & int_en_mask;
+	ints_b2 = ioread32(FlashReg + INTR_STATUS2) & int_en_mask;
+	ints_b3 = ioread32(FlashReg + INTR_STATUS3) & int_en_mask;
+	ints_cdma = ioread32(FlashReg + DMA_INTR) & cdma_int_en_mask;
+
+	if (ints_b0 || ints_b1 || ints_b2 || ints_b3 || ints_cdma) {
+		nand_dbg_print(NAND_DBG_DEBUG, "NAND controller interrupt!\n"
+			"ints_bank0 to ints_bank3: 0x%x, 0x%x, 0x%x, 0x%x\n"
+			"ints_cdma: 0x%x\n",
+			ints_b0, ints_b1, ints_b2, ints_b3, ints_cdma);
+		return 1;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG,
+				"Not a NAND controller interrupt!\n");
+		return 0;
+	}
+}
+
+static void update_event_status(void)
+{
+	u16 i, c, d;
+
+	for (c = 0; c < MAX_CHANS; c++) {
+		if (GLOB_valid_banks[c]) {
+			d = dcount[c];
+			debug_boundary_error(d, MAX_DESC_PER_CHANNEL, 0);
+			for (i = 0; i < d; i++) {
+				if (cdma_desc[c][i].Tag != 0xFF)
+					PendingCMD[cdma_desc[c][i].Tag].Status
+							= CMD_PASS;
+#if FLASH_CDMA
+				if ((cdma_desc[c][i].CommandType == 0x41) ||
+				(cdma_desc[c][i].CommandType == 0x42) ||
+				(cdma_desc[c][i].CommandType == 0x43))
+					continue;
+
+				switch (PendingCMD[cdma_desc[c][i].Tag].CMD) {
+				case READ_MAIN_SPARE_CMD:
+					Conv_Main_Spare_Data_Phy2Log_Format(
+						PendingCMD[
+						cdma_desc[c][i].Tag].DataAddr,
+						PendingCMD[
+						cdma_desc[c][i].Tag].
+						PageCount);
+					break;
+				case READ_SPARE_CMD:
+					Conv_Spare_Data_Phy2Log_Format(
+						PendingCMD[
+						cdma_desc[c][i].Tag].
+						DataAddr);
+					break;
+				default:
+					break;
+				}
+#endif
+			}
+		}
+	}
+}
+
+static u16 do_ecc_for_desc(u16 c, u8 *buf,
+				u16 page)
+{
+	u16 event = EVENT_NONE;
+	u16 err_byte;
+	u8 err_sector;
+	u8 err_page = 0;
+	u8 err_device;
+	u16 ecc_correction_info;
+	u16 err_address;
+	u32 eccSectorSize;
+	u8 *err_pos;
+
+	eccSectorSize = ECC_SECTOR_SIZE * DeviceInfo.wDevicesConnected;
+
+	do {
+		if (0 == c)
+			err_page = ioread32(FlashReg + ERR_PAGE_ADDR0);
+		else if (1 == c)
+			err_page = ioread32(FlashReg + ERR_PAGE_ADDR1);
+		else if (2 == c)
+			err_page = ioread32(FlashReg + ERR_PAGE_ADDR2);
+		else if (3 == c)
+			err_page = ioread32(FlashReg + ERR_PAGE_ADDR3);
+
+		err_address = ioread32(FlashReg + ECC_ERROR_ADDRESS);
+		err_byte = err_address & ECC_ERROR_ADDRESS__OFFSET;
+		err_sector = ((err_address &
+			ECC_ERROR_ADDRESS__SECTOR_NR) >> 12);
+
+		ecc_correction_info = ioread32(FlashReg + ERR_CORRECTION_INFO);
+		err_device = ((ecc_correction_info &
+			ERR_CORRECTION_INFO__DEVICE_NR) >> 8);
+
+		if (ecc_correction_info & ERR_CORRECTION_INFO__ERROR_TYPE) {
+			return EVENT_UNCORRECTABLE_DATA_ERROR;
+		} else {
+			event = EVENT_CORRECTABLE_DATA_ERROR_FIXED;
+			if (err_byte < eccSectorSize) {
+				err_pos = buf +
+					(err_page - page) *
+					DeviceInfo.wPageDataSize +
+					err_sector * eccSectorSize +
+					err_byte *
+					DeviceInfo.wDevicesConnected +
+					err_device;
+				*err_pos ^= ecc_correction_info &
+					ERR_CORRECTION_INFO__BYTEMASK;
+			}
+		}
+	} while (!(ecc_correction_info & ERR_CORRECTION_INFO__LAST_ERR_INFO));
+
+	return event;
+}
+
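+/*
+ * Worked example for the err_pos arithmetic above (editor's sketch
+ * with made-up numbers): with wDevicesConnected == 1,
+ * eccSectorSize == 512, wPageDataSize == 2048, err_page == page,
+ * err_sector == 2, err_byte == 5 and err_device == 0, the bad byte is
+ * buf + 0 * 2048 + 2 * 512 + 5 * 1 + 0 = buf + 1029, and it is
+ * repaired by XOR-ing in the controller-reported byte mask.
+ */
+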
+static u16 process_ecc_int(u16 c,
+	u16 *fiqs, u16 *i)
+{
+	u16 d, j, event;
+	u16 ints;
+	u16 cdma_int_en_mask;
+
+	event = EVENT_PASS;
+	d = dcount[c];
+
+	for (j = 0; j < d; j++) {
+		if ((cdma_desc[c][j].Status & CMD_DMA_DESC_COMP) !=
+			CMD_DMA_DESC_COMP)
+			break;
+	}
+
+	*i = j; /* Pass the descriptor number found here */
+
+	if (j == d)
+		return EVENT_UNCORRECTABLE_DATA_ERROR;
+
+	event = do_ecc_for_desc(c, PendingCMD[cdma_desc[c][j].Tag].DataAddr,
+			PendingCMD[cdma_desc[c][j].Tag].Page);
+
+	if (EVENT_UNCORRECTABLE_DATA_ERROR == event) {
+		if (cdma_desc[c][j].Tag != 0xFF)
+			PendingCMD[cdma_desc[c][j].Tag].Status = CMD_FAIL;
+		CDMA_UpdateEventStatus();
+
+		iowrite32(0, FlashReg + DMA_ENABLE);
+		while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+			;
+
+		iowrite32(fiqs[0], FlashReg + INTR_STATUS0);
+		iowrite32(fiqs[1], FlashReg + INTR_STATUS1);
+		iowrite32(fiqs[2], FlashReg + INTR_STATUS2);
+		iowrite32(fiqs[3], FlashReg + INTR_STATUS3);
+
+		cdma_int_en_mask = (DMA_INTR__DESC_COMP_CHANNEL0 |
+				DMA_INTR__DESC_COMP_CHANNEL1 |
+				DMA_INTR__DESC_COMP_CHANNEL2 |
+				DMA_INTR__DESC_COMP_CHANNEL3 |
+				DMA_INTR__MEMCOPY_DESC_COMP);
+
+		ints = (ioread32(FlashReg + DMA_INTR) & cdma_int_en_mask);
+		iowrite32(ints, FlashReg + DMA_INTR);
+
+		return event;
+	}
+
+	if (0 == c)
+		iowrite32(INTR_STATUS0__ECC_ERR, FlashReg + INTR_STATUS0);
+	else if (1 == c)
+		iowrite32(INTR_STATUS1__ECC_ERR, FlashReg + INTR_STATUS1);
+	else if (2 == c)
+		iowrite32(INTR_STATUS2__ECC_ERR, FlashReg + INTR_STATUS2);
+	else if (3 == c)
+		iowrite32(INTR_STATUS3__ECC_ERR, FlashReg + INTR_STATUS3);
+
+	return event;
+}
+
+static void process_prog_fail_int(u16 c,
+	u16 *fiqs, u16 *i)
+{
+	u16 ints;
+	u16 cdma_int_en_mask;
+
+	if (cdma_desc[c][*i].Tag != 0xFF)
+		PendingCMD[cdma_desc[c][*i].Tag].Status = CMD_FAIL;
+
+	CDMA_UpdateEventStatus();
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+	while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(fiqs[0], FlashReg + INTR_STATUS0);
+	iowrite32(fiqs[1], FlashReg + INTR_STATUS1);
+	iowrite32(fiqs[2], FlashReg + INTR_STATUS2);
+	iowrite32(fiqs[3], FlashReg + INTR_STATUS3);
+
+	cdma_int_en_mask = (DMA_INTR__DESC_COMP_CHANNEL0 |
+			DMA_INTR__DESC_COMP_CHANNEL1 |
+			DMA_INTR__DESC_COMP_CHANNEL2 |
+			DMA_INTR__DESC_COMP_CHANNEL3 |
+			DMA_INTR__MEMCOPY_DESC_COMP);
+
+	ints = (ioread32(FlashReg + DMA_INTR) & cdma_int_en_mask);
+	iowrite32(ints, FlashReg + DMA_INTR);
+}
+
+static void process_erase_fail_int(u16 c,
+	u16 *fiqs, u16 *i)
+{
+	u16 ints;
+	u16 cdma_int_en_mask;
+
+	if (cdma_desc[c][*i].Tag != 0xFF)
+		PendingCMD[cdma_desc[c][*i].Tag].Status = CMD_FAIL;
+
+	CDMA_UpdateEventStatus();
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+	while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(fiqs[0], FlashReg + INTR_STATUS0);
+	iowrite32(fiqs[1], FlashReg + INTR_STATUS1);
+	iowrite32(fiqs[2], FlashReg + INTR_STATUS2);
+	iowrite32(fiqs[3], FlashReg + INTR_STATUS3);
+
+	cdma_int_en_mask = (DMA_INTR__DESC_COMP_CHANNEL0 |
+			DMA_INTR__DESC_COMP_CHANNEL1 |
+			DMA_INTR__DESC_COMP_CHANNEL2 |
+			DMA_INTR__DESC_COMP_CHANNEL3 |
+			DMA_INTR__MEMCOPY_DESC_COMP);
+
+	ints = (ioread32(FlashReg + DMA_INTR) & cdma_int_en_mask);
+	iowrite32(ints, FlashReg + DMA_INTR);
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_Event_Status (for use with CMD_DMA)
+* Inputs:       none
+* Outputs:      Event_Status code
+* Description:  This function is called after an interrupt has happened
+*               It reads the HW status register and ...tbd
+*               It returns the appropriate event status
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16  CDMA_Event_Status(void)
+{
+	u16 FIQstatus[MAX_CHANS];
+	u16 int_status, event;
+	u16 c, i = 0;
+	u16 int_en_mask;
+	u16 cdma_int_en_mask;
+
+	event = EVENT_PASS;
+
+	/* Set the global Enable masks for only those interrupts
+	 * that are supported */
+	cdma_int_en_mask = (DMA_INTR__DESC_COMP_CHANNEL0 |
+			DMA_INTR__DESC_COMP_CHANNEL1 |
+			DMA_INTR__DESC_COMP_CHANNEL2 |
+			DMA_INTR__DESC_COMP_CHANNEL3 |
+			DMA_INTR__MEMCOPY_DESC_COMP);
+
+	int_en_mask = (INTR_STATUS0__ECC_ERR |
+		INTR_STATUS0__PROGRAM_FAIL |
+		INTR_STATUS0__ERASE_FAIL);
+
+	FIQstatus[0] = (ioread32(FlashReg + INTR_STATUS0) & int_en_mask);
+	FIQstatus[1] = (ioread32(FlashReg + INTR_STATUS1) & int_en_mask);
+	FIQstatus[2] = (ioread32(FlashReg + INTR_STATUS2) & int_en_mask);
+	FIQstatus[3] = (ioread32(FlashReg + INTR_STATUS3) & int_en_mask);
+
+	int_status = ioread32(FlashReg + DMA_INTR) & cdma_int_en_mask;
+
+	if (int_status) {
+		if ((int_status & DMA_INTR__DESC_COMP_CHANNEL0) ||
+		(int_status & DMA_INTR__DESC_COMP_CHANNEL1) ||
+		(int_status & DMA_INTR__DESC_COMP_CHANNEL2) ||
+		(int_status & DMA_INTR__DESC_COMP_CHANNEL3)) {
+
+			event = EVENT_PASS;
+			update_event_status();
+		} else {
+			/* TODO -- What kind of status can be
+			 * reported back to FTL in PendingCMD? */
+			event = EVENT_DMA_CMD_FAIL;
+		}
+
+		iowrite32(0, FlashReg + DMA_ENABLE);
+		while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+			;
+
+		iowrite32(int_status, FlashReg + DMA_INTR);
+	}
+
+	for (c = 0; c < MAX_CHANS; c++) {
+		if (FIQstatus[c]) {
+			if ((FIQstatus[c] & INTR_STATUS0__ECC_ERR) &&
+				ioread32(FlashReg + ECC_ENABLE)) {
+				event = process_ecc_int(c, FIQstatus, &i);
+				if (EVENT_UNCORRECTABLE_DATA_ERROR == event)
+					return event;
+			}
+
+			if (FIQstatus[c] & INTR_STATUS0__PROGRAM_FAIL) {
+				process_prog_fail_int(c, FIQstatus, &i);
+				return EVENT_PROGRAM_FAILURE;
+			}
+
+			if (FIQstatus[c] & INTR_STATUS0__ERASE_FAIL) {
+				process_erase_fail_int(c, FIQstatus, &i);
+				return EVENT_ERASE_FAILURE;
+			} else {
+				if (0 == c)
+					iowrite32(FIQstatus[0],
+						FlashReg + INTR_STATUS0);
+				else if (1 == c)
+					iowrite32(FIQstatus[1],
+						FlashReg + INTR_STATUS1);
+				else if (2 == c)
+					iowrite32(FIQstatus[2],
+						FlashReg + INTR_STATUS2);
+				else if (3 == c)
+					iowrite32(FIQstatus[3],
+						FlashReg + INTR_STATUS3);
+			}
+		}
+	}
+
+	return event;
+}
+#endif
+
+#endif
+
+/****** Sync related functions ********/
+#define MAX_SYNC		14
+#define FORCED_ORDERED_SYNC	15
+#define SNUS_CHAN_OFFSET	24
+#define SNUS_LASTID_MASK	0xFFFFFF
+
+#if DEBUG_SYNC
+u32  debug_sync_cnt = 1;
+#endif
+
+static u32 isFlashReadCMD(u8 CMD)
+{
+	switch (CMD) {
+	case READ_MAIN_CMD:
+	case READ_SPARE_CMD:
+	case READ_MAIN_SPARE_CMD:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+static u32 isFlashWriteCMD(u8 CMD)
+{
+	switch (CMD) {
+	case WRITE_MAIN_CMD:
+	case WRITE_SPARE_CMD:
+	case WRITE_MAIN_SPARE_CMD:
+		return 1;
+	default:
+		return 0;
+	}
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     generateSyncNum
+* Inputs:       sync_usage array, a new sync number in case no reusable one
+*               was found. The bit vector of channels taking place in current
+*               sync operation, and the earliest cmd id for the new sync op.
+* Outputs:      The sync number to be used for the current syncing
+* Description:
+* Assumption:   A sync point is always used between 2 and only 2 channels.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+static u32 generateSyncNum(u32 *sync_usage, u32 *newSyncNum,
+			u32 syncedChans, u32 lastid)
+{
+	u32 synci, toUseSyncNum = 0;
+
+	/* We try to reuse syncs as much as possible with this algorithm */
+	for (synci = 1; synci < *newSyncNum; synci++) {
+		if (((sync_usage[synci] >> SNUS_CHAN_OFFSET) == syncedChans)
+			&& ((sync_usage[synci] & SNUS_LASTID_MASK)
+			< lastid)) {
+			toUseSyncNum = synci;
+			break;
+		}
+	}
+
+	if (!toUseSyncNum && (*newSyncNum <= MAX_SYNC))
+		toUseSyncNum = (*newSyncNum)++;
+
+	/*
+	 * The rest is to find another sync point which has at least
+	 * one channel in common, then add a sync point to the extra
+	 * channel, and then use it.
+	 *
+	 * -- This will not result in the sync number being used by
+	 * 3 channels, since the new use will have just the syncedChans
+	 * values. So our assumption still holds valid.
+	 *
+	 * -- However, adding the new channel is not easy. We need to
+	 * find the id which is after the last sync number that existed
+	 * between the common channel and our new channel, and before
+	 * the next sync that will exist between the new and common
+	 * channel!
+	 */
+
+	return toUseSyncNum;
+}
+
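+/*
+ * Editor's note: each sync_usage[] entry packs the pair of synced
+ * channels and the newest command id guarded by that sync number into
+ * one word, per the SNUS_* constants above:
+ *
+ *	sync_usage[n] = (syncedChans << SNUS_CHAN_OFFSET) |
+ *			(lastid & SNUS_LASTID_MASK);
+ *
+ * i.e. bits 31..24 hold the channel bit-vector, bits 23..0 the id.
+ */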
+
+#define getChannelPendingCMD(idx)  (PendingCMD[idx].Block /\
+	(DeviceInfo.wTotalBlocks / totalUsedBanks))
+
+#define isOrderedPendingCMD(idx)  ((PendingCMD[idx].Flags &\
+	LLD_CMD_FLAG_ORDER_BEFORE_REST) != 0)
+
+#define getSyncFromChannel(c)                   ((c & CHANNEL_SYNC_MASK) >>\
+	CHANNEL_SYNC_OFFSET)
+#define getIdFromChannel(c)                     ((c & CHANNEL_ID_MASK) >>\
+	CHANNEL_ID_OFFSET)
+#define getContFromChannel(c)                   ((c & CHANNEL_CONT_MASK) >>\
+	CHANNEL_CONT_OFFSET)
+#define getIntrFromChannel(c)                   ((c & CHANNEL_INTR_MASK) >>\
+	CHANNEL_INTR_OFFSET)
+#define getChanFromChannel(c)                   ((c & CHANNEL_DMA_MASK) >>\
+	CHANNEL_DMA_OFFSET)
+
+#define putSyncInChannel(c, v)    (c |= ((v << CHANNEL_SYNC_OFFSET) &\
+	CHANNEL_SYNC_MASK))
+#define putIdInChannel(c, v)        (c |= ((v << CHANNEL_ID_OFFSET) &\
+	CHANNEL_ID_MASK))
+#define putContInChannel(c, v)    (c |= ((v << CHANNEL_CONT_OFFSET) &\
+	CHANNEL_CONT_MASK))
+#define putIntrInChannel(c, v)    (c |= ((v << CHANNEL_INTR_OFFSET) &\
+	CHANNEL_INTR_MASK))
+#define putChanInChannel(c, v)    (c |= ((v << CHANNEL_DMA_OFFSET) &\
+	CHANNEL_DMA_MASK))
+
+#define addChanToChannel(c, v)    (c |= ((1 << CHANNEL_DMA_OFFSET) << v))
+
+#define isWithinRange(toChk, Addr, Bytes)    ((toChk >= Addr) &&\
+	(toChk < (Addr + Bytes)))
+
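+/*
+ * Editor's note on the ChanSync word these macros pack and unpack
+ * (field positions per the CHANNEL_* masks in lld_cdma.h):
+ *
+ *	bits  3..0	sync point number
+ *	bits  7..4	bit-vector of DMA channels taking part
+ *	bits  9..8	id of the owning channel
+ *	bit  14		continue flag
+ *	bit  15		interrupt flag
+ */
+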
+struct add_sync_points_struct {
+	u8 *fromAddr, *toAddr;
+	u8 CMD;
+	u32 idx;
+	u32 numSync, numSyncOther;
+	u32 chnl, chnlOther;
+	u32 newSyncNum, writeOpSyncPlaced;
+	u32 indx_last_cmd[MAX_CHANS];
+	u32 namb[MAX_CHANS][MAX_CHANS];
+	u32 sync_usage[MAX_SYNC + 1];
+};
+
+static void process_memcpy(struct add_sync_points_struct *ptr)
+{
+	int i, stopLoop, within1, within2, condition;
+	u8 *data_addr;
+	unsigned long offset;
+
+	ptr->fromAddr = PendingCMD[ptr->idx].DataSrcAddr;
+	ptr->toAddr = PendingCMD[ptr->idx].DataDestAddr;
+	stopLoop = 0;
+
+	for (i = ptr->idx - 1; (i >= MAX_CHANS) && !stopLoop; i--) {
+		data_addr = PendingCMD[i].DataAddr;
+		offset = PendingCMD[i].PageCount * DeviceInfo.wPageDataSize;
+		within1 = isWithinRange(ptr->toAddr, data_addr, offset);
+		within2 = isWithinRange(ptr->fromAddr, data_addr, offset);
+		condition = (PendingCMD[i].CMD != MEMCOPY_CMD) &&
+			(PendingCMD[i].CMD != ERASE_CMD) &&
+			(within1 || within2);
+		if (condition) {
+			stopLoop = 1;
+			PendingCMD[ptr->idx].Block = PendingCMD[i].Block;
+			ptr->chnl = getChannelPendingCMD(ptr->idx);
+			debug_boundary_error(ptr->chnl, totalUsedBanks,
+				ptr->idx);
+			if (isFlashWriteCMD(PendingCMD[i].CMD) && within1) {
+				ptr->CMD = READ_MAIN_CMD;
+				PendingCMD[ptr->idx].DataAddr = ptr->toAddr;
+			}
+		}
+	}
+}
+
+static void check_synced_helper(struct add_sync_points_struct *ptr,
+				int j, int k)
+{
+	int l;
+	unsigned long m, n;
+
+	m = ptr->chnl;
+	n = ptr->chnlOther;
+
+	for (l = 0; l < totalUsedBanks; l++) {
+		if ((l != m) && (l != n)) {
+			if (ptr->namb[l][n] <= j) {
+				if (ptr->namb[m][l] < ptr->namb[n][l])
+					ptr->namb[m][l] = ptr->namb[n][l];
+			} else {
+				if (ptr->namb[l][m] < ptr->namb[n][m])
+					ptr->namb[l][m] = ptr->namb[n][m];
+			}
+
+			if (ptr->namb[l][m] <= k) {
+				if (ptr->namb[n][l] < ptr->namb[m][l])
+					ptr->namb[n][l] = ptr->namb[m][l];
+			} else {
+				if (ptr->namb[l][n] < ptr->namb[m][n])
+					ptr->namb[l][n] = ptr->namb[m][n];
+			}
+		}
+	}
+}
+
+#if DEBUG_SYNC
+static void check_synced_debug_sync(struct add_sync_points_struct *ptr,
+	unsigned long toUseSyncNum, unsigned long syncedChans, int j, int k)
+{
+	int m, n;
+
+	if (!(debug_sync_cnt % DBG_SNC_PRINTEVERY)) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"ADDSYNC: Placed Sync point 0x%x "
+			"with chanvectors 0x%x "
+			"betn tags %d & prev(%d)=%d\n",
+			(unsigned)toUseSyncNum,
+			(unsigned)syncedChans,
+			j - MAX_CHANS,
+			ptr->idx - MAX_CHANS,
+			k - MAX_CHANS);
+		for (m = 0; m < totalUsedBanks; m++) {
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"ADDSYNC: ch:%d ->", m);
+			for (n = 0; n < totalUsedBanks; n++)
+				if (255 == PendingCMD[ptr->namb[m][n]].Tag)
+					nand_dbg_print(NAND_DBG_DEBUG,
+					" (ch:%d tag: -1)", n);
+				else
+					nand_dbg_print(NAND_DBG_DEBUG,
+					" (ch:%d tag:%3d)", n,
+					PendingCMD[ptr->namb[m][n]].Tag);
+			nand_dbg_print(NAND_DBG_DEBUG, "\n");
+		}
+	}
+}
+#endif
+
+static void check_synced(struct add_sync_points_struct *ptr, int j, int k)
+{
+	unsigned long syncedChans, toUseSyncNum;
+
+	for (ptr->numSync = 0;
+	(ptr->numSync <= MAX_CHANS) &&
+	(PendingCMD[k].ChanSync[ptr->numSync] & CHANNEL_DMA_MASK);
+	ptr->numSync++)
+		;
+
+	for (ptr->numSyncOther = 0;
+	(ptr->numSyncOther <= MAX_CHANS) &&
+	(PendingCMD[j].ChanSync[ptr->numSyncOther] & CHANNEL_DMA_MASK);
+	ptr->numSyncOther++)
+		;
+
+	if ((ptr->numSync > MAX_CHANS) ||
+		(ptr->numSyncOther > MAX_CHANS)) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"LLD_CDMA: Sync Algorithm failed to place a Sync "
+			"between command tags %d and %d\n",
+			ptr->idx - MAX_CHANS,
+			j - MAX_CHANS);
+	} else {
+		ptr->writeOpSyncPlaced |= (1 << ptr->chnlOther);
+		syncedChans = ((1 << ptr->chnl) | (1 << ptr->chnlOther));
+		toUseSyncNum = generateSyncNum(&ptr->sync_usage[0],
+			&ptr->newSyncNum, syncedChans, (j < k ? j : k));
+		if (!toUseSyncNum) {
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"LLD_CDMA: Sync Algorithm ran out of Syncs "
+				"during syncing command tags %d and %d\n",
+				ptr->idx - MAX_CHANS,
+				j - MAX_CHANS);
+		} else {
+			putSyncInChannel(
+				PendingCMD[k].ChanSync[ptr->numSync],
+				toUseSyncNum);
+			putContInChannel(
+				PendingCMD[k].ChanSync[ptr->numSync],
+				1);
+			putIdInChannel(
+				PendingCMD[k].ChanSync[ptr->numSync],
+				ptr->chnl);
+			putSyncInChannel(
+				PendingCMD[j].ChanSync[ptr->numSyncOther],
+				toUseSyncNum);
+			putContInChannel(
+				PendingCMD[j].ChanSync[ptr->numSyncOther],
+				1);
+			putIdInChannel(
+				PendingCMD[j].ChanSync[ptr->numSyncOther],
+				ptr->chnlOther);
+			putChanInChannel(
+				PendingCMD[j].ChanSync[ptr->numSyncOther],
+				syncedChans);
+			putChanInChannel(
+				PendingCMD[k].ChanSync[ptr->numSync],
+				syncedChans);
+
+			ptr->sync_usage[toUseSyncNum] =
+				(syncedChans << SNUS_CHAN_OFFSET) |
+				((j > k ? j : k) & SNUS_LASTID_MASK);
+
+			ptr->namb[ptr->chnl][ptr->chnlOther] = j;
+
+			if (ptr->namb[ptr->chnlOther][ptr->chnl] > k)
+				nand_dbg_print(NAND_DBG_DEBUG,
+				"LLD_CDMA: Sync Algorithm detected "
+				"a possible deadlock in its assignments.\n");
+			else
+				ptr->namb[ptr->chnlOther][ptr->chnl] = k;
+
+			check_synced_helper(ptr, j, k);
+
+#if DEBUG_SYNC
+			check_synced_debug_sync(ptr, toUseSyncNum,
+					syncedChans, j, k);
+#endif
+		}
+	}
+}
+
+static void process_flash_rw(struct add_sync_points_struct *ptr)
+{
+	int j, k, stopLoop, within1, within2, condition;
+	unsigned long offset;
+
+	ptr->fromAddr = PendingCMD[ptr->idx].DataAddr;
+	k = ptr->indx_last_cmd[ptr->chnl];
+	offset = PendingCMD[ptr->idx].PageCount * DeviceInfo.wPageDataSize;
+	stopLoop = 0;
+
+	for (j = ptr->idx - 1; (j >= MAX_CHANS) && !stopLoop; j--) {
+		ptr->chnlOther = getChannelPendingCMD(j);
+		debug_boundary_error(ptr->chnlOther, totalUsedBanks, j);
+		within1 = isWithinRange(PendingCMD[j].DataDestAddr,
+				ptr->fromAddr, offset);
+		within2 = isWithinRange(PendingCMD[j].DataSrcAddr,
+				ptr->fromAddr, offset);
+		condition = (ptr->fromAddr == PendingCMD[j].DataAddr) ||
+			((PendingCMD[j].CMD == MEMCOPY_CMD) &&
+			(within1 || within2));
+		if (condition) {
+			if (ptr->namb[ptr->chnl][ptr->chnlOther] >= j) {
+				stopLoop = 1;
+			} else if (ptr->chnlOther == ptr->chnl) {
+				condition = isFlashWriteCMD(ptr->CMD) ||
+					isFlashReadCMD(PendingCMD[j].CMD) ||
+					((PendingCMD[j].CMD == MEMCOPY_CMD)
+					&& within1);
+				if (condition)
+					stopLoop = 1;
+			} else {
+				condition = isFlashReadCMD(ptr->CMD) ||
+					isFlashReadCMD(PendingCMD[j].CMD) ||
+					((PendingCMD[j].CMD == MEMCOPY_CMD)
+					&& within1);
+				if (condition) {
+					if (isFlashReadCMD(PendingCMD[j].CMD)
+						|| ((PendingCMD[j].CMD ==
+						MEMCOPY_CMD) && within1)) {
+						stopLoop = 1;
+						if (ptr->writeOpSyncPlaced)
+							break;
+					}
+					if (ptr->writeOpSyncPlaced &
+						(1 << ptr->chnlOther))
+						break;
+
+					check_synced(ptr, j, k);
+				}
+			}
+		}
+	}
+}
+
+static void process_force_ordering_helper(struct add_sync_points_struct *ptr,
+				unsigned long *syncNums, int k)
+{
+	unsigned long syncedChans;
+	int l;
+
+	if (syncNums[ptr->chnlOther] > MAX_CHANS) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"LLD_CDMA: Sync Algorithm failed to "
+			"find previously placed Forced Sync "
+			"at command tag %d, chnl %d\n",
+			(int)k - MAX_CHANS,
+			ptr->chnl);
+	} else {
+		syncedChans = getChanFromChannel(
+			PendingCMD[k].ChanSync[syncNums[ptr->chnlOther]]);
+
+		l = getIntrFromChannel(
+			PendingCMD[k].ChanSync[syncNums[ptr->chnlOther]]);
+
+		PendingCMD[k].ChanSync[syncNums[ptr->chnlOther]] = 0;
+
+		putIntrInChannel(
+			PendingCMD[k].ChanSync[syncNums[ptr->chnlOther]], l);
+
+		putSyncInChannel(
+			PendingCMD[ptr->idx].ChanSync[syncNums[ptr->chnl]],
+			FORCED_ORDERED_SYNC);
+
+		putContInChannel(
+			PendingCMD[ptr->idx].ChanSync[syncNums[ptr->chnl]],
+			1);
+
+		putIdInChannel(
+			PendingCMD[ptr->idx].ChanSync[syncNums[ptr->chnl]],
+			ptr->chnl);
+
+		putChanInChannel(
+			PendingCMD[ptr->idx].ChanSync[syncNums[ptr->chnl]],
+			syncedChans);
+
+		for (l = 0; l < totalUsedBanks; l++) {
+			if (l != ptr->chnl)
+				ptr->namb[l][ptr->chnl] = ptr->idx;
+		}
+#if DEBUG_SYNC
+		if (!(debug_sync_cnt % DBG_SNC_PRINTEVERY))
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"ADDSYNC: Moved Forced Sync point "
+				"in chnl %d from tag %d to %d\n",
+				ptr->chnl,
+				k - MAX_CHANS,
+				ptr->idx - MAX_CHANS);
+#endif
+	}
+}
+
+static void process_force_ordering(struct add_sync_points_struct *ptr)
+{
+	unsigned long syncNums[MAX_CHANS];
+	unsigned long syncedChans;
+	int j, k, l;
+#if DEBUG_SYNC
+	int m;
+#endif
+
+	for (k = ptr->idx - 1; k >= MAX_CHANS; k--) {
+		if (ptr->chnl != getChannelPendingCMD(k))
+			k = MAX_CHANS - 1; /* other channel's cmd ends the scan */
+		else if (isOrderedPendingCMD(k))
+			break;
+	}
+
+	if (k >= MAX_CHANS) {
+		for (syncNums[ptr->chnl] = 0;
+		(syncNums[ptr->chnl] <= MAX_CHANS)
+		&& (PendingCMD[ptr->idx].ChanSync[syncNums[ptr->chnl]]
+		& CHANNEL_DMA_MASK); syncNums[ptr->chnl]++)
+			;
+
+		if (syncNums[ptr->chnl] > MAX_CHANS) {
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"LLD_CDMA: Sync Algorithm failed to place "
+				"a Forced Sync at command tag %d\n",
+				ptr->idx - MAX_CHANS);
+		} else {
+			ptr->chnlOther = (ptr->chnl+1) % totalUsedBanks;
+			for (syncNums[ptr->chnlOther] = 0;
+			     (syncNums[ptr->chnlOther] <= MAX_CHANS)
+			     && (getSyncFromChannel(
+			PendingCMD[k].ChanSync[syncNums[ptr->chnlOther]]) !=
+			FORCED_ORDERED_SYNC);
+			     syncNums[ptr->chnlOther]++)
+				;
+
+			process_force_ordering_helper(ptr, syncNums, k);
+		}
+	} else {
+		syncedChans = 0;
+		for (j = 0; j < totalUsedBanks; j++) {
+			k = ptr->indx_last_cmd[j];
+			for (syncNums[j] = 0;
+			(syncNums[j] <= MAX_CHANS) &&
+				(PendingCMD[k].ChanSync[syncNums[j]] &
+				CHANNEL_DMA_MASK); syncNums[j]++)
+				;
+			if (syncNums[j] > MAX_CHANS) {
+				/* This should never happen! */
+				nand_dbg_print(NAND_DBG_DEBUG,
+				"LLD_CDMA: Sync Algorithm failed to place "
+				"a Forced Sync at command tag %d\n",
+				k - MAX_CHANS);
+				syncNums[0] = MAX_CHANS + 1;
+			}
+			syncedChans |= (1 << j);
+		}
+
+		if (syncNums[0] <= MAX_CHANS) {
+			for (j = 0; j < totalUsedBanks; j++) {
+				k = ptr->indx_last_cmd[j];
+				putSyncInChannel(
+					PendingCMD[k].ChanSync[syncNums[j]],
+					FORCED_ORDERED_SYNC);
+				putContInChannel(
+					PendingCMD[k].ChanSync[syncNums[j]],
+					1);
+				putIdInChannel(
+					PendingCMD[k].ChanSync[syncNums[j]],
+					j);
+				putChanInChannel(
+					PendingCMD[k].ChanSync[syncNums[j]],
+					syncedChans);
+				for (l = 0; l < totalUsedBanks; l++) {
+					if (l != j)
+						ptr->namb[l][j] = k;
+				}
+			}
+#if DEBUG_SYNC
+			if (!(debug_sync_cnt % DBG_SNC_PRINTEVERY)) {
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"ADDSYNC: Placed Forced Sync point "
+					"for tag %d in tags",
+					ptr->idx - MAX_CHANS);
+				for (m = 0; m < totalUsedBanks; m++) {
+					if (m != ptr->chnl)
+						nand_dbg_print(NAND_DBG_DEBUG,
+						" %d",
+						(int)ptr->indx_last_cmd[m] -
+						MAX_CHANS);
+				}
+				nand_dbg_print(NAND_DBG_DEBUG, "\n");
+			}
+#endif
+		}
+	}
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_AddSyncPoints
+* Inputs:       tag_count:- Number of commands in PendingCMD list
+* Outputs:      NONE
+* Description:  This function takes the PendingCMD list, and adds sync
+*               points between each entry on it, and any preceding entry
+*               in other channels that have conflicts with the Cache Block
+*               pointer.
+*               The design also takes care of syncing between memcopy
+*               and flash read/write operations. However, this function
+*               does not sync between 2 memcopy operations that have a conflict
+*               in a RAM pointer other than the cache block one. It is the
+*               responsibility of the calling function, probably the
+*               application calling spectra, to take care of that.
+* Assumptions:  + This function runs before the CDMA_Descriptor list is created.
+*               + This function takes care of the fact that memcopy accesses
+*                 might be just a few bytes within a cache block, and uses a
+*                 knowledge of the cache block to check for accesses anywhere
+*                 within it. However, it is assumed that we don't have ranges
+*                 that overlap one another. Either ranges overlap perfectly, or
+*                 the memcopy range is a subset of the flash address range.
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+void CDMA_AddSyncPoints(u16 tag_count)
+{
+	struct add_sync_points_struct vars;
+	int i, j;
+
+	vars.newSyncNum = 1;
+	debug_boundary_error(totalUsedBanks - 1, MAX_CHANS, 0);
+	for (i = 0; i < totalUsedBanks; i++) {
+		vars.chnl = getChannelPendingCMD(i);
+		debug_boundary_error(vars.chnl, totalUsedBanks, i);
+		vars.indx_last_cmd[vars.chnl] = i;
+		for (j = 0; j < totalUsedBanks; j++)
+			vars.namb[i][j] = 0;
+	}
+
+	for (i = 0; i <= MAX_SYNC; i++)
+		vars.sync_usage[i] = 0;
+
+	for (vars.idx = MAX_CHANS;
+		vars.idx < (tag_count + MAX_CHANS);
+		vars.idx++) {
+
+		vars.writeOpSyncPlaced = 0;
+		vars.CMD = PendingCMD[vars.idx].CMD;
+		vars.chnl = getChannelPendingCMD(vars.idx);
+		debug_boundary_error(vars.chnl, totalUsedBanks, vars.idx);
+
+		if (vars.CMD == MEMCOPY_CMD)
+			process_memcpy(&vars);
+
+		if (isFlashReadCMD(vars.CMD) || isFlashWriteCMD(vars.CMD))
+			process_flash_rw(&vars);
+
+		vars.indx_last_cmd[vars.chnl] = vars.idx;
+
+		/* Simple one sync to rule them all approach */
+		if (isOrderedPendingCMD(vars.idx))
+			process_force_ordering(&vars);
+
+	}
+}
+
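+/*
+ * Illustrative scenario (editor's sketch): if tag 0 writes a cache
+ * buffer from channel 0 and tag 1 later reads the same buffer from
+ * channel 2, process_flash_rw() spots the address overlap and
+ * check_synced() stores a matching sync number in both commands'
+ * ChanSync words, so the hardware holds the read on channel 2 until
+ * channel 0 has passed the write.
+ */
+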
+#if DEBUG_SYNC
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_CheckSyncPoints
+* Inputs:       tag_count:- Number of commands in PendingCMD list
+* Outputs:      NONE
+* Description:  This function takes a long time to run!
+*               So use only during testing with lld_emu. The job of this fn
+*               is to go through the post-synced PendingCMD array, and check
+*               for a) buffers getting accessed out of order (which should
+*               not happen), and b) deadlocks. i.e. 2 channels waiting on 2
+*               different sync points both of which occur on the other channel
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+
+#include "flash.h"
+
+#define EOLIST(i)   (chis[i] >= chMaxIndexes[i])
+
+static void print_ops(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+		u32 rwop, u32 i, u32 chisi)
+{
+	if (rwop & 2)
+		nand_dbg_print(NAND_DBG_DEBUG,
+		"one or more read operations(indx:%d, tag:%d)",
+		chisi >> 16, p[i][chisi >> 16].Tag);
+	if (rwop & 1)
+		nand_dbg_print(NAND_DBG_DEBUG,
+		" one or more write operations(indx:%d, tag:%d)",
+		chisi & 0xFFFF, p[i][chisi & 0xFFFF].Tag);
+}
+
+/* Get sync channel from pending command */
+static u8 get_sync_ch_pcmd(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+		int i, int chisi, int *syncNum, int *i2)
+{
+	u32 syncVal;
+
+	syncVal = p[i][chisi].ChanSync[0];
+	if (syncVal) {
+		*syncNum = getSyncFromChannel(syncVal);
+		*i2 = getChanFromChannel(syncVal) & ~(1 << i);
+		if ((*i2 != 1) && (*i2 != 2) && (*i2 != 4) && (*i2 != 8) &&
+				(*syncNum != FORCED_ORDERED_SYNC))
+			nand_dbg_print(NAND_DBG_DEBUG,
+			"SYNCCHECK: ASSERT FAIL: "
+			"second channel of sync(%d) got from sync val of "
+			"(ch:%d, indx:%d, tag:%d) is not a valid one!\n",
+			*i2, i, chisi, p[i][chisi].Tag);
+		*i2 = (*i2 == 1) ? 0 : (*i2 == 2 ? 1 : (*i2 == 4 ? 2 :
+			(i != 3 ? 3 : 2)));
+	}
+
+	return (syncVal != 0);
+}
+
+static u32 check_ordering(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+		u32 ch1, u32 ch1_fromi, u32 ch1_toi,
+		u32 ch2, u32 ch2_fromi, u32 ch2_toi)
+{
+	u32 sync2syncops[2], i, j;
+	u32 rwop1, rwop2, lastcmd[2][CACHE_BLOCK_NUMBER];
+	u32 chi, ch, chfromi, chtoi;
+	u32 allok = 1;
+
+	for (chi = 0; chi < 2; chi++) {
+		if (chi) {
+			ch = ch2;
+			chfromi = ch2_fromi;
+			chtoi = ch2_toi;
+		} else {
+			ch = ch1;
+			chfromi = ch1_fromi;
+			chtoi = ch1_toi;
+		}
+
+		sync2syncops[chi] = 0;
+
+		for (j = 0; j < CACHE_BLOCK_NUMBER; j++)
+			lastcmd[chi][j] = 0;
+
+		for (i = chfromi; i <= chtoi; i++) {
+			for (j = 0; j < CACHE_BLOCK_NUMBER; j++) {
+				if ((isFlashReadCMD(p[ch][i].CMD) &&
+					(p[ch][i].DataAddr ==
+					Cache.ItemArray[j].pContent)) ||
+					((p[ch][i].CMD == MEMCOPY_CMD) &&
+					isWithinRange(p[ch][i].DataDestAddr,
+					Cache.ItemArray[j].pContent,
+					DeviceInfo.wBlockDataSize)
+					)) {
+					sync2syncops[chi] |= (1 << (j << 1));
+					lastcmd[chi][j] &= 0xFFFF0000;
+					lastcmd[chi][j] |= (i & 0xFFFF);
+				}
+				if ((isFlashWriteCMD(p[ch][i].CMD) &&
+					(p[ch][i].DataAddr ==
+					Cache.ItemArray[j].pContent)) ||
+					((p[ch][i].CMD == MEMCOPY_CMD) &&
+					isWithinRange(p[ch][i].DataSrcAddr,
+					Cache.ItemArray[j].pContent,
+					DeviceInfo.wBlockDataSize))) {
+					sync2syncops[chi] |=
+						(1 << ((j << 1) + 1));
+					lastcmd[chi][j] &= 0xFFFF;
+					lastcmd[chi][j] |=
+						((i & 0xFFFF) << 16);
+				}
+			}
+		}
+	}
+
+	for (j = 0; j < CACHE_BLOCK_NUMBER; j++) {
+		rwop1 = (sync2syncops[0] >> (j << 1)) & 3;
+		rwop2 = (sync2syncops[1] >> (j << 1)) & 3;
+		if (((rwop1 & 1) && rwop2) || ((rwop2 & 1) && rwop1)) {
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"SYNCCHECK: ORDERING PROBLEM "
+				"in cache buffer %d: Between "
+				"(ch:%d, indx:%d, tag:%d) & "
+				"(ch:%d, indx:%d, tag:%d), "
+				"there has been\n",
+				j, ch1, ch1_fromi,
+				p[ch1][ch1_fromi].Tag,
+				ch1, ch1_toi,
+				p[ch1][ch1_toi].Tag);
+			print_ops(p, rwop1, ch1,
+					lastcmd[0][j]);
+			nand_dbg_print(NAND_DBG_DEBUG,
+				".\nWhich are not ordered w.r.t to ");
+			print_ops(p, rwop2, ch2,
+					lastcmd[1][j]);
+			nand_dbg_print(NAND_DBG_DEBUG,
+				"\nbetween (ch:%d, indx:%d, tag:%d) & "
+				"(ch:%d, indx:%d, tag:%d).\n",
+				ch2, ch2_fromi,
+				p[ch2][ch2_fromi].Tag,
+				ch2, ch2_toi,
+				p[ch2][ch2_toi].Tag);
+			allok = 0;
+		}
+	}
+
+	return allok;
+}
+
+static int lookfor_deadlocks(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+			int *chis, int *chMaxIndexes)
+{
+	int i, j, done, ch1, ch2, snum, snum2;
+
+	done = 0;
+	for (i = 0; (!done) && (i < totalUsedBanks); i++) {
+		if (!EOLIST(i) &&
+			get_sync_ch_pcmd(p, i, chis[i], &snum, &ch1)) {
+			j = 0;
+			ch2 = ch1;
+			ch1 = i;
+			snum2 = snum;
+			snum = 0xFF;
+			while ((snum != snum2) && (j <= totalUsedBanks) &&
+				!EOLIST(ch2) && (ch2 != i) &&
+				((snum == 0xFF) ||
+				(snum2 != FORCED_ORDERED_SYNC))) {
+				ch1 = ch2;
+				snum = snum2;
+				get_sync_ch_pcmd(p, ch1, chis[ch1],
+					&snum2, &ch2);
+				j++;
+			}
+			if ((j <= totalUsedBanks) && (snum != snum2)) {
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"SYNCCHECK: DEADLOCK:\n");
+				ch1 = i;
+				snum = 0xFF;
+				get_sync_ch_pcmd(p, ch1, chis[ch1],
+						&snum2, &ch2);
+				debug_boundary_error(ch2, totalUsedBanks, 0);
+				while (!EOLIST(ch2) && (ch2 != i) &&
+					((snum == 0xFF) ||
+					(snum2 != FORCED_ORDERED_SYNC))) {
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"Channel %d, cmdindx %d, "
+						"tag %d is waiting for "
+						"sync number %d "
+						"from channel %d\n",
+						ch1, chis[ch1],
+						p[ch1][chis[ch1]].Tag,
+						snum2, ch2);
+					ch1 = ch2;
+					snum = snum2;
+					get_sync_ch_pcmd(p, ch1, chis[ch1],
+							&snum2, &ch2);
+					debug_boundary_error(ch2,
+							totalUsedBanks, 0);
+				}
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"Channel %d, cmdindx %d, tag %d "
+					"is waiting for sync number %d "
+					"from channel %d",
+					ch1, chis[ch1],
+					p[ch1][chis[ch1]].Tag,
+					snum2, ch2);
+				if (!EOLIST(ch2))
+					nand_dbg_print(NAND_DBG_DEBUG,
+					", which is the initial channel!\n");
+				else if (snum2 != FORCED_ORDERED_SYNC)
+					nand_dbg_print(NAND_DBG_DEBUG,
+					" which does not have that "
+					"sync number!\n");
+				else
+					nand_dbg_print(NAND_DBG_DEBUG,
+					" which is th forced ordered "
+					"sync number that cannot proceed "
+					"until all channels reach it!\n");
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"Sync checking is aborting.\n");
+				done = 1;
+			}
+			if (j > totalUsedBanks) {
+				nand_dbg_print(NAND_DBG_DEBUG,
+					"SYNCCHECK: DEADLOCK: "
+					"Unknown case. "
+					"Infinite loop in deadlock check. "
+					"Aborting.\n");
+				done = 1;
+			}
+		}
+	}
+
+	return done;
+}
+
+static void cfo_helper_1(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+	int *chis, int *chMaxIndexes, int (*namb)[MAX_CHANS],
+	int i, int ch1, int syncNum)
+{
+	int k;
+
+	for (k = 0; k < totalUsedBanks; k++) {
+		if ((k != i) && (k != ch1)) {
+			if (namb[ch1][k] > namb[i][k]) {
+				if (!check_ordering(p, i, namb[k][i] + 1,
+					chis[i], k, namb[i][k] + 1,
+					namb[ch1][k]))
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"Above problem occured when "
+						"analyzing sync %d between "
+						"(ch:%d, indx:%d, tag:%d) & "
+						"(ch:%d, indx:%d, tag:%d)\n",
+						syncNum, i, chis[i],
+						p[i][chis[i]].Tag,
+						ch1, chis[ch1],
+						p[ch1][chis[ch1]].Tag);
+				namb[i][k] = namb[ch1][k];
+			} else if (namb[ch1][k] < namb[i][k]) {
+				if (!check_ordering(p, ch1,
+					namb[k][ch1] + 1,
+					chis[ch1], k,
+					namb[ch1][k] + 1,
+					namb[i][k]))
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"Above problem occured when "
+						"analyzing sync %d between "
+						"(ch:%d, indx:%d, tag:%d) & "
+						"(ch:%d, indx:%d, tag:%d)\n",
+						syncNum, i, chis[i],
+						p[i][chis[i]].Tag,
+						ch1, chis[ch1],
+						p[ch1][chis[ch1]].Tag);
+				namb[ch1][k] = namb[i][k];
+			}
+		}
+	}
+}
+
+static void cfo_helper_2(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+	int *chis, int *chMaxIndexes, int (*namb)[MAX_CHANS],
+	int i, int ch1, u8 *pidxchgd)
+{
+	int k, m, n;
+	int sync_num, ch2;
+
+	for (k = 0; k < totalUsedBanks; k++) {
+		if ((k != i) && (k != ch1)) {
+			if (!EOLIST(k) && get_sync_ch_pcmd(p, k,
+					chis[k], &sync_num, &ch2)) {
+				if (sync_num != FORCED_ORDERED_SYNC)
+					k = totalUsedBanks + 2;
+			}
+		}
+	}
+
+	if (k == totalUsedBanks) {
+		for (m = 0; m < (totalUsedBanks - 1); m++) {
+			for (n = m + 1; n < totalUsedBanks; n++) {
+				if (!check_ordering(p, m, namb[n][m] + 1,
+					chis[m], n, namb[m][n] + 1, chis[n]))
+					nand_dbg_print(NAND_DBG_DEBUG,
+						"Above problem occured when "
+						"analyzing sync %d between "
+						"(ch:%d, indx:%d, tag:%d) & "
+						"(ch:%d, indx:%d, tag:%d)\n",
+						sync_num, m, chis[m],
+						p[m][chis[m]].Tag,
+						n, chis[n],
+						p[n][chis[n]].Tag);
+				namb[n][m] = chis[m];
+				namb[m][n] = chis[n];
+			}
+			chis[m]++;
+		}
+		chis[m]++;
+		*pidxchgd = 1;
+	}
+}
+
+static int check_for_ordering(struct pending_cmd (*p)[MAX_CHANS + MAX_DESCS],
+	int *chis, int *chMaxIndexes, int (*namb)[MAX_CHANS])
+{
+	int i, done, ch1, ch2, syncNum, syncNum2;
+	u8 indexchgd;
+
+	done = 0;
+	indexchgd = 0;
+	for (i = 0; (i < totalUsedBanks) && !done && !indexchgd; i++) {
+		if (!EOLIST(i) &&
+			get_sync_ch_pcmd(p, i, chis[i], &syncNum, &ch1)) {
+			debug_boundary_error(ch1, totalUsedBanks, 0);
+			if (!EOLIST(ch1) && get_sync_ch_pcmd(p, ch1,
+					chis[ch1], &syncNum2, &ch2)) {
+				debug_boundary_error(ch2, totalUsedBanks, 0);
+				if ((syncNum == syncNum2) &&
+					(syncNum != FORCED_ORDERED_SYNC)) {
+					if (ch2 != i) {
+						nand_dbg_print(NAND_DBG_DEBUG,
+						"SYNCCHECK: ILLEGAL CASE: "
+						"Channel %d, cmdindx %d, "
+						"tag %d is waiting for "
+						"sync number %d "
+						"from channel %d, "
+						"which is waiting for "
+						"the same sync number "
+						"from channel %d. "
+						"Sync checking is aborting\n",
+						i, chis[i],
+						p[i][chis[i]].Tag,
+						syncNum, ch1, ch2);
+						done = 1;
+					} else {
+						if (!(debug_sync_cnt %
+							DBG_SNC_PRINTEVERY)) {
+							nand_dbg_print(
+							NAND_DBG_DEBUG,
+							"SYNCCHECK: "
+							"syncnum %d "
+							"betn Ch %d, "
+							"cmdindx %d, "
+							"tag %d & Ch %d, "
+							"cmdindx %d, tag %d. "
+							"chis="
+							"{%d, %d, %d, %d}\n",
+							syncNum, i,
+							chis[i],
+							p[i][chis[i]].Tag,
+							ch1, chis[ch1],
+							p[ch1][chis[ch1]].Tag,
+							chis[0], chis[1],
+							chis[2], chis[3]);
+						}
+						if (!check_ordering(p, i,
+							namb[ch1][i]+1,
+							chis[i], ch1,
+							namb[i][ch1]+1,
+							chis[ch1]))
+							nand_dbg_print(
+							NAND_DBG_DEBUG,
+							"Above problem "
+							"occured when "
+							"analyzing "
+							"sync %d "
+							"between "
+							"(ch:%d, indx:%d, "
+							"tag:%d) & "
+							"(ch:%d, indx:%d, "
+							"tag:%d)\n",
+							syncNum, i, chis[i],
+							p[i][chis[i]].Tag,
+							ch1, chis[ch1],
+							p[ch1][chis[ch1]].Tag);
+
+						namb[ch1][i] = chis[i];
+						namb[i][ch1] = chis[ch1];
+
+						cfo_helper_1(p, chis,
+							chMaxIndexes,
+							namb, i, ch1,
+							syncNum);
+
+						chis[i]++;
+						chis[ch1]++;
+						indexchgd = 1;
+					}
+				} else if ((syncNum == syncNum2) &&
+					   (syncNum == FORCED_ORDERED_SYNC)) {
+					cfo_helper_2(p, chis, chMaxIndexes,
+						namb, i, ch1, &indexchgd);
+				}
+			}
+		}
+	}
+
+	return done;
+}
+
+void CDMA_CheckSyncPoints(u16 tag_count)
+{
+	struct pending_cmd p_cmd_ch[MAX_CHANS][MAX_CHANS + MAX_DESCS];
+	int namb[MAX_CHANS][MAX_CHANS];
+	int chMaxIndexes[MAX_CHANS];
+	int chis[MAX_CHANS];
+	u32 i, j, k, alldone;
+
+	/* Initial Checks */
+	if (CACHE_BLOCK_NUMBER > 16) {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"SYNCCHECK: INIT FAILED: SyncCheck can only "
+			"work with upto 16 cache blocks \n");
+		return;
+	}
+
+	/* Initializations */
+	for (i = 0; i < totalUsedBanks; i++) {
+		chis[i] = 0;
+		for (j = 0; j < totalUsedBanks; j++)
+			namb[i][j] = -1;
+	}
+
+	pcmd_per_ch(p_cmd_ch, tag_count, chMaxIndexes);
+
+	if (!(debug_sync_cnt % DBG_SNC_PRINTEVERY)) {
+		nand_dbg_print(NAND_DBG_DEBUG, "SYNCCHECK: Cache Ptrs:");
+		for (j = 0; j < CACHE_BLOCK_NUMBER; j++)
+			nand_dbg_print(NAND_DBG_DEBUG, " %p",
+				Cache.ItemArray[j].pContent);
+		nand_dbg_print(NAND_DBG_DEBUG, "\n");
+	}
+
+	alldone = 0;
+	while (!alldone) {
+		for (i = 0; i < totalUsedBanks; i++) {
+			while (!EOLIST(i)) {
+				if (!p_cmd_ch[i][chis[i]].ChanSync[0])
+					chis[i]++;
+				else
+					break;
+			}
+		}
+		alldone = lookfor_deadlocks(p_cmd_ch, chis, chMaxIndexes);
+		alldone = check_for_ordering(p_cmd_ch, chis, chMaxIndexes,
+					namb);
+		if (!alldone) {
+			alldone = 1;
+			for (i = 0; alldone && (i < totalUsedBanks); i++) {
+				if (!EOLIST(i))
+					alldone = 0;
+			}
+		}
+	}
+
+	for (i = 0; i < totalUsedBanks; i++) {
+		for (k = i + 1; k < totalUsedBanks; k++) {
+			if (!check_ordering(p_cmd_ch, i, namb[k][i] + 1,
+				chMaxIndexes[i] - 1, k, namb[i][k] + 1,
+				chMaxIndexes[k] - 1))
+				nand_dbg_print(NAND_DBG_DEBUG,
+				"Above problem occurred when doing "
+				"end of list checks on channels %d & %d\n",
+				i, k);
+		}
+	}
+}
+
+#endif
+#endif
+
diff --git a/drivers/staging/mrst_nand/lld_cdma.h b/drivers/staging/mrst_nand/lld_cdma.h
new file mode 100644
index 0000000..5d19791
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld_cdma.h
@@ -0,0 +1,138 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+/* header for LLD_CDMA.c module */
+
+#ifndef _LLD_CDMA_
+#define _LLD_CDMA_
+
+#include "flash.h"
+
+#define  DEBUG_SYNC    1
+
+/*///////////   CDMA specific MACRO definition */
+#define MAX_DESCS         (255)
+#define MAX_CHANS  (4)
+#define MAX_SYNC_POINTS         (16)
+
+#define CHANNEL_SYNC_MASK       (0x000F)
+#define CHANNEL_DMA_MASK        (0x00F0)
+#define CHANNEL_ID_MASK         (0x0300)
+#define CHANNEL_CONT_MASK       (0x4000)
+#define CHANNEL_INTR_MASK       (0x8000)
+
+#define CHANNEL_SYNC_OFFSET     (0)
+#define CHANNEL_DMA_OFFSET      (4)
+#define CHANNEL_ID_OFFSET       (8)
+#define CHANNEL_CONT_OFFSET     (14)
+#define CHANNEL_INTR_OFFSET     (15)
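+
+/*
+ * Per the masks and offsets above, each 16-bit ChanSync word appears to
+ * pack a sync point number (bits 0-3), a DMA field (bits 4-7), a channel
+ * id (bits 8-9), and continue (bit 14) and interrupt (bit 15) flags.
+ */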
+
+#if CMD_DMA
+u16 CDMA_Data_CMD(u8 tag, u8 CMD, u8 *data, u32 block,
+			u16 page, u16 count, u16 flags);
+u16 CDMA_MemCopy_CMD(u8 tag, u8 *dest, u8 *src, u16 ByteCount,
+			u16 flags);
+u16 CDMA_Execute_CMDs(u16 tag_count);
+void CDMA_AddSyncPoints(u16 tag_count);
+void CDMA_CheckSyncPoints(u16 tag_count);
+void PrintPendingCMDs(u16 tag_count);
+void PrintPendingCMDsPerChannel(u16 tag_count);
+void PrintCDMA_Descriptors(void);
+u32 CDMA_Memory_Pool_Size(void);
+int CDMA_Mem_Config(u8 *pMem);
+
+extern u8 g_SBDCmdIndex;
+
+#endif
+
+#if FLASH_CDMA
+/*///////////   prototypes: APIs for LLD_CDMA */
+u16 CDMA_Flash_Init(void);
+int is_cdma_interrupt(void);
+u16 CDMA_Event_Status(void);
+#endif
+
+/* CMD-DMA Descriptor Struct.  These are defined by the CMD_DMA HW */
+struct cdma_descriptor {
+	u32 NxtPointerHi;
+	u32 NxtPointerLo;
+	u32 FlashPointerHi;
+	u32 FlashPointerLo;
+	u32 CommandType;
+	u32 MemAddrHi;
+	u32 MemAddrLo;
+	u32 CommandFlags;
+	u32 Channel;
+	u32 Status;
+	u32 MemCopyPointerHi;
+	u32 MemCopyPointerLo;
+	u32 Reserved12;
+	u32 Reserved13;
+	u32 Reserved14;
+	u32 Tag;
+};
+
+/* This struct holds one MemCopy descriptor as defined by the HW */
+struct memcpy_descriptor {
+	u32 NxtPointerHi;
+	u32 NxtPointerLo;
+	u32 SrcAddrHi;
+	u32 SrcAddrLo;
+	u32 DestAddrHi;
+	u32 DestAddrLo;
+	u32 XferSize;
+	u32 MemCopyFlags;
+	u32 MemCopyStatus;
+	u32 reserved9;
+	u32 reserved10;
+	u32 reserved11;
+	u32 reserved12;
+	u32 reserved13;
+	u32 reserved14;
+	u32 reserved15;
+};
+
+/* Pending CMD table entries (includes MemCopy parameters) */
+struct pending_cmd {
+	u8 Tag;
+	u8 CMD;
+	u8 *DataAddr;
+	u32 Block;
+	u16 Page;
+	u16 PageCount;
+	u8 *DataDestAddr;
+	u8 *DataSrcAddr;
+	u16 MemCopyByteCnt;
+	u16 Flags;
+	u16 ChanSync[MAX_CHANS + 1];
+	u16 Status;
+	u8    SBDCmdIndex;
+};
+
+extern struct pending_cmd PendingCMD[MAX_DESCS + MAX_CHANS];
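+
+/*
+ * Note: the first MAX_CHANS entries of PendingCMD[] serve as per-channel
+ * anchor (dummy) commands; the tagged commands filled in by the upper
+ * layers start at index MAX_CHANS (see emu_CDMA_Execute_CMDs).
+ */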
+
+#if DEBUG_SYNC
+extern u32 debug_sync_cnt;
+#endif
+
+/* Definitions for CMD DMA descriptor chain fields */
+#define     CMD_DMA_DESC_COMP   0x8000
+#define     CMD_DMA_DESC_FAIL   0x4000
+
+#endif /*_LLD_CDMA_*/
diff --git a/drivers/staging/mrst_nand/lld_emu.c b/drivers/staging/mrst_nand/lld_emu.c
new file mode 100644
index 0000000..8b17970
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld_emu.c
@@ -0,0 +1,788 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#include <linux/fs.h>
+#include <linux/slab.h>
+#include "flash.h"
+#include "ffsdefs.h"
+#include "lld_emu.h"
+#include "lld.h"
+#if CMD_DMA
+#include "lld_cdma.h"
+#endif
+
+#define GLOB_LLD_PAGES           64
+#define GLOB_LLD_PAGE_SIZE       (512+16)
+#define GLOB_LLD_PAGE_DATA_SIZE  512
+#define GLOB_LLD_BLOCKS          2048
+
+#if (CMD_DMA  && FLASH_EMU)
+#include "lld_cdma.h"
+u32 totalUsedBanks;
+u32 valid_banks[MAX_CHANS];
+#endif
+
+#if FLASH_EMU			/* This is for entire module */
+
+static u8 *flash_memory[GLOB_LLD_BLOCKS * GLOB_LLD_PAGES];
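+
+/*
+ * flash_memory[] holds one pointer per emulated page; all pages live in
+ * a single vmalloc'ed buffer set up by emu_Flash_Init, so entry i is
+ * simply flash_memory[0] + i * GLOB_LLD_PAGE_SIZE.
+ */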
+
+/* Read the nand emu file and fill its contents into flash_memory */
+int emu_load_file_to_mem(void)
+{
+	mm_segment_t fs;
+	struct file *nef_filp = NULL;
+	struct inode *inode = NULL;
+	loff_t nef_size = 0;
+	loff_t tmp_file_offset, file_offset;
+	ssize_t nread;
+	int i, rc = -EINVAL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	fs = get_fs();
+	set_fs(get_ds());
+
+	nef_filp = filp_open("/root/nand_emu_file", O_RDWR | O_LARGEFILE, 0);
+	if (IS_ERR(nef_filp)) {
+		printk(KERN_ERR "filp_open error: "
+		       "Unable to open nand emu file!\n");
+		return PTR_ERR(nef_filp);
+	}
+
+	if (nef_filp->f_path.dentry) {
+		inode = nef_filp->f_path.dentry->d_inode;
+	} else {
+		printk(KERN_ERR "Can not get valid inode!\n");
+		goto out;
+	}
+
+	nef_size = i_size_read(inode->i_mapping->host);
+	if (nef_size <= 0) {
+		printk(KERN_ERR "Invalid nand emu file size: "
+		       "0x%llx\n", nef_size);
+		goto out;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "nand emu file size: %lld\n",
+			       nef_size);
+	}
+
+	file_offset = 0;
+	for (i = 0; i < GLOB_LLD_BLOCKS * GLOB_LLD_PAGES; i++) {
+		tmp_file_offset = file_offset;
+		nread = vfs_read(nef_filp,
+				 (char __user *)flash_memory[i],
+				 GLOB_LLD_PAGE_SIZE, &tmp_file_offset);
+		if (nread < GLOB_LLD_PAGE_SIZE) {
+			printk(KERN_ERR "%s, Line %d - "
+			       "nand emu file partial read: "
+			       "%d bytes\n", __FILE__, __LINE__, (int)nread);
+			goto out;
+		}
+		file_offset += GLOB_LLD_PAGE_SIZE;
+	}
+	rc = 0;
+
+out:
+	filp_close(nef_filp, current->files);
+	set_fs(fs);
+	return rc;
+}
+
+/* Write contents of flash_memory to nand emu file */
+int emu_write_mem_to_file(void)
+{
+	mm_segment_t fs;
+	struct file *nef_filp = NULL;
+	struct inode *inode = NULL;
+	loff_t nef_size = 0;
+	loff_t tmp_file_offset, file_offset;
+	ssize_t nwritten;
+	int i, rc = -EINVAL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	fs = get_fs();
+	set_fs(get_ds());
+
+	nef_filp = filp_open("/root/nand_emu_file", O_RDWR | O_LARGEFILE, 0);
+	if (IS_ERR(nef_filp)) {
+		printk(KERN_ERR "filp_open error: "
+		       "Unable to open nand emu file!\n");
+		return PTR_ERR(nef_filp);
+	}
+
+	if (nef_filp->f_path.dentry) {
+		inode = nef_filp->f_path.dentry->d_inode;
+	} else {
+		printk(KERN_ERR "Invalid nef_filp->f_path.dentry value!\n");
+		goto out;
+	}
+
+	nef_size = i_size_read(inode->i_mapping->host);
+	if (nef_size <= 0) {
+		printk(KERN_ERR "Invalid nand emu file size: 0x%llx\n",
+		       nef_size);
+		goto out;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "nand emu file size: "
+			       "%lld\n", nef_size);
+	}
+
+	file_offset = 0;
+	for (i = 0; i < GLOB_LLD_BLOCKS * GLOB_LLD_PAGES; i++) {
+		tmp_file_offset = file_offset;
+		nwritten = vfs_write(nef_filp,
+				     (char __user *)flash_memory[i],
+				     GLOB_LLD_PAGE_SIZE, &tmp_file_offset);
+		if (nwritten < GLOB_LLD_PAGE_SIZE) {
+			printk(KERN_ERR "%s, Line %d - "
+			       "nand emu file partial write: "
+			       "%d bytes\n", __FILE__, __LINE__, (int)nwritten);
+			goto out;
+		}
+		file_offset += GLOB_LLD_PAGE_SIZE;
+	}
+	rc = 0;
+
+out:
+	filp_close(nef_filp, current->files);
+	set_fs(fs);
+	return rc;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Flash_Init
+* Inputs:       none
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:  Creates & initializes the flash RAM array.
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Flash_Init(void)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	flash_memory[0] = (u8 *)vmalloc(GLOB_LLD_PAGE_SIZE *
+						   GLOB_LLD_BLOCKS *
+						   GLOB_LLD_PAGES *
+						   sizeof(u8));
+	if (!flash_memory[0]) {
+		printk(KERN_ERR "Fail to allocate memory "
+		       "for nand emulator!\n");
+		return ERR;
+	}
+
+	memset((char *)(flash_memory[0]), 0xFF,
+	       GLOB_LLD_PAGE_SIZE * GLOB_LLD_BLOCKS * GLOB_LLD_PAGES *
+	       sizeof(u8));
+
+	for (i = 1; i < GLOB_LLD_BLOCKS * GLOB_LLD_PAGES; i++)
+		flash_memory[i] = flash_memory[i - 1] + GLOB_LLD_PAGE_SIZE;
+
+	emu_load_file_to_mem(); /* Load nand emu file to mem */
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Flash_Release
+* Inputs:       none
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:          Releases the flash.
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Flash_Release(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	emu_write_mem_to_file();  /* Write back mem to nand emu file */
+
+	vfree(flash_memory[0]);
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Read_Device_ID
+* Inputs:       none
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:  Sets up the DeviceInfo structure with the parameters
+*               of the emulated device
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+
+u16 emu_Read_Device_ID(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	DeviceInfo.wDeviceMaker = 0;
+	DeviceInfo.wDeviceType = 8;
+	DeviceInfo.wSpectraStartBlock = 36;
+	DeviceInfo.wSpectraEndBlock = GLOB_LLD_BLOCKS - 1;
+	DeviceInfo.wTotalBlocks = GLOB_LLD_BLOCKS;
+	DeviceInfo.wPagesPerBlock = GLOB_LLD_PAGES;
+	DeviceInfo.wPageSize = GLOB_LLD_PAGE_SIZE;
+	DeviceInfo.wPageDataSize = GLOB_LLD_PAGE_DATA_SIZE;
+	DeviceInfo.wPageSpareSize = GLOB_LLD_PAGE_SIZE -
+	    GLOB_LLD_PAGE_DATA_SIZE;
+	DeviceInfo.wBlockSize = DeviceInfo.wPageSize * GLOB_LLD_PAGES;
+	DeviceInfo.wBlockDataSize = DeviceInfo.wPageDataSize * GLOB_LLD_PAGES;
+	DeviceInfo.wDataBlockNum = (u32) (DeviceInfo.wSpectraEndBlock -
+						DeviceInfo.wSpectraStartBlock
+						+ 1);
+	DeviceInfo.MLCDevice = 1; /* Emulate MLC device */
+	DeviceInfo.nBitsInPageNumber =
+		(u8)GLOB_Calc_Used_Bits(DeviceInfo.wPagesPerBlock);
+	DeviceInfo.nBitsInPageDataSize =
+		(u8)GLOB_Calc_Used_Bits(DeviceInfo.wPageDataSize);
+	DeviceInfo.nBitsInBlockDataSize =
+		(u8)GLOB_Calc_Used_Bits(DeviceInfo.wBlockDataSize);
+
+#if CMD_DMA
+	totalUsedBanks = 4;
+	valid_banks[0] = 1;
+	valid_banks[1] = 1;
+	valid_banks[2] = 1;
+	valid_banks[3] = 1;
+#endif
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Flash_Reset
+* Inputs:       none
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:          Reset the flash
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Flash_Reset(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Erase_Block
+* Inputs:       Address
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:          Erase a block
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Erase_Block(u32 block_add)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (block_add >= DeviceInfo.wTotalBlocks) {
+		printk(KERN_ERR "emu_Erase_Block error! "
+		       "Block address too big: %d\n", block_add);
+		return FAIL;
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Erasing block %d\n",
+		(int)block_add);
+
+	for (i = block_add * GLOB_LLD_PAGES;
+	     i < ((block_add + 1) * GLOB_LLD_PAGES); i++) {
+		if (flash_memory[i]) {
+			memset((u8 *)(flash_memory[i]), 0xFF,
+			       DeviceInfo.wPageSize * sizeof(u8));
+		}
+	}
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Write_Page_Main
+* Inputs:       Write buffer address pointer
+*               Block number
+*               Page  number
+*               Number of pages to process
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:  Write the data in the buffer to main area of flash
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Write_Page_Main(u8 *write_data, u32 Block,
+			   u16 Page, u16 PageCount)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (Block >= DeviceInfo.wTotalBlocks)
+		return FAIL;
+
+	if (Page + PageCount > DeviceInfo.wPagesPerBlock)
+		return FAIL;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "emu_Write_Page_Main: "
+		       "lba %u Page %u PageCount %u\n",
+		       (unsigned int)Block,
+		       (unsigned int)Page, (unsigned int)PageCount);
+
+	for (i = 0; i < PageCount; i++) {
+		if (NULL == flash_memory[Block * GLOB_LLD_PAGES + Page]) {
+			printk(KERN_ERR "Run out of memory\n");
+			return FAIL;
+		}
+		memcpy((u8 *) (flash_memory[Block * GLOB_LLD_PAGES + Page]),
+		       write_data, DeviceInfo.wPageDataSize);
+		write_data += DeviceInfo.wPageDataSize;
+		Page++;
+	}
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Read_Page_Main
+* Inputs:       Read buffer address pointer
+*               Block number
+*               Page  number
+*               Number of pages to process
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:  Read the data from the flash main area to the buffer
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Read_Page_Main(u8 *read_data, u32 Block,
+			  u16 Page, u16 PageCount)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (Block >= DeviceInfo.wTotalBlocks)
+		return FAIL;
+
+	if (Page + PageCount > DeviceInfo.wPagesPerBlock)
+		return FAIL;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "emu_Read_Page_Main: "
+		       "lba %u Page %u PageCount %u\n",
+		       (unsigned int)Block,
+		       (unsigned int)Page, (unsigned int)PageCount);
+
+	for (i = 0; i < PageCount; i++) {
+		if (NULL == flash_memory[Block * GLOB_LLD_PAGES + Page]) {
+			memset(read_data, 0xFF, DeviceInfo.wPageDataSize);
+		} else {
+			memcpy(read_data,
+			       (u8 *) (flash_memory[Block * GLOB_LLD_PAGES
+						      + Page]),
+			       DeviceInfo.wPageDataSize);
+		}
+		read_data += DeviceInfo.wPageDataSize;
+		Page++;
+	}
+
+	return PASS;
+}
+
+#ifndef ELDORA
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Read_Page_Main_Spare
+* Inputs:       Write Buffer
+*                       Address
+*                       Buffer size
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:          Read from flash main+spare area
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Read_Page_Main_Spare(u8 *read_data, u32 Block,
+				u16 Page, u16 PageCount)
+{
+	int i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (Block >= DeviceInfo.wTotalBlocks) {
+		printk(KERN_ERR "Read Page Main+Spare "
+		       "Error: Block Address too big\n");
+		return FAIL;
+	}
+
+	if (Page + PageCount > DeviceInfo.wPagesPerBlock) {
+		printk(KERN_ERR "Read Page Main+Spare "
+		       "Error: Page number too big\n");
+		return FAIL;
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Read Page Main + Spare - "
+		       "No. of pages %u block %u start page %u\n",
+		       (unsigned int)PageCount,
+		       (unsigned int)Block, (unsigned int)Page);
+
+	for (i = 0; i < PageCount; i++) {
+		if (NULL == flash_memory[Block * GLOB_LLD_PAGES + Page]) {
+			memset(read_data, 0xFF, DeviceInfo.wPageSize);
+		} else {
+			memcpy(read_data, (u8 *) (flash_memory[Block *
+								 GLOB_LLD_PAGES
+								 + Page]),
+			       DeviceInfo.wPageSize);
+		}
+
+		read_data += DeviceInfo.wPageSize;
+		Page++;
+	}
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Write_Page_Main_Spare
+* Inputs:       Write buffer
+*                       address
+*                       buffer length
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:          Write the buffer to main+spare area of flash
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Write_Page_Main_Spare(u8 *write_data, u32 Block,
+				 u16 Page, u16 page_count)
+{
+	u16 i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (Block >= DeviceInfo.wTotalBlocks) {
+		printk(KERN_ERR "Write Page Main + Spare "
+		       "Error: Block Address too big\n");
+		return FAIL;
+	}
+
+	if (Page + page_count > DeviceInfo.wPagesPerBlock) {
+		printk(KERN_ERR "Write Page Main + Spare "
+		       "Error: Page number too big\n");
+		return FAIL;
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Write Page Main+Spare - "
+		       "No. of pages %u block %u start page %u\n",
+		       (unsigned int)page_count,
+		       (unsigned int)Block, (unsigned int)Page);
+
+	for (i = 0; i < page_count; i++) {
+		if (NULL == flash_memory[Block * GLOB_LLD_PAGES + Page]) {
+			printk(KERN_ERR "Run out of memory!\n");
+			return FAIL;
+		}
+		memcpy((u8 *) (flash_memory[Block * GLOB_LLD_PAGES + Page]),
+		       write_data, DeviceInfo.wPageSize);
+		write_data += DeviceInfo.wPageSize;
+		Page++;
+	}
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Write_Page_Spare
+* Inputs:       Write buffer
+*                       Address
+*                       buffer size
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:          Write the buffer in the spare area
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Write_Page_Spare(u8 *write_data, u32 Block,
+			    u16 Page, u16 PageCount)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (Block >= DeviceInfo.wTotalBlocks) {
+		printk(KERN_ERR "Read Page Spare Error: "
+		       "Block Address too big\n");
+		return FAIL;
+	}
+
+	if (Page + PageCount > DeviceInfo.wPagesPerBlock) {
+		printk(KERN_ERR "Read Page Spare Error: "
+		       "Page number too big\n");
+		return FAIL;
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Write Page Spare- "
+		       "block %u page %u\n",
+		       (unsigned int)Block, (unsigned int)Page);
+
+	if (NULL == flash_memory[Block * GLOB_LLD_PAGES + Page]) {
+		printk(KERN_ERR "Run out of memory!\n");
+		return FAIL;
+	}
+
+	memcpy((u8 *) (flash_memory[Block * GLOB_LLD_PAGES + Page] +
+			 DeviceInfo.wPageDataSize), write_data,
+	       (DeviceInfo.wPageSize - DeviceInfo.wPageDataSize));
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Read_Page_Spare
+* Inputs:       Write Buffer
+*                       Address
+*                       Buffer size
+* Outputs:      PASS=0 (notice 0=ok here)
+* Description:          Read data from the spare area
+*
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_Read_Page_Spare(u8 *write_data, u32 Block,
+			   u16 Page, u16 PageCount)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (Block >= DeviceInfo.wTotalBlocks) {
+		printk(KERN_ERR "Read Page Spare "
+		       "Error: Block Address too big\n");
+		return FAIL;
+	}
+
+	if (Page + PageCount > DeviceInfo.wPagesPerBlock) {
+		printk(KERN_ERR "Read Page Spare "
+		       "Error: Page number too big\n");
+		return FAIL;
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Read Page Spare- "
+		       "block %u page %u\n",
+		       (unsigned int)Block, (unsigned int)Page);
+
+	if (NULL == flash_memory[Block * GLOB_LLD_PAGES + Page]) {
+		memset(write_data, 0xFF,
+		       (DeviceInfo.wPageSize - DeviceInfo.wPageDataSize));
+	} else {
+		memcpy(write_data,
+		       (u8 *) (flash_memory[Block * GLOB_LLD_PAGES + Page]
+				 + DeviceInfo.wPageDataSize),
+		       (DeviceInfo.wPageSize - DeviceInfo.wPageDataSize));
+	}
+
+	return PASS;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Enable_Disable_Interrupts
+* Inputs:       enable or disable
+* Outputs:      none
+* Description:  NOP
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+void emu_Enable_Disable_Interrupts(u16 INT_ENABLE)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+}
+
+u16 emu_Get_Bad_Block(u32 block)
+{
+	return 0;
+}
+
+#if CMD_DMA
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Support for CDMA functions
+************************************
+*       emu_CDMA_Flash_Init
+*           CDMA_process_data command   (use LLD_CDMA)
+*           CDMA_MemCopy_CMD            (use LLD_CDMA)
+*       emu_CDMA_execute all commands
+*       emu_CDMA_Event_Status
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_CDMA_Flash_Init(void)
+{
+	u16 i;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	for (i = 0; i < MAX_DESCS + MAX_CHANS; i++) {
+		PendingCMD[i].CMD = 0;
+		PendingCMD[i].Tag = 0;
+		PendingCMD[i].DataAddr = 0;
+		PendingCMD[i].Block = 0;
+		PendingCMD[i].Page = 0;
+		PendingCMD[i].PageCount = 0;
+		PendingCMD[i].DataDestAddr = 0;
+		PendingCMD[i].DataSrcAddr = 0;
+		PendingCMD[i].MemCopyByteCnt = 0;
+		PendingCMD[i].ChanSync[0] = 0;
+		PendingCMD[i].ChanSync[1] = 0;
+		PendingCMD[i].ChanSync[2] = 0;
+		PendingCMD[i].ChanSync[3] = 0;
+		PendingCMD[i].ChanSync[4] = 0;
+		PendingCMD[i].Status = 3;
+	}
+
+	return PASS;
+}
+
+static void emu_isr(int irq, void *dev_id)
+{
+	/* TODO:  ... */
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     CDMA_Execute_CMDs
+* Inputs:       tag_count:  the number of pending cmds to do
+* Outputs:      PASS/FAIL
+* Description:  execute each command in the pending CMD array
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_CDMA_Execute_CMDs(u16 tag_count)
+{
+	u16 i, j;
+	u8 CMD;		/* cmd parameter */
+	u8 *data;
+	u32 block;
+	u16 page;
+	u16 count;
+	u16 status = PASS;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	nand_dbg_print(NAND_DBG_TRACE, "At start of Execute CMDs: "
+		       "Tag Count %u\n", tag_count);
+
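+	/* Fill the leading slots (one per used bank) with dummy anchor
+	 * commands; the real tagged commands start at index MAX_CHANS. */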
+	for (i = 0; i < totalUsedBanks; i++) {
+		PendingCMD[i].CMD = DUMMY_CMD;
+		PendingCMD[i].Tag = 0xFF;
+		PendingCMD[i].Block =
+		    (DeviceInfo.wTotalBlocks / totalUsedBanks) * i;
+
+		for (j = 0; j <= MAX_CHANS; j++)
+			PendingCMD[i].ChanSync[j] = 0;
+	}
+
+	CDMA_Execute_CMDs(tag_count);
+
+#if DEBUG_SYNC
+	if (!(debug_sync_cnt % DBG_SNC_PRINTEVERY)) {
+		nand_dbg_print(NAND_DBG_DEBUG, "_%u_", debug_sync_cnt);
+#endif
+#ifdef VERBOSE
+		PrintPendingCMDs(tag_count);
+#endif
+#if DEBUG_SYNC
+#ifdef VERBOSE
+		PrintPendingCMDsPerChannel(tag_count);
+#endif
+	}
+	debug_sync_cnt++;
+#endif
+
+	for (i = MAX_CHANS;
+	     i < tag_count + MAX_CHANS; i++) {
+		CMD = PendingCMD[i].CMD;
+		data = PendingCMD[i].DataAddr;
+		block = PendingCMD[i].Block;
+		page = PendingCMD[i].Page;
+		count = PendingCMD[i].PageCount;
+
+		switch (CMD) {
+		case ERASE_CMD:
+			emu_Erase_Block(block);
+			PendingCMD[i].Status = PASS;
+			break;
+		case WRITE_MAIN_CMD:
+			emu_Write_Page_Main(data, block, page, count);
+			PendingCMD[i].Status = PASS;
+			break;
+		case WRITE_MAIN_SPARE_CMD:
+			emu_Write_Page_Main_Spare(data, block, page, count);
+			PendingCMD[i].Status = PASS;
+			break;
+		case READ_MAIN_CMD:
+			emu_Read_Page_Main(data, block, page, count);
+			PendingCMD[i].Status = PASS;
+			break;
+		case MEMCOPY_CMD:
+			memcpy(PendingCMD[i].DataDestAddr,
+			       PendingCMD[i].DataSrcAddr,
+			       PendingCMD[i].MemCopyByteCnt);
+			/* fall through */
+		case DUMMY_CMD:
+			PendingCMD[i].Status = PASS;
+			break;
+		default:
+			PendingCMD[i].Status = FAIL;
+			break;
+		}
+	}
+
+	/*
+	 * Temporary code to reset the PendingCMD array for basic testing.
+	 * It should be done at the end of the event status function.
+	 */
+	for (i = tag_count + MAX_CHANS; i < MAX_DESCS; i++) {
+		PendingCMD[i].CMD = 0;
+		PendingCMD[i].Tag = 0;
+		PendingCMD[i].DataAddr = 0;
+		PendingCMD[i].Block = 0;
+		PendingCMD[i].Page = 0;
+		PendingCMD[i].PageCount = 0;
+		PendingCMD[i].DataDestAddr = 0;
+		PendingCMD[i].DataSrcAddr = 0;
+		PendingCMD[i].MemCopyByteCnt = 0;
+		PendingCMD[i].ChanSync[0] = 0;
+		PendingCMD[i].ChanSync[1] = 0;
+		PendingCMD[i].ChanSync[2] = 0;
+		PendingCMD[i].ChanSync[3] = 0;
+		PendingCMD[i].ChanSync[4] = 0;
+		PendingCMD[i].Status = CMD_NOT_DONE;
+	}
+
+	nand_dbg_print(NAND_DBG_TRACE, "At end of Execute CMDs.\n");
+
+	emu_isr(0, 0); /* Null ISR for now; needs to be filled in later */
+
+	return status;
+}
+
+/*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&
+* Function:     emu_Event_Status
+* Inputs:       none
+* Outputs:      Event_Status code
+* Description:  This function can also be used to force errors
+*&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&*/
+u16 emu_CDMA_Event_Status(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	return EVENT_PASS;
+}
+
+#endif /* CMD_DMA */
+#endif /* !ELDORA */
+#endif /* FLASH_EMU */
diff --git a/drivers/staging/mrst_nand/lld_emu.h b/drivers/staging/mrst_nand/lld_emu.h
new file mode 100644
index 0000000..a34a552
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld_emu.h
@@ -0,0 +1,51 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#ifndef _LLD_EMU_
+#define _LLD_EMU_
+
+#include "ffsport.h"
+#include "ffsdefs.h"
+
+/* prototypes: emulator API functions */
+extern u16 emu_Flash_Reset(void);
+extern u16 emu_Flash_Init(void);
+extern u16 emu_Flash_Release(void);
+extern u16 emu_Read_Device_ID(void);
+extern u16 emu_Erase_Block(u32 block_addr);
+extern u16 emu_Write_Page_Main(u8 *write_data, u32 Block,
+				u16 Page, u16 PageCount);
+extern u16 emu_Read_Page_Main(u8 *read_data, u32 Block, u16 Page,
+				 u16 PageCount);
+extern u16 emu_Event_Status(void);
+extern void emu_Enable_Disable_Interrupts(u16 INT_ENABLE);
+extern u16 emu_Write_Page_Main_Spare(u8 *write_data, u32 Block,
+					u16 Page, u16 PageCount);
+extern u16 emu_Write_Page_Spare(u8 *write_data, u32 Block,
+					u16 Page, u16 PageCount);
+extern u16 emu_Read_Page_Main_Spare(u8 *read_data, u32 Block,
+				       u16 Page, u16 PageCount);
+extern u16 emu_Read_Page_Spare(u8 *read_data, u32 Block, u16 Page,
+				  u16 PageCount);
+extern u16 emu_Get_Bad_Block(u32 block);
+
+u16 emu_CDMA_Flash_Init(void);
+u16 emu_CDMA_Execute_CMDs(u16 tag_count);
+u16 emu_CDMA_Event_Status(void);
+#endif /*_LLD_EMU_*/
diff --git a/drivers/staging/mrst_nand/lld_nand.c b/drivers/staging/mrst_nand/lld_nand.c
new file mode 100644
index 0000000..56ef843
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld_nand.c
@@ -0,0 +1,3113 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#include "lld.h"
+#include "lld_nand.h"
+
+#include "spectraswconfig.h"
+#include "flash.h"
+#include "ffsdefs.h"
+
+#include <linux/interrupt.h>
+#include <linux/delay.h>
+#include <linux/wait.h>
+#include <linux/mutex.h>
+
+#if (FLASH_NAND  || FLASH_CDMA)
+#include "NAND_Regs_4.h"
+
+#define SPECTRA_NAND_NAME    "nd"
+
+#define CEIL_DIV(X, Y) (((X)%(Y)) ? ((X)/(Y)+1) : ((X)/(Y)))
+
+#define INT_IDLE_STATE                 0
+#define INT_READ_PAGE_MAIN    0x01
+#define INT_WRITE_PAGE_MAIN    0x02
+#define INT_PIPELINE_READ_AHEAD    0x04
+#define INT_PIPELINE_WRITE_AHEAD    0x08
+#define INT_MULTI_PLANE_READ    0x10
+#define INT_MULTI_PLANE_WRITE    0x11
+
+struct mrst_nand_info {
+	struct pci_dev *dev;
+	u32 state;
+	u32 flash_bank;
+	u8 *read_data;
+	u8 *write_data;
+	u32 block;
+	u16 page;
+	u32 use_dma;
+	void __iomem *ioaddr;  /* Mapped address */
+	int ret;
+	struct completion complete;
+};
+
+static struct mrst_nand_info info;
+
+int totalUsedBanks;
+u32 GLOB_valid_banks[LLD_MAX_FLASH_BANKS];
+
+/* Ugly hack to fix code that used 8KB or 512-byte arrays
+ * on the < 4KB Linux kernel stack */
+/* static byte page_main_spare[MAX_PAGE_MAINSPARE_AREA]; */
+static u8 page_spare[MAX_PAGE_SPARE_AREA];
+static u8 pReadSpareBuf[MAX_PAGE_SPARE_AREA];
+
+void __iomem *FlashReg;
+void __iomem *FlashMem;
+
+u16 conf_parameters[] = {
+	0x0000,
+	0x0000,
+	0x01F4,
+	0x01F4,
+	0x01F4,
+	0x01F4,
+	0x0000,
+	0x0000,
+	0x0001,
+	0x0000,
+	0x0000,
+	0x0000,
+	0x0000,
+	0x0040,
+	0x0001,
+	0x000A,
+	0x000A,
+	0x000A,
+	0x0000,
+	0x0000,
+	0x0005,
+	0x0012,
+	0x000C
+};
+
+u16   NAND_Get_Bad_Block(u32 block)
+{
+	u32 status = PASS;
+	u32 flag_bytes  = 0;
+	u32 skip_bytes  = DeviceInfo.wSpareSkipBytes;
+	u32 page, i;
+
+	if (ioread32(FlashReg + ECC_ENABLE))
+		flag_bytes = DeviceInfo.wNumPageSpareFlag;
+
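+	/*
+	 * A block is treated as defective if any bad-block marker byte in
+	 * the spare area of its first two or last two pages is not 0xFF.
+	 */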
+	for (page = 0; page < 2; page++) {
+		status = NAND_Read_Page_Spare(pReadSpareBuf, block, page, 1);
+		if (status != PASS)
+			return READ_ERROR;
+		for (i = flag_bytes; i < (flag_bytes + skip_bytes); i++)
+			if (pReadSpareBuf[i] != 0xff)
+				return DEFECTIVE_BLOCK;
+	}
+
+	for (page = 1; page < 3; page++) {
+		status = NAND_Read_Page_Spare(pReadSpareBuf, block,
+			DeviceInfo.wPagesPerBlock - page, 1);
+		if (status != PASS)
+			return READ_ERROR;
+		for (i = flag_bytes; i < (flag_bytes + skip_bytes); i++)
+			if (pReadSpareBuf[i] != 0xff)
+				return DEFECTIVE_BLOCK;
+	}
+
+	return GOOD_BLOCK;
+}
+
+
+u16 NAND_Flash_Reset(void)
+{
+	u32 i;
+	u32 intr_status_rst_comp[4] = {INTR_STATUS0__RST_COMP,
+		INTR_STATUS1__RST_COMP,
+		INTR_STATUS2__RST_COMP,
+		INTR_STATUS3__RST_COMP};
+	u32 intr_status[4] = {INTR_STATUS0, INTR_STATUS1,
+		INTR_STATUS2, INTR_STATUS3};
+	u32 device_reset_banks[4] = {DEVICE_RESET__BANK0,
+		DEVICE_RESET__BANK1,
+		DEVICE_RESET__BANK2,
+		DEVICE_RESET__BANK3};
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	for (i = 0 ; i < LLD_MAX_FLASH_BANKS; i++)
+		iowrite32(intr_status_rst_comp[i], FlashReg + intr_status[i]);
+
+	for (i = 0 ; i < LLD_MAX_FLASH_BANKS; i++) {
+		if (!GLOB_valid_banks[i])
+			break;
+		iowrite32(device_reset_banks[i], FlashReg + DEVICE_RESET);
+		while (!(ioread32(FlashReg + intr_status[i]) &
+			intr_status_rst_comp[i]))
+			;
+	}
+
+	for (i = 0; i < LLD_MAX_FLASH_BANKS; i++)
+		iowrite32(intr_status_rst_comp[i], FlashReg + intr_status[i]);
+
+	return PASS;
+}
+
+static void NAND_ONFi_Timing_Mode(u16 mode)
+{
+	u16 Trea[6] = {40, 30, 25, 20, 20, 16};
+	u16 Trp[6] = {50, 25, 17, 15, 12, 10};
+	u16 Treh[6] = {30, 15, 15, 10, 10, 7};
+	u16 Trc[6] = {100, 50, 35, 30, 25, 20};
+	u16 Trhoh[6] = {0, 15, 15, 15, 15, 15};
+	u16 Trloh[6] = {0, 0, 0, 0, 5, 5};
+	u16 Tcea[6] = {100, 45, 30, 25, 25, 25};
+	u16 Tadl[6] = {200, 100, 100, 100, 70, 70};
+	u16 Trhw[6] = {200, 100, 100, 100, 100, 100};
+	u16 Trhz[6] = {200, 100, 100, 100, 100, 100};
+	u16 Twhr[6] = {120, 80, 80, 60, 60, 60};
+	u16 Tcs[6] = {70, 35, 25, 25, 20, 15};
+
+	u16 TclsRising = 1;
+	u16 data_invalid_rhoh, data_invalid_rloh, data_invalid;
+	u16 dv_window = 0;
+	u16 en_lo, en_hi;
+	u16 acc_clks;
+	u16 addr_2_data, re_2_we, re_2_re, we_2_re, cs_cnt;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	en_lo = CEIL_DIV(Trp[mode], CLK_X);
+	en_hi = CEIL_DIV(Treh[mode], CLK_X);
+
+#if ONFI_BLOOM_TIME
+	if ((en_hi * CLK_X) < (Treh[mode] + 2))
+		en_hi++;
+#endif
+
+	if ((en_lo + en_hi) * CLK_X < Trc[mode])
+		en_lo += CEIL_DIV((Trc[mode] - (en_lo + en_hi) * CLK_X), CLK_X);
+
+	if ((en_lo + en_hi) < CLK_MULTI)
+		en_lo += CLK_MULTI - en_lo - en_hi;
+
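+	/*
+	 * Widen the read-enable low time until the data valid window
+	 * (earliest data-invalid point minus tREA) reaches at least 8,
+	 * presumably ns to match the ONFI timing tables above.
+	 */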
+	while (dv_window < 8) {
+		data_invalid_rhoh = en_lo * CLK_X + Trhoh[mode];
+
+		data_invalid_rloh = (en_lo + en_hi) * CLK_X + Trloh[mode];
+
+		data_invalid =
+		    data_invalid_rhoh <
+		    data_invalid_rloh ? data_invalid_rhoh : data_invalid_rloh;
+
+		dv_window = data_invalid - Trea[mode];
+
+		if (dv_window < 8)
+			en_lo++;
+	}
+
+	acc_clks = CEIL_DIV(Trea[mode], CLK_X);
+
+	while (((acc_clks * CLK_X) - Trea[mode]) < 3)
+		acc_clks++;
+
+	if ((data_invalid - acc_clks * CLK_X) < 2)
+		nand_dbg_print(NAND_DBG_WARN, "%s, Line %d: Warning!\n",
+			__FILE__, __LINE__);
+
+	addr_2_data = CEIL_DIV(Tadl[mode], CLK_X);
+	re_2_we = CEIL_DIV(Trhw[mode], CLK_X);
+	re_2_re = CEIL_DIV(Trhz[mode], CLK_X);
+	we_2_re = CEIL_DIV(Twhr[mode], CLK_X);
+	cs_cnt = CEIL_DIV((Tcs[mode] - Trp[mode]), CLK_X);
+	if (!TclsRising)
+		cs_cnt = CEIL_DIV(Tcs[mode], CLK_X);
+	if (cs_cnt == 0)
+		cs_cnt = 1;
+
+	if (Tcea[mode]) {
+		while (((cs_cnt * CLK_X) + Trea[mode]) < Tcea[mode])
+			cs_cnt++;
+	}
+
+	iowrite32(acc_clks, FlashReg + ACC_CLKS);
+	iowrite32(re_2_we, FlashReg + RE_2_WE);
+	iowrite32(re_2_re, FlashReg + RE_2_RE);
+	iowrite32(we_2_re, FlashReg + WE_2_RE);
+	iowrite32(addr_2_data, FlashReg + ADDR_2_DATA);
+	iowrite32(en_lo, FlashReg + RDWR_EN_LO_CNT);
+	iowrite32(en_hi, FlashReg + RDWR_EN_HI_CNT);
+	iowrite32(cs_cnt, FlashReg + CS_SETUP_CNT);
+}
+
+static void index_addr(u32 address, u32 data)
+{
+	iowrite32(address, FlashMem);
+	iowrite32(data, FlashMem + 0x10);
+}
+
+static void index_addr_read_data(u32 address, u32 *pdata)
+{
+	iowrite32(address, FlashMem);
+	*pdata = ioread32(FlashMem + 0x10);
+}
+
+static void set_ecc_config(void)
+{
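+	/*
+	 * In both branches below, the number of spare flag bytes is the
+	 * page spare size minus the ECC bytes consumed by all ECC sectors
+	 * in the page, minus the skipped marker bytes.
+	 */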
+	if ((ioread32(FlashReg + ECC_CORRECTION) & ECC_CORRECTION__VALUE)
+		== 1) {
+		DeviceInfo.wECCBytesPerSector = 4;
+		DeviceInfo.wECCBytesPerSector *= DeviceInfo.wDevicesConnected;
+		DeviceInfo.wNumPageSpareFlag =
+			DeviceInfo.wPageSpareSize -
+			DeviceInfo.wPageDataSize /
+			(ECC_SECTOR_SIZE * DeviceInfo.wDevicesConnected) *
+			DeviceInfo.wECCBytesPerSector
+			- DeviceInfo.wSpareSkipBytes;
+	} else {
+		DeviceInfo.wECCBytesPerSector =
+			(ioread32(FlashReg + ECC_CORRECTION) &
+			ECC_CORRECTION__VALUE) * 13 / 8;
+		if ((DeviceInfo.wECCBytesPerSector) % 2 == 0)
+			DeviceInfo.wECCBytesPerSector += 2;
+		else
+			DeviceInfo.wECCBytesPerSector += 1;
+
+		DeviceInfo.wECCBytesPerSector *= DeviceInfo.wDevicesConnected;
+		DeviceInfo.wNumPageSpareFlag = DeviceInfo.wPageSpareSize -
+			DeviceInfo.wPageDataSize /
+			(ECC_SECTOR_SIZE * DeviceInfo.wDevicesConnected) *
+			DeviceInfo.wECCBytesPerSector
+			- DeviceInfo.wSpareSkipBytes;
+	}
+}
+
+static u16 get_onfi_nand_para(void)
+{
+	int i;
+	u16 blks_lun_l, blks_lun_h, n_of_luns;
+	u32 blockperlun, id;
+
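+	/* Reset banks 0-3 in sequence, stopping at the first bank that
+	 * reports a timeout instead of reset completion. */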
+	iowrite32(DEVICE_RESET__BANK0, FlashReg + DEVICE_RESET);
+
+	while (!((ioread32(FlashReg + INTR_STATUS0) &
+		INTR_STATUS0__RST_COMP) |
+		(ioread32(FlashReg + INTR_STATUS0) &
+		INTR_STATUS0__TIME_OUT)))
+		;
+
+	if (ioread32(FlashReg + INTR_STATUS0) & INTR_STATUS0__RST_COMP) {
+		iowrite32(DEVICE_RESET__BANK1, FlashReg + DEVICE_RESET);
+		while (!((ioread32(FlashReg + INTR_STATUS1) &
+			INTR_STATUS1__RST_COMP) |
+			(ioread32(FlashReg + INTR_STATUS1) &
+			INTR_STATUS1__TIME_OUT)))
+			;
+
+		if (ioread32(FlashReg + INTR_STATUS1) &
+			INTR_STATUS1__RST_COMP) {
+			iowrite32(DEVICE_RESET__BANK2,
+				FlashReg + DEVICE_RESET);
+			while (!((ioread32(FlashReg + INTR_STATUS2) &
+				INTR_STATUS2__RST_COMP) |
+				(ioread32(FlashReg + INTR_STATUS2) &
+				INTR_STATUS2__TIME_OUT)))
+				;
+
+			if (ioread32(FlashReg + INTR_STATUS2) &
+				INTR_STATUS2__RST_COMP) {
+				iowrite32(DEVICE_RESET__BANK3,
+					FlashReg + DEVICE_RESET);
+				while (!((ioread32(FlashReg + INTR_STATUS3) &
+					INTR_STATUS3__RST_COMP) |
+					(ioread32(FlashReg + INTR_STATUS3) &
+					INTR_STATUS3__TIME_OUT)))
+					;
+			} else {
+				printk(KERN_ERR "Getting a time out for bank 2!\n");
+			}
+		} else {
+			printk(KERN_ERR "Getting a time out for bank 1!\n");
+		}
+	}
+
+	iowrite32(INTR_STATUS0__TIME_OUT, FlashReg + INTR_STATUS0);
+	iowrite32(INTR_STATUS1__TIME_OUT, FlashReg + INTR_STATUS1);
+	iowrite32(INTR_STATUS2__TIME_OUT, FlashReg + INTR_STATUS2);
+	iowrite32(INTR_STATUS3__TIME_OUT, FlashReg + INTR_STATUS3);
+
+	DeviceInfo.wONFIDevFeatures =
+		ioread32(FlashReg + ONFI_DEVICE_FEATURES);
+	DeviceInfo.wONFIOptCommands =
+		ioread32(FlashReg + ONFI_OPTIONAL_COMMANDS);
+	DeviceInfo.wONFITimingMode =
+		ioread32(FlashReg + ONFI_TIMING_MODE);
+	DeviceInfo.wONFIPgmCacheTimingMode =
+		ioread32(FlashReg + ONFI_PGM_CACHE_TIMING_MODE);
+
+	n_of_luns = ioread32(FlashReg + ONFI_DEVICE_NO_OF_LUNS) &
+		ONFI_DEVICE_NO_OF_LUNS__NO_OF_LUNS;
+	blks_lun_l = ioread32(FlashReg + ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_L);
+	blks_lun_h = ioread32(FlashReg + ONFI_DEVICE_NO_OF_BLOCKS_PER_LUN_U);
+
+	blockperlun = (blks_lun_h << 16) | blks_lun_l;
+
+	DeviceInfo.wTotalBlocks = n_of_luns * blockperlun;
+
+	if (!(ioread32(FlashReg + ONFI_TIMING_MODE) &
+		ONFI_TIMING_MODE__VALUE))
+		return FAIL;
+
+	for (i = 5; i > 0; i--) {
+		if (ioread32(FlashReg + ONFI_TIMING_MODE) & (0x01 << i))
+			break;
+	}
+
+#if MODE5_WORKAROUND
+	if (i == 5)
+		i = 4;
+#endif
+
+	NAND_ONFi_Timing_Mode(i);
+
+	index_addr(MODE_11 | 0, 0x90);
+	index_addr(MODE_11 | 1, 0);
+
+	for (i = 0; i < 3; i++)
+		index_addr_read_data(MODE_11 | 2, &id);
+
+	nand_dbg_print(NAND_DBG_DEBUG, "3rd ID: 0x%x\n", id);
+
+	DeviceInfo.MLCDevice = id & 0x0C;
+
+	return PASS;
+}
+
+static void get_samsung_nand_para(void)
+{
+	u8 no_of_planes;
+	u32 blk_size;
+	u64 plane_size, capacity;
+	u32 id_bytes[5];
+	int i;
+
+	index_addr((u32)(MODE_11 | 0), 0x90);
+	index_addr((u32)(MODE_11 | 1), 0);
+	for (i = 0; i < 5; i++)
+		index_addr_read_data((u32)(MODE_11 | 2), &id_bytes[i]);
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"ID bytes: 0x%x, 0x%x, 0x%x, 0x%x, 0x%x\n",
+		id_bytes[0], id_bytes[1], id_bytes[2],
+		id_bytes[3], id_bytes[4]);
+
+	no_of_planes = 1 << ((id_bytes[4] & 0x0c) >> 2);
+	plane_size  = (u64)64 << ((id_bytes[4] & 0x70) >> 4);
+	blk_size = 64 << ((ioread32(FlashReg + DEVICE_PARAM_1) & 0x30) >> 4);
+	capacity = (u64)128 * plane_size * no_of_planes;
+
+	DeviceInfo.wTotalBlocks = (u32)GLOB_u64_Div(capacity, blk_size);
+}
+
+static void find_valid_banks(void)
+{
+	u32 id[LLD_MAX_FLASH_BANKS];
+	int i;
+
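+	/* Bank 0 is valid if it returns a nonzero maker ID byte; any other
+	 * bank is valid only if its ID byte matches bank 0's. */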
+	totalUsedBanks = 0;
+	for (i = 0; i < LLD_MAX_FLASH_BANKS; i++) {
+		index_addr((u32)(MODE_11 | (i << 24) | 0), 0x90);
+		index_addr((u32)(MODE_11 | (i << 24) | 1), 0);
+		index_addr_read_data((u32)(MODE_11 | (i << 24) | 2), &id[i]);
+
+		nand_dbg_print(NAND_DBG_DEBUG,
+			"Return 1st ID for bank[%d]: %x\n", i, id[i]);
+
+		if (i == 0) {
+			if (id[i] & 0x0ff)
+				GLOB_valid_banks[i] = 1;
+		} else {
+			if ((id[i] & 0x0ff) == (id[0] & 0x0ff))
+				GLOB_valid_banks[i] = 1;
+		}
+
+		totalUsedBanks += GLOB_valid_banks[i];
+	}
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"totalUsedBanks: %d\n", totalUsedBanks);
+}
+
+static void detect_partition_feature(void)
+{
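+	/*
+	 * If the controller exposes partition support and partition 1 is
+	 * owned by this driver, derive the usable block range from the
+	 * MIN/MAX bank and block address registers; otherwise expose
+	 * everything from SPECTRA_START_BLOCK to the last block.
+	 */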
+	if (ioread32(FlashReg + FEATURES) & FEATURES__PARTITION) {
+		if ((ioread32(FlashReg + PERM_SRC_ID_1) &
+			PERM_SRC_ID_1__SRCID) == SPECTRA_PARTITION_ID) {
+			DeviceInfo.wSpectraStartBlock =
+			    ((ioread32(FlashReg + MIN_MAX_BANK_1) &
+			      MIN_MAX_BANK_1__MIN_VALUE) *
+			     DeviceInfo.wTotalBlocks)
+			    +
+			    (ioread32(FlashReg + MIN_BLK_ADDR_1) &
+			    MIN_BLK_ADDR_1__VALUE);
+
+			DeviceInfo.wSpectraEndBlock =
+			    (((ioread32(FlashReg + MIN_MAX_BANK_1) &
+			       MIN_MAX_BANK_1__MAX_VALUE) >> 2) *
+			     DeviceInfo.wTotalBlocks)
+			    +
+			    (ioread32(FlashReg + MAX_BLK_ADDR_1) &
+			    MAX_BLK_ADDR_1__VALUE);
+
+			DeviceInfo.wTotalBlocks *= totalUsedBanks;
+
+			if (DeviceInfo.wSpectraEndBlock >=
+			    DeviceInfo.wTotalBlocks) {
+				DeviceInfo.wSpectraEndBlock =
+				    DeviceInfo.wTotalBlocks - 1;
+			}
+
+			DeviceInfo.wDataBlockNum =
+				DeviceInfo.wSpectraEndBlock -
+				DeviceInfo.wSpectraStartBlock + 1;
+		} else {
+			DeviceInfo.wTotalBlocks *= totalUsedBanks;
+			DeviceInfo.wSpectraStartBlock = SPECTRA_START_BLOCK;
+			DeviceInfo.wSpectraEndBlock =
+				DeviceInfo.wTotalBlocks - 1;
+			DeviceInfo.wDataBlockNum =
+				DeviceInfo.wSpectraEndBlock -
+				DeviceInfo.wSpectraStartBlock + 1;
+		}
+	} else {
+		DeviceInfo.wTotalBlocks *= totalUsedBanks;
+		DeviceInfo.wSpectraStartBlock = SPECTRA_START_BLOCK;
+		DeviceInfo.wSpectraEndBlock = DeviceInfo.wTotalBlocks - 1;
+		DeviceInfo.wDataBlockNum =
+			DeviceInfo.wSpectraEndBlock -
+			DeviceInfo.wSpectraStartBlock + 1;
+	}
+}
+
+static void dump_device_info(void)
+{
+	nand_dbg_print(NAND_DBG_DEBUG, "DeviceInfo:\n");
+	nand_dbg_print(NAND_DBG_DEBUG, "DeviceMaker: 0x%x\n",
+		DeviceInfo.wDeviceMaker);
+	nand_dbg_print(NAND_DBG_DEBUG, "DeviceType: 0x%x\n",
+		DeviceInfo.wDeviceType);
+	nand_dbg_print(NAND_DBG_DEBUG, "SpectraStartBlock: %d\n",
+		DeviceInfo.wSpectraStartBlock);
+	nand_dbg_print(NAND_DBG_DEBUG, "SpectraEndBlock: %d\n",
+		DeviceInfo.wSpectraEndBlock);
+	nand_dbg_print(NAND_DBG_DEBUG, "TotalBlocks: %d\n",
+		DeviceInfo.wTotalBlocks);
+	nand_dbg_print(NAND_DBG_DEBUG, "PagesPerBlock: %d\n",
+		DeviceInfo.wPagesPerBlock);
+	nand_dbg_print(NAND_DBG_DEBUG, "PageSize: %d\n",
+		DeviceInfo.wPageSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "PageDataSize: %d\n",
+		DeviceInfo.wPageDataSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "PageSpareSize: %d\n",
+		DeviceInfo.wPageSpareSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "NumPageSpareFlag: %d\n",
+		DeviceInfo.wNumPageSpareFlag);
+	nand_dbg_print(NAND_DBG_DEBUG, "ECCBytesPerSector: %d\n",
+		DeviceInfo.wECCBytesPerSector);
+	nand_dbg_print(NAND_DBG_DEBUG, "BlockSize: %d\n",
+		DeviceInfo.wBlockSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "BlockDataSize: %d\n",
+		DeviceInfo.wBlockDataSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "DataBlockNum: %d\n",
+		DeviceInfo.wDataBlockNum);
+	nand_dbg_print(NAND_DBG_DEBUG, "PlaneNum: %d\n",
+		DeviceInfo.bPlaneNum);
+	nand_dbg_print(NAND_DBG_DEBUG, "DeviceMainAreaSize: %d\n",
+		DeviceInfo.wDeviceMainAreaSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "DeviceSpareAreaSize: %d\n",
+		DeviceInfo.wDeviceSpareAreaSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "DevicesConnected: %d\n",
+		DeviceInfo.wDevicesConnected);
+	nand_dbg_print(NAND_DBG_DEBUG, "DeviceWidth: %d\n",
+		DeviceInfo.wDeviceWidth);
+	nand_dbg_print(NAND_DBG_DEBUG, "HWRevision: 0x%x\n",
+		DeviceInfo.wHWRevision);
+	nand_dbg_print(NAND_DBG_DEBUG, "HWFeatures: 0x%x\n",
+		DeviceInfo.wHWFeatures);
+	nand_dbg_print(NAND_DBG_DEBUG, "ONFIDevFeatures: 0x%x\n",
+		DeviceInfo.wONFIDevFeatures);
+	nand_dbg_print(NAND_DBG_DEBUG, "ONFIOptCommands: 0x%x\n",
+		DeviceInfo.wONFIOptCommands);
+	nand_dbg_print(NAND_DBG_DEBUG, "ONFITimingMode: 0x%x\n",
+		DeviceInfo.wONFITimingMode);
+	nand_dbg_print(NAND_DBG_DEBUG, "ONFIPgmCacheTimingMode: 0x%x\n",
+		DeviceInfo.wONFIPgmCacheTimingMode);
+	nand_dbg_print(NAND_DBG_DEBUG, "MLCDevice: %s\n",
+		DeviceInfo.MLCDevice ? "Yes" : "No");
+	nand_dbg_print(NAND_DBG_DEBUG, "SpareSkipBytes: %d\n",
+		DeviceInfo.wSpareSkipBytes);
+	nand_dbg_print(NAND_DBG_DEBUG, "BitsInPageNumber: %d\n",
+		DeviceInfo.nBitsInPageNumber);
+	nand_dbg_print(NAND_DBG_DEBUG, "BitsInPageDataSize: %d\n",
+		DeviceInfo.nBitsInPageDataSize);
+	nand_dbg_print(NAND_DBG_DEBUG, "BitsInBlockDataSize: %d\n",
+		DeviceInfo.nBitsInBlockDataSize);
+}
+
+u16 NAND_Read_Device_ID(void)
+{
+	u16 status = PASS;
+	u8 mfg_code, dev_code;
+	u8 no_of_planes;
+	u32 tmp;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	iowrite32(0x02, FlashReg + SPARE_AREA_SKIP_BYTES);
+	iowrite32(0xffff, FlashReg + SPARE_AREA_MARKER);
+	DeviceInfo.wDeviceMaker = ioread32(FlashReg + MANUFACTURER_ID);
+	DeviceInfo.wDeviceType = (((ioread32(FlashReg + DEVICE_WIDTH) >> 2)
+		> 0) ? 16 : 8);
+	DeviceInfo.wPagesPerBlock = ioread32(FlashReg + PAGES_PER_BLOCK);
+	DeviceInfo.wPageDataSize =
+		ioread32(FlashReg + LOGICAL_PAGE_DATA_SIZE);
+
+	/* Note: with the Micron 4K NAND device, the controller reports the
+	 * page spare size as 216 bytes, but Micron's spec says it is 218
+	 * bytes. If it is forced to 218 bytes, the controller does not work
+	 * correctly, so just let it be. But keep in mind that this bug may
+	 * cause other problems in the future.       - Yunpeng  2008-10-10
+	 */
+	DeviceInfo.wPageSpareSize =
+		ioread32(FlashReg + LOGICAL_PAGE_SPARE_SIZE);
+
+	DeviceInfo.wPageSize =
+	    DeviceInfo.wPageDataSize + DeviceInfo.wPageSpareSize;
+	DeviceInfo.wBlockSize =
+	    DeviceInfo.wPageSize * DeviceInfo.wPagesPerBlock;
+	DeviceInfo.wBlockDataSize =
+	    DeviceInfo.wPagesPerBlock * DeviceInfo.wPageDataSize;
+	DeviceInfo.wHWRevision = ioread32(FlashReg + REVISION);
+
+	DeviceInfo.wDeviceMainAreaSize =
+		ioread32(FlashReg + DEVICE_MAIN_AREA_SIZE);
+	DeviceInfo.wDeviceSpareAreaSize =
+		ioread32(FlashReg + DEVICE_SPARE_AREA_SIZE);
+
+	DeviceInfo.wDeviceWidth = ioread32(FlashReg + DEVICE_WIDTH);
+	DeviceInfo.wDevicesConnected = ioread32(FlashReg + DEVICES_CONNECTED);
+	DeviceInfo.wHWFeatures = ioread32(FlashReg + FEATURES);
+
+	/* nand_dbg_print(NAND_DBG_DEBUG, "Will disable ECC for now:\n");*/
+	/* iowrite32(0, FlashReg + ECC_ENABLE); */
+
+	DeviceInfo.MLCDevice = ioread32(FlashReg + DEVICE_PARAM_0) & 0x0c;
+	DeviceInfo.wSpareSkipBytes = ioread32(FlashReg +
+				SPARE_AREA_SKIP_BYTES)
+				* DeviceInfo.wDevicesConnected;
+
+	DeviceInfo.nBitsInPageNumber =
+		(u8)GLOB_Calc_Used_Bits(DeviceInfo.wPagesPerBlock);
+	DeviceInfo.nBitsInPageDataSize =
+		(u8)GLOB_Calc_Used_Bits(DeviceInfo.wPageDataSize);
+	DeviceInfo.nBitsInBlockDataSize =
+		(u8)GLOB_Calc_Used_Bits(DeviceInfo.wBlockDataSize);
+
+#if SUPPORT_8BITECC
+	if ((ioread32(FlashReg + DEVICE_MAIN_AREA_SIZE) < 4096) ||
+		(ioread32(FlashReg + DEVICE_SPARE_AREA_SIZE) <= 128))
+		iowrite32(8, FlashReg + ECC_CORRECTION);
+#endif
+
+	nand_dbg_print(NAND_DBG_DEBUG, "FEATURES register value: 0x%x\n",
+		ioread32(FlashReg + FEATURES));
+	nand_dbg_print(NAND_DBG_DEBUG, "ECC_CORRECTION register value: 0x%x\n",
+		ioread32(FlashReg + ECC_CORRECTION));
+
+	/* Toshiba NAND */
+	if ((ioread32(FlashReg + MANUFACTURER_ID) == 0x98) &&
+		(ioread32(FlashReg + DEVICE_MAIN_AREA_SIZE) == 4096) &&
+		(ioread32(FlashReg + DEVICE_SPARE_AREA_SIZE) == 64)) {
+		iowrite32(216, FlashReg + DEVICE_SPARE_AREA_SIZE);
+		tmp = ioread32(FlashReg + DEVICES_CONNECTED) *
+			ioread32(FlashReg + DEVICE_SPARE_AREA_SIZE);
+		iowrite32(tmp, FlashReg + LOGICAL_PAGE_SPARE_SIZE);
+		DeviceInfo.wDeviceSpareAreaSize =
+			ioread32(FlashReg + DEVICE_SPARE_AREA_SIZE);
+		DeviceInfo.wPageSpareSize =
+			ioread32(FlashReg + LOGICAL_PAGE_SPARE_SIZE);
+#if SUPPORT_15BITECC
+		iowrite32(15, FlashReg + ECC_CORRECTION);
+#elif SUPPORT_8BITECC
+		iowrite32(8, FlashReg + ECC_CORRECTION);
+#endif
+	}
+
+	set_ecc_config();
+
+	mfg_code = DeviceInfo.wDeviceMaker;
+	dev_code = DeviceInfo.wDeviceType;
+
+	if (ioread32(FlashReg + ONFI_DEVICE_NO_OF_LUNS) &
+		ONFI_DEVICE_NO_OF_LUNS__ONFI_DEVICE) { /* ONFI 1.0 NAND */
+		if (FAIL == get_onfi_nand_para())
+			return FAIL;
+	} else if (mfg_code == 0xEC)	{ /* Samsung NAND */
+		get_samsung_nand_para();
+	} else {
+#if GLOB_DEVTSBA_ALT_BLK_NFO
+	u8 *tsba_ptr = (u8 *)GLOB_DEVTSBA_ALT_BLK_ADD;
+	DeviceInfo.wTotalBlocks = (1 << *tsba_ptr);
+	if (DeviceInfo.wTotalBlocks < 512)
+		DeviceInfo.wTotalBlocks = GLOB_HWCTL_DEFAULT_BLKS;
+#else
+	DeviceInfo.wTotalBlocks = GLOB_HWCTL_DEFAULT_BLKS;
+#endif
+	}
+
+	no_of_planes = ioread32(FlashReg + NUMBER_OF_PLANES) &
+		NUMBER_OF_PLANES__VALUE;
+
+	switch (no_of_planes) {
+	case 0:
+	case 1:
+	case 3:
+	case 7:
+		DeviceInfo.bPlaneNum = no_of_planes + 1;
+		break;
+	default:
+		status = FAIL;
+		break;
+	}
+
+	find_valid_banks();
+
+	detect_partition_feature();
+
+	dump_device_info();
+
+	return status;
+}
+
+u16 NAND_UnlockArrayAll(void)
+{
+	u64 start_addr, end_addr;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	start_addr = 0;
+	end_addr = ((u64)DeviceInfo.wBlockSize *
+		(DeviceInfo.wTotalBlocks - 1)) >>
+		DeviceInfo.nBitsInPageDataSize;
+
+	index_addr((u32)(MODE_10 | (u32)start_addr), 0x10);
+	index_addr((u32)(MODE_10 | (u32)end_addr), 0x11);
+
+	return PASS;
+}
+
+void NAND_LLD_Enable_Disable_Interrupts(u16 INT_ENABLE)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	if (INT_ENABLE)
+		iowrite32(1, FlashReg + GLOBAL_INT_ENABLE);
+	else
+		iowrite32(0, FlashReg + GLOBAL_INT_ENABLE);
+}
+
+u16 NAND_Erase_Block(u32 block)
+{
+	u16 status = PASS;
+	u64 flash_add;
+	u16 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
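+	/* Blocks are striped evenly across the banks: the bank index is
+	 * block / blocks_per_bank, and the address within the bank is the
+	 * remainder scaled by the block data size. */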
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	if (block >= DeviceInfo.wTotalBlocks)
+		status = FAIL;
+
+	if (status == PASS) {
+		intr_status = intr_status_addresses[flash_bank];
+
+		iowrite32(INTR_STATUS0__ERASE_COMP | INTR_STATUS0__ERASE_FAIL,
+			FlashReg + intr_status);
+
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)), 1);
+
+		while (!(ioread32(FlashReg + intr_status) &
+			(INTR_STATUS0__ERASE_COMP | INTR_STATUS0__ERASE_FAIL)))
+			;
+
+		if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ERASE_FAIL)
+			status = FAIL;
+
+		iowrite32(INTR_STATUS0__ERASE_COMP | INTR_STATUS0__ERASE_FAIL,
+			FlashReg + intr_status);
+	}
+
+	return status;
+}
+
+static u32 Boundary_Check_Block_Page(u32 block, u16 page,
+						u16 page_count)
+{
+	u32 status = PASS;
+
+	if (block >= DeviceInfo.wTotalBlocks)
+		status = FAIL;
+
+	if (page + page_count > DeviceInfo.wPagesPerBlock)
+		status = FAIL;
+
+	return status;
+}
+
+u16 NAND_Read_Page_Spare(u8 *read_data, u32 block, u16 page,
+			    u16 page_count)
+{
+	u32 status = PASS;
+	u32 i;
+	u64 flash_add;
+	u32 PageSpareSize = DeviceInfo.wPageSpareSize;
+	u32 spareFlagBytes = DeviceInfo.wNumPageSpareFlag;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+
+	if (block >= DeviceInfo.wTotalBlocks) {
+		printk(KERN_ERR "block too big: %d\n", (int)block);
+		status = FAIL;
+	}
+
+	if (page >= DeviceInfo.wPagesPerBlock) {
+		printk(KERN_ERR "page too big: %d\n", page);
+		status = FAIL;
+	}
+
+	if (page_count > 1) {
+		printk(KERN_ERR "page count too big: %d\n", page_count);
+		status = FAIL;
+	}
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	if (status == PASS) {
+		intr_status = intr_status_addresses[flash_bank];
+		iowrite32(ioread32(FlashReg + intr_status),
+			FlashReg + intr_status);
+
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+			0x41);
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+			0x2000 | page_count);
+		while (!(ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__LOAD_COMP))
+			;
+
+		iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+			FlashMem);
+
+		for (i = 0; i < (PageSpareSize / 4); i++)
+			*((u32 *)page_spare + i) =
+					ioread32(FlashMem + 0x10);
+
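+		/*
+		 * With ECC enabled the controller returns the spare flag
+		 * bytes at the end of the spare area; move them to the
+		 * front of the caller's buffer, followed by the rest.
+		 */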
+		if (ioread32(FlashReg + ECC_ENABLE)) {
+			for (i = 0; i < spareFlagBytes; i++)
+				read_data[i] =
+					page_spare[PageSpareSize -
+						spareFlagBytes + i];
+			for (i = 0; i < (PageSpareSize - spareFlagBytes); i++)
+				read_data[spareFlagBytes + i] =
+							page_spare[i];
+		} else {
+			for (i = 0; i < PageSpareSize; i++)
+				read_data[i] = page_spare[i];
+		}
+
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+	}
+
+	return status;
+}
+
+u16 NAND_Write_Page_Spare(u8 *write_data, u32 block, u16 page,
+			     u16 page_count)
+{
+	printk(KERN_ERR
+	       "Error! This function (NAND_Write_Page_Spare) should never"
+		" be called!\n");
+	return ERR;
+}
+
+#if DDMA
+/* op value:  0 - DDMA read;  1 - DDMA write */
+static void ddma_trans(u8 *data, u64 flash_add,
+			u32 flash_bank, int op, u32 numPages)
+{
+	/* Map virtual address to bus address for DDMA */
+	data = (u8 *)GLOB_MEMMAP_TOBUS((u32 *)data);
+
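+	/*
+	 * Program the data DMA through four indexed writes: the burst
+	 * command (direction and page count), then the high and low halves
+	 * of the bus address, and finally a commit word. The (2 << 12)
+	 * marker appears to select the DMA command encoding.
+	 */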
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+		(u16)(2 << 12) | (op << 8) | numPages);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		((u16)(0x0FFFF & ((u32)data >> 16)) << 8)),
+		(u16)(2 << 12) | (2 << 8) | 0);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		((u16)(0x0FFFF & (u32)data) << 8)),
+		(u16)(2 << 12) | (3 << 8) | 0);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(1 << 16) | (0x40 << 8)),
+		(u16)(2 << 12) | (4 << 8) | 0);
+}
+
+#endif
+
+/* If data in buf are all 0xff, then return 1; otherwise return 0 */
+static int check_all_1(u8 *buf)
+{
+	int i, j, cnt;
+
+	for (i = 0; i < DeviceInfo.wPageDataSize; i++) {
+		if (buf[i] != 0xff) {
+			cnt = 0;
+			nand_dbg_print(NAND_DBG_WARN,
+				"the first non-0xff data byte is: %d\n", i);
+			for (j = i; j < DeviceInfo.wPageDataSize; j++) {
+				nand_dbg_print(NAND_DBG_WARN, "0x%x ", buf[j]);
+				cnt++;
+				if (cnt > 8)
+					break;
+			}
+			nand_dbg_print(NAND_DBG_WARN, "\n");
+			return 0;
+		}
+	}
+
+	return 1;
+}
+
+static int do_ecc_new(unsigned long bank, u8 *buf,
+				u32 block, u16 page)
+{
+	int status = PASS;
+	u16 err_page = 0;
+	u16 err_byte;
+	u8 err_sect;
+	u8 err_dev;
+	u16 err_fix_info;
+	u16 err_addr;
+	u32 ecc_sect_size;
+	u8 *err_pos;
+	u32 err_page_addr[4] = {ERR_PAGE_ADDR0,
+		ERR_PAGE_ADDR1, ERR_PAGE_ADDR2, ERR_PAGE_ADDR3};
+
+	ecc_sect_size = ECC_SECTOR_SIZE * (DeviceInfo.wDevicesConnected);
+
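+	/*
+	 * The controller reports ECC errors one at a time through the
+	 * ERR_CORRECTION_INFO register; drain them until LAST_ERR_INFO
+	 * is set.  Correctable errors are fixed in place by XOR-ing the
+	 * reported byte mask into the caller's buffer.
+	 */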
+	do {
+		err_page = ioread32(FlashReg + err_page_addr[bank]);
+		err_addr = ioread32(FlashReg + ECC_ERROR_ADDRESS);
+		err_byte = err_addr & ECC_ERROR_ADDRESS__OFFSET;
+		err_sect = ((err_addr & ECC_ERROR_ADDRESS__SECTOR_NR) >> 12);
+		err_fix_info = ioread32(FlashReg + ERR_CORRECTION_INFO);
+		err_dev = ((err_fix_info & ERR_CORRECTION_INFO__DEVICE_NR)
+			>> 8);
+		if (err_fix_info & ERR_CORRECTION_INFO__ERROR_TYPE) {
+			nand_dbg_print(NAND_DBG_WARN,
+				"%s, Line %d Uncorrectable ECC error "
+				"when read block %d page %d."
+				"PTN_INTR register: 0x%x "
+				"err_page: %d, err_sect: %d, err_byte: %d, "
+				"err_dev: %d, ecc_sect_size: %d, "
+				"err_fix_info: 0x%x\n",
+				__FILE__, __LINE__, block, page,
+				ioread32(FlashReg + PTN_INTR),
+				err_page, err_sect, err_byte, err_dev,
+				ecc_sect_size, (u32)err_fix_info);
+
+			if (check_all_1(buf))
+				nand_dbg_print(NAND_DBG_WARN, "%s, Line %d"
+					       "All 0xff!\n",
+					       __FILE__, __LINE__);
+			else
+				nand_dbg_print(NAND_DBG_WARN, "%s, Line %d"
+					       "Not all 0xff!\n",
+					       __FILE__, __LINE__);
+			status = FAIL;
+		} else {
+			/* glob_mdelay(200); */ /* Add for test */
+			nand_dbg_print(NAND_DBG_WARN,
+				"%s, Line %d Found ECC error "
+				"when read block %d page %d."
+				"err_page: %d, err_sect: %d, err_byte: %d, "
+				"err_dev: %d, ecc_sect_size: %d, "
+				"err_fix_info: 0x%x\n",
+				__FILE__, __LINE__, block, page,
+				err_page, err_sect, err_byte, err_dev,
+				ecc_sect_size, (u32)err_fix_info);
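+			/*
+			 * Locate the failing byte in the caller's buffer:
+			 * offset of the failing page within this read, plus
+			 * the ECC sector offset, plus the byte offset scaled
+			 * by the number of interleaved devices.
+			 */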
+			if (err_byte < ecc_sect_size) {
+				err_pos = buf +
+					(err_page - page) *
+					DeviceInfo.wPageDataSize +
+					err_sect * ecc_sect_size +
+					err_byte *
+					DeviceInfo.wDevicesConnected +
+					err_dev;
+
+				*err_pos ^= err_fix_info &
+					ERR_CORRECTION_INFO__BYTEMASK;
+			} else {
+				nand_dbg_print(NAND_DBG_WARN,
+					"!!!Error - Too big err_byte!\n");
+			}
+		}
+	} while (!(err_fix_info & ERR_CORRECTION_INFO__LAST_ERR_INFO));
+
+	return status;
+}
+
+u16 NAND_Read_Page_Main_Polling(u8 *read_data,
+		u32 block, u16 page, u16 page_count)
+{
+	u32 status = PASS;
+	u64 flash_add;
+	u32 intr_status = 0;
+	u32 flash_bank;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+	if (status != PASS)
+		return status;
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	intr_status = intr_status_addresses[flash_bank];
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	if (page_count > 1) {
+		if (ioread32(FlashReg + MULTIPLANE_OPERATION))
+			status = NAND_Multiplane_Read(read_data,
+						block, page, page_count);
+		else
+			status = NAND_Pipeline_Read_Ahead_Polling(read_data,
+						block, page, page_count);
+		return status;
+	}
+
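+	/*
+	 * Single-page polling read: enable the DMA engine, clear stale
+	 * interrupt status, start a one-page DDMA read, then spin on the
+	 * per-bank interrupt status register instead of sleeping on an
+	 * interrupt.
+	 */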
+	iowrite32(1, FlashReg + DMA_ENABLE);
+	while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	ddma_trans(read_data, flash_add, flash_bank, 0, 1);
+
+	if (ioread32(FlashReg + ECC_ENABLE)) {
+		while (!(ioread32(FlashReg + intr_status) &
+			(INTR_STATUS0__ECC_TRANSACTION_DONE |
+			INTR_STATUS0__ECC_ERR)))
+			;
+
+		if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ECC_ERR) {
+			iowrite32(INTR_STATUS0__ECC_ERR,
+				FlashReg + intr_status);
+			status = do_ecc_new(flash_bank, read_data,
+					block, page);
+		}
+
+		if ((ioread32(FlashReg + intr_status) &
+			(INTR_STATUS0__ECC_TRANSACTION_DONE |
+			INTR_STATUS0__ECC_ERR)) ==
+			(INTR_STATUS0__ECC_TRANSACTION_DONE |
+			INTR_STATUS0__ECC_ERR))
+			iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE |
+				INTR_STATUS0__ECC_ERR,
+				FlashReg + intr_status);
+		else if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ECC_TRANSACTION_DONE)
+			iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE,
+				FlashReg + intr_status);
+		else if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ECC_ERR)
+			iowrite32(INTR_STATUS0__ECC_ERR,
+				FlashReg + intr_status);
+	} else {
+		while (!(ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__DMA_CMD_COMP))
+			;
+		iowrite32(INTR_STATUS0__DMA_CMD_COMP, FlashReg + intr_status);
+	}
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+	while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	return status;
+}
+
+u16 NAND_Pipeline_Read_Ahead_Polling(u8 *read_data,
+			u32 block, u16 page, u16 page_count)
+{
+	u32 status = PASS;
+	u32 NumPages = page_count;
+	u64 flash_add;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+	u32 ecc_done_OR_dma_comp;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+
+	if (page_count < 2)
+		status = FAIL;
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		*DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	if (status == PASS) {
+		intr_status = intr_status_addresses[flash_bank];
+		iowrite32(ioread32(FlashReg + intr_status),
+			FlashReg + intr_status);
+
+		iowrite32(1, FlashReg + DMA_ENABLE);
+		while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+			;
+
+		iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+		ddma_trans(read_data, flash_add, flash_bank, 0, NumPages);
+
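+		/*
+		 * With ECC enabled the read is finished only after both
+		 * ECC_TRANSACTION_DONE and DMA_CMD_COMP have been seen;
+		 * ecc_done_OR_dma_comp records whichever arrives first.
+		 */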
+		ecc_done_OR_dma_comp = 0;
+		while (1) {
+			if (ioread32(FlashReg + ECC_ENABLE)) {
+				while (!ioread32(FlashReg + intr_status))
+					;
+
+				if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_ERR) {
+					iowrite32(INTR_STATUS0__ECC_ERR,
+						FlashReg + intr_status);
+					status = do_ecc_new(flash_bank,
+						read_data, block, page);
+				} else if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__DMA_CMD_COMP) {
+					iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+						FlashReg + intr_status);
+
+					if (1 == ecc_done_OR_dma_comp)
+						break;
+
+					ecc_done_OR_dma_comp = 1;
+				} else if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_TRANSACTION_DONE) {
+					iowrite32(
+					INTR_STATUS0__ECC_TRANSACTION_DONE,
+					FlashReg + intr_status);
+
+					if (1 == ecc_done_OR_dma_comp)
+						break;
+
+					ecc_done_OR_dma_comp = 1;
+				}
+			} else {
+				while (!(ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__DMA_CMD_COMP))
+					;
+
+				iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+					FlashReg + intr_status);
+				break;
+			}
+
+			iowrite32((~INTR_STATUS0__ECC_ERR) &
+				(~INTR_STATUS0__ECC_TRANSACTION_DONE) &
+				(~INTR_STATUS0__DMA_CMD_COMP),
+				FlashReg + intr_status);
+
+		}
+
+		iowrite32(ioread32(FlashReg + intr_status),
+			FlashReg + intr_status);
+
+		iowrite32(0, FlashReg + DMA_ENABLE);
+
+		while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+			;
+	}
+	return status;
+}
+
+u16 NAND_Read_Page_Main(u8 *read_data, u32 block, u16 page,
+			   u16 page_count)
+{
+	u32 status = PASS;
+	u64 flash_add;
+	u32 intr_status = 0;
+	u32 flash_bank;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+#if DDMA
+	int ret;
+#else
+	u32 i;
+#endif
+
+	nand_dbg_print(NAND_DBG_DEBUG, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+	if (status != PASS)
+		return status;
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	intr_status = intr_status_addresses[flash_bank];
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	if (page_count > 1) {
+		if (ioread32(FlashReg + MULTIPLANE_OPERATION))
+			status = NAND_Multiplane_Read(read_data,
+						block, page, page_count);
+		else
+			status = NAND_Pipeline_Read_Ahead(read_data,
+						block, page, page_count);
+		return status;
+	}
+
+#if DDMA
+	iowrite32(1, FlashReg + DMA_ENABLE);
+	while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	/* Fill the mrst_nand_info structure */
+	info.state = INT_READ_PAGE_MAIN;
+	info.read_data = read_data;
+	info.flash_bank = flash_bank;
+	info.block = block;
+	info.page = page;
+	info.ret = PASS;
+
+	ddma_trans(read_data, flash_add, flash_bank, 0, 1);
+
+	iowrite32(1, FlashReg + GLOBAL_INT_ENABLE); /* Enable Interrupt */
+
+	ret = wait_for_completion_timeout(&info.complete, 10 * HZ);
+	if (!ret)
+		printk(KERN_ERR "Wait for completion timeout "
+			"in %s, Line %d\n", __FILE__, __LINE__);
+	status = info.ret;
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+	while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+#else
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+		0x42);
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+		0x2000 | page_count);
+
+	while (!(ioread32(FlashReg + intr_status) & INTR_STATUS0__LOAD_COMP))
+		;
+
+	iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), FlashMem);
+
+	for (i = 0; i < DeviceInfo.wPageDataSize / 4; i++)
+		*(((u32 *)read_data) + i) = ioread32(FlashMem + 0x10);
+
+	if (ioread32(FlashReg + ECC_ENABLE)) {
+		while (!(ioread32(FlashReg + intr_status) &
+			(INTR_STATUS0__ECC_TRANSACTION_DONE |
+			INTR_STATUS0__ECC_ERR)))
+			;
+		if (ioread32(FlashReg + intr_status) & INTR_STATUS0__ECC_ERR) {
+			iowrite32(INTR_STATUS0__ECC_ERR,
+					FlashReg + intr_status);
+			status = do_ecc_new(flash_bank, read_data,
+						block, page);
+		}
+
+		if ((ioread32(FlashReg + intr_status) &
+			(INTR_STATUS0__ECC_TRANSACTION_DONE |
+			INTR_STATUS0__ECC_ERR)) ==
+			(INTR_STATUS0__ECC_TRANSACTION_DONE |
+			INTR_STATUS0__ECC_ERR))
+			iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE |
+				INTR_STATUS0__ECC_ERR,
+				FlashReg + intr_status);
+		else if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ECC_TRANSACTION_DONE)
+			iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE,
+				FlashReg + intr_status);
+		else if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ECC_ERR)
+			iowrite32(INTR_STATUS0__ECC_ERR,
+				FlashReg + intr_status);
+	}
+
+#endif
+
+	return status;
+}
+
+void Conv_Spare_Data_Log2Phy_Format(u8 *data)
+{
+	int i;
+	const u32 spareFlagBytes = DeviceInfo.wNumPageSpareFlag;
+	const u32 PageSpareSize  = DeviceInfo.wPageSpareSize;
+
+	if (ioread32(FlashReg + ECC_ENABLE)) {
+		for (i = spareFlagBytes - 1; i >= 0; i--)
+			data[PageSpareSize - spareFlagBytes + i] = data[i];
+	}
+}
+
+void Conv_Spare_Data_Phy2Log_Format(u8 *data)
+{
+	int i;
+	const u32 spareFlagBytes = DeviceInfo.wNumPageSpareFlag;
+	const u32 PageSpareSize = DeviceInfo.wPageSpareSize;
+
+	if (ioread32(FlashReg + ECC_ENABLE)) {
+		for (i = 0; i < spareFlagBytes; i++)
+			data[i] = data[PageSpareSize - spareFlagBytes + i];
+	}
+}
+
+
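+/*
+ * Rearrange a logical page image (main data followed by spare) into the
+ * physical on-flash layout: each ECC sector is followed by its ECC bytes,
+ * the spare flag bytes land after the last sector, and the wSpareSkipBytes
+ * gap is re-opened after the main data area.  Pages are converted in
+ * place, starting from the last page of the buffer.
+ */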
+void Conv_Main_Spare_Data_Log2Phy_Format(u8 *data, u16 page_count)
+{
+	const u32 PageSize = DeviceInfo.wPageSize;
+	const u32 PageDataSize = DeviceInfo.wPageDataSize;
+	const u32 eccBytes = DeviceInfo.wECCBytesPerSector;
+	const u32 spareSkipBytes = DeviceInfo.wSpareSkipBytes;
+	const u32 spareFlagBytes = DeviceInfo.wNumPageSpareFlag;
+	u32 eccSectorSize;
+	u32 page_offset;
+	int i, j;
+
+	eccSectorSize = ECC_SECTOR_SIZE * (DeviceInfo.wDevicesConnected);
+	if (ioread32(FlashReg + ECC_ENABLE)) {
+		while (page_count > 0) {
+			page_offset = (page_count - 1) * PageSize;
+			j = (DeviceInfo.wPageDataSize / eccSectorSize);
+			for (i = spareFlagBytes - 1; i >= 0; i--)
+				data[page_offset +
+					(eccSectorSize + eccBytes) * j + i] =
+					data[page_offset + PageDataSize + i];
+			for (j--; j >= 1; j--) {
+				for (i = eccSectorSize - 1; i >= 0; i--)
+					data[page_offset +
+					(eccSectorSize + eccBytes) * j + i] =
+						data[page_offset +
+						eccSectorSize * j + i];
+			}
+			for (i = (PageSize - spareSkipBytes) - 1;
+				i >= PageDataSize; i--)
+				data[page_offset + i + spareSkipBytes] =
+					data[page_offset + i];
+			page_count--;
+		}
+	}
+}
+
+void Conv_Main_Spare_Data_Phy2Log_Format(u8 *data, u16 page_count)
+{
+	const u32 PageSize = DeviceInfo.wPageSize;
+	const u32 PageDataSize = DeviceInfo.wPageDataSize;
+	const u32 eccBytes = DeviceInfo.wECCBytesPerSector;
+	const u32 spareSkipBytes = DeviceInfo.wSpareSkipBytes;
+	const u32 spareFlagBytes = DeviceInfo.wNumPageSpareFlag;
+	u32 eccSectorSize;
+	u32 page_offset;
+	int i, j;
+
+	eccSectorSize = ECC_SECTOR_SIZE * (DeviceInfo.wDevicesConnected);
+	if (ioread32(FlashReg + ECC_ENABLE)) {
+		while (page_count > 0) {
+			page_offset = (page_count - 1) * PageSize;
+			for (i = PageDataSize;
+				i < PageSize - spareSkipBytes;
+				i++)
+				data[page_offset + i] =
+					data[page_offset + i +
+					spareSkipBytes];
+			for (j = 1;
+			     j < DeviceInfo.wPageDataSize / eccSectorSize;
+			     j++) {
+				for (i = 0; i < eccSectorSize; i++)
+					data[page_offset +
+					eccSectorSize * j + i] =
+						data[page_offset +
+						(eccSectorSize + eccBytes) * j
+						+ i];
+			}
+			for (i = 0; i < spareFlagBytes; i++)
+				data[page_offset + PageDataSize + i] =
+					data[page_offset +
+					(eccSectorSize + eccBytes) * j + i];
+			page_count--;
+		}
+	}
+}
+
+u16 NAND_Multiplane_Read(u8 *read_data, u32 block, u16 page,
+			    u16 page_count)
+{
+	u32 status = PASS;
+	u32 NumPages = page_count;
+	u64 flash_add;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+
+#if DDMA
+	u32 ecc_done_OR_dma_comp;
+#else
+	u32 PageSize = DeviceInfo.wPageDataSize;
+	u32 sector_count = 0;
+	u32 SectorStart, SectorEnd;
+	u32 bSectorsPerPage = 4;
+	u32 i, page_num = 0;
+	u32 plane = 0;
+	u8 *read_data_l = read_data;
+#endif
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
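+	/*
+	 * Multiplane read: pages alternate between the two planes; the
+	 * second plane sits one block's worth of data beyond flash_add
+	 * (see the non-DDMA branch below).
+	 */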
+	status = Boundary_Check_Block_Page(block, page, page_count);
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	if (status == PASS) {
+		intr_status = intr_status_addresses[flash_bank];
+		iowrite32(ioread32(FlashReg + intr_status),
+			FlashReg + intr_status);
+
+		iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+		iowrite32(0x01, FlashReg + MULTIPLANE_OPERATION);
+#if DDMA
+
+		iowrite32(1, FlashReg + DMA_ENABLE);
+		while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+			;
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+		ddma_trans(read_data, flash_add, flash_bank, 0, NumPages);
+
+		ecc_done_OR_dma_comp = 0;
+		while (1) {
+			if (ioread32(FlashReg + ECC_ENABLE)) {
+				while (!ioread32(FlashReg + intr_status))
+					;
+
+				if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_ERR) {
+					iowrite32(INTR_STATUS0__ECC_ERR,
+						FlashReg + intr_status);
+					status = do_ecc_new(flash_bank,
+						read_data, block, page);
+				} else if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__DMA_CMD_COMP) {
+					iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+						FlashReg + intr_status);
+
+					if (1 == ecc_done_OR_dma_comp)
+						break;
+
+					ecc_done_OR_dma_comp = 1;
+				} else if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_TRANSACTION_DONE) {
+					iowrite32(
+					INTR_STATUS0__ECC_TRANSACTION_DONE,
+					FlashReg + intr_status);
+
+					if (1 == ecc_done_OR_dma_comp)
+						break;
+
+					ecc_done_OR_dma_comp = 1;
+				}
+			} else {
+				while (!(ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__DMA_CMD_COMP))
+					;
+				iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+					FlashReg + intr_status);
+				break;
+			}
+
+			iowrite32((~INTR_STATUS0__ECC_ERR) &
+				(~INTR_STATUS0__ECC_TRANSACTION_DONE) &
+				(~INTR_STATUS0__DMA_CMD_COMP),
+				FlashReg + intr_status);
+
+		}
+
+		iowrite32(ioread32(FlashReg + intr_status),
+			FlashReg + intr_status);
+
+		iowrite32(0, FlashReg + DMA_ENABLE);
+
+		while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+			;
+
+		iowrite32(0, FlashReg + MULTIPLANE_OPERATION);
+
+#else
+
+		if (ioread32(FlashReg + ECC_ENABLE))
+			iowrite32(INTR_STATUS0__ECC_ERR,
+				FlashReg + intr_status);
+
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+			0x42);
+		index_addr((u32)(MODE_10 | (flash_bank << 24) |
+			(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+			0x2000 | page_count);
+
+		while (NumPages > 0) {
+			if (plane == 0) {
+				iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+					((flash_add +
+					page_num * DeviceInfo.wPageDataSize)
+					>> DeviceInfo.nBitsInPageDataSize)),
+					FlashMem);
+				plane = 1;
+			} else {
+				iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+					((flash_add +
+					DeviceInfo.wBlockDataSize +
+					page_num * DeviceInfo.wPageDataSize)
+					>> DeviceInfo.nBitsInPageDataSize)),
+					FlashMem);
+				plane = 0;
+			}
+
+			for (sector_count = 0; sector_count < bSectorsPerPage;
+			     sector_count++) {
+				SectorStart = sector_count *
+					(DeviceInfo.wPageDataSize /
+					(4 * bSectorsPerPage));
+				SectorEnd = (sector_count + 1) *
+					(DeviceInfo.wPageDataSize /
+					(4 * bSectorsPerPage));
+
+				for (i = SectorStart; i < SectorEnd; i++)
+					*(((u32 *)read_data_l) + i) =
+						ioread32(FlashMem + 0x10);
+
+				if (ioread32(FlashReg + ECC_ENABLE)) {
+					if (ioread32(FlashReg + intr_status) &
+						INTR_STATUS0__ECC_ERR) {
+						iowrite32(
+						INTR_STATUS0__ECC_ERR,
+						FlashReg + intr_status);
+						status = do_ecc_new(
+							flash_bank,
+							read_data,
+							block, page);
+					}
+				}
+			}
+
+			if (plane == 0)
+				page_num++;
+
+			read_data_l += PageSize;
+			--NumPages;
+		}
+
+		if (ioread32(FlashReg + ECC_ENABLE)) {
+			while (!(ioread32(FlashReg + intr_status) &
+				(INTR_STATUS0__ECC_TRANSACTION_DONE |
+				 INTR_STATUS0__ECC_ERR)))
+				;
+
+			if (ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__ECC_ERR) {
+				iowrite32(INTR_STATUS0__ECC_ERR,
+					FlashReg + intr_status);
+				status = do_ecc_new(flash_bank,
+					read_data, block, page);
+				while (!(ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_TRANSACTION_DONE))
+					;
+
+				iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE,
+					FlashReg + intr_status);
+			} else if (ioread32(FlashReg + intr_status) &
+				   INTR_STATUS0__ECC_TRANSACTION_DONE) {
+				iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE,
+					FlashReg + intr_status);
+			}
+		}
+
+		iowrite32(0, FlashReg + MULTIPLANE_OPERATION);
+
+#endif
+	}
+	return status;
+}
+
+u16 NAND_Pipeline_Read_Ahead(u8 *read_data, u32 block,
+				u16 page, u16 page_count)
+{
+	u32 status = PASS;
+	u32 NumPages = page_count;
+	u64 flash_add;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+#if DDMA
+	int ret;
+#else
+	u32 PageSize = DeviceInfo.wPageDataSize;
+	u32 sector_count = 0;
+	u32 SectorStart, SectorEnd;
+	u32 bSectorsPerPage = 4;
+	u32 i, page_num = 0;
+	u8 *read_data_l = read_data;
+#endif
+	nand_dbg_print(NAND_DBG_DEBUG, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+
+	if (page_count < 2)
+		status = FAIL;
+
+	if (status != PASS)
+		return status;
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		*DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	intr_status = intr_status_addresses[flash_bank];
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+#if DDMA
+	iowrite32(1, FlashReg + DMA_ENABLE);
+	while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	/* Fill the mrst_nand_info structure */
+	info.state = INT_PIPELINE_READ_AHEAD;
+	info.read_data = read_data;
+	info.flash_bank = flash_bank;
+	info.block = block;
+	info.page = page;
+	info.ret = PASS;
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+
+	ddma_trans(read_data, flash_add, flash_bank, 0, NumPages);
+
+	iowrite32(1, FlashReg + GLOBAL_INT_ENABLE); /* Enable Interrupt */
+
+	ret = wait_for_completion_timeout(&info.complete, 10 * HZ);
+	if (!ret)
+		printk(KERN_ERR "Wait for completion timeout "
+			"in %s, Line %d\n", __FILE__, __LINE__);
+
+	status = info.ret;
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+
+	while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+#else
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+		0x2000 | NumPages);
+
+	while (NumPages > 0) {
+		iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+			((flash_add + page_num * DeviceInfo.wPageDataSize) >>
+				DeviceInfo.nBitsInPageDataSize)), FlashMem);
+
+		for (sector_count = 0; sector_count < bSectorsPerPage;
+		     sector_count++) {
+			SectorStart = sector_count *
+				(DeviceInfo.wPageDataSize /
+				(4 * bSectorsPerPage));
+			SectorEnd = (sector_count + 1) *
+				(DeviceInfo.wPageDataSize /
+				(4 * bSectorsPerPage));
+
+			for (i = SectorStart; i < SectorEnd; i++)
+				*(((u32 *)read_data_l) + i) =
+					ioread32(FlashMem + 0x10);
+
+			if (ioread32(FlashReg + ECC_ENABLE)) {
+				if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_ERR) {
+					iowrite32(INTR_STATUS0__ECC_ERR,
+						FlashReg + intr_status);
+					status = do_ecc_new(flash_bank,
+						read_data, block, page);
+				}
+			}
+		}
+
+		read_data_l += PageSize;
+		--NumPages;
+		page_num++;
+	}
+
+	if (ioread32(FlashReg + ECC_ENABLE)) {
+		while (!(ioread32(FlashReg + intr_status) &
+			(INTR_STATUS0__ECC_TRANSACTION_DONE |
+			INTR_STATUS0__ECC_ERR)))
+			;
+
+		if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ECC_ERR) {
+			iowrite32(INTR_STATUS0__ECC_ERR,
+				FlashReg + intr_status);
+			status = do_ecc_new(flash_bank, read_data,
+				block, page);
+			while (!(ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__ECC_TRANSACTION_DONE))
+				;
+
+			iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE,
+				FlashReg + intr_status);
+		} else if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__ECC_TRANSACTION_DONE) {
+			iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE,
+				FlashReg + intr_status);
+		}
+	}
+#endif
+
+	return status;
+}
+
+
+#endif
+#if FLASH_NAND
+
+u16 NAND_Write_Page_Main(u8 *write_data, u32 block, u16 page,
+			    u16 page_count)
+{
+	u32 status = PASS;
+	u64 flash_add;
+	u32 intr_status = 0;
+	u32 flash_bank;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+#if  DDMA
+	int ret;
+#else
+	u32 i;
+#endif
+
+	nand_dbg_print(NAND_DBG_DEBUG, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+	if (status != PASS)
+		return status;
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	intr_status = intr_status_addresses[flash_bank];
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	iowrite32(INTR_STATUS0__PROGRAM_COMP |
+		INTR_STATUS0__PROGRAM_FAIL, FlashReg + intr_status);
+
+	if (page_count > 1) {
+		if (ioread32(FlashReg + MULTIPLANE_OPERATION))
+			status = NAND_Multiplane_Write(write_data,
+				block, page, page_count);
+		else
+			status = NAND_Pipeline_Write_Ahead(write_data,
+				block, page, page_count);
+		return status;
+	}
+
+#if DDMA
+	iowrite32(1, FlashReg + DMA_ENABLE);
+	while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	/* Fill the mrst_nand_info structure */
+	info.state = INT_WRITE_PAGE_MAIN;
+	info.write_data = write_data;
+	info.flash_bank = flash_bank;
+	info.block = block;
+	info.page = page;
+	info.ret = PASS;
+
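+	/* One-page DDMA write (op = 1); the ISR signals completion
+	 * through info.complete. */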
+	ddma_trans(write_data, flash_add, flash_bank, 1, 1);
+
+	iowrite32(1, FlashReg + GLOBAL_INT_ENABLE); /* Enable interrupt */
+
+	ret = wait_for_completion_timeout(&info.complete, 10 * HZ);
+	if (!ret)
+		printk(KERN_ERR "Wait for completion timeout "
+			"in %s, Line %d\n", __FILE__, __LINE__);
+
+	status = info.ret;
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+	while (ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG)
+		;
+
+#else
+	iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), FlashMem);
+
+	for (i = 0; i < DeviceInfo.wPageDataSize / 4; i++)
+		iowrite32(*(((u32 *)write_data) + i), FlashMem + 0x10);
+
+	while (!(ioread32(FlashReg + intr_status) &
+		(INTR_STATUS0__PROGRAM_COMP | INTR_STATUS0__PROGRAM_FAIL)))
+		;
+
+	if (ioread32(FlashReg + intr_status) & INTR_STATUS0__PROGRAM_FAIL)
+		status = FAIL;
+
+	iowrite32(INTR_STATUS0__PROGRAM_COMP |
+		INTR_STATUS0__PROGRAM_FAIL, FlashReg + intr_status);
+
+#endif
+
+	return status;
+}
+
+void NAND_ECC_Ctrl(int enable)
+{
+	if (enable) {
+		nand_dbg_print(NAND_DBG_WARN,
+			"Will enable ECC in %s, Line %d, Function: %s\n",
+			__FILE__, __LINE__, __func__);
+		iowrite32(1, FlashReg + ECC_ENABLE);
+	} else {
+		nand_dbg_print(NAND_DBG_WARN,
+			"Will disable ECC in %s, Line %d, Function: %s\n",
+			__FILE__, __LINE__, __func__);
+		iowrite32(0, FlashReg + ECC_ENABLE);
+	}
+}
+
+u32 NAND_Memory_Pool_Size(void)
+{
+	return MAX_PAGE_MAINSPARE_AREA;
+}
+
+int NAND_Mem_Config(u8 *pMem)
+{
+	return 0;
+}
+
+u16 NAND_Write_Page_Main_Spare(u8 *write_data, u32 block,
+					u16 page, u16 page_count)
+{
+	u32 status = PASS;
+	u32 i, j, page_num = 0;
+	u32 PageSize = DeviceInfo.wPageSize;
+	u32 PageDataSize = DeviceInfo.wPageDataSize;
+	u32 eccBytes = DeviceInfo.wECCBytesPerSector;
+	u32 spareFlagBytes = DeviceInfo.wNumPageSpareFlag;
+	u32 spareSkipBytes  = DeviceInfo.wSpareSkipBytes;
+	u64 flash_add;
+	u32 eccSectorSize;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+	u8 *page_main_spare;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	page_main_spare = kmalloc(DeviceInfo.wPageSize, GFP_ATOMIC);
+	if (!page_main_spare) {
+		printk(KERN_ERR "Failed to kmalloc memory in %s Line %d, exit.\n",
+			__FILE__, __LINE__);
+		return FAIL;
+	}
+
+	eccSectorSize = ECC_SECTOR_SIZE * (DeviceInfo.wDevicesConnected);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	if (status == PASS) {
+		intr_status = intr_status_addresses[flash_bank];
+
+		iowrite32(1, FlashReg + TRANSFER_SPARE_REG);
+
+		while ((status != FAIL) && (page_count > 0)) {
+			flash_add = (u64)(block %
+			(DeviceInfo.wTotalBlocks / totalUsedBanks)) *
+			DeviceInfo.wBlockDataSize +
+			(u64)(page + page_num) * DeviceInfo.wPageDataSize;
+
+			iowrite32(ioread32(FlashReg + intr_status),
+				FlashReg + intr_status);
+
+			iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+				(flash_add >>
+				DeviceInfo.nBitsInPageDataSize)),
+				FlashMem);
+
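+			/*
+			 * With ECC on, assemble the physical page image in
+			 * page_main_spare (data sectors interleaved with
+			 * their ECC bytes, spare flag bytes after the last
+			 * sector, skip bytes padded with 0xff) before
+			 * streaming the whole page to the data FIFO.
+			 */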
+			if (ioread32(FlashReg + ECC_ENABLE)) {
+				for (j = 0;
+				     j <
+				     DeviceInfo.wPageDataSize / eccSectorSize;
+				     j++) {
+					for (i = 0; i < eccSectorSize; i++)
+						page_main_spare[(eccSectorSize +
+								 eccBytes) * j +
+								i] =
+						    write_data[eccSectorSize *
+							       j + i];
+
+					for (i = 0; i < eccBytes; i++)
+						page_main_spare[(eccSectorSize +
+								 eccBytes) * j +
+								eccSectorSize +
+								i] =
+						    write_data[PageDataSize +
+							       spareFlagBytes +
+							       eccBytes * j +
+							       i];
+				}
+
+				for (i = 0; i < spareFlagBytes; i++)
+					page_main_spare[(eccSectorSize +
+							 eccBytes) * j + i] =
+					    write_data[PageDataSize + i];
+
+				for (i = PageSize - 1; i >= PageDataSize +
+							spareSkipBytes; i--)
+					page_main_spare[i] = page_main_spare[i -
+								spareSkipBytes];
+
+				for (i = PageDataSize; i < PageDataSize +
+							spareSkipBytes; i++)
+					page_main_spare[i] = 0xff;
+
+				for (i = 0; i < PageSize / 4; i++)
+					iowrite32(
+					*((u32 *)page_main_spare + i),
+					FlashMem + 0x10);
+			} else {
+
+				for (i = 0; i < PageSize / 4; i++)
+					iowrite32(*((u32 *)write_data + i),
+						FlashMem + 0x10);
+			}
+
+			while (!(ioread32(FlashReg + intr_status) &
+				(INTR_STATUS0__PROGRAM_COMP |
+				INTR_STATUS0__PROGRAM_FAIL)))
+				;
+
+			if (ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__PROGRAM_FAIL)
+				status = FAIL;
+
+			iowrite32(ioread32(FlashReg + intr_status),
+					FlashReg + intr_status);
+
+			page_num++;
+			page_count--;
+			write_data += PageSize;
+		}
+
+		iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+	}
+
+	kfree(page_main_spare);
+	return status;
+}
+
+u16 NAND_Read_Page_Main_Spare(u8 *read_data, u32 block, u16 page,
+				 u16 page_count)
+{
+	u32 status = PASS;
+	u32 i, j;
+	u64 flash_add = 0;
+	u32 PageSize = DeviceInfo.wPageSize;
+	u32 PageDataSize = DeviceInfo.wPageDataSize;
+	u32 PageSpareSize = DeviceInfo.wPageSpareSize;
+	u32 eccBytes = DeviceInfo.wECCBytesPerSector;
+	u32 spareFlagBytes = DeviceInfo.wNumPageSpareFlag;
+	u32 spareSkipBytes  = DeviceInfo.wSpareSkipBytes;
+	u32 eccSectorSize;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u8 *read_data_l = read_data;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+	u8 *page_main_spare;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	page_main_spare = kmalloc(DeviceInfo.wPageSize, GFP_ATOMIC);
+	if (!page_main_spare) {
+		printk(KERN_ERR "Failed to kmalloc memory in %s Line %d, exit.\n",
+			__FILE__, __LINE__);
+		return FAIL;
+	}
+
+	eccSectorSize = ECC_SECTOR_SIZE * (DeviceInfo.wDevicesConnected);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	if (status == PASS) {
+		intr_status = intr_status_addresses[flash_bank];
+
+		iowrite32(1, FlashReg + TRANSFER_SPARE_REG);
+
+		iowrite32(ioread32(FlashReg + intr_status),
+				FlashReg + intr_status);
+
+		while ((status != FAIL) && (page_count > 0)) {
+			flash_add = (u64)(block %
+				(DeviceInfo.wTotalBlocks / totalUsedBanks))
+				* DeviceInfo.wBlockDataSize +
+				(u64)page * DeviceInfo.wPageDataSize;
+
+			index_addr((u32)(MODE_10 | (flash_bank << 24) |
+				(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+				0x43);
+			index_addr((u32)(MODE_10 | (flash_bank << 24) |
+				(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+				0x2000 | page_count);
+
+			while (!(ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__LOAD_COMP))
+				;
+
+			iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+				(flash_add >>
+				DeviceInfo.nBitsInPageDataSize)),
+				FlashMem);
+
+			for (i = 0; i < PageSize / 4; i++)
+				*(((u32 *)page_main_spare) + i) =
+					ioread32(FlashMem + 0x10);
+
+			if (ioread32(FlashReg + ECC_ENABLE)) {
+				for (i = PageDataSize;  i < PageSize -
+							spareSkipBytes; i++)
+					page_main_spare[i] = page_main_spare[i +
+								spareSkipBytes];
+
+				for (j = 0;
+				     j < DeviceInfo.wPageDataSize /
+				     eccSectorSize;
+				     j++) {
+
+					for (i = 0; i < eccSectorSize; i++)
+						read_data_l[eccSectorSize * j +
+							    i] =
+						    page_main_spare[
+							(eccSectorSize +
+							eccBytes) * j + i];
+
+					for (i = 0; i < eccBytes; i++)
+						read_data_l[PageDataSize +
+							    spareFlagBytes +
+							    eccBytes * j + i] =
+						    page_main_spare[
+							(eccSectorSize +
+							eccBytes) * j +
+							eccSectorSize + i];
+				}
+
+				for (i = 0; i < spareFlagBytes; i++)
+					read_data_l[PageDataSize + i] =
+					    page_main_spare[(eccSectorSize +
+							     eccBytes) * j + i];
+			} else {
+				for (i = 0; i < (PageDataSize + PageSpareSize);
+				     i++)
+					read_data_l[i] = page_main_spare[i];
+
+			}
+
+			if (ioread32(FlashReg + ECC_ENABLE)) {
+				while (!(ioread32(FlashReg + intr_status) &
+					(INTR_STATUS0__ECC_TRANSACTION_DONE |
+					INTR_STATUS0__ECC_ERR)))
+					;
+
+				if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_ERR) {
+					iowrite32(INTR_STATUS0__ECC_ERR,
+						FlashReg + intr_status);
+					status = do_ecc_new(flash_bank,
+						read_data, block, page);
+				}
+
+				if ((ioread32(FlashReg + intr_status) &
+					(INTR_STATUS0__ECC_TRANSACTION_DONE |
+					INTR_STATUS0__ECC_ERR)) ==
+					(INTR_STATUS0__ECC_TRANSACTION_DONE |
+					INTR_STATUS0__ECC_ERR)) {
+					iowrite32(INTR_STATUS0__ECC_ERR |
+					INTR_STATUS0__ECC_TRANSACTION_DONE,
+					FlashReg + intr_status);
+				} else if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_TRANSACTION_DONE) {
+					iowrite32(
+					INTR_STATUS0__ECC_TRANSACTION_DONE,
+					FlashReg + intr_status);
+				} else if (ioread32(FlashReg + intr_status) &
+					INTR_STATUS0__ECC_ERR) {
+					iowrite32(INTR_STATUS0__ECC_ERR,
+						FlashReg + intr_status);
+				}
+			}
+
+			page++;
+			page_count--;
+			read_data_l += PageSize;
+		}
+	}
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+
+	kfree(page_main_spare);
+	return status;
+}
+
+u16 NAND_Pipeline_Write_Ahead(u8 *write_data, u32 block,
+			u16 page, u16 page_count)
+{
+	u16 status = PASS;
+	u32 NumPages = page_count;
+	u64 flash_add;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+#if DDMA
+	int ret;
+#else
+	u32 PageSize = DeviceInfo.wPageDataSize;
+	u32 i, page_num = 0;
+#endif
+
+	nand_dbg_print(NAND_DBG_DEBUG, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+
+	if (page_count < 2)
+		status = FAIL;
+
+	if (status != PASS)
+		return status;
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	intr_status = intr_status_addresses[flash_bank];
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+#if DDMA
+	iowrite32(1, FlashReg + DMA_ENABLE);
+	while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	/* Fill the mrst_nand_info structure */
+	info.state = INT_PIPELINE_WRITE_AHEAD;
+	info.write_data = write_data;
+	info.flash_bank = flash_bank;
+	info.block = block;
+	info.page = page;
+	info.ret = PASS;
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+
+	ddma_trans(write_data, flash_add, flash_bank, 1, NumPages);
+
+	iowrite32(1, FlashReg + GLOBAL_INT_ENABLE); /* Enable interrupt */
+
+	ret = wait_for_completion_timeout(&info.complete, 10 * HZ);
+	if (!ret)
+		printk(KERN_ERR "Wait for completion timeout "
+			"in %s, Line %d\n", __FILE__, __LINE__);
+
+	status = info.ret;
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+	while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+#else
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+		0x2100 | NumPages);
+
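+	/*
+	 * 0x21xx queues a pipeline write of NumPages pages; it appears to
+	 * be the write-side counterpart of the 0x20xx pipeline read
+	 * command used on the read path.
+	 */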
+	while (NumPages > 0) {
+		iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+			((flash_add + page_num * DeviceInfo.wPageDataSize) >>
+			DeviceInfo.nBitsInPageDataSize)), FlashMem);
+
+		for (i = 0; i < DeviceInfo.wPageDataSize / 4; i++)
+			iowrite32(*((u32 *)write_data + i), FlashMem + 0x10);
+
+		while (!(ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__INT_ACT))
+			;
+
+		iowrite32(INTR_STATUS0__INT_ACT, FlashReg + intr_status);
+
+		write_data += PageSize;
+		--NumPages;
+		page_num++;
+	}
+
+	while (!(ioread32(FlashReg + intr_status) &
+		(INTR_STATUS0__PROGRAM_COMP | INTR_STATUS0__PROGRAM_FAIL)))
+		;
+
+	if (ioread32(FlashReg + intr_status) & INTR_STATUS0__PROGRAM_FAIL)
+		status = FAIL;
+
+	iowrite32(INTR_STATUS0__PROGRAM_COMP | INTR_STATUS0__PROGRAM_FAIL,
+		FlashReg + intr_status);
+
+#endif
+
+	return status;
+}
+
+u16 NAND_Multiplane_Write(u8 *write_data, u32 block, u16 page,
+			     u16 page_count)
+{
+	u16 status = PASS;
+	u32 NumPages = page_count;
+	u64 flash_add;
+	u32 flash_bank;
+	u32 intr_status = 0;
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+#if DDMA
+	u16 status2 = PASS;
+	u32 t;
+#else
+	u32 PageSize = DeviceInfo.wPageDataSize;
+	u32 i, page_num = 0;
+	u32 plane = 0;
+#endif
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	status = Boundary_Check_Block_Page(block, page, page_count);
+	if (status != PASS)
+		return status;
+
+	flash_add = (u64)(block % (DeviceInfo.wTotalBlocks / totalUsedBanks))
+		* DeviceInfo.wBlockDataSize +
+		(u64)page * DeviceInfo.wPageDataSize;
+
+	flash_bank = block / (DeviceInfo.wTotalBlocks / totalUsedBanks);
+
+	intr_status = intr_status_addresses[flash_bank];
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+	iowrite32(0x01, FlashReg + MULTIPLANE_OPERATION);
+
+#if DDMA
+
+	iowrite32(1, FlashReg + DMA_ENABLE);
+	while (!(ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)), 0x42);
+
+	ddma_trans(write_data, flash_add, flash_bank, 1, NumPages);
+
+	while (1) {
+		while (!ioread32(FlashReg + intr_status))
+			;
+
+		if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__DMA_CMD_COMP) {
+			iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+				FlashReg + intr_status);
+			status = PASS;
+			if (status2 == FAIL)
+				status = FAIL;
+			break;
+		} else if (ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__PROGRAM_FAIL) {
+			status2 = FAIL;
+			status = FAIL;
+			t = ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__PROGRAM_FAIL;
+			iowrite32(t, FlashReg + intr_status);
+		} else {
+			iowrite32((~INTR_STATUS0__PROGRAM_FAIL) &
+				(~INTR_STATUS0__DMA_CMD_COMP),
+				FlashReg + intr_status);
+		}
+	}
+
+	iowrite32(ioread32(FlashReg + intr_status), FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + DMA_ENABLE);
+
+	while ((ioread32(FlashReg + DMA_ENABLE) & DMA_ENABLE__FLAG))
+		;
+
+	iowrite32(0, FlashReg + MULTIPLANE_OPERATION);
+
+#else
+	iowrite32(0, FlashReg + TRANSFER_SPARE_REG);
+
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+		0x42);
+	index_addr((u32)(MODE_10 | (flash_bank << 24) |
+		(flash_add >> DeviceInfo.nBitsInPageDataSize)),
+		0x2100 | NumPages);
+
+	while (NumPages > 0) {
+		if (0 == plane) {
+			iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+				((flash_add +
+				page_num * DeviceInfo.wPageDataSize) >>
+				DeviceInfo.nBitsInPageDataSize)),
+				FlashMem);
+			plane = 1;
+		} else {
+			iowrite32((u32)(MODE_01 | (flash_bank << 24) |
+				((flash_add + DeviceInfo.wBlockDataSize +
+				page_num * DeviceInfo.wPageDataSize) >>
+				DeviceInfo.nBitsInPageDataSize)),
+				FlashMem);
+			plane = 0;
+		}
+
+		for (i = 0; i < DeviceInfo.wPageDataSize / 4; i++)
+			iowrite32(*((u32 *)write_data + i),
+				FlashMem + 0x10);
+
+		write_data += PageSize;
+
+		if (0 == plane)
+			page_num++;
+
+		--NumPages;
+	}
+
+	while (!(ioread32(FlashReg + intr_status) &
+		(INTR_STATUS0__PROGRAM_COMP |
+		INTR_STATUS0__PROGRAM_FAIL)))
+		;
+
+	if (ioread32(FlashReg + intr_status) & INTR_STATUS0__PROGRAM_FAIL)
+		status = FAIL;
+
+	iowrite32(INTR_STATUS0__PROGRAM_COMP | INTR_STATUS0__PROGRAM_FAIL,
+		FlashReg + intr_status);
+
+	iowrite32(0, FlashReg + MULTIPLANE_OPERATION);
+#endif
+
+	return status;
+}
+
+u16 NAND_LLD_Event_Status(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	return PASS;
+}
+
+static void handle_nand_int_read(struct mrst_nand_info *dev)
+{
+	u32 intr_status_addresses[4] = {INTR_STATUS0,
+		INTR_STATUS1, INTR_STATUS2, INTR_STATUS3};
+	u32 intr_status;
+	u32 ecc_done_OR_dma_comp = 0;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	dev->ret = PASS;
+	intr_status = intr_status_addresses[dev->flash_bank];
+
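+	/*
+	 * Interrupt-driven mirror of the polling read loop: with ECC on,
+	 * wait until both ECC completion and DMA completion have been
+	 * observed; with ECC off, DMA completion alone finishes the read.
+	 */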
+	while (1) {
+		if (ioread32(FlashReg + ECC_ENABLE)) {
+			if (ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__ECC_ERR) {
+				iowrite32(INTR_STATUS0__ECC_ERR,
+					FlashReg + intr_status);
+				dev->ret = do_ecc_new(dev->flash_bank,
+						dev->read_data,
+						dev->block, dev->page);
+			} else if (ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__DMA_CMD_COMP) {
+				iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+					FlashReg + intr_status);
+				if (1 == ecc_done_OR_dma_comp)
+					break;
+				ecc_done_OR_dma_comp = 1;
+			} else if (ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__ECC_TRANSACTION_DONE) {
+				iowrite32(INTR_STATUS0__ECC_TRANSACTION_DONE,
+					FlashReg + intr_status);
+				if (1 == ecc_done_OR_dma_comp)
+					break;
+				ecc_done_OR_dma_comp = 1;
+			}
+		} else {
+			if (ioread32(FlashReg + intr_status) &
+				INTR_STATUS0__DMA_CMD_COMP) {
+				iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+					FlashReg + intr_status);
+				break;
+			} else {
+				printk(KERN_ERR "Illegal INTS "
+					"(offset addr 0x%x) value: 0x%x\n",
+					intr_status,
+					ioread32(FlashReg + intr_status));
+			}
+		}
+
+		iowrite32((~INTR_STATUS0__ECC_ERR) &
+		(~INTR_STATUS0__ECC_TRANSACTION_DONE) &
+		(~INTR_STATUS0__DMA_CMD_COMP),
+		FlashReg + intr_status);
+	}
+}
+
+static void handle_nand_int_write(struct mrst_nand_info *dev)
+{
+	u32 intr_status;
+	u32 intr[4] = {INTR_STATUS0, INTR_STATUS1,
+		INTR_STATUS2, INTR_STATUS3};
+	int status = PASS;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	dev->ret = PASS;
+	intr_status = intr[dev->flash_bank];
+
+	while (1) {
+		while (!ioread32(FlashReg + intr_status))
+			;
+
+		if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__DMA_CMD_COMP) {
+			iowrite32(INTR_STATUS0__DMA_CMD_COMP,
+				FlashReg + intr_status);
+			if (FAIL == status)
+				dev->ret = FAIL;
+			break;
+		} else if (ioread32(FlashReg + intr_status) &
+			INTR_STATUS0__PROGRAM_FAIL) {
+			status = FAIL;
+			iowrite32(INTR_STATUS0__PROGRAM_FAIL,
+				FlashReg + intr_status);
+		} else {
+			iowrite32((~INTR_STATUS0__PROGRAM_FAIL) &
+				(~INTR_STATUS0__DMA_CMD_COMP),
+				FlashReg + intr_status);
+		}
+	}
+}
+
+static irqreturn_t ddma_isr(int irq, void *dev_id)
+{
+	struct mrst_nand_info *dev = dev_id;
+	u32 int_mask, ints0, ints1, ints2, ints3, ints_offset;
+	u32 intr[4] = {INTR_STATUS0, INTR_STATUS1,
+		INTR_STATUS2, INTR_STATUS3};
+
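+	/*
+	 * Only the status bits in int_mask belong to this driver; the IRQ
+	 * line is shared, so anything else is acked and reported back as
+	 * IRQ_NONE below.
+	 */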
+	int_mask = INTR_STATUS0__DMA_CMD_COMP |
+		INTR_STATUS0__ECC_TRANSACTION_DONE |
+		INTR_STATUS0__ECC_ERR |
+		INTR_STATUS0__PROGRAM_FAIL |
+		INTR_STATUS0__ERASE_FAIL;
+
+	ints0 = ioread32(FlashReg + INTR_STATUS0);
+	ints1 = ioread32(FlashReg + INTR_STATUS1);
+	ints2 = ioread32(FlashReg + INTR_STATUS2);
+	ints3 = ioread32(FlashReg + INTR_STATUS3);
+
+	ints_offset = intr[dev->flash_bank];
+
+	nand_dbg_print(NAND_DBG_DEBUG,
+		"INTR0: 0x%x, INTR1: 0x%x, INTR2: 0x%x, INTR3: 0x%x, "
+		"DMA_INTR: 0x%x, "
+		"dev->state: 0x%x, dev->flash_bank: %d\n",
+		ints0, ints1, ints2, ints3,
+		ioread32(FlashReg + DMA_INTR),
+		dev->state, dev->flash_bank);
+
+	if (!(ioread32(FlashReg + ints_offset) & int_mask)) {
+		iowrite32(ints0, FlashReg + INTR_STATUS0);
+		iowrite32(ints1, FlashReg + INTR_STATUS1);
+		iowrite32(ints2, FlashReg + INTR_STATUS2);
+		iowrite32(ints3, FlashReg + INTR_STATUS3);
+		nand_dbg_print(NAND_DBG_WARN,
+			"ddma_isr: Invalid interrupt for NAND controller. "
+			"Ignore it\n");
+		return IRQ_NONE;
+	}
+
+	switch (dev->state) {
+	case INT_READ_PAGE_MAIN:
+	case INT_PIPELINE_READ_AHEAD:
+		/* Disable controller interrupts */
+		iowrite32(0, FlashReg + GLOBAL_INT_ENABLE);
+		handle_nand_int_read(dev);
+		break;
+	case INT_WRITE_PAGE_MAIN:
+	case INT_PIPELINE_WRITE_AHEAD:
+		iowrite32(0, FlashReg + GLOBAL_INT_ENABLE);
+		handle_nand_int_write(dev);
+		break;
+	default:
+		printk(KERN_ERR "ddma_isr - Illegal state: 0x%x\n",
+			dev->state);
+		return IRQ_NONE;
+	}
+
+	dev->state = INT_IDLE_STATE;
+	complete(&dev->complete);
+	return IRQ_HANDLED;
+}
+
+static const struct pci_device_id nand_pci_ids[] = {
+	{
+	 .vendor = 0x8086,
+	 .device = 0x0809,
+	 .subvendor = PCI_ANY_ID,
+	 .subdevice = PCI_ANY_ID,
+	 },
+	{ /* end: all zeroes */ }
+};
+
+static int dump_pci_config_register(struct pci_dev *dev)
+{
+	int err = 0;
+	unsigned int data32;
+	u16 data16;
+	u8 data8;
+
+	nand_dbg_print(NAND_DBG_DEBUG, "Dump MRST PCI Config Registers:\n");
+
+	err = pci_read_config_word(dev, PCI_VENDOR_ID, &data16);
+	if (err) {
+		printk(KERN_ERR "Read PCI_VENDOR_ID fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_VENDOR_ID: 0x%x\n", data16);
+	}
+
+	err = pci_read_config_word(dev, PCI_DEVICE_ID, &data16);
+	if (err) {
+		printk(KERN_ERR "Read PCI_DEVICE_ID fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_DEVICE_ID: 0x%x\n", data16);
+	}
+
+	err = pci_read_config_word(dev, PCI_COMMAND, &data16);
+	if (err) {
+		printk(KERN_ERR "Read PCI_COMMAND fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_COMMAND: 0x%x\n", data16);
+	}
+
+	err = pci_read_config_word(dev, PCI_STATUS, &data16);
+	if (err) {
+		printk(KERN_ERR "Read PCI_STATUS fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_STATUS: 0x%x\n", data16);
+	}
+
+	err = pci_read_config_byte(dev, PCI_CLASS_REVISION, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_CLASS_REVISION fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_CLASS_REVISION: 0x%x\n",
+			       data8);
+	}
+
+	err = pci_read_config_byte(dev, PCI_CLASS_PROG, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_CLASS_PROG fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_CLASS_PROG: 0x%x\n", data8);
+	}
+
+	err = pci_read_config_word(dev, PCI_CLASS_DEVICE, &data16);
+	if (err) {
+		printk(KERN_ERR "Read PCI_CLASS_DEVICE fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_CLASS_DEVICE: 0x%x\n",
+			       data16);
+	}
+
+	err = pci_read_config_byte(dev, PCI_CACHE_LINE_SIZE, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_CACHE_LINE_SIZE fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_CACHE_LINE_SIZE: 0x%x\n",
+			       data8);
+	}
+
+	err = pci_read_config_byte(dev, PCI_LATENCY_TIMER, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_LATENCY_TIMER fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_LATENCY_TIMER: 0x%x\n",
+			       data8);
+	}
+
+	err = pci_read_config_byte(dev, PCI_HEADER_TYPE, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_HEADER_TYPE fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_HEADER_TYPE: 0x%x\n",
+			       data8);
+	}
+
+	err = pci_read_config_byte(dev, PCI_BIST, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_BIST fail, " "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_BIST: 0x%x\n", data8);
+	}
+
+	err = pci_read_config_dword(dev, PCI_BASE_ADDRESS_0, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_BASE_ADDRESS_0 fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_BASE_ADDRESS_0: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_dword(dev, PCI_BASE_ADDRESS_1, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_BASE_ADDRESS_1 fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_BASE_ADDRESS_1: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_dword(dev, PCI_BASE_ADDRESS_2, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_BASE_ADDRESS_2 fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_BASE_ADDRESS_2: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_dword(dev, PCI_BASE_ADDRESS_3, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_BASE_ADDRESS_3 fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_BASE_ADDRESS_3: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_dword(dev, PCI_BASE_ADDRESS_4, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_BASE_ADDRESS_4 fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_BASE_ADDRESS_4: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_dword(dev, PCI_BASE_ADDRESS_5, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_BASE_ADDRESS_5 fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_BASE_ADDRESS_5: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_dword(dev, PCI_CARDBUS_CIS, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_CARDBUS_CIS fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_CARDBUS_CIS: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_word(dev, PCI_SUBSYSTEM_VENDOR_ID, &data16);
+	if (err) {
+		printk(KERN_ERR "Read PCI_SUBSYSTEM_VENDOR_ID fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG,
+			       "PCI_SUBSYSTEM_VENDOR_ID: 0x%x\n", data16);
+	}
+
+	err = pci_read_config_word(dev, PCI_SUBSYSTEM_ID, &data16);
+	if (err) {
+		printk(KERN_ERR "Read PCI_SUBSYSTEM_ID fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_SUBSYSTEM_ID: 0x%x\n",
+			       data16);
+	}
+
+	err = pci_read_config_dword(dev, PCI_ROM_ADDRESS, &data32);
+	if (err) {
+		printk(KERN_ERR "Read PCI_ROM_ADDRESS fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_ROM_ADDRESS: 0x%x\n",
+			       data32);
+	}
+
+	err = pci_read_config_byte(dev, PCI_INTERRUPT_LINE, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_INTERRUPT_LINE fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_INTERRUPT_LINE: 0x%x\n",
+			       data8);
+	}
+
+	err = pci_read_config_byte(dev, PCI_INTERRUPT_PIN, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_INTERRUPT_PIN fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_INTERRUPT_PIN: 0x%x\n",
+			       data8);
+	}
+
+	err = pci_read_config_byte(dev, PCI_MIN_GNT, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_MIN_GNT fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_MIN_GNT: 0x%x\n", data8);
+	}
+
+	err = pci_read_config_byte(dev, PCI_MAX_LAT, &data8);
+	if (err) {
+		printk(KERN_ERR "Read PCI_MAX_LAT fail, "
+		       "error code: %d\n", err);
+		return err;
+	} else {
+		nand_dbg_print(NAND_DBG_DEBUG, "PCI_MAX_LAT: 0x%x\n", data8);
+	}
+
+	return err;
+}
+
+static int nand_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
+{
+	int ret = -ENODEV;
+	unsigned long csr_base;
+	unsigned long csr_len;
+	struct mrst_nand_info *pndev = &info;
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	ret = pci_enable_device(dev);
+	if (ret) {
+		printk(KERN_ERR "Spectra: pci_enable_device failed.\n");
+		return ret;
+	}
+
+	dump_pci_config_register(dev);
+
+	pci_set_master(dev);
+	pndev->dev = dev;
+
+	csr_base = pci_resource_start(dev, 0);
+	if (!csr_base) {
+		printk(KERN_ERR "Spectra: pci_resource_start failed!\n");
+		return -ENODEV;
+	}
+
+	csr_len = pci_resource_len(dev, 0);
+	if (!csr_len) {
+		printk(KERN_ERR "Spectra: pci_resource_len failed!\n");
+		return -ENODEV;
+	}
+
+	ret = pci_request_regions(dev, SPECTRA_NAND_NAME);
+	if (ret) {
+		printk(KERN_ERR "Spectra: Unable to request "
+		       "memory region\n");
+		goto failed_req_csr;
+	}
+
+	pndev->ioaddr = ioremap_nocache(csr_base, csr_len);
+	if (!pndev->ioaddr) {
+		printk(KERN_ERR "Spectra: Unable to remap memory region\n");
+		ret = -ENOMEM;
+		goto failed_remap_csr;
+	}
+	nand_dbg_print(NAND_DBG_DEBUG, "Spectra: CSR 0x%08lx -> 0x%p (0x%lx)\n",
+		       csr_base, pndev->ioaddr, csr_len);
+
+#if DDMA
+	init_completion(&pndev->complete);
+	nand_dbg_print(NAND_DBG_DEBUG, "Spectra: IRQ %d\n", dev->irq);
+	if (request_irq(dev->irq, ddma_isr, IRQF_SHARED,
+			SPECTRA_NAND_NAME, &info)) {
+		printk(KERN_ERR "Spectra: Unable to allocate IRQ\n");
+		ret = -ENODEV;
+		iounmap(pndev->ioaddr);
+		goto failed_remap_csr;
+	}
+#endif
+
+	pci_set_drvdata(dev, pndev);
+
+	return 0;
+
+failed_remap_csr:
+	pci_release_regions(dev);
+failed_req_csr:
+	pci_disable_device(dev);
+
+	return ret;
+}
+
+static void nand_pci_remove(struct pci_dev *dev)
+{
+	struct mrst_nand_info *pndev = pci_get_drvdata(dev);
+
+	nand_dbg_print(NAND_DBG_WARN, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+#if DDMA
+	free_irq(dev->irq, pndev);
+#endif
+	iounmap(pndev->ioaddr);
+	pci_release_regions(dev);
+	pci_disable_device(dev);
+}
+
+MODULE_DEVICE_TABLE(pci, nand_pci_ids);
+
+static struct pci_driver nand_pci_driver = {
+	.name = SPECTRA_NAND_NAME,
+	.id_table = nand_pci_ids,
+	.probe = nand_pci_probe,
+	.remove = nand_pci_remove,
+};
+
+u16 NAND_Flash_Init(void)
+{
+	int retval;
+	u32 int_mask = INTR_STATUS0__DMA_CMD_COMP |
+		INTR_STATUS0__ECC_TRANSACTION_DONE |
+		INTR_STATUS0__ECC_ERR |
+		INTR_STATUS0__PROGRAM_FAIL |
+		INTR_STATUS0__ERASE_FAIL;
+
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	FlashReg = GLOB_MEMMAP_NOCACHE(GLOB_HWCTL_REG_BASE,
+			GLOB_HWCTL_REG_SIZE);
+	if (!FlashReg) {
+		printk(KERN_ERR "Spectra: ioremap_nocache failed!");
+		return -ENOMEM;
+	}
+	nand_dbg_print(NAND_DBG_WARN,
+		"Spectra: Remapped reg base address: "
+		"0x%p, len: %d\n",
+		FlashReg, GLOB_HWCTL_REG_SIZE);
+
+	FlashMem = GLOB_MEMMAP_NOCACHE(GLOB_HWCTL_MEM_BASE,
+			GLOB_HWCTL_MEM_SIZE);
+	if (!FlashMem) {
+		printk(KERN_ERR "Spectra: ioremap_nocache failed!");
+		return -ENOMEM;
+	}
+
+	nand_dbg_print(NAND_DBG_WARN,
+		"Spectra: Remapped flash base address: "
+		"0x%p, len: %d\n",
+		(void *)FlashMem, GLOB_HWCTL_MEM_SIZE);
+
+	NAND_Flash_Reset();
+
+	iowrite32(0, FlashReg + GLOBAL_INT_ENABLE);
+
+	iowrite32(int_mask, FlashReg + INTR_EN0);
+	iowrite32(int_mask, FlashReg + INTR_EN1);
+	iowrite32(int_mask, FlashReg + INTR_EN2);
+	iowrite32(int_mask, FlashReg + INTR_EN3);
+
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS0);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS1);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS2);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS3);
+
+	iowrite32(0x0F, FlashReg + RB_PIN_ENABLED);
+	iowrite32(CHIP_EN_DONT_CARE__FLAG, FlashReg + CHIP_ENABLE_DONT_CARE);
+
+	/* These registers must be programmed with sane defaults at init */
+	iowrite32(1, FlashReg + ECC_ENABLE);
+	iowrite32(0, FlashReg + TWO_ROW_ADDR_CYCLES);
+
+	/* Enabling the two lines below turns on the pipeline_rw_ahead
+	 * feature and improves performance by about 10%, but it also
+	 * causes 1- or 2-bit errors during 300MB+ file copy/compare
+	 * tests. Suspect an ECC FIFO overflow issue. -- Yunpeng 2009.03.26 */
+	/* iowrite32(1, FlashReg + CACHE_WRITE_ENABLE); */
+	/* iowrite32(1, FlashReg + CACHE_READ_ENABLE); */
+
+	retval = pci_register_driver(&nand_pci_driver);
+	if (retval) {
+		iounmap(FlashMem);
+		iounmap(FlashReg);
+		return FAIL;	/* u16 return can't carry a negative errno */
+	}
+
+	return PASS;
+}
+
+#endif
+
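+/*
+ * CDMA build variant of NAND_Flash_Init(): the same register setup as
+ * above, except that every interrupt source is left masked and no PCI
+ * driver is registered here.
+ */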
+#if FLASH_CDMA
+u16 NAND_Flash_Init(void)
+{
+	nand_dbg_print(NAND_DBG_TRACE, "%s, Line %d, Function: %s\n",
+		       __FILE__, __LINE__, __func__);
+
+	FlashReg = GLOB_MEMMAP_NOCACHE(GLOB_HWCTL_REG_BASE,
+			GLOB_HWCTL_REG_SIZE);
+	if (!FlashReg) {
+		printk(KERN_ERR "Spectra: ioremap_nocache failed!\n");
+		return FAIL;	/* u16 return can't carry a negative errno */
+	}
+	nand_dbg_print(NAND_DBG_WARN,
+		"Spectra: Remapped reg base address: "
+		"0x%p, len: %d\n",
+		FlashReg, GLOB_HWCTL_REG_SIZE);
+
+	FlashMem = GLOB_MEMMAP_NOCACHE(GLOB_HWCTL_MEM_BASE,
+			GLOB_HWCTL_MEM_SIZE);
+	if (!FlashMem) {
+		printk(KERN_ERR "Spectra: ioremap_nocache failed!\n");
+		iounmap(FlashReg);	/* don't leak the reg mapping */
+		return FAIL;
+	}
+
+	nand_dbg_print(NAND_DBG_WARN,
+		"Spectra: Remapped flash base address: "
+		"0x%p, len: %d\n",
+		(void *)FlashMem, GLOB_HWCTL_MEM_SIZE);
+
+	NAND_Flash_Reset();
+
+	iowrite32(0, FlashReg + GLOBAL_INT_ENABLE);
+
+	iowrite32(0, FlashReg + INTR_EN0);
+	iowrite32(0, FlashReg + INTR_EN1);
+	iowrite32(0, FlashReg + INTR_EN2);
+	iowrite32(0, FlashReg + INTR_EN3);
+
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS0);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS1);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS2);
+	iowrite32(0xFFFF, FlashReg + INTR_STATUS3);
+
+	iowrite32(0x0F, FlashReg + RB_PIN_ENABLED);
+	iowrite32(CHIP_EN_DONT_CARE__FLAG, FlashReg + CHIP_ENABLE_DONT_CARE);
+
+	iowrite32(1, FlashReg + ECC_ENABLE);
+	iowrite32(0, FlashReg + TWO_ROW_ADDR_CYCLES);
+
+	/* Enabling the two lines below turns on the pipeline_rw_ahead
+	 * feature and improves performance by about 10%, but it also
+	 * causes 1- or 2-bit errors during 300MB+ file copy/compare
+	 * tests. Suspect an ECC FIFO overflow issue. -- Yunpeng 2009.03.26 */
+	/* iowrite32(1, FlashReg + CACHE_WRITE_ENABLE); */
+	/* iowrite32(1, FlashReg + CACHE_READ_ENABLE); */
+
+	return PASS;
+}
+
+#endif
+
diff --git a/drivers/staging/mrst_nand/lld_nand.h b/drivers/staging/mrst_nand/lld_nand.h
new file mode 100644
index 0000000..88d0554
--- /dev/null
+++ b/drivers/staging/mrst_nand/lld_nand.h
@@ -0,0 +1,116 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#ifndef _LLD_NAND_
+#define _LLD_NAND_
+
+#ifdef ELDORA
+#include "defs.h"
+#else
+#include "flash.h"
+#include "ffsport.h"
+#endif
+
+#define MODE_00    0x00000000
+#define MODE_01    0x04000000
+#define MODE_10    0x08000000
+#define MODE_11    0x0C000000
+
+
+#define DATA_TRANSFER_MODE              0
+#define PROTECTION_PER_BLOCK            1
+#define LOAD_WAIT_COUNT                 2
+#define PROGRAM_WAIT_COUNT              3
+#define ERASE_WAIT_COUNT                4
+#define INT_MONITOR_CYCLE_COUNT         5
+#define READ_BUSY_PIN_ENABLED           6
+#define MULTIPLANE_OPERATION_SUPPORT    7
+#define PRE_FETCH_MODE                  8
+#define CE_DONT_CARE_SUPPORT            9
+#define COPYBACK_SUPPORT                10
+#define CACHE_WRITE_SUPPORT             11
+#define CACHE_READ_SUPPORT              12
+#define NUM_PAGES_IN_BLOCK              13
+#define ECC_ENABLE_SELECT               14
+#define WRITE_ENABLE_2_READ_ENABLE      15
+#define ADDRESS_2_DATA                  16
+#define READ_ENABLE_2_WRITE_ENABLE      17
+#define TWO_ROW_ADDRESS_CYCLES          18
+#define MULTIPLANE_ADDRESS_RESTRICT     19
+#define ACC_CLOCKS                      20
+#define READ_WRITE_ENABLE_LOW_COUNT     21
+#define READ_WRITE_ENABLE_HIGH_COUNT    22
+
+#define ECC_SECTOR_SIZE     512
+#define LLD_MAX_FLASH_BANKS     4
+
+
+u16  NAND_Flash_Init(void);
+u16  NAND_Flash_Reset(void);
+u16  NAND_Read_Device_ID(void);
+u16  NAND_Erase_Block(u32 flash_add);
+u16  NAND_Write_Page_Main(u8 *write_data, u32 block, u16 page,
+				u16 page_count);
+u16  NAND_Read_Page_Main(u8 *read_data, u32 block, u16 page,
+				u16 page_count);
+u16  NAND_UnlockArrayAll(void);
+u16  NAND_Write_Page_Main_Spare(u8 *write_data, u32 block,
+				u16 page, u16 page_count);
+u16  NAND_Write_Page_Spare(u8 *read_data, u32 block, u16 page,
+				u16 page_count);
+u16  NAND_Read_Page_Main_Spare(u8 *read_data, u32 block, u16 page,
+				u16 page_count);
+u16  NAND_Read_Page_Spare(u8 *read_data, u32 block, u16 page,
+				u16 page_count);
+u16  NAND_LLD_Event_Status(void);
+void NAND_LLD_Enable_Disable_Interrupts(u16 int_enable);
+u16  NAND_Get_Bad_Block(u32 block);
+
+u32  NAND_Memory_Pool_Size(void);
+int NAND_Mem_Config(u8 *pMem);
+
+u16  NAND_Pipeline_Read_Ahead(u8 *read_data, u32 block, u16 page,
+				u16 page_count);
+u16  NAND_Pipeline_Write_Ahead(u8 *write_data, u32 block,
+				u16 page, u16 page_count);
+u16  NAND_Multiplane_Read(u8 *read_data, u32 block, u16 page,
+				u16 page_count);
+u16  NAND_Multiplane_Write(u8 *write_data, u32 block, u16 page,
+				u16 page_count);
+void NAND_ECC_Ctrl(int enable);
+u16 NAND_Read_Page_Main_Polling(u8 *read_data,
+		u32 block, u16 page, u16 page_count);
+u16 NAND_Pipeline_Read_Ahead_Polling(u8 *read_data,
+			u32 block, u16 page, u16 page_count);
+
+void Conv_Spare_Data_Log2Phy_Format(u8 *data);
+void Conv_Spare_Data_Phy2Log_Format(u8 *data);
+void Conv_Main_Spare_Data_Log2Phy_Format(u8 *data, u16 page_count);
+void Conv_Main_Spare_Data_Phy2Log_Format(u8 *data, u16 page_count);
+
+extern void __iomem *FlashReg;
+extern void __iomem *FlashMem;
+
+extern int totalUsedBanks;
+extern u32 GLOB_valid_banks[LLD_MAX_FLASH_BANKS];
+
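+/*
+ * Illustrative usage sketch, not part of the driver.  The names blk
+ * and buf are hypothetical, the buffer size is device-dependent, and
+ * PASS comes from the driver's common definitions:
+ *
+ *	u8 buf[2048];
+ *
+ *	if (NAND_Flash_Init() != PASS)
+ *		return FAIL;
+ *	if (NAND_Erase_Block(blk) == PASS &&
+ *	    NAND_Write_Page_Main(buf, blk, 0, 1) == PASS)
+ *		return NAND_Read_Page_Main(buf, blk, 0, 1);
+ */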
+#endif /*_LLD_NAND_*/
diff --git a/drivers/staging/mrst_nand/spectraswconfig.h b/drivers/staging/mrst_nand/spectraswconfig.h
new file mode 100644
index 0000000..f9e35e8
--- /dev/null
+++ b/drivers/staging/mrst_nand/spectraswconfig.h
@@ -0,0 +1,86 @@
+/*
+ * NAND Flash Controller Device Driver
+ * Copyright (c) 2009, Intel Corporation and its suppliers.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; if not, write to the Free Software Foundation, Inc.,
+ * 51 Franklin St - Fifth Floor, Boston, MA 02110-1301 USA.
+ *
+ */
+
+#ifndef _SPECTRASWCONFIG_
+#define _SPECTRASWCONFIG_
+
+/***** Common Parameters *****/
+#define RETRY_TIMES                   3
+
+#define READ_BADBLOCK_INFO            1
+#define READBACK_VERIFY               0
+#define AUTO_FORMAT_FLASH             0
+
+/***** Cache Parameters *****/
+#define PAGES_PER_CACHE_BLOCK         0
+#define CACHE_BLOCK_NUMBER            2
+
+/***** Block Table Parameters *****/
+#define BLOCK_TABLE_INDEX             0
+
+/***** Wear Leveling Parameters *****/
+#define WEAR_LEVELING_GATE         0x10
+#define WEAR_LEVELING_BLOCK_NUM      10
+
+#define DEBUG_BNDRY             0
+
+/***** Product Feature Support *****/
+
+/* defined() via macro expansion is undefined in #if; expand to 0/1 here */
+#ifdef CONFIG_MRST_NAND_EMU
+#define FLASH_EMU               1
+#else
+#define FLASH_EMU               0
+#endif
+#ifdef CONFIG_MRST_NAND_HW
+#define FLASH_NAND              1
+#else
+#define FLASH_NAND              0
+#endif
+#define FLASH_CDMA              0
+#define CMD_DMA                 0
+#define DDMA                    1
+
+#define SPECTRA_PARTITION_ID    0
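+/*
+ * These feature flags gate code paths at preprocessor time.  An
+ * illustrative (hypothetical) dispatch in a caller would look like:
+ *
+ *	#if FLASH_EMU
+ *		return emu_write_page(buf, block, page);
+ *	#elif FLASH_NAND
+ *		return NAND_Write_Page_Main(buf, block, page, 1);
+ *	#endif
+ */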
+
+/* Enable this macro if the number of flash blocks is larger than 16K. */
+#define SUPPORT_LARGE_BLOCKNUM  1
+
+/**** Block Table and Reserved Block Parameters *****/
+#define SPECTRA_START_BLOCK     3
+#define NUM_FREE_BLOCKS_GATE    30
+
+/**** Linux Block Driver Parameters ****/
+#define SBD_MEMPOOL_PTR      0x0c800000
+
+/**** Hardware Parameters ****/
+#define GLOB_HWCTL_REG_BASE     0xFFA40000
+#define GLOB_HWCTL_REG_SIZE     4096
+
+#define GLOB_HWCTL_MEM_BASE     0xFFA48000
+#define GLOB_HWCTL_MEM_SIZE     1024
+
+#define GLOB_HWCTL_DEFAULT_BLKS    2048
+
+/**** Toshiba Device Parameters ****/
+
+#define GLOB_DEVTSBA_ALT_BLK_NFO    1
+#define GLOB_DEVTSBA_ALT_BLK_ADD    \
+	(GLOB_HWCTL_REG_BASE + (DEVICE_PARAM_2 << 2))
+
+#define SUPPORT_15BITECC        1
+#define SUPPORT_8BITECC         1
+
+#define CUSTOM_CONF_PARAMS      0
+
+#define ONFI_BLOOM_TIME         0
+#define MODE5_WORKAROUND        1
+
+#endif /*_SPECTRASWCONFIG_*/
-- 
1.5.4.5


