Message-ID: <20161130090749.37484fff@bbrezillon>
Date: Wed, 30 Nov 2016 09:07:49 +0100
From: Boris Brezillon <boris.brezillon@...e-electrons.com>
To: Masahiro Yamada <yamada.masahiro@...ionext.com>
Cc: linux-mtd@...ts.infradead.org,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Marek Vasut <marek.vasut@...il.com>,
Brian Norris <computersforpeace@...il.com>,
Richard Weinberger <richard@....at>,
David Woodhouse <dwmw2@...radead.org>,
Cyrille Pitchen <cyrille.pitchen@...el.com>
Subject: Re: [PATCH 28/39] mtd: nand: denali: move multi NAND fixup code to
a helper function
On Wed, 30 Nov 2016 15:09:27 +0900
Masahiro Yamada <yamada.masahiro@...ionext.com> wrote:
> Hi Boris,
>
>
> 2016-11-28 1:24 GMT+09:00 Boris Brezillon <boris.brezillon@...e-electrons.com>:
> > On Sun, 27 Nov 2016 03:06:14 +0900
> > Masahiro Yamada <yamada.masahiro@...ionext.com> wrote:
> >
> >> Collect multi NAND fixups into a helper function instead of
> >> scattering them in denali_init().
> >
> > Can you tell me more about this multi-NAND feature?
> > The core is already able to detect multi-die NAND chips in a generic
> > way,
>
> This is not the case.
>
> > but I fear this is something else, like "put two 8-bits chips on a
> > 16bits bus to emulate a single 16bits chip".
>
> Yes, it is.
>
> (I have never used this controller like that,
> but I am pretty sure of it from reading the code,
> and the Denali User Guide mentions such usage.)
>
>
> Just in case, I will rephrase the comment block as follows in v2:
>
> /*
> * Support for multi device:
> * When the IP configuration is x16 capable and two x8 chips are
> * connected in parallel, DEVICES_CONNECTED should be set to 2.
> * In this case, the core framework knows nothing about this fact,
>  * so we should tell it the _logical_ pagesize and whatever else is necessary.
> */
>
BTW, you should also set the NAND_BUSWIDTH_16 flag in this case.
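
For illustration only (this is just a sketch, not the driver's actual
code; the helper name and the devs_connected parameter are made up
here), the fixup being discussed essentially doubles the geometry
reported to the core and flags the 16-bit bus:

#include <linux/mtd/mtd.h>
#include <linux/mtd/nand.h>

/*
 * Sketch only -- not the driver's actual helper.  When two x8 chips are
 * wired in parallel behind a x16-capable controller, the core sees a
 * single logical chip, so the geometry reported to MTD has to be
 * doubled and the 16-bit bus flag set.
 */
static void multidev_fixup_sketch(struct nand_chip *chip,
				  struct mtd_info *mtd,
				  unsigned int devs_connected)
{
	if (devs_connected != 2)
		return;

	/* one logical page = two physical pages accessed in parallel */
	mtd->writesize <<= 1;
	mtd->oobsize <<= 1;
	mtd->erasesize <<= 1;
	mtd->size <<= 1;

	/* the pair behaves like a single 16-bit wide device */
	chip->options |= NAND_BUSWIDTH_16;
}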
>
>
>
> > If that's a case, and this feature is actually used, then it's a bad
> > idea IMHO.
> > For example, how do you handle the case where one block is bad on a
> > chip but not on the other? And I fear this is not the only problem
> > with this approach :-/.
>
> As you expect, if one block is bad,
> the corresponding block on the other chip cannot be used.
>
Hm, last time I thought about this usage I found other things that
could cause problems, but I can't remember exactly what.
Anyway, if this feature is already used, let's keep it.
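
To make the bad block point concrete, here is a hypothetical check,
not code from the driver, assuming the controller interleaves the two
x8 data streams byte by byte so the factory markers of the two dies
land in the first two bytes of the logical OOB:

#include <linux/types.h>

/*
 * Illustration only -- each physical die keeps its own factory bad
 * block marker, so the logical block is unusable as soon as either
 * die flags it.
 */
static bool multidev_block_is_bad(const u8 *logical_oob)
{
	return logical_oob[0] != 0xff || logical_oob[1] != 0xff;
}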