Message-ID: <ea6f3dd7-0732-4de9-8bf1-e88a45ad6ac2@ti.com>
Date: Wed, 10 Dec 2025 17:04:09 +0530
From: Santhosh Kumar K <s-k6@...com>
To: Miquel Raynal <miquel.raynal@...tlin.com>, Michael Walle
<mwalle@...nel.org>
CC: Pratyush Yadav <pratyush@...nel.org>, <richard@....at>, <vigneshr@...com>,
<broonie@...nel.org>, <tudor.ambarus@...aro.org>, <p-mantena@...com>,
<linux-spi@...r.kernel.org>, <linux-mtd@...ts.infradead.org>,
<linux-kernel@...r.kernel.org>, <a-dutta@...com>, <u-kumar1@...com>,
<praneeth@...com>, <s-k6@...com>
Subject: Re: [RFC PATCH 01/10] spi: spi-mem: Introduce support for tuning
controller

Hello Michael and Miquel,

On 03/12/25 15:20, Miquel Raynal wrote:
>
>>>> I think we should start with the requirement to have the pattern flashed
>>>> already and figure out how SPI NOR or SPI NAND can discover that
>>>> (perhaps via NVMEM?).
>>
>> But we should also keep in mind that certain flashes might return
>> tuning data during the dummy cycles. I.e. the PHY could probably be
>> tuned on each read and there is no need for any pre-programmed
>> pattern.
>>
>> I'm not saying it should be implemented, but the current
>> implementation should be flexible enough that it will be easy to add
>> that later.
>
> Conceptually, yes, but in practice, I know of no controller capable of
> using just a few cycles on every transfer to calibrate itself
> automatically and reaching such an optimized speed state as the cadence
> controller is capable of ATM.
>
> Despite the end result being close, I would still consider this other
> way to optimize the I/Os somewhat orthogonal. If someone has some
> knowledge to share about the training patterns sent during the dummy
> cycles, I am all ears though.
>
>>> For SPI NOR, we do not have an equivalent "write-to-cache" available, so
>>> we still require a pre-flashed pattern region. At the moment this is
>>> provided via a dedicated "phypattern" partition, and its offset is
>>> obtained through the of_get_* APIs.
>>>
>>> Regarding ways to locate the partition:
>>>
>>> 1. Using NVMEM:
>>> a. Exposing the phypattern partition as an NVMEM cell and issuing an
>>> NVMEM read during tuning does not work reliably, because NVMEM
>>> ends up calling into the MTD read path and we cannot control which
>>> read_op variant is used for the read.
>>>
>>> b. Advertising the partition as an NVMEM cell and using NVMEM only
>>> to fetch the offset is not possible either. NVMEM abstracts the
>>> private data, including partition offsets, so we cannot retrieve
>>> the offset that way.
>>
>> You can probably extend the NVMEM API in some way - or switch the
>> read_op on the fly.
>>
>>> 2. Using of_get_* APIs:
>>> Using the standard OF helpers to locate the phypattern partition
>>> and retrieve its offset is both reliable and straightforward, and
>>> is the approach currently implemented in v2.
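
(For reference, the v2 lookup is roughly along the lines below. Only a
sketch: the helper name is made up, and the "phypattern" label plus the
fixed-partitions layout are simply what we use today, not a binding
proposal.)

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_address.h>
#include <linux/string.h>

/*
 * Walk the fixed-partitions container of the flash node and return the
 * offset/size of the partition labelled "phypattern".
 */
static int spi_mem_find_phy_pattern(struct device_node *flash_np,
                                    u64 *offset, u64 *size)
{
        struct device_node *parts, *child;
        const char *label;

        parts = of_get_child_by_name(flash_np, "partitions");
        if (!parts)
                return -ENOENT;

        for_each_child_of_node(parts, child) {
                if (of_property_read_string(child, "label", &label))
                        continue;
                if (strcmp(label, "phypattern"))
                        continue;

                /* "reg" = <offset size> of the pre-flashed pattern */
                if (of_property_read_reg(child, 0, offset, size)) {
                        of_node_put(child);
                        of_node_put(parts);
                        return -EINVAL;
                }

                of_node_put(child);
                of_node_put(parts);
                return 0;
        }

        of_node_put(parts);
        return -ENOENT;
}
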
>>
>> I don't like that hardcoded partition name which is basically
>> becoming an ABI then.
>>
>> At least we'd need some kind of phandle to the partition inside the
>> controller node (and get the ACK from the DT maintainers).
>
> Yes, agreed, this is controller specific: if we need to use an of_ API
> (which is still not needed for SPI NANDs, only for tuning the SPI NOR
> read path), it should not just be a hardcoded partition name but a
> phandle in the controller node.
Yes, using a phandle is a valid idea to avoid relying on a hard-coded
name. However, a phandle in the controller node does not work well when
multiple chip selects are involved. The controller is not tied to a
single flash device: a single SPI controller may host both a NOR and a
NAND flash, for example. In that case only the NOR would need the
phandle while the NAND would not, which makes a single phandle in the
controller node unsuitable. Another example is a controller hosting two
NOR flashes, where both would then need their own phandle reference.

An alternative would be to associate the phandle with the flash device
node itself rather than with the controller node; a rough sketch of what
I have in mind follows. Let me know your thoughts on this approach.
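
The "phy-pattern" property name and the helper below are purely
illustrative (nothing here is a proposed binding or API), but the idea
would be roughly:

#include <linux/errno.h>
#include <linux/of.h>
#include <linux/of_address.h>

/*
 * Hypothetical per-flash description (sketch only):
 *
 *   flash@0 {
 *           compatible = "jedec,spi-nor";
 *           ...
 *           phy-pattern = <&phypattern_part>;
 *   };
 *
 * where &phypattern_part points at the pre-flashed pattern partition.
 */
static int spi_mem_get_phy_pattern_region(struct device_node *flash_np,
                                          u64 *offset, u64 *size)
{
        struct device_node *part;
        int ret;

        /*
         * Resolve the per-flash phandle; its absence simply means this
         * flash does not provide a tuning pattern region.
         */
        part = of_parse_phandle(flash_np, "phy-pattern", 0);
        if (!part)
                return -ENOENT;

        /* "reg" of the partition node gives <offset size> on the flash */
        ret = of_property_read_reg(part, 0, offset, size);
        of_node_put(part);

        return ret;
}

That way each flash that needs tuning describes its own pattern region,
and a flash that does not need it (e.g. a SPI NAND on another chip
select) simply omits the property.
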
Thanks,
Santhosh.
>
> Thanks,
> Miquèl