Message-ID: <5d1eaf2e-0b8d-3f09-a351-709a95a265eb@linaro.org>
Date: Sun, 10 Jun 2018 14:28:12 +0100
From: Srinivas Kandagatla <srinivas.kandagatla@...aro.org>
To: Alban <albeu@...e.fr>
Cc: linux-kernel@...r.kernel.org, Rob Herring <robh+dt@...nel.org>,
Mark Rutland <mark.rutland@....com>,
David Woodhouse <dwmw2@...radead.org>,
Brian Norris <computersforpeace@...il.com>,
Boris Brezillon <boris.brezillon@...e-electrons.com>,
Marek Vasut <marek.vasut@...il.com>,
Richard Weinberger <richard@....at>,
Cyrille Pitchen <cyrille.pitchen@...ev4u.fr>,
devicetree@...r.kernel.org, linux-mtd@...ts.infradead.org
Subject: Re: [PATCH v3 1/3] nvmem: Update the OF binding to use a subnode for
the cells list
On 10/06/18 12:36, Alban wrote:
> On Sun, 10 Jun 2018 11:32:36 +0100
> Srinivas Kandagatla <srinivas.kandagatla@...aro.org> wrote:
>
>> On 08/06/18 18:07, Alban wrote:
>>> On Fri, 8 Jun 2018 12:34:12 +0100
>>> Srinivas Kandagatla <srinivas.kandagatla@...aro.org> wrote:
>>>
>> ...
>>>
>>> I looked into this. It would work fine for the cells but not so nicely
>>> for the nvmem device API. The phandle for the nvmem device would have
>>> to reference the node passed here and not the real device. We would end
>>> up with a DT like this:
>>>
>>> flash@0 {
>>> 	compatible = "mtd";
>>> 	...
>>> 	nvmem_dev: nvmem-cells {
>>> 		compatible = "nvmem-cells";
>>> 		...
>>> 	};
>>> };
>>>
>>> other-device@10 {
>>> 	...
>>> 	nvmem = <&nvmem_dev>;
>>> };
>>>
>>> Now if there is no cell defined, we have this empty child node that
>>> makes very little sense; it is just there to accommodate the nvmem API.
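
For reference, here is a minimal consumer-side sketch of how that phandle
would be resolved through the nvmem consumer API (the driver, the "config"
lookup name and the offset are made up for illustration, and it assumes the
consumer node also carries a matching nvmem-names entry):

#include <linux/err.h>
#include <linux/platform_device.h>
#include <linux/nvmem-consumer.h>

/*
 * Illustrative consumer for the "other-device@10" node above: the
 * nvmem = <&nvmem_dev> phandle points at the nvmem-cells subnode,
 * not at flash@0 itself, which is the indirection being discussed.
 */
static int other_device_probe(struct platform_device *pdev)
{
	struct nvmem_device *nvmem;
	u8 buf[6];
	int ret;

	/* Lookup by name assumes nvmem-names = "config" in the consumer node. */
	nvmem = devm_nvmem_device_get(&pdev->dev, "config");
	if (IS_ERR(nvmem))
		return PTR_ERR(nvmem);

	/* Raw read through the nvmem device API; the offset is a placeholder. */
	ret = nvmem_device_read(nvmem, 0, sizeof(buf), buf);
	if (ret < 0)
		return ret;

	return 0;
}
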
>>>
>> NO. This just looks fine!
>> nvmem-cells is the nvmem provider node, without which you would not
>> have any provider instance.
>> All this looks as expected!
>> I'm not sure what the problem is here!
>
> The problem is that DT should represent the hardware, not the OS API.
Exactly!! flash/mtd has an nvmem provider, which should be represented in
the DT.
There is no change on the DT side between your original patch and the new
approach. You are still going to have the same subnode, aren't you?
AFAIU, the new approach makes it explicit that there is an nvmem provider
in the DT.
...
> What should be represented is that other drivers can access data stored
> on this device. It is my understanding that this wouldn't be an
> acceptable binding, as the nvmem provider node would only exist because
> of how the NVMEM API currently works; a correct binding would just
> directly reference the storage device without this extra node.
>
...
>> Having a subnode still sounds very fragile to me,
>> and this could be a case very specific to the MTD provider. We might
>> have instances where this would be a sub-sub node of the original
>> provider for other providers. Also, I do not want to bring
>> provider-specific layouts into the nvmem bindings.
>>
>> I cannot make myself any clearer than this; it's going to be a NAK from
>> my side for the above reasons!
>
> I fully understand your concerns, but I think they are overblown. First, I
> highly doubt that more layouts will ever be needed; using a compatible
> string pretty much guarantees that we won't clash with another binding.
> Furthermore, even if you consider this extension "MTD specific", the
> amount of code is very small, non-intrusive and only runs once at
> registration time. I would understand if we were talking about pages of
> code nesting in various places, but not really when it is a single small
> if block with an obvious condition. And finally, I don't see this as MTD
> specific, as any other device could use this feature without any code
> change.
>
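
Just to make that argument concrete, here is a rough sketch of the kind of
if block being described, assuming the check is done on the provider's node
at registration time (the helper name is made up; this is not the actual
patch):

#include <linux/of.h>

/*
 * Illustrative sketch only: pick the node the nvmem core should parse
 * cells from. If the provider has a child node with
 * compatible = "nvmem-cells", use that; otherwise keep the provider's
 * own node, so existing bindings keep working unchanged.
 */
static struct device_node *nvmem_cells_node(struct device_node *dev_node)
{
	struct device_node *child;

	for_each_available_child_of_node(dev_node, child)
		if (of_device_is_compatible(child, "nvmem-cells"))
			return child;	/* iterator already holds a reference */

	return of_node_get(dev_node);
}
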
>> Also, the patch I shared should give enough flexibility to a wide range
>> of providers which have different child node layouts, without touching
>> the nvmem bindings. If it works, please use it.
>
> It works from a code POV, but it breaks the basic guidelines of DT
> bindings. As I want to have this done, I'm going to do a patch as you
> want, but I see a high chance that the binding is going to be rejected
> by the DT maintainers and we'll be back here again.
If you think the subnode is going to be a problem from the MTD point of
view, then that is worth discussing.
Adding a subnode to the nvmem bindings is not going to help or make the
situation any better.
Let's see how this goes!
thanks,
srini
>
> Alban
>