Message-ID: <CAGETcx-0VboaAeoa+_AqDtrDj6v6ZytFj6pU-FVyAu-pk-hG6A@mail.gmail.com>
Date: Mon, 6 Feb 2023 12:08:28 -0800
From: Saravana Kannan <saravanak@...gle.com>
To: Miquel Raynal <miquel.raynal@...tlin.com>
Cc: Maxim Kiselev <bigunclemax@...il.com>,
Sudeep Holla <sudeep.holla@....com>,
Naresh Kamboju <naresh.kamboju@...aro.org>,
abel.vesa@...aro.org, alexander.stein@...tq-group.com,
andriy.shevchenko@...ux.intel.com, brgl@...ev.pl,
colin.foster@...advantage.com, cristian.marussi@....com,
devicetree@...r.kernel.org, dianders@...omium.org,
djrscally@...il.com, dmitry.baryshkov@...aro.org,
festevam@...il.com, fido_max@...ox.ru, frowand.list@...il.com,
geert+renesas@...der.be, geert@...ux-m68k.org,
gregkh@...uxfoundation.org, heikki.krogerus@...ux.intel.com,
jpb@...nel.org, jstultz@...gle.com, kernel-team@...roid.com,
kernel@...gutronix.de, lenb@...nel.org, linus.walleij@...aro.org,
linux-acpi@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
linux-gpio@...r.kernel.org, linux-imx@....com,
linux-kernel@...r.kernel.org, linux-renesas-soc@...r.kernel.org,
linux@...ck-us.net, lkft@...aro.org, luca.weiss@...rphone.com,
magnus.damm@...il.com, martin.kepplinger@...i.sm, maz@...nel.org,
rafael@...nel.org, robh+dt@...nel.org, s.hauer@...gutronix.de,
sakari.ailus@...ux.intel.com, shawnguo@...nel.org,
tglx@...utronix.de, tony@...mide.com,
Srinivas Kandagatla <srinivas.kandagatla@...aro.org>
Subject: Re: [PATCH v2 00/11] fw_devlink improvements
On Mon, Feb 6, 2023 at 1:39 AM Miquel Raynal <miquel.raynal@...tlin.com> wrote:
>
> Hi Saravana,
>
> + Srinivas, nvmem maintainer
>
> saravanak@...gle.com wrote on Sun, 5 Feb 2023 17:32:57 -0800:
>
> > On Fri, Feb 3, 2023 at 1:39 AM Maxim Kiselev <bigunclemax@...il.com> wrote:
> > >
> > > Fri, Feb 3, 2023 at 09:07, Saravana Kannan <saravanak@...gle.com>:
> > > >
> > > > On Thu, Feb 2, 2023 at 9:36 AM Maxim Kiselev <bigunclemax@...il.com> wrote:
> > > > >
> > > > > Hi Saravana,
> > > > >
> > > > > > Can you try the patch at the end of this email under these
> > > > > > configurations and tell me which ones fail vs pass? I don't need logs
> > > > >
> > > > > I did these tests and here are the results:
> > > >
> > > > Did you hand edit the In-Reply-To: in the header? Because in the
> > > > thread your reply is attached to the wrong email, but the context in
> > > > your email seems to be from the right one.
> > > >
> > > > For example, see how your reply isn't under the email you are replying
> > > > to in this thread overview:
> > > > https://lore.kernel.org/lkml/20230127001141.407071-1-saravanak@google.com/#r
> > > >
> > > > > 1. On top of this series - Does not work
> > > > > 2. Without this series - Works
> > > > > 3. On top of the series with fwnode_dev_initialized() deleted - Does not work
> > > > > 4. Without this series, with fwnode_dev_initialized() deleted - Works
> > > > >
> > > > > So your nvmem/core.c patch helps only when it is applied without the series.
> > > > > But despite the fact that it avoids getting stuck while probing
> > > > > my ethernet device, there is still a regression.
> > > > >
> > > > > When the ethernet module is loaded, it takes a long time to drop the
> > > > > dependency on the nvmem cell with the MAC address.
> > > > >
> > > > > Please look at the kernel logs below.
> > > >
> > > > The kernel logs below really aren't that useful for me in their
> > > > current state. See more below.
> > > >
> > > > ---8<---- <snip> --->8----
> > > >
> > > > > P.S. Your nvmem patch definitely helps to avoid a stuck device probe,
> > > > > but it looks like it is not the best way to solve the problem we
> > > > > discussed in the MTD thread.
> > > > >
> > > > > P.P.S. Also, I don't know why your nvmem-cell patch doesn't help when
> > > > > applied on top of this series. Maybe I missed something.
> > > >
> > > > Yeah, I'm not too sure if the test was done correctly. You also didn't
> > > > answer my question about the dts from my earlier email.
> > > > https://lore.kernel.org/lkml/CAGETcx8FpmbaRm2CCwqt3BRBpgbogwP5gNB+iA5OEtuxWVTNLA@mail.gmail.com/#t
> > > >
> > > > So, can you please retest config 1 with all pr_debug and dev_dbg in
> > > > drivers/base/core.c changed to the _info variants? And then share the
> > > > kernel log from the beginning of boot? Maybe attach it to the email so
> > > > it doesn't get word wrapped by your email client. And please point me
> > > > to the .dts that corresponds to your board. Without that, I can't
> > > > debug much.
> > > >
> > > > Thanks,
> > > > Saravana
> > >
> > > > Did you hand edit the In-Reply-To: in the header? Because in the
> > > > thread your reply is attached to the wrong email, but the context in
> > > > your email seems to be from the right one.
> > >
> > > Sorry for that, it seems like I accidentally deleted it.
> > >
> > > > So, can you please retest config 1 with all pr_debug and dev_dbg in
> > > > drivers/base/core.c changed to the _info variants? And then share the
> > > > kernel log from the beginning of boot? Maybe attach it to the email so
> > > > it doesn't get word wrapped by your email client. And please point me
> > > > to the .dts that corresponds to your board. Without that, I can't
> > > > debug much.
> > >
> > > Ok, I retested config 1 with all _debug logs changed to _info. I
> > > attached the kernel log and the dts file to this email.
> >
> > Ah, so your device is not supported/present upstream? Even though it's
> > not upstream, I'll help fix this because it should fix what I believe
> > are unreported issues upstream.
> >
> > Ok I know why configs 1 - 4 behaved the way they did and why my test
> > patch didn't help.
> >
> > After staring at mtd/nvmem code for a few hours I think mtd/nvmem
> > interaction is kind of a mess.
>
> nvmem is a recent subsystem but mtd carries a lot of legacy stuff we
> cannot really re-wire without breaking users, so nvmem on top of mtd
> of course inherits the fragile designs in place.
Thanks for the context. Yeah, I figured. That's why I explicitly
limited my comment to "interaction". That said, I'd love to see the MTD
parsers all converted to proper drivers that probe. MTD is
essentially reimplementing the driver matching logic. I think it can be
cleaned up to use proper drivers and still not break backward
compatibility. Not saying it'll be trivial, but it should be possible.
Ironically, MTD uses mtd_class but has real drivers that work on the
device (compared to nvmem_bus below).
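To make that concrete, here's roughly the shape I have in mind: an
entirely untested sketch, with made-up names, of a partition parser
registered as a normal platform driver instead of going through MTD's
private parser matching:

	/* Hypothetical sketch, not real code. */
	static const struct of_device_id parts_parser_of_match[] = {
		{ .compatible = "fixed-partitions" },
		{ /* sentinel */ }
	};

	static int parts_parser_probe(struct platform_device *pdev)
	{
		/*
		 * Walk pdev->dev.of_node's children and register the
		 * partitions with MTD, the way the parser callback
		 * does today.
		 */
		return 0;
	}

	static struct platform_driver parts_parser_driver = {
		.probe = parts_parser_probe,
		.driver = {
			.name = "mtd-parts-parser",
			.of_match_table = parts_parser_of_match,
		},
	};
	module_platform_driver(parts_parser_driver);

That way the driver core does the matching instead of MTD
reimplementing it.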
> > mtd core creates "partition" platform
> > devices (including for nvmem-cells) that are probed by drivers in
> > drivers/nvmem. However, there's no driver for the "nvmem-cells" partition
> > platform device. Meanwhile, the nvmem core creates an nvmem_device when
> > nvmem_register() is called by MTD or by these partition platform devices
> > created by MTD. But these nvmem_devices are added to a nvmem_bus, and
> > the bus has no means to even register a driver (it should really be a
> > nvmem_class and not nvmem_bus).
>
> Srinivas, do you think we could change this?
Yeah, this part gets a bit tricky. It depends on whether the sysfs
files for nvmem devices are considered an ABI. Changing from bus to
class would change the sysfs path for nvmem devices from
/sys/bus/nvmem to /sys/class/nvmem.
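The code change itself should be small. An untested sketch of what it
might look like in drivers/nvmem/core.c:

	/* Today: a bus_type that no driver ever binds to. */
	static struct bus_type nvmem_bus_type = {
		.name = "nvmem",
	};

	/* Sketch: a class instead. */
	static struct class nvmem_class = {
		.name = "nvmem",
	};

	/* In nvmem_register(), instead of pointing dev.bus at
	 * nvmem_bus_type: */
	nvmem->dev.class = &nvmem_class;

The real question is whether anything in userspace hardcodes the
/sys/bus/nvmem path.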
> > And the nvmem_device sometimes points
> > to the DT node of the MTD device, sometimes to that of the partition
> > platform device, and sometimes to no DT node at all.
>
> I guess this comes from the fact that this is not strongly defined in
> mtd and depends on the situation (not mentioning 20 years of history
> there as well). "mtd" is a bit inconsistent on what it means. Older
> designs mixed controllers, ECC engines (when relevant) and memories,
> even though these three components are completely separate. Hence
> sometimes the mtd device ends up being the top level controller,
> sometimes it's just one partition...
>
> But I'm surprised not all of them point to a DT node. Could you show us
> an example? Because that might likely be unexpected (or perhaps I am
> missing something).
Well, the logic that sets the DT node for nvmem_device is like so:
	if (config->of_node)
		nvmem->dev.of_node = config->of_node;
	else if (!config->no_of_node)
		nvmem->dev.of_node = config->dev->of_node;
So there's definitely a path (where both conditions are false) where
the DT node will not get set. I don't know if that path is reachable
with the existing users of nvmem_register(), but it's definitely
possible.
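For example, a (hypothetical) caller like this would leave the
nvmem_device without a DT node even though its parent device has one:

	struct nvmem_config config = {
		.dev        = &pdev->dev, /* pdev->dev.of_node is set */
		/* .of_node left NULL, so the first branch is false */
		.no_of_node = true,       /* so the second branch is skipped */
	};
	nvmem = nvmem_register(&config); /* nvmem->dev.of_node stays NULL */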
> > So it's a mess of multiple devices pointing to the same DT node, with
> > no clear way to identify which ones will point to a DT node, which
> > ones will probe, and which ones won't. In the future, we shouldn't
> > allow adding new compatible strings for partitions for which we don't
> > plan on adding nvmem drivers.
> >
> > Can you give the patch at the end of the email a shot? It should fix
> > the issue both with and without this series. It just avoids
> > this whole mess by not creating useless platform devices for
> > nvmem-cells compatible DT nodes.
>
> Thanks a lot for your help.
No problem. I want fw_devlink to work for everyone.
> >
> > Thanks,
> > Saravana
> >
> > diff --git a/drivers/mtd/mtdpart.c b/drivers/mtd/mtdpart.c
> > index d442fa94c872..88a213f4d651 100644
> > --- a/drivers/mtd/mtdpart.c
> > +++ b/drivers/mtd/mtdpart.c
> > @@ -577,6 +577,7 @@ static int mtd_part_of_parse(struct mtd_info *master,
> >  {
> >  	struct mtd_part_parser *parser;
> >  	struct device_node *np;
> > +	struct device_node *child;
> >  	struct property *prop;
> >  	struct device *dev;
> >  	const char *compat;
> > @@ -594,6 +595,10 @@ static int mtd_part_of_parse(struct mtd_info *master,
> >  	else
> >  		np = of_get_child_by_name(np, "partitions");
> >
> > +	for_each_child_of_node(np, child)
> > +		if (of_device_is_compatible(child, "nvmem-cells"))
> > +			of_node_set_flag(child, OF_POPULATED);
>
> What about adding a comment in the final patch explaining why we need
> that? Otherwise it's a little bit obscure.
This wasn't meant to be reviewed :) Just a quick patch to make sure
I'm going down the right path. Once Maxim confirms, I was going to roll
this into a proper patch.
But point noted. Will add a comment.
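Something like this, maybe (same hunk, with the rationale spelled out):

	/*
	 * No driver ever probes the platform devices created for
	 * "nvmem-cells" partitions, so mark these nodes as populated
	 * to avoid creating useless platform devices that fw_devlink
	 * consumers would block on forever.
	 */
	for_each_child_of_node(np, child)
		if (of_device_is_compatible(child, "nvmem-cells"))
			of_node_set_flag(child, OF_POPULATED);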
Thanks,
Saravana