Message-ID: <20211025155429.yac6ig6pmdcg424i@gilmour>
Date: Mon, 25 Oct 2021 17:54:29 +0200
From: Maxime Ripard <maxime@...no.tech>
To: Samuel Holland <samuel@...lland.org>
Cc: Chen-Yu Tsai <wens@...e.org>,
Jernej Skrabec <jernej.skrabec@...il.com>,
Rob Herring <robh+dt@...nel.org>,
Michael Turquette <mturquette@...libre.com>,
Stephen Boyd <sboyd@...nel.org>, devicetree@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org, linux-clk@...r.kernel.org,
linux-sunxi@...ts.linux.dev, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/7] clk: sunxi-ng: Add a RTC CCU driver
On Tue, Sep 28, 2021 at 10:54:26PM -0500, Samuel Holland wrote:
> Hi Maxime,
>
> Thanks for your reply.
>
> On 9/28/21 4:06 AM, Maxime Ripard wrote:
> > On Tue, Sep 28, 2021 at 02:46:39AM -0500, Samuel Holland wrote:
> >> On 9/9/21 3:45 AM, Maxime Ripard wrote:
> >>> On Fri, Sep 03, 2021 at 10:21:13AM -0500, Samuel Holland wrote:
> >>>> On 9/3/21 9:50 AM, Maxime Ripard wrote:
> >>>>> And since we can register all those clocks at device probe time, we
> >>>>> don't really need to split the driver in two (and especially in two
> >>>>> different places). The only obstacle to this after your previous series
> >>>>> is that the of_sunxi_ccu_probe / devm_sunxi_ccu_probe functions aren't
> >>>>> public, but that can easily be fixed by moving their declarations to
> >>>>> include/linux/clk/sunxi-ng.h
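
For reference, the prototypes in question are roughly these (a sketch
from memory; the exact signatures should be double-checked against
drivers/clk/sunxi-ng/ccu_common.h):

int devm_sunxi_ccu_probe(struct device *dev, void __iomem *reg,
			 const struct sunxi_ccu_desc *desc);
void of_sunxi_ccu_probe(struct device_node *node, void __iomem *reg,
			const struct sunxi_ccu_desc *desc);

so a caller only needs a struct device (or an OF node), the mapped
registers and a sunxi_ccu_desc describing the clocks.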
> >>>>
> >>>> Where are you thinking the clock definitions would go? We don't export
> >>>> any of those structures (ccu_mux, ccu_common) or macros
> >>>> (SUNXI_CCU_GATE_DATA) in a public header either.
> >>>
> >>> Ah, right...
> >>>
> >>>> Would you want to export those? That seems like a lot of churn. Or would
> >>>> we put the CCU descriptions in drivers/clk/sunxi-ng and export a
> >>>> function that the RTC driver can call? (Or some other idea?)
> >>>
> >>> I guess we could export it. There are some fairly big headers in
> >>> include/linux/clk already (tegra and ti), it's not uAPI, and we do have
> >>> reasons to do so, so I guess it's fine.
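
As a rough idea of what would have to move into
include/linux/clk/sunxi-ng.h, the core structure is ccu_common (a sketch
from memory, to be checked against drivers/clk/sunxi-ng/ccu_common.h):

struct ccu_common {
	void __iomem	*base;
	u16		reg;
	u16		lock_reg;
	u32		prediv;

	unsigned long	features;
	spinlock_t	*lock;
	struct clk_hw	hw;
};

plus the clock-type wrappers built around it (ccu_gate, ccu_mux, ...)
and their SUNXI_CCU_* helper macros, which is where most of the churn
would be.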
> >>>
> >>> I'd like to avoid having two drivers for the same device if possible,
> >>> especially in two separate places. This creates some confusion since the
> >>> general expectation is that there's only one driver per device. There's
> >>> also the fact that this could lead to subtle bugs since the probe order
> >>> is the link order (or module loading).
> >>
> >> I don't think there can be two "struct device"s for a single OF node.
> >
> > That's not what I meant; there's indeed a single of_node for a single
> > struct device. If we dig a bit into the core framework, the most likely
> > scenario is that we would register both the RTC and clock driver at
> > module_init, and with the device already created with its of_node set
> > during the initial DT parsing.
> >
> > We register our platform driver using module_platform_driver, which
> > expands to calling driver_register() at module_init(), setting the
> > driver bus to the platform_bus in the process (in
> > __platform_driver_register()).
> >
> > After some sanity check, driver_register() calls bus_add_driver(), which
> > will call driver_attach() if drivers_autoprobe is set (which is the
> > default, set into bus_register()).
> >
> > driver_attach() will, for each device on the platform bus, call
> > __driver_attach(). If there's a match between that device and our driver
> > (which is evaluated by platform_match() in our case), we'll call our
> > driver probe with that device through driver_probe_device(),
> > __driver_probe_device() and finally really_probe().
> >
> > However, at no point in time is there any check of whether that device
> > has already been bound to a driver, nor is a new device created for
> > each driver.
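
To make the starting point of that flow concrete, module_platform_driver()
boils down to something like this (a simplified sketch, not the literal
macro expansion, and the driver name is just illustrative):

static int __init sun6i_rtc_ccu_driver_init(void)
{
	/*
	 * __platform_driver_register() sets .driver.bus to
	 * &platform_bus_type and then calls driver_register().
	 */
	return platform_driver_register(&sun6i_rtc_ccu_driver);
}
module_init(sun6i_rtc_ccu_driver_init);

static void __exit sun6i_rtc_ccu_driver_exit(void)
{
	platform_driver_unregister(&sun6i_rtc_ccu_driver);
}
module_exit(sun6i_rtc_ccu_driver_exit);

Everything after that (bus_add_driver(), driver_attach(), really_probe())
happens in the driver core, outside of our control.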
>
> I would expect this to hit the:
>
> if (dev->driver)
> return -EBUSY;
>
> in __driver_probe_device(), or fail the "if (!dev->driver)" check in
> __driver_attach() for the async case, once the first driver is bound.
Hmmm, it might. I know we "leveraged" this behaviour some time ago for
another platform, but indeed it might not work that way anymore.
> > So this means that, if you have two drivers that match the
> > same device (like our clock and RTC drivers), both probe functions will
> > be called with the same device, and the probe order will be defined
> > by the link order. Worse, they would share the same driver_data, with
> > neither driver being aware of the other. This is incredibly fragile,
> > and hard to notice since it goes against the usual expectations.
> >
> >> So if the CCU part is in drivers/clk/sunxi-ng, the CCU "probe"
> >> function would have to be called from the RTC driver.
> >
> > No, it would be called by the core directly if there's a compatible to
> > match.
> >
> >> Since there has to be cooperation anyway, I don't think there would be
> >> any ordering problems.
> >
> > My initial point was that, with a direct function call, it's both
> > deterministic and obvious.
>
> I believe I did what you are suggesting for v2. From patch 7:
>
> --- a/drivers/rtc/rtc-sun6i.c
> +++ b/drivers/rtc/rtc-sun6i.c
> @@ -683,6 +684,10 @@ static int sun6i_rtc_probe(struct platform_device *pdev)
>  	chip->base = devm_platform_ioremap_resource(pdev, 0);
>  	if (IS_ERR(chip->base))
>  		return PTR_ERR(chip->base);
> +
> +	ret = sun6i_rtc_ccu_probe(&pdev->dev, chip->base);
> +	if (ret)
> +		return ret;
>  }
Ah, sorry, I entirely missed it. Yes, that's totally fine by me, then. I'd
prefer to have the spinlock passed as an argument as well, but that can be
done in a follow-up patch.
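
Something like this is what I have in mind (just a sketch, the exact
prototype is up to you, and I'm assuming the existing chip->lock is what
stays shared):

int sun6i_rtc_ccu_probe(struct device *dev, void __iomem *reg,
			spinlock_t *lock);

	ret = sun6i_rtc_ccu_probe(&pdev->dev, chip->base, &chip->lock);
	if (ret)
		return ret;

That way the clock part doesn't have to reach into the RTC driver's
private structure to find the lock.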
>  	platform_set_drvdata(pdev, chip);
>
> >>> And synchronizing access to registers between those two drivers will be
> >>> hard, while we could just share the same spin lock between the RTC and
> >>> clock drivers if they are instantiated in the same place.
> >>
> >> While the RTC driver currently shares a spinlock between the clock part
> >> and the RTC part, there isn't actually any overlap in register usage
> >> between the two. So there doesn't need to be any synchronization.
> >
> > I know, but this was more of a social problem than a technical one. Each
> > contributor and reviewer in the future will have to know or remember
> > that it's there, and make sure that it's still the case after any change
> > they make or review.
> >
> > This is again a fairly fragile assumption.
>
> Yeah, I agree that having a lock that is only sometimes safe to use with
> certain registers is quite fragile.
>
> Would splitting the spinlock in rtc-sun6i.c into "losc_lock" (for the
> clock provider) and "alarm_lock" (for the RTC driver) make this
> distinction clear enough?
>
> Eventually, I want to split up the struct between the clock provider and
> RTC driver so it's clear which members belong to whom, and there's no
> ugly global pointer use. Maybe I should do this first?
Yeah, that sounds like a good plan.
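
For the split itself, I'm picturing something along these lines (purely
a sketch, the exact fields and names are up to you):

struct sun6i_rtc_clk {
	void __iomem	*base;
	spinlock_t	losc_lock;	/* only the LOSC control registers */
	struct clk_hw	hw;
};

struct sun6i_rtc_dev {
	struct rtc_device	*rtc;
	void __iomem		*base;
	int			irq;
	spinlock_t		alarm_lock;	/* only the alarm registers */
};

with each side only ever touching its own registers under its own lock.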
Thanks!
Maxime