Message-ID: <a297079e-2dc9-d311-5415-a58332e7a711@linaro.org>
Date: Tue, 29 Nov 2022 15:34:26 +0100
From: Krzysztof Kozlowski <krzysztof.kozlowski@...aro.org>
To: Hector Martin <marcan@...can.st>,
Ulf Hansson <ulf.hansson@...aro.org>
Cc: "Rafael J. Wysocki" <rafael@...nel.org>,
Viresh Kumar <viresh.kumar@...aro.org>,
Matthias Brugger <matthias.bgg@...il.com>,
Sven Peter <sven@...npeter.dev>,
Alyssa Rosenzweig <alyssa@...enzweig.io>,
Rob Herring <robh+dt@...nel.org>,
Krzysztof Kozlowski <krzysztof.kozlowski+dt@...aro.org>,
Stephen Boyd <sboyd@...nel.org>, Marc Zyngier <maz@...nel.org>,
Mark Kettenis <mark.kettenis@...all.nl>, asahi@...ts.linux.dev,
linux-arm-kernel@...ts.infradead.org, linux-pm@...r.kernel.org,
devicetree@...r.kernel.org, linux-kernel@...r.kernel.org,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH v5 2/4] dt-bindings: cpufreq: apple,soc-cpufreq: Add
binding for Apple SoC cpufreq
On 29/11/2022 15:00, Hector Martin wrote:
> On 29/11/2022 20.36, Ulf Hansson wrote:
>> On Mon, 28 Nov 2022 at 15:29, Hector Martin <marcan@...can.st> wrote:
>>> +examples:
>>> + - |
>>> + // This example shows a single CPU per domain and 2 domains,
>>> + // with two p-states per domain.
>>> + // Shipping hardware has 2-4 CPUs per domain and 2-6 domains.
>>> + cpus {
>>> + #address-cells = <2>;
>>> + #size-cells = <0>;
>>> +
>>> + cpu@0 {
>>> + compatible = "apple,icestorm";
>>> + device_type = "cpu";
>>> + reg = <0x0 0x0>;
>>> + operating-points-v2 = <&ecluster_opp>;
>>
>> To me, it looks like the operating-points-v2 phandle better belongs in
>> the performance-domains provider node. I mean, aren't the OPPs really a
>> description of the performance-domain provider?
>>
>> That said, I suggest we try to extend the generic performance-domain
>> binding [1] with an "operating-points-v2". In that way, we should
>> instead be able to reference it from this binding.
>>
>> In fact, that would be very similar to what already exists for the
>> generic power-domain binding [2]. I think it would be rather nice to
>> follow a similar pattern for the performance-domain binding.
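
(Side note, for illustration only: a minimal sketch of what moving the
OPP reference to the provider could look like. The provider label, the
compatible string, the unit address and the reg value below are
placeholders I made up, not what the current binding defines.)

    cpu@0 {
        compatible = "apple,icestorm";
        device_type = "cpu";
        reg = <0x0 0x0>;
        /* OPP table reference dropped here... */
        performance-domains = <&cpufreq_e>;
    };

    /* ...and carried by the performance-domain provider instead */
    cpufreq_e: performance-controller@200000 {
        compatible = "apple,cluster-cpufreq"; /* placeholder */
        reg = <0x0 0x200000 0x0 0x1000>;
        #performance-domain-cells = <0>;
        operating-points-v2 = <&ecluster_opp>;
    };
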
>
> While I agree with the technical rationale and the proposed approach
> being better in principle...
>
> We're at v5 of bikeshedding this trivial driver's DT binding, and the
> comment could've been made at v3. To quote IRC just now:
>
>> this way the machines will be obsolete before things are fully upstreamed
>
> I think it's long overdue for the kernel community to take a deep look
> at itself and its development and review process, because it is quite
> honestly insane how pathologically inefficient it is compared to,
> basically, every other large and healthy open source project of similar
> or even greater impact and scope.
>
> Cc Linus, because this is for your Mac and I assume you care. We're at
> v5 here for this silly driver. Meanwhile, rmk recently threw in the
> towel on upstreaming macsmc for us. We're trying, and I'll keep trying
> because
> I actually get paid (by very generous donors) to do this, but if I
> weren't I'd have given up a long time ago. And while I won't give up, I
> can't deny this situation affects my morale and willingness to keep
> pushing on upstreaming on a regular basis.
>
> Meanwhile, OpenBSD has been *shipping* full M1 support for a while now
> in official release images (and since Linux is the source of truth for
> DT bindings, every time we re-bikeshed it we break their users because
> they, quite reasonably, aren't interested in waiting for us Linux
> slowpokes to figure it out first).
>
> Please, let's introspect about this for a moment. Something is deeply
> broken if people with 25+ years as an arch maintainer can't get a
If an arch maintainer sends patches which do not build (make
dt_binding_check), then what exactly do you expect? Accept them just
because of 25+ years of experience or maintainer status? Should we have
different processes - for beginners the code must compile, but for
experienced people it does not have to build, because otherwise they
would get discouraged?
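
(For what it's worth, this can be checked locally before posting; the
schema filename below is just an example - substitute whatever file the
patch actually adds:)

    make dt_binding_check DT_SCHEMA_FILES=Documentation/devicetree/bindings/cpufreq/apple,cluster-cpufreq.yaml
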
> 700-line mfd driver upstreamed before giving up. I don't know how we
> expect to ever get a Rust GPU driver merged if it takes 6+ versions to
> upstream the world's easiest cpufreq hardware.
While I understand your point about bikeshedding, I think your
previous bindings got pretty nice and fast reviews, so using a
non-building patch as the example is a poor choice.
Best regards,
Krzysztof