Message-ID: <Y5HSQU1aCGpOZLiJ@nanopsycho>
Date:   Thu, 8 Dec 2022 13:02:09 +0100
From:   Jiri Pirko <jiri@...nulli.us>
To:     Jakub Kicinski <kuba@...nel.org>
Cc:     "Kubalewski, Arkadiusz" <arkadiusz.kubalewski@...el.com>,
        Vadim Fedorenko <vfedorenko@...ek.ru>,
        Jonathan Lemon <jonathan.lemon@...il.com>,
        Paolo Abeni <pabeni@...hat.com>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        "linux-arm-kernel@...ts.infradead.org" 
        <linux-arm-kernel@...ts.infradead.org>,
        "linux-clk@...r.kernel.org" <linux-clk@...r.kernel.org>
Subject: Re: [RFC PATCH v4 0/4] Create common DPLL/clock configuration API

Wed, Dec 07, 2022 at 06:19:46PM CET, kuba@...nel.org wrote:
>On Wed, 7 Dec 2022 15:51:42 +0100 Jiri Pirko wrote:
>> Wed, Dec 07, 2022 at 03:47:40AM CET, kuba@...nel.org wrote:
>> >On Fri, 2 Dec 2022 17:12:06 +0100 Jiri Pirko wrote:  
>> >> Yep, you have the knowledge of sharing inside the driver, so you should
>> >> do it there. For multiple instances, use in-driver notifier for example.  
>> >
>> >No, complexity in the drivers is not a good idea. The core should cover
>> >the complexity and let the drivers be simple.  
>> 
>> Really, even when only one driver actually consumes the complexity?
>> I understand having "libs" to do common functionality for drivers,
>> even when there is only one. But in this case, I don't see any benefit.
>
>In the same email thread you admit that mlx5 has multiple devlink
>instances for the same ASIC and refuse to try to prevent similar
>situation happening in the new subsystem.

I don't understand your point. In CX there is 1 clock for 2 pci PFs. I
plan to have 1 dpll instance for the shared clock.

But how is what you write relevant to the discussion? We are talking
about:
a) 1 pin in 2 dpll instances
What I understand you to be saying here is that you want to prevent:
b) 2 dpll instances for 1 clock
That's apples and oranges. Am I missing something?

I'm totally against b) but that is not what we discuss here, correct?
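To make the a)/b) distinction concrete, here is a minimal sketch of the two models. All names are illustrative assumptions; the real dpll subsystem is kernel C with a netlink UAPI, not this Python:

```python
# Hypothetical data model, not the actual dpll API.
# It keeps b) impossible by construction (one dpll instance per clock)
# while allowing a), the case under discussion: one pin registered
# with two dpll instances.

class Dpll:
    def __init__(self, clock_id):
        self.clock_id = clock_id   # exactly one dpll instance per clock
        self.pins = []

    def register_pin(self, pin):
        self.pins.append(pin)
        pin.dplls.append(self)

class Pin:
    def __init__(self, name):
        self.name = name
        self.dplls = []            # a pin may belong to several dplls

# Two dplls (each backed by its own clock) share one physical pin.
dpll_a = Dpll(clock_id=1)
dpll_b = Dpll(clock_id=2)
shared_pin = Pin("recovered-clk-0")
dpll_a.register_pin(shared_pin)
dpll_b.register_pin(shared_pin)
```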


>
>> >> There are currently 3 drivers for dpll I know of. In ptp_ocp and
>> >> mlx5 there is no concept of sharing pins. You are talking about a
>> >> single driver.
>> >> 
>> >> What I'm trying to say is, looking at the code, the pin sharing,
>> >> references and locking make things uncomfortably complex. You are so
>> >> far the only driver to need this, so do it internally. If in the future
>> >> another driver appears, this code can eventually be pushed into the dpll
>> >> core. No impact on UAPI from what I see. Please keep things as simple as
>> >> possible.
>> >
>> >But the pin is shared for one driver. Who cares if it's not shared in
>> >another. The user space must be able to reason about the constraints.  
>> 
>> Sorry, I don't follow :/ Could you please explain what you mean by
>> this?
>
>We don't wait with adding APIs until there is more than one driver that
>needs them.

Agreed. I was under the impression that this is only kernel internals and
won't affect the UAPI. Perhaps I'm wrong.


>
>> >You are suggesting drivers to magically flip state in core objects
>> >because of some hidden dependencies?!  
>> 
>> It's not a state flip. It's more like a well-propagated event of a state
>> change. Async changes may happen anyway, so user space needs to
>> handle them. Why is this different?
>
>What if the user space wants conflicting configurations for the muxes
>behind a shared pin?
>
>The fact that there is a notification does not solve the problem of
>user space not knowing what's going on. Why would the user space play
>guessing games if the driver _knows_ the topology and can easily tell
>it.

Okay, I get your point. This visibility is probably nice to have.
Whether it outweighs the added complexity, I'm not sure. But it
looks like you are, and I don't care that much. So let's focus on
defining the shared pin model properly.
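The notification mechanism being debated above can be sketched as a small event model. The names (`DpllCore`, `pin_state_change`) are hypothetical, not the actual dpll netlink API: when the driver, which knows the mux topology, changes the state of a shared pin, the core broadcasts the change so user space observes it instead of guessing:

```python
# Hypothetical sketch of driver-initiated pin state notifications;
# not the real dpll subsystem code.

class DpllCore:
    def __init__(self):
        self.listeners = []    # e.g. netlink multicast subscribers
        self.pin_state = {}

    def subscribe(self, callback):
        self.listeners.append(callback)

    def pin_state_change(self, pin_id, new_state):
        # Called by the driver, which knows the hidden mux dependency.
        self.pin_state[pin_id] = new_state
        for notify in self.listeners:
            notify(pin_id, new_state)

core = DpllCore()
seen = []
core.subscribe(lambda pin, state: seen.append((pin, state)))
# Driver routes a mux to pin 5: pin 5 connects, pin 6 disconnects,
# and both transitions are propagated to user space.
core.pin_state_change(5, "output")
core.pin_state_change(6, "disconnected")
```

Jakub's objection stands independently of the mechanism: the events tell user space *that* state flipped, not *why*, which is the argument for exposing the topology itself.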


>> >> There is a big difference between modeling a flat list of pins with a
>> >> set of attributes for each, and a tree of pins, some acting as leaf,
>> >> node and root. Do we really need such complexity? What value does
>> >> exposing this bring to the user?
>> >
>> >The fact that you can't auto select from devices behind muxes.  
>> 
>> Why? What's wrong with the mechanism I described in another part of this
>> thread?
>> 
>> Extending my example from above
>> 
>>    pin 1 source
>>    pin 2 output
>>    pin 3 muxid 100 source
>>    pin 4 muxid 100 source
>>    pin 5 muxid 101 source
>>    pin 6 muxid 101 source
>>    pin 7 output
>> 
>> User can now set individual prios for sources:
>> 
>> dpll x pin 1 set prio 10
>> etc
>> The result would be:
>> 
>>    pin 1 source prio 10
>>    pin 2 output
>>    pin 3 muxid 100 source prio 8
>>    pin 4 muxid 100 source prio 20
>>    pin 5 muxid 101 source prio 50
>>    pin 6 muxid 101 source prio 60
>>    pin 7 output
>> 
>> Now when auto is enabled, pin 3 is selected. Why would the user need to
>> manually select between 3 and 4? This should be abstracted out by the
>> driver.
>> 
>> Actually, this is neat as you have one cmd to do selection in manual
>> mode and a uniform way of configuring/monitoring selection in
>> autosel. Would the muxed pin make this better?
>> 
>> For muxed pin being output, only one pin from mux would be set:
>> 
>>    pin 1 source
>>    pin 2 output
>>    pin 3 muxid 100 disconnected
>>    pin 4 muxid 100 disconnected
>>    pin 5 muxid 101 output
>>    pin 6 muxid 101 disconnected
>>    pin 7 output
>
>Sorry, can't parse, could you draw the diagram?

There is no diagram. It's a plain list of pins with attributes, one pin
with its attributes per line.
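The flat-list model and the auto-selection rule described in the quoted example can be sketched as follows. Everything here is an illustrative assumption (field names, and "lowest prio value wins"); the real interface is the dpll netlink API, not this Python:

```python
# Hypothetical sketch of the flat pin list from the example above.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pin:
    pin_id: int
    direction: str               # "source" or "output"
    muxid: Optional[int] = None  # pins behind the same mux share a muxid
    prio: Optional[int] = None   # selection priority, lower wins (assumed)

pins = [
    Pin(1, "source", prio=10),
    Pin(2, "output"),
    Pin(3, "source", muxid=100, prio=8),
    Pin(4, "source", muxid=100, prio=20),
    Pin(5, "source", muxid=101, prio=50),
    Pin(6, "source", muxid=101, prio=60),
    Pin(7, "output"),
]

def auto_select(pins):
    """Pick the source pin with the lowest prio value.

    Pins behind a mux compete like any other source; the driver is
    assumed to route each mux to whichever of its pins wins, so user
    space never has to choose between e.g. pins 3 and 4 explicitly.
    """
    sources = [p for p in pins
               if p.direction == "source" and p.prio is not None]
    return min(sources, key=lambda p: p.prio)
```

With the priorities from the example, `auto_select(pins)` picks pin 3 (prio 8), matching "when auto is enabled, the pin 3 is selected" above.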


>
>To answer the most basic question - my understanding is that for
>prio-based selection there needs to be silicon that can tell if
>there is a valid clock on the line. While mux is just a fancy switch,
>it has no complex logic, just connects wires.
>
>Arkadiusz, is my understanding incorrect? I may have "intuited" this.
>
>IDK if there are any bidirectional pins after a mux, but that'd be
>another problem. Muxes are only simple for inputs.
>
>> >The HW topology is of material importance to user space.  
>> 
>> Interesting. When I was working on linecards, you said that the user
>> does not care about the inner HW topology. And that makes sense. When
>> things can be abstracted out to keep the iface clean, I think they should be.
>
>I recall only the FW-related conversations, but what I think is key
>is whether the information can be acted upon.

What I was referring to was the device/gearbox exposure per linecard.

>
>> >How many times does Arkadiusz have to explain this :|  
>> 
>> Pardon my ignorance, I don't see why exactly we need the mux hierarchy
>> (trees) exposed to the user.
