Message-ID: <ZFOTj70DxE2IMitO@nanopsycho>
Date: Thu, 4 May 2023 13:14:23 +0200
From: Jiri Pirko <jiri@...nulli.us>
To: Jakub Kicinski <kuba@...nel.org>
Cc: "Kubalewski, Arkadiusz" <arkadiusz.kubalewski@...el.com>,
	Vadim Fedorenko <vadim.fedorenko@...ux.dev>,
	Vadim Fedorenko <vadfed@...a.com>,
	Jonathan Lemon <jonathan.lemon@...il.com>,
	Paolo Abeni <pabeni@...hat.com>, poros <poros@...hat.com>,
	mschmidt <mschmidt@...hat.com>,
	"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
	linux-arm-kernel@...ts.infradead.org,
	"linux-clk@...r.kernel.org" <linux-clk@...r.kernel.org>,
	"Olech, Milena" <milena.olech@...el.com>,
	"Michalik, Michal" <michal.michalik@...el.com>
Subject: Re: [PATCH RFC v6 2/6] dpll: Add DPLL framework base functions

Thu, May 04, 2023 at 01:00:42PM CEST, jiri@...nulli.us wrote:
>Thu, May 04, 2023 at 04:16:43AM CEST, kuba@...nel.org wrote:
>>On Wed, 3 May 2023 09:56:57 +0200 Jiri Pirko wrote:
>>> >Yup, non-deterministic, just a cyclic ID allocated by the core starting
>>> >from 1. Finding the right device / pin needs to be done via
>>> >informational attributes, without making assumptions about the ID.
>>> 
>>> Okay.
>>> 
>>> Once netdev carries the pin ID in the RT netlink message (as is done
>>> in RFCv7), it is easy to get the pin/dpll for a netdev. No problem there.
>>>
>>> However, for the non-SyncE use case, how do you imagine scripts working?
>>> I mean, a script has to obtain the dpll/pin ID by a deterministic
>>> module_name/clock_id/idx tuple.
>>
>>No scoped idx.
>
>That means, no index defined by a driver, if I understand you correctly,
>right?
>
>
>>
>>> There are 2 options to do that:
>>> 1) dump all dplls/pins and do the lookup in userspace
>>> 2) get a dpll/pin according to a given module_name/clock_id/idx tuple
>>>
>>> The first approach is not very nice.
>>> The currently pushed RFCv7 of the patchset does not support 2).
>>> 
>>> Now if we add support for 2), we basically use module_name/clock_id/idx
>>> as a handle for "get cmd". My point is, why can't we use it for "set
>>> cmd" as well and avoid the ID entirely?
>>
>>Sure, we don't _have_ to have an ID, but it seems to go against normal
>>data normalization rules. And I don't see any harm in it.
>>
>>But you're asking for a per-device "idx" and that's a no-go for me,
>>given the already cited experience.
>>
>>User space can look up the ID based on identifying information it
>>has. IMO it's better to support multiple different intelligible elements
>
>Do you mean a fixed tuple or a variable tuple?
>
>CMD_GET_ID
>  -> DPLL_A_MODULE_NAME
>     DPLL_A_CLOCK_ID

Sorry, I hit the send button by mistake.
I meant to add a question here:
What is the next intelligible element to identify a DPLL device here?

>  <- DPLL_A_ID
>
>CMD_GET_PIN_ID
>  -> DPLL_A_MODULE_NAME
>     DPLL_A_CLOCK_ID

What is the next intelligible element to identify a pin here?

>  <- DPLL_A_PIN_ID
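
For illustration only, here is a minimal kernel-side sketch of what the lookup
behind such a CMD_GET_ID could look like, assuming a global list of registered
devices; dpll_device_list, the list_entry member and the struct field names
are invented for the example and are not taken from the RFC. It also shows why
module_name + clock_id alone may not be enough: if two devices sit behind the
same module/clock, the walk below can only ever return the first match, which
is exactly why a further identifying attribute is being asked about.

static struct dpll_device *dpll_device_find(const char *mod_name, u64 clock_id)
{
	struct dpll_device *dpll;

	/* Match the request attributes (DPLL_A_MODULE_NAME,
	 * DPLL_A_CLOCK_ID) against every registered device. */
	list_for_each_entry(dpll, &dpll_device_list, list_entry)
		if (!strcmp(dpll->module->name, mod_name) &&
		    dpll->clock_id == clock_id)
			return dpll;	/* first match only */
	return NULL;
}
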
>
>
>
>>than a single integer index into which drivers will start encoding all
>>sorts of info, using locally invented schemes.
>
>There could be multiple DPLL and pin instances for a single
>module/clock_id tuple that we have to distinguish somehow. If the driver
>can't pass an "index" for a DPLL or a pin, how do we distinguish them?
>
>Plus it is possible that 2 driver instances share the same dpll
>instance; then, to get the dpll pointer reference, they do:
>INSTANCE A:
>dpll_0 = dpll_device_get(clock_id, 0, THIS_MODULE);
>dpll_1 = dpll_device_get(clock_id, 1, THIS_MODULE);
>
>INSTANCE B:
>dpll_0 = dpll_device_get(clock_id, 0, THIS_MODULE);
>dpll_1 = dpll_device_get(clock_id, 1, THIS_MODULE);
>
>My point is, even if we don't expose the index to userspace,
>we need to have it internally.
>
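
To illustrate that last point, a minimal hypothetical sketch (not code from
this RFC) of how dpll_device_get() could keep the driver-provided index purely
internal: the (clock_id, idx, module) tuple is only used to find or create the
shared device object, while userspace would only ever see the cyclic ID the
core allocates. The lock, list and dpll_device_alloc() helper names are
assumptions made up for the example.

struct dpll_device *dpll_device_get(u64 clock_id, u32 idx,
				    struct module *module)
{
	struct dpll_device *dpll;

	mutex_lock(&dpll_device_lock);
	/* Two driver instances asking for the same tuple get the same
	 * object back (the INSTANCE A / INSTANCE B case above). */
	list_for_each_entry(dpll, &dpll_device_list, list_entry)
		if (dpll->clock_id == clock_id && dpll->dev_idx == idx &&
		    dpll->module == module) {
			refcount_inc(&dpll->refcount);
			goto out;
		}
	/* Not registered yet: allocate it; the core assigns the cyclic,
	 * userspace-visible ID (starting from 1) here, while idx stays
	 * an internal lookup key only. */
	dpll = dpll_device_alloc(clock_id, idx, module);
out:
	mutex_unlock(&dpll_device_lock);
	return dpll;
}
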
