Date:   Thu, 8 Jun 2023 11:53:47 +0200
From:   Ulf Hansson <ulf.hansson@...aro.org>
To:     Cristian Marussi <cristian.marussi@....com>
Cc:     Sudeep Holla <sudeep.holla@....com>,
        Viresh Kumar <vireshk@...nel.org>, Nishanth Menon <nm@...com>,
        Stephen Boyd <sboyd@...nel.org>,
        Nikunj Kela <nkela@...cinc.com>,
        Prasad Sodagudi <psodagud@...cinc.com>,
        Alexandre Torgue <alexandre.torgue@...s.st.com>,
        linux-pm@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH 00/16] arm_scmi/opp/dvfs: Add generic performance scaling support

On Wed, 7 Jun 2023 at 16:43, Cristian Marussi <cristian.marussi@....com> wrote:
>
> On Wed, Jun 07, 2023 at 02:46:12PM +0200, Ulf Hansson wrote:
> > The current SCMI performance scaling support is limited to cpufreq. This series
> > extends the support so it can be used for all kinds of devices, not only for
> > CPUs.
> >
> > The changes are spread over a couple of different subsystems, although the
> > changes that affect subsystems other than the arm_scmi directory are mostly
> > small. The series is based upon v6.4-rc5. That said, let's figure out how
> > best to move forward with this. I am of course happy to help in any way.
> >
> > Note that, so far, this has only been tested on the QEMU virt platform with
> > OP-TEE running an SCMI server. If you want some more details about my test
> > setup, I am certainly happy to share that with you!
> >
> > Looking forward to your feedback!
> >
>
> Hi Ulf,
>
> First of all, thanks for this.
>
> I'll have a proper look at this in the coming weeks; in the meantime,
> just a minor remark after a quick look.
>
> You expose a few new perf_ops to fit your needs; in fact, PERF was not
> exposing that data yet, apparently for lack of users needing it (and/or
> for historical reasons, I think).
>
> My concern is that this would lead to a growing number of ops as soon as
> more data is needed by future users. Indeed, other protocols do expose
> more data, but they use a different approach: instead of custom ops, they
> let the user access a common static info structure, like:
>
>
> +       int (*num_domains_get)(const struct scmi_protocol_handle *ph);
> +       const struct scmi_perf_dom_info __must_check *(*info_get)
> +               (const struct scmi_protocol_handle *ph, u32 domain);
>
> and also expose the related common info struct in scmi_protocol.h.
> Another reason to stick to this approach would be consistency with the
> other protocols (even though I think PERF is not the only one lacking an
> info_get).
>
> Now, since there was already a hidden user for this perf data (that
> would be me :P ... in the form of an unpublished SCMI test driver), I
> happen to have a tested patch that exposes just the two ops above and
> exports scmi_perf_dom_info and related structures to scmi_protocol.h.
>
> If you (and Sudeep) agree with this approach of limiting the number of
> exposed ops in favour of sharing some static info data upfront, I can
> quickly clean up and post this patch for you to pick up in your next
> iteration.

Your suggestion makes perfect sense to me too.
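
Just to make sure we are aligned, below is roughly how I would expect to
consume those two ops from the new perf-domain code. This is an untested
sketch on my side (perf_ops and ph being the usual protocol ops pointer
and protocol handle), and I'm assuming info_get returns an ERR_PTR on
failure, along the lines of the other SCMI info_get ops:

	int num, i;
	const struct scmi_perf_dom_info *dom;

	num = perf_ops->num_domains_get(ph);
	if (num < 0)
		return num;

	for (i = 0; i < num; i++) {
		dom = perf_ops->info_get(ph, i);
		if (IS_ERR(dom))
			return PTR_ERR(dom);

		/* Consume the static per-domain data via dom here. */
	}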

While adding the new ops to scmi_perf_proto_ops, I was merely taking
inspiration from scmi_power_proto_ops; it seems like those need an
update too.

That said, there is no need for you to send a patch for "perf" at this
point, as this piece of code is easy for me to put together myself. I
will simply replace a few of the patches in the series with a new one;
no problem at all.
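
For completeness, I imagine the exported struct ending up looking
roughly like below. Note that the field names are illustrative guesses
on my part, based on the current internal perf domain info in perf.c,
and not necessarily the exact layout of your patch:

	struct scmi_perf_dom_info {
		bool set_limits;
		bool set_perf;
		bool perf_fastchannels;
		u32 opp_count;
		u32 sustained_freq_khz;
		u32 sustained_perf_level;
		u32 mult_factor;
		char name[SCMI_MAX_STR_SIZE];
		struct scmi_opp opp[MAX_OPPS];
	};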

>
> (I actually have more conversions of this kind for the other remaining
>  protocols, but those are unrelated to your series, so I'd post them later.)

Yes, that can be handled separately; I'll leave that for you to manage.

Kind regards
Uffe
