Message-ID: <CAKfTPtD==UMjbSGHvKxT_z=LyvsJt=UCHsqkbgVnagbn5Aqpcw@mail.gmail.com>
Date:   Tue, 14 Jun 2022 19:59:23 +0200
From:   Vincent Guittot <vincent.guittot@...aro.org>
To:     Sudeep Holla <sudeep.holla@....com>
Cc:     Dietmar Eggemann <dietmar.eggemann@....com>,
        linux-kernel@...r.kernel.org, Atish Patra <atishp@...shpatra.org>,
        Atish Patra <atishp@...osinc.com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Qing Wang <wangqing@...o.com>,
        linux-arm-kernel@...ts.infradead.org,
        linux-riscv@...ts.infradead.org, Rob Herring <robh+dt@...nel.org>
Subject: Re: [PATCH v3 15/16] arch_topology: Set cluster identifier in each
 core/thread from /cpu-map

On Fri, 10 Jun 2022 at 12:27, Sudeep Holla <sudeep.holla@....com> wrote:
>
> On Fri, Jun 10, 2022 at 12:08:44PM +0200, Vincent Guittot wrote:
> > On Mon, 6 Jun 2022 at 12:22, Sudeep Holla <sudeep.holla@....com> wrote:
> > >
>
> [...]
>
> > > Why? Are you suggesting that we shouldn't present the hardware cluster
> > > to the topology because of the above reason? If so, sorry, that is not a
> > > valid reason. We could add logic to return NULL or an appropriate value
> > > in cpu_clustergroup_mask if it matches the MC level mask, should we be
> > > unable to deal with that in generic scheduler code. But the topology
> > > code can't be compromised for that reason, as it is user visible.
> >
> > I tend to agree with Dietmar. The legacy use of the cluster node in DT
> > refers to the DynamIQ or legacy big.LITTLE cluster, which is also aligned
> > with the LLC and the MC scheduling level. The cluster level that has
> > been introduced recently does not target this level but some
> > intermediate level, either inside it, as for the Kunpeng 920 or the v9
> > complex, or outside it, as for the Ampere Altra. So I would say that
> > one cluster node level in DT refers to the same MC/LLC level, and only
> > an additional child/parent cluster node should be used to fill the
> > clustergroup_mask.
> >
>
> Again, I completely disagree. Let us look at the problems separately.
> Tools like lscpu and lstopo expect the hardware topology to describe
> what the hardware looks like, not the scheduler's view of the hardware.
> So the topology masks that get exposed to user-space need fixing even
> today; I have reports from various tooling people about exactly that.
> E.g. Juno getting exposed as a dual-socket system is utter nonsense.
>
> Yes, the scheduler uses most of the topology masks as is, but that is not
> a must. There are the *group_mask functions that can implement whatever
> the scheduler needs to be fed.
>
> I am not sure why the two issues are getting mixed up, and that is the main
> reason why I jumped into this: to make sure the topology masks are not
> tampered with based on the way they need to be used by the scheduler.
>
> Both ACPI and DT on a platform must present the exact same hardware
> topology to user-space; there is no room for argument there.

But that's exactly my point here:
ACPI doesn't show the DynamIQ level anywhere, only the LLC (which, on
those systems, are the same), whereas your patch makes the DynamIQ level
visible for DT in addition to the LLC.

>
> > IIUC, we don't describe the DynamIQ level in ACPI, which uses cache
> > topology instead to define cpu_coregroup_mask, whereas DT describes the
> > DynamIQ level instead of using cache topology. If you use cache topology
> > now, then you should skip the DynamIQ level.
> >
>
> Yes, unless someone can work out a binding to represent that and convince
> DT maintainers ;).
>
> > Finally, even if CLS and MC have the same scheduling behavior for now,
> > they might end up with different scheduling properties, which would
> > mean that replacing the MC level by the CLS one for current SoCs would
> > become wrong.
> >
>
> Again, as I mentioned to Dietmar, that is something we can and must deal
> with in those *group_mask functions, not by expecting the topology masks
> to be altered to meet CLS/MC or whatever the sched domains need. Sorry,
> that is my strong opinion, as the topology is already user-space visible
> and (tooling) people are complaining that DT systems are broken and don't
> match ACPI systems.

Again, your proposal doesn't help here, because DT will show a level that
doesn't appear in ACPI.

>
> So unless someone gives me non-scheduler, topology-specific reasons to
> change that, sorry, but my opinion on this matter is not going to change ;).
>
> You will get this view of the topology; find a way to manage with all those
> *group_mask functions. By the way, it is already handled for ACPI systems,

AFAICT, no, it's not: the cluster described in ACPI is not the DynamIQ
level that you now make visible to DT.

> so if you are not happy with that, then that needs fixing, as this change
> set just aligns the behaviour with similar ACPI systems. So the Juno
> example is incorrect, because the scheduler's behaviour there differs
> between DT and ACPI.
>
> --
> Regards,
> Sudeep
