Message-ID: <20180102113014.GA28799@e105550-lin.cambridge.arm.com>
Date:   Tue, 2 Jan 2018 11:30:14 +0000
From:   Morten Rasmussen <morten.rasmussen@...s.arm.com>
To:     Xiongfeng Wang <wangxiongfeng2@...wei.com>
Cc:     Jeremy Linton <jeremy.linton@....com>,
        Lorenzo Pieralisi <lorenzo.pieralisi@....com>,
        linux-acpi@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
        sudeep.holla@....com, hanjun.guo@...aro.org, rjw@...ysocki.net,
        will.deacon@....com, catalin.marinas@....com,
        gregkh@...uxfoundation.org, viresh.kumar@...aro.org,
        mark.rutland@....com, linux-kernel@...r.kernel.org,
        linux-pm@...r.kernel.org, jhugo@...eaurora.org,
        Jonathan.Zhang@...ium.com, ahs3@...hat.com,
        Jayachandran.Nair@...ium.com, austinwc@...eaurora.org,
        lenb@...nel.org, morten.rasmussen@....com, dietmar.eggemann@....com
Subject: Re: [PATCH v5 7/9] arm64: Topology, rename cluster_id

On Tue, Jan 02, 2018 at 10:29:35AM +0800, Xiongfeng Wang wrote:
> Hi,
> 
> On 2017/12/18 20:42, Morten Rasmussen wrote:
> > On Fri, Dec 15, 2017 at 10:36:35AM -0600, Jeremy Linton wrote:
> >> Hi,
> >>
> >> On 12/13/2017 12:02 PM, Lorenzo Pieralisi wrote:
> >>> [+Morten, Dietmar]
> >>>
> >>> $SUBJECT should be:
> >>>
> >>> arm64: topology: rename cluster_id
> >>
> [cut]
> >>
> >> I was hoping someone else would comment here, but my take at this point is
> >> that it doesn't really matter in a functional sense at the moment.
> >> Like the chiplet discussion it can be the subject of a future patch along
> >> with the patches which tweak the scheduler to understand the split.
> >>
> >> BTW, given that i'm OoO next week, and the following that are the holidays,
> >> I don't intend to repost this for a couple weeks. I don't think there are
> >> any issues with this set.
> >>
> >>>
> >>> There is also arch/arm to take into account, again, this patch is
> >>> just renaming (as it should have named since the beginning) a
> >>> topology level but we should consider everything from a legacy
> >>> perspective.
> > 
> > arch/arm has gone for thread/core/socket for the three topology levels
> > it supports.
> > 
> > I'm not sure what short term value keeping cluster_id has? Isn't it just
> > about where we make the package = cluster assignment? Currently it is in
> > the definition of topology_physical_package_id. If we keep cluster_id
> > and add package_id, it gets moved into the MPIDR/DT parsing code. 
> > 
> > Keeping cluster_id and introducing a topology_cluster_id function could
> > help cleaning up some of the users of topology_physical_package_id that
> > currently assumes package_id == cluster_id though.
> 
> I think we still need the information describing which cores are in one cluster.
> Many arm64 chips have a core/cluster/socket architecture, and cores in one cluster
> may share the same L2 cache. That information can be used to build the sched_domains:
> if we put the cores of one cluster into one sched_domain, performance will be better.
> (Please see kernel/sched/topology.c:1197, where cpu_coregroup_mask() uses
> 'core_sibling' to build a multi-core sched_domain.)
> So I think we still need a variable to record which cores are in one sched_domain
> for future use.

I agree that information about clusters is useful. The problem is that
we currently treat clusters as sockets (also known as dies or
physical_package_id, depending on which bit of code you are looking at). We
don't handle the core/cluster/socket topology you mention as three
'levels' of grouping of cores. We currently create one sched_domain
level for the cores (in a cluster) and a parent sched_domain level of all
clusters in the system, ignoring whether those are in different sockets,
and tell the scheduler that those clusters are all separate sockets. This
doesn't work very well for systems with many clusters and/or multiple
sockets.

How to solve that problem is a longer discussion, which I'm happy to
have. This patch just prepares to get rid of the assumption that
clusters are sockets, as this is not in line with what other
architectures do or with the assumptions made by the scheduler.

Morten
