Message-ID: <d6497158-c58f-4fd4-4613-699fb2af7a61@gmail.com>
Date: Thu, 8 Mar 2018 21:41:17 +0100
From: Brice Goglin <brice.goglin@...il.com>
To: Morten Rasmussen <morten.rasmussen@....com>,
Jeremy Linton <jeremy.linton@....com>
Cc: mark.rutland@....com, vkilari@...eaurora.org,
lorenzo.pieralisi@....com, catalin.marinas@....com,
tnowicki@...iumnetworks.com, gregkh@...uxfoundation.org,
will.deacon@....com, dietmar.eggemann@....com, rjw@...ysocki.net,
linux-kernel@...r.kernel.org, ahs3@...hat.com,
linux-acpi@...r.kernel.org, palmer@...ive.com,
hanjun.guo@...aro.org, sudeep.holla@....com,
austinwc@...eaurora.org, linux-riscv@...ts.infradead.org,
john.garry@...wei.com, wangxiongfeng2@...wei.com,
linux-arm-kernel@...ts.infradead.org, lenb@...nel.org
Subject: Re: [PATCH v7 13/13] arm64: topology: divorce MC scheduling domain
from core_siblings

> Is there a good reason for diverging instead of adjusting the
> core_siblings mask? On x86 the core_siblings mask is defined by the last
> level cache span so they don't have this issue.

No. core_siblings is defined as the set of cores that share the same
physical_package_id (see the documentation of the sysfs topology files),
and the LLC span can be smaller than that.

Example: an E5v3 with cluster-on-die enabled has two L3 per package, so
core_siblings is twice as large as the L3 cpumap:
https://www.open-mpi.org/projects/hwloc/lstopo/images/2XeonE5v3.v1.11.png

On AMD EPYC, you even have up to 8 LLCs per package.
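
FWIW, you can see the mismatch directly from sysfs by comparing the two
masks for one CPU. A quick, untested sketch (it assumes index3 is the
LLC, which holds when L1d/L1i/L2/L3 are exposed as index0..index3; on
the machines above, the second mask is a strict subset of the first):

#include <stdio.h>

/* Print one sysfs cpumask file, or "n/a" if it is missing. */
static void print_mask(const char *label, const char *path)
{
	char buf[256];
	FILE *f = fopen(path, "r");

	if (f && fgets(buf, sizeof(buf), f))
		printf("%-20s %s", label, buf);	/* buf keeps its '\n' */
	else
		printf("%-20s n/a\n", label);
	if (f)
		fclose(f);
}

int main(void)
{
	/* core_siblings follows physical_package_id (whole package)... */
	print_mask("core_siblings:",
		   "/sys/devices/system/cpu/cpu0/topology/core_siblings");
	/* ...while the LLC span comes from the cache hierarchy; index3
	 * as the LLC is an assumption, see above. */
	print_mask("llc shared_cpu_map:",
		   "/sys/devices/system/cpu/cpu0/cache/index3/shared_cpu_map");
	return 0;
}
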
Brice