Message-ID: <87v8bcdly7.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date:   Thu, 12 Oct 2023 21:12:00 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        Arjan Van De Ven <arjan@...ux.intel.com>,
        Sudeep Holla <sudeep.holla@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        "David Hildenbrand" <david@...hat.com>,
        Johannes Weiner <jweiner@...hat.com>,
        "Dave Hansen" <dave.hansen@...ux.intel.com>,
        Michal Hocko <mhocko@...e.com>,
        "Pavel Tatashin" <pasha.tatashin@...een.com>,
        Matthew Wilcox <willy@...radead.org>,
        Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 02/10] cacheinfo: calculate per-CPU data cache size

Mel Gorman <mgorman@...hsingularity.net> writes:

> On Thu, Oct 12, 2023 at 08:08:32PM +0800, Huang, Ying wrote:
>> Mel Gorman <mgorman@...hsingularity.net> writes:
>> 
>> > On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
>> >> Per-CPU data cache size is useful information.  For example, it can be
>> >> used to estimate how much data cache is available to each CPU.  So, in
>> >> this patch, the data cache size for each CPU is calculated via
>> >> data_cache_size / shared_cpu_weight.
>> >> 
>> >> A brute-force algorithm that iterates over all online CPUs is used to
>> >> avoid allocating an extra cpumask, especially in the offline callback.
>> >> 
>> >> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
>> >
>> > It's not necessarily relevant to the patch, but at least the scheduler
>> > also stores some per-cpu topology information such as sd_llc_size -- the
>> > number of CPUs sharing the same last-level-cache as this CPU. It may be
>> > worth unifying this at some point if it's common that per-cpu
>> > information is too fine and per-zone or per-node information is too
>> > coarse. This would be particularly true when considering locking
>> > granularity.
>> >
>> >> Cc: Sudeep Holla <sudeep.holla@....com>
>> >> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> >> Cc: Mel Gorman <mgorman@...hsingularity.net>
>> >> Cc: Vlastimil Babka <vbabka@...e.cz>
>> >> Cc: David Hildenbrand <david@...hat.com>
>> >> Cc: Johannes Weiner <jweiner@...hat.com>
>> >> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
>> >> Cc: Michal Hocko <mhocko@...e.com>
>> >> Cc: Pavel Tatashin <pasha.tatashin@...een.com>
>> >> Cc: Matthew Wilcox <willy@...radead.org>
>> >> Cc: Christoph Lameter <cl@...ux.com>
>> >> ---
>> >>  drivers/base/cacheinfo.c  | 42 ++++++++++++++++++++++++++++++++++++++-
>> >>  include/linux/cacheinfo.h |  1 +
>> >>  2 files changed, 42 insertions(+), 1 deletion(-)
>> >> 
>> >> diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
>> >> index cbae8be1fe52..3e8951a3fbab 100644
>> >> --- a/drivers/base/cacheinfo.c
>> >> +++ b/drivers/base/cacheinfo.c
>> >> @@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu)
>> >>  	return rc;
>> >>  }
>> >>  
>> >> +static void update_data_cache_size_cpu(unsigned int cpu)
>> >> +{
>> >> +	struct cpu_cacheinfo *ci;
>> >> +	struct cacheinfo *leaf;
>> >> +	unsigned int i, nr_shared;
>> >> +	unsigned int size_data = 0;
>> >> +
>> >> +	if (!per_cpu_cacheinfo(cpu))
>> >> +		return;
>> >> +
>> >> +	ci = ci_cacheinfo(cpu);
>> >> +	for (i = 0; i < cache_leaves(cpu); i++) {
>> >> +		leaf = per_cpu_cacheinfo_idx(cpu, i);
>> >> +		if (leaf->type != CACHE_TYPE_DATA &&
>> >> +		    leaf->type != CACHE_TYPE_UNIFIED)
>> >> +			continue;
>> >> +		nr_shared = cpumask_weight(&leaf->shared_cpu_map);
>> >> +		if (!nr_shared)
>> >> +			continue;
>> >> +		size_data += leaf->size / nr_shared;
>> >> +	}
>> >> +	ci->size_data = size_data;
>> >> +}
>> >
>> > This needs comments.
>> >
>> > It would be nice to add a comment on top describing the limitation of
>> > CACHE_TYPE_UNIFIED here in the context of
>> > update_data_cache_size_cpu().
>> 
>> Sure.  Will do that.
>> 
>
> Thanks.
>
>> > The L2 cache could be unified but much smaller than an L3 or other
>> > last-level cache. It's not clear from the code what level of cache is being
>> > used, due to a lack of familiarity with the cpu_cacheinfo code, but size_data
>> > is not the size of a cache; it appears to be the share of a cache a CPU
>> > would have under ideal circumstances.
>> 
>> Yes.  And it isn't for one specific level of cache.  It's the sum of the
>> per-CPU shares of all levels of cache.  But the calculation is inaccurate.
>> More details are in the reply below.
>> 
>> > However, as it also appears to be
>> > iterating over the hierarchy, this may not be accurate. Caches may or may not
>> > allow data to be duplicated between levels, so the value may be inaccurate.
>> 
>> Thank you very much for pointing this out!  A cache can be inclusive
>> or not.  So, we cannot calculate the per-CPU slice of all cache levels
>> by blindly adding them together.  I will change this in a follow-on
>> patch.
>> 
>
> Please do. I would strongly suggest basing this on the LLC only, because it's
> the only value you can be sure of. This change is the only one that may
> warrant a respin of the series, as the history will be somewhat confusing
> otherwise.

I am still checking whether it's possible to get cache inclusiveness
information via CPUID.
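
On x86, the inclusiveness bit should be available from the deterministic
cache parameters leaf (CPUID leaf 0x4, EDX bit 1 of each sub-leaf; AMD
has a similarly laid out leaf at 0x8000001d).  An untested user-space
sketch of the check, assuming an Intel CPU (the kernel side would use
cpuid_count() instead of the compiler's <cpuid.h> helper):

#include <stdio.h>
#include <cpuid.h>

int main(void)
{
	unsigned int eax, ebx, ecx, edx;
	unsigned int idx, type;

	for (idx = 0; ; idx++) {
		/* Deterministic cache parameters: one sub-leaf per cache */
		if (!__get_cpuid_count(0x4, idx, &eax, &ebx, &ecx, &edx))
			break;
		type = eax & 0x1f;	/* 0 means no more cache levels */
		if (!type)
			break;
		printf("L%u (type %u): %sinclusive of lower levels\n",
		       (eax >> 5) & 0x7, type, (edx & 0x2) ? "" : "not ");
	}
	return 0;
}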

If there's no reliable way to do that, we can use the maximum of the
per-CPU shares across all levels of cache.  For an inclusive cache
hierarchy, that will be the LLC value.  For a non-inclusive hierarchy,
the value will be more accurate.  For example, on Intel Sapphire Rapids,
the L2 cache is 2 MB per core, while the LLC is 1.875 MB per core
according to [1].

[1] https://www.intel.com/content/www/us/en/developer/articles/technical/fourth-generation-xeon-scalable-family-overview.html
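
Concretely, the fallback would just replace the sum in
update_data_cache_size_cpu() with a max.  An untested sketch on top of
this patch:

static void update_data_cache_size_cpu(unsigned int cpu)
{
	struct cpu_cacheinfo *ci;
	struct cacheinfo *leaf;
	unsigned int i, nr_shared;
	unsigned int size_data = 0;

	if (!per_cpu_cacheinfo(cpu))
		return;

	ci = ci_cacheinfo(cpu);
	for (i = 0; i < cache_leaves(cpu); i++) {
		leaf = per_cpu_cacheinfo_idx(cpu, i);
		if (leaf->type != CACHE_TYPE_DATA &&
		    leaf->type != CACHE_TYPE_UNIFIED)
			continue;
		nr_shared = cpumask_weight(&leaf->shared_cpu_map);
		if (!nr_shared)
			continue;
		/*
		 * Keep the largest per-CPU share seen at any data/unified
		 * level instead of summing the levels: an inclusive
		 * hierarchy reduces to the LLC share, while a non-inclusive
		 * one (e.g. SPR: 2 MB L2 vs 1.875 MB LLC per core) picks
		 * the larger L2 figure.
		 */
		size_data = max(size_data, leaf->size / nr_shared);
	}
	ci->size_data = size_data;
}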

I will respin the series.

Thanks a lot for the review!

--
Best Regards,
Huang, Ying
