Message-ID: <87h6mwf3gf.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date:   Thu, 12 Oct 2023 20:08:32 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Mel Gorman <mgorman@...hsingularity.net>
Cc:     <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
        Arjan Van De Ven <arjan@...ux.intel.com>,
        Sudeep Holla <sudeep.holla@....com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Vlastimil Babka <vbabka@...e.cz>,
        "David Hildenbrand" <david@...hat.com>,
        Johannes Weiner <jweiner@...hat.com>,
        "Dave Hansen" <dave.hansen@...ux.intel.com>,
        Michal Hocko <mhocko@...e.com>,
        "Pavel Tatashin" <pasha.tatashin@...een.com>,
        Matthew Wilcox <willy@...radead.org>,
        Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 02/10] cacheinfo: calculate per-CPU data cache size

Mel Gorman <mgorman@...hsingularity.net> writes:

> On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
>> Per-CPU data cache size is useful information.  For example, it can be
>> used to estimate the data cache slice available to each CPU.  So, in
>> this patch, the data cache size for each CPU is calculated as
>> data_cache_size / shared_cpu_weight.
>> 
>> A brute-force algorithm that iterates over all online CPUs is used to
>> avoid allocating an extra cpumask, especially in the offline callback.
>> 
>> Signed-off-by: "Huang, Ying" <ying.huang@...el.com>
>
> It's not necessarily relevant to the patch, but at least the scheduler
> also stores some per-cpu topology information such as sd_llc_size -- the
> number of CPUs sharing the same last-level-cache as this CPU. It may be
> worth unifying this at some point if it's common that per-cpu
> information is too fine and per-zone or per-node information is too
> coarse. This would be particularly true when considering locking
> granularity.
>
>> Cc: Sudeep Holla <sudeep.holla@....com>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: Mel Gorman <mgorman@...hsingularity.net>
>> Cc: Vlastimil Babka <vbabka@...e.cz>
>> Cc: David Hildenbrand <david@...hat.com>
>> Cc: Johannes Weiner <jweiner@...hat.com>
>> Cc: Dave Hansen <dave.hansen@...ux.intel.com>
>> Cc: Michal Hocko <mhocko@...e.com>
>> Cc: Pavel Tatashin <pasha.tatashin@...een.com>
>> Cc: Matthew Wilcox <willy@...radead.org>
>> Cc: Christoph Lameter <cl@...ux.com>
>> ---
>>  drivers/base/cacheinfo.c  | 42 ++++++++++++++++++++++++++++++++++++++-
>>  include/linux/cacheinfo.h |  1 +
>>  2 files changed, 42 insertions(+), 1 deletion(-)
>> 
>> diff --git a/drivers/base/cacheinfo.c b/drivers/base/cacheinfo.c
>> index cbae8be1fe52..3e8951a3fbab 100644
>> --- a/drivers/base/cacheinfo.c
>> +++ b/drivers/base/cacheinfo.c
>> @@ -898,6 +898,41 @@ static int cache_add_dev(unsigned int cpu)
>>  	return rc;
>>  }
>>  
>> +static void update_data_cache_size_cpu(unsigned int cpu)
>> +{
>> +	struct cpu_cacheinfo *ci;
>> +	struct cacheinfo *leaf;
>> +	unsigned int i, nr_shared;
>> +	unsigned int size_data = 0;
>> +
>> +	if (!per_cpu_cacheinfo(cpu))
>> +		return;
>> +
>> +	ci = ci_cacheinfo(cpu);
>> +	for (i = 0; i < cache_leaves(cpu); i++) {
>> +		leaf = per_cpu_cacheinfo_idx(cpu, i);
>> +		if (leaf->type != CACHE_TYPE_DATA &&
>> +		    leaf->type != CACHE_TYPE_UNIFIED)
>> +			continue;
>> +		nr_shared = cpumask_weight(&leaf->shared_cpu_map);
>> +		if (!nr_shared)
>> +			continue;
>> +		size_data += leaf->size / nr_shared;
>> +	}
>> +	ci->size_data = size_data;
>> +}
>
> This needs comments.
>
> It would be nice to add a comment on top describing the limitation of
> CACHE_TYPE_UNIFIED here in the context of
> update_data_cache_size_cpu().

Sure.  Will do that.
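
Roughly something like the following (exact wording to be refined):

/*
 * size_data is an estimate of the data cache capacity available to
 * this CPU: for each data/unified cache leaf, the leaf size divided
 * by the number of CPUs sharing it, summed over all levels.  Note
 * that a CACHE_TYPE_UNIFIED leaf also holds instructions and may be
 * a mid-level cache (e.g. L2) much smaller than the LLC, so this is
 * only an approximation.
 */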

> The L2 cache could be unified but much smaller than an L3 or other
> last-level cache. It's not clear from the code what level of cache is
> being used, due to a lack of familiarity with the cpu_cacheinfo code,
> but size_data is not the size of a cache; it appears to be the share of
> a cache a CPU would have under ideal circumstances.

Yes.  And it isn't for one specific level of cache; it's the sum of the
per-CPU shares of all cache levels.  But the calculation is inaccurate.
More details are in the reply below.
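
For example, on a hypothetical topology with a 48KB private L1D, a 2MB
L2 shared by 2 CPUs and a 32MB L3 shared by 16 CPUs, the current code
would compute

  size_data = 48KB + 2MB/2 + 32MB/16 = 48KB + 1MB + 2MB ~= 3MB

which is why the value should be read as an idealized per-CPU share
rather than the size of any particular cache.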

> However, as it also appears to be
> iterating the cache hierarchy, this may not be accurate. Caches may or
> may not allow data to be duplicated between levels, so the summed value
> may be misleading.

Thank you very much for pointing this out!  A cache level can be
inclusive or not, so we cannot calculate the per-CPU slice across all
cache levels by blindly adding the levels together.  I will change this
in a follow-on patch.

> A respin of the patch is not necessary but a follow-on patch adding
> clarifying comments would be very welcome covering
>
> o What levels of cache are being used
> o Describe what size_data actually is and preferably rename the field
>   to be more explicit, as "size" could be the total cache capacity, the
>   cache slice under ideal circumstances or even the number of CPUs sharing
>   that cache.

Sure.

> The cache details *may* need a follow-on patch if the size_data value is
> misleading. If it is a hierarchy and the value does not always represent
> the slice of cache a CPU could have under ideal circumstances, then the
> value should be based on the LLC only so that it is predictable across
> architectures.

Sure.
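
For illustration only, an untested sketch of an LLC-only variant,
reusing the helpers already used in this patch (the exact form will be
decided in the follow-on patch):

static void update_data_cache_size_cpu(unsigned int cpu)
{
	struct cpu_cacheinfo *ci;
	struct cacheinfo *leaf, *llc = NULL;
	unsigned int i, nr_shared;

	if (!per_cpu_cacheinfo(cpu))
		return;

	ci = ci_cacheinfo(cpu);
	/* Find the highest-level data/unified leaf, i.e. the LLC */
	for (i = 0; i < cache_leaves(cpu); i++) {
		leaf = per_cpu_cacheinfo_idx(cpu, i);
		if (leaf->type != CACHE_TYPE_DATA &&
		    leaf->type != CACHE_TYPE_UNIFIED)
			continue;
		if (!llc || leaf->level > llc->level)
			llc = leaf;
	}

	ci->size_data = 0;
	if (!llc)
		return;

	/* The CPU's share of the LLC under ideal circumstances */
	nr_shared = cpumask_weight(&llc->shared_cpu_map);
	if (nr_shared)
		ci->size_data = llc->size / nr_shared;
}

Using only the LLC avoids double counting between levels and keeps the
value predictable across architectures, as you suggest above.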

--
Best Regards,
Huang, Ying
