Message-ID: <87cyyawshy.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Fri, 22 Sep 2023 15:56:41 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Sudeep Holla <sudeep.holla@....com>
Cc: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Arjan Van De Ven <arjan@...ux.intel.com>,
"Andrew Morton" <akpm@...ux-foundation.org>,
Mel Gorman <mgorman@...hsingularity.net>,
Vlastimil Babka <vbabka@...e.cz>,
"David Hildenbrand" <david@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
"Dave Hansen" <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
"Pavel Tatashin" <pasha.tatashin@...een.com>,
Matthew Wilcox <willy@...radead.org>,
Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 02/10] cacheinfo: calculate per-CPU data cache size

Sudeep Holla <sudeep.holla@....com> writes:
> On Wed, Sep 20, 2023 at 02:18:48PM +0800, Huang Ying wrote:
>> Per-CPU data cache size is useful information. In this patch, the
>> data cache size for each CPU is calculated as data_cache_size /
>> shared_cpu_weight.
>>
>> A brute-force algorithm that iterates over all online CPUs is used
>> to avoid allocating an extra cpumask, especially in the offline
>> callback.
>>
>
> You have not mentioned who will use this information. Looking at the
> change, it is not exposed to user-space. I also see this is actually
> part of the series [1]. Is this info used in any of those patches? Can
> you point me to them?

Yes. It is used by [PATCH 03/10] of the series. If the per-CPU data
cache size is large enough, we will cache more pages in the per-CPU
pageset to reduce zone lock contention.
> Not all architectures use cacheinfo yet. How will the mm changes
> affect those platforms?
If cacheinfo isn't available, we will fall back to the original
behavior: drain the per-CPU pageset more often (that is, cache fewer
pages to improve cache-hot page sharing between CPUs).
> --
> Regards,
> Sudeep
>
> [1] https://lore.kernel.org/all/20230920061856.257597-1-ying.huang@intel.com/
--
Best Regards,
Huang, Ying