Message-ID: <87r0m0dlmg.fsf@yhuang6-desk2.ccr.corp.intel.com>
Date: Thu, 12 Oct 2023 21:19:03 +0800
From: "Huang, Ying" <ying.huang@...el.com>
To: Mel Gorman <mgorman@...hsingularity.net>
Cc: <linux-mm@...ck.org>, <linux-kernel@...r.kernel.org>,
Arjan Van De Ven <arjan@...ux.intel.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
David Hildenbrand <david@...hat.com>,
Johannes Weiner <jweiner@...hat.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Michal Hocko <mhocko@...e.com>,
Pavel Tatashin <pasha.tatashin@...een.com>,
Matthew Wilcox <willy@...radead.org>,
"Christoph Lameter" <cl@...ux.com>
Subject: Re: [PATCH 09/10] mm, pcp: avoid to reduce PCP high unnecessarily

Mel Gorman <mgorman@...hsingularity.net> writes:
> On Thu, Oct 12, 2023 at 03:48:04PM +0800, Huang, Ying wrote:
>> "
>> On a 2-socket Intel server with 224 logical CPUs, we run 8 kbuild
>> instances in parallel (each with `make -j 28`) in 8 cgroups.  This
>> simulates the kbuild server that is used by the 0-Day kbuild service.
>> With the patch, the number of pages allocated from the zone (instead
>> of from the PCP) decreases by 21.4%.
>> "
>>
>> I also showed the performance numbers for each step of the optimization
>> as follows (copied from the patchset V2 link above).
>>
>> "
>>           build time  lock contend%  free_high  alloc_zone
>>           ----------  -------------  ---------  ----------
>> base           100.0           13.5      100.0       100.0
>> patch1          99.2           10.6       19.2        95.6
>> patch3          99.2           11.7        7.1        95.6
>> patch5          98.4           10.0        8.2        97.1
>> patch7          94.9            0.7        3.0        19.0
>> patch9          94.9            0.6        2.7        15.0  <-- this patch
>> patch10         94.9            0.9        8.8        18.6
>> "
>>
>> Although I think the patch is helpful because it avoids unnecessary
>> pcp->high decay and thus reduces zone lock contention, there is no
>> visible benchmark score change for the patch.
>>
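
(For anyone reading this in the archive without the patch at hand, the
idea is roughly the following.  This is only an illustrative sketch: the
names, such as pcp_sketch, alloc_count and decay_pcp_high_sketch, are
made up for the example and do not match the real mm/page_alloc.c code.)

struct pcp_sketch {
        int count;              /* pages currently on the per-CPU free list */
        int high;               /* current high watermark, tuned at run time */
        int high_min;           /* lower bound for the high watermark */
        int alloc_count;        /* allocations seen since the last decay */
};

/*
 * Called periodically (think of a per-CPU worker), not from the
 * allocation fast path.
 */
static void decay_pcp_high_sketch(struct pcp_sketch *pcp)
{
        /*
         * If this CPU allocated recently, leave pcp->high alone.
         * Decaying it now would drain pages back to the zone only for
         * them to be pulled out again under the zone lock shortly
         * afterwards, which is the unnecessary contention being avoided.
         */
        if (pcp->alloc_count) {
                pcp->alloc_count = 0;   /* start a new observation window */
                return;
        }

        /* Otherwise shrink pcp->high towards its lower bound. */
        if (pcp->high > pcp->high_min) {
                int new_high = pcp->high - (pcp->high >> 3);

                pcp->high = new_high > pcp->high_min ? new_high : pcp->high_min;
        }
}

In the sketch, alloc_count is the extra PCP field; it would be updated
from the allocation fast path, which is the cost Mel points out below.
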
>
> Thanks!
>
> Given that it's another PCP field with an update in a relatively hot
> path, I would suggest dropping this patch entirely if it does not affect
> performance. It risks becoming a magical heuristic; later, we may forget
> whether it was even worthwhile.

OK.  I hope we can find some workloads that benefit from this patch in
the future.
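
To be concrete about the cost Mel mentions above: with an approach like
the sketch earlier in this mail, the allocation fast path needs roughly
one extra per-CPU store.  Again with made-up names rather than the real
code:

struct pcp_hot_path_sketch {
        int count;              /* pages on the per-CPU free list */
        int alloc_count;        /* new field: allocations since the last decay */
};

/* Bookkeeping when a page is handed out from the per-CPU list. */
static inline void pcp_alloc_bookkeeping_sketch(struct pcp_hot_path_sketch *pcp)
{
        pcp->count--;           /* existing accounting */
        pcp->alloc_count++;     /* the one extra update in the hot path */
}

The decay side would run from a periodic worker, so this single store is
the only per-allocation overhead in the sketch.
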
--
Best Regards,
Huang, Ying