Date:   Thu, 21 Sep 2023 21:32:35 +0800
From:   "Huang, Ying" <ying.huang@...el.com>
To:     Andrew Morton <akpm@...ux-foundation.org>
Cc:     linux-mm@...ck.org, linux-kernel@...r.kernel.org,
        Arjan Van De Ven <arjan@...ux.intel.com>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Vlastimil Babka <vbabka@...e.cz>,
        David Hildenbrand <david@...hat.com>,
        Johannes Weiner <jweiner@...hat.com>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Michal Hocko <mhocko@...e.com>,
        Pavel Tatashin <pasha.tatashin@...een.com>,
        Matthew Wilcox <willy@...radead.org>,
        Christoph Lameter <cl@...ux.com>
Subject: Re: [PATCH 00/10] mm: PCP high auto-tuning

Hi, Andrew,

Andrew Morton <akpm@...ux-foundation.org> writes:

> On Wed, 20 Sep 2023 14:18:46 +0800 Huang Ying <ying.huang@...el.com> wrote:
>
>> The page allocation performance requirements of different workloads
>> are often different.  So, we need to tune the PCP (Per-CPU Pageset)
>> high on each CPU automatically to optimize the page allocation
>> performance.
>
> Some of the performance changes here are downright scary.
>
> I've never been very sure that percpu pages was very beneficial (and
> hey, I invented the thing back in the Mesozoic era).  But these numbers
> make me think it's very important and we should have been paying more
> attention.
>
>> The list of patches in series is as follows,
>> 
>>  1 mm, pcp: avoid to drain PCP when process exit
>>  2 cacheinfo: calculate per-CPU data cache size
>>  3 mm, pcp: reduce lock contention for draining high-order pages
>>  4 mm: restrict the pcp batch scale factor to avoid too long latency
>>  5 mm, page_alloc: scale the number of pages that are batch allocated
>>  6 mm: add framework for PCP high auto-tuning
>>  7 mm: tune PCP high automatically
>>  8 mm, pcp: decrease PCP high if free pages < high watermark
>>  9 mm, pcp: avoid to reduce PCP high unnecessarily
>> 10 mm, pcp: reduce detecting time of consecutive high order page freeing
>> 
>> Patch 1/2/3 optimize the PCP draining for consecutive high-order pages
>> freeing.
>> 
>> Patch 4/5 optimize batch freeing and allocating.
>> 
>> Patch 6/7/8/9 implement and optimize a PCP high auto-tuning method.
>> 
>> Patch 10 optimize the PCP draining for consecutive high order page
>> freeing based on PCP high auto-tuning.
>> 
>> The test results for patches with performance impact are as follows,
>> 
>> kbuild
>> ======
>> 
>> On a 2-socket Intel server with 224 logical CPU, we tested kbuild on
>> one socket with `make -j 112`.
>> 
>> 	build time	zone lock%	free_high	alloc_zone
>> 	----------	----------	---------	----------
>> base	     100.0	      43.6          100.0            100.0
>> patch1	      96.6	      40.3	     49.2	      95.2
>> patch3	      96.4	      40.5	     11.3	      95.1
>> patch5	      96.1	      37.9	     13.3	      96.8
>> patch7	      86.4	       9.8	      6.2	      22.0
>> patch9	      85.9	       9.4	      4.8	      16.3
>> patch10	      87.7	      12.6	     29.0	      32.3
>
> You're seriously saying that kbuild got 12% faster?
>
> I see that [07/10] (autotuning) alone sped up kbuild by 10%?

Thank you very much for questioning this!

I double-checked my test results and configuration and found that I had
used an uncommon configuration.  So the description of the test should
have been:

On a 2-socket Intel server with 224 logical CPUs, we tested kbuild with
`numactl -m 1 -- make -j 112`.

This makes processes running on socket 0 use the normal zone of
socket 1.  The remote accesses to zone->lock cause heavy lock
contention.

I apologize for any confusion caused by the above test results.

If we test kbuild with `make -j 224` on the machine, the test results
become:

	build time	     lock%	free_high	alloc_zone
	----------	----------	---------	----------
base	     100.0	      16.8          100.0            100.0
patch5	      99.2	      13.9	      9.5	      97.0
patch7	      98.5	       5.4	      4.8	      19.2

Although the lock contention cycles%, the PCP draining for high-order
freeing, and the allocations from the zone are reduced greatly, the
build time barely changes.

We also tested kbuild in the following way: we created 8 cgroups and
ran `make -j 28` in each cgroup.  That is, the total parallelism is the
same, but the LRU lock contention can be eliminated via cgroups, and
the single-process link stage takes a smaller proportion relative to
the parallel compiling stage.  This isn't common for personal usage,
but it can be used by something like the 0Day kbuild service.  The test
result is as follows:

	build time	     lock%	free_high	alloc_zone
	----------	----------	---------	----------
base	     100.0	      14.2          100.0            100.0
patch5	      98.5	       8.5	      8.1	      97.1
patch7	      95.0	       0.7	      3.0	      19.0

The lock contention cycles% is reduced to nearly 0, because the LRU
lock contention is eliminated too, and the build time reduction becomes
visible.  We will continue to run a full test with this configuration.
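
For reference, a rough sketch of the cgroup setup described above
(cgroup v2 and root assumed; the cgroup names and the per-cgroup kernel
tree paths tree0..tree7 are hypothetical):

#!/usr/bin/env python3
# Rough sketch (needs root, cgroup v2 assumed) of the setup described
# above: 8 cgroups, each running `make -j 28`.  The cgroup names and
# per-cgroup kernel tree paths (tree0 .. tree7) are hypothetical.
import os
import subprocess

CGROUP_ROOT = "/sys/fs/cgroup"
NCGROUPS = 8
JOBS = 28

def run_in_cgroup(name, tree):
    cg = os.path.join(CGROUP_ROOT, name)
    os.makedirs(cg, exist_ok=True)

    def enter_cgroup():
        # Runs in the child between fork and exec: move it into the cgroup.
        with open(os.path.join(cg, "cgroup.procs"), "w") as f:
            f.write(str(os.getpid()))

    return subprocess.Popen(["make", "-j%d" % JOBS], cwd=tree,
                            preexec_fn=enter_cgroup)

procs = [run_in_cgroup("kbuild%d" % i, "tree%d" % i)
         for i in range(NCGROUPS)]
for p in procs:
    p.wait()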

> Other thoughts:
>
> - What if any facilities are provided to permit users/developers to
>   monitor the operation of the autotuning algorithm?

/proc/zoneinfo can be used to observe PCP high and count for each CPU.
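
For example, a minimal sketch that prints the per-CPU PCP
count/high/batch values; the field layout is assumed from
zoneinfo_show_print() in mm/vmstat.c and may differ between kernel
versions:

#!/usr/bin/env python3
# Minimal sketch: print the per-CPU PCP count, high and batch values
# from /proc/zoneinfo.  The field layout is assumed from
# zoneinfo_show_print() in mm/vmstat.c and may vary between kernels.
import re

zone = None
cpu = None
fields = {}

with open("/proc/zoneinfo") as f:
    for line in f:
        m = re.match(r"Node (\d+), zone\s+(\S+)", line)
        if m:
            zone = "node%s/%s" % (m.group(1), m.group(2))
            continue
        m = re.match(r"\s+cpu:\s*(\d+)", line)
        if m:
            cpu = int(m.group(1))
            fields = {}
            continue
        m = re.match(r"\s+(count|high|batch):\s*(\d+)", line)
        if m and cpu is not None:
            fields[m.group(1)] = int(m.group(2))
            if len(fields) == 3:
                print("%s cpu%d: count=%d high=%d batch=%d"
                      % (zone, cpu, fields["count"], fields["high"],
                         fields["batch"]))
                cpu = None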

> - I'm not seeing any Documentation/ updates.  Surely there are things
>   we can tell users?

I will think about that.

> - This:
>
>   : It's possible that PCP high auto-tuning doesn't work well for some
>   : workloads.  So, when PCP high is tuned by hand via the sysctl knob,
>   : the auto-tuning will be disabled.  The PCP high set by hand will be
>   : used instead.
>
>   Is it a bit hacky to disable autotuning when the user alters
>   pcp-high?  Would it be cleaner to have a separate on/off knob for
>   autotuning?

This was suggested by Mel Gorman,

https://lore.kernel.org/linux-mm/20230714140710.5xbesq6xguhcbyvi@techsingularity.net/

"
I'm not opposed to having an adaptive pcp->high in concept. I think it would
be best to disable adaptive tuning if percpu_pagelist_high_fraction is set
though. I expect that users of that tunable are rare and that if it *is*
used that there is a very good reason for it.
"

Do you think that this is reasonable?

>   And how is the user to determine that "PCP high auto-tuning doesn't work
>   well" for their workload?

One way is to check the perf profiling results.  If there is still
heavy zone lock contention, the PCP high auto-tuning doesn't work well
enough to eliminate it, and users may try to tune PCP high by hand.
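
For example, a minimal sketch of tuning it by hand via the
vm.percpu_pagelist_high_fraction sysctl mentioned in Mel's reply above
(needs root; the value 8 is only illustrative, not a recommendation):

#!/usr/bin/env python3
# Minimal sketch (needs root): set vm.percpu_pagelist_high_fraction by
# hand.  Per the discussion above, a non-zero value would disable the
# auto-tuning with this series.  The value 8 below is only illustrative.
KNOB = "/proc/sys/vm/percpu_pagelist_high_fraction"

def set_pcp_high_fraction(fraction):
    with open(KNOB, "w") as f:
        f.write("%d\n" % fraction)

def get_pcp_high_fraction():
    with open(KNOB) as f:
        return int(f.read())

if __name__ == "__main__":
    set_pcp_high_fraction(8)
    print("percpu_pagelist_high_fraction =", get_pcp_high_fraction())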

--
Best Regards,
Huang, Ying
