Message-ID: <cover.1683033105.git.raghavendra.kt@amd.com>
Date:   Wed, 3 May 2023 07:35:47 +0530
From:   Raghavendra K T <raghavendra.kt@....com>
To:     <linux-kernel@...r.kernel.org>, <linux-mm@...ck.org>
CC:     Ingo Molnar <mingo@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        "Mel Gorman" <mgorman@...e.de>,
        Andrew Morton <akpm@...ux-foundation.org>,
        "David Hildenbrand" <david@...hat.com>, <rppt@...nel.org>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Bharata B Rao <bharata@....com>,
        Raghavendra K T <raghavendra.kt@....com>
Subject: [RFC PATCH V1 0/2] sched/numa: Disjoint set vma scan improvements

With the NUMA scan enhancements [1], only the threads which had previously
accessed a VMA are allowed to scan it.
While this has significantly reduced system time overhead, there are corner
cases which genuinely need some relaxation, e.g., the concern raised by
PeterZ that unfairness amongst the threads belonging to disjoint sets of VMAs
can potentially amplify the side effects of VMA regions belonging to some of
the tasks being left unscanned.

Currently that is handled by unconditionally allowing the first two scans at
the mm level (mm->numa_scan_seq).
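
For reference, a minimal sketch of that gate (names are approximate, not the
exact code from [1]; vma_accessed_by_current_task() is a hypothetical helper
standing in for the access-PID filter):

/*
 * Sketch only: allow the first two mm-wide scan passes unconditionally
 * so every VMA gets PROT_NONE faults introduced at least once, then
 * restrict scanning to threads recorded as having accessed this VMA.
 */
static bool vma_is_accessed(struct vm_area_struct *vma)
{
	if (READ_ONCE(current->mm->numa_scan_seq) < 2)
		return true;

	/* hypothetical helper for the per-VMA access-PID filter of [1] */
	return vma_accessed_by_current_task(vma);
}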

One of the tests that exercises a similar side effect is numa01_THREAD_ALLOC,
where allocation is done by the main thread and the memory is divided into
24MB chunks that are continuously bzeroed (rough shape sketched below).

(This is run by default by the LKP tests, while numa01, which is run by default
in mmtests, has each thread operate on the full 3GB region.)
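
A rough, illustrative shape of that workload (not the actual benchmark source;
the thread count, loop count and allocation call are assumptions):

/*
 * Illustrative only: the main thread allocates the whole region, and
 * each worker thread repeatedly zeroes its own private 24MB chunk.
 */
#include <pthread.h>
#include <stdlib.h>
#include <string.h>

#define REGION_SIZE	(3UL << 30)		/* 3GB region, as in numa01 */
#define CHUNK_SIZE	(24UL << 20)		/* 24MB per-thread chunk */
#define NR_THREADS	(REGION_SIZE / CHUNK_SIZE)
#define NR_LOOPS	1000			/* illustrative iteration count */

static char *region;

static void *worker(void *arg)
{
	char *chunk = region + (long)arg * CHUNK_SIZE;

	for (int i = 0; i < NR_LOOPS; i++)
		memset(chunk, 0, CHUNK_SIZE);	/* bzero the private chunk */
	return NULL;
}

int main(void)
{
	pthread_t tids[NR_THREADS];

	/* Allocation happens in the main thread only. */
	region = malloc(REGION_SIZE);
	if (!region)
		return 1;

	for (long i = 0; i < NR_THREADS; i++)
		pthread_create(&tids[i], NULL, worker, (void *)i);
	for (long i = 0; i < NR_THREADS; i++)
		pthread_join(tids[i], NULL);

	free(region);
	return 0;
}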

So, to address this issue, the proposal here is (sketched after the list below):
1) Have a per-VMA scan counter that gets incremented for every successful scan
   (each of which potentially scans 256MB, or sysctl_scan_size).
2) Do unconditional scans for the first few times (to be precise, half of the
   window that would normally be calculated for scanning the VMA).
3) Reset the counter when the whole mm is scanned (this requires remembering
   mm->numa_scan_seq at the VMA level).
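
A minimal sketch of the idea (field and helper names here are illustrative and
not necessarily those used in the patches; vma_scan_window() is a hypothetical
helper returning the number of scans normally needed to cover the VMA):

/*
 * Sketch only: per-VMA scan counter plus a per-VMA copy of
 * mm->numa_scan_seq, layered on top of the access-PID filter from [1].
 */
static bool vma_scan_allowed(struct mm_struct *mm, struct vm_area_struct *vma)
{
	unsigned int mm_seq = READ_ONCE(mm->numa_scan_seq);

	/* 3) The whole mm has been scanned since we last looked: reset. */
	if (vma->numab_state->vma_scan_seq != mm_seq) {
		vma->numab_state->vma_scan_seq = mm_seq;
		vma->numab_state->vma_scan_count = 0;
	}

	/*
	 * 1) + 2) Allow the first few scans of this VMA unconditionally
	 * (half of the window normally needed to cover it), counting each
	 * allowed scan, so disjoint-set VMAs are not left unscanned even
	 * if the current task never accessed them.
	 */
	if (vma->numab_state->vma_scan_count < vma_scan_window(vma) / 2) {
		vma->numab_state->vma_scan_count++;
		return true;
	}

	/* Otherwise fall back to the access-PID based filter from [1]. */
	return vma_is_accessed(vma);
}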

With this patch I am seeing a good improvement in the numa01_THREAD_ALLOC case,
but please note that [1] drastically decreased system time when the benchmarks
were run, and this patch adds back some of that system time.

Your comments/ideas are welcome.

Results:
SUT: Milan with 2 NUMA nodes, 256 CPUs

Manual run of numa01_THREAD_ALLOC
Base: 11-apr-next
                        w/numascan      w/o numascan    numascan+patch

real                    1m33.579s       1m2.042s        1m11.738s
user                    280m46.032s     213m38.647s     231m40.226s
sys                     0m18.061s       6m54.963s       4m43.174s

numa_hit                5813057         6166060         6146064
numa_local              5812546         6165471         6145573
numa_other              511             589             491
numa_pte_updates        0               2098276         1248398
numa_hint_faults        10              1768382         982034
numa_hint_faults_local  10              981824          625424
numa_pages_migrated     0               786558          356604

Below are the mmtests kernbench and autonumabench performance results.

kernbench
===========
Base 11-apr-next
			w/numascan      	w/o numascan    	numascan+patch

Amean     user-256    23873.01 (   0.00%)    23688.21 *   0.77%*    23948.47 *  -0.32%*
Amean     syst-256     4990.73 (   0.00%)     5113.32 *  -2.46%*     4800.86 *   3.80%*
Amean     elsp-256      150.67 (   0.00%)      150.52 *   0.10%*      150.63 *   0.03%*

Duration User       71628.53    71074.04    71855.31
Duration System     14985.61    15354.33    14416.72
Duration Elapsed      472.69      473.24      473.72

Ops NUMA alloc hit                1739476674.00  1739443601.00  1739591558.00
Ops NUMA alloc local              1739534231.00  1739519795.00  1739647666.00
Ops NUMA base-page range updates      485073.00      673766.00      733129.00
Ops NUMA PTE updates                  485073.00      673766.00      733129.00
Ops NUMA hint faults                  107776.00      181920.00      186250.00
Ops NUMA hint local faults %            1789.00        6165.00       10889.00
Ops NUMA hint local percent                1.66           3.39           5.85
Ops NUMA pages migrated               105987.00      175755.00      175356.00
Ops AutoNUMA cost                        544.29         917.66         939.71

autonumabench
===============
					 w/numascan      	w/o numascan    	numascan+patch
Amean     syst-NUMA01                   33.10 (   0.00%)      571.68 *-1627.21%*      219.51 *-563.21%*
Amean     syst-NUMA01_THREADLOCAL        0.23 (   0.00%)        0.22 *   4.38%*        0.22 *   5.00%*
Amean     syst-NUMA02                    0.81 (   0.00%)        0.75 *   7.76%*        0.76 *   6.00%*
Amean     syst-NUMA02_SMT                0.68 (   0.00%)        0.73 *  -7.79%*        0.65 *   3.58%*
Amean     elsp-NUMA01                  299.71 (   0.00%)      333.24 * -11.19%*      329.60 *  -9.97%*
Amean     elsp-NUMA01_THREADLOCAL        1.06 (   0.00%)        1.06 *   0.00%*        1.06 *  -0.68%*
Amean     elsp-NUMA02                    3.29 (   0.00%)        3.23 *   1.95%*        3.18 *   3.51%*
Amean     elsp-NUMA02_SMT                3.75 (   0.00%)        3.38 *   9.86%*        3.79 *  -0.95%*

Duration User      321693.29   437210.09   376657.80
Duration System       244.25     4014.23     1548.57
Duration Elapsed     2165.83     2395.53     2373.46


Ops NUMA alloc hit                  49608099.00    62272320.00    55815229.00
Ops NUMA alloc local                49585747.00    62236996.00    55812601.00
Ops NUMA base-page range updates        1571.00   202868357.00    96006221.00
Ops NUMA PTE updates                    1571.00   202868357.00    96006221.00
Ops NUMA hint faults                    1203.00   204902318.00    97246909.00
Ops NUMA hint local faults %             981.00   187233695.00    81136933.00
Ops NUMA hint local percent               81.55          91.38          83.43
Ops NUMA pages migrated                  222.00    10011134.00     6060787.00
Ops AutoNUMA cost                          6.03     1026121.88      487021.74

Notes: Implementation alternatives considered/tried
1) Limit the disjoint-set VMA scans to 4 (hardcoded) = 1GB per whole mm scan.
2) Change the current PID reset window from 4 * sysctl_scan_delay to
8 * sysctl_scan_delay (to ensure some random overlap over time in scanning).


links:
[1] https://lore.kernel.org/lkml/cover.1677672277.git.raghavendra.kt@amd.com/T/#t

Raghavendra K T (2):
  sched/numa: Introduce per vma scan counter
  sched/numa: Introduce per vma numa_scan_seq

 include/linux/mm_types.h |  2 ++
 kernel/sched/fair.c      | 44 +++++++++++++++++++++++++++++++++++++---
 2 files changed, 43 insertions(+), 3 deletions(-)

-- 
2.34.1
