Message-ID: <20240709045750.GA32083@linuxonhyperv3.guj3yctzbm1etfxqx2vob5hsef.xx.internal.cloudapp.net>
Date: Mon, 8 Jul 2024 21:57:50 -0700
From: Saurabh Singh Sengar <ssengar@...ux.microsoft.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: linux-mm@...ck.org, linux-kernel@...r.kernel.org, ssengar@...rosoft.com,
	wei.liu@...nel.org
Subject: Re: [PATCH] mm/vmstat: Defer the refresh_zone_stat_thresholds after
 all CPUs bringup

On Fri, Jul 05, 2024 at 01:59:11PM -0700, Andrew Morton wrote:
> On Fri,  5 Jul 2024 01:48:21 -0700 Saurabh Sengar <ssengar@...ux.microsoft.com> wrote:
> 
> > The refresh_zone_stat_thresholds function has two loops, which is expensive
> > for higher numbers of CPUs and NUMA nodes.
> > 
> > Below is the rough estimation of total iterations done by these loops
> > based on number of NUMA and CPUs.
> > 
> > Total number of iterations: nCPU * 2 * Numa * mCPU
> > Where:
> >  nCPU = total number of CPUs
> >  Numa = total number of NUMA nodes
> >  mCPU = mean number of online CPUs during bring-up (e.g., 512 for 1024 total CPUs)
> > 
> > For the system under test with 16 NUMA nodes and 1024 CPUs, this
> > results in a substantial increase in the number of loop iterations
> > during boot-up when NUMA is enabled:
> > 
> > No NUMA = 1024*2*1*512  =   1,048,576 : Here refresh_zone_stat_thresholds
> > takes around 224 ms total for all the CPUs in the system under test.
> > 16 NUMA = 1024*2*16*512 =  16,777,216 : Here refresh_zone_stat_thresholds
> > takes around 4.5 seconds total for all the CPUs in the system under test.
> 
> Did you measure the overall before-and-after times?  IOW, how much of
> that 4.5s do we reclaim?

This entire gain is accounted for in the overall boot time. Most of the Linux
kernel boot process is sequential and doesn't take advantage of SMP.
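
As a quick cross-check of the iteration estimate quoted above, the numbers can
be reproduced with a trivial standalone calculation (illustrative only, not
kernel code; it just plugs the quoted figures into the quoted formula, with
mCPU taken as nCPU/2 as in the quoted message):

#include <stdio.h>

/*
 * total = nCPU * 2 * Numa * mCPU, with mCPU ~= nCPU / 2:
 * the function is invoked once per CPU bring-up, and each invocation
 * runs two loop nests over (NUMA nodes x currently-online CPUs).
 */
static unsigned long long total_iterations(unsigned long long ncpu,
					   unsigned long long numa)
{
	unsigned long long mcpu = ncpu / 2;	/* mean online CPUs over boot */

	return ncpu * 2ULL * numa * mcpu;
}

int main(void)
{
	printf("1 NUMA node:   %llu\n", total_iterations(1024, 1));	/* 1,048,576 */
	printf("16 NUMA nodes: %llu\n", total_iterations(1024, 16));	/* 16,777,216 */
	return 0;
}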

> 
> > Calling this for each CPU is expensive when there are a large number
> > of CPUs along with multiple NUMA nodes. Fix this by deferring
> > refresh_zone_stat_thresholds so it is called once after all the
> > secondary CPUs are up. Also, register the DYN hooks to keep the
> > existing hotplug functionality intact.
> > 
> 
> Seems risky - we'll now have online CPUs which have uninitialized data,
> yes?  What assurance do we have that this data won't be accessed?

I understand that this data is only accessed by userspace tools, and they can
only access it after late_initcall. Please let me know if there are any other
cases, and I will look to address them.

> 
> Another approach might be to make the code a bit smarter - instead of
> calculating thresholds for the whole world, we make incremental changes
> to the existing thresholds on behalf of the new resource which just
> became available?

I agree, and I have spent a good amount of time understanding the calculation,
but couldn't find any obvious way to code everything it does in an incremental way.

I would be happy to assist if you have any suggestions on how to do this.

- Saurabh
