Date:   Wed, 26 Oct 2022 11:10:10 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Cc:     Vishal Chourasia <vishalc@...ux.vnet.ibm.com>,
        linux-kernel@...r.kernel.org, mingo@...hat.com,
        vincent.guittot@...aro.org, vschneid@...hat.com,
        srikar@...ux.vnet.ibm.com, sshegde@...ux.ibm.com,
        linuxppc-dev@...ts.ozlabs.org, ritesh.list@...il.com,
        aneesh.kumar@...ux.ibm.com
Subject: Re: sched/debug: CPU hotplug operation suffers in a large cpu systems

On Wed, Oct 26, 2022 at 09:02:28AM +0200, Greg Kroah-Hartman wrote:
> On Wed, Oct 26, 2022 at 12:07:01PM +0530, Vishal Chourasia wrote:
> > On Tue, Oct 18, 2022 at 01:04:40PM +0200, Greg Kroah-Hartman wrote:
> > 
> > > Why do you need to?  What tools require these debugfs files to be
> > > present?
> > 
> > We are not entirely sure what applications (if any) might be using this interface.
> 
> Then just disable it and see what happens :)

It's mostly a debug interface for developers. A lot of people complained
when I moved things to debugfs, and I told them their program was broken
for a SCHED_DEBUG=n build anyway, but nobody complained about
this particular thing IIRC.

It's mostly affected by things like hotplug and cpusets; you can discover the
resulting topology by looking at these files.
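
Something like the below (an untested sketch; it assumes debugfs is mounted at
/sys/kernel/debug and the cpuN/domainM layout current kernels use) walks those
files and prints the per-CPU domain hierarchy:

/* untested sketch; the hardcoded path is an assumption, adjust as needed */
#include <stdio.h>
#include <string.h>

#define BASE "/sys/kernel/debug/sched/domains"

static int read_line(const char *path, char *buf, int len)
{
	FILE *f = fopen(path, "r");

	if (!f)
		return -1;
	if (!fgets(buf, len, f))
		buf[0] = '\0';
	fclose(f);
	buf[strcspn(buf, "\n")] = '\0';
	return 0;
}

int main(void)
{
	char path[256], name[64];
	int cpu, level;

	for (cpu = 0; cpu < 4096; cpu++) {	/* offline CPUs simply have no entries */
		for (level = 0; ; level++) {
			snprintf(path, sizeof(path),
				 BASE "/cpu%d/domain%d/name", cpu, level);
			if (read_line(path, name, sizeof(name)))
				break;	/* no such domain level for this CPU */
			printf("cpu%d domain%d: %s\n", cpu, level, name);
		}
	}
	return 0;
}

Each domainN directory also holds the handful of per-domain files being
discussed here (flags, min_interval and friends), so the same walk shows
exactly what gets torn down and recreated on hotplug.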

Also, while we generally try to keep the SCHED_DEBUG impact low, it is still
measurable; there are a number of people who run SCHED_DEBUG=n kernels for
that extra little gain.

> > > And if you only have 7-8 files per CPU, that does not seem like a lot of
> > > files overall (14000-16000)?  If you only offline 1 cpu, how is removing
> > > 7 or 8 files a bottleneck?  Do you really offline 1999 cpus for a 2k
> > > system?
> > 
> > It's 7-8 files per domain per CPU, so in a system with approximately 2k CPUs
> > and five domains the total comes to roughly 2000 * 5 * 7-8, i.e. 70k-80k files.
> > And when we offline one CPU, the entire directory is rebuilt, resulting in all
> > of those files being created again.
> 
> Perhaps change the logic to not rebuild the whole thing and instead just
> remove the required files?

Unplugging a single cpu can change the topology and the other cpus might
need to be updated too.

The simplest example would be the SMT case: if you reduce from SMT>1 to SMT1,
the SMT domain goes away (because a single-CPU domain is as pointless as it
sounds), and that affects the CPU that remains.

Tracking all of that is a pain. Simply rebuilding the whole thing is by *far*
the simplest option, and given that this is all debug code, simple is good.
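
For reference, the rebuild boils down to something like this (a much
simplified sketch, not the actual code in kernel/sched/debug.c; locking and
error handling are omitted and the helper names here are made up):

#include <linux/debugfs.h>
#include <linux/cpumask.h>
#include <linux/kernel.h>
#include <linux/sched/topology.h>

static struct dentry *sd_dentry;	/* .../sched/domains */

static void register_sd(struct sched_domain *sd, struct dentry *parent)
{
	/* the 7-8 small per-domain files */
	debugfs_create_ulong("min_interval", 0644, parent, &sd->min_interval);
	debugfs_create_ulong("max_interval", 0644, parent, &sd->max_interval);
	debugfs_create_u32("busy_factor", 0644, parent, &sd->busy_factor);
	debugfs_create_u32("imbalance_pct", 0644, parent, &sd->imbalance_pct);
	/* ... flags, name, etc. ... */
}

void rebuild_sched_domain_debugfs(struct dentry *debugfs_sched)
{
	int cpu;

	/* throw the whole tree away ... */
	debugfs_remove(sd_dentry);
	sd_dentry = debugfs_create_dir("domains", debugfs_sched);

	/* ... and recreate every cpuN/domainM directory from scratch */
	for_each_online_cpu(cpu) {
		struct sched_domain *sd;
		struct dentry *d_cpu, *d_sd;
		char buf[32];
		int i = 0;

		snprintf(buf, sizeof(buf), "cpu%d", cpu);
		d_cpu = debugfs_create_dir(buf, sd_dentry);

		/* for_each_domain() is a kernel-internal iterator (kernel/sched/sched.h) */
		for_each_domain(cpu, sd) {
			snprintf(buf, sizeof(buf), "domain%d", i++);
			d_sd = debugfs_create_dir(buf, d_cpu);
			register_sd(sd, d_sd);
		}
	}
}

So a single hotplug event ends up deleting and recreating all of those tens
of thousands of files, which is where the time goes.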

> Or as I mentioned before, you can move debugfs to use kernfs, which should
> resolve most of these issues automatically.  Why not take the time to do
> that, which would solve the problem no matter what gets added in other
> subsystems in the future?

This sounds like a good approach.
