Message-ID: <ZulbS1MvZVVYe-YO@localhost.localdomain>
Date: Tue, 17 Sep 2024 12:34:51 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Michal Hocko <mhocko@...e.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Kees Cook <kees@...nel.org>, Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Vlastimil Babka <vbabka@...e.cz>, linux-mm@...ck.org,
	"Paul E. McKenney" <paulmck@...nel.org>,
	Neeraj Upadhyay <neeraj.upadhyay@...nel.org>,
	Joel Fernandes <joel@...lfernandes.org>,
	Boqun Feng <boqun.feng@...il.com>,
	Zqiang <qiang.zhang1211@...il.com>, rcu@...r.kernel.org
Subject: Re: [PATCH 12/19] kthread: Default affine kthread to its preferred
 NUMA node

On Tue, Sep 17, 2024 at 08:26:49AM +0200, Michal Hocko wrote:
> On Tue 17-09-24 00:49:16, Frederic Weisbecker wrote:
> > Kthreads attached to a preferred NUMA node for their task structure
> > allocation can also be assumed to run preferably within that same node.
> > 
> > A more precise affinity is usually set by calling
> > kthread_create_on_cpu() or kthread_bind[_mask]() before the first wakeup.
> > 
> > For the others, a default affinity to the node is desired and sometimes
> > implemented with more or less success when it comes to dealing with
> > hotplug events and nohz_full / CPU Isolation interactions:
> > 
> > - kcompactd is affine to its node and handles hotplug but not CPU Isolation
> > - kswapd is affine to its node and ignores hotplug and CPU Isolation
> > - A bunch of drivers create their kthreads on a specific node and
> >   don't take care of affining them any further.
> > 
> > Handle that default node affinity preference at the generic level
> > instead, provided the kthread is created on an actual node and no
> > specific affinity, such as a given CPU or a custom cpumask to bind to,
> > is applied before its first wake-up.
> 
> Makes sense.
> 
> > This generic handling is aware of CPU hotplug events and CPU isolation
> > such that:
> > 
> > * When a housekeeping CPU that is part of the node of a given kthread
> >   goes up, it is added to the kthread's applied affinity set (and the
> >   default last-resort online housekeeping set is possibly removed from
> >   that set).
> > 
> > * When a housekeeping CPU that was part of the node of a kthread goes
> >   down, it is removed from the kthread's applied affinity. As a last
> >   resort, the kthread is affined to all online housekeeping CPUs.
> 
> But I am not really sure about this part. Sure, it makes sense to set the
> affinity to exclude isolated CPUs, but why do we care about hotplug
> events at all? Let's say we offline all cpus from a given node (or
> all but the isolated cpus are offline - is this even a
> realistic/reasonable usecase?). Wouldn't the scheduler ignore the kthread's
> affinity in such a case? In other words, how is that different from
> tasksetting a userspace task to a cpu that goes offline? We still
> allow such a task to run, right? We just do not care about affinity
> anymore.

Suppose we have this artificial online set:

NODE 0 -> CPU 0
NODE 1 -> CPU 1
NODE 2 -> CPU 2

And we have nohz_full=1,2

So there is kswapd/2 that is affine to NODE 2 and thus CPU 2 for now.

Now CPU 2 goes offline. The scheduler migrates all tasks off it.
select_fallback_rq() for kswapd/2 doesn't find a suitable CPU to run on,
so it affines kswapd/2 to all remaining online CPUs (CPU 0 and CPU 1);
see the "No more Mr. Nice Guy" comment.

But CPU 1 is nohz_full, so kswapd/2 could end up running on that isolated
CPU, unless we handle things beforehand, like this patchset does.
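
To make this concrete, here is a minimal sketch of the kind of default
affinity I have in mind. The helper name and the choice of HK_TYPE_KTHREAD
are only illustrative here, not necessarily what the patchset ends up
using:

#include <linux/cpumask.h>
#include <linux/gfp.h>
#include <linux/sched.h>
#include <linux/sched/isolation.h>
#include <linux/topology.h>

/*
 * Illustrative helper (not the actual patch): apply a default affinity
 * to a kthread that prefers @node. Start from the node's CPUs restricted
 * to online housekeeping CPUs; only if that ends up empty (node fully
 * offline or isolated), fall back to all online housekeeping CPUs instead
 * of letting select_fallback_rq() spill onto isolated CPUs.
 */
static void kthread_apply_node_affinity(struct task_struct *k, int node)
{
	cpumask_var_t mask;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return;

	cpumask_and(mask, cpumask_of_node(node),
		    housekeeping_cpumask(HK_TYPE_KTHREAD));
	cpumask_and(mask, mask, cpu_online_mask);

	/* Last resort: all online housekeeping CPUs */
	if (cpumask_empty(mask))
		cpumask_and(mask, housekeeping_cpumask(HK_TYPE_KTHREAD),
			    cpu_online_mask);

	set_cpus_allowed_ptr(k, mask);
	free_cpumask_var(mask);
}

With the artificial topology above and CPU 2 offline, kswapd/2 would then
end up affined to CPU 0 (the only online housekeeping CPU) rather than to
CPU 0 and CPU 1.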

And note that adding isolcpus=domain,1,2 or setting 1,2 as an isolated
cpuset partition (as most isolated workloads should do) doesn't help
here. And I'm not sure this last-resort scheduler code is the right place
to handle isolated cpumasks.

So it looks necessary, unless I am missing something else?

And that is just for re-affining on CPU down. CPU up needs the mirror
treatment, and it must also handle new CPUs freshly added to a node.
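
A rough sketch of that hotplug side, with the bookkeeping structure, list
and callback names made up for illustration (and glossing over the exact
ordering of these callbacks vs. the online mask update):

#include <linux/cpuhotplug.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/mutex.h>
#include <linux/sched.h>

/* Hypothetical bookkeeping: one entry per kthread with a node preference */
struct kthread_node_pref {
	struct list_head	list;
	struct task_struct	*task;
	int			node;
};

static LIST_HEAD(kthread_node_prefs);
static DEFINE_MUTEX(kthread_node_prefs_lock);

/*
 * On any CPU up/down event, recompute the applied affinity of every
 * node-preferring kthread with a helper like the one sketched earlier.
 */
static int kthreads_cpu_update(unsigned int cpu)
{
	struct kthread_node_pref *pref;

	mutex_lock(&kthread_node_prefs_lock);
	list_for_each_entry(pref, &kthread_node_prefs, list)
		kthread_apply_node_affinity(pref->task, pref->node);
	mutex_unlock(&kthread_node_prefs_lock);

	return 0;
}

static int __init kthreads_hotplug_init(void)
{
	int ret;

	/* Reuse the same callback for both directions: just recompute */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "kthread:node_affinity",
				kthreads_cpu_update, kthreads_cpu_update);
	return ret < 0 ? ret : 0;
}
early_initcall(kthreads_hotplug_init);

Recomputing the whole mask on every event also keeps the "new CPU freshly
added to a node" case trivial, since there is no delta to track.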

Thanks.

> -- 
> Michal Hocko
> SUSE Labs
> 
