Message-ID: <Z_fBq2AQjzyg8m5w@localhost.localdomain>
Date: Thu, 10 Apr 2025 15:03:39 +0200
From: Frederic Weisbecker <frederic@...nel.org>
To: Gabriele Monaco <gmonaco@...hat.com>
Cc: Thomas Gleixner <tglx@...utronix.de>, linux-kernel@...r.kernel.org,
	Waiman Long <longman@...hat.com>
Subject: Re: [PATCH] timers: Exclude isolated cpus from timer migration

On Thu, Apr 10, 2025 at 12:38:25PM +0200, Gabriele Monaco wrote:
> On Thu, 2025-04-10 at 10:26 +0200, Thomas Gleixner wrote:
> > How can that happen? There is always at least _ONE_ housekeeping,
> > non-isolated, CPU online, no?
> > 
> 
> In my understanding it shouldn't, but I'm not sure there's anything
> preventing the user from isolating everything via cpuset.
> Anyway that's something no one in their right mind should do, so I guess
> I'd just opt for the cpumask_first (or actually cpumask_any, like before
> just opt for the cpumask_first (or actually cpumask_any, like before
> the change).

With "nohz_full=..." or "isolcpus=nohz,..." there is always at least one
housekeeping CPU. But with isolcpus=[domain] or the cpuset equivalents
(v1 cpuset.sched_load_balance, v2 isolated partition) there is nothing that
prevents all CPUs from being isolated.

Speaking of which, there are two different issues here:

* nohz_full CPUs are handled just like idle CPUs. Once the tick is stopped,
  the global timers are handled by other CPUs (housekeeping). There is always
  one housekeeping CPU that never goes idle.
  One subtle thing though: if the nohz_full CPU fires a tick, for example
  because a local timer needs to be handled, it may also handle some global
  timers along the way. If that turns out to be a problem, it should be easy
  to resolve.

* Domain-isolated CPUs are treated just like any other CPUs. But there is
  not always a housekeeping CPU around, and no guarantee that a non-idle
  CPU is available to take care of global timers.

> > That brings me to the general design decision here. Your changelog
> > explains at great length WHAT the change is doing, but completely fails
> > to explain the consequences and the rationale why this is the right
> > thing to do.
> > 
> > By excluding the isolated CPUs from migration completely, any 'global'
> > timer, which is armed on such a CPU, has to be expired on that isolated
> > CPU. That's fundamentally different from e.g. RCU isolation.
> > 
> > It might be the right thing to do and harmless, but without a proper
> > explanation it's a silent side effect of your changes, which leaves
> > people scratching their heads.
> 
> Mmh, essentially the idea is that global timers should not migrate from
> housekeeping to isolated cores. I assumed the opposite never occurs (as
> global timers /should/ not even start on isolated cores on a properly
> isolated system), but you're right, that's not quite true.

Indeed, they can definitely start there.
I'm tempted to propose offlining/re-onlining isolated CPUs in order to
migrate those timers away. But that only works for timers that are
currently queued.

> 
> Thinking about it now, since global timers /can/ start on isolated
> cores, that makes them quite different from offline ones, and treating
> them the same is probably just not the right thing to do.
> 
> I'm going to have a deeper thought about this whole approach, perhaps
> something simpler just preventing migration in that one direction would
> suffice.

I think we can use your solution, which isolates the CPU from the tmigr
hierarchy, combined with always queueing global timers on non-isolated
targets.

-- 
Frederic Weisbecker
SUSE Labs
