Date:   Wed, 21 Jun 2017 15:23:25 +0200
From:   Frederic Weisbecker <fweisbec@...il.com>
To:     Mike Galbraith <efault@....de>
Cc:     Rik van Riel <riel@...hat.com>,
        Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...nel.org>
Subject: Re: [PATCH 3/3] sched: Spare idle load balancing on nohz_full CPUs

On Tue, Jun 20, 2017 at 09:06:48PM +0200, Mike Galbraith wrote:
> On Tue, 2017-06-20 at 13:42 -0400, Rik van Riel wrote:
> > On Mon, 2017-06-19 at 04:12 +0200, Frederic Weisbecker wrote:
> > > Although idle load balancing obviously only concerns idle CPUs, it
> > > can be a disturbance on a busy nohz_full CPU. Indeed a CPU can only
> > > get rid of an idle load balancing duty once a tick fires while it
> > > runs a task, and this can take a while on a nohz_full CPU.
> > > 
> > > We could fix that and escape the idle load balancing duty from the
> > > very idle exit path, but that would bring unnecessary overhead.
> > > Let's just not bother and leave that job to the housekeeping CPUs
> > > (those outside the nohz_full range). The nohz_full CPUs simply
> > > don't want any disturbance.
> > > 
> > > Signed-off-by: Frederic Weisbecker <fweisbec@...il.com>
> > > Cc: Thomas Gleixner <tglx@...utronix.de>
> > > Cc: Ingo Molnar <mingo@...nel.org>
> > > Cc: Rik van Riel <riel@...hat.com>
> > > Cc: Peter Zijlstra <peterz@...radead.org>
> > > ---
> > >  kernel/sched/fair.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > > 
> > > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > index d711093..cfca960 100644
> > > --- a/kernel/sched/fair.c
> > > +++ b/kernel/sched/fair.c
> > > @@ -8659,6 +8659,10 @@ void nohz_balance_enter_idle(int cpu)
> > >  	if (!cpu_active(cpu))
> > >  		return;
> > >  
> > > +	/* Spare idle load balancing on CPUs that don't want to be disturbed */
> > > +	if (!is_housekeeping_cpu(cpu))
> > > +		return;
> > > +
> > >  	if (test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))
> > >  		return;
> > 
> > I am not entirely convinced on this one.
> > 
> > Doesn't the on_null_domain(cpu_rq(cpu)) test
> > a few lines down take care of this already?
> > 
> > Do we want nohz_full to always automatically
> > imply that no idle balancing will happen, like
> > on isolated CPUs?
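For reference, the test Rik mentions sits a few lines below the hunk. A
rough sketch of nohz_balance_enter_idle() as it looked in kernel/sched/fair.c
of that era, with the proposed hunk applied (wording and placement
approximate, not a verbatim quote):

	void nohz_balance_enter_idle(int cpu)
	{
		if (!cpu_active(cpu))
			return;

		/* Spare idle load balancing on CPUs that don't want to be disturbed */
		if (!is_housekeeping_cpu(cpu))
			return;

		if (test_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu)))
			return;

		/*
		 * If we're a completely isolated CPU, we don't play.
		 */
		if (on_null_domain(cpu_rq(cpu)))
			return;

		cpumask_set_cpu(cpu, nohz.idle_cpus_mask);
		atomic_inc(&nohz.nr_cpus);
		set_bit(NOHZ_TICK_STOPPED, nohz_flags(cpu));
	}

The two tests are not equivalent: on_null_domain() only fires for CPUs
detached from every sched domain (e.g. via isolcpus), whereas
is_housekeeping_cpu() would also exclude nohz_full CPUs that still
belong to a sched domain.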
> 
> IMO, nohz_full capable CPUs that are not isolated should automatically
> become housekeepers, and nohz_full _active_ upon becoming isolated.
> When used as a housekeeper, you still pay a price for having the
> nohz_full capability available, but it doesn't have to be as high.

That's right. So in the end, checking for a housekeeping CPU on the idle
load balancing path is something we want, but not with the current
definition of housekeepers, which is simply every CPU outside the
nohz_full range.
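For context, the definition in question looks roughly like this (a sketch
after include/linux/tick.h of that era, not a verbatim quote), where
housekeeping_mask is just the complement of the nohz_full mask:

	static inline bool is_housekeeping_cpu(int cpu)
	{
	#ifdef CONFIG_NO_HZ_FULL
		if (tick_nohz_full_enabled())
			return cpumask_test_cpu(cpu, housekeeping_mask);
	#endif
		return true;
	}

So with nohz_full enabled, "housekeeper" and "not nohz_full" are the same
thing, which is exactly the coupling that needs to go away.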

I should set this patch aside until I manage to decouple housekeeping from
nohz_full.
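One shape such a decoupling might take, purely as a hypothetical sketch
(the flag names and helper below are invented for illustration; nothing
like this exists in the tree at this point): per-feature housekeeping
flags, so a CPU can opt out of idle load balancing without opting out of
everything:

	/* hypothetical: which isolation features are enabled */
	static unsigned int housekeeping_flags;

	enum hk_flags {
		HK_FLAG_SCHED	= (1 << 0),	/* idle load balancing */
		HK_FLAG_TIMER	= (1 << 1),	/* unbound timers */
		HK_FLAG_MISC	= (1 << 2),	/* everything else */
	};

	static inline bool housekeeping_cpu(int cpu, enum hk_flags flag)
	{
		if (!(housekeeping_flags & flag))
			return true;	/* feature isn't isolated anywhere */
		return cpumask_test_cpu(cpu, housekeeping_mask);
	}

The check in nohz_balance_enter_idle() would then become
"if (!housekeeping_cpu(cpu, HK_FLAG_SCHED)) return;" and stay correct
even once housekeeping is no longer defined by nohz_full alone.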

> In my kernels, I use cpusets to turn nohz on/off set-wise, so CPUs can
> be ticking, dyntick, nohz_full or housekeeper, with RT load balancing
> and cpupri on/off as well if you want to assume full responsibility.
> It's a tad (from a box of XXL tads) ugly, but more flexible.

Indeed I think that, in the end, driving the isolation "intensity" through
cpusets is a good idea. It's going to be quite a headache in the case of
nohz_full though, if we want to avoid races against the tick dependency
and cputime accounting.

But at least I can start to move the various other isolation features
to cpusets.
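For reference, the isolation knobs in mainline at this point are
boot-time only; shielding CPUs 1-7 on an 8-CPU box (illustrative values)
means something like:

	isolcpus=1-7 nohz_full=1-7 rcu_nocbs=1-7

on the kernel command line. Moving these behind cpusets would make the
isolated set changeable at runtime, which is exactly where the races
against the tick dependency and cputime accounting come from.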

Thanks.
