Message-ID: <20110928043427.GI4357@linux.vnet.ibm.com>
Date:	Wed, 28 Sep 2011 10:04:27 +0530
From:	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>
To:	Venki Pallipadi <venki@...gle.com>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Paul Turner <pjt@...gle.com>, Ingo Molnar <mingo@...e.hu>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH v1] sched: fix nohz idle load balancer issues

* Venki Pallipadi <venki@...gle.com> [2011-09-27 12:53:21]:

> Some comments:
> 
> Another potential change here is to
> - either reverse the order of rebalance_domains() and
> nohz_idle_balance() in run_rebalance_domains()

I thought of that, but then realized that it won't influence our
"idle_at_tick" check in nohz_idle_balance() (see the sketch below). Did you
have some other benefit in mind for that change?
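
For reference, the softirq handler is roughly this (paraphrased sketch, not
the exact kernel/sched_fair.c code); 'idle' is sampled from idle_at_tick once,
before either call, so swapping the two calls wouldn't change what
nohz_idle_balance() gets to see:

static void run_rebalance_domains(struct softirq_action *h)
{
	int this_cpu = smp_processor_id();
	struct rq *this_rq = cpu_rq(this_cpu);
	enum cpu_idle_type idle = this_rq->idle_at_tick ?
					CPU_IDLE : CPU_NOT_IDLE;

	/* regular load balancing for this CPU's own domains */
	rebalance_domains(this_cpu, idle);

	/*
	 * If we were kicked as the ilb CPU, balance on behalf of the
	 * other tickless idle CPUs as well.  'idle' was computed above,
	 * so reordering the two calls leaves it unchanged.
	 */
	nohz_idle_balance(this_cpu, idle);
}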

> - or to kick another idle CPU in case of need_resched() in nohz_idle_balance.
> This should help with idle balance of tickless CPUs when ilb CPU gets
> a new task through load balance and hence aborts ilb.

Yes, good point. I will add that in the next version - something along the
lines of the sketch below.
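
Inside the for_each_cpu() walk in nohz_idle_balance(), roughly (just a
sketch, assuming we reuse the same nohz_balance_kick flag plus resched-IPI
mechanism that nohz_balancer_kick() already uses to kick the ilb_cpu):

		if (need_resched()) {
			int new_ilb;

			/*
			 * We picked up work of our own; hand the rest of
			 * the nohz balancing to another idle CPU instead
			 * of dropping it on the floor.
			 */
			new_ilb = cpumask_any_but(nohz.idle_cpus_mask, this_cpu);
			if (new_ilb < nr_cpu_ids) {
				cpu_rq(new_ilb)->nohz_balance_kick = 1;
				smp_mb();
				smp_send_reschedule(new_ilb);
			}
			break;
		}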

> > - The patch introduces a 'nohz.next_balance_lock' spinlock which is used
> >  to update nohz.next_balance, so that it stays as min(rq->next_balance).
> >  This fixes issue #2. I don't like a global spinlock so much, but don't
> >  see easy way out. Besides, this lock is taken in context of idle cpu.
> >
> 
> The problem I see with this is that there is no way to reset
> next_balance when a previously idle CPU goes busy. This will probably
> result in more frequent ilb than needed, with a potential power and
> performance (due to SMT or freq timer interrupts) impact.

That already seems to be an issue with the existing code. One possibility is
to rescan the idle cpus looking for the new min rq->next_balance (unless we
want to go for something more sophisticated, like keeping the rq->next_balance
values sorted in an rb-tree). A rough sketch of the rescan is below.
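
Something like this, called when a previously idle CPU goes busy (untested
sketch; nohz_recalc_next_balance() is just a made-up helper name, and it
assumes the nohz.next_balance_lock introduced by the patch):

static void nohz_recalc_next_balance(void)
{
	unsigned long next = jiffies + 60*HZ;	/* "far away" default */
	int cpu;

	spin_lock(&nohz.next_balance_lock);
	for_each_cpu(cpu, nohz.idle_cpus_mask) {
		struct rq *rq = cpu_rq(cpu);

		/* track the earliest pending rq->next_balance */
		if (time_before(rq->next_balance, next))
			next = rq->next_balance;
	}
	nohz.next_balance = next;
	spin_unlock(&nohz.next_balance_lock);
}

It is O(number of idle cpus) per busy transition though, which is exactly the
cost the rb-tree idea would avoid.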

> > - It allows any busy cpu to kick ilb_cpu if it has greater than 2
> >  runnable tasks. This addresses issue #3
> 
> 
> This again may have power impact with frequent kicking.

I don't know how many additional kicks it would add - I mean the system
is busy and ilb_cpu deserves a kick. With this, we are just forcing it to
happen sooner rather than waiting for first/second_pick_cpu to do
"justice".

> Especially with a higher number of logical CPUs. A likely cleaner way is to
> clear first_pick, second_pick on idle instead of clearing on tickless.

I think I tried that (cleared first/second_pick_cpu in
nohz_kick_needed() upon idle, along the lines sketched below) but didn't
get the best results. Let me try that again and post idle time numbers.
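
For the record, the variant I had tried looked roughly like this early in
nohz_kick_needed() (sketch; assumes first_pick_cpu/second_pick_cpu are the
atomic_t fields in the nohz struct):

	if (idle_cpu(cpu)) {
		/*
		 * This CPU went idle: drop any claim it holds on
		 * first_pick_cpu/second_pick_cpu so another busy CPU can
		 * take over kicking the ilb.
		 */
		atomic_cmpxchg(&nohz.first_pick_cpu, cpu, nr_cpu_ids);
		atomic_cmpxchg(&nohz.second_pick_cpu, cpu, nr_cpu_ids);
		return 0;
	}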

> It would be interesting to see some tests of the power impact (or number of
> interrupts, resched IPIs, etc.) of this change - both on netbook kind of
> systems and on servers with a partially idle configuration.

Ok - will get some numbers in that regard as well.

- vatsa
