Date: Wed, 06 May 2020 21:21:15 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Peng Liu <iwtbavbm@...il.com>, Dietmar Eggemann <dietmar.eggemann@....com>,
 Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
 Juri Lelli <juri.lelli@...hat.com>, Steven Rostedt <rostedt@...dmis.org>,
 Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
 linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/fair: Fix nohz.next_balance update

On 06/05/20 17:56, Vincent Guittot wrote:
> On Wed, 6 May 2020 at 18:03, Valentin Schneider
> <valentin.schneider@....com> wrote:
>>
>> On 06/05/20 14:45, Vincent Guittot wrote:
>> >> But then we may skip an update if we goto abort, no? Imagine we have just
>> >> NOHZ_STATS_KICK, so we don't call any rebalance_domains(), and then as we
>> >> go through the last NOHZ CPU in the loop we hit need_resched(). We would
>> >> end up in the abort part without any update to nohz.next_balance, despite
>> >> having accumulated relevant data in the local next_balance variable.
>> >
>> > Yes, but on the other hand the last CPU has not been able to run
>> > rebalance_domains(), so we must not move nohz.next_balance, otherwise it
>> > will have to wait for at least another full period.
>> > In fact, I think we have a problem with the current implementation,
>> > because if we abort because the local cpu is busy we might end up
>> > skipping the idle load balance for a lot of idle CPUs.
>> >
>> > As an example, imagine that we have 10 idle CPUs with the same
>> > rq->next_balance, which equals nohz.next_balance. _nohz_idle_balance()
>> > starts on CPU0; it processes the idle lb for CPU1 but then has to abort
>> > because of need_resched. If we update nohz.next_balance as we do
>> > currently, the next idle load balance will happen after a full
>> > balance interval, whereas we still have 8 CPUs waiting to run an
>> > idle load balance.
>> >
>> > My proposal also fixes this problem.
>> >
>>
>> That's a very good point; so with NOHZ_BALANCE_KICK we can reduce
>> nohz.next_balance via rebalance_domains(), and otherwise we would only
>> increase it if we go through a complete for_each_cpu() loop in
>> _nohz_idle_balance().
>>
>> That said, if for some reason we keep bailing out of the loop, we won't
>> push nohz.next_balance forward and thus may repeatedly nohz-balance only
>> the first few CPUs in the NOHZ mask. I think that can happen if we have,
>> say, 2 tasks pinned to a single rq; in that case nohz_balancer_kick() will
>> kick a NOHZ balance whenever now >= nohz.next_balance.
>
> If we take my example above, and we have CPU0 which is idle at every
> tick and selected as ilb_cpu, but unluckily CPU0 has to abort before
> running the ilb for CPU1 every time, I agree that we can end up trying
> to run the ilb on CPU0 at every tick without any success. We might
> consider calling kick_ilb() in _nohz_idle_balance() if we have to abort,
> to let another CPU handle the ilb.

That's an idea; maybe something like kicking the next CPU that was due to be
rebalanced (i.e. the one for which we hit the goto abort).
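To make the failure mode concrete, here is a minimal, standalone C model of
the loop structure being discussed. It is an illustration only, not the
kernel code: need_resched_after(), the update_on_abort flag, and the jiffy
values are invented for the example. It contrasts writing nohz.next_balance
on the abort path (which, as in Vincent's 10-CPU example, makes the 8
unserviced CPUs wait a full balance interval) with leaving it untouched, so
that another kick can happen right away.

    /*
     * Simplified model (not kernel code) of the _nohz_idle_balance()
     * abort scenario. The 10-CPU setup mirrors the example in the
     * thread; need_resched_after() stands in for need_resched() firing
     * on the ilb CPU after it has serviced one remote CPU.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define NR_CPUS          10
    #define BALANCE_INTERVAL 100   /* jiffies between idle balances (arbitrary) */

    static unsigned long rq_next_balance[NR_CPUS]; /* per-rq next_balance */
    static unsigned long nohz_next_balance;        /* global nohz.next_balance */

    /* Pretend need_resched() becomes true after servicing this many CPUs. */
    static bool need_resched_after(int serviced) { return serviced >= 1; }

    /*
     * Walk the idle CPUs, balance each one, and accumulate the earliest
     * rq->next_balance seen into a local variable. If update_on_abort is
     * true, nohz.next_balance is written even when we bail out early --
     * the behaviour the thread argues against.
     */
    static void nohz_idle_balance(unsigned long now, bool update_on_abort)
    {
        unsigned long next_balance = (unsigned long)-1;
        int cpu, serviced = 0;

        for (cpu = 1; cpu < NR_CPUS; cpu++) {      /* CPU0 is the ilb CPU */
            if (need_resched_after(serviced)) {
                printf("abort after servicing %d CPU(s)\n", serviced);
                if (update_on_abort)
                    nohz_next_balance = next_balance;
                return;                            /* the "goto abort" path */
            }
            rq_next_balance[cpu] = now + BALANCE_INTERVAL; /* balanced this rq */
            if (rq_next_balance[cpu] < next_balance)
                next_balance = rq_next_balance[cpu];
            serviced++;
        }
        nohz_next_balance = next_balance; /* full pass: safe to move forward */
    }

    int main(void)
    {
        unsigned long now = 1000;
        int cpu;

        for (cpu = 0; cpu < NR_CPUS; cpu++)
            rq_next_balance[cpu] = now;   /* all 10 CPUs are due at "now" */

        nohz_next_balance = now;
        nohz_idle_balance(now, true);
        printf("update on abort:    nohz.next_balance = %lu "
               "(CPUs 2..9, still due at %lu, wait a full interval)\n",
               nohz_next_balance, now);

        nohz_next_balance = now;
        nohz_idle_balance(now, false);
        printf("no update on abort: nohz.next_balance = %lu "
               "(a new kick can service the remaining CPUs at once)\n",
               nohz_next_balance);
        return 0;
    }

In this model, keeping nohz.next_balance at "now" on abort means
nohz_balancer_kick() would fire again at the next tick; the kick_ilb()-on-
abort idea at the end of the thread would go further and hand the remaining
work to another idle CPU immediately instead of waiting for that tick.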