Date:	Mon, 04 Apr 2016 15:12:23 -0400
From:	Rik van Riel <riel@...hat.com>
To:	Chris Metcalf <cmetcalf@...lanox.com>,
	Frederic Weisbecker <fweisbec@...il.com>,
	Christoph Lameter <cl@...ux.com>,
	Ingo Molnar <mingo@...nel.org>,
	Luiz Capitulino <lcapitulino@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH] nohz_full: Make sched_should_stop_tick() more
 conservative

On Fri, 2016-04-01 at 15:42 -0400, Chris Metcalf wrote:
> On arm64, when calling enqueue_task_fair() from migration_cpu_stop(),
> we find the nr_running value updated by add_nr_running(), but the
> cfs.nr_running value has not always been updated yet.  Accordingly,
> sched_can_stop_tick() falsely returns true when we are migrating a
> second task onto a core.

I don't get it.

Looking at enqueue_task_fair(), I see this:

        for_each_sched_entity(se) {
                cfs_rq = cfs_rq_of(se);
                cfs_rq->h_nr_running++;
                ...
        }

        if (!se)
                add_nr_running(rq, 1);

What is the difference between cfs_rq->h_nr_running,
and rq->cfs.nr_running?

Why do we have two?

Are we simply testing against the wrong one in
sched_can_stop_tick?
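
For what it's worth, the closest I got from a look at
kernel/sched/fair.c (paraphrased sketch below, not the exact source)
is that enqueue_entity() ends up in account_entity_enqueue():

        /*
         * account_entity_enqueue() counts entities queued directly
         * on one cfs_rq, i.e. tasks and group entities alike:
         */
        static void account_entity_enqueue(struct cfs_rq *cfs_rq,
                                           struct sched_entity *se)
        {
                ...
                cfs_rq->nr_running++;
        }

so rq->cfs.nr_running only counts the entities sitting directly on
the root cfs_rq, while the h_nr_running++ in the loop above runs once
per hierarchy level, making rq->cfs.h_nr_running count every task
below the root.  With group scheduling the two can differ.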

> Correct this by using rq->nr_running instead of rq->cfs.nr_running.
> This should always be more conservative, and reverts the test to the
> form it had before commit 76d92ac305f2 ("sched: Migrate sched to use
> new tick dependency mask model").

That would cause us to run the timer tick while running
a single SCHED_RR real time task, with a single
SCHED_OTHER task sitting in the background (which will
not get run until the SCHED_RR task is done).

I don't think that is quite the behaviour we want.
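
To make the counting concrete (my reading of the counters, assuming
nothing earlier in sched_can_stop_tick() short-circuits the check):

        /*
         * One SCHED_RR task plus one queued SCHED_OTHER task:
         *
         *   rq->rt.rt_nr_running  == 1   (the SCHED_RR task)
         *   rq->cfs.nr_running    == 1   (the SCHED_OTHER task)
         *   rq->nr_running        == 2   (every enqueued task)
         *
         * so "rq->nr_running > 1" keeps the tick running, while
         * "rq->cfs.nr_running > 1" lets it stop.
         */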

> Signed-off-by: Chris Metcalf <cmetcalf@...lanox.com>
> ---
> I found this bug because I had a program running in nohz_full
> on a core, and from a different core I called sched_setaffinity()
> to force that task onto the nohz_full core, but I did not end up with
> a kick to the nohz_full core, so tick-based scheduling did not start.
> This is probably bad enough that we should fix it for 4.6.
> 
> Strangely, for some reason, the existing code worked correctly for me
> for tilegx, but not for arm64.  I see that the enqueue_task_fair()
> code calls enqueue_entity(), which calls account_entity_enqueue() to
> adjust cfs.nr_running.  That seemed to happen on tilegx, but not
> arm64.
> Perhaps there is some difference in how the sched_entity stuff is
> done, but frankly that took me a little deeper into the CFS stuff
> than I was willing to dive at this moment.
> 
> I could also argue that sched/core.c shouldn't have a lot of CFS
> stuff in it anyway, and if we view the FIFO/RR checks as handling
> the real special cases in sched_can_stop_tick(), then just checking
> the core nr_running feels like the right thing to do regardless.
> 
>  kernel/sched/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 00649f7ad567..1737d63c65fa 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -599,7 +599,7 @@ bool sched_can_stop_tick(struct rq *rq)
>  	}
>  
>  	/* Normal multitasking need periodic preemption checks */
> -	if (rq->cfs.nr_running > 1)
> +	if (rq->nr_running > 1)
>  		return false;
>  
>  	return true;
-- 
All Rights Reversed.

