Message-ID: <20160120165657.GN6357@twins.programming.kicks-ass.net>
Date: Wed, 20 Jan 2016 17:56:57 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Byungchul Park <byungchul.park@....com>,
Chris Metcalf <cmetcalf@...hip.com>,
Thomas Gleixner <tglx@...utronix.de>,
Luiz Capitulino <lcapitulino@...hat.com>,
Christoph Lameter <cl@...ux.com>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Mike Galbraith <efault@....de>, Rik van Riel <riel@...hat.com>
Subject: Re: [RFC PATCH 4/4] sched: Upload nohz full CPU load on task enqueue/dequeue
On Wed, Jan 20, 2016 at 03:54:19PM +0100, Frederic Weisbecker wrote:
> > You can simply do:
> >
> > 	for_each_nohzfull_cpu(cpu) {
> > 		struct rq *rq = rq_of(cpu);
> >
> > 		raw_spin_lock(&rq->lock);
> > 		update_cpu_load_active(rq);
> > 		raw_spin_unlock(&rq->lock);
> > 	}
>
> But from where should we do that?
The housekeeper thingy, i.e. the housekeeping CPU's periodic work.
> Maybe we can do it before we call source/target_load(), on the
> selected targets needed by the caller? The problem is that if we do
> that right after a task got enqueued on the nohz runqueue, we may
> accidentally account it as the whole dynticks frame (I mean, if we get
> rid of that enqueue/dequeue accounting).
Yes, so? What if the current tick happens right after a task gets
enqueued? Then we account the whole tick as !idle, even though we might
have been idle for 99% of the time.
Not a problem, this is sampling.
Doing it locally or remotely doesn't matter.
> > Also, since when can we have enqueues/dequeues while NOHZ_FULL ? I
> > thought that was the 1 task 100% cpu case, there are no
> > enqueues/dequeues there.
>
> That's the most optimized case, but we can definitely have short
> moments with more than one task running. For example if we have a
> workqueue, or similar short-lived tasks.
The moment you have nr_running > 1, the tick comes back on.