Message-ID: <20130723110345.GX27075@twins.programming.kicks-ass.net>
Date:	Tue, 23 Jul 2013 13:03:45 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Jason Low <jason.low2@...com>
Cc:	Srikar Dronamraju <srikar@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...hat.com>,
	LKML <linux-kernel@...r.kernel.org>,
	Mike Galbraith <efault@....de>,
	Thomas Gleixner <tglx@...utronix.de>,
	Paul Turner <pjt@...gle.com>, Alex Shi <alex.shi@...el.com>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Namhyung Kim <namhyung@...nel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Kees Cook <keescook@...omium.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	aswin@...com, scott.norton@...com, chegu_vinod@...com
Subject: Re: [RFC PATCH v2] sched: Limit idle_balance()

On Mon, Jul 22, 2013 at 11:57:47AM -0700, Jason Low wrote:
> On Mon, 2013-07-22 at 12:31 +0530, Srikar Dronamraju wrote:
> > > 
> > > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > index e8b3350..da2cb3e 100644
> > > --- a/kernel/sched/core.c
> > > +++ b/kernel/sched/core.c
> > > @@ -1348,6 +1348,8 @@ ttwu_do_wakeup(struct rq *rq, struct task_struct *p, int wake_flags)
> > >  		else
> > >  			update_avg(&rq->avg_idle, delta);
> > >  		rq->idle_stamp = 0;
> > > +
> > > +		rq->idle_duration = (rq->idle_duration + delta) / 2;
> > 
> > Can't we just use avg_idle instead of introducing idle_duration?
> 
> A potential issue I have found with avg_idle is that it may not be
> accurate enough for the purposes of this patch, because it is always
> capped at a maximum value (default 1000000 ns). For example, a CPU
> could have remained idle for 1 second, yet avg_idle would be set to
> just 1 millisecond. Another question I have is whether we can keep
> avg_idle updated at all times without capping it, or at least raise
> its maximum value substantially.
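
For reference, the capping you describe is (roughly) this bit in
ttwu_do_wakeup(), where update_avg() is an exponential average with a
1/8 weight for new samples:

	if (rq->idle_stamp) {
		u64 delta = rq->clock - rq->idle_stamp;
		u64 max = 2*sysctl_sched_migration_cost; /* 1000000 ns by default */

		if (delta > max)
			rq->avg_idle = max;	/* long idle periods get clamped */
		else
			update_avg(&rq->avg_idle, delta);
		rq->idle_stamp = 0;
	}

static void update_avg(u64 *avg, u64 sample)
{
	s64 diff = sample - *avg;
	*avg += diff >> 3;	/* new sample weighted 1/8 */
}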

The only user of avg_idle is idle_balance(); since you're building a new
limiter, we can completely scrap/rework avg_idle to behave however you
want it to. No point in having two of them.

Also, we now have rq->cfs.{blocked,runnable}_load_avg, which might help
with the estimation, if you're so inclined :-)
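
For example (purely illustrative -- the helper and heuristic below are
made up, not anything in the tree):

/*
 * Illustrative only: if the blocked load (load of tasks that recently
 * went to sleep on this rq) dominates the runnable load, wakeups are
 * likely imminent here and pulling more work over via idle balance is
 * probably wasted effort.
 */
static bool expect_local_wakeups(struct rq *rq)
{
	return rq->cfs.blocked_load_avg > rq->cfs.runnable_load_avg;
}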

> > Should we take into consideration whether an idle_balance was
> > successful or not?
> 
> I recently ran fserver on the 8-socket machine with HT enabled and
> found that load balance was succeeding at a higher than average rate,
> but idle balance was still substantially lowering the performance of
> that workload. However, it makes sense to allow idle balance to run
> longer/more often when it has a higher success rate.
> 
> > I am not sure what a reasonable value for n would be, but maybe we
> > could try n=3.
> 
> Based on some of the data I collected, n = 10 to 20 provided much
> larger performance gains.
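
To make the shape of the check explicit, the limiter under discussion
is roughly the following (avg_idle_balance_cost being a hypothetical
stand-in for however the cost of an idle_balance() attempt gets
measured):

	/*
	 * Bail out of idle_balance() when the expected idle time is too
	 * short to amortize the cost of balancing.
	 */
	if (this_rq->avg_idle < n * this_rq->avg_idle_balance_cost)
		return;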

Right, so I'm still a bit puzzled by why this is so; maybe we're
over-estimating the idle duration due to significant variance in the
idle time?

Maybe we should try something like the below to test this?


/*
 * Running mean and variance via Welford's online algorithm:
 * http://en.wikipedia.org/wiki/Algorithms_for_calculating_variance
 *
 * int_sqrt() comes from <linux/kernel.h>.
 */
struct stats {
	long mean;
	long M2;		/* sum of squared deviations from the mean */
	unsigned int n;		/* number of samples */
};

static void stats_update(struct stats *stats, long x)
{
	long delta;

	stats->n++;
	delta = x - stats->mean;
	stats->mean += delta / stats->n;
	stats->M2 += delta * (x - stats->mean);
}

/* Note: returns the sample standard deviation, not the variance. */
static long stats_var(struct stats *stats)
{
	long variance;

	if (stats->n < 2)	/* n == 1 would divide by zero below */
		return 0;

	variance = stats->M2 / (stats->n - 1);

	return int_sqrt(variance);
}

static long stats_mean(struct stats *stats)
{
	return stats->mean;
}
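
Wiring that up would look something like this (idle_stats being a new,
hypothetical per-rq field):

	/* On the wakeup path, feed it the same delta as avg_idle: */
	stats_update(&rq->idle_stats, delta);

	/*
	 * In idle_balance(): a deviation of the same order as the mean
	 * means the idle time is too noisy to predict, so don't trust
	 * the estimate.
	 */
	if (stats_var(&this_rq->idle_stats) >= stats_mean(&this_rq->idle_stats))
		return;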

> Yes, I have done quite a bit of testing with sched_migration_cost,
> and adjusting it does help performance when idle balance overhead is
> high. But I have found that a higher value may decrease performance
> in situations where the cost of idle_balance is not high.
> Additionally, when to modify this tunable and by how much can be
> unpredictable.

So the history of sched_migration_cost is that it used to be a per-sd
value; see also:

  https://lkml.org/lkml/2008/9/4/215

Ingo wrote it initially for the O(1) scheduler and ripped it out when
he did CFS. He now doesn't like it because it introduces boot-to-boot
scheduling differences -- you never measure the exact same numbers
twice.

That said, there is a case for restoring it, since a single
system-wide value really doesn't do justice to larger systems.
