Message-ID: <1378242394.3460.37.camel@j-VirtualBox>
Date: Tue, 03 Sep 2013 14:06:34 -0700
From: Jason Low <jason.low2@...com>
To: Srikar Dronamraju <srikar@...ux.vnet.ibm.com>
Cc: mingo@...hat.com, peterz@...radead.org,
linux-kernel@...r.kernel.org, efault@....de, pjt@...gle.com,
preeti@...ux.vnet.ibm.com, akpm@...ux-foundation.org,
mgorman@...e.de, riel@...hat.com, aswin@...com, scott.norton@...com
Subject: Re: [PATCH v4 2/3] sched: Consider max cost of idle balance per
sched domain
On Mon, 2013-09-02 at 12:24 +0530, Srikar Dronamraju wrote:
> If we face runq lock contention, then domain_cost can go up.
> The runq lock contention could be temporary, but we carry the domain
> cost forever (i.e. till the next reboot). How about averaging the cost,
> plus a penalty for an unsuccessful balance?
>
> Something like
> 	domain_cost = sched_clock_cpu(smp_processor_id()) - t0;
> 	if (!pulled_task)
> 		domain_cost *= 2;
>
> 	sd->avg_newidle_lb_cost += domain_cost;
> 	sd->avg_newidle_lb_cost /= 2;
>
>
> Maybe the name could then change to avg_newidle_lb_cost.
>
> > +
> > +		curr_cost += domain_cost;
> > 	}
> >
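(For reference, here is roughly how I read your averaging suggestion in the
context of the idle_balance() loop. The avg_newidle_lb_cost field name and the
surrounding structure below are just taken from your sketch, not code that is
in the current patch.)

	/*
	 * Rough sketch of the averaging idea, inside the for_each_domain()
	 * loop of idle_balance(). avg_newidle_lb_cost would replace
	 * max_newidle_lb_cost, and an unsuccessful balance doubles the
	 * sample as a penalty.
	 */
	t0 = sched_clock_cpu(smp_processor_id());

	pulled_task = load_balance(this_cpu, this_rq,
				   sd, CPU_NEWLY_IDLE, &balance);

	domain_cost = sched_clock_cpu(smp_processor_id()) - t0;
	if (!pulled_task)
		domain_cost *= 2;	/* penalty for an unsuccessful balance */

	/* 50/50 running average of the per-domain cost */
	sd->avg_newidle_lb_cost += domain_cost;
	sd->avg_newidle_lb_cost /= 2;

	curr_cost += domain_cost;
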
We tried keeping track of the average in the v2 patch. It didn't really help
reduce the contention in idle balancing, and we also needed to scale
avg_idle down by a factor of 10-20+ when comparing it to
avg_idle_balance_cost in order to get the good performance boosts.

One potential explanation is that the average idle balance cost can often
have a large variation. That makes both computing avg_idle and comparing
avg_idle against the average idle balance cost rather inconsistent.
I think using the max allows us to keep the cost at a more constant value,
so that we can compare avg_idle against the "idle balance cost" more
meaningfully. It also helps reduce the chance that avg_idle overruns the
balance cost. Patch 3 periodically decays the max cost, so the value isn't
kept until the next reboot.
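
To make that concrete, the max tracking in this patch plus the decay from
patch 3 work roughly like the following (the decay factor and period shown
here are illustrative, not necessarily the exact values in the patch):

	/* idle_balance(): remember the worst cost seen for this domain */
	domain_cost = sched_clock_cpu(smp_processor_id()) - t0;
	if (domain_cost > sd->max_newidle_lb_cost)
		sd->max_newidle_lb_cost = domain_cost;

	curr_cost += domain_cost;

	/*
	 * rebalance_domains(): slowly decay the max about once a second,
	 * so that a one-off spike (e.g. runq lock contention) doesn't
	 * inflate the cost until the next reboot.
	 */
	if (time_after(jiffies, sd->next_decay_max_lb_cost)) {
		sd->max_newidle_lb_cost = (sd->max_newidle_lb_cost * 253) / 256;
		sd->next_decay_max_lb_cost = jiffies + HZ;
	}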