Message-ID: <AANLkTimZK2mHvHRk-iaBZpEkZ1MYooP24y6HetsJo0br@mail.gmail.com>
Date: Thu, 21 Oct 2010 11:25:04 -0700
From: Paul Turner <pjt@...gle.com>
To: bharata@...ux.vnet.ibm.com
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
Chris Friesen <cfriesen@...tel.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Pierre Bourdon <pbourdon@...ellency.fr>
Subject: Re: [RFC tg_shares_up improvements - v1 05/12] sched: fix
update_cfs_load synchronization
On Thu, Oct 21, 2010 at 2:52 AM, Bharata B Rao
<bharata@...ux.vnet.ibm.com> wrote:
> On Fri, Oct 15, 2010 at 09:43:54PM -0700, pjt@...gle.com wrote:
>> Using cfs_rq->nr_running is not sufficient to synchronize update_cfs_load with
>> the put path since nr_running accounting occurs at deactivation.
>>
>> It's also not safe to make the removal decision based on load_avg as this fails
>> with both high periods and low shares. Resolve this by clipping history at 8
>> foldings.
>>
>
> Did you mean 4 foldings (and not 8) above since I see you are truncating
> the load history at 4 idle periods ?
We end up folding every period/2 (after the first fold) since after
that we start at period/2 and not 0.
Thus, if we were issuing 'continuous' load updates, triggering this
condition would mean that we had folded idle periods into our load_avg
8 times.
Granted, it's possible for there to be fewer folds given sporadic load
updates, so I'll just update the comment to reference periods instead.
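To make the arithmetic concrete, here's a rough user-space sketch (not
kernel code; the period value, step size and simulation loop are made up
for illustration, only the halve-on-overflow fold and the 4 * period
truncation check mirror the patch below):

#include <stdio.h>

int main(void)
{
	const unsigned long period = 1024;	/* arbitrary units */
	const unsigned long step = period / 8;	/* 'continuous' update interval */
	unsigned long load_period = period / 2;	/* state just after a fold */
	unsigned long load_last = 0, now = 0;
	int folds = 0;

	for (;;) {
		now += step;			/* next load update arrives */
		if (now - load_last > 4 * period)
			break;			/* truncation condition fires */
		load_period += step;
		while (load_period > period) {
			load_period /= 2;	/* one fold: load_avg is halved too */
			folds++;
		}
	}

	/*
	 * Roughly 4 * period / (period / 2) = 8; with these parameters it
	 * comes out to 7, and sparser updates give fewer still, which is
	 * why the comment will be reworded to reference periods.
	 */
	printf("folds before history is clipped: %d\n", folds);
	return 0;
}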
Thanks
>
>
>> @@ -685,9 +686,19 @@ static void update_cfs_load(struct cfs_r
>> now = rq_of(cfs_rq)->clock;
>> delta = now - cfs_rq->load_stamp;
>>
>> + /* truncate load history at 4 idle periods */
>> + if (cfs_rq->load_stamp > cfs_rq->load_last &&
>> + now - cfs_rq->load_last > 4 * period) {
>> + cfs_rq->load_period = 0;
>> + cfs_rq->load_avg = 0;
>> + }
>> +
>> cfs_rq->load_stamp = now;
>> cfs_rq->load_period += delta;
>> - cfs_rq->load_avg += delta * cfs_rq->load.weight;
>> + if (load) {
>> + cfs_rq->load_last = now;
>> + cfs_rq->load_avg += delta * load;
>> + }
>>
>> while (cfs_rq->load_period > period) {
>> /*
>> @@ -700,10 +711,8 @@ static void update_cfs_load(struct cfs_r
>> cfs_rq->load_avg /= 2;
>> }
>>
>> - if (lb && !cfs_rq->nr_running) {
>> - if (cfs_rq->load_avg < (period / 8))
>> - list_del_leaf_cfs_rq(cfs_rq);
>> - }
>> + if (!cfs_rq->curr && !cfs_rq->nr_running && !cfs_rq->load_avg)
>> + list_del_leaf_cfs_rq(cfs_rq);
>> }
>>
>