Message-ID: <BANLkTikhAF=nzJA4qg_8By=CL6t9iFPQwg@mail.gmail.com>
Date: Thu, 28 Apr 2011 11:51:59 -0700
From: Paul Turner <pjt@...gle.com>
To: Nikhil Rao <ncrao@...gle.com>
Cc: vatsa@...ux.vnet.ibm.com,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
Mike Galbraith <efault@....de>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Bharata B Rao <bharata@...ux.vnet.ibm.com>
Subject: Re: [RFC][PATCH 00/18] Increase resolution of load weights
On Thu, Apr 28, 2011 at 11:33 AM, Nikhil Rao <ncrao@...gle.com> wrote:
> On Thu, Apr 28, 2011 at 5:12 AM, Srivatsa Vaddagiri
> <vatsa@...ux.vnet.ibm.com> wrote:
>> On Thu, Apr 28, 2011 at 05:18:27PM +0530, Nikunj A. Dadhania wrote:
>>> --- kernel/sched.c.orig 2011-04-28 16:34:24.000000000 +0530
>>> +++ kernel/sched.c 2011-04-28 16:36:29.000000000 +0530
>>> @@ -1336,7 +1336,7 @@ calc_delta_mine(unsigned long delta_exec
>>> lw->inv_weight = 1 + (WMULT_CONST - w/2) / (w + 1);
>>> }
>>>
>>> - tmp = (u64)delta_exec * weight;
>>> + tmp = (u64)delta_exec * (weight >> SCHED_LOAD_RESOLUTION);
>>
>> Should we rather be fixing inv_weight to account for SCHED_LOAD_RESOLUTION here?
>>
>
> Yes, I have been looking into fixing the inv_weight and calc_delta_mine()
> calculations based on the assumption that we have u64 weights. IMO the
> function is complicated because the return value needs to be calculated
> to fit into an unsigned long. I would like to update users of
> calc_delta_mine() to use u64 instead of unsigned long, and I think this
> can be easily done (quick inspection of the code shows two call sites
> that need to be updated: update_curr() and wakeup_gran()). Without the
> restriction to fit into an unsigned long, I think we can make
> calc_delta_mine() and the inv_weight calculations simpler.
>
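For reference, calc_delta_mine() is essentially computing
delta_exec * weight / lw->weight through a cached fixed-point inverse of
lw->weight. Below is a minimal userspace sketch of that arithmetic --
illustrative only, not the posted patch -- assuming the 64-bit definitions
WMULT_CONST = 1 << 32 and WMULT_SHIFT = 32, and leaving out the caching,
rounding and unsigned-long clamping the real function does:

#include <stdint.h>
#include <stdio.h>

#define WMULT_SHIFT	32
#define WMULT_CONST	(1ULL << WMULT_SHIFT)

static uint64_t calc_delta_sketch(uint64_t delta_exec, uint64_t weight,
				  uint64_t lw_weight)
{
	/* fixed-point inverse of the load_weight being divided by */
	uint64_t inv_weight = WMULT_CONST / (lw_weight ? lw_weight : 1);

	/*
	 * delta_exec * weight needs to stay within the WMULT_CONST
	 * range for the scaled result to remain exact (the real code
	 * splits the multiply and rounds when it doesn't); that limit
	 * is what the discussion below is about.
	 */
	return (delta_exec * weight * inv_weight) >> WMULT_SHIFT;
}

int main(void)
{
	/* scale 1ms by a weight of 1024 against a load_weight of
	 * 2048: result is ~0.5ms (500000 ns) */
	printf("%llu\n", (unsigned long long)
	       calc_delta_sketch(1000000, 1024, 2048));
	return 0;
}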
I don't think you have much room to maneuver here; the calculations in
c_d_m() are already u64 based, even on 32-bit. Changing the external
load factors to 64-bit doesn't change this.
We lose fairness in c_d_m() beyond 32 bits; at the old LOAD_SCALE=10
you've got 22 bits of delta_exec with which you can maintain fairness.
This gives total accuracy in update_curr() on any delta <= ~4ms (for a
NICE_0 task). If you bump this up (and don't downshift before computing
the inverse, as you are) then you start introducing rounding errors
beyond ~4us. This would also be further exacerbated in sched_period(),
since that uses the total cfs_rq weight.
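Spelling those figures out (my own back-of-the-envelope numbers, with
delta_exec in nanoseconds, WMULT_SHIFT = 32 and a NICE_0 weight of
1 << 10 at the current resolution): the delta_exec * weight product
stays on the exact path only while

	delta_exec < 2^(32 - 10) ns = 2^22 ns ~= 4.2 ms    (10-bit weights)
	delta_exec < 2^(32 - 20) ns = 2^12 ns ~= 4.1 us    (20-bit weights)

which is where the ~4ms and ~4us limits above come from.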
> -Thanks,
> Nikhil
>