Message-ID: <5379BD2C.6080106@linux.vnet.ibm.com>
Date: Mon, 19 May 2014 13:43:32 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>,
Ben Segall <bsegall@...gle.com>
CC: Preeti Murthy <preeti.lkml@...il.com>,
Ingo Molnar <mingo@...hat.com>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched: fix exec_start/task_hot on migrated tasks
On 05/16/2014 07:47 PM, Peter Zijlstra wrote:
> On Fri, May 16, 2014 at 07:27:32PM +0530, Preeti Murthy wrote:
>>> 0 isn't strictly the right thing to do here, since the clock can wrap,
>>> but being wrong every ~585 years isn't too big an issue for this.
>>
>> I don't understand this. Will setting it to 0 not indicate the
>> beginning of ticking?
>
> In modular spaces there are no beginnings or endings.
>
>> So when you compute how long the task has run, the difference
>> would be larger than it would have been had you left exec_start
>> at its previous value, the old cpu's clock_task, right?
>
> Nope, see the modular space thing: once every ~585 years the clock
> wraps, and 0 falls within the range of "recently ran".
>
>> Would it not be better to set exec_start to the clock_task of the
>> destination rq during migration? That would be the closest we could
>> come to estimating how long the task has run on the new cpu when
>> deciding whether it is task_hot, no?
>
> Setting it to the exact clock_task of the destination rq would make it
> hot on that rq, even though it hasn't yet run there, so you'd have to do
> something like: rq_clock_task(dst_rq) - sysctl_sched_migration_cost.
>
> But seeing as that is far more work, all of this is heuristics anyhow,
> and an extra failure term of once per ~585 years is far below the
> current failure rate, all is well.
>
Ok, now I understand this. Thanks!
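
For the archive, here is how I now read the heuristic. clock_task is a
u64 nanosecond counter, so it wraps after 2^64 ns, which works out to
roughly 584.9 years; that is where the ~585 figure comes from. Below is
a minimal userspace sketch of the task_hot() comparison. The names
mirror the kernel's, but the constants and the standalone main() are
illustrative only, not the actual kernel code:

#include <stdint.h>
#include <stdio.h>

/* Default sched_migration_cost is 0.5ms, expressed in ns. */
static const int64_t sysctl_sched_migration_cost = 500000;

/* A task is "hot" on a cpu if it ran there within migration_cost ns. */
static int task_hot(uint64_t rq_clock_task, uint64_t exec_start)
{
	/* Signed difference of two u64 counters: modular subtraction. */
	int64_t delta = (int64_t)(rq_clock_task - exec_start);

	return delta < sysctl_sched_migration_cost;
}

int main(void)
{
	uint64_t now = 1000000000ULL;	/* arbitrary rq clock, in ns */

	/* Freshly migrated task, exec_start = 0: delta is huge -> cold. */
	printf("after migration: %s\n", task_hot(now, 0) ? "hot" : "cold");

	/* Task that just ran here: delta is tiny -> hot. */
	printf("just ran:        %s\n",
	       task_hot(now, now - 1000) ? "hot" : "cold");

	/* The failure case: shortly after the clock wraps past 0,
	 * exec_start = 0 looks like "ran recently" on a cpu the task
	 * never touched. */
	printf("post-wrap:       %s\n",
	       task_hot(100000, 0) ? "hot" : "cold");

	return 0;
}

So exec_start = 0 marks a migrated task cold, which is what we want, and
the only way to get it wrong is the brief post-wrap window above.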
Reviewed-by: Preeti U Murthy <preeti@...ux.vnet.ibm.com>