Message-ID: <20080930070326.GA5331@amitarora.in.ibm.com>
Date: Tue, 30 Sep 2008 12:33:26 +0530
From: "Amit K. Arora" <aarora@...ux.vnet.ibm.com>
To: Chris Friesen <cfriesen@...tel.com>
Cc: linux-kernel@...r.kernel.org, vatsa@...ux.vnet.ibm.com,
a.p.zijlstra@...llo.nl, mingo@...e.hu
Subject: Re: [PATCH] sched: minor optimizations in wake_affine and
select_task_rq_fair
On Mon, Sep 29, 2008 at 10:09:41AM -0600, Chris Friesen wrote:
> Amit K. Arora wrote:
>> sched: Minor optimizations in wake_affine and select_task_rq_fair
>>
>> This patch does following:
>> o Reduces the number of arguments to wake_affine().
>
> At what point is it cheaper to pass items as args rather than recalculating
> them? If reducing the number of args is desirable, what about removing the
> "this_cpu" and "prev_cpu" args and recalculating them in wake_affine()?
That's a good question. It's kind of arguable, and I wasn't sure
everyone would be happy if I removed more arguments from wake_affine()
than I did in my patch (because of the recalculation required).
wake_affine() currently has 11 arguments, and I thought it made sense
to reduce that to a sane number. For that I chose the arguments which
I thought could be recalculated with minimal overhead (a single struct
dereference, a simple per-cpu variable and/or simple arithmetic). And
one argument ("rq"), which is being removed, isn't used in the
function at all!
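
For reference, here is roughly what that leaves us with -- a sketch
only, against my reading of 2.6.27-era sched_fair.c (I'm assuming the
cpu_rq() helper and the wake_idx/imbalance_pct fields of struct
sched_domain here), showing the kind of recalculation the patch trades
those arguments for:

static int
wake_affine(struct sched_domain *this_sd, struct task_struct *p,
	    int prev_cpu, int this_cpu, int sync,
	    unsigned long load, unsigned long this_load)
{
	/* a simple per-cpu variable lookup */
	struct rq *this_rq = cpu_rq(this_cpu);
	/* a single struct dereference */
	int idx = this_sd->wake_idx;
	/* simple arithmetic, same as the caller does today */
	unsigned int imbalance =
		100 + (this_sd->imbalance_pct - 100) / 2;

	/* ... rest of the function unchanged ... */
}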
Regarding the two variables you mentioned, I didn't remove them as
args because I wasn't sure about "this_cpu" (which is nothing but
smp_processor_id()), since its cost is arch dependent, and because
calculating "prev_cpu" involves two struct dereferences
(((struct thread_info *)(task)->stack)->cpu). The calculation of the
other arguments (like this_sd, load and this_load) involves a good
number of instructions.
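
To illustrate the cost I'm worried about, recalculating those two
inside wake_affine() would boil down to something like this
(illustrative only; task_cpu() is the helper I'd expect to use for
prev_cpu):

	/* arch-dependent per-cpu read */
	int this_cpu = smp_processor_id();
	/*
	 * task_cpu(p) expands to
	 * ((struct thread_info *)(p)->stack)->cpu,
	 * i.e. the two dereferences mentioned above
	 */
	int prev_cpu = task_cpu(p);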
If you disagree, what do you suggest we do here?
Regards,
Amit Arora