Date: Tue, 11 Nov 2008 10:22:40 +0530
From: Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Linux Kernel <linux-kernel@...r.kernel.org>,
Suresh B Siddha <suresh.b.siddha@...el.com>,
Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
Ingo Molnar <mingo@...e.hu>,
Dipankar Sarma <dipankar@...ibm.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Vatsa <vatsa@...ux.vnet.ibm.com>,
Gautham R Shenoy <ego@...ibm.com>,
Andi Kleen <andi@...stfloor.org>,
David Collier-Brown <davecb@....com>,
Tim Connors <tconnors@...ro.swin.edu.au>,
Max Krasnyansky <maxk@...lcomm.com>
Subject: Re: [RFC PATCH v3 0/5] Tunable sched_mc_power_savings=n
* Peter Zijlstra <a.p.zijlstra@...llo.nl> [2008-11-10 19:50:16]:
>
> a quick response, I'll read them more carefully tomorrow:
Hi Peter,
Thanks for the quick review.
>
> - why are the preferred cpu things pointers? afaict using just the cpu
> number is both smaller and clearer to the reader.
I would need each cpu within a partitioned sched domain to point to
the _same_ preferred wakeup cpu.  The preferred CPU will be updated in
one place in find_busiest_group() and used by wake_idle().
If I used a plain per-cpu value instead, updating it for each cpu in
the partitioned sched domain would be slow.
The actual number of preferred_wakeup_cpu variables will equal the
number of partitions.  If the sched domains are not partitioned, then
all the per-cpu pointers will point to the same variable.
> - in patch 5/5 you do:
>
> + spin_unlock(&this_rq->lock);
> + double_rq_lock(this_rq, busiest);
>
> we call that double_lock_balance()
Will fix this. Did not look for such a routine :)
> - comments go like:
>
> /*
> * this is a multi-
> * line comment
> */
Will fix this too.
Thanks,
Vaidy