Message-ID: <m3r5xkupre.fsf@pobox.com>
Date: Tue, 16 Jun 2009 11:06:45 -0500
From: Nathan Lynch <ntl@...ox.com>
To: Gautham R Shenoy <ego@...ibm.com>
Cc: linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Balbir Singh <balbir@...ibm.com>,
Rusty Russell <rusty@...tcorp.com.au>,
Paul E McKenney <paulmck@...ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Dipankar Sarma <dipankar@...ibm.com>,
Shaohua Li <shaohua.li@...ux.com>
Subject: Re: [RFD PATCH 1/4] powerpc: cpu: Reduce the polling interval in
__cpu_up()
Please cc linuxppc-dev if you want the powerpc maintainer to pick this
up.
Gautham R Shenoy <ego@...ibm.com> writes:
> The cpu online operation on powerpc today takes on the order of 200-220ms.
> Of this time, approximately 200ms is taken up by __cpu_up(). This is
> because we poll every 200ms to check whether the new cpu has notified its
> presence through the cpu_callin_map. We poll every 200ms until the new cpu
> sets the value in cpu_callin_map or 5 seconds elapse, whichever comes
> earlier.
>
> However, the time taken by the new processor to indicate its presence has
> been found to be less than a millisecond.
Only with your particular configuration (which is not identified). It
can take much longer than 1ms on others.
> Keeping this in mind, reduce the
> polling interval from 200ms to 1ms while retaining the 5 second
> timeout.
Ack on the patch, but the changelog needs work. I assume your
observations are from a pseries system -- please state this in the
changelog ("powerpc" is too broad), along with the processor model and
whether the LPAR's processors were configured in dedicated or shared
mode.
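
For readers who don't have the code in front of them, the loop being
discussed looks roughly like the sketch below. This is a hand-written
illustration, not the actual arch/powerpc/kernel/smp.c source; the helper
name wait_for_callin() and its error handling are made up for the example.
It shows the shape of the change: keep the 5 second timeout, but sleep 1ms
per iteration instead of 200ms, so __cpu_up() can return shortly after the
secondary CPU sets its cpu_callin_map entry.

    /*
     * Illustrative sketch only -- not the real powerpc __cpu_up() code.
     * The secondary CPU marks itself alive in cpu_callin_map[]; the boot
     * CPU polls for that.  5000 iterations of msleep(1) preserve the old
     * 5 second timeout (previously 25 iterations of msleep(200)), while
     * letting a sub-millisecond callin be noticed almost immediately.
     */
    static int wait_for_callin(unsigned int cpu)    /* hypothetical helper */
    {
            int c;

            for (c = 5000; c && !cpu_callin_map[cpu]; c--)
                    msleep(1);

            if (!cpu_callin_map[cpu]) {
                    printk(KERN_ERR "Processor %u is stuck.\n", cpu);
                    return -ENOENT;
            }

            return 0;
    }

On a system where the secondary checks in within a millisecond, this cuts
the wait in __cpu_up() from ~200ms to ~1ms; on configurations where callin
takes longer, the only cost is more frequent (but still cheap) polling.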