Message-ID: <3877989d0906290121l15705d2cn72e4c49dd96ed950@mail.gmail.com>
Date: Mon, 29 Jun 2009 16:21:59 +0800
From: Luming Yu <luming.yu@...il.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: LKML <linux-kernel@...r.kernel.org>, suresh.b.siddha@...el.com,
venkatesh.pallipadi@...el.com,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>
Subject: Re: [RFC patch] Use IPI_shortcut for lapic timer broadcast
On Mon, Jun 29, 2009 at 4:16 PM, Ingo Molnar<mingo@...e.hu> wrote:
>
> * Luming Yu <luming.yu@...il.com> wrote:
>
>> On Mon, Jun 29, 2009 at 3:20 PM, Ingo Molnar<mingo@...e.hu> wrote:
>> >
>> > * Luming Yu <luming.yu@...il.com> wrote:
>> >
>> >> Hello,
>> >>
>> >> We need to use the IPI shortcut to send the lapic timer broadcast,
>> >> to avoid the latency of sending IPIs one by one on systems with
>> >> many logical processors when NO_HZ is disabled.
>> >> Without this patch, I have seen the upstream kernel hang during
>> >> boot with a RHEL 5 kernel config.
>> >
>> > hm, that might be a valid optimization - but why does the lack of
>> > this optimization result in a hang?
>>
>> The hang is caused by the kernel's workaround for the
>> lapic-timer-stops issue. With HZ=1000 and a lot of CPUs (e.g. 64
>> logical CPUs), CPU 0 ends up busy sending TIMER IPIs instead of
>> making progress in boot (right after a deep C-state has been used).
>
> that's a bit weird. With HZ=1000 we have 1000 usecs between each
> timer tick. Assuming a CPU sends to a lot of CPUs (64 logical CPUs)
> that means that each IPI takes more than ~15 microseconds to
> process. On what hardware/platform can this happen realistically?
https://bugzilla.redhat.com/show_bug.cgi?id=499271
Someone has measured a latency of 50-100us to send a single IPI.
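
To make the arithmetic concrete, here is a rough back-of-envelope sketch
(assuming the 64-logical-CPU example from this thread and the 50-100us
per-IPI figure from the bugzilla report above):

```python
# Back-of-envelope check: at HZ=1000 there is a 1000us budget per tick.
TICK_US = 1000        # tick period at HZ=1000
NCPUS = 64            # example logical CPU count from the discussion
TARGETS = NCPUS - 1   # CPU 0 broadcasts to every other CPU

# Budget per unicast IPI if the whole broadcast must fit in one tick:
budget_per_ipi = TICK_US / TARGETS

# Measured cost per unicast IPI (bugzilla 499271): 50-100us each.
cost_low = 50 * TARGETS    # best-case cost of one full broadcast, in us
cost_high = 100 * TARGETS  # worst-case cost, in us

print(f"per-IPI budget: {budget_per_ipi:.1f}us")
print(f"broadcast cost: {cost_low}-{cost_high}us vs {TICK_US}us tick")
```

So one tick's broadcast alone costs 3150-6300us, several times the
1000us tick period: CPU 0 can never catch up on pending ticks, which is
why boot appears to hang. A single shortcut IPI (all-excluding-self)
pays the per-IPI cost once instead of 63 times.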
>
> Ingo
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/