Message-ID: <CAJvTdKkvS_rDAsxEw=-wBou-wC=XvGXZSOMSY-PhNYuXHo-yzQ@mail.gmail.com>
Date: Thu, 28 May 2015 01:31:20 -0400
From: Len Brown <lenb@...nel.org>
To: Ingo Molnar <mingo@...nel.org>
Cc: Jan H. Schönherr <jschoenh@...zon.de>,
Thomas Gleixner <tglx@...utronix.de>, X86 ML <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Anthony Liguori <aliguori@...zon.com>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>, Tim Deegan <tim@....org>,
Gang Wei <gang.wei@...el.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] x86: skip delays during SMP initialization similar to Xen
>> >> BTW. this time can be reduced by 7% (113 ms) by deleting
>> >> announce_cpu():
>> >>
>> >> [ 1.445815] x86: Booted up 4 nodes, 120 CPUs
>> >
>> > so that kind of info looks pretty useful, especially when there's
>> > hangs/failures.
>>
>> I think the messages we print on failure are useful.
>> I think the success case should be a 1-line summary.
>
> But we usually don't know about a failure until it happens, and then people
> often don't know which quirky debug option to turn on before sending a
> capture of the failure.
>
> It's also pretty compressed and looks kind of cool, especially with
> larger CPU counts. Would love to see a 6K CPUs system boot up ;-)
>
>> > I'm wondering what takes 113 msecs to print 120 CPUs - that's
>> > about 1 msec per a few chars of printk produced, seems excessive.
>> > Do you have any idea what's going on there? Does your system print
>> > to a serial console perhaps?
>>
>> Yes, serial console -- that server is actually much
>> closer to you than it is to me; it is in Finland :-)
>
> LOL ;-)
>
>> I should benchmark it, because 115200 should be faster...
>
> So 115200 baud == 14400 bytes/sec == 14.4 bytes/msec == 0.07 msecs/byte
>
> So with 120 CPUs we print about 5-6 chars per CPU, which is 6*120==720
> bytes, which should take about 50 msecs.
>
> So serial explains about half of the observed overhead.
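(A quick standalone check of that estimate -- a hypothetical sketch, not code from the thread -- treating a byte as 8 bits on the wire, as the figures above do:)

/* Back-of-the-envelope serial throughput at 115200 baud:
 * ~6 characters of output per CPU, 120 CPUs. */
#include <stdio.h>

int main(void)
{
        double bytes_per_sec = 115200.0 / 8;   /* 14400 bytes/sec  */
        double bytes = 6.0 * 120;              /* ~720 bytes total */

        printf("%.2f msecs/byte, %.0f bytes -> ~%.0f msecs\n",
               1000 / bytes_per_sec, bytes, 1000 * bytes / bytes_per_sec);
        return 0;
}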
I do not believe that the serial console drain rate is a measurable
factor in boot speed.
Right before the remote processors were onlined, I did a bunch of
back-to-back printk's on the same box, and their timestamps are only
microseconds apart -- presumably the rate at which they could be added
to the message buffer:
[ 0.802350] 12345678901234567890123456789012345678901234567890123456789
[ 0.802350] 123456789012345678901234567890123456789012345678901234567890
[ 0.802351] 1234567890123456789012345678901234567890123456789012345678901
[ 0.802352] 12345678901234567890123456789012345678901234567890123456789012
[ 0.802352] 123456789012345678901234567890123456789012345678901234567890123
[ 0.802353] 1234567890123456789012345678901234567890123456789012345678901234
[ 0.802354] 12345678901234567890123456789012345678901234567890123456789012345
[ 0.802354] 123456789012345678901234567890123456789012345678901234567890123456
[ 0.802355] 1234567890123456789012345678901234567890123456789012345678901234567
[ 0.802356] 12345678901234567890123456789012345678901234567890123456789012345678
[ 0.802356] 123456789012345678901234567890123456789012345678901234567890123456789
[ 0.802357] 1234567890123456789012345678901234567890123456789012345678901234567890
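(For reference, a burst like the one above can be produced by a trivial test module along these lines -- a hypothetical sketch, not the exact code used; the microsecond timestamps come from the kernel's usual CONFIG_PRINTK_TIME stamping as each line enters the buffer:)

/* Hypothetical test module: emit back-to-back printk lines of
 * increasing length; the log timestamps then show how fast lines are
 * added to the message buffer, independent of console drain rate. */
#include <linux/module.h>
#include <linux/printk.h>

static const char digits[] =
        "12345678901234567890123456789012345678901234567890"
        "12345678901234567890";

static int __init printk_burst_init(void)
{
        int i;

        for (i = 0; i < 12; i++)
                printk(KERN_INFO "%.*s\n", 59 + i, digits);

        return 0;
}

static void __exit printk_burst_exit(void)
{
}

module_init(printk_burst_init);
module_exit(printk_burst_exit);
MODULE_LICENSE("GPL");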
cheers,
Len Brown, Intel Open Source Technology Center
ps. yes, here is what you see if you're watching the console drain...
[ 0.807246] NMI watchdog: enabled on all CPUs, permanently consumes one hw-PMU counter.
[ 0.817989] x86: Booting SMP configuration:
[ 0.822942] .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8 #9 #10 #11 #12 #13 #14
[ 0.909214] .... node #1, CPUs: #15 #16 #17 #18 #19 #20 #21 #22 #23 #24 #25 #26 #27 #28 #29
[ 1.108675] .... node #2, CPUs: #30 #31 #32 #33 #34 #35 #36 #37 #38 #39 #40 #41 #42 #43 #44
[ 1.308132] .... node #3, CPUs: #45 #46 #47 #48 #49 #50 #51 #52 #53 #54 #55 #56 #57 #58 #59
[ 1.507550] .... node #0, CPUs: #60 #61 #62 #63 #64 #65 #66 #67 #68 #69 #70 #71 #72 #73 #74
[ 1.593693] .... node #1, CPUs: #75 #76 #77 #78 #79 #80 #81 #82 #83 #84 #85 #86 #87 #88 #89
[ 1.715893] .... node #2, CPUs: #90 #91 #92 #93 #94 #95 #96 #97 #98 #99 #100 #101 #102 #103 #104
[ 1.838143] .... node #3, CPUs: #105 #106 #107 #108 #109 #110 #111 #112 #113 #114 #115 #116 #117 #118 #119
[ 1.959675] x86: Booted up 4 nodes, 120 CPUs
Onlining the 119 remote CPUs takes 1.959675 - 0.817989 = 1.141686 seconds total
(or 1.141686/119 = 9.5 ms per CPU),
while the console could drain that output much faster than it was printed
into the msgbuf:
112 bytes/line * 8 lines = 896 bytes to dump all the CPU numbers
896 bytes * 10 bits/byte / 115200 bits/sec = 77 ms total
(or 77/119 = 647 us per CPU)
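Redoing that arithmetic in a standalone program (a hypothetical sketch, not from the thread; it prints the unrounded values):

/* Compare the measured onlining time per CPU with how fast the serial
 * console can drain the CPU-number output at 115200 baud, assuming
 * 10 bits per byte of framing (start + 8 data + stop). */
#include <stdio.h>

int main(void)
{
        double online_secs = 1.959675 - 0.817989;   /* 119 CPUs onlined */
        double drain_bytes = 112.0 * 8;             /* 8 console lines  */
        double drain_secs  = drain_bytes * 10 / 115200;

        printf("onlining: %.2f ms per CPU\n", 1000 * online_secs / 119);
        printf("draining: %.1f ms total, %.0f us per CPU\n",
               1000 * drain_secs, 1e6 * drain_secs / 119);
        return 0;
}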