Message-ID: <20201022040440.GX25604@MiWiFi-R3L-srv>
Date: Thu, 22 Oct 2020 12:04:40 +0800
From: "bhe@...hat.com" <bhe@...hat.com>
To: Rahul Gopakumar <gopakumarr@...are.com>
Cc: "linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"natechancellor@...il.com" <natechancellor@...il.com>,
"ndesaulniers@...gle.com" <ndesaulniers@...gle.com>,
"clang-built-linux@...glegroups.com"
<clang-built-linux@...glegroups.com>,
"rostedt@...dmis.org" <rostedt@...dmis.org>,
Rajender M <manir@...are.com>,
Yiu Cho Lau <lauyiuch@...are.com>,
Peter Jonasson <pjonasson@...are.com>,
Venkatesh Rajaram <rajaramv@...are.com>
Subject: Re: Performance regressions in "boot_time" tests in Linux 5.8 Kernel

Hi Rahul,

On 10/20/20 at 03:26pm, Rahul Gopakumar wrote:
> >> Here, do you mean it even costs more time with the patch applied?
>
> Yes, we ran it multiple times and it looks like there is a
> very minor increase with the patch.
>
......
> On 10/20/20 at 01:45pm, Rahul Gopakumar wrote:
> > Hi Baoquan,
> >
> > We had some trouble applying the patch to the problem commit and the latest upstream commit. Steven (CC'ed) helped us by providing an updated draft patch. We applied it on the latest commit (3e4fb4346c781068610d03c12b16c0cfb0fd24a3), and it doesn't look like it improves the performance numbers.
>
> Thanks for your feedback. From the code I am sure what the problem is,
> but I didn't test it on a system with huge memory. I forgot to mention
> that my draft patch is based on the akpm/master branch, since this is an
> mm issue; it might therefore be a little different from Linus's mainline
> kernel, sorry for the inconvenience.
>
> I will test and debug this on a server with 4T of memory in our lab, and
> update if there is any progress.
>
> >
> > Patch on latest commit - 20.161 secs
> > Vanilla latest commit - 19.50 secs
>
Can you tell me how you measure the boot time? I checked the boot logs you
attached; e.g. in the two logs below, I saw that patch_dmesg.log even takes
less time during memmap init. I have now got a machine with 1T of memory
for testing, but didn't see an obvious increase in time cost. Above, you
said "Patch on latest commit - 20.161 secs"; could you tell me where this
20.161 secs comes from, so that I can investigate and reproduce it on my
system?

patch_dmesg.log:
[ 0.023126] Initmem setup node 1 [mem 0x0000005600000000-0x000000aaffffffff]
[ 0.023128] On node 1 totalpages: 89128960
[ 0.023129] Normal zone: 1392640 pages used for memmap
[ 0.023130] Normal zone: 89128960 pages, LIFO batch:63
[ 0.023893] Initmem setup node 2 [mem 0x000000ab00000000-0x000001033fffffff]
[ 0.023895] On node 2 totalpages: 89391104
[ 0.023896] Normal zone: 1445888 pages used for memmap
[ 0.023897] Normal zone: 89391104 pages, LIFO batch:63
[ 0.026744] ACPI: PM-Timer IO Port: 0x448
[ 0.026747] ACPI: Local APIC address 0xfee00000

vanilla_dmesg.log:
[ 0.024295] Initmem setup node 1 [mem 0x0000005600000000-0x000000aaffffffff]
[ 0.024298] On node 1 totalpages: 89128960
[ 0.024299] Normal zone: 1392640 pages used for memmap
[ 0.024299] Normal zone: 89128960 pages, LIFO batch:63
[ 0.025289] Initmem setup node 2 [mem 0x000000ab00000000-0x000001033fffffff]
[ 0.025291] On node 2 totalpages: 89391104
[ 0.025292] Normal zone: 1445888 pages used for memmap
[ 0.025293] Normal zone: 89391104 pages, LIFO batch:63
[ 2.096982] ACPI: PM-Timer IO Port: 0x448
[ 2.096987] ACPI: Local APIC address 0xfee00000
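
For what it's worth, the rough comparison above is just the gap between the
timestamp of the last "LIFO batch" line and the following "ACPI: PM-Timer"
line in each snippet: about 0.003 secs in patch_dmesg.log vs about 2.07 secs
in vanilla_dmesg.log. A minimal sketch of that comparison (assuming the two
attachments are saved locally under those names, and treating that gap as a
proxy for the memmap init window, which is only a heuristic) could be:

import re

# Timestamp from the "[ <secs>]" prefix of a dmesg line.
TS = re.compile(r'^\[\s*([0-9.]+)\]')

def init_window(path):
    """Seconds between the last zone-setup ("LIFO batch") line and the
    first "ACPI: PM-Timer" line that follows it in a dmesg log."""
    last_zone = None
    with open(path) as f:
        for line in f:
            m = TS.match(line)
            if not m:
                continue
            t = float(m.group(1))
            if 'LIFO batch' in line:
                last_zone = t
            elif 'ACPI: PM-Timer' in line and last_zone is not None:
                return t - last_zone
    return None

for log in ('patch_dmesg.log', 'vanilla_dmesg.log'):
    print(log, init_window(log))

On the two snippets above this prints roughly 0.0028 for the patched log and
2.0717 for the vanilla one, which is why I said the patched kernel even looks
faster here.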