Message-ID: <mhng-3f62072d-7b42-4fa0-9076-3899054749cc@palmer-ri-x1c9a>
Date: Thu, 22 Jun 2023 14:42:04 -0700 (PDT)
From: Palmer Dabbelt <palmer@...belt.com>
To: ndesaulniers@...gle.com
CC: nathan@...nel.org, bjorn@...nel.org,
Conor Dooley <conor@...nel.org>, jszhang@...nel.org,
llvm@...ts.linux.dev, Paul Walmsley <paul.walmsley@...ive.com>,
aou@...s.berkeley.edu, Arnd Bergmann <arnd@...db.de>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org
Subject: Re: [PATCH v2 0/4] riscv: enable HAVE_LD_DEAD_CODE_DATA_ELIMINATION
On Thu, 22 Jun 2023 14:40:59 PDT (-0700), ndesaulniers@...gle.com wrote:
> On Wed, Jun 21, 2023 at 12:46 PM Palmer Dabbelt <palmer@...belt.com> wrote:
>>
>> On Wed, 21 Jun 2023 11:19:31 PDT (-0700), Palmer Dabbelt wrote:
>> > On Wed, 21 Jun 2023 10:51:15 PDT (-0700), bjorn@...nel.org wrote:
>> >> Conor Dooley <conor@...nel.org> writes:
>> >>
>> >> [...]
>> >>
>> >>>> So I'm no longer actually sure there's a hang, just something slow.
>> >>>> That's even more of a grey area, but I think it's sane to call a 1-hour
>> >>>> link time a regression -- unless it's expected that this is just very
>> >>>> slow to link?
>> >>>
>> >>> I dunno, if it was only a thing for allyesconfig, then whatever - but
>> >>> it's gonna significantly increase build times for any large kernel if LLD
>> >>> is this much slower than LD. Regression in my book.
>> >>>
>> >>> I'm gonna go and experiment with mixed toolchain builds; I'll report
>> >>> back...
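>> >>>
>> >>> (For anyone following along: kbuild lets you swap out just the linker,
>> >>> so the slowdown can be pinned on LLD vs LD directly. A minimal sketch,
>> >>> assuming a riscv64-linux-gnu binutils is on PATH:
>> >>>
>> >>> | # clang/LLVM toolchain, but GNU ld instead of LLD for linking
>> >>> | make ARCH=riscv LLVM=1 LD=riscv64-linux-gnu-ld allyesconfig all
>> >>>
>> >>> LLVM=1 selects the LLVM tools wholesale, and the explicit LD=
>> >>> override then puts GNU ld back.)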
>> >>
>> >> I took palmer/for-next (1bd2963b2175 ("Merge patch series "riscv: enable
>> >> HAVE_LD_DEAD_CODE_DATA_ELIMINATION"")) for a tuxmake build with llvm-16:
>> >>
>> >> | ~/src/tuxmake/run -v --wrapper ccache --target-arch riscv \
>> >> | --toolchain=llvm-16 --runtime docker --directory . -k \
>> >> | allyesconfig
>> >>
>> >> Took forever, but passed after 2.5h.
>> >
>> > Thanks. I just re-ran my 17/trunk LLD under time (rather than just
>> > checking top sometimes); it's at 1.5h, but even that seems quite long.
>> >
>> > I guess this is sort of up to the LLVM folks: if it's expected that DCE
>> > makes linking take this long then I'm not opposed to allowing it, but
>> > if this is likely a bug in LLD then it seems best to turn it off until
>> > we sort things out over there.
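>> >
>> > (For context: with CONFIG_LD_DEAD_CODE_DATA_ELIMINATION=y, kbuild
>> > roughly does the following, paraphrasing the top-level Makefile:
>> >
>> > | KBUILD_CFLAGS_KERNEL += -ffunction-sections -fdata-sections
>> > | LDFLAGS_vmlinux += --gc-sections
>> >
>> > so every function and data object gets its own section, and the
>> > linker has to process and garbage-collect a huge number of them,
>> > which is presumably where an allyesconfig link spends its time.)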
>> >
>> > I think maybe Nick or Nathan is the best bet to know?
>>
>> Looks like it's about 2h for me. I'm going to drop these from my
>> staging tree in the interest of making progress on other stuff, but if
>> this is just expected behavior then I'm OK taking them (though that's
>> too much compute for me to test regularly):
>>
>> $ time ../../../../llvm/install/bin/ld.lld -melf64lriscv -z noexecstack -r -o vmlinux.o --whole-archive vmlinux.a --no-whole-archive --start-group ./drivers/firmware/efi/libstub/lib.a --end-group
>>
>> real 111m50.678s
>> user 111m18.739s
>> sys 1m13.147s
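>>
>> (In case it helps narrow this down: recent ld.lld can profile itself;
>> a minimal sketch, assuming a build new enough to support --time-trace:
>>
>> $ ld.lld --time-trace -melf64lriscv -z noexecstack -r -o vmlinux.o \
>>       --whole-archive vmlinux.a --no-whole-archive \
>>       --start-group ./drivers/firmware/efi/libstub/lib.a --end-group
>>
>> That emits a Chrome-trace JSON that loads into chrome://tracing and
>> should show which pass the time is going to.)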
>
> Ah, I think you meant s/allmodconfig/allyesconfig/ in your initial
> report. That makes more sense, and I can reproduce. Let me work on a
> report.
Awesome, thanks!
>
>>
>> >> CONFIG_CC_VERSION_TEXT="Debian clang version 16.0.6 (++20230610113307+7cbf1a259152-1~exp1~20230610233402.106)"
>> >>
>> >>
>> >> Björn