Message-ID: <87wmzwn1po.fsf@all.your.base.are.belong.to.us>
Date: Wed, 21 Jun 2023 19:51:15 +0200
From: Björn Töpel <bjorn@...nel.org>
To: Conor Dooley <conor@...nel.org>,
Palmer Dabbelt <palmer@...belt.com>
Cc: ndesaulniers@...gle.com, jszhang@...nel.org, llvm@...ts.linux.dev,
Paul Walmsley <paul.walmsley@...ive.com>,
aou@...s.berkeley.edu, Arnd Bergmann <arnd@...db.de>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org
Subject: Re: [PATCH v2 0/4] riscv: enable HAVE_LD_DEAD_CODE_DATA_ELIMINATION

Conor Dooley <conor@...nel.org> writes:

[...]
>> So I'm no longer actually sure there's a hang, just something slow.
>> That's even more of a grey area, but I think it's sane to call a 1-hour
>> link time a regression -- unless it's expected that this is just very
>> slow to link?
>
> I dunno, if it was only a thing for allyesconfig, then whatever - but
> it's gonna significantly increase build times for any large kernel if LLD
> is this much slower than LD. Regression in my book.
>
> I'm gonna go and experiment with mixed toolchain builds, I'll report
> back..

I took palmer/for-next (1bd2963b2175 ("Merge patch series "riscv: enable
HAVE_LD_DEAD_CODE_DATA_ELIMINATION"")) for a tuxmake build with llvm-16:

| ~/src/tuxmake/run -v --wrapper ccache --target-arch riscv \
| --toolchain=llvm-16 --runtime docker --directory . -k \
| allyesconfig

Took forever, but passed after 2.5h.

CONFIG_CC_VERSION_TEXT="Debian clang version 16.0.6 (++20230610113307+7cbf1a259152-1~exp1~20230610233402.106)"
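
(To see how much of that wall time is the final link rather than
compilation, a rough, untested sketch would be to remove vmlinux in the
already-built tree and time only the vmlinux target, which re-runs the
final link (plus the kallsyms/BTF passes) without recompiling objects:

| # in the built tree: re-runs only the link step, no recompilation
| rm -f vmlinux
| time make ARCH=riscv LLVM=1 -j$(nproc) vmlinux

That is the plain make equivalent; inside the tuxmake/docker run the
invocation would of course look different.)
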
Björn