Message-ID: <20230622215327.GA1135447@dev-arch.thelio-3990X>
Date: Thu, 22 Jun 2023 21:53:27 +0000
From: Nathan Chancellor <nathan@...nel.org>
To: Palmer Dabbelt <palmer@...belt.com>
Cc: bjorn@...nel.org, ndesaulniers@...gle.com,
Conor Dooley <conor@...nel.org>, jszhang@...nel.org,
llvm@...ts.linux.dev, Paul Walmsley <paul.walmsley@...ive.com>,
aou@...s.berkeley.edu, Arnd Bergmann <arnd@...db.de>,
linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org
Subject: Re: [PATCH v2 0/4] riscv: enable HAVE_LD_DEAD_CODE_DATA_ELIMINATION

On Wed, Jun 21, 2023 at 11:19:31AM -0700, Palmer Dabbelt wrote:
> On Wed, 21 Jun 2023 10:51:15 PDT (-0700), bjorn@...nel.org wrote:
> > Conor Dooley <conor@...nel.org> writes:
> >
> > [...]
> >
> > > > So I'm no longer actually sure there's a hang, just something
> > > > slow. That's even more of a grey area, but I think it's sane to
> > > > call a 1-hour link time a regression -- unless it's expected
> > > > that this is just very slow to link?
> > >
> > > I dunno, if it was only a thing for allyesconfig, then whatever - but
> > > it's gonna significantly increase build times for any large kernels if LLD
> > > is this much slower than LD. Regression in my book.
> > >
> > > I'm gonna go and experiment with mixed toolchain builds, I'll report
> > > back..
> >
> > I took palmer/for-next (1bd2963b2175 ("Merge patch series "riscv: enable
> > HAVE_LD_DEAD_CODE_DATA_ELIMINATION"")) for a tuxmake build with llvm-16:
> >
> > | ~/src/tuxmake/run -v --wrapper ccache --target-arch riscv \
> > | --toolchain=llvm-16 --runtime docker --directory . -k \
> > | allyesconfig
> >
> > Took forever, but passed after 2.5h.
>
> Thanks. I just re-ran mine with 17/trunk LLD under time (rather than
> just checking top sometimes), it's at 1.5h but even that seems quite long.
>
> I guess this is sort of up to the LLVM folks: if it's expected that DCE
> takes a very long time to link then I'm not opposed to allowing it, but if
> this is probably a bug in LLD then it seems best to turn it off until we
> sort things out over there.
>
> I think maybe Nick or Nathan is the best bet to know?

I can confirm a regression with allyesconfig but not allmodconfig using
LLVM 16.0.6 on my 80-core Ampere Altra system.

allmodconfig: 8m 4s
allmodconfig + CONFIG_LD_DEAD_CODE_DATA_ELIMINATION=n: 7m 4s
allyesconfig: 1h 58m 30s
allyesconfig + CONFIG_LD_DEAD_CODE_DATA_ELIMINATION=n: 12m 41s

I am sure there is something that ld.lld could do better, given that
GNU ld does not have any problems, as established earlier, so that
should definitely be explored further. I see Nick has already responded
about writing up a report (I wrote most of this before that email, so I
am still sending this one).

However, allyesconfig is pretty special and not really indicative of a
"real world" kernel build in my opinion; such a build will either be a
fully modular kernel to allow use on a wide range of hardware or a
monolithic kernel with just the drivers needed for a specific platform,
which will be much smaller than allyesconfig. allyesconfig has given us
problems with large kernels on other architectures before.
CONFIG_LD_DEAD_CODE_DATA_ELIMINATION is already marked with 'depends on
EXPERT' and its help text mentions its perils, so it does not seem
unreasonable to me to add an additional dependency on !COMPILE_TEST so
that allmodconfig and allyesconfig cannot flip this on, something like
the following perhaps?

diff --git a/init/Kconfig b/init/Kconfig
index 32c24950c4ce..25434cbd2a6e 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1388,7 +1388,7 @@ config HAVE_LD_DEAD_CODE_DATA_ELIMINATION
 config LD_DEAD_CODE_DATA_ELIMINATION
 	bool "Dead code and data elimination (EXPERIMENTAL)"
 	depends on HAVE_LD_DEAD_CODE_DATA_ELIMINATION
-	depends on EXPERT
+	depends on EXPERT && !COMPILE_TEST
 	depends on $(cc-option,-ffunction-sections -fdata-sections)
 	depends on $(ld-option,--gc-sections)
 	help

If applying that dependency to all architectures is too much, the
selection in arch/riscv/Kconfig could be gated on the same condition.
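
For illustration, a rough sketch of what that alternative could look
like, assuming the series adds a plain select under config RISCV (the
exact context line is a guess on my part, so treat this as untested):

diff --git a/arch/riscv/Kconfig b/arch/riscv/Kconfig
--- a/arch/riscv/Kconfig
+++ b/arch/riscv/Kconfig
@@ ... @@ config RISCV
-	select HAVE_LD_DEAD_CODE_DATA_ELIMINATION
+	# Hypothetical: keep DCE out of COMPILE_TEST (allmod/allyesconfig) builds
+	select HAVE_LD_DEAD_CODE_DATA_ELIMINATION if !COMPILE_TEST

With COMPILE_TEST=y (which both allmodconfig and allyesconfig enable),
HAVE_LD_DEAD_CODE_DATA_ELIMINATION would then not be selected, so
LD_DEAD_CODE_DATA_ELIMINATION stays unavailable for those builds while
regular EXPERT configurations keep the option.
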
Cheers,
Nathan