Message-ID: <mhng-ad2d02fa-2d4d-4bf1-ab2a-fd84fa4bcb40@palmer-ri-x1c9a>
Date:   Wed, 21 Jun 2023 12:46:21 -0700 (PDT)
From:   Palmer Dabbelt <palmer@...belt.com>
To:     ndesaulniers@...gle.com, nathan@...nel.org
CC:     bjorn@...nel.org, Conor Dooley <conor@...nel.org>,
        jszhang@...nel.org, llvm@...ts.linux.dev,
        Paul Walmsley <paul.walmsley@...ive.com>,
        aou@...s.berkeley.edu, Arnd Bergmann <arnd@...db.de>,
        linux-riscv@...ts.infradead.org, linux-kernel@...r.kernel.org,
        linux-arch@...r.kernel.org
Subject:     Re: [PATCH v2 0/4] riscv: enable HAVE_LD_DEAD_CODE_DATA_ELIMINATION

On Wed, 21 Jun 2023 11:19:31 PDT (-0700), Palmer Dabbelt wrote:
> On Wed, 21 Jun 2023 10:51:15 PDT (-0700), bjorn@...nel.org wrote:
>> Conor Dooley <conor@...nel.org> writes:
>>
>> [...]
>>
>>>> So I'm no longer actually sure there's a hang, just something slow.
>>>> That's even more of a grey area, but I think it's sane to call a 1-hour
>>>> link time a regression -- unless it's expected that this is just very
>>>> slow to link?
>>>
>>> I dunno, if it was only a thing for allyesconfig, then whatever - but
>>> it's gonna significantly increase build times for any large kernels if LLD
>>> is this much slower than LD. Regression in my book.
>>>
>>> I'm gonna go and experiment with mixed toolchain builds, I'll report
>>> back..
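(For reference, a mixed-toolchain build of the kind Conor mentions can usually
be done by only swapping the compiler; the invocation below is my own sketch,
not something from this thread:

    make ARCH=riscv CROSS_COMPILE=riscv64-linux-gnu- CC=clang allyesconfig all

i.e. clang as the compiler but the binutils from CROSS_COMPILE (GNU ld) for the
links, which should isolate whether the extra time really is in LLD.)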
>>
>> I took palmer/for-next (1bd2963b2175 ("Merge patch series "riscv: enable
>> HAVE_LD_DEAD_CODE_DATA_ELIMINATION"")) for a tuxmake build with llvm-16:
>>
>>   | ~/src/tuxmake/run -v --wrapper ccache --target-arch riscv \
>>   |     --toolchain=llvm-16 --runtime docker --directory . -k \
>>   |     allyesconfig
>>
>> Took forever, but passed after 2.5h.
>
> Thanks.  I just re-ran mine with 17/trunk LLD under time (rather than just
> checking top occasionally); it's at 1.5h, but even that seems quite long.
>
> I guess this is sort of up to the LLVM folks: if it's expected that DCE
> takes a very long time to link then I'm not opposed to allowing it, but
> if this is probably a bug in LLD then it seems best to turn it off until
> we sort things out over there.
>
> I think maybe Nick or Nathan is the best bet to know?
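(For context, what enabling LD_DEAD_CODE_DATA_ELIMINATION turns on is roughly
this hunk in the top-level Makefile -- quoted from memory, so treat it as a
sketch rather than the exact code:

    ifdef CONFIG_LD_DEAD_CODE_DATA_ELIMINATION
    KBUILD_CFLAGS_KERNEL += -ffunction-sections -fdata-sections
    LDFLAGS_vmlinux += --gc-sections
    endif

so an allyesconfig build hands the linker an enormous number of tiny
per-function/per-data sections, which is presumably where the extra time goes.)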

Looks like it's about 2h for me.  I'm going to drop these from my 
staging tree in the interest of making progress on other stuff, but if 
this is just expected behavior then I'm OK taking them (though that's 
too much compute for me to test regularly):

$ time ../../../../llvm/install/bin/ld.lld -melf64lriscv -z noexecstack -r -o vmlinux.o --whole-archive vmlinux.a --no-whole-archive --start-group ./drivers/firmware/efi/libstub/lib.a --end-group

real    111m50.678s
user    111m18.739s
sys     1m13.147s
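(If it helps narrow down whether this is an LLD bug, the same link could be
re-run with LLD's time-trace support, e.g.:

    time ld.lld -melf64lriscv -z noexecstack -r --time-trace \
        -o vmlinux.o --whole-archive vmlinux.a --no-whole-archive \
        --start-group ./drivers/firmware/efi/libstub/lib.a --end-group

which, if I remember the flag right, writes a Chrome-trace-format JSON next to
the output showing where the link time goes; --time-trace is from my
recollection of recent LLD, so worth double-checking against the 17/trunk
build.)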

>> CONFIG_CC_VERSION_TEXT="Debian clang version 16.0.6 (++20230610113307+7cbf1a259152-1~exp1~20230610233402.106)"
>>
>>
>> Björn
