Message-ID: <c5a2c9fc-e541-7a0d-daa8-5af802f8336d@fb.com>
Date:   Mon, 7 Jun 2021 08:42:46 -0700
From:   Yonghong Song <yhs@...com>
To:     Arnaldo Carvalho de Melo <acme@...nel.org>,
        Andrii Nakryiko <andrii.nakryiko@...il.com>
CC:     Andrii Nakryiko <andrii@...nel.org>, Jiri Olsa <jolsa@...nel.org>,
        <dwarves@...r.kernel.org>, bpf <bpf@...r.kernel.org>,
        Kernel Team <kernel-team@...com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Parallelizing vmlinux BTF encoding. was Re: [RFT] Testing 1.22



On 6/7/21 6:20 AM, Arnaldo Carvalho de Melo wrote:
> On Fri, Jun 04, 2021 at 07:55:17PM -0700, Andrii Nakryiko wrote:
>> On Thu, Jun 3, 2021 at 7:57 AM Arnaldo Carvalho de Melo <acme@...nel.org> wrote:
>>> On Sat, May 29, 2021 at 05:40:17PM -0700, Andrii Nakryiko wrote:
> 
>>>> At some point it probably would make sense to formalize
>>>> "btf_encoder" as a struct with its own state instead of passing in
>>>> multiple variables. It would probably also
> 
>>> Take a look at the tmp.master branch at:
> 
>>> https://git.kernel.org/pub/scm/devel/pahole/pahole.git/log/?h=tmp.master
>   
>> Oh wow, that's a lot of commits! :) Great that you decided to do this
>> refactoring, thanks!
>   
>>> that btf_elf class isn't used anymore by btf_loader, which uses only
>>> libbpf's APIs, and now we have a btf_encoder class with all the globals,
>>> etc.; more baby steps are needed to finally ditch btf_elf altogether and
>>> move on to the parallelization.
>   
>> So do you plan to try to parallelize as a next step? I'm pretty
> 
> So, I haven't looked at the details, but what I thought would be
> interesting to investigate is whether we can piggyback BTF generation on
> DWARF generation, i.e. when we generate a .o file with -g we encode the
> DWARF info, and right after that we could call pahole as-is and encode
> BTF; then, when vmlinux is linked, we would do the dedup.
> 
> I.e. when generating ../build/v5.13.0-rc4+/kernel/fork.o, that comes
> with:
> 
> ⬢[acme@...lbox perf]$ readelf -SW ../build/v5.13.0-rc4+/kernel/fork.o | grep debug
>    [78] .debug_info       PROGBITS        0000000000000000 00daec 032968 00      0   0  1
>    [79] .rela.debug_info  RELA            0000000000000000 040458 053b68 18   I 95  78  8
>    [80] .debug_abbrev     PROGBITS        0000000000000000 093fc0 0012e9 00      0   0  1
>    [81] .debug_loclists   PROGBITS        0000000000000000 0952a9 00aa43 00      0   0  1
>    [82] .rela.debug_loclists RELA         0000000000000000 09fcf0 009d98 18   I 95  81  8
>    [83] .debug_aranges    PROGBITS        0000000000000000 0a9a88 000080 00      0   0  1
>    [84] .rela.debug_aranges RELA          0000000000000000 0a9b08 0000a8 18   I 95  83  8
>    [85] .debug_rnglists   PROGBITS        0000000000000000 0a9bb0 001509 00      0   0  1
>    [86] .rela.debug_rnglists RELA         0000000000000000 0ab0c0 001bc0 18   I 95  85  8
>    [87] .debug_line       PROGBITS        0000000000000000 0acc80 0086b7 00      0   0  1
>    [88] .rela.debug_line  RELA            0000000000000000 0b5338 002550 18   I 95  87  8
>    [89] .debug_str        PROGBITS        0000000000000000 0b7888 0177ad 01  MS  0   0  1
>    [90] .debug_line_str   PROGBITS        0000000000000000 0cf035 001308 01  MS  0   0  1
>    [93] .debug_frame      PROGBITS        0000000000000000 0d0370 000e38 00      0   0  8
>    [94] .rela.debug_frame RELA            0000000000000000 0d11a8 000e70 18   I 95  93  8
> ⬢[acme@...lbox perf]$
> 
> We would do:
> 
> ⬢[acme@...lbox perf]$ pahole -J ../build/v5.13.0-rc4+/kernel/fork.o
> ⬢[acme@...lbox perf]$
> 
> Which would get us to have:
> 
> ⬢[acme@...lbox perf]$ readelf -SW ../build/v5.13.0-rc4+/kernel/fork.o | grep BTF
>    [103] .BTF              PROGBITS        0000000000000000 0db658 030550 00      0   0  1
> ⬢[acme@...lbox perf]
> 
> ⬢[acme@...lbox perf]$ pahole -F btf -C hlist_node ../build/v5.13.0-rc4+/kernel/fork.o
> struct hlist_node {
> 	struct hlist_node *        next;                 /*     0     8 */
> 	struct hlist_node * *      pprev;                /*     8     8 */
> 
> 	/* size: 16, cachelines: 1, members: 2 */
> 	/* last cacheline: 16 bytes */
> };
> ⬢[acme@...lbox perf]$
> 
> So, a 'pahole --dedup_btf vmlinux' would just go on looking at:
> 
> ⬢[acme@...lbox perf]$ readelf -wi ../build/v5.13.0-rc4+/vmlinux | grep -A10 DW_TAG_compile_unit | grep -w DW_AT_name | grep fork
>      <f220eb>   DW_AT_name        : (indirect line string, offset: 0x62e7): /var/home/acme/git/linux/kernel/fork.c
> 
> To go there and go on extracting those ELF sections to combine and
> dedup.
> 
> This combining could even be done by the linker, I think: when all
> the DWARF data in the .o files are combined into vmlinux, we could do it
> for the .BTF sections as well; that way would be even more elegant, I

The linker will do the combining. It should just concatenate all the
.BTF sections together, like:
    .BTF section
       .BTF data from file 1
       .BTF data from file 2
       ...
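
For what it's worth, here is a minimal sketch of how a consumer could
split such a concatenated .BTF section back into per-object BTFs with
libbpf. This is purely an assumption about the layout sketched above
(no linker or tool produces or parses .BTF this way today), and the
helper name and callback are hypothetical:

#include <stddef.h>
#include <linux/btf.h>  /* struct btf_header, BTF_MAGIC */
#include <bpf/btf.h>    /* libbpf's struct btf API */

/* Hypothetical: walk a buffer that is just several raw BTF blobs glued
 * together, as in the ".BTF data from file 1/2/..." layout above, and
 * hand each blob to the callback as a parsed struct btf. */
static int for_each_btf_blob(const void *data, size_t size,
                             int (*cb)(struct btf *btf, void *ctx), void *ctx)
{
    const char *p = data, *end = p + size;

    while (p + sizeof(struct btf_header) <= end) {
        const struct btf_header *hdr = (const struct btf_header *)p;
        size_t types_end, strs_end, blob_sz;
        struct btf *btf;
        int err;

        if (hdr->magic != BTF_MAGIC)
            return -1;
        /* one blob = header + type section + string section */
        types_end = hdr->type_off + hdr->type_len;
        strs_end = hdr->str_off + hdr->str_len;
        blob_sz = hdr->hdr_len + (types_end > strs_end ? types_end : strs_end);
        if (p + blob_sz > end)
            return -1;

        btf = btf__new(p, blob_sz);
        if (!btf)
            return -1;
        err = cb(btf, ctx);
        btf__free(btf);
        if (err)
            return err;
        p += blob_sz;
    }
    return 0;
}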

> think. Then, the combined vmlinux .BTF section would be read and fed in
> one go to libbtf's dedup arg.

I think this should work based on today's implementation, but we do have
a caveat here.

The issue is related to DATASECs. In a DATASEC we encode the section
offset for each variable. These section offsets should be relocated
during the linking stage, but currently pahole does not generate
relocations for such variables, so the linker will ignore them.

This shouldn't be an issue for global variables, as we can find their
names in the VARs and look up the final symbol table for their section
offsets.

But this might be an issue for static variables with the same name:
just matching names in the VARs is not enough, as there may be multiple
entries with the same name in the symbol table. We could have a
workaround though, e.g., rename all static variables to a unique name
like <file_name>.[<func_name>.]<var_name> and go to DWARF to find each
static variable's offset; DWARF should have the static variable section
offsets properly relocated.

Another solution is for pahole to generate a .rel.BTF section which
encodes the relocations.

Currently we don't emit static variables in vmlinux BTF (only percpu
globals), but I am not sure whether that will remain the case in the
future.
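
To make that caveat concrete, here is roughly how a VAR and its DATASEC
entry are encoded with libbpf's BTF writer APIs. The variable name,
section size and the 0x100 offset are made up for illustration; the
point is that the offset stored in the DATASEC is a section offset with
no relocation attached to it:

#include <bpf/btf.h>
#include <linux/btf.h>  /* BTF_VAR_GLOBAL_ALLOCATED */

/* Illustration only: encode one per-cpu variable the way the DATASEC
 * discussion above describes.  Assumes type id 1 already exists in
 * 'btf' (say, 'int'). */
static int encode_percpu_var_example(struct btf *btf)
{
    int var_id, sec_id;

    /* BTF_KIND_VAR: name + linkage + type id */
    var_id = btf__add_var(btf, "my_percpu_counter",
                          BTF_VAR_GLOBAL_ALLOCATED, 1);
    if (var_id < 0)
        return var_id;

    /* BTF_KIND_DATASEC for the section the variable lives in */
    sec_id = btf__add_datasec(btf, ".data..percpu", 4096);
    if (sec_id < 0)
        return sec_id;

    /* 0x100 is the variable's offset within the section at .o time;
     * after the .o files are linked into vmlinux it is generally
     * stale unless something fixes it up (a .rel.BTF section, or a
     * by-name lookup in the final symbol table). */
    return btf__add_datasec_var_info(btf, var_id, 0x100, 4);
}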

> 
> This way the encoding of BTF would be as parallelized as the kernel build
> process, following the same logic (-j NR_PROCESSORS).
> 
> wdyt?
> 
> If this isn't the case, we can process vmlinux as is today and go on
> creating N threads and feeding each with a DW_TAG_compile_unit
> "container", i.e. each thread would consume all the tags below each
> DW_TAG_compile_unit and produce a foo.BTF file that in the end would be
> combined and deduped by libbpf.
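
A rough sketch of what that per-CU fallback could look like is below.
The struct cu handle and encode_cu_btf() are hypothetical stand-ins for
the DWARF-loading side (this is not how pahole is structured today);
only the threading skeleton is the point:

#include <pthread.h>
#include <stdatomic.h>
#include <bpf/btf.h>

struct cu;                                        /* hypothetical per-CU handle */
extern struct btf *encode_cu_btf(struct cu *cu);  /* hypothetical: one CU -> BTF */

struct btf_job {
    struct cu **cus;       /* all DW_TAG_compile_units in vmlinux */
    int nr_cus;
    atomic_int next;       /* shared work index, starts at 0 */
    struct btf **results;  /* one BTF per CU, filled in by the workers */
};

static void *btf_worker(void *arg)
{
    struct btf_job *job = arg;
    int i;

    while ((i = atomic_fetch_add(&job->next, 1)) < job->nr_cus)
        job->results[i] = encode_cu_btf(job->cus[i]);
    return NULL;
}

/* Spawn nr_threads workers (the "-j NR_PROCESSORS" idea); the per-CU
 * BTFs left in job->results are then merged and deduped in one final
 * single-threaded pass. */
static int encode_all_cus(struct btf_job *job, int nr_threads)
{
    pthread_t tids[nr_threads];
    int i;

    for (i = 0; i < nr_threads; i++)
        if (pthread_create(&tids[i], NULL, btf_worker, job))
            return -1;
    for (i = 0; i < nr_threads; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
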
> 
> Doing it as in my first sketch above would take advantage of locality of
> reference, i.e. the DWARF data would be freshly produced and in the
> cache hierarchy when we first encode BTF; later, when doing the
> combine+dedup, we wouldn't be touching the more voluminous DWARF data.
> 
> - Arnaldo
> 
>> confident about the BTF encoding part: dump each CU into its own BTF,
>> then use btf__add_type() to merge the multiple BTFs together. We just
>> need to re-map IDs (libbpf internally has an API to visit each field
>> that contains a type_id; it's well-defined enough to expose that as a
>> public API, if necessary). Then a final btf_dedup().
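
FWIW, with a recent enough libbpf that merge + dedup step can be
sketched roughly as below. btf__add_btf() (which copies all types from
a source BTF and remaps their type IDs) and the two-argument
btf__dedup() are assumptions about the libbpf API used here; pahole's
eventual code may of course look different:

#include <bpf/btf.h>

/* Merge per-CU BTFs into a single object and dedup it. */
static struct btf *merge_and_dedup(struct btf **cu_btfs, int nr_cus)
{
    struct btf *merged = btf__new_empty();
    int i;

    if (!merged)
        return NULL;

    for (i = 0; i < nr_cus; i++) {
        /* appends all types from cu_btfs[i], remapping type IDs */
        if (btf__add_btf(merged, cu_btfs[i]) < 0)
            goto err;
    }
    /* final dedup pass over the merged type graph */
    if (btf__dedup(merged, NULL))
        goto err;
    return merged;
err:
    btf__free(merged);
    return NULL;
}
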
>   
>> But the DWARF loading and parsing part is almost a black box to me, so
>> I'm not sure how much work it would involve.
> 
>>> I'm doing 'pahole -J vmlinux && btfdiff' after each cset and doing it
>>> very piecemeal as I'm doing will help bisecting any subtle bug this may
>>> introduce.
> 
>>>> allow parallelizing BTF generation, where each CU would proceed in
>>>> parallel generating local BTF, and then the final pass would merge and
>>>> dedup BTFs. Currently reading and processing DWARF is the slowest part
>>>> of the DWARF-to-BTF conversion, parallelization and maybe some other
>>>> optimization seems like the only way to speed the process up.
> 
>>>> Acked-by: Andrii Nakryiko <andrii@...nel.org>
> 
>>> Thanks!
