Message-ID: <CAEf4BzZV2Uks_iE5v+7fvQXBvnLgmDQGwn3Bh2+4T-XODxeRJQ@mail.gmail.com>
Date:   Fri, 6 Nov 2020 17:51:56 -0800
From:   Andrii Nakryiko <andrii.nakryiko@...il.com>
To:     Song Liu <songliubraving@...com>
Cc:     Andrii Nakryiko <andrii@...nel.org>, bpf <bpf@...r.kernel.org>,
        Networking <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...com>,
        Daniel Borkmann <daniel@...earbox.net>,
        Kernel Team <Kernel-team@...com>,
        open list <linux-kernel@...r.kernel.org>,
        "rafael@...nel.org" <rafael@...nel.org>,
        "jeyu@...nel.org" <jeyu@...nel.org>,
        Arnaldo Carvalho de Melo <acme@...hat.com>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>
Subject: Re: [PATCH v2 bpf-next 1/5] bpf: add in-kernel split BTF support

On Fri, Nov 6, 2020 at 5:28 PM Song Liu <songliubraving@...com> wrote:
>
>
>
> > On Nov 6, 2020, at 3:02 PM, Andrii Nakryiko <andrii@...nel.org> wrote:
> >
> > Adjust the in-kernel BTF implementation to support a split BTF mode of
> > operation. Changes mostly mirror the libbpf split BTF changes, with the
> > exception of start_id being 0 for the in-kernel implementation due to its
> > simpler read-only mode.
> >
> > Otherwise, the logic of jumping to the base BTF, where necessary, is
> > encapsulated in a few helper functions. Type numbering and string offsets
> > in a split BTF logically continue where the base BTF ends, so most of the
> > high-level logic is kept without changes.
> >
> > Type verification and size resolution only perform the added resolution of
> > new split BTF types and rely on the already cached size and type resolution
> > results in the base BTF.
> >
> > Signed-off-by: Andrii Nakryiko <andrii@...nel.org>
>
> [...]
>
> >
> > @@ -600,8 +618,15 @@ static const struct btf_kind_operations *btf_type_ops(const struct btf_type *t)
> >
> > static bool btf_name_offset_valid(const struct btf *btf, u32 offset)
> > {
> > -     return BTF_STR_OFFSET_VALID(offset) &&
> > -             offset < btf->hdr.str_len;
> > +     if (!BTF_STR_OFFSET_VALID(offset))
> > +             return false;
> > +again:
> > +     if (offset < btf->start_str_off) {
> > +             btf = btf->base_btf;
> > +             goto again;
>
> Can we do a while loop instead of "goto again;"?

yep, not sure why I went with goto...

while (offset < btf->start_str_off)
    btf = btf->base_btf;

Shorter.
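
For completeness, the whole helper would then read roughly like this
(untested sketch, using the same struct btf fields as in the patch:
base_btf, start_str_off, hdr.str_len):

static bool btf_name_offset_valid(const struct btf *btf, u32 offset)
{
	if (!BTF_STR_OFFSET_VALID(offset))
		return false;

	/* walk down to the BTF that actually owns this string offset */
	while (offset < btf->start_str_off)
		btf = btf->base_btf;

	offset -= btf->start_str_off;
	return offset < btf->hdr.str_len;
}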

>
> > +     }
> > +     offset -= btf->start_str_off;
> > +     return offset < btf->hdr.str_len;
> > }
> >
> > static bool __btf_name_char_ok(char c, bool first, bool dot_ok)
> > @@ -615,10 +640,25 @@ static bool __btf_name_char_ok(char c, bool first, bool dot_ok)
> >       return true;
> > }
> >
> > +static const char *btf_str_by_offset(const struct btf *btf, u32 offset)
> > +{
> > +again:
> > +     if (offset < btf->start_str_off) {
> > +             btf = btf->base_btf;
> > +             goto again;
> > +     }
>
> Maybe add a btf_find_base_btf(btf, offset) helper for this logic?

No strong feelings about this, but given it's a two-line loop, it might
not be worth it. I'd also need pretty verbose
btf_find_base_btf_for_str_offset() and
btf_find_base_btf_for_type_id() variants. I feel like the loop might
actually be less distracting.
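
Just to show what I mean, the string-offset helper would be something
like this (hypothetical, not in the patch):

static const struct btf *btf_find_base_btf_for_str_offset(const struct btf *btf,
							   u32 offset)
{
	while (offset < btf->start_str_off)
		btf = btf->base_btf;

	return btf;
}

and the callers would still have to do:

	btf = btf_find_base_btf_for_str_offset(btf, offset);
	offset -= btf->start_str_off;

so it doesn't buy much over the open-coded loop.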

>
> > +
> > +     offset -= btf->start_str_off;
> > +     if (offset < btf->hdr.str_len)
> > +             return &btf->strings[offset];
> > +
> > +     return NULL;
> > +}
> > +
>
> [...]
>
> > }
> >
> > const char *btf_name_by_offset(const struct btf *btf, u32 offset)
> > {
> > -     if (offset < btf->hdr.str_len)
> > -             return &btf->strings[offset];
> > -
> > -     return NULL;
> > +     return btf_str_by_offset(btf, offset);
> > }
>
> IIUC, btf_str_by_offset() and btf_name_by_offset() are identical. Can we
> just keep btf_name_by_offset()?

btf_str_by_offset() is static, so it should be inlinable, while
btf_name_by_offset() is a global function; I was worried about
regressing performance for __btf_name_valid() and
__btf_name_by_offset(). Premature optimization, you think?
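
To illustrate the concern (rough sketch, paraphrased bodies, not the
exact btf.c code): the internal hot paths keep calling the static
helper, which the compiler is free to inline within btf.c, while the
exported symbol stays a thin wrapper:

/* internal callers use the inlinable static helper directly */
static const char *__btf_name_by_offset(const struct btf *btf, u32 offset)
{
	if (!offset)
		return "(anon)";

	return btf_str_by_offset(btf, offset) ?: "(invalid-name-offset)";
}

/* exported entry point for callers outside btf.c */
const char *btf_name_by_offset(const struct btf *btf, u32 offset)
{
	return btf_str_by_offset(btf, offset);
}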

>
> >
> > const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id)
> > {
> > -     if (type_id > btf->nr_types)
> > -             return NULL;
> > +again:
> > +     if (type_id < btf->start_id) {
> > +             btf = btf->base_btf;
> > +             goto again;
> > +     }
>
> ditto, goto again..
>
> [...]
>
>
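
And re: the "ditto" above for btf_type_by_id() -- same while-loop
conversion there, e.g. (untested sketch; the tail of the function is
elided in the quote, so it is only indicated by a comment here):

const struct btf_type *btf_type_by_id(const struct btf *btf, u32 type_id)
{
	while (type_id < btf->start_id)
		btf = btf->base_btf;

	/* ... rest of the lookup as in the patch, indexing btf->types[]
	 * relative to btf->start_id ...
	 */
}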
