Message-ID: <2b66d64c-3398-44e0-897e-39dce82a6935@linux.dev>
Date:   Sun, 26 Nov 2023 21:20:30 -0800
From:   Yonghong Song <yonghong.song@...ux.dev>
To:     Eduard Zingerman <eddyz87@...il.com>,
        Alexei Starovoitov <alexei.starovoitov@...il.com>
Cc:     Daniel Xu <dxu@...uu.xyz>, Shuah Khan <shuah@...nel.org>,
        Daniel Borkmann <daniel@...earbox.net>,
        Andrii Nakryiko <andrii@...nel.org>,
        Alexei Starovoitov <ast@...nel.org>,
        Steffen Klassert <steffen.klassert@...unet.com>,
        antony.antony@...unet.com, Mykola Lysenko <mykolal@...com>,
        Martin KaFai Lau <martin.lau@...ux.dev>,
        Song Liu <song@...nel.org>,
        John Fastabend <john.fastabend@...il.com>,
        KP Singh <kpsingh@...nel.org>,
        Stanislav Fomichev <sdf@...gle.com>,
        Hao Luo <haoluo@...gle.com>, Jiri Olsa <jolsa@...nel.org>,
        bpf <bpf@...r.kernel.org>,
        "open list:KERNEL SELFTEST FRAMEWORK" 
        <linux-kselftest@...r.kernel.org>,
        LKML <linux-kernel@...r.kernel.org>, devel@...ux-ipsec.org,
        Network Development <netdev@...r.kernel.org>
Subject: Re: [PATCH ipsec-next v1 6/7] bpf: selftests: test_tunnel: Disable
 CO-RE relocations


On 11/26/23 3:14 PM, Eduard Zingerman wrote:
> On Sat, 2023-11-25 at 20:22 -0800, Yonghong Song wrote:
> [...]
>> --- a/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
>> +++ b/tools/testing/selftests/bpf/progs/test_tunnel_kern.c
>> @@ -6,7 +6,10 @@
>>     * modify it under the terms of version 2 of the GNU General Public
>>     * License as published by the Free Software Foundation.
>>     */
>> -#define BPF_NO_PRESERVE_ACCESS_INDEX
>> +#if __has_attribute(preserve_static_offset)
>> +struct __attribute__((preserve_static_offset)) erspan_md2;
>> +struct __attribute__((preserve_static_offset)) erspan_metadata;
>> +#endif
>>    #include "vmlinux.h"
> [...]
>>    int bpf_skb_get_fou_encap(struct __sk_buff *skb_ctx,
>> @@ -174,9 +177,13 @@ int erspan_set_tunnel(struct __sk_buff *skb)
>>           __u8 hwid = 7;
>>    
>>           md.version = 2;
>> +#if __has_attribute(preserve_static_offset)
>>           md.u.md2.dir = direction;
>>           md.u.md2.hwid = hwid & 0xf;
>>           md.u.md2.hwid_upper = (hwid >> 4) & 0x3;
>> +#else
>> +       /* Change bit-field store to byte(s)-level stores. */
>> +#endif
>>    #endif
>>    
>>           ret = bpf_skb_set_tunnel_opt(skb, &md, sizeof(md));
>>
>> ====
>>
>> Eduard, could you double check whether this is a valid use case
>> for solving this kind of issue with the preserve_static_offset attribute?
> Tbh I'm not sure. This test passes with preserve_static_offset
> because it suppresses preserve_access_index. In general clang
> translates bitfield access to a set of IR statements like:
>
>    C:
>      struct foo {
>        unsigned _;
>        unsigned a:1;
>        ...
>      };
>      ... foo->a ...
>
>    IR:
>      %a = getelementptr inbounds %struct.foo, ptr %0, i32 0, i32 1
>      %bf.load = load i8, ptr %a, align 4
>      %bf.clear = and i8 %bf.load, 1
>      %bf.cast = zext i8 %bf.clear to i32
>
> With preserve_static_offset the getelementptr+load are replaced by a
> single statement which is preserved as-is until code generation,
> thus the load with align 4 is preserved.
>
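
IIUC, with the attribute the access is instead kept as a single call to
the BPF-specific getelementptr.and.load intrinsic until the backend
expands it, schematically something like this (argument list abbreviated,
not the exact signature):

     %bf.load = call i8 @llvm.bpf.getelementptr.and.load.i8(ptr %0, ...)
     ; the field offset and the align info travel with the call

so the align-4 byte load survives the middle-end passes unchanged.
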
> On the other hand, I'm not sure that clang guarantees that loads or
> stores used for bitfield access would always be aligned according to
> verifier expectations.

I think it should be true. The frontend does alignment analysis based on
the types (and packed vs. unpacked) and assigns each load/store its
proper alignment (like 'align 4' in the above). 'align 4' truly means
the load itself is 4-byte aligned. Otherwise, it would be very confusing
for arches which do not support unaligned memory access (e.g. BPF).
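
As an aside, if the attribute route turns out to be problematic, the
empty #else branch in the diff above could be filled with plain byte
stores instead. A rough, untested sketch, assuming the little-endian
bit-field layout of struct erspan_md2 (hwid_upper:2/ft:5/p:1 in the
byte at offset 6, and o:1/gra:2/dir:1/hwid:4 in the byte at offset 7):

        __u8 *md2 = (__u8 *)&md.u.md2;

        /* hwid_upper occupies bits 0-1 of the byte at offset 6 */
        md2[6] = (md2[6] & ~0x3) | ((hwid >> 4) & 0x3);
        /* dir is bit 3 and hwid bits 4-7 of the byte at offset 7 */
        md2[7] = (md2[7] & ~0xf8) |
                 ((direction & 0x1) << 3) | ((hwid & 0xf) << 4);

These are all byte-sized loads/stores, so there is no bit-field or
unaligned access left for the verifier to trip over.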

>
> I think we should check if there are some clang knobs that prevent
> generation of unaligned memory access. I'll take a look.

