Message-ID: <58136182-0eb1-78c9-ceb9-402418c7d10c@iogearbox.net>
Date: Tue, 10 Jul 2018 10:21:02 +0200
From: Daniel Borkmann <daniel@...earbox.net>
To: Martin KaFai Lau <kafai@...com>, Okash Khawaja <osk@...com>
Cc: Alexei Starovoitov <ast@...nel.org>, Yonghong Song <yhs@...com>,
Jakub Kicinski <jakub.kicinski@...ronome.com>,
"David S. Miller" <davem@...emloft.net>, netdev@...r.kernel.org,
kernel-team@...com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH bpf 1/1] bpf: btf: Fix bitfield extraction for big endian
On 07/09/2018 08:32 PM, Martin KaFai Lau wrote:
> On Sun, Jul 08, 2018 at 05:22:03PM -0700, Okash Khawaja wrote:
>> When extracting bitfield from a number, btf_int_bits_seq_show() builds
>> a mask and accesses least significant byte of the number in a way
>> specific to little-endian. This patch fixes that by checking endianness
>> of the machine and then shifting left and right the unneeded bits.
>>
>> Thanks to Martin Lau for the help in navigating potential pitfalls when
>> dealing with endianness and for the final solution.
>>
>> Fixes: b00b8daec828 ("bpf: btf: Add pretty print capability for data with BTF type info")
>> Signed-off-by: Okash Khawaja <osk@...com>
>>
>> ---
>> kernel/bpf/btf.c | 32 +++++++++++++++-----------------
>> 1 file changed, 15 insertions(+), 17 deletions(-)
>>
>> --- a/kernel/bpf/btf.c
>> +++ b/kernel/bpf/btf.c
>> @@ -162,6 +162,8 @@
>> #define BITS_ROUNDDOWN_BYTES(bits) ((bits) >> 3)
>> #define BITS_ROUNDUP_BYTES(bits) \
>> (BITS_ROUNDDOWN_BYTES(bits) + !!BITS_PER_BYTE_MASKED(bits))
>> +const int one = 1;
>> +#define is_big_endian() ((*(char *)&one) == 0)
Also here: in the kernel, the archs already provide proper endianness definitions, so there is no need for a runtime check.
>> #define BTF_INFO_MASK 0x0f00ffff
>> #define BTF_INT_MASK 0x0fffffff
>> @@ -991,16 +993,13 @@ static void btf_int_bits_seq_show(const
>> void *data, u8 bits_offset,
>> struct seq_file *m)
>> {
>> + u8 left_shift_bits, right_shift_bits;
> Nit.
> Although only max 64 bit int is allowed now (ensured by btf_int_check_meta),
> it is better to use u16 so that it stays consistent with BTF_INT_BITS().
>
>> u32 int_data = btf_type_int(t);
>> u16 nr_bits = BTF_INT_BITS(int_data);
>> u16 total_bits_offset;
>> u16 nr_copy_bytes;
>> u16 nr_copy_bits;
>> - u8 nr_upper_bits;
>> - union {
>> - u64 u64_num;
>> - u8 u8_nums[8];
>> - } print_num;
>> + u64 print_num;
>>
>> total_bits_offset = bits_offset + BTF_INT_OFFSET(int_data);
>> data += BITS_ROUNDDOWN_BYTES(total_bits_offset);
>> @@ -1008,21 +1007,20 @@ static void btf_int_bits_seq_show(const
>> nr_copy_bits = nr_bits + bits_offset;
>> nr_copy_bytes = BITS_ROUNDUP_BYTES(nr_copy_bits);
>>
>> - print_num.u64_num = 0;
>> - memcpy(&print_num.u64_num, data, nr_copy_bytes);
>> -
>> - /* Ditch the higher order bits */
>> - nr_upper_bits = BITS_PER_BYTE_MASKED(nr_copy_bits);
>> - if (nr_upper_bits) {
>> - /* We need to mask out some bits of the upper byte. */
>> - u8 mask = (1 << nr_upper_bits) - 1;
>> -
>> - print_num.u8_nums[nr_copy_bytes - 1] &= mask;
>> + print_num = 0;
>> + memcpy(&print_num, data, nr_copy_bytes);
>> + if (is_big_endian()) {
>> + left_shift_bits = bits_offset;
>> + right_shift_bits = BITS_PER_U64 - nr_bits;
>> + } else {
>> + left_shift_bits = BITS_PER_U64 - nr_copy_bits;
>> + right_shift_bits = BITS_PER_U64 - nr_bits;
> Nit.
> right_shift_bits is the same in both cases. Let's simplify it.
>
>> }
>>
>> - print_num.u64_num >>= bits_offset;
>> + print_num <<= left_shift_bits;
>> + print_num >>= right_shift_bits;
>>
>> - seq_printf(m, "0x%llx", print_num.u64_num);
>> + seq_printf(m, "0x%llx", print_num);
>> }
>>
>> static void btf_int_seq_show(const struct btf *btf, const struct btf_type *t,
>>