Date:   Wed, 6 Nov 2019 21:18:47 +0000
From:   Yonghong Song <yhs@...com>
To:     Andrii Nakryiko <andriin@...com>,
        "bpf@...r.kernel.org" <bpf@...r.kernel.org>,
        "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
        Alexei Starovoitov <ast@...com>,
        "daniel@...earbox.net" <daniel@...earbox.net>
CC:     "andrii.nakryiko@...il.com" <andrii.nakryiko@...il.com>,
        Kernel Team <Kernel-team@...com>
Subject: Re: [PATCH bpf-next] libbpf: simplify BPF_CORE_READ_BITFIELD_PROBED
 usage



On 11/6/19 12:15 PM, Andrii Nakryiko wrote:
> Streamline the BPF_CORE_READ_BITFIELD_PROBED interface to follow
> BPF_CORE_READ_BITFIELD (direct) and BPF_CORE_READ in general, i.e., just
> return the read result, or 0 if the underlying bpf_probe_read() failed.
> 
> In practice, real applications rarely check the bpf_probe_read() result,
> because it has to always work or otherwise it's a bug. So propagating the
> internal bpf_probe_read() error from this macro hurts usability without
> providing real benefits in practice. This patch fixes the issue and
> simplifies usage, noticeably even in the selftest itself.

Agreed. This will be consistent with the direct read, where the
returned value will be 0 if any fault happens.

In the really rare case where a user wants to distinguish a good value of
0 from bpf_probe_read() returning an error, all the building-block macros
are in the header file, so the user can have a custom solution. But let us
have the API work for the common use case with good usability.

> 
> Cc: Yonghong Song <yhs@...com>
> Signed-off-by: Andrii Nakryiko <andriin@...com>

Acked-by: Yonghong Song <yhs@...com>


> ---
>   tools/lib/bpf/bpf_core_read.h                 | 27 ++++++++-----------
>   .../progs/test_core_reloc_bitfields_probed.c  | 19 +++++--------
>   2 files changed, 18 insertions(+), 28 deletions(-)
> 
> diff --git a/tools/lib/bpf/bpf_core_read.h b/tools/lib/bpf/bpf_core_read.h
> index 11461b2623b0..7009dc90e012 100644
> --- a/tools/lib/bpf/bpf_core_read.h
> +++ b/tools/lib/bpf/bpf_core_read.h
> @@ -39,32 +39,27 @@ enum bpf_field_info_kind {
>   #endif
>   
>   /*
> - * Extract bitfield, identified by src->field, and put its value into u64
> - * *res. All this is done in relocatable manner, so bitfield changes such as
> + * Extract bitfield, identified by s->field, and return its value as u64.
> + * All this is done in relocatable manner, so bitfield changes such as
>    * signedness, bit size, offset changes, this will be handled automatically.
>    * This version of macro is using bpf_probe_read() to read underlying integer
>    * storage. Macro functions as an expression and its return type is
>    * bpf_probe_read()'s return value: 0, on success, <0 on error.
>    */
> -#define BPF_CORE_READ_BITFIELD_PROBED(src, field, res) ({		      \
> -	unsigned long long val;						      \
> +#define BPF_CORE_READ_BITFIELD_PROBED(s, field) ({			      \
> +	unsigned long long val = 0;					      \
>   									      \
> -	*res = 0;							      \
> -	val = __CORE_BITFIELD_PROBE_READ(res, src, field);		      \
> -	if (!val) {							      \
> -		*res <<= __CORE_RELO(src, field, LSHIFT_U64);		      \
> -		val = __CORE_RELO(src, field, RSHIFT_U64);		      \
> -		if (__CORE_RELO(src, field, SIGNED))			      \
> -			*res = ((long long)*res) >> val;		      \
> -		else							      \
> -			*res = ((unsigned long long)*res) >> val;	      \
> -		val = 0;						      \
> -	}								      \
> +	__CORE_BITFIELD_PROBE_READ(&val, s, field);			      \
> +	val <<= __CORE_RELO(s, field, LSHIFT_U64);			      \
> +	if (__CORE_RELO(s, field, SIGNED))				      \
> +		val = ((long long)val) >> __CORE_RELO(s, field, RSHIFT_U64);  \
> +	else								      \
> +		val = val >> __CORE_RELO(s, field, RSHIFT_U64);		      \
>   	val;								      \
>   })
>   
>   /*
> - * Extract bitfield, identified by src->field, and return its value as u64.
> + * Extract bitfield, identified by s->field, and return its value as u64.
>    * This version of macro is using direct memory reads and should be used from
>    * BPF program types that support such functionality (e.g., typed raw
>    * tracepoints).
> diff --git a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
> index a381f8ac2419..e466e3ab7de4 100644
> --- a/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
> +++ b/tools/testing/selftests/bpf/progs/test_core_reloc_bitfields_probed.c
> @@ -37,11 +37,6 @@ struct core_reloc_bitfields_output {
>   	int64_t		s32;
>   };
>   
> -#define TRANSFER_BITFIELD(in, out, field)				\
> -	if (BPF_CORE_READ_BITFIELD_PROBED(in, field, &res))		\
> -		return 1;						\
> -	out->field = res
> -
>   SEC("raw_tracepoint/sys_enter")
>   int test_core_bitfields(void *ctx)
>   {
> @@ -49,13 +44,13 @@ int test_core_bitfields(void *ctx)
>   	struct core_reloc_bitfields_output *out = (void *)&data.out;
>   	uint64_t res;
>   
> -	TRANSFER_BITFIELD(in, out, ub1);
> -	TRANSFER_BITFIELD(in, out, ub2);
> -	TRANSFER_BITFIELD(in, out, ub7);
> -	TRANSFER_BITFIELD(in, out, sb4);
> -	TRANSFER_BITFIELD(in, out, sb20);
> -	TRANSFER_BITFIELD(in, out, u32);
> -	TRANSFER_BITFIELD(in, out, s32);
> +	out->ub1 = BPF_CORE_READ_BITFIELD_PROBED(in, ub1);
> +	out->ub2 = BPF_CORE_READ_BITFIELD_PROBED(in, ub2);
> +	out->ub7 = BPF_CORE_READ_BITFIELD_PROBED(in, ub7);
> +	out->sb4 = BPF_CORE_READ_BITFIELD_PROBED(in, sb4);
> +	out->sb20 = BPF_CORE_READ_BITFIELD_PROBED(in, sb20);
> +	out->u32 = BPF_CORE_READ_BITFIELD_PROBED(in, u32);
> +	out->s32 = BPF_CORE_READ_BITFIELD_PROBED(in, s32);
>   
>   	return 0;
>   }
> 
