Message-ID: <fd710301-2197-1e8f-740b-049dfa494be2@fb.com>
Date: Fri, 14 Aug 2020 09:55:11 -0700
From: Yonghong Song <yhs@...com>
To: Miaohe Lin <linmiaohe@...wei.com>, <ast@...nel.org>,
<daniel@...earbox.net>, <kafai@...com>, <songliubraving@...com>,
<andriin@...com>, <john.fastabend@...il.com>,
<kpsingh@...omium.org>, <davem@...emloft.net>, <kuba@...nel.org>,
<hawk@...nel.org>
CC: <netdev@...r.kernel.org>, <bpf@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] bpf: Convert to use the preferred fallthrough macro
On 8/14/20 2:16 AM, Miaohe Lin wrote:
> Convert the uses of fallthrough comments to the fallthrough macro.
>
> Signed-off-by: Miaohe Lin <linmiaohe@...wei.com>
This is not a bug fix but rather an enhancement, so I am not sure whether
this should be pushed to the bpf tree or wait until bpf-next.
It may be worthwhile to mention Commit 294f69e662d1
("compiler_attributes.h: Add 'fallthrough' pseudo keyword for
switch/case use") so people can understand why this patch is
needed.
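
For context, the definition that commit adds to
include/linux/compiler_attributes.h looks roughly like the sketch below
(paraphrased, not a verbatim quote; the in-tree comment block around it
differs):

#if __has_attribute(__fallthrough__)
# define fallthrough			__attribute__((__fallthrough__))
#else
# define fallthrough			do {} while (0)  /* fallthrough */
#endif

The attribute form is understood by both gcc and clang when
-Wimplicit-fallthrough is enabled, whereas the comment form is only
recognized by some compilers, which is the motivation for the conversion.
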
With the above suggestions,
Acked-by: Yonghong Song <yhs@...com>
> ---
>  kernel/bpf/cgroup.c   | 2 +-
>  kernel/bpf/cpumap.c   | 2 +-
>  kernel/bpf/syscall.c  | 2 +-
>  kernel/bpf/verifier.c | 6 +++---
>  4 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/kernel/bpf/cgroup.c b/kernel/bpf/cgroup.c
> index 83ff127ef7ae..e21de4f1754c 100644
> --- a/kernel/bpf/cgroup.c
> +++ b/kernel/bpf/cgroup.c
> @@ -1794,7 +1794,7 @@ static bool cg_sockopt_is_valid_access(int off, int size,
>  		return prog->expected_attach_type ==
>  		       BPF_CGROUP_GETSOCKOPT;
>  	case offsetof(struct bpf_sockopt, optname):
> -		/* fallthrough */
> +		fallthrough;
>  	case offsetof(struct bpf_sockopt, level):
>  		if (size != size_default)
>  			return false;
> diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
> index f1c46529929b..6386b7bb98f2 100644
> --- a/kernel/bpf/cpumap.c
> +++ b/kernel/bpf/cpumap.c
> @@ -279,7 +279,7 @@ static int cpu_map_bpf_prog_run_xdp(struct bpf_cpu_map_entry *rcpu,
>  		break;
>  	default:
>  		bpf_warn_invalid_xdp_action(act);
> -		/* fallthrough */
> +		fallthrough;
>  	case XDP_DROP:
>  		xdp_return_frame(xdpf);
>  		stats->drop++;
> diff --git a/kernel/bpf/syscall.c b/kernel/bpf/syscall.c
> index 86299a292214..1bf960aa615c 100644
> --- a/kernel/bpf/syscall.c
> +++ b/kernel/bpf/syscall.c
> @@ -2029,7 +2029,7 @@ bpf_prog_load_check_attach(enum bpf_prog_type prog_type,
>  	case BPF_PROG_TYPE_EXT:
>  		if (expected_attach_type)
>  			return -EINVAL;
> -		/* fallthrough */
> +		fallthrough;
>  	default:
>  		return 0;
>  	}
> diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
> index ef938f17b944..1e7f34663f86 100644
> --- a/kernel/bpf/verifier.c
> +++ b/kernel/bpf/verifier.c
> @@ -2639,7 +2639,7 @@ static bool may_access_direct_pkt_data(struct bpf_verifier_env *env,
>  	case BPF_PROG_TYPE_CGROUP_SKB:
>  		if (t == BPF_WRITE)
>  			return false;
> -		/* fallthrough */
> +		fallthrough;
> 
>  	/* Program types with direct read + write access go here! */
>  	case BPF_PROG_TYPE_SCHED_CLS:
> @@ -5236,7 +5236,7 @@ static int adjust_ptr_min_max_vals(struct bpf_verifier_env *env,
>  				off_reg == dst_reg ? dst : src);
>  			return -EACCES;
>  		}
> -		/* fall-through */
> +		fallthrough;
>  	default:
>  		break;
>  	}
> @@ -10988,7 +10988,7 @@ static int check_attach_btf_id(struct bpf_verifier_env *env)
>  	default:
>  		if (!prog_extension)
>  			return -EINVAL;
> -		/* fallthrough */
> +		fallthrough;
>  	case BPF_MODIFY_RETURN:
>  	case BPF_LSM_MAC:
>  	case BPF_TRACE_FENTRY:
>